ForeBlocks — Modular Time‑Series Forecasting

Flexible PyTorch library for research‑grade forecasting

A clean, extensible library for building state‑of‑the‑art forecasters: LSTM/GRU, Transformers, Fourier/Wavelet, VAE, and more. Includes adaptive preprocessing, attention, multiple forecasting strategies, and a research‑friendly API.

Get ForeBlocks

Quick Install (PyPI)

pip install foreblocks

Requires: Python ≥ 3.9, PyTorch, NumPy, pandas.

Install from Source

git clone https://github.com/lseman/foreblocks
cd foreblocks
pip install -e .

Key Features

Multiple Strategies

Seq2Seq, Autoregressive, Direct multi‑step, Transformer Seq2Seq.

Modular Design

Swap encoders/decoders, attention, and heads through a clean API.

Advanced Models

LSTM, GRU, Transformer, VAE; Fourier & Wavelet blocks.

Adaptive Preprocessing

Normalization, differencing, detrending, EWT, outlier handling.

Attention Modules

Pluggable attention layers for temporal alignment and context.

Trainer & Utilities

Callbacks, metrics, early stopping, visualizations.

Model Recipes

1. Seq2Seq (LSTM)

from foreblocks import TimeSeriesSeq2Seq, ModelConfig

cfg = ModelConfig(
    model_type="lstm", strategy="seq2seq",
    input_size=3, output_size=1,
    hidden_size=64, target_len=24,
    num_encoder_layers=2, num_decoder_layers=2,
)
model = TimeSeriesSeq2Seq(model_config=cfg)

2. Transformer Seq2Seq

from foreblocks import TimeSeriesSeq2Seq, ModelConfig

cfg = ModelConfig(
    model_type="transformer", strategy="transformer_seq2seq",
    input_size=4, output_size=4,
    hidden_size=128, dim_feedforward=512,
    nheads=8, num_encoder_layers=3, num_decoder_layers=3,
    target_len=96,
)
model = TimeSeriesSeq2Seq(model_config=cfg)

3. Autoregressive & Direct Multi‑Step

Switch strategies while keeping the same data pipeline.

cfg.strategy = "autoregressive"  # one-step teacher forcing
# or
cfg.strategy = "direct"          # predict all steps at once
# Optionally add attention
from foreblocks.attention import AttentionLayer
attn = AttentionLayer(method="dot", encoder_hidden_size=64, decoder_hidden_size=64)
model = TimeSeriesSeq2Seq(model_config=cfg, attention_module=attn)

Quick Start

Minimal Example

from foreblocks import TimeSeriesSeq2Seq, ModelConfig
from torch.utils.data import DataLoader, TensorDataset
import pandas as pd
import torch

# Load your time‑series (n_samples × n_features)
data = pd.read_csv("your_data.csv").values

cfg = ModelConfig(model_type="lstm", input_size=data.shape[1], output_size=1,
                  hidden_size=64, target_len=24)
model = TimeSeriesSeq2Seq(model_config=cfg)

# Adaptive preprocessing (self-tuned) produces training inputs and targets
X_train, y_train, _ = model.preprocess(data, self_tune=True)
loader = DataLoader(TensorDataset(torch.tensor(X_train, dtype=torch.float32),
                                  torch.tensor(y_train, dtype=torch.float32)),
                    batch_size=32, shuffle=True)

history = model.train_model(loader)
# predictions = model.predict(X_test)

STEP 1: Choose strategy

seq2seq, autoregressive, direct, or transformer_seq2seq.

STEP 2: Configure

Hidden sizes, attention, preprocessing, trainer options.

STEP 3: Train & Evaluate

Use built‑in metrics, callbacks, and plots to iterate quickly.
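
Putting the three steps together: a minimal end-to-end sketch, reusing only the calls shown in the Quick Start and recipes. The TrainingConfig values are illustrative, the random array stands in for your data, and the comments about early stopping and X_test are assumptions rather than documented behaviour.

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset
from foreblocks import TimeSeriesSeq2Seq, ModelConfig, TrainingConfig

# STEP 1–2: pick a strategy and configure the model and trainer
cfg = ModelConfig(model_type="lstm", strategy="seq2seq",
                  input_size=3, output_size=1,
                  hidden_size=64, target_len=24)
train_cfg = TrainingConfig(num_epochs=50, learning_rate=1e-3, patience=10)  # patience: early-stopping patience (assumed)
model = TimeSeriesSeq2Seq(model_config=cfg, training_config=train_cfg)

# STEP 3: preprocess, train, and (optionally) predict
data = np.random.randn(2000, 3)                     # placeholder (n_samples, n_features) series
X_train, y_train, _ = model.preprocess(data, self_tune=True)
loader = DataLoader(TensorDataset(torch.tensor(X_train, dtype=torch.float32),
                                  torch.tensor(y_train, dtype=torch.float32)),
                    batch_size=32, shuffle=True)
history = model.train_model(loader)
# predictions = model.predict(X_test)               # X_test: held-out windows shaped like X_train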

API Cheatsheet

Key Classes

  • TimeSeriesSeq2Seq, ForecastingModel
  • ModelConfig, TrainingConfig
  • TimeSeriesPreprocessor
  • AttentionLayer

Common Config

  • model_type: lstm | gru | transformer | ...
  • strategy: seq2seq | autoregressive | direct | transformer_seq2seq
  • target_len, hidden_size, nheads, num_encoder_layers, num_decoder_layers

Preprocessing

  • Normalization, differencing, detrending
  • EWT, Fourier/Wavelet features
  • Outlier removal, missing‑value imputation
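
These transforms are driven from the model-level preprocess call used in the Quick Start; a minimal sketch, assuming self_tune=True lets the preprocessor decide which of the listed steps to apply (the array and config values are placeholders):

import numpy as np
from foreblocks import TimeSeriesSeq2Seq, ModelConfig

raw = np.random.randn(1000, 3)                      # placeholder (n_samples, n_features) series
cfg = ModelConfig(model_type="lstm", input_size=3, output_size=1,
                  hidden_size=64, target_len=24)
model = TimeSeriesSeq2Seq(model_config=cfg)

# self_tune=True is assumed to let the preprocessor pick which of the
# listed transforms (normalization, differencing, detrending, ...) to apply
X_train, y_train, _ = model.preprocess(raw, self_tune=True)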

Worked Examples

LSTM + Attention

Plug a simple dot‑product attention module into a Seq2Seq LSTM.

from foreblocks import TimeSeriesSeq2Seq, ModelConfig
from foreblocks.attention import AttentionLayer

cfg = ModelConfig(model_type="lstm", input_size=3, output_size=1,
                  hidden_size=64, target_len=24)
attn = AttentionLayer(method="dot", encoder_hidden_size=64, decoder_hidden_size=64)
model = TimeSeriesSeq2Seq(model_config=cfg, attention_module=attn)

Transformer (Multi‑variate)

A larger transformer for multi‑feature forecasting.

from foreblocks import TimeSeriesSeq2Seq, ModelConfig, TrainingConfig

cfg = ModelConfig(
    model_type="transformer", strategy="transformer_seq2seq",
    input_size=4, output_size=4, hidden_size=128,
    dim_feedforward=512, nheads=8,
    num_encoder_layers=3, num_decoder_layers=3,
    target_len=96,
)
train = TrainingConfig(num_epochs=100, learning_rate=1e-4, patience=15)
model = TimeSeriesSeq2Seq(model_config=cfg, training_config=train)
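
As a follow-up, the model defined above can be trained with the same preprocessing and DataLoader pattern as the Quick Start; a brief sketch (the placeholder array matches input_size=4, the batch size is arbitrary, and the early-stopping comment is an assumption tied to patience in the TrainingConfig):

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

data = np.random.randn(5000, 4)                     # placeholder multi-variate series (4 features = input_size)
X_train, y_train, _ = model.preprocess(data, self_tune=True)
loader = DataLoader(TensorDataset(torch.tensor(X_train, dtype=torch.float32),
                                  torch.tensor(y_train, dtype=torch.float32)),
                    batch_size=64, shuffle=True)

history = model.train_model(loader)                 # assumed to stop early after `patience` epochs without improvement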