Flexible PyTorch library for research‑grade forecasting
A clean, extensible library for building state‑of‑the‑art forecasters: LSTM/GRU, Transformers, Fourier/Wavelet, VAE, and more. Includes adaptive preprocessing, attention, multiple forecasting strategies, and a research‑friendly API.
Quick Install (PyPI)
pip install foreblocks
Requires: Python ≥ 3.9, PyTorch, NumPy, pandas.
Or install from source:
git clone https://github.com/lseman/foreblocks
cd foreblocks
pip install -e .
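A quick sanity check after installing: the package should import cleanly (nothing here is specific to foreblocks beyond the import itself).
import foreblocks  # a clean import confirms the install worked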
Key features
Forecasting strategies: Seq2Seq, autoregressive, direct multi-step, and Transformer Seq2Seq.
Modular design: swap encoders/decoders, attention modules, and output heads through a clean API.
Model blocks: LSTM, GRU, Transformer, and VAE cores, plus Fourier and Wavelet blocks.
Preprocessing: normalization, differencing, detrending, EWT, and outlier handling (see the sketch after this list).
Attention: pluggable attention layers for temporal alignment and context.
Training: callbacks, metrics, early stopping, and visualizations.
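The preprocessing features above are driven through model.preprocess, which also appears in the quick start below; a minimal sketch, assuming self_tune=True lets the preprocessor choose its transforms for the given series (the random array is only a placeholder):
import numpy as np
from foreblocks import TimeSeriesSeq2Seq, ModelConfig

series = np.random.randn(1000, 3)  # placeholder (n_samples × n_features) array
cfg = ModelConfig(model_type="lstm", input_size=3, output_size=1,
                  hidden_size=64, target_len=24)
model = TimeSeriesSeq2Seq(model_config=cfg)
# windowed inputs/targets come back ready for a DataLoader (see the quick start below)
X, y, _ = model.preprocess(series, self_tune=True)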
Define an LSTM Seq2Seq forecaster:
from foreblocks import TimeSeriesSeq2Seq, ModelConfig
cfg = ModelConfig(
    model_type="lstm", strategy="seq2seq",
    input_size=3, output_size=1,
    hidden_size=64, target_len=24,
    num_encoder_layers=2, num_decoder_layers=2,
)
model = TimeSeriesSeq2Seq(model_config=cfg)
Or configure a Transformer Seq2Seq:
from foreblocks import TimeSeriesSeq2Seq, ModelConfig
cfg = ModelConfig(
    model_type="transformer", strategy="transformer_seq2seq",
    input_size=4, output_size=4,
    hidden_size=128, dim_feedforward=512,
    nheads=8, num_encoder_layers=3, num_decoder_layers=3,
    target_len=96,
)
model = TimeSeriesSeq2Seq(model_config=cfg)
Switch strategies while keeping the same data pipeline.
cfg.strategy = "autoregressive" # one-step teacher forcing
# or
cfg.strategy = "direct" # predict all steps at once
# Add attention
from foreblocks.attention import AttentionLayer
attn = AttentionLayer(method="dot", encoder_hidden_size=64, decoder_hidden_size=64)
model = TimeSeriesSeq2Seq(model_config=cfg, attention_module=attn)
End-to-end quick start:
from foreblocks import TimeSeriesSeq2Seq, ModelConfig
import pandas as pd
import torch
# Load your time‑series (n_samples × n_features)
data = pd.read_csv('your_data.csv').values
cfg = ModelConfig(model_type="lstm", input_size=data.shape[1], output_size=1,
                  hidden_size=64, target_len=24)
model = TimeSeriesSeq2Seq(model_config=cfg)
X_train, y_train, _ = model.preprocess(data, self_tune=True)
from torch.utils.data import DataLoader, TensorDataset
loader = DataLoader(TensorDataset(torch.tensor(X_train, dtype=torch.float32),
                                  torch.tensor(y_train, dtype=torch.float32)),
                    batch_size=32, shuffle=True)
history = model.train_model(loader)
# predictions = model.predict(X_test)
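Inference mirrors the commented predict call above; a minimal sketch, assuming model.predict accepts windows shaped like X_train (the "test" split here is only illustrative):
X_test = X_train[-10:]               # last 10 training windows, purely for illustration
predictions = model.predict(X_test)  # one target_len-step forecast per window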
Next steps
Pick a strategy: seq2seq, autoregressive, direct, or transformer_seq2seq.
Tune the architecture: hidden sizes, attention, preprocessing, and trainer options.
Iterate quickly with the built-in metrics, callbacks, and plots.
Core classes: TimeSeriesSeq2Seq, ForecastingModel, ModelConfig, TrainingConfig, TimeSeriesPreprocessor, AttentionLayer.
Key options: model_type (lstm | gru | transformer | ...), strategy (seq2seq | autoregressive | direct | transformer_seq2seq), target_len, hidden_size, nheads, layers.
Plug a simple dot-product attention into a Seq2Seq LSTM:
from foreblocks import TimeSeriesSeq2Seq, ModelConfig
from foreblocks.attention import AttentionLayer
cfg = ModelConfig(model_type="lstm", input_size=3, output_size=1,
                  hidden_size=64, target_len=24)
attn = AttentionLayer(method="dot", encoder_hidden_size=64, decoder_hidden_size=64)
model = TimeSeriesSeq2Seq(model_config=cfg, attention_module=attn)
A larger Transformer for multi-feature forecasting, configured with an explicit TrainingConfig:
from foreblocks import TimeSeriesSeq2Seq, ModelConfig, TrainingConfig
cfg = ModelConfig(
    model_type="transformer", strategy="transformer_seq2seq",
    input_size=4, output_size=4, hidden_size=128,
    dim_feedforward=512, nheads=8,
    num_encoder_layers=3, num_decoder_layers=3,
    target_len=96,
)
train = TrainingConfig(num_epochs=100, learning_rate=1e-4, patience=15)
model = TimeSeriesSeq2Seq(model_config=cfg, training_config=train)
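Training this configuration reuses the preprocessing and DataLoader pattern from the quick start; a minimal sketch using only the calls shown earlier (the random array stands in for a real 4-feature series):
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

data = np.random.randn(5000, 4)  # placeholder (n_samples × 4) series
X_train, y_train, _ = model.preprocess(data, self_tune=True)
loader = DataLoader(TensorDataset(torch.tensor(X_train, dtype=torch.float32),
                                  torch.tensor(y_train, dtype=torch.float32)),
                    batch_size=64, shuffle=True)
history = model.train_model(loader)  # the TrainingConfig above supplies epochs, learning rate, and patience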