Modular PyTorch forecasting blocks, robust preprocessing, DARTS architecture search, uncertainty estimation, and a browser Studio — in one package. Start lean, grow deliberately.
pip install foreblocks
pip install "foreblocks[preprocessing]"
pip install "foreblocks[darts]"
Core install first. Extras only when you need them.
foreBlocks covers forecasting, preprocessing, search, and companion tooling. These are the fastest routes to the part you actually need.
Use the safest path through the public API: build a head or encoder/decoder pair, create
dataloaders, train with Trainer, and evaluate before adding complexity.
When your input is a raw [T, D] array rather than ready-made
training windows, reach for the preprocessing stack so scaling, filtering, and slicing stay consistent.
Once the baseline loop is working, move into staged DARTS search, multi-fidelity ranking, BOHB optimization, and retraining workflows.
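For the raw-array route above, here is a minimal sketch of the slicing step in plain NumPy (not the library's preprocessing stack): a [T, D] series becomes [N, seq_len, D] input windows and [N, horizon] targets, the same shapes the baseline below hands to create_dataloaders. Taking the first column as the forecast target is an illustrative assumption.

import numpy as np

def make_windows(series, seq_len=24, horizon=6):
    """Slice a raw [T, D] array into sliding input windows and horizon targets.
    The target is the first column of the series (illustrative choice)."""
    X, y = [], []
    for start in range(len(series) - seq_len - horizon + 1):
        X.append(series[start:start + seq_len])
        y.append(series[start + seq_len:start + seq_len + horizon, 0])
    return np.stack(X).astype("float32"), np.stack(y).astype("float32")

raw = np.random.default_rng(1).normal(size=(500, 4))   # stand-in for a raw [T, D] series
X_all, y_all = make_windows(raw)                        # [N, 24, 4] and [N, 6]
split = int(0.8 * len(X_all))
X_train, y_train = X_all[:split], y_all[:split]
X_val, y_val = X_all[split:], y_all[split:]

These arrays drop straight into create_dataloaders exactly as in the baseline; the library's own preprocessing stack is the route to prefer when scaling and filtering also need to stay consistent.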
Install the core package for forecasting and training. Add extras only when you need preprocessing, search, tracking, or analysis.
pip install foreblocks
pip install "foreblocks[preprocessing]"
pip install "foreblocks[darts]"
pip install "foreblocks[mltracker]"
pip install "foreblocks[all]"
git clone https://github.com/lseman/foreblocks.git
cd foreblocks && pip install -e ".[dev]"
A small forecasting core, optional workflow extras, companion tools, and docs that point you to the right layer.
The core covers the full training loop, data preparation, model composition, evaluation, and search workflows in one place. Its main building blocks are ForecastingModel, Trainer, dataloaders, configs, transformer blocks, evaluation, and the preprocessing bridge. TimeSeriesHandler, additional transformer blocks, and composable heads are there when your data or architecture needs them.
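As a sketch of what a composable head can look like, any nn.Module that maps a [batch, seq_len, n_features] window to a [batch, horizon] forecast should be able to stand in for the MLP head used in the baseline below. The GRU head here is an illustrative assumption, not a block shipped by foreBlocks.

import torch.nn as nn

class GRUHead(nn.Module):
    """Illustrative head: encode the input window with a GRU and map the final
    hidden state to a direct multi-step forecast."""

    def __init__(self, n_features: int, horizon: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, horizon)

    def forward(self, x):
        # x: [batch, seq_len, n_features] -> forecast: [batch, horizon]
        _, h = self.gru(x)
        return self.proj(h[-1])

Assuming ForecastingModel accepts any nn.Module as its head (the baseline below passes an nn.Sequential), head=GRUHead(n_features, horizon) would slot into the same Trainer and dataloaders.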
If you are importing foreBlocks for the first time, begin with the top-level objects: ForecastingModel, Trainer, ModelEvaluator, TimeSeriesHandler, dataloaders, and configs are the main starting points. Specialist namespaces stay available for search and advanced work: foreblocks.darts, transformer internals, and Hybrid Mamba are there when you need them, but they are not the default starting point.
from foreblocks import (
    ForecastingModel,
    Trainer,
    ModelEvaluator,
    TimeSeriesHandler,
    TimeSeriesDataset,
    create_dataloaders,
    ModelConfig,
    TrainingConfig,
)

from foreblocks.darts import (
    DARTSTrainer,
    DARTSConfig,
    DARTSTrainConfig,
    FinalTrainConfig,
    MultiFildelitySearchConfig,
)
Train a baseline first
Use the safest path through the public API: build a head or encoder/decoder pair, create dataloaders, train with Trainer, and evaluate before adding complexity. This baseline checks that your environment, dataloaders, trainer, and evaluator are wired correctly before you move into heavier workflows.
import numpy as np
import torch
import torch.nn as nn

from foreblocks import (
    ForecastingModel,
    ModelEvaluator,
    Trainer,
    TrainingConfig,
    create_dataloaders,
)

# Synthetic data: 24-step input windows, 6-step horizon, 4 features.
seq_len, horizon, n_features = 24, 6, 4
rng = np.random.default_rng(0)
X_train = rng.normal(size=(64, seq_len, n_features)).astype("float32")
y_train = rng.normal(size=(64, horizon)).astype("float32")
X_val = rng.normal(size=(16, seq_len, n_features)).astype("float32")
y_val = rng.normal(size=(16, horizon)).astype("float32")

train_loader, val_loader = create_dataloaders(
    X_train, y_train, X_val, y_val, batch_size=16
)

# A small MLP head that flattens the window and predicts the full horizon.
head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(seq_len * n_features, 64),
    nn.GELU(),
    nn.Linear(64, horizon),
)

model = ForecastingModel(
    head=head,
    forecasting_strategy="direct",
    model_type="head_only",
    target_len=horizon,
)

trainer = Trainer(
    model,
    config=TrainingConfig(
        num_epochs=5,
        batch_size=16,
        patience=3,
        use_amp=False,
    ),
    auto_track=False,
)

history = trainer.train(train_loader, val_loader)

# Evaluate on the validation arrays and print the final training loss.
metrics = ModelEvaluator(trainer).compute_metrics(
    torch.tensor(X_val), torch.tensor(y_val)
)
print(history.train_losses[-1], metrics)
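Once the loop above runs, a hedged follow-up is to switch to inference mode and persist the weights. This assumes the head-only ForecastingModel follows the standard nn.Module call convention on an input window tensor (the evaluator above already receives plain tensors); the checkpoint name is illustrative.

# Inference sketch: assumes the head-only model can be called like a plain nn.Module.
model.eval()
with torch.no_grad():
    preds = model(torch.tensor(X_val))  # expected shape: [16, horizon]
print(preds.shape)

# Persist the trained weights with plain PyTorch; the file name is illustrative.
torch.save(model.state_dict(), "baseline_head.pt")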