v0.1.18 · Python 3.10+

Forecasting
block-by-block,
your way.

Modular PyTorch forecasting blocks, robust preprocessing, DARTS architecture search, uncertainty estimation, and a browser Studio — in one package. Start lean, grow deliberately.

PyTorch core · Optional extras · Research workflows · MIT licensed
quick-install.sh
pip install foreblocks
pip install "foreblocks[preprocessing]"
pip install "foreblocks[darts]"

Core install first. Extras only when you need them.

01 — Choose a route

Three clear ways in

foreBlocks covers forecasting, preprocessing, search, and companion tooling. These are the fastest routes to the part you actually need.

Path 01

Train a baseline first

Use the safest path through the public API: build a head or encoder/decoder pair, create dataloaders, train with Trainer, and evaluate before adding complexity.

Path 02

Start from raw series

When your input is a raw [T, D] array rather than ready-made training windows, reach for the preprocessing stack so scaling, filtering, and slicing stay consistent.

Path 03

Search and compare architectures

Once the baseline loop is working, move into staged DARTS search, multi-fidelity ranking, BOHB optimization, and retraining workflows.
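The core idea behind Path 02 can be illustrated without the library: turning a raw [T, D] array into fixed-size training windows is a sliding-window slice. A minimal NumPy sketch of that step, where make_windows is a hypothetical helper for illustration, not a foreblocks API:

```python
import numpy as np

def make_windows(series, seq_len, horizon, target_col=0):
    """Slice a raw [T, D] array into (X, y) training windows.

    X: [N, seq_len, D] input windows
    y: [N, horizon]    future values of one target column
    """
    T = series.shape[0]
    n = T - seq_len - horizon + 1
    X = np.stack([series[i : i + seq_len] for i in range(n)])
    y = np.stack(
        [series[i + seq_len : i + seq_len + horizon, target_col] for i in range(n)]
    )
    return X.astype("float32"), y.astype("float32")

raw = np.random.default_rng(0).normal(size=(200, 4))  # raw [T, D] series
X, y = make_windows(raw, seq_len=24, horizon=6)
print(X.shape, y.shape)  # (171, 24, 4) (171, 6)
```

In practice the preprocessing stack also handles scaling, filtering, and imputation before this slicing step, which is exactly why keeping them in one pipeline matters.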

02 — Install

Lean by default, modular by design

Install the core package for forecasting and training. Add extras only when you need preprocessing, search, tracking, or analysis.

Recommended
Start small
Validate one training loop, then add extras for preprocessing or search.
Source
Hack locally
Editable installs are best when modifying blocks, docs, or training internals.
Core + Extras
pip install foreblocks

pip install "foreblocks[preprocessing]"
pip install "foreblocks[darts]"
pip install "foreblocks[mltracker]"
pip install "foreblocks[all]"

git clone https://github.com/lseman/foreblocks.git
cd foreblocks && pip install -e ".[dev]"
Need               Install                      Docs
Core forecasting   foreblocks                   Getting Started →
Preprocessing      foreblocks[preprocessing]    Preprocessor →
DARTS search       foreblocks[darts]            DARTS Guide →
Tracking UI        foreblocks[mltracker]        Web UI →
03 — Workflow

How the stack is organized

A small forecasting core, optional workflow extras, companion tools, and docs that point you to the right layer.

01 →
Build a baseline
Use the public API and verify your shapes, loss loop, and evaluation path before adding architectural complexity.
02 →
Add preprocessing or custom blocks
Bring in TimeSeriesHandler, transformer blocks, or composable heads when your data or architecture needs them.
03 →
Search, compare, and retrain
Move into DARTS, BOHB, uncertainty intervals, and analysis only after the baseline path is already healthy.
What lives where
foreblocks
Core forecasting library
ForecastingModel, Trainer, dataloaders, configs, transformer blocks, evaluation, and the preprocessing bridge.
Extras
Workflow-specific dependency bundles
Preprocessing, DARTS, analysis, MLTracker, VMD, wavelets, benchmarking, and mining.
foretools
Companion utilities
Generators, BOHB search, VMD decomposition, feature engineering, and analysis utilities.
Docs
Tutorials, guides, reference, architecture notes
Start with baseline training, then move into architecture notes, tutorials, and reference pages as needed.
04 — Capabilities

What foreBlocks covers

The full training loop, data preparation, model composition, evaluation, and search workflows — all in one place.

Forecasting core
Direct heads, encoder/decoder models, seq2seq strategies, and a unified training loop exposed through the main foreblocks imports.
Preprocessing bridge
Window generation, scaling, filtering, imputation, and time-feature helpers move you from raw arrays into a consistent training pipeline.
Composable architecture blocks
Attach heads, attention, transformer components, MoE routing, SSM blocks, and specialty modules without rewriting the entire training stack.
Experiment workflow
AMP, schedulers, evaluation helpers, plots, cross-validation, tracking, and local dashboards stay close to the trainer.
Uncertainty & evaluation
Conformal workflows and evaluation guides move you from point predictions to interval-aware reporting and more careful comparisons.
Search & analysis
DARTS, BOHB, synthetic data utilities, decomposition helpers, and analysis tools support broader research workflows.
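To give a flavor of the uncertainty workflow, here is a minimal split-conformal sketch in plain NumPy. conformal_interval is an illustrative function under standard split-conformal assumptions, not the foreblocks API:

```python
import numpy as np

def conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split-conformal intervals from absolute calibration residuals.

    Returns (lower, upper) bounds around test_pred with target
    coverage 1 - alpha, assuming exchangeable residuals.
    """
    residuals = np.abs(cal_true - cal_pred)
    n = len(residuals)
    # finite-sample corrected quantile of the residual distribution
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level)
    return test_pred - q, test_pred + q

rng = np.random.default_rng(0)
cal_true = rng.normal(size=500)
cal_pred = cal_true + rng.normal(scale=0.5, size=500)  # noisy point forecasts
lo, hi = conformal_interval(cal_pred, cal_true, test_pred=np.zeros(3))
print(lo, hi)  # symmetric interval around each test prediction
```

The same residual-quantile idea underlies interval-aware reporting: the model's point forecasts stay untouched, and calibration data alone sets the interval width.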
05 — Public surface

Start with the public API

If you are importing foreBlocks for the first time, begin with the top-level objects. Specialist namespaces stay available for search and advanced work.

Core imports
ForecastingModel, Trainer, ModelEvaluator, TimeSeriesHandler, dataloaders, and configs are the main starting points.
Specialist namespaces
foreblocks.darts, transformer internals, and Hybrid Mamba are available when you need them — but are not the default starting point.
Top-level imports
from foreblocks import (
    ForecastingModel,
    Trainer,
    ModelEvaluator,
    TimeSeriesHandler,
    TimeSeriesDataset,
    create_dataloaders,
    ModelConfig,
    TrainingConfig,
)

from foreblocks.darts import (
    DARTSTrainer,
    DARTSConfig,
    DARTSTrainConfig,
    FinalTrainConfig,
    MultiFildelitySearchConfig,
)
06 — Quick example

A first run should feel small

This baseline checks that your environment, dataloaders, trainer, and evaluator are wired correctly — before you move into heavier workflows.

baseline_run.py
import numpy as np
import torch
import torch.nn as nn

from foreblocks import (
    ForecastingModel,
    ModelEvaluator,
    Trainer,
    TrainingConfig,
    create_dataloaders,
)

seq_len, horizon, n_features = 24, 6, 4

rng = np.random.default_rng(0)
X_train = rng.normal(size=(64, seq_len, n_features)).astype("float32")
y_train = rng.normal(size=(64, horizon)).astype("float32")
X_val   = rng.normal(size=(16, seq_len, n_features)).astype("float32")
y_val   = rng.normal(size=(16, horizon)).astype("float32")

train_loader, val_loader = create_dataloaders(
    X_train, y_train, X_val, y_val, batch_size=16
)

head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(seq_len * n_features, 64),
    nn.GELU(),
    nn.Linear(64, horizon),
)

model = ForecastingModel(
    head=head,
    forecasting_strategy="direct",
    model_type="head_only",
    target_len=horizon,
)

trainer = Trainer(
    model,
    config=TrainingConfig(
        num_epochs=5, batch_size=16,
        patience=3, use_amp=False,
    ),
    auto_track=False,
)

history = trainer.train(train_loader, val_loader)
metrics = ModelEvaluator(trainer).compute_metrics(
    torch.tensor(X_val), torch.tensor(y_val)
)

print(history.train_losses[-1], metrics)
What this proves
  • Your installation is healthy.
  • Dataloader shapes line up with the trainer.
  • The model can train and evaluate without extra subsystems.
  • You can add preprocessing or search from a working baseline.
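If you want to double-check the shape contract independently of foreblocks, plain PyTorch reproduces it. This sketch assumes only torch itself and mirrors the baseline's [N, seq_len, D] → [N, horizon] layout:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Fabricated windows matching the baseline's contract:
# inputs [N, seq_len, D], targets [N, horizon].
seq_len, horizon, n_features = 24, 6, 4
X = torch.randn(64, seq_len, n_features)
y = torch.randn(64, horizon)

loader = DataLoader(TensorDataset(X, y), batch_size=16)
xb, yb = next(iter(loader))

# A batch should carry one window per sample and one target row per window.
assert xb.shape == (16, seq_len, n_features)
assert yb.shape == (16, horizon)
print(tuple(xb.shape), tuple(yb.shape))
```

If these assertions hold for your own arrays, any shape error that appears later comes from the model or trainer configuration, not the data pipeline.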
Next docs to open