
Overview

This repository has two top-level packages that work well together but serve different purposes:

  • foreblocks: the main forecasting library
  • foretools: companion utilities for generation, search, decomposition, and analysis

The docs are organized to make that split explicit while still showing how the pieces connect in a single workflow.

  • Start here if you are new: Getting Started is still the safest first read.
  • Keep the first run small: validate the public API path before opening the more specialist subsystems.
  • Branch by workflow: use the guide that matches your actual task instead of reading every subsystem in order.

Best starting page by goal

| Goal | Best starting page |
| --- | --- |
| Run a first end-to-end training loop | Getting Started |
| Start from raw multivariate series | Preprocessor Guide |
| Understand stable top-level imports | Public API |
| Customize model blocks or training internals | Custom Blocks Guide |
| Work with transformer backbones | Transformer Guide |
| Enable expert routing | MoE Guide |
| Run neural architecture search | DARTS Guide |
| Generate synthetic time series | Time Series Generator |
| Run budgeted hyperparameter search | BOHB Search |
| Diagnose install or shape issues | Troubleshooting |

Mental model of the repo

Layer 01: Stable public surface

The safest imports live at the top level of foreblocks: model assembly, trainer loop, dataloaders, configs, evaluator, and preprocessing bridge.

Layer 02: Optional workflow extras

Preprocessing, DARTS, tracking, VMD, and other heavier dependencies are packaged as extras so the base install stays lean.

Layer 03: Specialist subsystems

Transformer internals, MoE, Hybrid Mamba, uncertainty, and architecture notes are best treated as focused branches once the baseline path is healthy.

What is stable today

The most reliable public surface is still the top-level foreblocks import path:

```python
from foreblocks import (
    ForecastingModel,
    Trainer,
    ModelEvaluator,
    TimeSeriesHandler,
    TimeSeriesDataset,
    create_dataloaders,
    ModelConfig,
    TrainingConfig,
)
```

The DARTS stack has its own public namespace:

```python
from foreblocks.darts import DARTSTrainer
```

Treat deeper imports as subsystem-level APIs unless a topic guide explicitly tells you to use them directly.

Install map

The packaging reflects real feature boundaries:

| Need | Suggested install |
| --- | --- |
| Minimal forecasting core | `pip install foreblocks` |
| Preprocessing, filtering, statistics | `pip install "foreblocks[preprocessing]"` |
| DARTS training, search, and analysis | `pip install "foreblocks[darts]"` |
| MLTracker UI and API clients | `pip install "foreblocks[mltracker]"` |
| VMD utilities | `pip install "foreblocks[vmd]"` |
| All runtime extras | `pip install "foreblocks[all]"` |

How the docs are layered

Tutorials

Runnable paths first. Use these when you want a clear success condition and a smaller number of moving parts.

Guides

Subsystem pages that explain capabilities, important configuration knobs, and how modules are meant to be composed.

Architecture notes

Pages that explain internal structure and code layout. These are more useful when you are extending, debugging, or reviewing implementation choices.

Reference

Stable surfaces, configuration maps, and repository orientation.

Repository landmarks

| Area | Purpose |
| --- | --- |
| `foreblocks/core` | model assembly, heads, conformal utilities |
| `foreblocks/training` | trainer loop, optimizer/scheduler integration |
| `foreblocks/evaluation` | evaluator, metrics, benchmark helpers |
| `foreblocks/ts_handler` | preprocessing, filtering, imputation, window creation |
| `foreblocks/tf` | transformer stack, attention variants, MoE, norms, embeddings |
| `foreblocks/darts` | architecture search configs, search loops, analysis |
| `foreblocks/mltracker` | experiment tracking and local dashboards |
| `foretools` | synthetic data, BOHB, VMD, exploratory tooling |
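As a rough illustration of what a synthetic-series generator in the foretools spirit produces, here is a stdlib-only sketch combining trend, seasonality, and noise. The function name `make_series` and its parameters are assumptions for illustration only, not the Time Series Generator API:

```python
import math
import random

def make_series(n: int = 200, seed: int = 0) -> list[float]:
    """Toy synthetic series: linear trend + seasonal cycle + Gaussian noise."""
    rng = random.Random(seed)
    return [
        0.05 * t                          # slow linear trend
        + math.sin(2 * math.pi * t / 24)  # 24-step seasonal cycle
        + rng.gauss(0, 0.1)               # observation noise
        for t in range(n)
    ]

series = make_series()
print(len(series))  # 200
```

A generator like this is mainly useful for smoke-testing a training loop before committing real data to it.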

Track A: I just want to train a model

  1. Getting Started
  2. Public API
  3. Evaluation & Metrics
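Before reaching for ModelEvaluator, it can help to see what the basic point-forecast metrics actually compute. A minimal sketch (these helper functions are illustrative, not the evaluator's API):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the forecast errors."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors more heavily than MAE."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true))

print(mae([1, 2, 3], [1, 2, 5]))   # 0.6666...
print(rmse([1, 2, 3], [1, 2, 5]))  # ~1.1547
```

Comparing the two on the same forecast is a quick way to spot whether errors are dominated by a few large misses (RMSE much larger than MAE) or spread evenly.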

Track B: I have raw data and need preprocessing

  1. Getting Started
  2. Preprocessor Guide
  3. Feature Engineering

Track C: I want automated search or more advanced architectures

  1. Getting Started
  2. Transformer Guide or MoE Guide
  3. DARTS Guide
  4. DARTS Search Pipeline
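Budgeted search in the BOHB style revolves around one idea: evaluate many configurations cheaply, then spend increasing budget only on the survivors. A generic successive-halving sketch of that core idea (this is illustrative pure Python, not the BOHB Search API; the toy objective ignores the budget argument):

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Keep the best 1/eta of configs at each rung, multiplying budget by eta."""
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1:
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]

# Toy objective: lower is better; pretend a learning rate near 0.1 is ideal.
rng = random.Random(0)
candidates = [{"lr": rng.uniform(0.001, 1.0)} for _ in range(9)]
best = successive_halving(candidates, lambda c, budget: abs(c["lr"] - 0.1))
print(best)
```

In a real run the `evaluate` callback would train for `budget` epochs and return a validation loss, so early rungs are cheap and only promising configurations reach full training.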

Practical notes

  • The project is broad. Not every internal module should be treated as stable public API.
  • ForecastingModel plus Trainer is still the best first path for a new user.
  • TimeSeriesHandler is the main bridge from raw arrays into the trainer loop.
  • DARTS is a staged workflow, not just a single training function.
  • foretools is worth browsing even if you primarily use foreblocks, especially for data generation and search tooling.
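The bridging role described above amounts to turning raw arrays into supervised (input, target) windows. A minimal, dependency-free sketch of that idea (`make_windows` is an illustrative name, not the TimeSeriesHandler API):

```python
def make_windows(series, input_len, horizon):
    """Slice a 1-D series into (input, target) pairs for supervised training."""
    pairs = []
    for start in range(len(series) - input_len - horizon + 1):
        x = series[start : start + input_len]               # model input window
        y = series[start + input_len : start + input_len + horizon]  # forecast target
        pairs.append((x, y))
    return pairs

windows = make_windows(list(range(10)), input_len=4, horizon=2)
print(len(windows))  # 5
print(windows[0])    # ([0, 1, 2, 3], [4, 5])
```

Real preprocessing layers add scaling, imputation, and multivariate handling on top, but the windowing step is the part that makes a series trainable at all.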

MIT License