Run A DARTS Search¶
This tutorial shows the intended end-to-end DARTS workflow in ForeBlocks: configure a search trainer, run a small multi-fidelity search, inspect the promoted candidates, and optionally analyze the final run.
Install¶
Core DARTS workflow:
Optional analyzer:
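The package names below are assumptions based on the import paths used in this tutorial (`foreblocks`, `darts-analysis`); adjust if you install from source.

```shell
# Core DARTS workflow (assumed PyPI name)
pip install foreblocks

# Optional analyzer used in the last section (assumed PyPI name)
pip install darts-analysis
```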
Step 1: create the trainer¶
from foreblocks.darts import DARTSTrainer
trainer = DARTSTrainer(
    input_dim=6,
    hidden_dims=[32, 64, 128],
    forecast_horizon=12,
    seq_length=48,
    device="auto",
)
At this point you have a search controller, not just a single model. It knows how to generate candidates, train searched models, derive discrete architectures, and retrain the best one.
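The search in the next step consumes ordinary PyTorch DataLoaders. Here is a minimal synthetic setup matching the dimensions configured above (`input_dim=6`, `seq_length=48`, `forecast_horizon=12`); the exact target-tensor layout ForeBlocks expects is an assumption, so check it against your own dataset.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_loader(num_windows: int, batch_size: int = 16) -> DataLoader:
    # Inputs: (num_windows, seq_length, input_dim)
    # Targets: (num_windows, forecast_horizon, input_dim) -- assumed layout
    x = torch.randn(num_windows, 48, 6)
    y = torch.randn(num_windows, 12, 6)
    return DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True)

train_loader = make_loader(256)
val_loader = make_loader(64)
test_loader = make_loader(64)
```

Replace the random tensors with your real sliding windows; only the shapes matter for wiring up the search.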
Step 2: run a small multi-fidelity search¶
results = trainer.multi_fidelity_search(
    train_loader=train_loader,
    val_loader=val_loader,
    test_loader=test_loader,
    num_candidates=12,
    search_epochs=8,
    final_epochs=40,
    max_samples=32,
    top_k=4,
    use_amp=False,
)
Recommended first-run strategy:
- keep num_candidates small
- keep search_epochs small
- disable AMP until the loop is stable
- only scale up after the result structure looks correct
Step 3: inspect the result dictionary¶
print(results.keys())
print(results["final_results"]["final_metrics"])
print(len(results["candidates"]), len(results["top_candidates"]))
The most useful keys are:
- final_model: the retrained fixed model
- best_candidate: the promoted, search-trained winner
- final_results: metrics and training information
- trained_candidates: per-candidate search artifacts
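To see how these keys fit together before a real run finishes, here is a mock of the result structure; the nesting and field contents are illustrative, not the library's exact schema.

```python
# Illustrative mock of the search result dictionary. Only the top-level
# key names come from the tutorial; the value shapes are assumptions.
results = {
    "final_model": object(),            # retrained fixed model
    "best_candidate": {"id": 7},        # promoted winner
    "final_results": {"final_metrics": {"val_mae": 0.21}},
    "candidates": list(range(12)),      # one entry per generated candidate
    "top_candidates": list(range(4)),   # the top_k promoted candidates
    "trained_candidates": {},           # per-candidate search artifacts
}

print(sorted(results.keys()))
print(results["final_results"]["final_metrics"])
print(len(results["candidates"]), len(results["top_candidates"]))
```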
Step 4: save the winning discrete model¶
This saves the best retrained final model together with the recorded metrics and search configuration.
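ForeBlocks may ship its own save helper; a plain PyTorch fallback that bundles the weights with the metrics and search configuration could look like this (the checkpoint layout and the stand-in objects are assumptions).

```python
import torch
import torch.nn as nn

# Stand-ins for the real objects produced by the search.
final_model = nn.Linear(6, 12)
final_metrics = {"val_mae": 0.21}
search_config = {"num_candidates": 12, "search_epochs": 8}

# Bundle weights with provenance so the run stays reproducible.
checkpoint = {
    "state_dict": final_model.state_dict(),
    "metrics": final_metrics,
    "search_config": search_config,
}
torch.save(checkpoint, "best_darts_model.pt")

# Reloading restores both the weights and the recorded context.
restored = torch.load("best_darts_model.pt")
final_model.load_state_dict(restored["state_dict"])
```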
Step 5: inspect search behavior directly¶
If you want to debug the search space before a full run, call the intermediate APIs on a single candidate.
Zero-cost metrics¶
metrics = trainer.evaluate_zero_cost_metrics(
    model=candidate_model,
    dataloader=val_loader,
    max_samples=32,
    num_batches=1,
    fast_mode=True,
)
Bilevel search for one candidate¶
search_run = trainer.train_darts_model(
    model=candidate_model,
    train_loader=train_loader,
    val_loader=val_loader,
    epochs=15,
    arch_learning_rate=3e-3,
    model_learning_rate=1e-3,
)
Convert to a fixed architecture¶
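Derivation in DARTS keeps, for each edge, the candidate operation with the largest architecture weight. The trainer exposes its own derivation step; a minimal sketch of the selection it performs, using a hypothetical alpha tensor and op list:

```python
import torch

# Hypothetical candidate ops and architecture weights: 3 edges x 4 ops.
op_names = ["skip", "conv", "gru", "attention"]
alpha = torch.tensor([
    [0.1, 2.3, 0.4, 0.2],
    [1.9, 0.3, 0.1, 0.5],
    [0.2, 0.1, 0.4, 2.7],
])

# Softmax normalizes each edge's weights; argmax picks the winning op.
probs = torch.softmax(alpha, dim=-1)
chosen = [op_names[i] for i in probs.argmax(dim=-1).tolist()]
print(chosen)  # one discrete op per edge
```

The discrete model built from `chosen` is what gets retrained from scratch in the next step.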
Retrain that fixed model¶
final_run = trainer.train_final_model(
    model=fixed_model,
    train_loader=train_loader,
    val_loader=val_loader,
    test_loader=test_loader,
    epochs=50,
)
Optional: analyze the final search result¶
If darts-analysis is installed:
from foreblocks.darts import StreamlinedDARTSAnalyzer
analyzer = StreamlinedDARTSAnalyzer(results)
print(analyzer.analysis_df.head())
Use this when you want:
- architectural feature summaries
- simple statistical inspection of promoted candidates
- plots that help explain why some candidates won
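Since `analysis_df` is a regular pandas DataFrame, standard sorting and groupby inspection apply. On a mock frame (the column names here are illustrative, not the analyzer's real schema):

```python
import pandas as pd

# Illustrative stand-in for analyzer.analysis_df.
analysis_df = pd.DataFrame({
    "candidate": [0, 1, 2, 3],
    "family": ["gru", "gru", "attention", "conv"],
    "val_loss": [0.31, 0.27, 0.24, 0.29],
})

# Which candidate won, and how do architectural families compare?
print(analysis_df.sort_values("val_loss").head())
print(analysis_df.groupby("family")["val_loss"].mean())
```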
Reading the result like a practitioner¶
Focus on these questions:
- Did zero-cost ranking surface plausible candidates?
- Did the promoted models improve after short DARTS training?
- Did the final retrained model preserve that advantage?
- Did the search collapse onto one family too early?
If the answer to any of those is no, tighten the search space before you increase the budget.