Catalyst

A PyTorch framework for Deep Learning R&D.

It focuses on reproducibility, rapid experimentation, and codebase reuse, so you can create something new rather than write yet another train loop. Break the cycle - use the Catalyst!
Getting started
import os
from torch import nn, optim
from torch.utils.data import DataLoader
from catalyst import dl, utils
from catalyst.contrib.datasets import MNIST

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.02)

loaders = {
    "train": DataLoader(MNIST(os.getcwd(), train=True), batch_size=32),
    "valid": DataLoader(MNIST(os.getcwd(), train=False), batch_size=32),
}

runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)

# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=1,
    callbacks=[
        dl.AccuracyCallback(input_key="logits", target_key="targets", topk=(1, 3, 5)),
        dl.PrecisionRecallF1SupportCallback(input_key="logits", target_key="targets"),
    ],
    logdir="./logs",
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
    verbose=True,
)

# model evaluation
metrics = runner.evaluate_loader(
    loader=loaders["valid"],
    callbacks=[dl.AccuracyCallback(input_key="logits", target_key="targets", topk=(1, 3, 5))],
)

# model inference
for prediction in runner.predict_loader(loader=loaders["valid"]):
    assert prediction["logits"].detach().cpu().numpy().shape[-1] == 10

# model post-processing
model = runner.model.cpu()
batch = next(iter(loaders["valid"]))[0]
utils.trace_model(model=model, batch=batch)
utils.quantize_model(model=model)
utils.prune_model(model=model, pruning_fn="l1_unstructured", amount=0.8)
utils.onnx_export(model=model, batch=batch, file="./logs/mnist.onnx", verbose=True)
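If you want to run the exported ONNX file outside of PyTorch, here is a minimal sketch using onnxruntime. Note that onnxruntime is not a Catalyst dependency; the snippet assumes it is installed separately, and it reuses the batch variable from the snippet above.

import numpy as np
import onnxruntime as ort

# load the file produced by utils.onnx_export above
session = ort.InferenceSession("./logs/mnist.onnx")
input_name = session.get_inputs()[0].name

# ONNX Runtime expects NumPy inputs; cast to float32 to match the exported graph
inputs = batch.detach().cpu().numpy().astype(np.float32)
outputs = session.run(None, {input_name: inputs})
assert outputs[0].shape[-1] == 10  # 10 MNIST classes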
Step by step guide
Start with the "Catalyst — A PyTorch Framework for Accelerated Deep Learning R&D" introduction.
Try the notebook tutorials or check the minimal examples for a first deep dive.
Read blog posts with use cases and guides.
Learn machine learning with our "Deep Learning with Catalyst" course.
And do not forget to join our Slack for collaboration.
Overview
Catalyst helps you write compact but full-featured Deep Learning pipelines in a few lines of code. You get a training loop with metrics, early stopping, model checkpointing, and other features without the boilerplate.
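For example, early stopping is plugged in as a callback rather than hand-written loop logic. A minimal sketch, reusing the model, criterion, optimizer, and loaders from the Getting started snippet above (exact constructor arguments may differ slightly between Catalyst versions):

from catalyst import dl

runner = dl.SupervisedRunner(
    input_key="features", output_key="logits", target_key="targets", loss_key="loss"
)
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    num_epochs=10,
    callbacks=[
        # stop training if the valid loss does not improve for 3 epochs
        dl.EarlyStoppingCallback(
            patience=3, loader_key="valid", metric_key="loss", minimize=True
        ),
    ],
    logdir="./logs",  # checkpoints are saved here during training
    valid_loader="valid",
    valid_metric="loss",
    minimize_valid_metric=True,
)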
Installation
Common installation:
pip install -U catalyst
More specific installation variants with additional requirements:
pip install catalyst[ml] # installs ML-based Catalyst
pip install catalyst[cv] # installs CV-based Catalyst
# master version installation
pip install git+https://github.com/catalyst-team/catalyst@master --upgrade
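To check that the installation succeeded, you can print the package version (assuming the standard __version__ attribute):

python -c "import catalyst; print(catalyst.__version__)"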
Catalyst is compatible with Python 3.7+ and PyTorch 1.4+.
Tested on Ubuntu 16.04/18.04/20.04, macOS 10.15, Windows 10, and Windows Subsystem for Linux.
Tests
All Catalyst code, features, and pipelines are fully tested with our own catalyst-codestyle. During testing, we train a variety of different models: image classification, image segmentation, text classification, GANs, and more. We then compare their convergence metrics to verify the correctness of the training procedure and its reproducibility. As a result, Catalyst provides fully tested and reproducible best practices for your deep learning research and development.
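Catalyst also ships utilities that help make your own runs reproducible. A minimal sketch using utils.set_global_seed and utils.prepare_cudnn (the cuDNN flags only matter on GPU):

from catalyst import utils

utils.set_global_seed(42)  # seeds python's random, numpy, and torch
utils.prepare_cudnn(deterministic=True, benchmark=False)  # deterministic cuDNN kernels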
Indices and tables

- Callbacks (a sketch of a custom callback follows this index)
  - Run-based
    - BackwardCallback
    - BatchOverfitCallback
    - BatchTransformCallback
    - CheckpointCallback
    - CheckRunCallback
    - ControlFlowCallbackWrapper
    - CriterionCallback
    - EarlyStoppingCallback
    - LRFinder
    - MetricAggregationCallback
    - MixupCallback
    - OptimizerCallback
    - OptunaPruningCallback
    - PeriodicLoaderCallback
    - ProfilerCallback
    - SchedulerCallback
    - TimerCallback
    - TqdmCallback
  - Metric-based Interfaces
  - Metric-based
    - AccuracyCallback
    - AUCCallback
    - CMCScoreCallback
    - ConfusionMatrixCallback
    - DiceCallback
    - FunctionalMetricCallback
    - HitrateCallback
    - IOUCallback
    - MAPCallback
    - MultilabelAccuracyCallback
    - MultilabelPrecisionRecallF1SupportCallback
    - MRRCallback
    - NDCGCallback
    - PrecisionRecallF1SupportCallback
    - R2SquaredCallback
    - ReidCMCScoreCallback
    - SklearnBatchCallback
    - SklearnLoaderCallback
    - SklearnModelCallback
    - TrevskyCallback
- Contrib
- Core
- Data
- Engines
- Loggers
- Metrics
- Runners
- Utils
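All of the callbacks above implement the same Callback interface, so adding your own logic means overriding the event handlers. A minimal sketch of a hypothetical custom callback; the handler name and CallbackOrder usage follow the public Callback API, while the metric itself (mean softmax confidence) is purely illustrative:

import torch
from catalyst import dl

class ConfidenceCallback(dl.Callback):
    """Hypothetical callback: tracks mean max-softmax confidence per batch."""

    def __init__(self):
        # Metric order: runs alongside the other metric callbacks
        super().__init__(order=dl.CallbackOrder.Metric)

    def on_batch_end(self, runner):
        # SupervisedRunner stores model outputs under the "logits" key
        logits = runner.batch["logits"]
        confidence = torch.softmax(logits, dim=-1).max(dim=-1).values.mean()
        runner.batch_metrics["confidence"] = confidence.item()

To use it, pass ConfidenceCallback() in the callbacks list of runner.train, next to the built-in callbacks from the Getting started snippet.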