Catalyst¶
High-level utils for PyTorch DL & RL research. It was developed with a focus on reproducibility, fast experimentation and code/idea reuse, so you can research and develop something new rather than write yet another regular train loop.
Break the cycle - use the Catalyst!
Catalyst is compatible with: Python 3.6+. PyTorch 1.0.0+.
Installation¶
Common installation:
pip install -U catalyst
More specific with additional requirements:
pip install catalyst[dl] # installs DL based catalyst with Weights & Biases support
pip install catalyst[rl] # installs DL+RL based catalyst
pip install catalyst[drl] # installs DL+RL based catalyst with Weights & Biases support
pip install catalyst[contrib] # installs DL+contrib based catalyst
pip install catalyst[all] # installs everything. Very convenient to deploy on a new server
Docs and examples¶
Detailed classification tutorial
Advanced segmentation tutorial
Comprehensive classification pipeline
Binary and semantic segmentation pipeline
The examples folder of the repository contains advanced tutorials and Catalyst best practices.
Blog¶
To learn more about Catalyst internals and keep up with its most important features, you can read Catalyst-info, our blog where we regularly post facts about the framework.
Awesome list of Catalyst-powered repositories¶
We maintain the Awesome Catalyst list. Feel free to open a PR to add your project to the list.
Releases¶
We publish a major release once a month, named YY.MM, and micro-releases with hotfixes and framework improvements in the format YY.MM.#.
You can view the changelog on the GitHub Releases page.
Overview¶
Catalyst helps you write compact but full-featured DL & RL pipelines in a few lines of code. You get a training loop with metrics, early-stopping, model checkpointing and other features without the boilerplate.
Features¶
Universal train/inference loop.
Configuration files for model/data hyperparameters.
Reproducibility – all source code and environment variables will be saved.
Callbacks – reusable train/inference pipeline parts (see the sketch after this list).
Training stages support.
Easy customization.
PyTorch best practices (SWA, AdamW, Ranger optimizer, OneCycleLRWithWarmup, FP16 and more).
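As an illustration of the callback mechanism, here is a minimal sketch of a custom callback. The import path and hook name below are assumptions based on the Callback API; check your Catalyst version for the exact signatures.
from catalyst.dl.callbacks import Callback  # import path may differ across versions

class EpochLoggerCallback(Callback):
    # hypothetical custom callback: logs a message at the end of every epoch
    def on_epoch_end(self, state):
        # `state` carries the current loop information, e.g. the epoch index
        print(f"finished epoch {state.epoch}")
Custom callbacks like this are passed to the runner alongside the built-in ones, so metric logging, checkpointing and early stopping stay out of the model code.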
Structure¶
DL – runner for training and inference, all of the classic machine learning and computer vision metrics and a variety of callbacks for training, validation and inference of neural networks.
RL – scalable Reinforcement Learning, on-policy & off-policy algorithms and their improvements with distributed training support.
contrib – additional modules contributed by Catalyst users.
data – useful tools and scripts for data processing.
Getting started: 30 seconds with Catalyst¶
import torch
from catalyst.dl.experiments import SupervisedRunner
# experiment setup
logdir = "./logdir"
num_epochs = 42
# data
loaders = {"train": ..., "valid": ...}
# model, criterion, optimizer
model = Net()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
# model runner
runner = SupervisedRunner()
# model training
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    scheduler=scheduler,
    loaders=loaders,
    logdir=logdir,
    num_epochs=num_epochs,
    verbose=True,
)
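The loaders placeholders above are ordinary torch.utils.data.DataLoader objects keyed by loader name. A minimal sketch with synthetic tensors (the data shapes here are illustrative, not part of Catalyst):
import torch
from torch.utils.data import DataLoader, TensorDataset

# toy tensors just to show the expected `loaders` format;
# replace them with your own Dataset / DataLoader objects
X_train, y_train = torch.randn(128, 10), torch.randint(0, 2, (128,))
X_valid, y_valid = torch.randn(32, 10), torch.randint(0, 2, (32,))

loaders = {
    "train": DataLoader(TensorDataset(X_train, y_train), batch_size=32, shuffle=True),
    "valid": DataLoader(TensorDataset(X_valid, y_valid), batch_size=32),
}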
Contribution guide¶
We appreciate all contributions. If you plan to contribute bug fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please open an issue first and discuss the feature with us.
Please see the contribution guide for more information.
By participating in this project, you agree to abide by its Code of Conduct.
License¶
This project is licensed under the Apache License, Version 2.0; see the LICENSE file for details.
Citation¶
Please use this BibTeX entry if you want to cite this repository in your publications:
@misc{catalyst,
    author = {Kolesnikov, Sergey},
    title = {Reproducible and fast DL & RL.},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/catalyst-team/catalyst}},
}