DL

Core

class catalyst.dl.core.callback.CallbackOrder[source]

Bases: enum.IntFlag

An enumeration.

Criterion = 20
External = 100
Internal = 0
Metric = 80
Optimizer = 40
Other = 200
Scheduler = 60
Unknown = -100
class catalyst.dl.core.callback.Callback(order: int)[source]

Bases: object

Abstract class that all callback classes (e.g., Logger) extend from. Must be extended before usage.

Usage example (event order):

-- stage start
---- epoch start (one epoch - one run of every loader)
------ loader start
-------- batch start
-------- batch handler
-------- batch end
------ loader end
---- epoch end
-- stage end

exception – called if an Exception was raised

All callbacks have an order value from CallbackOrder

__init__(order: int)[source]

For order values, see the CallbackOrder class

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]
on_batch_start(state: catalyst.dl.core.state.RunnerState)[source]
on_epoch_end(state: catalyst.dl.core.state.RunnerState)[source]
on_epoch_start(state: catalyst.dl.core.state.RunnerState)[source]
on_exception(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_start(state: catalyst.dl.core.state.RunnerState)[source]
on_stage_end(state: catalyst.dl.core.state.RunnerState)[source]
on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]
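A minimal sketch of a custom callback (the BatchCounterCallback below is hypothetical, not part of the library) that hooks into the loader/batch events above; it assumes Callback and CallbackOrder are imported from the module documented here and that state.loader_name holds the current loader's name:

    from catalyst.dl.core.callback import Callback, CallbackOrder


    class BatchCounterCallback(Callback):
        """Hypothetical callback that counts processed batches per loader."""

        def __init__(self):
            # Metric-ordered callbacks run after criterion/optimizer/scheduler ones
            super().__init__(order=CallbackOrder.Metric)
            self.counter = 0

        def on_loader_start(self, state):
            self.counter = 0

        def on_batch_end(self, state):
            self.counter += 1

        def on_loader_end(self, state):
            # state.loader_name is assumed to hold the current loader's name
            print(f"{state.loader_name}: {self.counter} batches")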
class catalyst.dl.core.callback.MetricCallback(prefix: str, metric_fn: Callable, input_key: str = 'targets', output_key: str = 'logits', **metric_params)[source]

Bases: catalyst.dl.core.callback.Callback

A callback that computes a single metric on state.on_batch_end

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]
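A hedged sketch of wrapping a plain function into a MetricCallback; it assumes the callback calls metric_fn(outputs, targets, **metric_params) on each batch, where outputs come from output_key and targets from input_key (the function and keys below are illustrative):

    from catalyst.dl.core.callback import MetricCallback


    def top1_accuracy(outputs, targets):
        """Hypothetical metric: fraction of correct top-1 predictions."""
        return (outputs.argmax(dim=1) == targets).float().mean().item()


    # the value is logged under the given prefix in state.metrics
    top1_callback = MetricCallback(
        prefix="top1_accuracy",
        metric_fn=top1_accuracy,
        input_key="targets",   # y_true comes from the input dict
        output_key="logits",   # y_pred comes from the model output dict
    )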
class catalyst.dl.core.callback.MultiMetricCallback(prefix: str, metric_fn: Callable, list_args: List, input_key: str = 'targets', output_key: str = 'logits', **metric_params)[source]

Bases: catalyst.dl.core.callback.Callback

A callback that computes multiple metrics on state.on_batch_end

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]
class catalyst.dl.core.callback.LoggerCallback(order: int = None)[source]

Bases: catalyst.dl.core.callback.Callback

Loggers are executed on start before all callbacks, and on end after all callbacks.

class catalyst.dl.core.callback.MeterMetricsCallback(metric_names: List[str], meter_list: List, input_key: str = 'targets', output_key: str = 'logits', class_names: List[str] = None, num_classes: int = 2, activation: str = 'Sigmoid')[source]

Bases: catalyst.dl.core.callback.Callback

A callback that tracks metrics through meters and prints metrics for each class on state.on_loader_end. This callback works for both single metric and multi-metric meters.

__init__(metric_names: List[str], meter_list: List, input_key: str = 'targets', output_key: str = 'logits', class_names: List[str] = None, num_classes: int = 2, activation: str = 'Sigmoid')[source]
Parameters
  • metric_names (List[str]) – list of metric names to print. Make sure they are in the same order as the metrics output by the meters in meter_list

  • meter_list (list-like) – list of meters.meter.Meter instances; len(meter_list) == n_classes

  • input_key (str) – input key to use for metric calculation; specifies our y_true.

  • output_key (str) – output key to use for metric calculation; specifies our y_pred

  • class_names (List[str]) – class names to display in the logs. If None, defaults to indices for each class, starting from 0.

  • num_classes (int) – Number of classes; must be > 1

  • activation (str) – A torch.nn activation applied to the logits. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_start(state)[source]
class catalyst.dl.core.experiment.Experiment[source]

Bases: abc.ABC

Object containing all information required to run the experiment

Abstract class; look for concrete implementations below

abstract property distributed_params

Dict with the parameters for distributed and FP16 methods

abstract get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Returns the callbacks for a given stage

abstract get_criterion(stage: str) → torch.nn.modules.module.Module[source]

Returns the criterion for a given stage

get_datasets(stage: str, **kwargs) → OrderedDict[str, Dataset][source]

Returns the datasets for a given stage and kwargs

get_experiment_components(model: torch.nn.modules.module.Module, stage: str) → Tuple[torch.nn.modules.module.Module, torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler._LRScheduler][source]

Returns a tuple containing the criterion, optimizer and scheduler for a given model and stage.

abstract get_loaders(stage: str) → OrderedDict[str, DataLoader][source]

Returns the loaders for a given stage

abstract get_model(stage: str) → torch.nn.modules.module.Module[source]

Returns the model for a given stage

get_native_batch(stage: str, loader: Union[str, int] = 0, data_index: int = 0)[source]

Returns a batch from the experiment loader

Parameters
  • stage (str) – stage name

  • loader (Union[str, int]) – loader name or its index, default is the first loader

  • data_index (int) – index in dataset from the loader

abstract get_optimizer(stage: str, model: torch.nn.modules.module.Module) → torch.optim.optimizer.Optimizer[source]

Returns the optimizer for a given stage

abstract get_scheduler(stage: str, optimizer: torch.optim.optimizer.Optimizer) → torch.optim.lr_scheduler._LRScheduler[source]

Returns the scheduler for a given stage

abstract get_state_params(stage: str) → Mapping[str, Any][source]

Returns the state parameters for a given stage

static get_transforms(stage: str = None, mode: str = None)[source]

Returns the data transforms for a given stage and mode

abstract property initial_seed

Experiment’s initial seed value

abstract property logdir

Path to the directory where the experiment stores its logs

abstract property monitoring_params

Dict with the parameters for monitoring services

abstract property stages

Experiment’s stage names

class catalyst.dl.core.metric_manager.TimerManager[source]

Bases: object

reset() → None[source]

Reset all previous timers

start(name: str) → None[source]

Starts the timer name

Parameters

name (str) – name of a timer

stop(name: str) → None[source]

Stops the timer name

Parameters

name (str) – name of a timer
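A small usage sketch; the measured durations are kept internally per timer name (the exact attribute that exposes them is not shown in this reference):

    from catalyst.dl.core.metric_manager import TimerManager

    timer = TimerManager()
    timer.reset()

    timer.start("data_loading")
    # ... fetch a batch here ...
    timer.stop("data_loading")

    timer.start("forward")
    # ... run the model here ...
    timer.stop("forward")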

class catalyst.dl.core.metric_manager.MetricManager(valid_loader: str = 'valid', main_metric: str = 'loss', minimize: bool = True, batch_consistant_metrics: bool = True)[source]

Bases: object

add_batch_value(name: str = None, value: Any = None, metrics_dict: Dict[str, Any] = None)[source]
property batch_values
begin_batch()[source]
begin_epoch()[source]
begin_loader(name: str)[source]
end_batch()[source]
end_epoch_train()[source]
end_loader()[source]
property main_metric_value
class catalyst.dl.core.runner.Runner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]

Bases: abc.ABC

Abstract class that all runners inherit from

__init__(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]
Parameters
  • model (Model) – Torch model object

  • device (Device) – Torch device

property device

Returns the runner’s device instance

abstract forward(batch: Mapping[str, Any], **kwargs) → Mapping[str, Any][source]

Forward method for your Runner

Parameters
  • batch – Key-value batch items

  • **kwargs – kwargs to pass to the model

property model

Returns the runner’s model instance

predict_batch(batch: Mapping[str, Any], **kwargs) → Mapping[str, Any][source]

Runs the model on a batch of elements.

WARN: you should not override this method. If you need a specific model call, override the forward() method.

Parameters
  • batch – Key-value batch items

  • **kwargs – kwargs to pass to the model

Returns

model output key-value

run_experiment(experiment: catalyst.dl.core.experiment.Experiment, check: bool = False)[source]

Starts the experiment

class catalyst.dl.core.state.RunnerState(*, device=None, model=None, criterion=None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler=None, logdir: str = None, stage: str = 'infer', num_epochs: int = 1, main_metric: str = 'loss', minimize_metric: bool = True, valid_loader: str = 'valid', verbose: bool = False, checkpoint_data: Dict = None, batch_consistant_metrics: bool = True, **kwargs)[source]

Bases: catalyst.utils.frozen.FrozenClass

An object that is used to pass internal state during train/valid/infer.

property epoch_log
get_key(key, inner_key=None)[source]
on_batch_end_post()[source]
on_batch_end_pre()[source]
on_batch_start_post()[source]
on_batch_start_pre()[source]
on_epoch_end_post()[source]
on_epoch_end_pre()[source]
on_epoch_start_post()[source]
on_epoch_start_pre()[source]
on_exception_post()[source]
on_exception_pre()[source]
on_loader_end_post()[source]
on_loader_end_pre()[source]
on_loader_start_post()[source]
on_loader_start_pre()[source]
on_stage_end_post()[source]
on_stage_end_pre()[source]
on_stage_start_post()[source]
on_stage_start_pre()[source]
set_key(value, key, inner_key=None)[source]
property stage_epoch_log

Callbacks

class catalyst.dl.callbacks.checkpoint.CheckpointCallback(save_n_best: int = 1, resume: str = None, resume_dir: str = None, metric_filename: str = '_metrics.json')[source]

Bases: catalyst.dl.callbacks.checkpoint.BaseCheckpointCallback

Checkpoint callback to save/restore your model/criterion/optimizer/metrics.

__init__(save_n_best: int = 1, resume: str = None, resume_dir: str = None, metric_filename: str = '_metrics.json')[source]
Parameters
  • save_n_best (int) – number of best checkpoints to keep

  • resume (str) – path to checkpoint to load and initialize runner state

  • metric_filename (str) – filename to save metrics in the checkpoint folder. Must end with .json or .yml

get_checkpoint_suffix(checkpoint: dict) → str[source]
get_metric(last_valid_metrics) → Dict[source]
static load_checkpoint(*, filename, state: catalyst.dl.core.state.RunnerState)[source]
on_epoch_end(state: catalyst.dl.core.state.RunnerState)[source]
on_stage_end(state: catalyst.dl.core.state.RunnerState)[source]
on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]
process_checkpoint(logdir: str, checkpoint: Dict, is_best: bool, main_metric: str = 'loss', minimize_metric: bool = True)[source]
truncate_checkpoints(minimize_metric: bool) → None[source]
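A hedged usage sketch: passing the callback explicitly in order to keep the three best checkpoints and resume from an earlier run; the checkpoint path is hypothetical, and model/criterion/optimizer/loaders/runner are assumed to be set up as in the SupervisedRunner.train example later in this reference:

    from catalyst.dl.callbacks.checkpoint import CheckpointCallback

    runner.train(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        loaders=loaders,
        logdir="./logdir",
        num_epochs=10,
        callbacks=[
            CheckpointCallback(
                save_n_best=3,                           # keep the 3 best checkpoints
                resume="./logdir/checkpoints/last.pth",  # hypothetical path to resume from
            ),
        ],
    )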
class catalyst.dl.callbacks.checkpoint.IterationCheckpointCallback(save_n_last: int = 3, num_iters: int = 100, stage_restart: bool = True, metric_filename: str = '_metrics_iter.json')[source]

Bases: catalyst.dl.callbacks.checkpoint.BaseCheckpointCallback

Iteration checkpoint callback to save your model/criterion/optimizer

__init__(save_n_last: int = 3, num_iters: int = 100, stage_restart: bool = True, metric_filename: str = '_metrics_iter.json')[source]
Parameters
  • save_n_last (int) – number of last checkpoints to keep

  • num_iters (int) – save a checkpoint every num_iters iterations

  • stage_restart (bool) – whether to restart the counter every stage

  • metric_filename (str) – filename to save metrics in the checkpoint folder. Must end with .json or .yml

get_checkpoint_suffix(checkpoint: dict) → str[source]
get_metric(**kwargs) → Dict[source]
on_batch_end(state)[source]
on_stage_start(state)[source]
process_checkpoint(logdir: str, checkpoint: Dict, batch_values: Dict[str, float])[source]
truncate_checkpoints(**kwargs) → None[source]
class catalyst.dl.callbacks.criterion.CriterionCallback(input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', prefix: str = 'loss', criterion_key: str = None, multiplier: float = 1.0)[source]

Bases: catalyst.dl.core.callback.Callback

Callback that measures loss with the specified criterion.

__init__(input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', prefix: str = 'loss', criterion_key: str = None, multiplier: float = 1.0)[source]
Parameters
  • input_key (Union[str, List[str], Dict[str, str]]) – key/list/dict of keys that take values from the input dictionary. If None, the whole input will be passed to the criterion.

  • output_key (Union[str, List[str], Dict[str, str]]) – key/list/dict of keys that take values from the output dictionary. If None, the whole output will be passed to the criterion.

  • prefix (str) – prefix for metrics and output key for loss in state.loss dictionary

  • criterion_key (str) – A key to take a criterion in case there are several of them and they are in a dictionary format.

  • multiplier (float) – scale factor for the output loss.

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]

Computes the loss and adds it to the metrics

on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]

Checks that the current stage has a correct criterion

class catalyst.dl.callbacks.criterion.CriterionOutputOnlyCallback(output_key: Union[Dict[str, str], List[str]], **kwargs)[source]

Bases: catalyst.dl.callbacks.criterion.CriterionCallback

Callback that measures loss with the specified criterion, based on the model output only. @TODO: merge logic with CriterionCallback.

__init__(output_key: Union[Dict[str, str], List[str]], **kwargs)[source]
Parameters
  • output_key (Union[List[str], Dict[str, str]]) – dict or list of keys that take values from the output dictionary. If None, the whole output will be passed to the criterion.

  • **kwargs – CriterionCallback init parameters

class catalyst.dl.callbacks.criterion.CriterionAggregatorCallback(prefix: str, loss_keys: Union[str, List[str], Dict[str, float]] = None, loss_aggregate_fn: str = 'sum', multiplier: float = 1.0)[source]

Bases: catalyst.dl.core.callback.Callback

This callback allows you to aggregate the values of the loss (with different aggregation strategies) and put the value back into state.loss.

__init__(prefix: str, loss_keys: Union[str, List[str], Dict[str, float]] = None, loss_aggregate_fn: str = 'sum', multiplier: float = 1.0) → None[source]
Parameters
  • prefix (str) – new key for aggregated loss.

  • loss_keys (Union[str, List[str], Dict[str, float]]) – If not empty, it aggregates only the values from the loss by these keys. For weighted_sum aggregation it must be a Dict[str, float].

  • loss_aggregate_fn (str) – function for aggregation. Must be either sum, mean or weighted_sum.

  • multiplier (float) – scale factor for the aggregated loss.

on_batch_end(state: catalyst.dl.core.state.RunnerState) → None[source]

Computes the loss and adds it to the metrics
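To illustrate how these criterion callbacks combine, here is a hedged sketch of a two-loss setup: the criterion passed to the experiment is assumed to be a dict, each CriterionCallback picks one entry via criterion_key and logs it under its own prefix, and the aggregator writes the weighted sum back under the loss key:

    from torch import nn
    from catalyst.dl.callbacks.criterion import (
        CriterionAggregatorCallback,
        CriterionCallback,
    )

    # the criterion object passed to the runner/experiment (keys are illustrative)
    criterion = {
        "bce": nn.BCEWithLogitsLoss(),
        "l1": nn.L1Loss(),
    }

    callbacks = [
        CriterionCallback(prefix="loss_bce", criterion_key="bce"),
        CriterionCallback(prefix="loss_l1", criterion_key="l1"),
        CriterionAggregatorCallback(
            prefix="loss",
            loss_keys={"loss_bce": 1.0, "loss_l1": 0.5},
            loss_aggregate_fn="weighted_sum",
        ),
    ]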

class catalyst.dl.callbacks.inference.InferCallback(out_dir=None, out_prefix=None)[source]

Bases: catalyst.dl.core.callback.Callback

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_start(state: catalyst.dl.core.state.RunnerState)[source]
on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]
class catalyst.dl.callbacks.inference.InferMaskCallback(out_dir=None, out_prefix=None, input_key=None, output_key=None, name_key=None, mean=None, std=None, threshold: float = 0.5, mask_strength: float = 0.5, mask_type: str = 'soft')[source]

Bases: catalyst.dl.core.callback.Callback

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_start(state: catalyst.dl.core.state.RunnerState)[source]
on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]
class catalyst.dl.callbacks.logging.ConsoleLogger[source]

Bases: catalyst.dl.core.callback.LoggerCallback

Logger callback, translates state.metrics to console and text file

__init__()[source]

Init ConsoleLogger

on_epoch_end(state)[source]

Translate state.metrics to console and text file at the end of an epoch

on_stage_end(state)[source]

Called at the end of each stage

on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]

Prepare state.logdir for the current stage

class catalyst.dl.callbacks.logging.TelegramLogger(token: str = None, chat_id: str = None, metric_names: List[str] = None, log_on_stage_start: bool = True, log_on_loader_start: bool = True, log_on_loader_end: bool = True, log_on_stage_end: bool = True, log_on_exception: bool = True)[source]

Bases: catalyst.dl.core.callback.LoggerCallback

Logger callback, translates state.metrics to telegram channel

__init__(token: str = None, chat_id: str = None, metric_names: List[str] = None, log_on_stage_start: bool = True, log_on_loader_start: bool = True, log_on_loader_end: bool = True, log_on_stage_end: bool = True, log_on_exception: bool = True)[source]
Parameters
  • token (str) – telegram bot’s token, see https://core.telegram.org/bots

  • chat_id (str) – Chat unique identifier

  • metric_names – list of metric names to log. If None, logs everything.

  • log_on_stage_start (bool) – send notification on stage start

  • log_on_loader_start (bool) – send notification on loader start

  • log_on_loader_end (bool) – send notification on loader end

  • log_on_stage_end (bool) – send notification on stage end

  • log_on_exception (bool) – send notification on exception

on_exception(state: catalyst.dl.core.state.RunnerState)[source]

Notify about raised Exception

on_loader_end(state: catalyst.dl.core.state.RunnerState)[source]

Translate state.metrics to telegram channel

on_loader_start(state: catalyst.dl.core.state.RunnerState)[source]

Notify about starting running the new loader

on_stage_end(state: catalyst.dl.core.state.RunnerState)[source]

Notify about finishing a stage

on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]

Notify about starting a new stage

class catalyst.dl.callbacks.logging.TensorboardLogger(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True)[source]

Bases: catalyst.dl.core.callback.LoggerCallback

Logger callback, translates state.metrics to tensorboard

__init__(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True)[source]
Parameters
  • metric_names (List[str]) – list of metric names to log; if None, logs everything

  • log_on_batch_end (bool) – logs per-batch metrics if set True

  • log_on_epoch_end (bool) – logs per-epoch metrics if set True

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]

Translate batch metrics to tensorboard

on_loader_end(state: catalyst.dl.core.state.RunnerState)[source]

Translate epoch metrics to tensorboard

on_loader_start(state)[source]

Prepare tensorboard writers for the current stage

on_stage_end(state: catalyst.dl.core.state.RunnerState)[source]

Close opened tensorboard writers

class catalyst.dl.callbacks.logging.VerboseLogger(always_show: List[str] = None, never_show: List[str] = None)[source]

Bases: catalyst.dl.core.callback.LoggerCallback

Logs the params to the console

__init__(always_show: List[str] = None, never_show: List[str] = None)[source]
Parameters
  • always_show (List[str]) – list of metrics to always show. If None, the default is ["_timers/_fps"]. To show no always_show metrics, set it to an empty list [].

  • never_show (List[str]) – list of metrics which will not be shown

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]

Update tqdm progress bar at the end of each batch

on_exception(state: catalyst.dl.core.state.RunnerState)[source]

Called if an Exception was raised

on_loader_end(state: catalyst.dl.core.state.RunnerState)[source]

Cleanup and close tqdm progress bar

on_loader_start(state: catalyst.dl.core.state.RunnerState)[source]

Init tqdm progress bar

class catalyst.dl.callbacks.misc.EarlyStoppingCallback(patience: int, metric: str = 'loss', minimize: bool = True, min_delta: float = 1e-06)[source]

Bases: catalyst.dl.core.callback.Callback

on_epoch_end(state: catalyst.dl.core.state.RunnerState) → None[source]
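A hedged configuration sketch, assuming the callback stops training once the monitored metric has not improved by at least min_delta for patience consecutive epochs; the metric key is illustrative and must match what your other callbacks log:

    from catalyst.dl.callbacks.misc import EarlyStoppingCallback

    early_stopping = EarlyStoppingCallback(
        patience=5,            # stop after 5 epochs without improvement
        metric="accuracy01",   # hypothetical metric key logged during training
        minimize=False,        # higher accuracy is better
        min_delta=1e-3,
    )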
class catalyst.dl.callbacks.misc.ConfusionMatrixCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'confusion_matrix', version: str = 'tnt', class_names: List[str] = None, num_classes: int = None, plot_params: Dict = None)[source]

Bases: catalyst.dl.core.callback.Callback

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_start(state: catalyst.dl.core.state.RunnerState)[source]
class catalyst.dl.callbacks.misc.RaiseExceptionCallback[source]

Bases: catalyst.dl.core.callback.LoggerCallback

on_exception(state: catalyst.dl.core.state.RunnerState)[source]
class catalyst.dl.callbacks.mixup.MixupCallback(fields: List[str] = ('features', ), alpha=1.0, on_train_only=True, **kwargs)[source]

Bases: catalyst.dl.callbacks.criterion.CriterionCallback

Callback to do mixup augmentation.

Paper: https://arxiv.org/abs/1710.09412

Note

MixupCallback inherits from CriterionCallback and does its work, so do not use both callbacks together.

__init__(fields: List[str] = ('features', ), alpha=1.0, on_train_only=True, **kwargs)[source]
Parameters
  • fields (List[str]) – list of features which must be affected.

  • alpha (float) – beta distribution parameter (a = b = alpha). Must be >= 0. The closer alpha is to zero, the weaker the effect of the mixup.

  • on_train_only (bool) – apply mixup to the train loader only. Since mixup uses proxy inputs, the targets become proxies as well and are not useful for validation; so, if on_train_only is True, the standard output/metric is used for validation.

on_batch_start(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_start(state: catalyst.dl.core.state.RunnerState)[source]
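A hedged usage sketch: since MixupCallback already performs the CriterionCallback work, it is passed instead of a separate CriterionCallback (keys and alpha are illustrative):

    from catalyst.dl.callbacks.mixup import MixupCallback

    callbacks = [
        MixupCallback(
            fields=["features"],  # batch keys to mix
            alpha=0.4,            # Beta(alpha, alpha) mixing coefficient
            on_train_only=True,   # validation runs with the plain criterion
            input_key="targets",
            output_key="logits",
        ),
    ]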
class catalyst.dl.callbacks.optimizer.OptimizerCallback(grad_clip_params: Dict = None, accumulation_steps: int = 1, optimizer_key: str = None, loss_key: str = 'loss', decouple_weight_decay: bool = True)[source]

Bases: catalyst.dl.core.callback.Callback

Optimizer callback, abstraction over optimizer step.

__init__(grad_clip_params: Dict = None, accumulation_steps: int = 1, optimizer_key: str = None, loss_key: str = 'loss', decouple_weight_decay: bool = True)[source]
Parameters
  • grad_clip_params (dict) – params for gradient clipping

  • accumulation_steps (int) – number of steps before model.zero_grad()

  • optimizer_key (str) – A key to take an optimizer in case there are several of them and they are in a dictionary format.

  • loss_key (str) – key to get loss from state.loss

  • decouple_weight_decay (bool) – if True, decouple weight decay regularization.

static grad_step(*, optimizer: torch.optim.optimizer.Optimizer, optimizer_wds: List[float] = 0, grad_clip_fn: Callable = None)[source]

Makes a gradient step for a given optimizer

Parameters
  • optimizer (Optimizer) – the optimizer

  • optimizer_wds (List[float]) – list of weight decay parameters for each param group

  • grad_clip_fn (Callable) – function for gradient clipping

on_batch_end(state)[source]

On batch end event

on_batch_start(state)[source]

On batch start event

on_epoch_end(state)[source]

On epoch end event

on_epoch_start(state)[source]

On epoch start event

on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]

On stage start event
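A hedged sketch of gradient accumulation: with accumulation_steps=4 the optimizer step and zero_grad are performed once every four batches, which roughly emulates a four-times larger batch size (grad_clip_params is omitted because its exact dict format is not shown in this reference):

    from catalyst.dl.callbacks.optimizer import OptimizerCallback

    optimizer_callback = OptimizerCallback(
        accumulation_steps=4,  # call optimizer.step()/zero_grad() every 4 batches
        loss_key="loss",       # which entry of state.loss to backpropagate
    )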

class catalyst.dl.callbacks.scheduler.SchedulerCallback(scheduler_key: str = None, mode: str = None, reduce_metric: str = 'loss')[source]

Bases: catalyst.dl.core.callback.Callback

on_batch_end(state)[source]
on_epoch_end(state)[source]
on_loader_start(state: catalyst.dl.core.state.RunnerState)[source]
on_stage_start(state: catalyst.dl.core.state.RunnerState)[source]
step(state: catalyst.dl.core.state.RunnerState)[source]
class catalyst.dl.callbacks.scheduler.LRUpdater(optimizer_key: str = None)[source]

Bases: catalyst.dl.core.callback.Callback

Basic class that all LR updaters inherit from

__init__(optimizer_key: str = None)[source]
Parameters

optimizer_key – which optimizer key to use for learning rate scheduling

calc_lr()[source]
calc_momentum()[source]
on_batch_end(state)[source]
on_loader_start(state)[source]
on_stage_start(state)[source]
update_optimizer(state)[source]
class catalyst.dl.callbacks.scheduler.LRFinder(final_lr, scale='log', num_steps=None, optimizer_key=None)[source]

Bases: catalyst.dl.callbacks.scheduler.LRUpdater

Helps you find an optimal learning rate for a model, as suggested in the 2015 CLR paper. The learning rate is increased on a linear or log scale, depending on user input.

https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html

__init__(final_lr, scale='log', num_steps=None, optimizer_key=None)[source]
Parameters
  • final_lr – final learning rate to try with

  • scale – learning rate increasing scale (“log” or “linear”)

  • num_steps – number of batches to try; if None, the whole loader will be used.

  • optimizer_key – which optimizer key to use for learning rate scheduling

calc_lr()[source]
on_batch_end(state)[source]
on_loader_start(state)[source]
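A hedged usage sketch: run a single epoch while the learning rate grows towards final_lr, then inspect the loss curve (for example with plot_metrics from the Utils section) to pick a good learning rate; model/criterion/optimizer/loaders/runner are assumed to be set up as in the SupervisedRunner.train example later in this reference:

    from catalyst.dl.callbacks.scheduler import LRFinder

    runner.train(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        loaders=loaders,
        logdir="./lrfinder_logs",
        num_epochs=1,  # one pass over the train loader is enough for the range test
        callbacks=[
            LRFinder(final_lr=1.0, scale="log", num_steps=None),
        ],
    )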

Metrics

class catalyst.dl.callbacks.metrics.accuracy.AccuracyCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'accuracy', accuracy_args: List[int] = None, num_classes: int = None, threshold: float = None, activation: str = None)[source]

Bases: catalyst.dl.core.callback.MultiMetricCallback

Accuracy metric callback.

It can be used either for:
  • multi-class task:

    - you can use accuracy_args;
    - threshold and activation are not required;
    - input_key points to a tensor of shape batch_size;
    - output_key points to a tensor of shape batch_size x num_classes.

  • OR multi-label task, in this case:

    - you must specify threshold and activation;
    - accuracy_args and num_classes will not be used (there is no way to apply top-k in multi-label classification);
    - input_key and output_key point to tensors of shape batch_size x num_classes;
    - output_key points to a tensor with binary vectors.

There is no need to choose a type (multi-class/multi-label); an appropriate type will be chosen automatically from the shape of the tensors.

__init__(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'accuracy', accuracy_args: List[int] = None, num_classes: int = None, threshold: float = None, activation: str = None)[source]
Parameters
  • input_key (str) – input key to use for accuracy calculation; specifies our y_true.

  • output_key (str) – output key to use for accuracy calculation; specifies our y_pred.

  • prefix (str) – key for the metric’s name

  • accuracy_args (List[int]) – specifies which accuracy@K to log: [1] – accuracy@1; [1, 3] – accuracy@1 and accuracy@3; [1, 3, 5] – accuracy@1, accuracy@3 and accuracy@5

  • num_classes (int) – number of classes to calculate accuracy_args if accuracy_args is None

  • threshold (float) – threshold for outputs binarization.

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [“none”, “Sigmoid”, “Softmax”].
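Two hedged configuration sketches, one per task type described above (the keys match the documented defaults):

    from catalyst.dl.callbacks.metrics.accuracy import AccuracyCallback

    # multi-class: integer targets, logits of shape batch_size x num_classes
    multiclass_accuracy = AccuracyCallback(
        input_key="targets",
        output_key="logits",
        prefix="accuracy",
        accuracy_args=[1, 3],  # log accuracy@1 and accuracy@3
    )

    # multi-label: binary targets, threshold and activation are required
    multilabel_accuracy = AccuracyCallback(
        input_key="targets",
        output_key="logits",
        prefix="accuracy",
        threshold=0.5,
        activation="Sigmoid",
    )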

class catalyst.dl.callbacks.metrics.accuracy.MapKCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'map', map_args: List[int] = None, num_classes: int = None)[source]

Bases: catalyst.dl.core.callback.MultiMetricCallback

mAP@k metric callback.

__init__(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'map', map_args: List[int] = None, num_classes: int = None)[source]
Parameters
  • input_key (str) – input key to use for mean average accuracy at k calculation; specifies our y_true.

  • output_key (str) – output key to use for mean average accuracy at k calculation; specifies our y_pred.

  • prefix (str) – key for the metric’s name

  • map_args (List[int]) – specifies which mAP@K to log: [1] – map@1; [1, 3] – map@1 and map@3; [1, 3, 5] – map@1, map@3 and map@5

  • num_classes (int) – number of classes to calculate map_args if map_args is None

class catalyst.dl.callbacks.metrics.auc.AUCCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'auc', class_names: List[str] = None, num_classes: int = 2, activation: str = 'Sigmoid')[source]

Bases: catalyst.dl.core.callback.MeterMetricsCallback

Calculates the AUC per class for each loader. Currently, supports binary and multi-label cases.

__init__(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'auc', class_names: List[str] = None, num_classes: int = 2, activation: str = 'Sigmoid')[source]
Parameters
  • input_key (str) – input key to use for auc calculation; specifies our y_true.

  • output_key (str) – output key to use for auc calculation; specifies our y_pred

  • prefix (str) – name to display for auc when printing

  • class_names (List[str]) – class names to display in the logs. If None, defaults to indices for each class, starting from 0.

  • num_classes (int) – Number of classes; must be > 1

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

class catalyst.dl.callbacks.metrics.dice.DiceCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'dice', eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: catalyst.dl.core.callback.MetricCallback

Dice metric callback.

__init__(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'dice', eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]
Parameters
  • input_key (str) – input key to use for dice calculation; specifies our y_true.

  • output_key (str) – output key to use for dice calculation; specifies our y_pred.

class catalyst.dl.callbacks.metrics.dice.MulticlassDiceMetricCallback(prefix: str = 'dice', input_key: str = 'targets', output_key: str = 'logits', class_names=None, class_prefix='', **metric_params)[source]

Bases: catalyst.dl.core.callback.Callback

on_batch_end(state: catalyst.dl.core.state.RunnerState)[source]
on_loader_end(state: catalyst.dl.core.state.RunnerState)[source]
class catalyst.dl.callbacks.metrics.f1_score.F1ScoreCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'f1_score', beta: float = 1.0, eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: catalyst.dl.core.callback.MetricCallback

F1 score metric callback.

__init__(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'f1_score', beta: float = 1.0, eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]
Parameters
  • input_key (str) – input key to use for F1 calculation; specifies our y_true

  • output_key (str) – output key to use for F1 calculation; specifies our y_pred

  • prefix (str) – key to store in logs

  • beta (float) – beta param for f_score

  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

class catalyst.dl.callbacks.metrics.iou.IouCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'iou', eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: catalyst.dl.core.callback.MetricCallback

IoU (Jaccard) metric callback.

__init__(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'iou', eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]
Parameters
  • input_key (str) – input key to use for iou calculation; specifies our y_true

  • output_key (str) – output key to use for iou calculation; specifies our y_pred

  • prefix (str) – key to store in logs

  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

catalyst.dl.callbacks.metrics.iou.JaccardCallback

alias of catalyst.dl.callbacks.metrics.iou.IouCallback

class catalyst.dl.callbacks.metrics.iou.ClasswiseIouCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'iou', classes: List[str] = None, num_classes: int = None, eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: catalyst.dl.core.callback.MultiMetricCallback

Classwise IoU (Jaccard) metric callback.

__init__(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'iou', classes: List[str] = None, num_classes: int = None, eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]
Parameters
  • input_key (str) – input key to use for iou calculation; specifies our y_true

  • output_key (str) – output key to use for iou calculation; specifies our y_pred

  • prefix (str) – key to store in logs (will be prefix_class_name)

  • classes (List[str]) – list of class names. You should specify either ‘classes’ or ‘num_classes’

  • num_classes (int) – number of classes. You should specify either ‘classes’ or ‘num_classes’

  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

catalyst.dl.callbacks.metrics.iou.ClasswiseJaccardCallback

alias of catalyst.dl.callbacks.metrics.iou.ClasswiseIouCallback

Experiment

class catalyst.dl.experiment.base.BaseExperiment(model: torch.nn.modules.module.Module, loaders: OrderedDict[str, DataLoader], callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, state_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, monitoring_params: Dict = None, initial_seed: int = 42)[source]

Bases: catalyst.dl.core.experiment.Experiment

Super-simple one-stage experiment that you can use to declare an experiment in code

__init__(model: torch.nn.modules.module.Module, loaders: OrderedDict[str, DataLoader], callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, state_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, monitoring_params: Dict = None, initial_seed: int = 42)[source]
Parameters
  • model (Model) – model

  • loaders (dict) – dictionary containing one or several torch.utils.data.DataLoader for training and validation

  • callbacks (List[catalyst.dl.Callback]) – list of callbacks

  • logdir (str) – path to output directory

  • stage (str) – current stage

  • criterion (Criterion) – criterion function

  • optimizer (Optimizer) – optimizer

  • scheduler (Scheduler) – scheduler

  • num_epochs (int) – number of experiment’s epochs

  • valid_loader (str) – loader name used to calculate the metrics and save the checkpoints. For example, you can pass train and then the metrics will be taken from train loader.

  • main_metric (str) – the key to the name of the metric by which the checkpoints will be selected.

  • minimize_metric (bool) – flag to indicate whether the main_metric should be minimized.

  • verbose (bool) – if True, it displays the status of the training to the console.

  • state_kwargs (dict) – additional state params to RunnerState

  • checkpoint_data (dict) – additional data to save in checkpoint, for example: class_names, date_of_training, etc

  • distributed_params (dict) – dictionary with the parameters for distributed and FP16 methods

  • monitoring_params (dict) – dict with the parameters for monitoring services

  • initial_seed (int) – experiment’s initial seed value

property distributed_params

Dict with the parameters for distributed and FP16 methods

get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Returns the callbacks for a given stage

get_criterion(stage: str) → torch.nn.modules.module.Module[source]

Returns the criterion for a given stage

get_loaders(stage: str) → OrderedDict[str, DataLoader][source]

Returns the loaders for a given stage

get_model(stage: str) → torch.nn.modules.module.Module[source]

Returns the model for a given stage

get_optimizer(stage: str, model: torch.nn.modules.module.Module) → torch.optim.optimizer.Optimizer[source]

Returns the optimizer for a given stage

get_scheduler(stage: str, optimizer=None) → torch.optim.lr_scheduler._LRScheduler[source]

Returns the scheduler for a given stage

get_state_params(stage: str) → Mapping[str, Any][source]

Returns the state parameters for a given stage

property initial_seed

Experiment’s initial seed value

property logdir

Path to the directory where the experiment stores its logs

property monitoring_params

Dict with the parameters for monitoring services

property stages

Experiment’s stage names (array with one value)

class catalyst.dl.experiment.config.ConfigExperiment(config: Dict)[source]

Bases: catalyst.dl.core.experiment.Experiment

Experiment created from a configuration file

STAGE_KEYWORDS = ['criterion_params', 'optimizer_params', 'scheduler_params', 'data_params', 'state_params', 'callbacks_params']
__init__(config: Dict)[source]
Parameters

config (dict) – dictionary of parameters

property distributed_params

Dict with the parameters for distributed and FP16 methods

get_callbacks(stage: str) → OrderedDict[Callback][source]

Returns the callbacks for a given stage

get_criterion(stage: str) → torch.nn.modules.module.Module[source]

Returns the criterion for a given stage

get_loaders(stage: str) → OrderedDict[str, DataLoader][source]

Returns the loaders for a given stage

get_model(stage: str)[source]

Returns the model for a given stage

get_optimizer(stage: str, model: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]]) → Union[torch.optim.optimizer.Optimizer, Dict[str, torch.optim.optimizer.Optimizer]][source]

Returns the optimizer for a given stage

Parameters
  • stage (str) – stage name

  • model (Union[Model, Dict[str, Model]]) – model or a dict of models

get_scheduler(stage: str, optimizer: torch.optim.optimizer.Optimizer) → torch.optim.lr_scheduler._LRScheduler[source]

Returns the scheduler for a given stage

get_state_params(stage: str) → Mapping[str, Any][source]

Returns the state parameters for a given stage

property initial_seed

Experiment’s initial seed value

property logdir

Path to the directory where the experiment stores its logs

property monitoring_params

Dict with the parameters for monitoring services

property stages

Experiment’s stage names

class catalyst.dl.experiment.supervised.SupervisedExperiment(model: torch.nn.modules.module.Module, loaders: OrderedDict[str, DataLoader], callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, state_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, monitoring_params: Dict = None, initial_seed: int = 42)[source]

Bases: catalyst.dl.experiment.base.BaseExperiment

Supervised experiment used mostly in Notebook API

The main difference from BaseExperiment is that it will add several callbacks by default if you have not specified them.

Here is the list of default callbacks:
  • CriterionCallback – measures loss with the specified criterion.

  • OptimizerCallback – abstraction over the optimizer step.

  • SchedulerCallback – does lr_scheduler.step; added only if you provided a scheduler to your experiment.

  • CheckpointCallback – saves the model and optimizer state each epoch; use it to save/restore your model/criterion/optimizer/metrics.

  • ConsoleLogger – standard Catalyst logger, translates state.metrics to console and text file.

  • TensorboardLogger – writes state.metrics to tensorboard.

  • RaiseExceptionCallback – raises an exception if needed.

get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Override of the BaseExperiment.get_callbacks method. Adds several callbacks by default in case they are missing.

Parameters

stage (str) – name of the stage. It should start with infer if you don’t need default callbacks, as they are required only for training stages.

Returns

list of callbacks for experiment

Return type

List[Callback]

Runner

class catalyst.dl.runner.supervised.SupervisedRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None, input_key: Any = 'features', output_key: Any = 'logits', input_target_key: str = 'targets')[source]

Bases: catalyst.dl.core.runner.Runner

Runner for experiments with a supervised model

__init__(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None, input_key: Any = 'features', output_key: Any = 'logits', input_target_key: str = 'targets')[source]
Parameters
  • model (Module) – Torch model object

  • device (Device) – Torch device

  • input_key (Any) – Key in batch dict mapping for model input

  • output_key (Any) – Key in the output dict under which the model output will be stored

  • input_target_key (str) – Key in batch dict mapping for target

forward(batch, **kwargs)[source]

Should not be called directly outside of the runner. If your model has a specific interface, override this method to use it

infer(model: torch.nn.modules.module.Module, loaders: OrderedDict[str, DataLoader], callbacks: Union[List[Callback], OrderedDict[str, Callback]] = None, verbose: bool = False, state_kwargs: Dict = None, fp16: Union[Dict, bool] = None, check: bool = False) → None[source]

Runs inference with the model.

Parameters
  • model (Model) – model to infer

  • loaders (dict) – dictionary containing one or several torch.utils.data.DataLoader for inference

  • callbacks (List[catalyst.dl.Callback]) – list of inference callbacks

  • verbose (bool) – if True, it displays the status of the inference to the console.

  • state_kwargs (dict) – additional state params to RunnerState

  • fp16 (Union[Dict, bool]) – If not None, then sets inference to FP16. See https://nvidia.github.io/apex/amp.html#properties if fp16=True, params by default will be {"opt_level": "O1"}

  • check (bool) – if True, then only checks that pipeline is working (3 epochs only)

predict_loader(model: torch.nn.modules.module.Module, loader: torch.utils.data.dataloader.DataLoader, resume: str = None, verbose: bool = False, state_kwargs: Dict = None, fp16: Union[Dict, bool] = None, check: bool = False) → Any[source]

Makes a prediction on the whole loader with the specified model.

Parameters
  • model (Model) – model to infer

  • loader (DataLoader) – a single torch.utils.data.DataLoader for inference

  • resume (str) – path to checkpoint for model

  • verbose (bool) – if True, it displays the status of the inference to the console.

  • state_kwargs (dict) – additional state params to RunnerState

  • fp16 (Union[Dict, bool]) – If not None, then sets inference to FP16. See https://nvidia.github.io/apex/amp.html#properties if fp16=True, params by default will be {"opt_level": "O1"}

  • check (bool) – if True, then only checks that pipeline is working (3 epochs only)
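A hedged usage sketch; the checkpoint path and loader are illustrative, and the return value is assumed to be the accumulated model output (under the runner’s output_key) for the whole loader:

    from catalyst.dl.runner.supervised import SupervisedRunner

    runner = SupervisedRunner()
    predictions = runner.predict_loader(
        model=model,                              # trained model
        loader=loaders["valid"],                  # a single DataLoader
        resume="./logdir/checkpoints/best.pth",   # hypothetical checkpoint path
        verbose=True,
    )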

trace(model: torch.nn.modules.module.Module = None, batch=None, logdir: str = None, loader: torch.utils.data.dataloader.DataLoader = None, method_name: str = 'forward', mode: str = 'eval', requires_grad: bool = False, fp16: Union[Dict, bool] = None, device: Union[str, torch.device] = 'cpu', predict_params: dict = None) → torch.jit.ScriptModule[source]

Traces the model using Torch JIT

Parameters
  • model (Model) – model to trace

  • batch – batch to forward through the model to trace

  • logdir (str, optional) – If specified, the result will be written to the directory

  • loader (DataLoader, optional) – if batch is not specified, the batch will be next(iter(loader))

  • method_name (str) – model’s method name that will be traced

  • mode (str) – train or eval

  • requires_grad (bool) – flag to trace with gradients

  • fp16 (Union[Dict, bool]) – If not None, then sets tracing params to FP16

  • device (Device) – Torch device or a string

  • predict_params (dict) – additional parameters for model forward

train(model: torch.nn.modules.module.Module, criterion: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer, loaders: OrderedDict[str, DataLoader], logdir: str, callbacks: Union[List[Callback], OrderedDict[str, Callback]] = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, resume: str = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, state_kwargs: Dict = None, checkpoint_data: Dict = None, fp16: Union[Dict, bool] = None, monitoring_params: Dict = None, check: bool = False) → None[source]

Starts the training process of the model.

Parameters
  • model (Model) – model to train

  • criterion (Criterion) – criterion function for training

  • optimizer (Optimizer) – optimizer for training

  • loaders (dict) – dictionary containing one or several torch.utils.data.DataLoader for training and validation

  • logdir (str) – path to output directory

  • callbacks (List[catalyst.dl.Callback]) – list of callbacks

  • scheduler (Scheduler) – scheduler for training

  • resume (str) – path to checkpoint for model

  • num_epochs (int) – number of training epochs

  • valid_loader (str) – loader name used to calculate the metrics and save the checkpoints. For example, you can pass train and then the metrics will be taken from train loader.

  • main_metric (str) – the key to the name of the metric by which the checkpoints will be selected.

  • minimize_metric (bool) – flag to indicate whether the main_metric should be minimized.

  • verbose (bool) – if True, it displays the status of the training to the console.

  • state_kwargs (dict) – additional state params to RunnerState

  • checkpoint_data (dict) – additional data to save in checkpoint, for example: class_names, date_of_training, etc

  • fp16 (Union[Dict, bool]) – If not None, then sets training to FP16. See https://nvidia.github.io/apex/amp.html#properties if fp16=True, params by default will be {"opt_level": "O1"}

  • monitoring_params (dict) – If not None, then create monitoring through Alchemy or Weights&Biases. For example, {"token": "api_token", "experiment": "experiment_name"}

  • check (bool) – if True, then only checks that pipeline is working (3 epochs only)
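For context, a minimal end-to-end sketch of the Notebook API with toy tensors (the data, model and hyperparameters are placeholders, not a recommended setup):

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from catalyst.dl.runner.supervised import SupervisedRunner

    # toy classification data standing in for a real dataset
    X, y = torch.randn(256, 16), torch.randint(0, 4, (256,))
    loaders = {
        "train": DataLoader(TensorDataset(X[:192], y[:192]), batch_size=32, shuffle=True),
        "valid": DataLoader(TensorDataset(X[192:], y[192:]), batch_size=32),
    }

    model = nn.Linear(16, 4)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    runner = SupervisedRunner(input_key="features", output_key="logits")
    runner.train(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        loaders=loaders,
        logdir="./logdir",
        num_epochs=3,
        main_metric="loss",
        minimize_metric=True,
        verbose=True,
    )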

Utils

class catalyst.dl.utils.formatters.MetricsFormatter(message_prefix)[source]

Bases: abc.ABC, logging.Formatter

Abstract metrics formatter

__init__(message_prefix)[source]
Parameters

message_prefix – logging format string that will be prepended to message

format(record: logging.LogRecord)[source]

Format message string

class catalyst.dl.utils.formatters.TxtMetricsFormatter[source]

Bases: catalyst.dl.utils.formatters.MetricsFormatter

Translates batch metrics into a human-readable format.

This class is used by logging.Logger to make a string from a record. For details, refer to the official docs for the logging module.

Note

This is an inner class used by the Logger callback; there is no need to use it directly.

__init__()[source]

Initializes the TxtMetricsFormatter

class catalyst.dl.utils.formatters.JsonMetricsFormatter[source]

Bases: catalyst.dl.utils.formatters.MetricsFormatter

Translates batch metrics into JSON format.

This class is used by logging.Logger to make a string from a record. For details, refer to the official docs for the logging module.

Note

This is an inner class used by the Logger callback; there is no need to use it directly.

__init__()[source]

Initializes the JsonMetricsFormatter

catalyst.dl.utils.scripts.import_experiment_and_runner(expdir: pathlib.Path)[source]
catalyst.dl.utils.scripts.dump_base_experiment_code(src: pathlib.Path, dst: pathlib.Path)[source]
catalyst.dl.utils.torch.process_components(model: torch.nn.modules.module.Module, criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, distributed_params: Dict = None, device: Union[str, torch.device] = None) → Tuple[torch.nn.modules.module.Module, torch.nn.modules.module.Module, torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler._LRScheduler, Union[str, torch.device]][source]

Returns the processed model, criterion, optimizer, scheduler and device

Parameters
  • model (Model) – torch model

  • criterion (Criterion) – criterion function

  • optimizer (Optimizer) – optimizer

  • scheduler (Scheduler) – scheduler

  • distributed_params (dict, optional) – dict with the parameters for distributed and FP16 methods

  • device (Device, optional) – device

catalyst.dl.utils.torch.get_loader(data_source: Iterable[dict], open_fn: Callable, dict_transform: Callable = None, sampler=None, collate_fn: Callable = <function default_collate>, batch_size: int = 32, num_workers: int = 4, shuffle: bool = False, drop_last: bool = False)[source]

Creates a DataLoader from given source and its open/transform params

Parameters
  • data_source (Iterable[dict]) – an iterable containing your data annotations (for example, paths to images, labels, bboxes, etc.)

  • open_fn (Callable) – function that can open your annotations dict and transform it into the data needed by your network (for example, open an image by path, or tokenize a string)

  • dict_transform (callable) – transforms to use on the dict (for example normalize image, add blur, crop/resize, etc.)

  • sampler (Sampler, optional) – defines the strategy to draw samples from the dataset

  • collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset

  • batch_size (int, optional) – how many samples per batch to load

  • num_workers (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process

  • shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).

  • drop_last (bool, optional) – set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)

Returns

DataLoader with catalyst.data.ListDataset
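An illustrative sketch with toy in-memory annotations; the keys, open_fn and values below are hypothetical:

    import torch
    from catalyst.dl.utils.torch import get_loader

    # hypothetical annotations: raw feature vectors and labels
    data = [
        {"features": [0.1, 0.2, 0.3], "label": 0},
        {"features": [0.9, 0.8, 0.7], "label": 1},
    ]

    def open_fn(annotation: dict) -> dict:
        """Turn one annotation dict into a model-ready sample."""
        return {
            "features": torch.tensor(annotation["features"], dtype=torch.float32),
            "targets": annotation["label"],
        }

    loader = get_loader(
        data,
        open_fn=open_fn,
        batch_size=2,
        num_workers=0,  # keep the toy example single-process
        shuffle=True,
    )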

catalyst.dl.utils.trace.trace_model(model: torch.nn.modules.module.Module, runner: catalyst.dl.core.runner.Runner, batch=None, method_name: str = 'forward', mode: str = 'eval', requires_grad: bool = False, opt_level: str = None, device: Union[str, torch.device] = 'cpu', predict_params: dict = None) → torch.jit.ScriptModule[source]

Traces the model using the given runner and batch

Parameters
  • model – Model to trace

  • runner – Model’s native runner that was used to train model

  • batch – Batch to trace the model

  • method_name (str) – Model’s method name that will be used as entrypoint during tracing

  • mode (str) – Mode for model to trace (train or eval)

  • requires_grad (bool) – Flag to use grads

  • opt_level (str) – Apex FP16 init level, optional

  • device (str) – Torch device

  • predict_params (dict) – additional parameters for model forward

Returns

Traced model

Return type

(ScriptModule)

catalyst.dl.utils.trace.get_trace_name(method_name: str, mode: str = 'eval', requires_grad: bool = False, opt_level: str = None, additional_string: str = None)[source]

Creates a file name for the traced model.

Parameters
  • method_name (str) – model’s method name

  • mode (str) – train or eval

  • requires_grad (bool) – flag if model was traced with gradients

  • opt_level (str) – opt_level if model was traced in FP16

  • additional_string (str) – any additional information

catalyst.dl.utils.trace.load_traced_model(model_path: Union[str, pathlib.Path], device: Union[str, torch.device] = 'cpu', opt_level: str = None) → torch.jit.ScriptModule[source]

Loads a traced model

Parameters
  • model_path – Path to traced model

  • device (str) – Torch device

  • opt_level (str) – Apex FP16 init level, optional

Returns

Traced model

Return type

(ScriptModule)

catalyst.dl.utils.visualization.plot_metrics(logdir: Union[str, pathlib.Path], step: Optional[str] = 'epoch', metrics: Optional[List[str]] = None, height: Optional[int] = None, width: Optional[int] = None) → None[source]

Plots your learning results.

Parameters
  • logdir – the logdir that was specified during training.

  • step – ‘batch’ or ‘epoch’; which logs to show: per-batch or per-epoch

  • metrics – list of metrics to plot. The loss should be specified as ‘loss’, the learning rate as ‘_base/lr’, and other metrics by the names they had in the metrics dict specified during training

  • height – the height of the whole resulting plot

  • width – the width of the whole resulting plot
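A hedged usage sketch; the metric names must match those that were logged during training (the ones below are illustrative):

    from catalyst.dl.utils.visualization import plot_metrics

    plot_metrics(
        logdir="./logdir",
        step="epoch",
        metrics=["loss", "_base/lr"],  # names as they appear in the training logs
    )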

Criterion

catalyst.dl.utils.criterion.accuracy.accuracy(outputs, targets, topk=(1, ), threshold: float = None, activation: str = None)[source]

Computes the accuracy.

It can be used either for:
  • multi-class task:

    - you can use topk;
    - threshold and activation are not required;
    - targets is a tensor of shape batch_size;
    - outputs is a tensor of shape batch_size x num_classes;
    - computes the accuracy@k for the specified values of k.

  • OR multi-label task, in this case:

    - you must specify threshold and activation;
    - topk will not be used (there is no way to apply top-k in multi-label classification);
    - outputs and targets are tensors of shape batch_size x num_classes;
    - targets is a tensor with binary vectors.
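A small worked sketch of the multi-class branch; one value is returned per requested k:

    import torch
    from catalyst.dl.utils.criterion.accuracy import accuracy

    # 3 samples, 3 classes; targets are class indices
    outputs = torch.tensor([
        [2.0, 0.5, 0.1],
        [0.2, 1.5, 1.0],
        [0.1, 0.3, 3.0],
    ])
    targets = torch.tensor([0, 2, 2])

    acc1, acc2 = accuracy(outputs, targets, topk=(1, 2))
    # acc1 ≈ 0.667 (2 of 3 correct at top-1), acc2 = 1.0 (all targets within top-2)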

catalyst.dl.utils.criterion.accuracy.average_accuracy(outputs, targets, k=10)[source]

Computes the average accuracy at k. This function computes the average accuracy at k between two lists of items.

Parameters
  • outputs (list) – A list of predicted elements

  • targets (list) – A list of elements that are to be predicted

  • k (int, optional) – The maximum number of predicted elements

Returns

The average accuracy at k over the input lists

Return type

double

catalyst.dl.utils.criterion.accuracy.mean_average_accuracy(outputs, targets, topk=(1, ))[source]

Computes the mean average accuracy at k. This function computes the mean average accuracy at k between two lists of lists of items.

Parameters
  • outputs (list) – A list of lists of predicted elements

  • targets (list) – A list of lists of elements that are to be predicted

  • topk (int, optional) – The maximum number of predicted elements

Returns

The mean average accuracy at k over the input lists

Return type

double

catalyst.dl.utils.criterion.dice.dice(outputs: torch.Tensor, targets: torch.Tensor, eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Computes the dice metric

Parameters
  • outputs (torch.Tensor) – A list of predicted elements

  • targets (list) – A list of elements that are to be predicted

  • eps (float) – epsilon

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [“none”, “Sigmoid”, “Softmax2d”]

Returns

Dice score

Return type

double

catalyst.dl.utils.criterion.f1_score.f1_score(outputs: torch.Tensor, targets: torch.Tensor, beta: float = 1.0, eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Source https://github.com/qubvel/segmentation_models.pytorch

Parameters
  • outputs (torch.Tensor) – A list of predicted elements

  • targets (torch.Tensor) – A list of elements that are to be predicted

  • eps (float) – epsilon to avoid zero division

  • beta (float) – beta param for f_score

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [“none”, “Sigmoid”, “Softmax2d”]

Returns

F_1 score

Return type

float

catalyst.dl.utils.criterion.focal.sigmoid_focal_loss(outputs: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0, alpha: float = 0.25, reduction: str = 'mean')[source]

Compute binary focal loss between target and output logits.

Source: https://github.com/BloodAxe/pytorch-toolbelt. See losses for details.

Parameters
  • outputs – Tensor of arbitrary shape

  • targets – Tensor of the same shape as input

  • reduction (string, optional) – Specifies the reduction to apply to the output: “none” | “mean” | “sum” | “batchwise_mean”. “none”: no reduction will be applied, “mean”: the sum of the output will be divided by the number of elements in the output, “sum”: the output will be summed.

See https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/loss/losses.py
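
A standard formulation of binary focal loss on logits looks roughly like the sketch below; this is a common reference implementation, not necessarily identical to the pytorch-toolbelt code.

    import torch
    import torch.nn.functional as F

    def binary_focal_loss(outputs: torch.Tensor, targets: torch.Tensor,
                          gamma: float = 2.0, alpha: float = 0.25,
                          reduction: str = "mean") -> torch.Tensor:
        """Illustrative binary focal loss on raw logits; targets are float 0/1."""
        # per-element binary cross-entropy on logits
        bce = F.binary_cross_entropy_with_logits(outputs, targets, reduction="none")
        pt = torch.exp(-bce)                  # probability assigned to the true class
        loss = (1.0 - pt) ** gamma * bce      # down-weight well-classified examples
        if alpha is not None:
            # alpha-balance positive vs. negative examples
            loss = (alpha * targets + (1 - alpha) * (1 - targets)) * loss
        if reduction == "mean":
            return loss.mean()
        if reduction == "sum":
            return loss.sum()
        return loss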

catalyst.dl.utils.criterion.focal.reduced_focal_loss(outputs: torch.Tensor, targets: torch.Tensor, threshold: float = 0.5, gamma: float = 2.0, reduction='mean')[source]

Compute reduced focal loss between target and output logits.

Source: https://github.com/BloodAxe/pytorch-toolbelt. See losses for details.

Parameters
  • outputs – Tensor of arbitrary shape

  • targets – Tensor of the same shape as outputs

  • reduction (string, optional) – Specifies the reduction to apply to the output: “none” | “mean” | “sum” | “batchwise_mean”. “none”: no reduction will be applied; “mean”: the sum of the output will be divided by the number of elements in the output; “sum”: the output will be summed; “batchwise_mean”: computes the mean loss per sample in the batch. Default: “mean”

    Note: size_average and reduce are in the process of being deprecated; in the meantime, specifying either of those two args will override reduction.

See https://arxiv.org/abs/1903.01347

catalyst.dl.utils.criterion.iou.iou(outputs: torch.Tensor, targets: torch.Tensor, classes: List[str] = None, eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid') → Union[float, List[float]][source]
Parameters
  • outputs (torch.Tensor) – A tensor of predicted elements

  • targets (torch.Tensor) – A tensor of ground-truth elements

  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [“none”, “Sigmoid”, “Softmax2d”]

Returns

IoU (Jaccard) score(s)

Return type

Union[float, List[float]]
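
IoU reduces to |A∩B| / |A∪B|. A minimal single-class sketch, assuming the activation has already been applied (not the library code):

    import torch

    def iou_score(outputs: torch.Tensor, targets: torch.Tensor,
                  eps: float = 1e-7, threshold: float = None) -> float:
        """Illustrative IoU/Jaccard score; activation is assumed to be applied already."""
        if threshold is not None:
            outputs = (outputs > threshold).float()
        intersection = torch.sum(outputs * targets)
        union = torch.sum(outputs) + torch.sum(targets) - intersection
        # |A ∩ B| / |A ∪ B|; eps guards against empty masks
        return (intersection / (union + eps)).item()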

catalyst.dl.utils.criterion.iou.jaccard(outputs: torch.Tensor, targets: torch.Tensor, classes: List[str] = None, eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid') → Union[float, List[float]]
Parameters
  • outputs (torch.Tensor) – A tensor of predicted elements

  • targets (torch.Tensor) – A tensor of ground-truth elements

  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [“none”, “Sigmoid”, “Softmax2d”]

Returns

IoU (Jaccard) score(s)

Return type

Union[float, List[float]]

Meters

The meters from torchnet.meters

class catalyst.dl.meters.meter.Meter[source]

Bases: object

Meters provide a way to keep track of important statistics in an online manner.

This class is abstract, but provides a standard interface for all meters to follow.

add(value)[source]

Log a new value to the meter

Parameters

value – Next result to include.

reset()[source]

Resets the meter to default settings.

value()[source]

Get the value of the meter in the current state.
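
Because the interface is just add/reset/value, a custom meter only needs to implement those three methods. The MaxValueMeter below is hypothetical and exists only to illustrate the contract:

    from catalyst.dl.meters.meter import Meter

    class MaxValueMeter(Meter):
        """Hypothetical meter tracking the maximum value observed so far."""

        def __init__(self):
            super().__init__()
            self.reset()

        def add(self, value):
            # keep the running maximum
            self.max_value = value if self.max_value is None else max(self.max_value, value)

        def reset(self):
            self.max_value = None

        def value(self):
            return self.max_value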

class catalyst.dl.meters.apmeter.APMeter[source]

Bases: catalyst.dl.meters.meter.Meter

The APMeter measures the average precision per class.

The APMeter is designed to operate on NxK Tensors output and target, and optionally a Nx1 Tensor weight where (1) the output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); (2) the target contains only values 0 (for negative examples) and 1 (for positive examples); and (3) the weight ( > 0) represents weight for each sample.

add(output, target, weight=None)[source]

Add a new observation

Parameters
  • output (Tensor) – NxK tensor that for each of the N examples indicates the probability of the example belonging to each of the K classes, according to the model. The probabilities should sum to one over all classes

  • target (Tensor) – binary NxK tensor that encodes which of the K classes are associated with the N-th input (e.g., a row [0, 1, 0, 1] indicates that the example is associated with classes 2 and 4)

  • weight (optional, Tensor) – Nx1 tensor representing the weight for each example (each weight > 0)

reset()[source]

Resets the meter with empty member variables

value()[source]

Returns the model's average precision for each class

Returns

1xK tensor, with avg precision for each class k

Return type

ap (FloatTensor)
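
A typical usage sketch; the random tensors below are placeholders for real model outputs and labels:

    import torch
    from catalyst.dl.meters.apmeter import APMeter

    meter = APMeter()
    scores = torch.softmax(torch.randn(8, 4), dim=1)   # N=8 examples, K=4 class probabilities
    labels = torch.randint(0, 2, (8, 4)).float()       # binary NxK targets
    meter.add(scores, labels)
    per_class_ap = meter.value()                       # 1xK tensor of per-class average precision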

class catalyst.dl.meters.aucmeter.AUCMeter[source]

Bases: catalyst.dl.meters.meter.Meter

The AUCMeter measures the area under the receiver-operating characteristic (ROC) curve for binary classification problems. The area under the curve (AUC) can be interpreted as the probability that, given a randomly selected positive example and a randomly selected negative example, the positive example is assigned a higher score by the classification model than the negative example.

The AUCMeter is designed to operate on one-dimensional Tensors output and target, where (1) the output contains model output scores that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); and (2) the target contains only values 0 (for negative examples) and 1 (for positive examples).

add(output, target)[source]
reset()[source]
value()[source]
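
Usage follows the same add/value pattern. In the torchnet convention value() returns the area together with the TPR/FPR curves; treat the exact return shape below as an assumption.

    import torch
    from catalyst.dl.meters.aucmeter import AUCMeter

    meter = AUCMeter()
    scores = torch.sigmoid(torch.randn(100))       # 1D model scores
    labels = (torch.rand(100) > 0.5).long()        # 1D binary targets
    meter.add(scores, labels)
    result = meter.value()                         # torchnet convention: (auc, tpr, fpr)
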
class catalyst.dl.meters.averagevaluemeter.AverageValueMeter[source]

Bases: catalyst.dl.meters.meter.Meter

add(value, n=1)[source]
reset()[source]
value()[source]
class catalyst.dl.meters.classerrormeter.ClassErrorMeter(topk=[1], accuracy=False)[source]

Bases: catalyst.dl.meters.meter.Meter

add(output, target)[source]
reset()[source]
value(k=-1)[source]
class catalyst.dl.meters.confusionmeter.ConfusionMeter(k, normalized=False)[source]

Bases: catalyst.dl.meters.meter.Meter

Maintains a confusion matrix for a given classification problem.

The ConfusionMeter constructs a confusion matrix for a multi-class classification problem. It does not support multi-label, multi-class problems: for such problems, please use MultiLabelConfusionMeter.

Parameters
  • k (int) – number of classes in the classification problem

  • normalized (boolean) – Determines whether the confusion matrix is normalized

add(predicted, target)[source]

Computes the confusion matrix of size K x K, where K is the number of classes

Parameters
  • predicted (tensor) – Can be an N x K tensor of predicted scores obtained from the model for N examples and K classes, or an N-tensor of integer values between 0 and K-1

  • target (tensor) – Can be an N-tensor of integer values between 0 and K-1, or an N x K tensor, where targets are provided as one-hot vectors

reset()[source]
value()[source]
Returns

Confusion matrix of K rows and K columns, where rows correspond to ground-truth targets and columns correspond to predicted targets.
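
A usage sketch; the random tensors below stand in for real predictions and labels:

    import torch
    from catalyst.dl.meters.confusionmeter import ConfusionMeter

    meter = ConfusionMeter(k=3, normalized=False)
    logits = torch.randn(16, 3)            # N x K predicted scores
    labels = torch.randint(0, 3, (16,))    # N integer class targets
    meter.add(logits, labels)
    conf_matrix = meter.value()            # K x K: rows = ground truth, cols = predictions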

class catalyst.dl.meters.mapmeter.mAPMeter[source]

Bases: catalyst.dl.meters.meter.Meter

The mAPMeter measures the mean average precision over all classes.

The mAPMeter is designed to operate on NxK Tensors output and target, and optionally a Nx1 Tensor weight where (1) the output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); (2) the target contains only values 0 (for negative examples) and 1 (for positive examples); and (3) the weight ( > 0) represents weight for each sample.

add(output, target, weight=None)[source]
reset()[source]
value()[source]
class catalyst.dl.meters.movingaveragevaluemeter.MovingAverageValueMeter(windowsize)[source]

Bases: catalyst.dl.meters.meter.Meter

add(value)[source]
reset()[source]
value()[source]
class catalyst.dl.meters.msemeter.MSEMeter(root=False)[source]

Bases: catalyst.dl.meters.meter.Meter

add(output, target)[source]
reset()[source]
value()[source]