Core

class catalyst.core.callback.CallbackOrder[source]

Bases: enum.IntFlag

An enumeration that defines the order in which callbacks are executed.

Criterion = 20
External = 100
Internal = 0
Metric = 80
Optimizer = 40
Other = 200
Scheduler = 60
Unknown = -100
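
Callbacks are sorted by these values, so lower orders run earlier within each event. A minimal sketch of that behavior using the standard-library enum (IntEnum is used here purely for illustration; catalyst itself bases the class on IntFlag):

```python
from enum import IntEnum

class CallbackOrder(IntEnum):
    # mirrors the values documented above
    Unknown = -100
    Internal = 0
    Criterion = 20
    Optimizer = 40
    Scheduler = 60
    Metric = 80
    External = 100
    Other = 200

# lower values run first: criterion -> optimizer -> scheduler -> metric
orders = [CallbackOrder.Metric, CallbackOrder.Criterion, CallbackOrder.Optimizer]
print([o.name for o in sorted(orders)])  # ['Criterion', 'Optimizer', 'Metric']
```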
class catalyst.core.callback.LoggerCallback(order: int = None)[source]

Bases: catalyst.core.callback.Callback

Loggers are executed on start before all callbacks, and on end after all callbacks.

class catalyst.core.callback.MetricCallback(prefix: str, metric_fn: Callable, input_key: str = 'targets', output_key: str = 'logits', **metric_params)[source]

Bases: catalyst.core.callback.Callback

A callback that computes a single metric on state.on_batch_end

on_batch_end(state: catalyst.core.state._State)[source]
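
Conceptually, on each batch end the callback applies metric_fn to the values selected by output_key and input_key and records the result under prefix. A self-contained sketch of that flow with plain dicts (the exact-match metric below is hypothetical, standing in for a real metric function):

```python
from typing import Callable, Dict

def metric_on_batch_end(
    output: Dict, input_: Dict, metric_fn: Callable,
    input_key: str = "targets", output_key: str = "logits",
    prefix: str = "accuracy",
) -> Dict[str, float]:
    # apply the metric to the selected output/input values
    value = metric_fn(output[output_key], input_[input_key])
    # the result is recorded in the batch metrics under `prefix`
    return {prefix: value}

# hypothetical metric: fraction of exact matches
exact_match = lambda preds, targets: sum(
    p == t for p, t in zip(preds, targets)) / len(targets)

metrics = metric_on_batch_end(
    {"logits": [1, 0, 1, 0]}, {"targets": [1, 1, 1, 0]}, exact_match)
print(metrics)  # {'accuracy': 0.75}
```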
class catalyst.core.callback.MultiMetricCallback(prefix: str, metric_fn: Callable, list_args: List, input_key: str = 'targets', output_key: str = 'logits', **metric_params)[source]

Bases: catalyst.core.callback.Callback

A callback that computes multiple metrics on state.on_batch_end

on_batch_end(state: catalyst.core.state._State)[source]
catalyst.core.registry.Callback(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More factory instances

  • name – Name for the first factory. Use only when passing a single factory.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Criterion(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More factory instances

  • name – Name for the first factory. Use only when passing a single factory.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Optimizer(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More factory instances

  • name – Name for the first factory. Use only when passing a single factory.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Scheduler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More factory instances

  • name – Name for the first factory. Use only when passing a single factory.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Module(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More factory instances

  • name – Name for the first factory. Use only when passing a single factory.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Model(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More factory instances

  • name – Name for the first factory. Use only when passing a single factory.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Sampler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More factory instances

  • name – Name for the first factory. Use only when passing a single factory.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Transform(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More factory instances

  • name – Name for the first factory. Use only when passing a single factory.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

Callbacks

class catalyst.core.callbacks.checkpoint.CheckpointCallback(save_n_best: int = 1, resume: str = None, resume_dir: str = None, metric_filename: str = '_metrics.json')[source]

Bases: catalyst.core.callbacks.checkpoint.BaseCheckpointCallback

Checkpoint callback to save/restore your model/criterion/optimizer/metrics.

__init__(save_n_best: int = 1, resume: str = None, resume_dir: str = None, metric_filename: str = '_metrics.json')[source]
Parameters
  • save_n_best (int) – number of best checkpoints to keep

  • resume (str) – path to checkpoint to load and initialize runner state

  • metric_filename (str) – filename to save metrics in the checkpoint folder. Must end with .json or .yml
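
The save_n_best bookkeeping keeps the checkpoints whose main metric ranks best and removes the rest. A minimal sketch of that truncation logic (a hypothetical helper, not catalyst's implementation):

```python
def truncate_to_n_best(checkpoints, save_n_best=1, minimize_metric=True):
    """checkpoints: list of (metric_value, filename) pairs.
    Returns the pairs to keep and the pairs to delete."""
    ordered = sorted(checkpoints, key=lambda mc: mc[0],
                     reverse=not minimize_metric)
    return ordered[:save_n_best], ordered[save_n_best:]

history = [(0.31, "train.3.pth"), (0.25, "train.5.pth"), (0.40, "train.1.pth")]
keep, drop = truncate_to_n_best(history, save_n_best=2, minimize_metric=True)
print(keep)  # [(0.25, 'train.5.pth'), (0.31, 'train.3.pth')]
print(drop)  # [(0.4, 'train.1.pth')]
```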

get_checkpoint_suffix(checkpoint: dict) → str[source]
get_metric(last_valid_metrics) → Dict[source]
static load_checkpoint(*, filename, state: catalyst.core.state._State)[source]
on_epoch_end(state: catalyst.core.state._State)[source]
on_stage_end(state: catalyst.core.state._State)[source]
on_stage_start(state: catalyst.core.state._State)[source]
process_checkpoint(logdir: str, checkpoint: Dict, is_best: bool, main_metric: str = 'loss', minimize_metric: bool = True)[source]
truncate_checkpoints(minimize_metric: bool) → None[source]
class catalyst.core.callbacks.checkpoint.IterationCheckpointCallback(save_n_last: int = 3, period: int = 100, stage_restart: bool = True, metric_filename: str = '_metrics_iter.json')[source]

Bases: catalyst.core.callbacks.checkpoint.BaseCheckpointCallback

Iteration checkpoint callback to save your model/criterion/optimizer

__init__(save_n_last: int = 3, period: int = 100, stage_restart: bool = True, metric_filename: str = '_metrics_iter.json')[source]
Parameters
  • save_n_last (int) – number of last checkpoints to keep

  • period (int) – save a checkpoint every period batches

  • stage_restart (bool) – whether to restart the counter every stage

  • metric_filename (str) – filename to save metrics in the checkpoint folder. Must end with .json or .yml

get_checkpoint_suffix(checkpoint: dict) → str[source]
get_metric(**kwargs) → Dict[source]
on_batch_end(state: catalyst.core.state._State)[source]
on_stage_start(state: catalyst.core.state._State)[source]
process_checkpoint(logdir: str, checkpoint: Dict, batch_values: Dict[str, float])[source]
truncate_checkpoints(**kwargs) → None[source]
class catalyst.core.callbacks.criterion.CriterionCallback(input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', prefix: str = 'loss', criterion_key: str = None, multiplier: float = 1.0)[source]

Bases: catalyst.core.callback.Callback

Callback that measures the loss with the specified criterion.

__init__(input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', prefix: str = 'loss', criterion_key: str = None, multiplier: float = 1.0)[source]
Parameters
  • input_key (Union[str, List[str], Dict[str, str]]) – key/list/dict of keys to take values from the input dictionary. If None, the whole input will be passed to the criterion.

  • output_key (Union[str, List[str], Dict[str, str]]) – key/list/dict of keys to take values from the output dictionary. If None, the whole output will be passed to the criterion.

  • prefix (str) – prefix for metrics and the output key for the loss in the state.loss dictionary

  • criterion_key (str) – key to select a criterion when there are several of them stored in a dictionary.

  • multiplier (float) – scale factor for the output loss.
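
On each batch end, the callback applies the criterion to the selected output and input values, scales the result by multiplier, and stores it in state.loss under prefix. A self-contained sketch of that step (the plain-Python mean-squared-error function below is hypothetical, standing in for a real torch criterion):

```python
def criterion_on_batch_end(output, input_, criterion,
                           input_key="targets", output_key="logits",
                           prefix="loss", multiplier=1.0):
    # compute the loss on the selected keys and scale it
    value = multiplier * criterion(output[output_key], input_[input_key])
    # the value is stored in state.loss under `prefix`
    return {prefix: value}

# hypothetical criterion: plain-python mean squared error
mse = lambda preds, targets: sum(
    (p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

loss = criterion_on_batch_end(
    {"logits": [0.0, 1.0]}, {"targets": [0.0, 0.0]}, mse, multiplier=2.0)
print(loss)  # {'loss': 1.0}
```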

on_batch_end(state: catalyst.core.state._State)[source]

Computes the loss and adds it to the metrics

on_stage_start(state: catalyst.core.state._State)[source]

Checks that the current stage has a correct criterion

class catalyst.core.callbacks.criterion.CriterionOutputOnlyCallback(output_key: Union[Dict[str, str], List[str]], **kwargs)[source]

Bases: catalyst.core.callbacks.criterion.CriterionCallback

Callback that measures the loss with the specified criterion, based on model output only. @TODO: merge logic with CriterionCallback.

__init__(output_key: Union[Dict[str, str], List[str]], **kwargs)[source]
Parameters
  • output_key (Union[List[str], Dict[str, str]]) – dict or list of keys to take values from the output dictionary. If None, the whole output will be passed to the criterion.

  • **kwargs – CriterionCallback init parameters

class catalyst.core.callbacks.criterion.CriterionAggregatorCallback(prefix: str, loss_keys: Union[str, List[str], Dict[str, float]] = None, loss_aggregate_fn: str = 'sum', multiplier: float = 1.0)[source]

Bases: catalyst.core.callback.Callback

This callback allows you to aggregate the values of the loss (with different aggregation strategies) and put the value back into state.loss.

__init__(prefix: str, loss_keys: Union[str, List[str], Dict[str, float]] = None, loss_aggregate_fn: str = 'sum', multiplier: float = 1.0) → None[source]
Parameters
  • prefix (str) – new key for aggregated loss.

  • loss_keys (Union[str, List[str], Dict[str, float]]) – if not empty, aggregates only the loss values under these keys. For weighted_sum aggregation it must be a Dict[str, float].

  • loss_aggregate_fn (str) – function for aggregation. Must be either sum, mean or weighted_sum.

  • multiplier (float) – scale factor for the aggregated loss.
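
The three aggregation strategies reduce several named losses to one value. A minimal sketch of how sum, mean, and weighted_sum differ (a toy function, not catalyst's implementation):

```python
def aggregate_losses(losses, loss_keys=None,
                     loss_aggregate_fn="sum", multiplier=1.0):
    """losses: dict of name -> loss value.
    For weighted_sum, loss_keys must map name -> weight."""
    if loss_aggregate_fn == "weighted_sum":
        value = sum(losses[k] * w for k, w in loss_keys.items())
    else:
        keys = loss_keys or list(losses)
        values = [losses[k] for k in keys]
        value = sum(values)
        if loss_aggregate_fn == "mean":
            value /= len(values)
    return multiplier * value

losses = {"ce": 0.5, "dice": 0.25}
print(aggregate_losses(losses, loss_aggregate_fn="sum"))   # 0.75
print(aggregate_losses(losses, loss_aggregate_fn="mean"))  # 0.375
print(aggregate_losses(losses, {"ce": 2.0, "dice": 4.0},
                       loss_aggregate_fn="weighted_sum"))  # 2.0
```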

on_batch_end(state: catalyst.core.state._State) → None[source]

Computes the loss and adds it to the metrics

class catalyst.core.callbacks.formatters.MetricsFormatter(message_prefix)[source]

Bases: abc.ABC, logging.Formatter

Abstract metrics formatter

__init__(message_prefix)[source]
Parameters

message_prefix – logging format string that will be prepended to message

format(record: logging.LogRecord)[source]

Format message string

class catalyst.core.callbacks.formatters.TxtMetricsFormatter[source]

Bases: catalyst.core.callbacks.formatters.MetricsFormatter

Translates batch metrics into a human-readable format.

This class is used by logging.Logger to make a string from record. For details refer to official docs for ‘logging’ module.

Note

This is an inner class used by the Logger callback; there is no need to use it directly.

__init__()[source]

Initializes the TxtMetricsFormatter

class catalyst.core.callbacks.formatters.JsonMetricsFormatter[source]

Bases: catalyst.core.callbacks.formatters.MetricsFormatter

Translates batch metrics into JSON format.

This class is used by logging.Logger to make a string from record. For details refer to official docs for ‘logging’ module.

Note

This is an inner class used by the Logger callback; there is no need to use it directly.

__init__()[source]

Initializes the JsonMetricsFormatter

class catalyst.core.callbacks.logging.ConsoleLogger[source]

Bases: catalyst.core.callback.LoggerCallback

Logger callback, translates state.metric_manager to console and text file

__init__()[source]

Init ConsoleLogger

on_epoch_end(state)[source]

Translate state.metric_manager to console and text file at the end of an epoch

on_stage_end(state)[source]

Called at the end of each stage

on_stage_start(state: catalyst.core.state._State)[source]

Prepare state.logdir for the current stage

class catalyst.core.callbacks.logging.TelegramLogger(token: str = None, chat_id: str = None, metric_names: List[str] = None, log_on_stage_start: bool = True, log_on_loader_start: bool = True, log_on_loader_end: bool = True, log_on_stage_end: bool = True, log_on_exception: bool = True)[source]

Bases: catalyst.core.callback.LoggerCallback

Logger callback, translates state.metric_manager to a Telegram channel

__init__(token: str = None, chat_id: str = None, metric_names: List[str] = None, log_on_stage_start: bool = True, log_on_loader_start: bool = True, log_on_loader_end: bool = True, log_on_stage_end: bool = True, log_on_exception: bool = True)[source]
Parameters
  • token (str) – telegram bot’s token, see https://core.telegram.org/bots

  • chat_id (str) – Chat unique identifier

  • metric_names – List of metric names to log. If None, logs everything.

  • log_on_stage_start (bool) – send notification on stage start

  • log_on_loader_start (bool) – send notification on loader start

  • log_on_loader_end (bool) – send notification on loader end

  • log_on_stage_end (bool) – send notification on stage end

  • log_on_exception (bool) – send notification on exception

on_exception(state: catalyst.core.state._State)[source]

Notify about raised Exception

on_loader_end(state: catalyst.core.state._State)[source]

Translate state.metric_manager to telegram channel

on_loader_start(state: catalyst.core.state._State)[source]

Notify about starting running the new loader

on_stage_end(state: catalyst.core.state._State)[source]

Notify about finishing a stage

on_stage_start(state: catalyst.core.state._State)[source]

Notify about starting a new stage

class catalyst.core.callbacks.logging.TensorboardLogger(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True)[source]

Bases: catalyst.core.callback.LoggerCallback

Logger callback, translates state.metric_manager to tensorboard

__init__(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True)[source]
Parameters
  • metric_names (List[str]) – list of metric names to log. If None, logs everything.

  • log_on_batch_end (bool) – logs per-batch metrics if set True

  • log_on_epoch_end (bool) – logs per-epoch metrics if set True

on_batch_end(state: catalyst.core.state._State)[source]

Translate batch metrics to tensorboard

on_loader_end(state: catalyst.core.state._State)[source]

Translate epoch metrics to tensorboard

on_loader_start(state)[source]

Prepare tensorboard writers for the current stage

on_stage_end(state: catalyst.core.state._State)[source]

Close opened tensorboard writers

class catalyst.core.callbacks.logging.VerboseLogger(always_show: List[str] = None, never_show: List[str] = None)[source]

Bases: catalyst.core.callback.LoggerCallback

Logs the params into console

__init__(always_show: List[str] = None, never_show: List[str] = None)[source]
Parameters
  • always_show (List[str]) – list of metrics to always show. If None, the default is ["_timers/_fps"]; set it to an empty list [] to remove the always-show metrics.

  • never_show (List[str]) – list of metrics which will not be shown

on_batch_end(state: catalyst.core.state._State)[source]

Update tqdm progress bar at the end of each batch

on_exception(state: catalyst.core.state._State)[source]

Called if an Exception was raised

on_loader_end(state: catalyst.core.state._State)[source]

Cleanup and close tqdm progress bar

on_loader_start(state: catalyst.core.state._State)[source]

Init tqdm progress bar

class catalyst.core.callbacks.optimizer.OptimizerCallback(grad_clip_params: Dict = None, accumulation_steps: int = 1, optimizer_key: str = None, loss_key: str = 'loss', decouple_weight_decay: bool = True, save_model_grads: bool = False)[source]

Bases: catalyst.core.callback.Callback

Optimizer callback, abstraction over optimizer step.

__init__(grad_clip_params: Dict = None, accumulation_steps: int = 1, optimizer_key: str = None, loss_key: str = 'loss', decouple_weight_decay: bool = True, save_model_grads: bool = False)[source]
Parameters
  • grad_clip_params (dict) – params for gradient clipping

  • accumulation_steps (int) – number of steps before model.zero_grad()

  • optimizer_key (str) – key to select an optimizer when there are several of them stored in a dictionary.

  • loss_key (str) – key to get the loss from state.loss

  • decouple_weight_decay (bool) – if True, decouple weight decay regularization.

  • save_model_grads (bool) – if True, state.model_grads will contain the gradients calculated during backpropagation on the current batch
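
With accumulation_steps > 1, gradients from several batches are accumulated before a single optimizer step and zero_grad. A sketch of that schedule as a plain event trace (not catalyst's implementation):

```python
def optimizer_event_trace(num_batches, accumulation_steps=1):
    """Trace which events fire on each batch: backward always runs;
    step and zero_grad fire only every `accumulation_steps` batches."""
    trace = []
    for batch in range(1, num_batches + 1):
        events = ["backward"]
        if batch % accumulation_steps == 0:
            events.append("step+zero_grad")
        trace.append((batch, events))
    return trace

trace = optimizer_event_trace(4, accumulation_steps=2)
for batch, events in trace:
    print(batch, events)
# 1 ['backward']
# 2 ['backward', 'step+zero_grad']
# 3 ['backward']
# 4 ['backward', 'step+zero_grad']
```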

static grad_step(*, optimizer: torch.optim.optimizer.Optimizer, optimizer_wds: List[float] = 0, grad_clip_fn: Callable = None)[source]

Makes a gradient step for a given optimizer

Parameters
  • optimizer (Optimizer) – the optimizer

  • optimizer_wds (List[float]) – list of weight decay parameters for each param group

  • grad_clip_fn (Callable) – function for gradient clipping

on_batch_end(state: catalyst.core.state._State)[source]

On batch end event

on_batch_start(state: catalyst.core.state._State)[source]

On batch start event

on_epoch_end(state: catalyst.core.state._State)[source]

On epoch end event

on_epoch_start(state: catalyst.core.state._State)[source]

On epoch start event

on_stage_start(state: catalyst.core.state._State)[source]

On stage start event

class catalyst.core.callbacks.phase.PhaseManagerCallback(train_phases: OrderedDict[str, int] = None, valid_phases: OrderedDict[str, int] = None, valid_mode: str = None)[source]

Bases: catalyst.core.callback.Callback

PhaseManagerCallback updates state.phase

VALIDATION_MODE_ALL = 'all'
VALIDATION_MODE_SAME = 'same'
allowed_valid_modes = ['same', 'all']
on_batch_end(state)[source]
on_batch_start(state)[source]
class catalyst.core.callbacks.scheduler.SchedulerCallback(scheduler_key: str = None, mode: str = None, reduce_metric: str = 'loss')[source]

Bases: catalyst.core.callback.Callback

on_batch_end(state: catalyst.core.state._State)[source]
on_epoch_end(state: catalyst.core.state._State)[source]
on_loader_start(state: catalyst.core.state._State)[source]
on_stage_start(state: catalyst.core.state._State)[source]
step(state: catalyst.core.state._State)[source]
class catalyst.core.callbacks.scheduler.LRUpdater(optimizer_key: str = None)[source]

Bases: catalyst.core.callback.Callback

Base class that all LR updaters inherit from

__init__(optimizer_key: str = None)[source]
Parameters

optimizer_key – which optimizer key to use for learning rate scheduling

calc_lr()[source]
calc_momentum()[source]
on_batch_end(state: catalyst.core.state._State)[source]
on_loader_start(state: catalyst.core.state._State)[source]
on_stage_start(state: catalyst.core.state._State)[source]
update_optimizer(state: catalyst.core.state._State)[source]
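
Subclasses implement calc_lr() (and optionally calc_momentum()) to produce the value that update_optimizer() applies on each batch. A hypothetical linear-warmup updater as a sketch of that pattern (not a class shipped with catalyst):

```python
class LinearWarmupUpdater:
    """Hypothetical LRUpdater-style scheduler: the learning rate ramps
    linearly from 0 up to base_lr over `warmup_steps` batches."""
    def __init__(self, base_lr=0.1, warmup_steps=4):
        self.base_lr = base_lr
        self.warmup_steps = warmup_steps
        self.step = 0

    def calc_lr(self):
        # called once per batch; returns the LR to apply to the optimizer
        self.step += 1
        return self.base_lr * min(1.0, self.step / self.warmup_steps)

updater = LinearWarmupUpdater(base_lr=0.1, warmup_steps=4)
lrs = [round(updater.calc_lr(), 3) for _ in range(5)]
print(lrs)  # [0.025, 0.05, 0.075, 0.1, 0.1]
```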
class catalyst.core.callbacks.wrappers.PhaseWrapperCallback(base_callback: catalyst.core.callback.Callback, active_phases: List[str] = None, inactive_phases: List[str] = None)[source]

Bases: catalyst.core.callback.Callback

CallbackWrapper that disables/enables handlers depending on the current phase and event type.

May be useful, e.g., to disable/enable optimizers and losses.

LEVEL_BATCH = 'batch'
LEVEL_EPOCH = 'epoch'
LEVEL_LOADER = 'loader'
LEVEL_STAGE = 'stage'
TIME_END = 'end'
TIME_START = 'start'
is_active_on_phase(phase, level, time)[source]
on_batch_end(state: catalyst.core.state._State)[source]
on_batch_start(state: catalyst.core.state._State)[source]
on_epoch_end(state: catalyst.core.state._State)[source]
on_epoch_start(state: catalyst.core.state._State)[source]
on_exception(state: catalyst.core.state._State)[source]
on_loader_end(state: catalyst.core.state._State)[source]
on_loader_start(state: catalyst.core.state._State)[source]
on_stage_end(state: catalyst.core.state._State)[source]
on_stage_start(state: catalyst.core.state._State)[source]
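
The wrapper consults its active/inactive phase lists to decide whether to forward an event to the wrapped callback. A sketch of that decision rule, assuming the common convention that an explicit active list wins and an empty configuration means always active (catalyst's exact precedence may differ):

```python
def is_active_on_phase(phase, active_phases=None, inactive_phases=None):
    # explicit allow-list: fire only on the listed phases
    if active_phases is not None:
        return phase in active_phases
    # explicit deny-list: fire on everything except the listed phases
    if inactive_phases is not None:
        return phase not in inactive_phases
    # no configuration: always active
    return True

print(is_active_on_phase("generator", active_phases=["generator"]))      # True
print(is_active_on_phase("discriminator", active_phases=["generator"]))  # False
print(is_active_on_phase("generator", inactive_phases=["generator"]))    # False
```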
class catalyst.core.callbacks.wrappers.PhaseBatchWrapperCallback(base_callback: catalyst.core.callback.Callback, active_phases: List[str] = None, inactive_phases: List[str] = None)[source]

Bases: catalyst.core.callbacks.wrappers.PhaseWrapperCallback

is_active_on_phase(phase, level, time)[source]