Core

class catalyst.core.experiment._Experiment[source]

Bases: abc.ABC

Object containing all information required to run an experiment.

Abstract; see Catalyst's concrete implementations.

abstract property distributed_params

Dict with the parameters for distributed training and the FP16 method

abstract get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Returns the callbacks for a given stage

abstract get_criterion(stage: str) → torch.nn.modules.module.Module[source]

Returns the criterion for a given stage

get_datasets(stage: str, epoch: int = None, **kwargs) → OrderedDict[str, Dataset][source]

Returns the datasets for a given stage and kwargs

get_experiment_components(model: torch.nn.modules.module.Module, stage: str) → Tuple[torch.nn.modules.module.Module, torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler._LRScheduler][source]

Returns the tuple of criterion, optimizer, and scheduler for a given model and stage.

abstract get_loaders(stage: str, epoch: int = None) → OrderedDict[str, DataLoader][source]

Returns the loaders for a given stage

abstract get_model(stage: str) → torch.nn.modules.module.Module[source]

Returns the model for a given stage

abstract get_optimizer(stage: str, model: torch.nn.modules.module.Module) → torch.optim.optimizer.Optimizer[source]

Returns the optimizer for a given stage

abstract get_scheduler(stage: str, optimizer: torch.optim.optimizer.Optimizer) → torch.optim.lr_scheduler._LRScheduler[source]

Returns the scheduler for a given stage

abstract get_state_params(stage: str) → Mapping[str, Any][source]

Returns the state parameters for a given stage

get_transforms(stage: str = None, dataset: str = None)[source]

Returns the data transforms for a given stage and dataset

abstract property initial_seed

Experiment’s initial seed value

abstract property logdir

Path to the directory where the experiment stores its logs

abstract property monitoring_params

Dict with the parameters for monitoring services

abstract property stages

Experiment’s stage names
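For orientation, a minimal sketch of a concrete experiment with a single "train" stage. The dataset and sizes are illustrative stand-ins, and the remaining abstract members are omitted:

from collections import OrderedDict

import torch
from torch.utils.data import DataLoader, TensorDataset

from catalyst.core.experiment import _Experiment


class MyExperiment(_Experiment):
    # sketch only: not every abstract member is implemented here

    @property
    def stages(self):
        return ["train"]

    def get_model(self, stage: str):
        return torch.nn.Linear(28 * 28, 10)

    def get_criterion(self, stage: str):
        return torch.nn.CrossEntropyLoss()

    def get_optimizer(self, stage: str, model):
        return torch.optim.Adam(model.parameters(), lr=1e-3)

    def get_loaders(self, stage: str, epoch: int = None):
        # random tensors standing in for a real dataset
        data = TensorDataset(torch.rand(64, 28 * 28), torch.randint(0, 10, (64,)))
        return OrderedDict(train=DataLoader(data, batch_size=32))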

class catalyst.core.runner._Runner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]

Bases: abc.ABC

Abstract base class that all runners inherit from

__init__(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]
Parameters
  • model (Model) – Torch model object

  • device (Device) – Torch device

property device

Returns the runner’s device instance

abstract forward(batch: Mapping[str, Any], **kwargs) → Mapping[str, Any][source]

Forward method for your Runner

Parameters
  • batch – Key-value batch items

  • **kwargs – kwargs to pass to the model

property model

Returns the runner’s model instance

predict_batch(batch: Mapping[str, Any], **kwargs) → Mapping[str, Any][source]

Runs the model on a batch of elements.

WARNING: you should not override this method. If you need a custom model call, override the forward() method instead.

Parameters
  • batch – Key-value batch items

  • **kwargs – kwargs to pass to the model

Returns

model output key-value

run_experiment(experiment: catalyst.core.experiment._Experiment)[source]

Starts the experiment
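The intended override point is forward(); predict_batch() wraps it and should stay untouched. A sketch, where the "images" key and the runner name are illustrative assumptions:

from typing import Any, Mapping

from catalyst.core.runner import _Runner


class MyRunner(_Runner):
    def forward(self, batch: Mapping[str, Any], **kwargs) -> Mapping[str, Any]:
        # unpack the key-value batch, run the model, repack the output
        logits = self.model(batch["images"], **kwargs)
        return {"logits": logits}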

class catalyst.core.state.State(*, device: Union[str, torch.device] = None, model: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]] = None, criterion: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]] = None, optimizer: Union[torch.optim.optimizer.Optimizer, Dict[str, torch.optim.optimizer.Optimizer]] = None, scheduler: Union[torch.optim.lr_scheduler._LRScheduler, Dict[str, torch.optim.lr_scheduler._LRScheduler]] = None, callbacks: Dict[str, Callback] = None, logdir: str = None, stage: str = 'infer', num_epochs: int = None, main_metric: str = 'loss', minimize_metric: bool = True, valid_loader: str = 'valid', checkpoint_data: Dict = None, is_check_run: bool = False, **kwargs)[source]

Bases: catalyst.utils.tools.frozen_class.FrozenClass

Object containing all information about the current state of the experiment.

state.loaders - ordered dictionary with torch.DataLoaders
  • “train” prefix is used for training loaders (metrics computations, backward pass, optimization)

  • “valid” prefix is used for validation loaders - metrics only

  • “infer” prefix is used for inference loaders - dataset prediction

state.loaders = {
    "train": MnistTrainLoader(),
    "valid": MnistValidLoader()
}
state.model - an instance of torch.nn.Module class

should implement forward method

state.model = torch.nn.Linear(10, 10)
state.criterion - an instance of torch.nn.Module class or torch.nn.modules.loss._Loss

should implement forward method

state.criterion = torch.nn.CrossEntropyLoss()
state.optimizer - an instance of torch.optim.optimizer.Optimizer

should implement step method

state.optimizer = torch.optim.Adam(model.parameters())
state.scheduler - an instance of torch.optim.lr_scheduler._LRScheduler

should implement step method

state.scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
state.device - an instance of torch.device (CPU, GPU, TPU)
state.device = torch.device("cpu")
state.callbacks - ordered dictionary with Catalyst.Callback instances
state.callbacks = {
    "accuracy": AccuracyCallback(),
    "criterion": CriterionCallback(),
    "optim": OptimizerCallback(),
    "saver": CheckpointCallback()
}
state.batch_in - dictionary, containing current batch of data from DataLoader
state.batch_in = {
    "images": np.ndarray(batch_size, c, h, w),
    "targets": np.ndarray(batch_size, 1),
}
state.batch_out - dictionary, containing model output based on current batch
state.batch_out = {"logits": torch.Tensor(batch_size, num_classes)}
state.batch_metrics - dictionary, flatten storage for batch metrics
state.batch_metrics = {"loss": ..., "accuracy": ..., "iou": ...}
state.loader_metrics - dictionary with aggregated batch statistics for loader (mean over all batches) and global loader metrics, like AUC
state.loader_metrics = {"loss": ..., "accuracy": ..., "auc": ...}
state.epoch_metrics - dictionary with summarized metrics for different loaders and global epoch metrics, like lr, momentum
state.epoch_metrics = {
    "train_loss": ..., "train_auc": ..., "valid_loss": ...,
    "lr": ..., "momentum": ...,
}
state.is_best_valid - bool, indicator flag
  • True if this training epoch is best over all epochs

  • False if not

state.valid_metrics - dictionary with validation metrics for the current epoch

just a subdictionary of epoch_metrics

state.valid_metrics = {"loss": ..., "accuracy": ..., "auc": ...}

state.best_valid_metrics - dictionary with best validation metrics during whole training process

state.distributed_rank

state.is_distributed_worker

state.stage_name

state.epoch

state.num_epochs

state.loader_name

state.loader_step

state.loader_len

state.batch_size

state.global_step

state.global_epoch

state.main_metric

state.minimize_metric

state.valid_loader

state.logdir - path to logging directory to save

all logs, metrics, checkpoints and artifacts

state.checkpoint_data - dictionary

with all extra data for experiment tracking

state.is_check_run - bool, indicator flag
  • True if you want to check your pipeline by running only 2 batches per loader and 2 epochs per stage

  • False (default) if you want to run the full pipeline

state.need_backward_pass - bool, indicator flag
  • True for training loaders

  • False otherwise

state.need_early_stop - bool, indicator flag

used for EarlyStopping and CheckRun Callbacks

  • True if we need to stop the training

  • False (default) otherwise

state.need_exception_reraise - bool, indicator flag
  • True (default) if you want exceptions to be re-raised during the pipeline, stopping the training process

  • False otherwise

state.exception - Python Exception instance to re-raise

(or not, depending on need_exception_reraise)

get_attr(key, inner_key=None)[source]
property input
property need_backward_pass
property output
set_attr(value, key, inner_key=None)[source]
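
get_attr and set_attr resolve attributes that may be stored as dictionaries (for example, several optimizers). A sketch of the assumed usage; the "generator" key is illustrative:

# assuming state.optimizer = {"generator": opt_g, "discriminator": opt_d}
optimizer = state.get_attr(key="optimizer", inner_key="generator")

# with inner_key=None, the attribute is returned as-is
criterion = state.get_attr(key="criterion")
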
class catalyst.core.callback.Callback(order: int, node: int = <CallbackNode.All: 0>, scope: int = <CallbackScope.Stage: 0>)[source]

Bases: object

Abstract class that all callback (e.g., Logger) classes extend from. Must be extended before usage.

Execution order of callback events:

-- stage start
---- epoch start (one epoch - one run of every loader)
------ loader start
-------- batch start
-------- batch handler
-------- batch end
------ loader end
---- epoch end
-- stage end

on_exception – triggered if an Exception was raised anywhere in the run

All callbacks have an order value from CallbackOrder and a node value from CallbackNode

__init__(order: int, node: int = <CallbackNode.All: 0>, scope: int = <CallbackScope.Stage: 0>)[source]

For order values, see the CallbackOrder class

on_batch_end(state: State)[source]
on_batch_start(state: State)[source]
on_epoch_end(state: State)[source]
on_epoch_start(state: State)[source]
on_exception(state: State)[source]
on_loader_end(state: State)[source]
on_loader_start(state: State)[source]
on_stage_end(state: State)[source]
on_stage_start(state: State)[source]
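A minimal custom callback sketch; the metric name and the computation are illustrative:

from catalyst.core.callback import Callback, CallbackOrder
from catalyst.core.state import State


class LogitsNormCallback(Callback):
    # writes one extra per-batch metric into state.batch_metrics

    def __init__(self):
        super().__init__(order=CallbackOrder.Metric)

    def on_batch_end(self, state: State):
        logits = state.batch_out["logits"]
        state.batch_metrics["logits_norm"] = logits.norm().item()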
class catalyst.core.callback.CallbackNode[source]

Bases: enum.IntFlag

An enumeration.

All = 0
Master = 1
Worker = 2
class catalyst.core.callback.CallbackOrder[source]

Bases: enum.IntFlag

An enumeration.

Internal = 0
Metric = 20
MetricAggregation = 40
Optimizer = 60
Validation = 80
Scheduler = 100
Logging = 120
External = 200
class catalyst.core.callback.CallbackScope[source]

Bases: enum.IntFlag

An enumeration.

Experiment = 1
Stage = 0

Callbacks

class catalyst.core.callbacks.checkpoint.CheckpointCallback(save_n_best: int = 1, resume: str = None, resume_dir: str = None, metrics_filename: str = '_metrics.json')[source]

Bases: catalyst.core.callbacks.checkpoint.BaseCheckpointCallback

Checkpoint callback to save/restore your model/criterion/optimizer/metrics.

__init__(save_n_best: int = 1, resume: str = None, resume_dir: str = None, metrics_filename: str = '_metrics.json')[source]
Parameters
  • save_n_best (int) – number of best checkpoints to keep

  • resume (str) – path to a checkpoint to load and use to initialize the runner state

  • resume_dir (str) – directory with a checkpoint to resume from

  • metrics_filename (str) – filename for saving metrics in the checkpoint folder; must end with .json or .yml

get_checkpoint_suffix(checkpoint: dict) → str[source]
on_epoch_end(state: catalyst.core.state.State)[source]
on_stage_end(state: catalyst.core.state.State)[source]
on_stage_start(state: catalyst.core.state.State)[source]
process_checkpoint(logdir: Union[str, pathlib.Path], checkpoint: Dict, is_best: bool, main_metric: str = 'loss', minimize_metric: bool = True)[source]
process_metrics(last_valid_metrics) → Dict[source]
truncate_checkpoints(minimize_metric: bool) → None[source]
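A construction sketch; the path below is a hypothetical example:

from catalyst.core.callbacks.checkpoint import CheckpointCallback

checkpoint = CheckpointCallback(
    save_n_best=3,                       # keep the 3 best checkpoints
    resume="logs/checkpoints/best.pth",  # hypothetical path to resume from
)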
class catalyst.core.callbacks.checkpoint.IterationCheckpointCallback(save_n_last: int = 1, period: int = 100, stage_restart: bool = True, metrics_filename: str = '_metrics_iter.json')[source]

Bases: catalyst.core.callbacks.checkpoint.BaseCheckpointCallback

Iteration checkpoint callback to save your model/criterion/optimizer

__init__(save_n_last: int = 1, period: int = 100, stage_restart: bool = True, metrics_filename: str = '_metrics_iter.json')[source]
Parameters
  • save_n_last (int) – number of last checkpoints to keep

  • period (int) – save a checkpoint every period batches

  • stage_restart (bool) – whether to restart the counter at every stage

  • metrics_filename (str) – filename for saving metrics in the checkpoint folder; must end with .json or .yml

get_checkpoint_suffix(checkpoint: dict) → str[source]
on_batch_end(state: catalyst.core.state.State)[source]
on_stage_start(state: catalyst.core.state.State)[source]
process_checkpoint(logdir: Union[str, pathlib.Path], checkpoint: Dict, batch_metrics: Dict[str, float])[source]
process_metrics() → Dict[source]
truncate_checkpoints(**kwargs) → None[source]
class catalyst.core.callbacks.criterion.CriterionCallback(input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', prefix: str = 'loss', criterion_key: str = None, multiplier: float = 1.0, **metric_kwargs)[source]

Bases: catalyst.core.callbacks.metrics._MetricCallback

Callback that measures the loss with the specified criterion.

__init__(input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', prefix: str = 'loss', criterion_key: str = None, multiplier: float = 1.0, **metric_kwargs)[source]
Parameters
  • input_key (Union[str, List[str], Dict[str, str]]) – key/list/dict of keys that take values from the input dictionary. If ‘__all__’, the whole input will be passed to the criterion. If None, an empty dict will be passed.

  • output_key (Union[str, List[str], Dict[str, str]]) – key/list/dict of keys that take values from the output dictionary. If ‘__all__’, the whole output will be passed to the criterion. If None, an empty dict will be passed.

  • prefix (str) – prefix for metrics and output key for loss in state.batch_metrics dictionary

  • criterion_key (str) – A key to take a criterion in case there are several of them and they are in a dictionary format.

  • multiplier (float) – scale factor for the output loss.

property metric_fn
on_stage_start(state: catalyst.core.state.State)[source]

Checks that the current stage has correct criterion
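A sketch assuming the criterion was set up as a dict of two losses (names are illustrative); each callback writes its loss under its own prefix:

from catalyst.core.callbacks.criterion import CriterionCallback

# assuming criterion = {"ce": nn.CrossEntropyLoss(), "l1": nn.L1Loss()}
callbacks = [
    CriterionCallback(prefix="loss_ce", criterion_key="ce"),
    CriterionCallback(prefix="loss_l1", criterion_key="l1", multiplier=0.5),
]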

class catalyst.core.callbacks.early_stop.CheckRunCallback(num_batch_steps: int = 2, num_epoch_steps: int = 2)[source]

Bases: catalyst.core.callback.Callback

Runs only num_batch_steps batches per loader and num_epoch_steps epochs per stage, for quick pipeline checks.

on_batch_end(state: catalyst.core.state.State)[source]
on_epoch_end(state: catalyst.core.state.State)[source]
class catalyst.core.callbacks.early_stop.EarlyStoppingCallback(patience: int, metric: str = 'loss', minimize: bool = True, min_delta: float = 1e-06)[source]

Bases: catalyst.core.callback.Callback

Stops the training early if the monitored metric has not improved by at least min_delta for patience epochs.

on_epoch_end(state: catalyst.core.state.State) → None[source]
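A construction sketch: stop if "loss" has not improved by at least min_delta for 5 epochs:

from catalyst.core.callbacks.early_stop import EarlyStoppingCallback

early_stopping = EarlyStoppingCallback(
    patience=5, metric="loss", minimize=True, min_delta=1e-4,
)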
class catalyst.core.callbacks.exception.ExceptionCallback[source]

Bases: catalyst.core.callback.Callback

on_exception(state: catalyst.core.state.State)[source]
class catalyst.core.callbacks.logging.ConsoleLogger[source]

Bases: catalyst.core.callback.Callback

Logger callback, translates state.*_metrics to console and text file

__init__()[source]

Init ConsoleLogger

on_epoch_end(state: catalyst.core.state.State)[source]

Translate state.metric_manager to console and text file at the end of an epoch

on_stage_end(state: catalyst.core.state.State)[source]

Called at the end of each stage

on_stage_start(state: catalyst.core.state.State)[source]

Prepare state.logdir for the current stage

class catalyst.core.callbacks.logging.TensorboardLogger(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True)[source]

Bases: catalyst.core.callback.Callback

Logger callback, translates state.metric_manager to tensorboard

__init__(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True)[source]
Parameters
  • metric_names (List[str]) – list of metric names to log; if None, logs everything

  • log_on_batch_end (bool) – logs per-batch metrics if set True

  • log_on_epoch_end (bool) – logs per-epoch metrics if set True

on_batch_end(state: catalyst.core.state.State)[source]

Translate batch metrics to tensorboard

on_epoch_end(state: catalyst.core.state.State)[source]

Translate epoch metrics to tensorboard

on_loader_start(state: catalyst.core.state.State)[source]

Prepare tensorboard writers for the current stage

on_stage_end(state: catalyst.core.state.State)[source]

Close opened tensorboard writers

on_stage_start(state: catalyst.core.state.State)[source]
class catalyst.core.callbacks.logging.VerboseLogger(always_show: List[str] = None, never_show: List[str] = None)[source]

Bases: catalyst.core.callback.Callback

Logs the metrics to the console via a tqdm progress bar

__init__(always_show: List[str] = None, never_show: List[str] = None)[source]
Parameters
  • always_show (List[str]) – list of metrics to always show; if None, defaults to ["_timer/_fps"]. To remove the always-shown metrics, set it to an empty list []

  • never_show (List[str]) – list of metrics which will not be shown

on_batch_end(state: catalyst.core.state.State)[source]

Update tqdm progress bar at the end of each batch

on_exception(state: catalyst.core.state.State)[source]

Called if an Exception was raised

on_loader_end(state: catalyst.core.state.State)[source]

Cleanup and close tqdm progress bar

on_loader_start(state: catalyst.core.state.State)[source]

Init tqdm progress bar

class catalyst.core.callbacks.metrics._MetricCallback(prefix: str, input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', multiplier: float = 1.0, **metrics_kwargs)[source]

Bases: abc.ABC, catalyst.core.callback.Callback

abstract property metric_fn
on_batch_end(state: catalyst.core.state.State)[source]

Computes the metric and adds it to the batch metrics

class catalyst.core.callbacks.metrics.MetricCallback(prefix: str, metric_fn: Callable, input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', multiplier: float = 1.0, **metric_kwargs)[source]

Bases: catalyst.core.callbacks.metrics._MetricCallback

A callback that returns a single metric on state.on_batch_end

property metric_fn
class catalyst.core.callbacks.metrics.MultiMetricCallback(prefix: str, metric_fn: Callable, list_args: List, input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', multiplier: float = 1.0, **metrics_kwargs)[source]

Bases: catalyst.core.callbacks.metrics.MetricCallback

A callback that returns multiple metrics on state.on_batch_end

on_batch_end(state: catalyst.core.state.State)[source]
class catalyst.core.callbacks.metrics.MetricAggregationCallback(prefix: str, metrics: Union[str, List[str], Dict[str, float]] = None, mode: str = 'mean', multiplier: float = 1.0)[source]

Bases: catalyst.core.callback.Callback

A callback to aggregate several metrics in one value.

__init__(prefix: str, metrics: Union[str, List[str], Dict[str, float]] = None, mode: str = 'mean', multiplier: float = 1.0) → None[source]
Parameters
  • prefix (str) – new key for aggregated metric.

  • metrics (Union[str, List[str], Dict[str, float]]) – if not None, aggregates only the metric values under these keys. For weighted_sum aggregation it must be a Dict[str, float].

  • mode (str) – function for aggregation. Must be either sum, mean or weighted_sum.

  • multiplier (float) – scale factor for the aggregated metric.

on_batch_end(state: catalyst.core.state.State) → None[source]

Computes the aggregated metric and adds it to the batch metrics
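A sketch pairing this callback with the two criterion losses from above (keys are illustrative); the aggregated value lands in state.batch_metrics under prefix:

from catalyst.core.callbacks.metrics import MetricAggregationCallback

# state.batch_metrics["loss"] = 0.7 * loss_ce + 0.3 * loss_l1
aggregation = MetricAggregationCallback(
    prefix="loss",
    metrics={"loss_ce": 0.7, "loss_l1": 0.3},
    mode="weighted_sum",
)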

class catalyst.core.callbacks.metrics.MetricManagerCallback[source]

Bases: catalyst.core.callback.Callback

Prepares metrics for logging, transferring values from PyTorch to numpy

on_batch_end(state: catalyst.core.state.State)[source]
on_batch_start(state: catalyst.core.state.State)[source]
on_epoch_start(state: catalyst.core.state.State)[source]
on_loader_end(state: catalyst.core.state.State)[source]
on_loader_start(state: catalyst.core.state.State)[source]
class catalyst.core.callbacks.optimizer.OptimizerCallback(loss_key: str = 'loss', optimizer_key: str = None, accumulation_steps: int = 1, grad_clip_params: Dict = None, decouple_weight_decay: bool = True)[source]

Bases: catalyst.core.callback.Callback

Optimizer callback, abstraction over optimizer step.

__init__(loss_key: str = 'loss', optimizer_key: str = None, accumulation_steps: int = 1, grad_clip_params: Dict = None, decouple_weight_decay: bool = True)[source]
Parameters
  • loss_key (str) – key to get the loss from state.loss

  • optimizer_key (str) – a key to take an optimizer in case there are several of them and they are in a dictionary format

  • accumulation_steps (int) – number of steps before model.zero_grad()

  • grad_clip_params (dict) – params for gradient clipping

  • decouple_weight_decay (bool) – if True, decouple weight decay regularization

  • save_model_grads (bool) – if True, State.model_grads will contain the gradients calculated on backward propagation for the current batch

static grad_step(*, optimizer: torch.optim.optimizer.Optimizer, optimizer_wds: List[float] = 0, grad_clip_fn: Callable = None)[source]

Makes a gradient step for a given optimizer

Parameters
  • optimizer (Optimizer) – the optimizer

  • optimizer_wds (List[float]) – list of weight decay parameters for each param group

  • grad_clip_fn (Callable) – function for gradient clipping

on_batch_end(state: catalyst.core.state.State)[source]

On batch end event

on_epoch_end(state: catalyst.core.state.State)[source]

On epoch end event

on_epoch_start(state: catalyst.core.state.State)[source]

On epoch start event

on_stage_start(state: catalyst.core.state.State)[source]

Checks that the current stage has correct optimizer
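A gradient-accumulation sketch: with accumulation_steps=4 the optimizer steps and zeroes gradients once per four batches, emulating a four times larger batch:

from catalyst.core.callbacks.optimizer import OptimizerCallback

optimizer_callback = OptimizerCallback(
    loss_key="loss",
    accumulation_steps=4,  # step/zero_grad every 4 batches
)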

class catalyst.core.callbacks.scheduler.SchedulerCallback(scheduler_key: str = None, mode: str = None, reduced_metric: str = None)[source]

Bases: catalyst.core.callback.Callback

Wraps a scheduler and calls its step on batch end or epoch end, depending on mode.

on_batch_end(state: catalyst.core.state.State)[source]
on_epoch_end(state: catalyst.core.state.State)[source]
on_loader_start(state: catalyst.core.state.State)[source]
on_stage_start(state: catalyst.core.state.State)[source]
step_batch(state: catalyst.core.state.State)[source]
step_epoch(state: catalyst.core.state.State)[source]
class catalyst.core.callbacks.scheduler.LRUpdater(optimizer_key: str = None)[source]

Bases: catalyst.core.callback.Callback

Base class that all LR updaters inherit from

__init__(optimizer_key: str = None)[source]
Parameters

optimizer_key – which optimizer key to use for learning rate scheduling

calc_lr()[source]
calc_momentum()[source]
on_batch_end(state: catalyst.core.state.State)[source]
on_loader_start(state: catalyst.core.state.State)[source]
on_stage_start(state: catalyst.core.state.State)[source]
update_optimizer(state: catalyst.core.state.State)[source]
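A subclass sketch; the linear-decay rule and the None return from calc_momentum are illustrative assumptions about how the override points are meant to be used:

from catalyst.core.callbacks.scheduler import LRUpdater


class LinearLRDecay(LRUpdater):
    # hypothetical updater: decay the LR linearly over a fixed number of steps

    def __init__(self, lr_start=1e-3, lr_end=1e-5, num_steps=1000, optimizer_key=None):
        super().__init__(optimizer_key=optimizer_key)
        self.lr_start, self.lr_end, self.num_steps = lr_start, lr_end, num_steps
        self._step = 0

    def calc_lr(self):
        self._step = min(self._step + 1, self.num_steps)
        progress = self._step / self.num_steps
        return self.lr_start + (self.lr_end - self.lr_start) * progress

    def calc_momentum(self):
        return None  # assumed to leave the optimizer's momentum untouched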
class catalyst.core.callbacks.timer.TimerCallback[source]

Bases: catalyst.core.callback.Callback

Logs pipeline execution time

on_batch_end(state: catalyst.core.state.State)[source]
on_batch_start(state: catalyst.core.state.State)[source]
on_loader_end(state: catalyst.core.state.State)[source]
on_loader_start(state: catalyst.core.state.State)[source]
class catalyst.core.callbacks.validation.ValidationManagerCallback[source]

Bases: catalyst.core.callback.Callback

A callback to aggregate state.valid_metrics from state.epoch_metrics.

on_epoch_end(state: catalyst.core.state.State)[source]
on_epoch_start(state: catalyst.core.state.State)[source]

Registry

catalyst.core.registry.Callback(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance; use only when passing a single instance

  • named_factories – Factories and their names, as kwargs

Returns

First factory passed

Return type

(Factory)
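All the registry helpers in this section share this signature. A usage sketch; MyCallback is an assumed user-defined class:

from catalyst.core import registry
from catalyst.core.callback import Callback


@registry.Callback  # decorator form: registers MyCallback under its __name__
class MyCallback(Callback):
    ...

# explicit form with a provided name
registry.Callback(MyCallback, name="my_callback")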

catalyst.core.registry.Criterion(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance; use only when passing a single instance

  • named_factories – Factories and their names, as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Optimizer(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance; use only when passing a single instance

  • named_factories – Factories and their names, as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Scheduler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance; use only when passing a single instance

  • named_factories – Factories and their names, as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Module(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance; use only when passing a single instance

  • named_factories – Factories and their names, as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Model(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance; use only when passing a single instance

  • named_factories – Factories and their names, as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Sampler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance; use only when passing a single instance

  • named_factories – Factories and their names, as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.core.registry.Transform(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance; use only when passing a single instance

  • named_factories – Factories and their names, as kwargs

Returns

First factory passed

Return type

(Factory)

Utils

catalyst.core.utils.callbacks.process_callbacks(callbacks: Union[list, collections.OrderedDict]) → collections.OrderedDict[source]

Creates a sequence of callbacks and sorts them.

Parameters
  • callbacks – either a list of callbacks or an ordered dict of callbacks

Returns

sequence of callbacks sorted by callback order
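
A usage sketch of the expected ordering; given CallbackOrder, the criterion callback (Metric = 20) should come out before the optimizer callback (Optimizer = 60) regardless of input order:

from catalyst.core.utils.callbacks import process_callbacks
from catalyst.core.callbacks.criterion import CriterionCallback
from catalyst.core.callbacks.optimizer import OptimizerCallback

ordered = process_callbacks([OptimizerCallback(), CriterionCallback()])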