Core¶
Experiment¶
-
class
catalyst.core.experiment.
_Experiment
[source]¶ Bases:
abc.ABC
An abstraction that contains information about the experiment – a model, a criterion, an optimizer, a scheduler, and their hyperparameters. It also contains information about the data and transformations used. In general, the Experiment knows what you would like to run.
Note
To learn more about Catalyst Core concepts, please check out the documentation.
For implementations of this abstraction, please check out:
catalyst.dl.experiment.base.BaseExperiment
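For orientation, a partial sketch of a custom subclass (illustrative only: the toy model, criterion and optimizer are assumptions, and the remaining abstract members such as get_callbacks, get_loaders and the abstract properties are elided; real code usually starts from BaseExperiment instead):
import torch
from torch import nn

from catalyst.core.experiment import _Experiment


class MyExperiment(_Experiment):
    # toy MNIST-like components, purely for illustration
    def get_model(self, stage: str) -> nn.Module:
        return nn.Sequential(nn.Linear(28 * 28, 128), nn.Linear(128, 10))

    def get_criterion(self, stage: str) -> nn.Module:
        return nn.CrossEntropyLoss()

    def get_optimizer(self, stage: str, model: nn.Module) -> torch.optim.Optimizer:
        return torch.optim.Adam(model.parameters())

    ...  # remaining abstract members elided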
-
abstract property
distributed_params
¶ Dictionary with the parameters for distributed and half-precision training.
Used in
catalyst.utils.distributed.process_components
to set up Nvidia Apex or PyTorch distributed.
Example:
>>> experiment.distributed_params
{"opt_level": "O1", "syncbn": True}  # Apex variant
-
abstract
get_callbacks
(stage: str) → OrderedDict[str, Callback][source]¶ Returns callbacks for a given stage.
Note
To learn more about Catalyst Callbacks mechanism, please follow
catalyst.core.callback.Callback
documentation.
Note
We need an ordered dictionary to guarantee the correct dataflow and order of metric optimization. For example, to compute loss before optimization, or to compute all the metrics before logging :)
- Parameters
stage (str) – stage name of interest like “pretraining” / “training” / “finetuning” / etc
- Returns
Ordered dictionary with callbacks for current stage.
- Return type
OrderedDict[str, Callback]
Note
To learn more about Catalyst Core concepts, please check out the documentation.
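Example (an illustrative override; the chosen callbacks are placeholders taken from this page, any Callback subclasses work):
from collections import OrderedDict

from catalyst.core.callbacks.checkpoint import CheckpointCallback
from catalyst.core.callbacks.criterion import CriterionCallback
from catalyst.core.callbacks.optimizer import OptimizerCallback


def get_callbacks(self, stage: str) -> OrderedDict:
    # insertion order matters: loss is computed before the optimizer step,
    # and checkpoints are saved once metrics are known
    return OrderedDict([
        ("criterion", CriterionCallback()),
        ("optimizer", OptimizerCallback()),
        ("saver", CheckpointCallback()),
    ])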
-
abstract
get_criterion
(stage: str) → torch.nn.modules.module.Module[source]¶ Returns the criterion for a given stage.
Example:
# for a typical classification task
>>> experiment.get_criterion(stage="training")
nn.CrossEntropyLoss()
- Parameters
stage (str) – stage name of interest like “pretraining” / “training” / “finetuning” / etc
- Returns
criterion for a given stage.
- Return type
Criterion
-
get_datasets
(stage: str, epoch: int = None, **kwargs) → OrderedDict[str, Dataset][source]¶ Returns the datasets for a given stage and epoch.
Note
For Deep Learning cases you have the same dataset during the whole stage.
For Reinforcement Learning it is common to change the dataset (experiment) every training epoch.
- Parameters
stage (str) – stage name of interest, like “pretraining” / “training” / “finetuning” / etc
epoch (int) – epoch index
**kwargs (dict) – additional parameters to use during dataset creation
- Returns
Ordered dictionary with datasets for current stage and epoch.
- Return type
OrderedDict[str, Dataset]
Note
We need an ordered dictionary to guarantee the correct dataflow and order of our training datasets. For example, to run through the train data before the validation one :)
Example:
>>> experiment.get_datasets(
>>>     stage="training",
>>>     in_csv_train="path/to/train/csv",
>>>     in_csv_valid="path/to/valid/csv",
>>> )
OrderedDict({
    "train": CsvDataset(in_csv=in_csv_train, ...),
    "valid": CsvDataset(in_csv=in_csv_valid, ...),
})
-
get_experiment_components
(model: torch.nn.modules.module.Module, stage: str) → Tuple[torch.nn.modules.module.Module, torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler._LRScheduler][source]¶ Returns the tuple containing criterion, optimizer and scheduler for a given model and stage.
Aggregation method, based on get_criterion, get_optimizer and get_scheduler.
- Parameters
model (Model) – model to optimize with stage optimizer
stage (str) – stage name of interest, like “pretraining” / “training” / “finetuning” / etc
- Returns
criterion, optimizer, scheduler for a given stage and model
- Return type
tuple
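A sketch of the equivalent aggregation logic (not the exact source, but it follows from the getters above):
def get_experiment_components(self, model, stage: str):
    # each component comes from its own stage-wise getter
    criterion = self.get_criterion(stage)
    optimizer = self.get_optimizer(stage, model)
    scheduler = self.get_scheduler(stage, optimizer)
    return criterion, optimizer, scheduler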
-
abstract
get_loaders
(stage: str, epoch: int = None) → OrderedDict[str, DataLoader][source]¶ Returns the loaders for a given stage.
Note
Wrapper for
catalyst.core.experiment._Experiment.get_datasets
. For most experiments you only need to override the get_datasets method.
- Parameters
stage (str) – stage name of interest, like “pretraining” / “training” / “finetuning” / etc
epoch (int) – epoch index
- Returns
Ordered dictionary with loaders for current stage and epoch.
- Return type
OrderedDict[str, DataLoader]
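A hedged sketch of such a wrapper (the batch size and the shuffle policy are assumptions for illustration):
from collections import OrderedDict

from torch.utils.data import DataLoader


def get_loaders(self, stage: str, epoch: int = None) -> OrderedDict:
    # wrap every dataset from get_datasets into a DataLoader,
    # shuffling only the training data (see the "train" prefix convention)
    loaders = OrderedDict()
    for name, dataset in self.get_datasets(stage=stage, epoch=epoch).items():
        loaders[name] = DataLoader(
            dataset, batch_size=32, shuffle=name.startswith("train"),
        )
    return loaders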
-
abstract
get_model
(stage: str) → torch.nn.modules.module.Module[source]¶ Returns the model for a given stage.
Example:
# suppose we have a typical MNIST model, like
# nn.Sequential(nn.Linear(28*28, 128), nn.Linear(128, 10))
>>> experiment.get_model(stage="training")
Sequential(
  (0): Linear(in_features=784, out_features=128, bias=True)
  (1): Linear(in_features=128, out_features=10, bias=True)
)
- Parameters
stage (str) – stage name of interest like “pretraining” / “training” / “finetuning” / etc
- Returns
model for a given stage.
- Return type
Model
-
abstract
get_optimizer
(stage: str, model: torch.nn.modules.module.Module) → torch.optim.optimizer.Optimizer[source]¶ Returns the optimizer for a given stage and model.
Example:
>>> experiment.get_optimizer(stage="training", model=model)
torch.optim.Adam(model.parameters())
- Parameters
stage (str) – stage name of interest like “pretraining” / “training” / “finetuning” / etc
model (Model) – model to optimize with stage optimizer
- Returns
optimizer for a given stage and model.
- Return type
Optimizer
-
abstract
get_scheduler
(stage: str, optimizer: torch.optim.optimizer.Optimizer) → torch.optim.lr_scheduler._LRScheduler[source]¶ Returns the scheduler for a given stage and optimizer.
Example:
>>> experiment.get_scheduler(stage="training", optimizer=optimizer)
torch.optim.lr_scheduler.StepLR(optimizer)
- Parameters
stage (str) – stage name of interest like “pretraining” / “training” / “finetuning” / etc
optimizer (Optimizer) – optimizer to schedule with stage scheduler
- Returns
scheduler for a given stage and optimizer.
- Return type
Scheduler
-
abstract
get_state_params
(stage: str) → Mapping[str, Any][source]¶ Returns State parameters for a given stage.
To learn more about State, please follow
catalyst.core.state.State
documentation.
Example:
>>> experiment.get_state_params(stage="training")
{
    "logdir": "./logs/training",
    "num_epochs": 42,
    "valid_loader": "valid",
    "main_metric": "loss",
    "minimize_metric": True,
    "checkpoint_data": {"comment": "we are going to make it!"}
}
- Parameters
stage (str) – stage name of interest like “pretraining” / “training” / “finetuning” / etc
- Returns
State parameters for a given stage.
- Return type
dict
-
get_transforms
(stage: str = None, dataset: str = None)[source]¶ Returns the data transforms for a given stage and dataset.
- Parameters
stage (str) – stage name of interest, like “pretraining” / “training” / “finetuning” / etc
dataset (str) – dataset name of interest, like “train” / “valid” / “infer”
Note
For datasets/loaders naming please follow
catalyst.core.state.State
documentation.
- Returns
Data transformations to use for specified dataset.
-
abstract property
initial_seed
¶ Experiment’s initial seed, used to set up the global seed at the beginning of each stage. Additionally, the Catalyst Runner sets experiment.initial_seed + state.global_epoch + 1 as the global seed each epoch. Used for experiment reproducibility.
Example:
>>> experiment.initial_seed
42
-
abstract property
logdir
¶ Path to the directory where the experiment logs would be saved.
Example:
>>> experiment.logdir
./path/to/my/experiment/logs
-
abstract property
monitoring_params
¶ Dictionary with the parameters for monitoring services, like Alchemy
Example:
>>> experiment.monitoring_params
{
    "token": None,  # insert your personal token here
    "project": "classification_example",
    "group": "first_trial",
    "experiment": "first_experiment",
}
Warning
Deprecated, saved for backward compatibility. Please use
catalyst.contrib.dl.callbacks.alchemy.AlchemyLogger
instead.
-
abstract property
stages
¶ Experiment’s stage names.
Example:
>>> experiment.stages
["pretraining", "training", "finetuning"]
Note
To understand the stages concept, please follow the Catalyst documentation, for example,
catalyst.core.callback.Callback
-
class
catalyst.core.experiment.
StageBasedExperiment
[source]¶ Bases:
catalyst.core.experiment._Experiment
Experiment that provides constant data sources during the training/inference stage.
Runner¶
-
class
catalyst.core.runner.
_Runner
(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]¶ Bases:
abc.ABC
An abstraction that knows how to run an experiment. It contains all the logic of how to run the experiment, its stages, epochs and batches.
Note
To learn more about Catalyst Core concepts, please check out the documentation and the available implementations of this abstraction.
-
__init__
(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]¶ - Parameters
model (Model) – Torch model object
device (Device) – Torch device
-
property
device
¶ Returns the runner’s device instance.
-
property
model
¶ Returns the runner’s model instance.
-
run_experiment
(experiment: catalyst.core.experiment._Experiment = None) → catalyst.core.runner._Runner[source]¶ Starts the experiment.
- Parameters
experiment (_Experiment) – Experiment instance to use for Runner.
-
-
class
catalyst.core.runner.
StageBasedRunner
(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]¶ Bases:
catalyst.core.runner._Runner
Runner that is supposed to have constant data sources during the training/inference stage.
Callback¶
-
class
catalyst.core.callback.
Callback
(order: int, node: int = <CallbackNode.All: 0>, scope: int = <CallbackScope.Stage: 0>)[source]¶ Bases:
object
An abstraction that lets you customize your experiment run logic. To give users maximum flexibility and extensibility Catalyst supports callback execution anywhere in the training loop:
-- stage start
---- epoch start
------ loader start
-------- batch start
---------- batch handler (Runner logic)
-------- batch end
------ loader end
---- epoch end
-- stage end

exception – if an Exception was raised
- All callbacks have
order from CallbackOrder
node from CallbackNode
scope from CallbackScope
Note
To learn more about Catalyst Core concepts, please check out the documentation and the available implementations of this abstraction.
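A minimal custom callback sketch wired into the events above (the counting logic is purely illustrative):
from catalyst.core.callback import Callback, CallbackOrder


class BatchCounterCallback(Callback):
    # illustrative: counts batches per loader via the event hooks
    def __init__(self):
        super().__init__(order=CallbackOrder.Metric)
        self.counter = 0

    def on_loader_start(self, state):
        self.counter = 0  # reset on every "loader start" event

    def on_batch_end(self, state):
        self.counter += 1  # fires on every "batch end" event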
-
__init__
(order: int, node: int = <CallbackNode.All: 0>, scope: int = <CallbackScope.Stage: 0>)[source]¶ Callback initializer.
- Parameters
order – flag from
CallbackOrder
node – flag from
CallbackNode
scope – flag from
CallbackScope
-
on_batch_end
(state: State)[source]¶ Event handler for batch end.
- Parameters
state ("State") – State instance.
-
on_batch_start
(state: State)[source]¶ Event handler for batch start.
- Parameters
state ("State") – State instance.
-
on_epoch_end
(state: State)[source]¶ Event handler for epoch end.
- Parameters
state ("State") – State instance.
-
on_epoch_start
(state: State)[source]¶ Event handler for epoch start.
- Parameters
state ("State") – State instance.
-
on_exception
(state: State)[source]¶ Event handler for exception case.
- Parameters
state ("State") – State instance.
-
on_loader_end
(state: State)[source]¶ Event handler for loader end.
- Parameters
state ("State") – State instance.
-
on_loader_start
(state: State)[source]¶ Event handler for loader start.
- Parameters
state ("State") – State instance.
-
class
catalyst.core.callback.
CallbackNode
[source]¶ Bases:
enum.IntFlag
Callback node usage flag during distributed training.
All (0) - use on all nodes, both master and worker.
Master (1) - use only on master node.
Worker (2) - use only on worker nodes.
-
All
= 0¶
-
Master
= 1¶
-
Worker
= 2¶
-
class
catalyst.core.callback.
CallbackOrder
[source]¶ Bases:
enum.IntFlag
Callback usage order during training.
Catalyst executes Callbacks with low CallbackOrder before Callbacks with high CallbackOrder.
Predefined orders:
Internal (0) - some Catalyst Extras, like PhaseCallbacks (used in GANs).
Metric (20) - Callbacks with metrics and losses computation.
MetricAggregation (40) - metrics aggregation callbacks, like summing different losses into one.
Optimizer (60) - optimizer step, requires computed metrics for optimization.
Validation (80) - validation step, computes validation metrics subset based on all metrics.
Scheduler (100) - scheduler step, in ReduceLROnPlateau case requires computed validation metrics for optimizer schedule.
Logging (120) - logging step, logs metrics to Console/Tensorboard/Alchemy, requires computed metrics.
External (200) - additional callbacks with custom logic, like InferenceCallbacks
Nevertheless, you always can create CustomCallback with any order, for example:
>>> class MyCustomCallback(Callback):
>>>     def __init__(self):
>>>         super().__init__(order=42)
>>>     ...
# MyCustomCallback will be executed after all `MetricAggregation`-Callbacks (40)
# but before all `Optimizer`-Callbacks (60).
-
External
= 200¶
-
Internal
= 0¶
-
Logging
= 120¶
-
Metric
= 20¶
-
MetricAggregation
= 40¶
-
Optimizer
= 60¶
-
Scheduler
= 100¶
-
Validation
= 80¶
State¶
-
class
catalyst.core.state.
State
(*, device: Union[str, torch.device] = None, model: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]] = None, criterion: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]] = None, optimizer: Union[torch.optim.optimizer.Optimizer, Dict[str, torch.optim.optimizer.Optimizer]] = None, scheduler: Union[torch.optim.lr_scheduler._LRScheduler, Dict[str, torch.optim.lr_scheduler._LRScheduler]] = None, callbacks: Dict[str, Callback] = None, logdir: str = None, stage: str = 'infer', num_epochs: int = None, main_metric: str = 'loss', minimize_metric: bool = True, valid_loader: str = 'valid', checkpoint_data: Dict = None, is_check_run: bool = False, **kwargs)[source]¶ Bases:
catalyst.utils.tools.frozen_class.FrozenClass
Some intermediate storage between Experiment and Runner that saves the current state of the Experiment – model, criterion, optimizer, schedulers, metrics, loggers, loaders, etc.
Note
To learn more about Catalyst Core concepts, please check out the documentation.
State includes the following attributes:
state.loaders - ordered dictionary with torch.DataLoaders; for example,
state.loaders = {
    "train": MnistTrainLoader(),
    "valid": MnistValidLoader(),
}
Note
“train” prefix is used for training loaders - metrics computations, backward pass, optimization
“valid” prefix is used for validation loaders - metrics computations only
“infer” prefix is used for inference loaders - dataset prediction
state.model - an instance of torch.nn.Module class (should implement forward method); for example,
state.model = torch.nn.Linear(10, 10)
state.criterion - an instance of torch.nn.Module class or torch.nn.modules.loss._Loss (should implement forward method); for example,
state.criterion = torch.nn.CrossEntropyLoss()
state.optimizer - an instance of torch.optim.optimizer.Optimizer (should implement step method); for example,
state.optimizer = torch.optim.Adam(state.model.parameters())
state.scheduler - an instance of torch.optim.lr_scheduler._LRScheduler (should implement step method); for example,
state.scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(state.optimizer)
state.device - an instance of torch.device (CPU, GPU, TPU); for example,
state.device = torch.device("cpu")
state.callbacks - ordered dictionary with Catalyst.Callback instances; for example,
state.callbacks = {
    "accuracy": AccuracyCallback(),
    "criterion": CriterionCallback(),
    "optim": OptimizerCallback(),
    "saver": CheckpointCallback(),
}
state.batch_in - dictionary, containing a batch of data from the current DataLoader; for example,
state.batch_in = {
    "images": np.ndarray(batch_size, c, h, w),
    "targets": np.ndarray(batch_size, 1),
}
state.batch_out - dictionary, containing model output for current batch; for example,
state.batch_out = {"logits": torch.Tensor(batch_size, num_classes)}
state.batch_metrics - dictionary, flatten storage for batch metrics; for example,
state.batch_metrics = {"loss": ..., "accuracy": ..., "iou": ...}
state.loader_metrics - dictionary with aggregated batch statistics for loader (mean over all batches) and global loader metrics, like AUC; for example,
state.loader_metrics = {"loss": ..., "accuracy": ..., "auc": ...}
state.epoch_metrics - dictionary with summarized metrics for different loaders and global epoch metrics, like lr, momentum; for example,
state.epoch_metrics = {
    "train_loss": ..., "train_auc": ...,
    "valid_loss": ...,
    "lr": ..., "momentum": ...,
}
state.is_best_valid - bool, indicator flag
- True if this training epoch is the best over all epochs
- False if not
state.valid_metrics - dictionary with validation metrics for the current epoch; for example,
state.valid_metrics = {"loss": ..., "accuracy": ..., "auc": ...}
Note
subdictionary of epoch_metrics
state.best_valid_metrics - dictionary with best validation metrics during whole training process
state.distributed_rank - distributed rank of current worker
state.is_distributed_worker - bool, indicator flag
- True if is worker node (state.distributed_rank > 0)
- False if is master node (state.distributed_rank == 0)
state.stage_name - string, current stage name, for example,
state.stage_name = "pretraining" / "training" / "finetuning" / etc
state.epoch - int, numerical indicator for current stage epoch
state.num_epochs - int, maximum number of epochs, required for this stage
state.loader_name - string, current loader name; for example,
state.loader_name = "train_dataset1" / "valid_data2" / "infer_golden"
state.loader_step - int, numerical indicator for batch index in current loader
state.loader_len - int, maximum number of batches in the current loader
state.batch_size - int, typical Deep Learning batch size parameter
state.global_step - int, numerical indicator, counter for all batches that pass through our model during the training, validation and inference stages
state.global_epoch - int, numerical indicator, counter for all epochs that have passed during model training, validation and inference stages
state.main_metric - string containing the name of the metric of interest for optimization, validation and checkpointing during training
state.minimize_metric - bool, indicator flag
- True if we need to minimize the metric during training, like Cross Entropy loss
- False if we need to maximize the metric during training, like Accuracy or Intersection over Union
state.valid_loader - string, name of validation loader for metric selection, validation and model checkpointing
state.logdir - string, path to logging directory to save all logs, metrics, checkpoints and artifacts
state.checkpoint_data - dictionary with all extra data for experiment tracking
state.is_check_run - bool, indicator flag
- True if you want to check your pipeline and run only 2 batches per loader and 2 epochs per stage
- False (default) if you want to run the whole pipeline
state.is_train_loader - bool, indicator flag
- True for training loaders
- False otherwise
state.is_valid_loader - bool, indicator flag
- True for validation loaders
- False otherwise
state.is_infer_loader - bool, indicator flag
- True for inference loaders
- False otherwise
state.is_infer_stage - bool, indicator flag
- True for inference stages
- False otherwise
state.need_early_stop - bool, indicator flag used for EarlyStopping and CheckRun Callbacks
- True if we need to stop the training
- False (default) otherwise
state.need_exception_reraise - bool, indicator flag
- True (default) if you want to re-raise the exception during the pipeline and stop the training process
- False otherwise
state.exception - python Exception instance to raise (or not ;) )
-
__init__
(*, device: Union[str, torch.device] = None, model: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]] = None, criterion: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]] = None, optimizer: Union[torch.optim.optimizer.Optimizer, Dict[str, torch.optim.optimizer.Optimizer]] = None, scheduler: Union[torch.optim.lr_scheduler._LRScheduler, Dict[str, torch.optim.lr_scheduler._LRScheduler]] = None, callbacks: Dict[str, Callback] = None, logdir: str = None, stage: str = 'infer', num_epochs: int = None, main_metric: str = 'loss', minimize_metric: bool = True, valid_loader: str = 'valid', checkpoint_data: Dict = None, is_check_run: bool = False, **kwargs)[source]¶ - Parameters
@TODO – Docs. Contribution is welcome
-
get_attr
(key: str, inner_key: str = None) → Any[source]¶ Alias for the python getattr method. Useful for Callbacks preparation and cases with a multi-criterion, multi-optimizer setup. For example, when you would like to train a multi-task classification model.
Used to get a named attribute from State by the key keyword; for example,
# example 1
state.get_attr("criterion")
# is equivalent to state.criterion

# example 2
state.get_attr("optimizer")
# is equivalent to state.optimizer

# example 3
state.get_attr("scheduler")
# is equivalent to state.scheduler
With inner_key usage, it is supposed to find a dictionary under key and get inner_key from this dict; for example,
# example 1
state.get_attr("criterion", "bce")
# is equivalent to state.criterion["bce"]

# example 2
state.get_attr("optimizer", "adam")
# is equivalent to state.optimizer["adam"]

# example 3
state.get_attr("scheduler", "adam")
# is equivalent to state.scheduler["adam"]
- Parameters
key (str) – name for attribute of interest, like criterion, optimizer, scheduler
inner_key (str) – name of inner dictionary key
-
property
input
¶ Alias for state.batch_in.
Warning
Deprecated, saved for backward compatibility. Please use state.batch_in instead.
-
property
need_backward_pass
¶ Alias for state.is_train_loader.
Warning
Deprecated, saved for backward compatibility. Please use state.is_train_loader instead.
-
property
output
¶ Alias for state.batch_out.
Warning
Deprecated, saved for backward compatibility. Please use state.batch_out instead.
Callbacks¶
Checkpoint¶
-
class
catalyst.core.callbacks.checkpoint.
CheckpointCallback
(save_n_best: int = 1, resume: str = None, resume_dir: str = None, metrics_filename: str = '_metrics.json')[source]¶ Bases:
catalyst.core.callbacks.checkpoint.BaseCheckpointCallback
Checkpoint callback to save/restore your model/criterion/optimizer/metrics.
-
__init__
(save_n_best: int = 1, resume: str = None, resume_dir: str = None, metrics_filename: str = '_metrics.json')[source]¶ - Parameters
save_n_best (int) – number of best checkpoints to keep
resume (str) – path to checkpoint to load and initialize runner state
metrics_filename (str) – filename to save metrics in the checkpoint folder. Must end with .json or .yml
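Usage sketch (the resume path is a placeholder):
from catalyst.core.callbacks.checkpoint import CheckpointCallback

# keep the 3 best checkpoints; resume from a previous run
callback = CheckpointCallback(
    save_n_best=3,
    resume="path/to/checkpoint.pth",
)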
-
-
class
catalyst.core.callbacks.checkpoint.
IterationCheckpointCallback
(save_n_last: int = 1, period: int = 100, stage_restart: bool = True, metrics_filename: str = '_metrics_iter.json')[source]¶ Bases:
catalyst.core.callbacks.checkpoint.BaseCheckpointCallback
Iteration checkpoint callback to save your model/criterion/optimizer.
-
__init__
(save_n_last: int = 1, period: int = 100, stage_restart: bool = True, metrics_filename: str = '_metrics_iter.json')[source]¶ - Parameters
save_n_last (int) – number of last checkpoints to keep
period (int) – save a checkpoint every period iterations
stage_restart (bool) – whether to restart the counter every stage
metrics_filename (str) – filename to save metrics in the checkpoint folder. Must end with .json or .yml
-
Criterion¶
-
class
catalyst.core.callbacks.criterion.
CriterionCallback
(input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', prefix: str = 'loss', criterion_key: str = None, multiplier: float = 1.0, **metric_kwargs)[source]¶ Bases:
catalyst.core.callbacks.metrics._MetricCallback
Callback that measures loss with a specified criterion.
-
__init__
(input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', prefix: str = 'loss', criterion_key: str = None, multiplier: float = 1.0, **metric_kwargs)[source]¶ - Parameters
input_key (Union[str, List[str], Dict[str, str]]) – key/list/dict of keys that takes values from the input dictionary. If "__all__", the whole input will be passed to the criterion. If None, an empty dict will be passed to the criterion.
output_key (Union[str, List[str], Dict[str, str]]) – key/list/dict of keys that takes values from the output dictionary. If "__all__", the whole output will be passed to the criterion. If None, an empty dict will be passed to the criterion.
prefix (str) – prefix for metrics and output key for loss in the
state.batch_metrics
dictionary
criterion_key (str) – a key to take a criterion in case there are several of them and they are in a dictionary format.
multiplier (float) – scale factor for the output loss.
-
property
metric_fn
¶ Docs. Contribution is welcome.
- Type
@TODO
-
Early Stop¶
-
class
catalyst.core.callbacks.early_stop.
CheckRunCallback
(num_batch_steps: int = 2, num_epoch_steps: int = 2)[source]¶ Bases:
catalyst.core.callback.Callback
@TODO: Docs. Contribution is welcome.
-
class
catalyst.core.callbacks.early_stop.
EarlyStoppingCallback
(patience: int, metric: str = 'loss', minimize: bool = True, min_delta: float = 1e-06)[source]¶ Bases:
catalyst.core.callback.Callback
@TODO: Docs. Contribution is welcome.
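Since the docstring is still a @TODO, a usage sketch inferred from the signature above:
from catalyst.core.callbacks.early_stop import EarlyStoppingCallback

# stop training if "loss" has not improved by at least 1e-2
# for 5 consecutive epochs
callback = EarlyStoppingCallback(
    patience=5,
    metric="loss",
    minimize=True,
    min_delta=1e-2,
)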
Logging¶
-
class
catalyst.core.callbacks.logging.
ConsoleLogger
[source]¶ Bases:
catalyst.core.callback.Callback
Logger callback, translates
state.*_metrics
to console and text file.
-
class
catalyst.core.callbacks.logging.
TensorboardLogger
(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True)[source]¶ Bases:
catalyst.core.callback.Callback
Logger callback, translates
state.metric_manager
to Tensorboard.
-
__init__
(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True)[source]¶ - Parameters
metric_names (List[str]) – list of metric names to log; if None, logs everything
log_on_batch_end (bool) – logs per-batch metrics if set True
log_on_epoch_end (bool) – logs per-epoch metrics if set True
-
-
class
catalyst.core.callbacks.logging.
VerboseLogger
(always_show: List[str] = None, never_show: List[str] = None)[source]¶ Bases:
catalyst.core.callback.Callback
Logs the params to the console.
-
__init__
(always_show: List[str] = None, never_show: List[str] = None)[source]¶ - Parameters
always_show (List[str]) – list of metrics to always show; if None, the default is ["_timer/_fps"]; to remove always_show metrics, set it to an empty list []
never_show (List[str]) – list of metrics which will not be shown
-
Metrics¶
-
class
catalyst.core.callbacks.metrics.
_MetricCallback
(prefix: str, input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', multiplier: float = 1.0, **metrics_kwargs)[source]¶ Bases:
abc.ABC, catalyst.core.callback.Callback
@TODO: Docs. Contribution is welcome.
-
__init__
(prefix: str, input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', multiplier: float = 1.0, **metrics_kwargs)[source]¶ @TODO: Docs. Contribution is welcome.
-
abstract property
metric_fn
¶ Docs. Contribution is welcome.
- Type
@TODO
-
-
class
catalyst.core.callbacks.metrics.
MetricCallback
(prefix: str, metric_fn: Callable, input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', multiplier: float = 1.0, **metric_kwargs)[source]¶ Bases:
catalyst.core.callbacks.metrics._MetricCallback
A callback that returns a single metric on state.on_batch_end.
-
__init__
(prefix: str, metric_fn: Callable, input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', multiplier: float = 1.0, **metric_kwargs)[source]¶ @TODO: Docs. Contribution is welcome.
-
property
metric_fn
¶ Docs. Contribution is welcome.
- Type
@TODO
-
-
class
catalyst.core.callbacks.metrics.
MultiMetricCallback
(prefix: str, metric_fn: Callable, list_args: List, input_key: Union[str, List[str], Dict[str, str]] = 'targets', output_key: Union[str, List[str], Dict[str, str]] = 'logits', multiplier: float = 1.0, **metrics_kwargs)[source]¶ Bases:
catalyst.core.callbacks.metrics.MetricCallback
A callback that returns multiple metrics on state.on_batch_end.
-
class
catalyst.core.callbacks.metrics.
MetricAggregationCallback
(prefix: str, metrics: Union[str, List[str], Dict[str, float]] = None, mode: str = 'mean', multiplier: float = 1.0)[source]¶ Bases:
catalyst.core.callback.Callback
A callback to aggregate several metrics into one value.
-
__init__
(prefix: str, metrics: Union[str, List[str], Dict[str, float]] = None, mode: str = 'mean', multiplier: float = 1.0) → None[source]¶ - Parameters
prefix (str) – new key for the aggregated metric.
metrics (Union[str, List[str], Dict[str, float]]) – if not None, it aggregates only the values from the metrics by these keys; for weighted_sum aggregation it must be a Dict[str, float].
mode (str) – function for aggregation. Must be either sum, mean or weighted_sum.
multiplier (float) – scale factor for the aggregated metric.
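For example, a weighted sum of two losses into a single loss key (the metric names are illustrative):
from catalyst.core.callbacks.metrics import MetricAggregationCallback

# state.batch_metrics["loss"] = 0.7 * loss_bce + 0.3 * loss_dice
callback = MetricAggregationCallback(
    prefix="loss",
    metrics={"loss_bce": 0.7, "loss_dice": 0.3},
    mode="weighted_sum",
)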
-
-
class
catalyst.core.callbacks.metrics.
MetricManagerCallback
[source]¶ Bases:
catalyst.core.callback.Callback
Prepares metrics for logging, transferring values from PyTorch to numpy.
-
on_batch_end
(state: catalyst.core.state.State) → None[source]¶ Batch end hook.
- Parameters
state (State) – current state
-
on_batch_start
(state: catalyst.core.state.State) → None[source]¶ Batch start hook.
- Parameters
state (State) – current state
-
on_epoch_start
(state: catalyst.core.state.State) → None[source]¶ Epoch start hook.
- Parameters
state (State) – current state
-
Optimizer¶
-
class
catalyst.core.callbacks.optimizer.
OptimizerCallback
(loss_key: str = 'loss', optimizer_key: str = None, accumulation_steps: int = 1, grad_clip_params: Dict = None, decouple_weight_decay: bool = True)[source]¶ Bases:
catalyst.core.callback.Callback
Optimizer callback, abstraction over optimizer step.
-
__init__
(loss_key: str = 'loss', optimizer_key: str = None, accumulation_steps: int = 1, grad_clip_params: Dict = None, decouple_weight_decay: bool = True)[source]¶ - Parameters
grad_clip_params (dict) – params for gradient clipping
accumulation_steps (int) – number of steps before
model.zero_grad()
optimizer_key (str) – a key to take an optimizer in case there are several of them and they are in a dictionary format.
loss_key (str) – key to get loss from
state.loss
decouple_weight_decay (bool) – If True - decouple weight decay regularization.
save_model_grads (bool) – if True, State.model_grads will contain gradients calculated on backward propagation on the current batch
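For example, gradient accumulation over 4 batches (an effective 4x larger batch) is just configuration:
from catalyst.core.callbacks.optimizer import OptimizerCallback

# optimizer.step() runs once per 4 batches;
# gradients accumulate in between
callback = OptimizerCallback(
    loss_key="loss",
    accumulation_steps=4,
)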
-
static
grad_step
(*, optimizer: torch.optim.optimizer.Optimizer, optimizer_wds: List[float] = 0, grad_clip_fn: Callable = None) → None[source]¶ Makes a gradient step for a given optimizer.
- Parameters
optimizer (Optimizer) – the optimizer
optimizer_wds (List[float]) – list of weight decay parameters for each param group
grad_clip_fn (Callable) – function for gradient clipping
-
on_batch_end
(state: catalyst.core.state.State) → None[source]¶ On batch end event.
- Parameters
state (State) – current state
-
on_epoch_end
(state: catalyst.core.state.State) → None[source]¶ On epoch end event.
- Parameters
state (State) – current state
-
Scheduler¶
-
class
catalyst.core.callbacks.scheduler.
SchedulerCallback
(scheduler_key: str = None, mode: str = None, reduced_metric: str = None)[source]¶ Bases:
catalyst.core.callback.Callback
@TODO: Docs. Contribution is welcome.
-
__init__
(scheduler_key: str = None, mode: str = None, reduced_metric: str = None)[source]¶ @TODO: Docs. Contribution is welcome.
-
on_batch_end
(state: catalyst.core.state.State) → None[source]¶ Batch end hook.
- Parameters
state (State) – current state
-
on_epoch_end
(state: catalyst.core.state.State) → None[source]¶ Epoch end hook.
- Parameters
state (State) – current state
-
on_loader_start
(state: catalyst.core.state.State) → None[source]¶ Loader start hook.
- Parameters
state (State) – current state
-
on_stage_start
(state: catalyst.core.state.State) → None[source]¶ Stage start hook.
- Parameters
state (State) – current state
-
-
class
catalyst.core.callbacks.scheduler.
LRUpdater
(optimizer_key: str = None)[source]¶ Bases:
catalyst.core.callback.Callback
Basic class that all LR updaters inherit from.
-
__init__
(optimizer_key: str = None)[source]¶ - Parameters
optimizer_key (str) – which optimizer key to use for learning rate scheduling
-
on_batch_end
(state: catalyst.core.state.State) → None[source]¶ Batch end hook.
- Parameters
state (State) – current state
-
on_loader_start
(state: catalyst.core.state.State) → None[source]¶ Loader start hook.
- Parameters
state (State) – current state
-
Timer¶
-
class
catalyst.core.callbacks.timer.
TimerCallback
[source]¶ Bases:
catalyst.core.callback.Callback
Logs pipeline execution time.
-
on_batch_end
(state: catalyst.core.state.State) → None[source]¶ Batch end hook.
- Parameters
state (State) – current state
-
on_batch_start
(state: catalyst.core.state.State) → None[source]¶ Batch start hook.
- Parameters
state (State) – current state
-
Validation¶
-
class
catalyst.core.callbacks.validation.
ValidationManagerCallback
[source]¶ Bases:
catalyst.core.callback.Callback
A callback to aggregate state.valid_metrics from state.epoch_metrics.
Registry¶
-
catalyst.core.registry.
Callback
(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]¶ Adds factory to registry with its
__name__
attribute or provided name. Signature is flexible.
- Parameters
factory – Factory instance
factories – More instances
name – Provided name for first instance. Use only when passing a single instance.
named_factories – Factory and their names as kwargs
- Returns
First factory passed
- Return type
(Factory)
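A usage sketch (MyCallback is a placeholder; per the description, a single factory is registered under its __name__ or under a provided name):
from catalyst.core.callback import Callback, CallbackOrder
from catalyst.core.registry import Callback as CallbackRegistry


class MyCallback(Callback):
    def __init__(self):
        super().__init__(order=CallbackOrder.External)


# register under its __name__, i.e. "MyCallback";
# pass name="my_callback" to register under a custom name instead
CallbackRegistry(MyCallback)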
-
catalyst.core.registry.
Criterion
(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]¶ Adds factory to registry with its
__name__
attribute or provided name. Signature is flexible.
- Parameters
factory – Factory instance
factories – More instances
name – Provided name for first instance. Use only when passing a single instance.
named_factories – Factory and their names as kwargs
- Returns
First factory passed
- Return type
(Factory)
-
catalyst.core.registry.
Optimizer
(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]¶ Adds factory to registry with its
__name__
attribute or provided name. Signature is flexible.
- Parameters
factory – Factory instance
factories – More instances
name – Provided name for first instance. Use only when passing a single instance.
named_factories – Factory and their names as kwargs
- Returns
First factory passed
- Return type
(Factory)
-
catalyst.core.registry.
Scheduler
(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]¶ Adds factory to registry with its
__name__
attribute or provided name. Signature is flexible.
- Parameters
factory – Factory instance
factories – More instances
name – Provided name for first instance. Use only when passing a single instance.
named_factories – Factory and their names as kwargs
- Returns
First factory passed
- Return type
(Factory)
-
catalyst.core.registry.
Module
(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]¶ Adds factory to registry with its
__name__
attribute or provided name. Signature is flexible.
- Parameters
factory – Factory instance
factories – More instances
name – Provided name for first instance. Use only when passing a single instance.
named_factories – Factory and their names as kwargs
- Returns
First factory passed
- Return type
(Factory)
-
catalyst.core.registry.
Model
(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]¶ Adds factory to registry with its
__name__
attribute or provided name. Signature is flexible.
- Parameters
factory – Factory instance
factories – More instances
name – Provided name for first instance. Use only when passing a single instance.
named_factories – Factory and their names as kwargs
- Returns
First factory passed
- Return type
(Factory)
-
catalyst.core.registry.
Sampler
(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]¶ Adds factory to registry with its
__name__
attribute or provided name. Signature is flexible.
- Parameters
factory – Factory instance
factories – More instances
name – Provided name for first instance. Use only when passing a single instance.
named_factories – Factory and their names as kwargs
- Returns
First factory passed
- Return type
(Factory)
-
catalyst.core.registry.
Transform
(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]¶ Adds factory to registry with its
__name__
attribute or provided name. Signature is flexible.
- Parameters
factory – Factory instance
factories – More instances
name – Provided name for first instance. Use only when passing a single instance.
named_factories – Factory and their names as kwargs
- Returns
First factory passed
- Return type
(Factory)
Utils¶
-
catalyst.core.utils.callbacks.
sort_callbacks_by_order
(callbacks: Union[list, collections.OrderedDict]) → collections.OrderedDict[source]¶ Creates a sequence of callbacks and sorts them by order.
- Parameters
callbacks – either list of callbacks or ordered dict
- Returns
sequence of callbacks sorted by callback order
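Usage sketch with the CallbackOrder values from above:
from catalyst.core.callbacks.criterion import CriterionCallback
from catalyst.core.callbacks.optimizer import OptimizerCallback
from catalyst.core.utils.callbacks import sort_callbacks_by_order

# Metric-order callbacks (CriterionCallback, order 20) come out before
# Optimizer-order callbacks (order 60), regardless of the input order
callbacks = sort_callbacks_by_order([OptimizerCallback(), CriterionCallback()])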
-
catalyst.core.utils.callbacks.
filter_callbacks_by_node
(callbacks: Union[Dict, collections.OrderedDict]) → Union[Dict, collections.OrderedDict][source]¶ Filters callbacks based on running node. Deletes worker-only callbacks from
CallbackNode.Master
and master-only callbacks from
CallbackNode.Worker.
- Parameters
callbacks (Union[Dict, OrderedDict]) – callbacks
- Returns
filtered callbacks dictionary.
- Return type
Union[Dict, OrderedDict]