Experiments

Shortcuts

class catalyst.experiments.ConfigExperiment(config: Dict)[source]

Bases: catalyst.core.experiment.IExperiment

Experiment created from a configuration file.

__init__(config: Dict)[source]
Parameters

config – dictionary with parameters
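Example (a minimal sketch of such a config; the layout with model_params, stages, and per-stage *_params sections follows the usual Catalyst Config API conventions, and a name like SimpleNet is a hypothetical registry entry, not part of this reference):

from catalyst.experiments import ConfigExperiment

config = {
    "model_params": {"model": "SimpleNet"},  # hypothetical registered model
    "stages": {
        "data_params": {"batch_size": 64, "num_workers": 1},
        "stage1": {
            "stage_params": {"num_epochs": 10},
            "criterion_params": {"criterion": "CrossEntropyLoss"},
            "optimizer_params": {"optimizer": "Adam", "lr": 0.001},
        },
    },
}
experiment = ConfigExperiment(config=config)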

property distributed_params

Dict with the parameters for the distributed and FP16 setup.

get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Returns the callbacks for a given stage.

get_criterion(stage: str) → torch.nn.modules.module.Module[source]

Returns the criterion for a given stage.

get_loaders(stage: str, epoch: int = None) → OrderedDict[str, DataLoader][source]

Returns the loaders for a given stage.

get_model(stage: str)[source]

Returns the model for a given stage.

get_optimizer(stage: str, model: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]]) → Union[torch.optim.optimizer.Optimizer, Dict[str, torch.optim.optimizer.Optimizer]][source]

Returns the optimizer for a given stage.

Parameters
  • stage – stage name

  • model (Union[Model, Dict[str, Model]]) – model or a dict of models

Returns

optimizer for selected stage
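Example (continuing from the experiment above; the multi-model variant matches the Union signature, and the generator/discriminator names are hypothetical):

model = experiment.get_model(stage="stage1")
optimizer = experiment.get_optimizer(stage="stage1", model=model)
# with several models, pass a dict and get a dict of optimizers back:
# optimizer = experiment.get_optimizer(
#     stage="stage1",
#     model={"generator": generator, "discriminator": discriminator},
# )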

get_scheduler(stage: str, optimizer: Union[torch.optim.optimizer.Optimizer, Dict[str, torch.optim.optimizer.Optimizer]]) → Union[torch.optim.lr_scheduler._LRScheduler, Dict[str, torch.optim.lr_scheduler._LRScheduler]][source]

Returns the scheduler for a given stage.

get_stage_params(stage: str) → Mapping[str, Any][source]

Returns the state parameters for a given stage.

get_transforms(stage: str = None, dataset: str = None) → Callable[source]

Returns the transform for a given stage and dataset.

Parameters
  • stage – stage name

  • dataset – dataset name (e.g. “train”, “valid”); used only if the value of `_key_value` is True

Returns

transform function

Return type

Callable
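Example (continuing from the experiment above; the stage and dataset names are hypothetical, and per-dataset transforms are assumed to have been configured in transform_params):

transform = experiment.get_transforms(stage="stage1", dataset="train")
# the returned callable is applied to individual dataset samples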

property hparams

Returns hyperparameters

property initial_seed

Experiment’s initial seed value.

property logdir

Path to the directory where the experiment logs are stored.

property stages

Experiment’s stage names.

property trial

Returns the hyperparameter trial for the current experiment. Could be useful for Optuna/HyperOpt/Ray.tune hyperparameter optimizers.

Returns

trial

Example:

>>> experiment.trial
optuna.trial._trial.Trial  # Optuna variant
class catalyst.experiments.Experiment(model: torch.nn.modules.module.Module, datasets: OrderedDict[str, Union[Dataset, Dict, Any]] = None, loaders: OrderedDict[str, DataLoader] = None, callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, trial: Any = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, check_time: bool = False, check_run: bool = False, overfit: bool = False, stage_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, initial_seed: int = 42)[source]

Bases: catalyst.core.experiment.IExperiment

One-stage experiment; you can use it to declare experiments in code.

__init__(model: torch.nn.modules.module.Module, datasets: OrderedDict[str, Union[Dataset, Dict, Any]] = None, loaders: OrderedDict[str, DataLoader] = None, callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, trial: Any = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, check_time: bool = False, check_run: bool = False, overfit: bool = False, stage_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, initial_seed: int = 42)[source]
Parameters
  • model – model

  • datasets (OrderedDict[str, Union[Dataset, Dict, Any]]) – dictionary with one or several torch.utils.data.Dataset for training, validation or inference; used for automatic loader creation and the preferred way to set up distributed training

  • loaders (OrderedDict[str, DataLoader]) – dictionary with one or several torch.utils.data.DataLoader for training, validation or inference

  • callbacks (Union[List[Callback], OrderedDict[str, Callback]]) – list or dictionary with Catalyst callbacks

  • logdir – path to output directory

  • stage – current stage

  • criterion – criterion function

  • optimizer – optimizer

  • scheduler – scheduler

  • trial – hyperparameters optimization trial. Used for integrations with Optuna/HyperOpt/Ray.tune.

  • num_epochs – number of experiment’s epochs

  • valid_loader – loader name used to calculate the metrics and save the checkpoints. For example, you can pass train and then the metrics will be taken from the train loader.

  • main_metric – the name of the metric by which the checkpoints will be selected.

  • minimize_metric – flag to indicate whether the main_metric should be minimized.

  • verbose – if True, displays the training status in the console.

  • check_time – if True, computes the execution time of the training process and displays it in the console.

  • check_run – if True, runs only 3 batches per loader and 3 epochs per stage to check pipeline correctness.

  • overfit – if True, takes only one batch per loader to overfit the model; for advanced usage, check BatchOverfitCallback.

  • stage_kwargs – additional stage params

  • checkpoint_data – additional data to save in the checkpoint, for example: class_names, date_of_training, etc.

  • distributed_params – dictionary with the parameters for the distributed and FP16 setup

  • initial_seed – experiment’s initial seed value
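Example (a minimal sketch with synthetic tensor data, built directly from the signature above; the experiment is normally handed to a runner for execution, which is out of scope here):

from collections import OrderedDict

import torch
from torch.utils.data import DataLoader, TensorDataset

from catalyst.experiments import Experiment

# synthetic 16-feature binary classification data
train_ds = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
valid_ds = TensorDataset(torch.randn(32, 16), torch.randint(0, 2, (32,)))
loaders = OrderedDict(
    train=DataLoader(train_ds, batch_size=8),
    valid=DataLoader(valid_ds, batch_size=8),
)

model = torch.nn.Linear(16, 2)
experiment = Experiment(
    model=model,
    loaders=loaders,
    criterion=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    logdir="./logs",
    num_epochs=3,
    valid_loader="valid",
    main_metric="loss",
    minimize_metric=True,
)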

property distributed_params

Dict with the parameters for the distributed and FP16 setup.

get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Returns the callbacks for a given stage.

get_criterion(stage: str) → torch.nn.modules.module.Module[source]

Returns the criterion for a given stage.

get_loaders(stage: str, epoch: int = None) → OrderedDict[str, DataLoader][source]

Returns the loaders for a given stage.

get_model(stage: str) → torch.nn.modules.module.Module[source]

Returns the model for a given stage.

get_optimizer(stage: str, model: torch.nn.modules.module.Module) → torch.optim.optimizer.Optimizer[source]

Returns the optimizer for a given stage.

get_scheduler(stage: str, optimizer=None) → torch.optim.lr_scheduler._LRScheduler[source]

Returns the scheduler for a given stage.

get_stage_params(stage: str) → Mapping[str, Any][source]

Returns the state parameters for a given stage.

property hparams

Returns hyperparameters

property initial_seed

Experiment’s initial seed value.

property logdir

Path to the directory where the experiment logs are stored.

property stages

Experiment’s stage names (array with one value).

property trial

Returns the hyperparameter trial for the current experiment. Could be useful for Optuna/HyperOpt/Ray.tune hyperparameter optimizers.

Returns

trial

Example:

>>> experiment.trial
optuna.trial._trial.Trial  # Optuna variant
class catalyst.experiments.AutoCallbackExperiment(model: torch.nn.modules.module.Module, datasets: OrderedDict[str, Union[Dataset, Dict, Any]] = None, loaders: OrderedDict[str, DataLoader] = None, callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, trial: Any = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, check_time: bool = False, check_run: bool = False, overfit: bool = False, stage_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, initial_seed: int = 42)[source]

Bases: catalyst.experiments.experiment.Experiment

Auto-optimized experiment.

The main difference from Experiment is that it adds several callbacks by default if you have not provided them.

The default callbacks are:
CriterionCallback:

measures loss with specified criterion.

OptimizerCallback:

abstraction over optimizer step.

SchedulerCallback:

does lr_scheduler.step; added only if you provided a scheduler to your experiment.

CheckpointCallback:

saves model and optimizer state each epoch; use it to save/restore your model/criterion/optimizer/metrics.

ConsoleLogger:

writes runner.*_metrics to the console and a text file.

TensorboardLogger:

writes runner.*_metrics to TensorBoard.

RaiseExceptionCallback:

raises an exception if needed.
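A sketch of the effect (the constructor signature matches Experiment; model and loaders are assumed to be set up as in the Experiment example above):

import torch

from catalyst.experiments import AutoCallbackExperiment

experiment = AutoCallbackExperiment(
    model=model,
    loaders=loaders,
    criterion=torch.nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters()),
    logdir="./logs",
)
# no callbacks were passed, yet get_callbacks injects the defaults listed above
callbacks = experiment.get_callbacks(stage="train")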

get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Override of the BaseExperiment.get_callbacks method. Adds several callbacks by default if they are missing.

Parameters

stage – name of the stage. It should start with infer if you don’t need the default callbacks, as they are required only for training stages.

Returns

Ordered dictionary of callbacks for the experiment.

Return type

OrderedDict[str, Callback]
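Example (stage names are hypothetical; per the note above, an infer prefix skips the defaults):

train_callbacks = experiment.get_callbacks(stage="train")  # defaults are added
infer_callbacks = experiment.get_callbacks(stage="infer")  # defaults are skipped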

AutoCallbackExperiment

class catalyst.experiments.auto.AutoCallbackExperiment(model: torch.nn.modules.module.Module, datasets: OrderedDict[str, Union[Dataset, Dict, Any]] = None, loaders: OrderedDict[str, DataLoader] = None, callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, trial: Any = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, check_time: bool = False, check_run: bool = False, overfit: bool = False, stage_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, initial_seed: int = 42)[source]

Bases: catalyst.experiments.experiment.Experiment

Auto-optimized experiment.

The main difference from Experiment is that it adds several callbacks by default if you have not provided them.

The default callbacks are:
CriterionCallback:

measures loss with specified criterion.

OptimizerCallback:

abstraction over optimizer step.

SchedulerCallback:

does lr_scheduler.step; added only if you provided a scheduler to your experiment.

CheckpointCallback:

saves model and optimizer state each epoch; use it to save/restore your model/criterion/optimizer/metrics.

ConsoleLogger:

writes runner.*_metrics to the console and a text file.

TensorboardLogger:

writes runner.*_metrics to TensorBoard.

RaiseExceptionCallback:

raises an exception if needed.

get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Override of the BaseExperiment.get_callbacks method. Adds several callbacks by default if they are missing.

Parameters

stage – name of the stage. It should start with infer if you don’t need the default callbacks, as they are required only for training stages.

Returns

Ordered dictionary of callbacks for the experiment.

Return type

OrderedDict[str, Callback]

ConfigExperiment

class catalyst.experiments.config.ConfigExperiment(config: Dict)[source]

Bases: catalyst.core.experiment.IExperiment

Experiment created from a configuration file.

STAGE_KEYWORDS = ['criterion_params', 'optimizer_params', 'scheduler_params', 'data_params', 'transform_params', 'stage_params', 'callbacks_params']
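
These keywords mark the config sections that can be set both at the shared stages level and per stage. A sketch of the assumed layout, where stage-level values are taken to override the shared ones (SimpleNet is again a hypothetical registry entry):

config = {
    "model_params": {"model": "SimpleNet"},  # hypothetical registered model
    "stages": {
        # shared across stages
        "optimizer_params": {"optimizer": "Adam", "lr": 0.001},
        "stage1": {
            "stage_params": {"num_epochs": 5},
        },
        "stage2": {
            # assumed to override the shared optimizer settings for this stage
            "optimizer_params": {"optimizer": "Adam", "lr": 0.0001},
        },
    },
}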
__init__(config: Dict)[source]
Parameters

config – dictionary with parameters

property distributed_params

Dict with the parameters for the distributed and FP16 setup.

get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Returns the callbacks for a given stage.

get_criterion(stage: str) → torch.nn.modules.module.Module[source]

Returns the criterion for a given stage.

get_loaders(stage: str, epoch: int = None) → OrderedDict[str, DataLoader][source]

Returns the loaders for a given stage.

get_model(stage: str)[source]

Returns the model for a given stage.

get_optimizer(stage: str, model: Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module]]) → Union[torch.optim.optimizer.Optimizer, Dict[str, torch.optim.optimizer.Optimizer]][source]

Returns the optimizer for a given stage.

Parameters
  • stage – stage name

  • model (Union[Model, Dict[str, Model]]) – model or a dict of models

Returns

optimizer for selected stage

get_scheduler(stage: str, optimizer: Union[torch.optim.optimizer.Optimizer, Dict[str, torch.optim.optimizer.Optimizer]]) → Union[torch.optim.lr_scheduler._LRScheduler, Dict[str, torch.optim.lr_scheduler._LRScheduler]][source]

Returns the scheduler for a given stage.

get_stage_params(stage: str) → Mapping[str, Any][source]

Returns the state parameters for a given stage.

get_transforms(stage: str = None, dataset: str = None) → Callable[source]

Returns the transform for a given stage and dataset.

Parameters
  • stage – stage name

  • dataset – dataset name (e.g. “train”, “valid”); used only if the value of `_key_value` is True

Returns

transform function

Return type

Callable

property hparams

Returns hyperparameters

property initial_seed

Experiment’s initial seed value.

property logdir

Path to the directory where the experiment logs are stored.

property stages

Experiment’s stage names.

property trial

Returns the hyperparameter trial for the current experiment. Could be useful for Optuna/HyperOpt/Ray.tune hyperparameter optimizers.

Returns

trial

Example:

>>> experiment.trial
optuna.trial._trial.Trial  # Optuna variant

Experiment

class catalyst.experiments.experiment.Experiment(model: torch.nn.modules.module.Module, datasets: OrderedDict[str, Union[Dataset, Dict, Any]] = None, loaders: OrderedDict[str, DataLoader] = None, callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, trial: Any = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, check_time: bool = False, check_run: bool = False, overfit: bool = False, stage_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, initial_seed: int = 42)[source]

Bases: catalyst.core.experiment.IExperiment

One-stage experiment; you can use it to declare experiments in code.

__init__(model: torch.nn.modules.module.Module, datasets: OrderedDict[str, Union[Dataset, Dict, Any]] = None, loaders: OrderedDict[str, DataLoader] = None, callbacks: Union[OrderedDict[str, Callback], List[Callback]] = None, logdir: str = None, stage: str = 'train', criterion: torch.nn.modules.module.Module = None, optimizer: torch.optim.optimizer.Optimizer = None, scheduler: torch.optim.lr_scheduler._LRScheduler = None, trial: Any = None, num_epochs: int = 1, valid_loader: str = 'valid', main_metric: str = 'loss', minimize_metric: bool = True, verbose: bool = False, check_time: bool = False, check_run: bool = False, overfit: bool = False, stage_kwargs: Dict = None, checkpoint_data: Dict = None, distributed_params: Dict = None, initial_seed: int = 42)[source]
Parameters
  • model – model

  • datasets (OrderedDict[str, Union[Dataset, Dict, Any]]) – dictionary with one or several torch.utils.data.Dataset for training, validation or inference; used for automatic loader creation and the preferred way to set up distributed training

  • loaders (OrderedDict[str, DataLoader]) – dictionary with one or several torch.utils.data.DataLoader for training, validation or inference

  • callbacks (Union[List[Callback], OrderedDict[str, Callback]]) – list or dictionary with Catalyst callbacks

  • logdir – path to output directory

  • stage – current stage

  • criterion – criterion function

  • optimizer – optimizer

  • scheduler – scheduler

  • trial – hyperparameters optimization trial. Used for integrations with Optuna/HyperOpt/Ray.tune.

  • num_epochs – number of experiment’s epochs

  • valid_loader – loader name used to calculate the metrics and save the checkpoints. For example, you can pass train and then the metrics will be taken from the train loader.

  • main_metric – the name of the metric by which the checkpoints will be selected.

  • minimize_metric – flag to indicate whether the main_metric should be minimized.

  • verbose – if True, displays the training status in the console.

  • check_time – if True, computes the execution time of the training process and displays it in the console.

  • check_run – if True, runs only 3 batches per loader and 3 epochs per stage to check pipeline correctness.

  • overfit – if True, takes only one batch per loader to overfit the model; for advanced usage, check BatchOverfitCallback.

  • stage_kwargs – additional stage params

  • checkpoint_data – additional data to save in the checkpoint, for example: class_names, date_of_training, etc.

  • distributed_params – dictionary with the parameters for the distributed and FP16 setup

  • initial_seed – experiment’s initial seed value

property distributed_params

Dict with the parameters for the distributed and FP16 setup.

get_callbacks(stage: str) → OrderedDict[str, Callback][source]

Returns the callbacks for a given stage.

get_criterion(stage: str) → torch.nn.modules.module.Module[source]

Returns the criterion for a given stage.

get_loaders(stage: str, epoch: int = None) → OrderedDict[str, DataLoader][source]

Returns the loaders for a given stage.

get_model(stage: str) → torch.nn.modules.module.Module[source]

Returns the model for a given stage.

get_optimizer(stage: str, model: torch.nn.modules.module.Module) → torch.optim.optimizer.Optimizer[source]

Returns the optimizer for a given stage.

get_scheduler(stage: str, optimizer=None) → torch.optim.lr_scheduler._LRScheduler[source]

Returns the scheduler for a given stage.

get_stage_params(stage: str) → Mapping[str, Any][source]

Returns the state parameters for a given stage.
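A sketch of how a runner might consume an experiment through the getters above, assuming it was constructed as in the Shortcuts example and uses the default single stage named train:

stage = "train"  # the default stage name from the constructor
model = experiment.get_model(stage=stage)
criterion = experiment.get_criterion(stage=stage)
optimizer = experiment.get_optimizer(stage=stage, model=model)
scheduler = experiment.get_scheduler(stage=stage, optimizer=optimizer)
loaders = experiment.get_loaders(stage=stage)
callbacks = experiment.get_callbacks(stage=stage)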

property hparams

Returns hyperparameters

property initial_seed

Experiment’s initial seed value.

property logdir

Path to the directory where the experiment logs are stored.

property stages

Experiment’s stage names (array with one value).

property trial

Returns the hyperparameter trial for the current experiment. Could be useful for Optuna/HyperOpt/Ray.tune hyperparameter optimizers.

Returns

trial

Example:

>>> experiment.trial
optuna.trial._trial.Trial  # Optuna variant