Shortcuts

Contrib

Data

Transforms

This subpackage was borrowed from torchvision (https://github.com/pytorch/vision).

class catalyst.contrib.data.transforms.Compose(transforms)[source]

Bases: object

Composes several transforms together.

__init__(transforms)[source]
Parameters

transforms (List) – list of transforms to compose.

Example

>>> Compose([ToTensor(), Normalize(mean=(0.5,), std=(0.5,))])
class catalyst.contrib.data.transforms.Normalize(mean, std, inplace=False)[source]

Bases: object

Normalize a tensor image with mean and standard deviation.

Given mean: (mean[1], ..., mean[n]) and std: (std[1], ..., std[n]) for n channels, this transform will normalize each channel of the input torch.*Tensor, i.e. output[channel] = (input[channel] - mean[channel]) / std[channel].

Note

This transform acts out of place, i.e., it does not mutate the input tensor.

__init__(mean, std, inplace=False)[source]
Parameters
  • mean (sequence) – Sequence of means for each channel.

  • std (sequence) – Sequence of standard deviations for each channel.

  • inplace (bool, optional) – whether to make this operation in-place.
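
A minimal usage sketch (the mean/std values are illustrative; the transform is applied by calling it on a tensor):

>>> import torch
>>> tensor = torch.rand(3, 32, 32)  # C x H x W
>>> transform = Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
>>> normalized = transform(tensor)  # same shape, each channel shifted and scaled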

class catalyst.contrib.data.transforms.ToTensor[source]

Bases: object

Convert a numpy.ndarray to tensor. Converts a numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] if the numpy.ndarray has dtype = np.uint8. In all other cases, tensors are returned without scaling.
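
A minimal sketch of the uint8 scaling behavior (shapes are illustrative):

>>> import numpy as np
>>> img = np.zeros((32, 32, 3), dtype=np.uint8)  # H x W x C, values in [0, 255]
>>> tensor = ToTensor()(img)  # C x H x W float tensor, values in [0.0, 1.0]
>>> tuple(tensor.shape)
(3, 32, 32)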

catalyst.contrib.data.transforms.normalize(tensor, mean, std, inplace=False)[source]

Normalize a tensor image with mean and standard deviation.

Note

This transform acts out of place by default, i.e., it does not mutate the input tensor.

Parameters
  • tensor (Tensor) – Tensor image of size (C, H, W) to be normalized.

  • mean (sequence) – Sequence of means for each channel.

  • std (sequence) – Sequence of standard deviations for each channel.

  • inplace (bool, optional) – whether to make this operation in-place.

Returns

Normalized Tensor image.

Return type

Tensor
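
A short sketch of the functional form (values illustrative):

>>> import torch
>>> tensor = torch.rand(3, 32, 32)
>>> out = normalize(tensor, mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
>>> out is tensor  # out-of-place by default
False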

catalyst.contrib.data.transforms.to_tensor(pic: numpy.ndarray) → torch.Tensor[source]

Convert numpy.ndarray to tensor.

Parameters

pic (numpy.ndarray) – Image to be converted to tensor.

Returns

Converted image.

Return type

Tensor

Computer Vision

Mixins

class catalyst.contrib.data.cv.mixins.blur.BlurMixin(input_key: str = 'image', output_key: str = 'blur_factor', blur_min: int = 3, blur_max: int = 9, blur: List[str] = None)[source]

Bases: object

Calculates blur factor for augmented image.

__init__(input_key: str = 'image', output_key: str = 'blur_factor', blur_min: int = 3, blur_max: int = 9, blur: List[str] = None)[source]
Parameters
  • input_key (str) – input key to use from annotation dict

  • output_key (str) – output key to use to store the result

class catalyst.contrib.data.cv.mixins.flare.FlareMixin(input_key: str = 'image', output_key: str = 'flare_factor', sunflare_params: Dict = None)[source]

Bases: object

Calculates flare factor for augmented image.

__init__(input_key: str = 'image', output_key: str = 'flare_factor', sunflare_params: Dict = None)[source]
Parameters
  • input_key (str) – input key to use from annotation dict

  • output_key (str) – output key to use to store the result

  • sunflare_params (dict) – params to init albumentations.RandomSunFlare

class catalyst.contrib.data.cv.mixins.rotate.RotateMixin(input_key: str = 'image', output_key: str = 'rotation_factor', targets_key: str = None, rotate_probability: float = 1.0, hflip_probability: float = 0.5, one_hot_classes: int = None)[source]

Bases: object

Calculates rotation factor for augmented image.

__init__(input_key: str = 'image', output_key: str = 'rotation_factor', targets_key: str = None, rotate_probability: float = 1.0, hflip_probability: float = 0.5, one_hot_classes: int = None)[source]
Parameters
  • input_key (str) – input key to use from annotation dict

  • output_key (str) – output key to use to store the result

Transforms

class catalyst.contrib.data.cv.transforms.tensor.TensorToImage(denormalize: bool = False, move_channels_dim: bool = True, always_apply: bool = False, p: float = 1.0)[source]

Bases: albumentations.core.transforms_interface.ImageOnlyTransform

Casts torch.tensor to numpy.array.

__init__(denormalize: bool = False, move_channels_dim: bool = True, always_apply: bool = False, p: float = 1.0)[source]
Parameters
  • denormalize (bool) – if True, multiply image(s) by ImageNet std and add ImageNet mean

  • move_channels_dim (bool) – if True, convert [B]xCxHxW tensor to [B]xHxWxC format

  • always_apply (bool) – whether to always apply this transform

  • p (float) – probability of applying this transform

apply(img: torch.Tensor, **params) → numpy.ndarray[source]

Apply the transform to the image.
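
A minimal sketch, assuming the usual albumentations calling convention (keyword targets, result dict):

import torch
from catalyst.contrib.data.cv.transforms.tensor import TensorToImage

tensor = torch.rand(3, 32, 32)                    # C x H x W
transform = TensorToImage(move_channels_dim=True)
image = transform(image=tensor)["image"]          # numpy.ndarray, H x W x C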

class catalyst.contrib.data.cv.transforms.tensor.ToTensor(move_channels_dim: bool = True, always_apply: bool = False, p: float = 1.0)[source]

Bases: albumentations.pytorch.transforms.ToTensorV2

Casts numpy.array to torch.tensor.

__init__(move_channels_dim: bool = True, always_apply: bool = False, p: float = 1.0)[source]
Parameters
  • move_channels_dim (bool) – if False, casts the numpy array to torch.tensor but does not move the channels dim

  • always_apply (bool) – whether to always apply this transform

  • p (float) – probability of applying this transform

apply(img: numpy.ndarray, **params) → torch.Tensor[source]

Apply the transform to the image.

apply_to_mask(mask: numpy.ndarray, **params) → torch.Tensor[source]

Apply the transform to the mask.

get_transform_init_args_names() → tuple[source]

@TODO: Docs. Contribution is welcome.

Reader

class catalyst.contrib.data.cv.reader.ImageReader(input_key: str, output_key: str, rootpath: str = None, grayscale: bool = False)[source]

Bases: catalyst.data.reader.ReaderSpec

Image reader abstraction. Reads images from a csv dataset.

__init__(input_key: str, output_key: str, rootpath: str = None, grayscale: bool = False)[source]
Parameters
  • input_key (str) – key to use from annotation dict

  • output_key (str) – key to use to store the result

  • rootpath (str) – path to the images dataset root directory (so you can use relative paths in annotations)

  • grayscale (bool) – flag if you need to work only with grayscale images
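
A minimal sketch; the annotation key and paths below are hypothetical, assuming the reader is called on a single annotation dict:

from catalyst.contrib.data.cv.reader import ImageReader

reader = ImageReader(
    input_key="image_path",   # hypothetical csv column with relative paths
    output_key="image",
    rootpath="data/images",   # hypothetical dataset root
)
sample = reader({"image_path": "cat.jpg"})  # -> {"image": <numpy.ndarray>}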

class catalyst.contrib.data.cv.reader.MaskReader(input_key: str, output_key: str, rootpath: str = None, clip_range: Tuple[Union[int, float], Union[int, float]] = (0, 1))[source]

Bases: catalyst.data.reader.ReaderSpec

Mask reader abstraction. Reads masks from a csv dataset.

__init__(input_key: str, output_key: str, rootpath: str = None, clip_range: Tuple[Union[int, float], Union[int, float]] = (0, 1))[source]
Parameters
  • input_key (str) – key to use from annotation dict

  • output_key (str) – key to use to store the result

  • rootpath (str) – path to the images dataset root directory (so you can use relative paths in annotations)

  • clip_range (Tuple[int or float, int or float]) – lower and upper interval edges; image values outside the interval are clipped to the interval edges

Datasets

MNIST

class catalyst.contrib.datasets.mnist.MNIST(root, train=True, transform=None, target_transform=None, download=False)[source]

Bases: torch.utils.data.dataset.Dataset

MNIST Dataset.

__init__(root, train=True, transform=None, target_transform=None, download=False)[source]
Parameters
  • root (string) – Root directory of dataset where MNIST/processed/training.pt and MNIST/processed/test.pt exist.

  • train (bool, optional) – If True, creates dataset from training.pt, otherwise from test.pt.

  • download (bool, optional) – If true, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • transform (callable, optional) – A function/transform that takes in an image and returns a transformed version.

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.
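
A minimal usage sketch:

from catalyst.contrib.datasets.mnist import MNIST

train_dataset = MNIST(root="./data", train=True, download=True)
image, target = train_dataset[0]  # a single sample and its class index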

property class_to_idx

@TODO: Docs. Contribution is welcome.

classes = ['0 - zero', '1 - one', '2 - two', '3 - three', '4 - four', '5 - five', '6 - six', '7 - seven', '8 - eight', '9 - nine']

download()[source]

Download the MNIST data if it doesn’t exist in processed_folder.

extra_repr()[source]

@TODO: Docs. Contribution is welcome.

property processed_folder

@TODO: Docs. Contribution is welcome.

property raw_folder

@TODO: Docs. Contribution is welcome.

resources = [('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', 'f68b3c2dcbeaaa9fbdd348bbdeb94873'), ('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', 'd53e105ee54ea40749a09fcbcd1e9432'), ('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', '9fb629c4189551a2d022fa330f9573f3'), ('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', 'ec29112dd5afa0611ce80d1b7f02629c')]

test_file = 'test.pt'

training_file = 'training.pt'

catalyst.contrib.datasets.mnist.get_int(b)[source]

@TODO: Docs. Contribution is welcome.

catalyst.contrib.datasets.mnist.open_maybe_compressed_file(path)[source]

Return a file object that possibly decompresses ‘path’ on the fly. Decompression occurs when argument path is a string and ends with ‘.gz’ or ‘.xz’.

catalyst.contrib.datasets.mnist.read_image_file(path)[source]

@TODO: Docs. Contribution is welcome.

catalyst.contrib.datasets.mnist.read_label_file(path)[source]

@TODO: Docs. Contribution is welcome.

catalyst.contrib.datasets.mnist.read_sn3_pascalvincent_tensor(path, strict=True)[source]

Read an SN3 file in “Pascal Vincent” format. The argument may be a filename, compressed filename, or file object.

DL

Callbacks

AlchemyLogger

class catalyst.contrib.dl.callbacks.alchemy_logger.AlchemyLogger(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True, **logging_params)[source]

Bases: catalyst.core.callback.Callback

Logger callback, translates runner.*_metrics to Alchemy. Read about Alchemy here: https://alchemy.host

Example

from catalyst.dl import SupervisedRunner, AlchemyLogger

runner = SupervisedRunner()

runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    logdir=logdir,
    num_epochs=num_epochs,
    verbose=True,
    callbacks={
        "logger": AlchemyLogger(
            token="...", # your Alchemy token
            project="your_project_name",
            experiment="your_experiment_name",
            group="your_experiment_group_name",
        )
    }
)

Powered by Catalyst.Ecosystem.

__init__(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True, **logging_params)[source]
Parameters
  • metric_names (List[str]) – list of metric names to log; if None, logs everything

  • log_on_batch_end (bool) – logs per-batch metrics if set to True

  • log_on_epoch_end (bool) – logs per-epoch metrics if set to True

on_batch_end(runner: catalyst.core.runner.IRunner)[source]

Translate batch metrics to Alchemy.

on_epoch_end(runner: catalyst.core.runner.IRunner)[source]

Translate epoch metrics to Alchemy.

on_loader_end(runner: catalyst.core.runner.IRunner)[source]

Translate loader metrics to Alchemy.

CutmixCallback

class catalyst.contrib.dl.callbacks.cutmix_callback.CutmixCallback(fields: List[str] = ('features', ), alpha=1.0, on_train_only=True, **kwargs)[source]

Bases: catalyst.core.callbacks.criterion.CriterionCallback

Callback to do Cutmix augmentation, which has been proposed in CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features.

Warning

catalyst.contrib.dl.callbacks.CutmixCallback inherits from catalyst.dl.CriterionCallback and performs its role, so you should not use the two together.

__init__(fields: List[str] = ('features', ), alpha=1.0, on_train_only=True, **kwargs)[source]
Parameters
  • fields (List[str]) – list of features which must be affected.

  • alpha (float) – beta distribution parameter.

  • on_train_only (bool) – apply the augmentation to the train loaders only, so that a standard output/metric can be used for validation.
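
A minimal sketch (model, criterion, optimizer, loaders and num_epochs are assumed to be defined); since the callback replaces catalyst.dl.CriterionCallback, no separate criterion callback is added:

from catalyst.dl import SupervisedRunner
from catalyst.contrib.dl.callbacks import CutmixCallback

runner = SupervisedRunner()
runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    callbacks=[
        CutmixCallback(fields=["features"], alpha=1.0, on_train_only=True),
    ],
    num_epochs=num_epochs,
)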

on_batch_start(runner: catalyst.core.runner.IRunner) → None[source]

Mixes data according to Cutmix algorithm.

Parameters

runner (IRunner) – current runner

on_loader_start(runner: catalyst.core.runner.IRunner) → None[source]

Checks whether the callback should be applied to the current loader.

Parameters

runner (IRunner) – current runner

GradNormLogger

class catalyst.contrib.dl.callbacks.gradnorm_logger.GradNormLogger(norm_type: int = 2, accumulation_steps: int = 1)[source]

Bases: catalyst.core.callback.Callback

Callback for logging model gradients.

__init__(norm_type: int = 2, accumulation_steps: int = 1)[source]
Parameters
  • norm_type (int) – norm type used to calculate norm of gradients. If OptimizerCallback provides non-default argument grad_clip_params with custom norm type, then corresponding norm type should be used in this class.

  • accumulation_steps (int) – number of steps before model.zero_grad(). Should be the same as in OptimizerCallback.
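
A minimal configuration sketch (the runner setup is assumed to be defined elsewhere):

from catalyst.contrib.dl.callbacks.gradnorm_logger import GradNormLogger

callbacks = [
    GradNormLogger(norm_type=2, accumulation_steps=1),
]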

static grad_norm(*, model: torch.nn.modules.module.Module, prefix: str, norm_type: int) → Dict[source]

Computes gradient norms for a given model.

Parameters
  • model (Model) – model whose gradients are to be saved.

  • prefix (str) – prefix for keys in resulting dictionary.

  • norm_type (int) – norm type of gradient norm.

Returns

dictionary in which gradient norms are stored.

Return type

Dict

on_batch_end(runner: catalyst.core.runner.IRunner) → None[source]

On batch end event.

Parameters

runner (IRunner) – current runner

KNNMetricCallback

class catalyst.contrib.dl.callbacks.knn_metric.KNNMetricCallback(input_key: str = 'logits', output_key: str = 'targets', prefix: str = 'knn', num_classes: int = 2, class_names: dict = None, cv_loader_names: Dict[str, List[str]] = None, metric_fn: str = 'f1-score', knn_metric: str = 'euclidean', num_neighbors: int = 5)[source]

Bases: catalyst.core.callback.Callback

A callback that returns a single metric on runner.on_loader_end.

__init__(input_key: str = 'logits', output_key: str = 'targets', prefix: str = 'knn', num_classes: int = 2, class_names: dict = None, cv_loader_names: Dict[str, List[str]] = None, metric_fn: str = 'f1-score', knn_metric: str = 'euclidean', num_neighbors: int = 5)[source]

Returns the metric value calculated using the kNN algorithm.

Parameters
  • input_key – input key to get features.

  • output_key – output key to get targets.

  • prefix – key to store in logs.

  • num_classes – Number of classes; must be > 1.

  • class_names – dict of indexes and class names.

  • cv_loader_names – dict with keys and values of loader_names for which cross-validation should be calculated, for example {"train": ["valid", "test"]}.

  • metric_fn – one of accuracy, precision, recall, f1-score; default is f1-score.

  • knn_metric – metric to use in sklearn.neighbors.NearestNeighbors (see its metric parameter).

  • num_neighbors – number of neighbors, default is 5.
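
A minimal configuration sketch (values illustrative):

from catalyst.contrib.dl.callbacks.knn_metric import KNNMetricCallback

callbacks = [
    KNNMetricCallback(
        input_key="logits",    # features to index
        output_key="targets",
        num_classes=10,
        metric_fn="f1-score",
        num_neighbors=5,
    ),
]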

on_batch_end(runner: catalyst.core.runner.IRunner) → None[source]

Batch end hook.

Parameters

runner (IRunner) – current runner

on_epoch_end(runner: catalyst.core.runner.IRunner) → None[source]

Epoch end hook.

Parameters

runner (IRunner) – current runner

on_loader_end(runner: catalyst.core.runner.IRunner) → None[source]

Loader end hook.

Parameters

runner (IRunner) – current runner

InferMaskCallback

class catalyst.contrib.dl.callbacks.mask_inference.InferMaskCallback(out_dir=None, out_prefix=None, input_key=None, output_key=None, name_key=None, mean=None, std=None, threshold: float = 0.5, mask_strength: float = 0.5, mask_type: str = 'soft')[source]

Bases: catalyst.core.callback.Callback

@TODO: Docs. Contribution is welcome.

__init__(out_dir=None, out_prefix=None, input_key=None, output_key=None, name_key=None, mean=None, std=None, threshold: float = 0.5, mask_strength: float = 0.5, mask_type: str = 'soft')[source]
Parameters

@TODO – Docs. Contribution is welcome

on_batch_end(runner: catalyst.core.runner.IRunner)[source]

Batch end hook.

Parameters

runner (IRunner) – current runner

on_loader_start(runner: catalyst.core.runner.IRunner)[source]

Loader start hook.

Parameters

runner (IRunner) – current runner

on_stage_start(runner: catalyst.core.runner.IRunner)[source]

Stage start hook.

Parameters

runner (IRunner) – current runner

NeptuneLogger

class catalyst.contrib.dl.callbacks.neptune_logger.NeptuneLogger(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True, offline_mode: bool = False, **logging_params)[source]

Bases: catalyst.core.callback.Callback

Logger callback, translates runner.*_metrics to Neptune. Read about Neptune here: https://neptune.ai

Example

from catalyst.dl import SupervisedRunner
from catalyst.contrib.dl.callbacks.neptune import NeptuneLogger

runner = SupervisedRunner()

runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    logdir=logdir,
    num_epochs=num_epochs,
    verbose=True,
    callbacks=[
        NeptuneLogger(
            api_token="...", # your Neptune token
            project_name="your_project_name",
            offline_mode=False, # turn off neptune for debug
            name="your_experiment_name",
            params={...},  # your hyperparameters
            tags=["resnet", "no-augmentations"], # tags
            upload_source_files=["*.py"], # files to save
        )
    ]
)

You can see an example experiment here: https://ui.neptune.ai/o/shared/org/catalyst-integration/e/CAT-13/charts

You can log your experiments without registering. Just use the "ANONYMOUS" token:

runner.train(
    ...
    callbacks=[
        NeptuneLogger(
            api_token="ANONYMOUS",
            project_name="shared/catalyst-integration",
            ...
        )
    ]
)
__init__(metric_names: List[str] = None, log_on_batch_end: bool = True, log_on_epoch_end: bool = True, offline_mode: bool = False, **logging_params)[source]
Parameters
  • metric_names (List[str]) – list of metric names to log; if None, logs everything

  • log_on_batch_end (bool) – logs per-batch metrics if set to True

  • log_on_epoch_end (bool) – logs per-epoch metrics if set to True

  • offline_mode (bool) – whether logging to Neptune server should be turned off. It is useful for debugging

on_batch_end(runner: catalyst.core.runner.IRunner)[source]

Log batch metrics to Neptune.

on_loader_end(runner: catalyst.core.runner.IRunner)[source]

Translate epoch metrics to Neptune.

PeriodicLoaderCallback

class catalyst.contrib.dl.callbacks.periodic_loader_callback.PeriodicLoaderCallback(**kwargs)[source]

Bases: catalyst.core.callback.Callback

Callback for running loaders with a specified period. To disable a loader, use 0 as its period.

Example

>>> PeriodicLoaderCallback(
>>>     train_additional=2,
>>>     valid=3,
>>>     valid_additional=5
>>> )
__init__(**kwargs)[source]
Parameters

kwargs – loader names and their run periods.

on_epoch_end(runner: catalyst.core.runner.IRunner) → None[source]

Store validation metrics and use the latest validation score when the validation loader is not required.

Parameters

runner (IRunner) – current runner

on_epoch_start(runner: catalyst.core.runner.IRunner) → None[source]

Set loaders for the current epoch. If validation is not required, then the first loader among the loaders used in the current epoch will serve as the validation loader. Metrics from the latest epoch with a true validation loader will be used in epochs where this loader is missing.

Parameters

runner (IRunner) – current runner

on_stage_start(runner: catalyst.core.runner.IRunner) → None[source]

Collect information about loaders.

Parameters

runner (IRunner) – current runner

PerplexityMetricCallback

class catalyst.contrib.dl.callbacks.perplexity_metric.PerplexityMetricCallback(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'perplexity', ignore_index: int = None)[source]

Bases: catalyst.core.callbacks.metrics.MetricCallback

Perplexity is a very popular metric in NLP, especially for the language modeling task. It is 2^cross_entropy.

__init__(input_key: str = 'targets', output_key: str = 'logits', prefix: str = 'perplexity', ignore_index: int = None)[source]
Parameters
  • input_key (str) – input key to use for perplexity calculation, target tokens

  • output_key (str) – output key to use for perplexity calculation, logits of the predicted tokens

  • ignore_index (int) – index to ignore, usually pad_index
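
A minimal configuration sketch (the pad index is a hypothetical value):

from catalyst.contrib.dl.callbacks.perplexity_metric import PerplexityMetricCallback

callbacks = [
    PerplexityMetricCallback(
        input_key="targets",
        output_key="logits",
        ignore_index=0,  # hypothetical pad token index
    ),
]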

metric_fn(outputs, targets)[source]

Calculate perplexity.

TelegramLogger

class catalyst.contrib.dl.callbacks.telegram_logger.TelegramLogger(token: str = None, chat_id: str = None, metric_names: List[str] = None, log_on_stage_start: bool = True, log_on_loader_start: bool = True, log_on_loader_end: bool = True, log_on_stage_end: bool = True, log_on_exception: bool = True)[source]

Bases: catalyst.core.callback.Callback

Logger callback, translates runner.metric_manager to a Telegram channel.

__init__(token: str = None, chat_id: str = None, metric_names: List[str] = None, log_on_stage_start: bool = True, log_on_loader_start: bool = True, log_on_loader_end: bool = True, log_on_stage_end: bool = True, log_on_exception: bool = True)[source]
Parameters
  • token (str) – telegram bot’s token, see https://core.telegram.org/bots

  • chat_id (str) – Chat unique identifier

  • metric_names – list of metric names to log; if None, logs everything.

  • log_on_stage_start (bool) – send notification on stage start

  • log_on_loader_start (bool) – send notification on loader start

  • log_on_loader_end (bool) – send notification on loader end

  • log_on_stage_end (bool) – send notification on stage end

  • log_on_exception (bool) – send notification on exception
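
A minimal configuration sketch (the token and chat id are placeholders):

from catalyst.contrib.dl.callbacks.telegram_logger import TelegramLogger

callbacks = [
    TelegramLogger(
        token="...",      # your bot token, see https://core.telegram.org/bots
        chat_id="...",    # target chat identifier
        metric_names=["loss"],
    ),
]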

on_exception(runner: catalyst.core.runner.IRunner)[source]

Notify about raised Exception.

on_loader_end(runner: catalyst.core.runner.IRunner)[source]

Translate runner.metric_manager to telegram channel.

on_loader_start(runner: catalyst.core.runner.IRunner)[source]

Notify about starting a new loader.

on_stage_end(runner: catalyst.core.runner.IRunner)[source]

Notify about finishing a stage.

on_stage_start(runner: catalyst.core.runner.IRunner)[source]

Notify about starting a new stage.

TracerCallback

class catalyst.contrib.dl.callbacks.tracer_callback.TracerCallback(metric: str = 'loss', minimize: bool = True, min_delta: float = 1e-06, mode: str = 'best', do_once: bool = True, method_name: str = 'forward', requires_grad: bool = False, opt_level: str = None, trace_mode: str = 'eval', out_dir: Union[str, pathlib.Path] = None, out_model: Union[str, pathlib.Path] = None)[source]

Bases: catalyst.core.callback.Callback

Traces the model during training if the provided metric improves.

__init__(metric: str = 'loss', minimize: bool = True, min_delta: float = 1e-06, mode: str = 'best', do_once: bool = True, method_name: str = 'forward', requires_grad: bool = False, opt_level: str = None, trace_mode: str = 'eval', out_dir: Union[str, pathlib.Path] = None, out_model: Union[str, pathlib.Path] = None)[source]
Parameters
  • metric (str) – metric key to trace the model on

  • minimize (bool) – whether the metric is minimized

  • min_delta (float) – minimum change in the metric to be considered an improvement

  • mode (str) – one of best or last

  • do_once (bool) – whether to trace once per stage or on every epoch

  • method_name (str) – model method name to use as the entry point during tracing

  • requires_grad (bool) – flag to use grads

  • opt_level (str) – AMP FP16 init level

  • trace_mode (str) – mode for model tracing (train or eval)

  • out_dir (Union[str, Path]) – Directory to save model to

  • out_model (Union[str, Path]) – Path to save model to (overrides out_dir argument)
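
A minimal configuration sketch (logdir is assumed to be defined):

from catalyst.contrib.dl.callbacks.tracer_callback import TracerCallback

callbacks = [
    TracerCallback(
        metric="loss",
        minimize=True,
        mode="best",    # trace the best model by the watched metric
        out_dir=logdir,
    ),
]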

on_epoch_end(runner: catalyst.core.runner.IRunner)[source]

Performs model tracing on epoch end if the watched metric has improved.

Parameters

runner (IRunner) – Current runner

on_stage_end(runner: catalyst.core.runner.IRunner)[source]

Performs model tracing on stage end if do_once is True.

Parameters

runner (IRunner) – Current runner

WandbLogger

class catalyst.contrib.dl.callbacks.wandb_logger.WandbLogger(metric_names: List[str] = None, log_on_batch_end: bool = False, log_on_epoch_end: bool = True, **logging_params)[source]

Bases: catalyst.core.callback.Callback

Logger callback, translates runner.*_metrics to Weights & Biases. Read about Weights & Biases here https://docs.wandb.com/

Example

from catalyst import dl
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

class Projector(nn.Module):
    def __init__(self, input_size):
        super().__init__()
        self.linear = nn.Linear(input_size, 1)

    def forward(self, X):
        return self.linear(X).squeeze(-1)

X = torch.rand(16, 10)
y = torch.rand(X.shape[0])
model = Projector(X.shape[1])
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=8)
runner = dl.SupervisedRunner()

runner.train(
    model=model,
    loaders={
        "train": loader,
        "valid": loader
    },
    criterion=nn.MSELoss(),
    optimizer=optim.Adam(model.parameters()),
    logdir="log_example",
    callbacks=[
        dl.callbacks.WandbLogger(
            project="wandb_logger_example"
        )
    ],
    num_epochs=10
)
__init__(metric_names: List[str] = None, log_on_batch_end: bool = False, log_on_epoch_end: bool = True, **logging_params)[source]
Parameters
  • metric_names (List[str]) – list of metric names to log; if None, logs everything

  • log_on_batch_end (bool) – logs per-batch metrics if set to True

  • log_on_epoch_end (bool) – logs per-epoch metrics if set to True

  • **logging_params – any parameters of the wandb.init function except reinit, which is automatically set to True, and dir, which is set to <logdir>

on_batch_end(runner: catalyst.core.runner.IRunner)[source]

Translate batch metrics to Weights & Biases.

on_epoch_end(runner: catalyst.core.runner.IRunner)[source]

Translate epoch metrics to Weights & Biases.

on_loader_end(runner: catalyst.core.runner.IRunner)[source]

Translate loader metrics to Weights & Biases.

on_stage_end(runner: catalyst.core.runner.IRunner)[source]

Finish logging to Weights & Biases.

on_stage_start(runner: catalyst.core.runner.IRunner)[source]

Initialize Weights & Biases.

NN

Extensions for torch.nn

Criterion

Cross entropy

class catalyst.contrib.nn.criterion.ce.MaskCrossEntropyLoss(*args, target_name: str = 'targets', mask_name: str = 'mask', **kwargs)[source]

Bases: torch.nn.modules.loss.CrossEntropyLoss

@TODO: Docs. Contribution is welcome.

__init__(*args, target_name: str = 'targets', mask_name: str = 'mask', **kwargs)[source]

@TODO: Docs. Contribution is welcome.

forward(input: torch.Tensor, target_mask: torch.Tensor) → torch.Tensor[source]

Calculates loss between input and target_mask tensors.

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.nn.criterion.ce.SymmetricCrossEntropyLoss(alpha: float = 1.0, beta: float = 1.0)[source]

Bases: torch.nn.modules.module.Module

The Symmetric Cross Entropy loss.

It has been proposed in Symmetric Cross Entropy for Robust Learning with Noisy Labels.

__init__(alpha: float = 1.0, beta: float = 1.0)[source]
Parameters
  • alpha (float) – corresponds to overfitting issue of CE

  • beta (float) – corresponds to flexible exploration on the robustness of RCE

forward(input: torch.Tensor, target: torch.Tensor) → torch.Tensor[source]

Calculates loss between input and target tensors.

Parameters
  • input (torch.Tensor) – input tensor of size (batch_size, num_classes)

  • target (torch.Tensor) – target tensor of size (batch_size), where values of a vector correspond to class index
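
A minimal sketch with the documented shapes:

import torch
from catalyst.contrib.nn.criterion.ce import SymmetricCrossEntropyLoss

criterion = SymmetricCrossEntropyLoss(alpha=1.0, beta=1.0)
logits = torch.rand(8, 10)              # (batch_size, num_classes)
targets = torch.randint(10, size=(8,))  # (batch_size,) class indices
loss = criterion(logits, targets)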

class catalyst.contrib.nn.criterion.ce.NaiveCrossEntropyLoss(size_average=True)[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(size_average=True)[source]

@TODO: Docs. Contribution is welcome.

forward(input: torch.Tensor, target: torch.Tensor) → torch.Tensor[source]

Calculates loss between input and target tensors.

Parameters
  • input (torch.Tensor) – input tensor of shape …

  • target (torch.Tensor) – target tensor of shape …

@TODO: Docs (add shapes). Contribution is welcome.

Contrastive

class catalyst.contrib.nn.criterion.contrastive.ContrastiveEmbeddingLoss(margin=1.0, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

The Contrastive embedding loss.

It has been proposed in Dimensionality Reduction by Learning an Invariant Mapping.

__init__(margin=1.0, reduction='mean')[source]
Parameters
  • margin – margin parameter

  • reduction – criterion reduction type

forward(embeddings_left: torch.Tensor, embeddings_right: torch.Tensor, distance_true) → torch.Tensor[source]

Forward propagation method for the contrastive loss.

Parameters
  • embeddings_left (torch.Tensor) – left objects embeddings

  • embeddings_right (torch.Tensor) – right objects embeddings

  • distance_true – true distances

Returns

loss

Return type

torch.Tensor
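
A minimal sketch; here distance_true is assumed to be a batch of 0/1 pair labels:

import torch
from catalyst.contrib.nn.criterion.contrastive import ContrastiveEmbeddingLoss

criterion = ContrastiveEmbeddingLoss(margin=1.0)
left = torch.rand(8, 64)   # left objects embeddings
right = torch.rand(8, 64)  # right objects embeddings
distance_true = torch.randint(2, size=(8,)).float()  # assumed 0/1 pair labels
loss = criterion(left, right, distance_true)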

class catalyst.contrib.nn.criterion.contrastive.ContrastiveDistanceLoss(margin=1.0, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

The Contrastive distance loss.

@TODO: Docs. Contribution is welcome.

__init__(margin=1.0, reduction='mean')[source]
Parameters
  • margin – margin parameter

  • reduction (str) – criterion reduction type

forward(distance_pred, distance_true) → torch.Tensor[source]

Forward propagation method for the contrastive loss.

Parameters
  • distance_pred – predicted distances

  • distance_true – true distances

Returns

loss

Return type

torch.Tensor

class catalyst.contrib.nn.criterion.contrastive.ContrastivePairwiseEmbeddingLoss(margin=1.0, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

ContrastivePairwiseEmbeddingLoss – proof of concept criterion.

Still work in progress.

@TODO: Docs. Contribution is welcome.

__init__(margin=1.0, reduction='mean')[source]
Parameters
  • margin – margin parameter

  • reduction – criterion reduction type

forward(embeddings_pred, embeddings_true) → torch.Tensor[source]

Forward propagation method for the contrastive loss.

Work in progress.

Parameters
  • embeddings_pred – predicted embeddings

  • embeddings_true – true embeddings

Returns

loss

Return type

torch.Tensor

Circle

class catalyst.contrib.nn.criterion.circle.CircleLoss(margin: float, gamma: float)[source]

Bases: torch.nn.modules.module.Module

CircleLoss from “Circle Loss: A Unified Perspective of Pair Similarity Optimization” https://arxiv.org/abs/2002.10857

Adapted from: https://github.com/TinyZeaMays/CircleLoss

Example

>>> import torch
>>> from torch.nn import functional as F
>>> from catalyst.contrib.nn import CircleLoss
>>>
>>> features = F.normalize(torch.rand(256, 64, requires_grad=True))
>>> labels = torch.randint(high=10, size=(256,))
>>> criterion = CircleLoss(margin=0.25, gamma=256)
>>> criterion(features, labels)
__init__(margin: float, gamma: float) → None[source]
Parameters
  • margin – margin to use

  • gamma – gamma to use

forward(normed_features: torch.Tensor, labels: torch.Tensor) → torch.Tensor[source]
Parameters
  • normed_features – batch with samples features of shape [bs; feature_len]

  • labels – batch with samples correct labels of shape [bs; ]

Returns

circle loss

Return type

(Tensor)

Dice

class catalyst.contrib.nn.criterion.dice.BCEDiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', bce_weight: float = 0.5, dice_weight: float = 0.5)[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', bce_weight: float = 0.5, dice_weight: float = 0.5)[source]

@TODO: Docs. Contribution is welcome.

forward(outputs, targets)[source]

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.nn.criterion.dice.DiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

@TODO: Docs. Contribution is welcome.

forward(logits: torch.Tensor, targets: torch.Tensor)[source]

Calculates loss between logits and target tensors.

@TODO: Docs. Contribution is welcome

Focal

class catalyst.contrib.nn.criterion.focal.FocalLossBinary(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

Bases: torch.nn.modules.loss._Loss

Compute focal loss for binary classification problem.

It has been proposed in Focal Loss for Dense Object Detection paper.

@TODO: Docs (add Example). Contribution is welcome.

__init__(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

@TODO: Docs. Contribution is welcome.

forward(logits, targets)[source]
Parameters
  • logits – [bs; …]

  • targets – [bs; …]

@TODO: Docs. Contribution is welcome.
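
Until the Example section is filled in, a minimal sketch (shapes are illustrative):

import torch
from catalyst.contrib.nn.criterion.focal import FocalLossBinary

criterion = FocalLossBinary(gamma=2.0, alpha=0.25)
logits = torch.randn(8)                        # [bs]
targets = torch.randint(2, size=(8,)).float()  # [bs], binary labels
loss = criterion(logits, targets)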

class catalyst.contrib.nn.criterion.focal.FocalLossMultiClass(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

Bases: catalyst.contrib.nn.criterion.focal.FocalLossBinary

Compute focal loss for multi-class problem. Ignores targets having -1 label.

It has been proposed in Focal Loss for Dense Object Detection paper.

@TODO: Docs (add Example). Contribution is welcome.

forward(logits, targets)[source]
Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; …]

@TODO: Docs. Contribution is welcome.

GAN

class catalyst.contrib.nn.criterion.gan.MeanOutputLoss[source]

Bases: torch.nn.modules.module.Module

Criterion to compute a simple mean of the output, completely ignoring the target (maybe useful e.g. for WGAN real/fake validity averaging).

forward(output, target)[source]

Compute criterion.

@TODO: Docs (add typing). Contribution is welcome.

class catalyst.contrib.nn.criterion.gan.GradientPenaltyLoss[source]

Bases: torch.nn.modules.module.Module

Criterion to compute gradient penalty.

WARN: SHOULD NOT BE RUN WITH CriterionCallback; use the special GradientPenaltyCallback instead.

forward(fake_data, real_data, critic, critic_condition_args)[source]

Compute gradient penalty.

Parameters

@TODO – Docs. Contribution is welcome.

Huber

class catalyst.contrib.nn.criterion.huber.HuberLoss(clip_delta=1.0, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(clip_delta=1.0, reduction='mean')[source]

@TODO: Docs. Contribution is welcome.

forward(y_pred: torch.Tensor, y_true: torch.Tensor, weights=None) → torch.Tensor[source]

@TODO: Docs. Contribution is welcome.

IOU

class catalyst.contrib.nn.criterion.iou.IoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: torch.nn.modules.module.Module

The intersection over union (Jaccard) loss.

@TODO: Docs. Contribution is welcome.

__init__(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]
Parameters
  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – a torch.nn activation applied to the outputs. Must be one of 'none', 'Sigmoid', 'Softmax2d'

forward(outputs, targets)[source]

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.nn.criterion.iou.BCEIoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', reduction: str = 'mean')[source]

Bases: torch.nn.modules.module.Module

The Intersection over union (Jaccard) with BCE loss.

@TODO: Docs. Contribution is welcome.

__init__(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', reduction: str = 'mean')[source]
Parameters
  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – a torch.nn activation applied to the outputs. Must be one of 'none', 'Sigmoid', 'Softmax2d'

  • reduction (str) – Specifies the reduction to apply to the output of BCE

forward(outputs, targets)[source]

@TODO: Docs. Contribution is welcome.

Lovasz

class catalyst.contrib.nn.criterion.lovasz.LovaszLossBinary(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

Creates a criterion that optimizes a binary Lovasz loss.

It has been proposed in The Lovasz-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks.

__init__(per_image=False, ignore=None)[source]

@TODO: Docs. Contribution is welcome.

forward(logits, targets)[source]

Forward propagation method for the Lovasz loss.

Parameters
  • logits – [bs; …]

  • targets – [bs; …]

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.nn.criterion.lovasz.LovaszLossMultiClass(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

Creates a criterion that optimizes a multi-class Lovasz loss.

It has been proposed in The Lovasz-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks.

__init__(per_image=False, ignore=None)[source]

@TODO: Docs. Contribution is welcome.

forward(logits, targets)[source]

Forward propagation method for the Lovasz loss.

Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; …]

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.nn.criterion.lovasz.LovaszLossMultiLabel(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

Creates a criterion that optimizes a multi-label Lovasz loss.

It has been proposed in The Lovasz-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks.

__init__(per_image=False, ignore=None)[source]

@TODO: Docs. Contribution is welcome.

forward(logits, targets)[source]

Forward propagation method for the Lovasz loss.

Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; num_classes; …]

@TODO: Docs. Contribution is welcome.

Margin

class catalyst.contrib.nn.criterion.margin.MarginLoss(alpha: float = 0.2, beta: float = 1.0, skip_labels: Union[int, List[int]] = -1)[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(alpha: float = 0.2, beta: float = 1.0, skip_labels: Union[int, List[int]] = -1)[source]
Parameters
  • alpha (float) –

  • beta (float) –

  • skip_labels (int or List[int]) –

@TODO: Docs. Contribution is welcome.

forward(embeddings: torch.Tensor, targets: torch.Tensor) → torch.Tensor[source]

Forward propagation method for the margin loss.

@TODO: Docs. Contribution is welcome.

Triplet

class catalyst.contrib.nn.criterion.triplet.TripletLoss(margin: float = 0.3)[source]

Bases: torch.nn.modules.module.Module

Triplet loss with hard positive/negative mining.

Reference: code imported from https://github.com/NegatioN/OnlineMiningTripletLoss

__init__(margin: float = 0.3)[source]
Parameters

margin (float) – margin for triplet

forward(embeddings, targets)[source]

Forward propagation method for the triplet loss.

Parameters
  • embeddings – tensor of shape (batch_size, embed_dim)

  • targets – labels of the batch, of size (batch_size,)

Returns

scalar tensor containing the triplet loss

Return type

triplet_loss
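
A minimal sketch with the documented shapes:

import torch
from catalyst.contrib.nn.criterion.triplet import TripletLoss

criterion = TripletLoss(margin=0.3)
embeddings = torch.rand(32, 128)        # (batch_size, embed_dim)
targets = torch.randint(4, size=(32,))  # (batch_size,) labels
loss = criterion(embeddings, targets)   # scalar triplet loss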

class catalyst.contrib.nn.criterion.triplet.TripletPairwiseEmbeddingLoss(margin: float = 0.3, reduction: str = 'mean')[source]

Bases: torch.nn.modules.module.Module

TripletPairwiseEmbeddingLoss – proof of concept criterion.

Still work in progress.

@TODO: Docs. Contribution is welcome.

__init__(margin: float = 0.3, reduction: str = 'mean')[source]
Parameters
  • margin (float) – margin parameter

  • reduction (str) – criterion reduction type

forward(embeddings_pred, embeddings_true)[source]

Work in progress.

Parameters
  • embeddings_pred – predicted embeddings with shape [batch_size, embedding_size]

  • embeddings_true – true embeddings with shape [batch_size, embedding_size]

Returns

loss

Return type

torch.Tensor

Wing

class catalyst.contrib.nn.criterion.wing.WingLoss(width: int = 5, curvature: float = 0.5, reduction: str = 'mean')[source]

Bases: torch.nn.modules.module.Module

Creates a criterion that optimizes a Wing loss.

It has been proposed in Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural Networks.

Examples

@TODO: Docs. Contribution is welcome.
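Until the docs are filled in, a minimal sketch (shapes are illustrative and assume matching outputs/targets):

import torch
from catalyst.contrib.nn.criterion.wing import WingLoss

criterion = WingLoss(width=5, curvature=0.5)
outputs = torch.rand(8, 10)  # e.g. predicted landmark coordinates (assumed)
targets = torch.rand(8, 10)
loss = criterion(outputs, targets)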

Adapted from: https://github.com/BloodAxe/pytorch-toolbelt

__init__(width: int = 5, curvature: float = 0.5, reduction: str = 'mean')[source]
Parameters

@TODO – Docs. Contribution is welcome.

forward(outputs: torch.Tensor, targets: torch.Tensor) → torch.Tensor[source]
Parameters

@TODO – Docs. Contribution is welcome.

Modules

Common modules

class catalyst.contrib.nn.modules.common.Flatten[source]

Bases: torch.nn.modules.module.Module

Flattens the input. Does not affect the batch size.

@TODO: Docs (add Example). Contribution is welcome.

__init__()[source]

@TODO: Docs. Contribution is welcome.

forward(x)[source]

Forward call.
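
Until the Example is contributed, a minimal sketch:

>>> import torch
>>> from catalyst.contrib.nn.modules.common import Flatten
>>> x = torch.rand(8, 3, 4, 4)
>>> tuple(Flatten()(x).shape)  # batch size kept, the rest flattened
(8, 48)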

class catalyst.contrib.nn.modules.common.GaussianNoise(stddev: float = 0.1)[source]

Bases: torch.nn.modules.module.Module

A Gaussian noise module.

Shape:

  • Input: (batch, *)

  • Output: (batch, *) (same shape as input)

__init__(stddev: float = 0.1)[source]
Parameters

stddev (float) – The standard deviation of the normal distribution. Default: 0.1.

forward(x: torch.Tensor)[source]

Forward call.

class catalyst.contrib.nn.modules.common.Lambda(lambda_fn)[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(lambda_fn)[source]

@TODO: Docs. Contribution is welcome.

forward(x)[source]

Forward call.
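
A minimal sketch, assuming lambda_fn is any callable applied in forward:

>>> import torch
>>> from catalyst.contrib.nn.modules.common import Lambda
>>> doubler = Lambda(lambda x: x * 2)
>>> doubler(torch.ones(2)).tolist()
[2.0, 2.0]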

class catalyst.contrib.nn.modules.common.Normalize(**normalize_kwargs)[source]

Bases: torch.nn.modules.module.Module

Performs L_p normalization of inputs over the specified dimension.

@TODO: Docs (add Example). Contribution is welcome.

__init__(**normalize_kwargs)[source]
Parameters

**normalize_kwargs – see torch.nn.functional.normalize params

forward(x)[source]

Forward call.

Last-Mean-Average-Attention (LAMA)-Pooling

class catalyst.contrib.nn.modules.lama.TemporalLastPooling[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor, mask: torch.Tensor = None) → torch.Tensor[source]

Forward call.

class catalyst.contrib.nn.modules.lama.TemporalAvgPooling[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor, mask: torch.Tensor = None) → torch.Tensor[source]

Forward call.

class catalyst.contrib.nn.modules.lama.TemporalMaxPooling[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor, mask: torch.Tensor = None) → torch.Tensor[source]

Forward call.

class catalyst.contrib.nn.modules.lama.TemporalDropLastWrapper(net)[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(net)[source]

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor, mask: torch.Tensor = None)[source]

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.nn.modules.lama.TemporalAttentionPooling(in_features, activation=None, kernel_size=1, **params)[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(in_features, activation=None, kernel_size=1, **params)[source]

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor, mask: torch.Tensor = None) → torch.Tensor[source]
Parameters

x (torch.Tensor) – tensor of size (batch_size, history_len, feature_size)

@TODO: Docs. Contribution is welcome.

name2activation = {'sigmoid': Sigmoid(), 'softmax': Softmax(dim=1), 'tanh': Tanh()}

class catalyst.contrib.nn.modules.lama.TemporalConcatPooling(in_features, history_len=1)[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(in_features, history_len=1)[source]

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor, mask: torch.Tensor = None) → torch.Tensor[source]
Parameters

x (torch.Tensor) – tensor of size (batch_size, history_len, feature_size)

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.nn.modules.lama.LamaPooling(in_features, groups=None)[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(in_features, groups=None)[source]

@TODO: Docs. Contribution is welcome.

available_groups = ['last', 'avg', 'avg_droplast', 'max', 'max_droplast', 'sigmoid', 'sigmoid_droplast', 'softmax', 'softmax_droplast', 'tanh', 'tanh_droplast']

forward(x: torch.Tensor, mask: torch.Tensor = None) → torch.Tensor[source]
Parameters

x (torch.Tensor) – tensor of size (batch_size, history_len, feature_size)

@TODO: Docs. Contribution is welcome.

Pooling

class catalyst.contrib.nn.modules.pooling.GlobalAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs. Contribution is welcome.

__init__(in_features, activation_fn='Sigmoid')[source]

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor) → torch.Tensor[source]

Forward call.

static out_features(in_features)[source]

Returns number of channels produced by the pooling.

Parameters

in_features – number of channels in the input sample

class catalyst.contrib.nn.modules.pooling.GlobalAvgAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs (add Example). Contribution is welcome.

__init__(in_features, activation_fn='Sigmoid')[source]

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor) → torch.Tensor[source]

Forward call.

static out_features(in_features)[source]

Returns number of channels produced by the pooling.

Parameters

in_features – number of channels in the input sample

class catalyst.contrib.nn.modules.pooling.GlobalAvgPool2d[source]

Bases: torch.nn.modules.module.Module

Applies a 2D global average pooling operation over an input signal composed of several input planes.

@TODO: Docs (add Example). Contribution is welcome.

__init__()[source]

Constructor method for the GlobalAvgPool2d class.

forward(x: torch.Tensor) → torch.Tensor[source]

Forward call.

static out_features(in_features)[source]

Returns number of channels produced by the pooling.

Parameters

in_features – number of channels in the input sample
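
Until the Example is contributed, a minimal sketch (the 1x1 spatial output is an assumption about the pooling implementation):

>>> import torch
>>> from catalyst.contrib.nn.modules.pooling import GlobalAvgPool2d
>>> x = torch.rand(8, 16, 32, 32)
>>> tuple(GlobalAvgPool2d()(x).shape)  # assumed pooling to 1x1 spatial size
(8, 16, 1, 1)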

class catalyst.contrib.nn.modules.pooling.GlobalConcatAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs (add Example). Contribution is welcome.

__init__(in_features, activation_fn='Sigmoid')[source]

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor) → torch.Tensor[source]

Forward call.

static out_features(in_features)[source]

Returns number of channels produced by the pooling.

Parameters

in_features – number of channels in the input sample

class catalyst.contrib.nn.modules.pooling.GlobalConcatPool2d[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs (add Example). Contribution is welcome.

__init__()[source]

Constructor method for the GlobalConcatPool2d class.

forward(x: torch.Tensor) → torch.Tensor[source]

Forward call.

static out_features(in_features)[source]

Returns number of channels produced by the pooling.

Parameters

in_features – number of channels in the input sample

class catalyst.contrib.nn.modules.pooling.GlobalMaxAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

@TODO: Docs (add Example). Contribution is welcome.

__init__(in_features, activation_fn='Sigmoid')[source]

@TODO: Docs. Contribution is welcome.

forward(x: torch.Tensor) → torch.Tensor[source]

Forward call.

static out_features(in_features)[source]

Returns number of channels produced by the pooling.

Parameters

in_features – number of channels in the input sample

class catalyst.contrib.nn.modules.pooling.GlobalMaxPool2d[source]

Bases: torch.nn.modules.module.Module

Applies a 2D global max pooling operation over an input signal composed of several input planes.

@TODO: Docs (add Example). Contribution is welcome.

__init__()[source]

Constructor method for the GlobalMaxPool2d class.

forward(x: torch.Tensor) → torch.Tensor[source]

Forward call.

static out_features(in_features)[source]

Returns number of channels produced by the pooling.

Parameters

in_features – number of channels in the input sample

RMSNorm

class catalyst.contrib.nn.modules.rms_norm.RMSNorm(dimension: int, epsilon: float = 1e-08, is_bias: bool = False)[source]

Bases: torch.nn.modules.module.Module

An implementation of RMS Normalization.

@TODO: Docs (link to paper). Contribution is welcome.

__init__(dimension: int, epsilon: float = 1e-08, is_bias: bool = False)[source]
Parameters
  • dimension (int) – the dimension of the layer output to normalize

  • epsilon (float) – an epsilon to prevent dividing by zero in case the layer has zero variance. (default = 1e-8)

  • is_bias (bool) – whether to include a bias term in the normalization

forward(x: torch.Tensor) → torch.Tensor[source]

@TODO: Docs. Contribution is welcome.
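
A minimal sketch (shapes are illustrative):

import torch
from catalyst.contrib.nn.modules.rms_norm import RMSNorm

norm = RMSNorm(dimension=512)
x = torch.rand(8, 512)
y = norm(x)  # same shape as input (assumed)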

SqueezeAndExcitation

class catalyst.contrib.nn.modules.se.ChannelSqueezeAndSpatialExcitation(in_channels: int)[source]

Bases: torch.nn.modules.module.Module

The sSE (Channel Squeeze and Spatial Excitation) block from the Concurrent Spatial and Channel ‘Squeeze & Excitation’ in Fully Convolutional Networks paper: https://arxiv.org/abs/1803.02579

Adapted from https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66178

Shape:

  • Input: (batch, channels, height, width)

  • Output: (batch, channels, height, width) (same shape as input)

__init__(in_channels: int)[source]
Parameters

in_channels (int) – The number of channels in the feature map of the input.

forward(x: torch.Tensor)[source]

Forward call.

class catalyst.contrib.nn.modules.se.ConcurrentSpatialAndChannelSqueezeAndChannelExcitation(in_channels: int, r: int = 16)[source]

Bases: torch.nn.modules.module.Module

The scSE (Concurrent Spatial and Channel Squeeze and Channel Excitation) block from the Concurrent Spatial and Channel ‘Squeeze & Excitation’ in Fully Convolutional Networks paper: https://arxiv.org/abs/1803.02579

Adapted from https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66178

Shape:

  • Input: (batch, channels, height, width)

  • Output: (batch, channels, height, width) (same shape as input)

__init__(in_channels: int, r: int = 16)[source]
Parameters
  • in_channels (int) – The number of channels in the feature map of the input.

  • r (int) – The reduction ratio of the intermediate channels. Default: 16.

forward(x: torch.Tensor)[source]

Forward call.

class catalyst.contrib.nn.modules.se.SqueezeAndExcitation(in_channels: int, r: int = 16)[source]

Bases: torch.nn.modules.module.Module

The channel-wise SE (Squeeze and Excitation) block from the Squeeze-and-Excitation Networks paper: https://arxiv.org/abs/1709.01507

Adapted from https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/65939 and https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/66178

Shape:

  • Input: (batch, channels, height, width)

  • Output: (batch, channels, height, width) (same shape as input)

__init__(in_channels: int, r: int = 16)[source]
Parameters
  • in_channels (int) – The number of channels in the feature map of the input.

  • r (int) – The reduction ratio of the intermediate channels. Default: 16.

forward(x: torch.Tensor)[source]

Forward call.
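
A minimal sketch using the documented shapes:

import torch
from catalyst.contrib.nn.modules.se import SqueezeAndExcitation

se = SqueezeAndExcitation(in_channels=64, r=16)
x = torch.rand(2, 64, 32, 32)  # (batch, channels, height, width)
out = se(x)                    # same shape as input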

Optimizers

Lamb

class catalyst.contrib.nn.optimizers.lamb.Lamb(params, lr: Optional[float] = 0.001, betas: Optional[Tuple[float, float]] = (0.9, 0.999), eps: Optional[float] = 1e-06, weight_decay: Optional[float] = 0.0, adam: Optional[bool] = False)[source]

Bases: torch.optim.optimizer.Optimizer

Implements Lamb algorithm.

It has been proposed in Training BERT in 76 minutes.

__init__(params, lr: Optional[float] = 0.001, betas: Optional[Tuple[float, float]] = (0.9, 0.999), eps: Optional[float] = 1e-06, weight_decay: Optional[float] = 0.0, adam: Optional[bool] = False)[source]
Parameters
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

  • lr (float, optional) – learning rate (default: 1e-3)

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))

  • eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)

  • weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)

  • adam (bool, optional) – always use trust ratio = 1, which turns this into Adam. Useful for comparison purposes.

step(closure: Optional[Callable] = None)[source]

Makes optimizer step.

Parameters

closure (callable, optional) – A closure that reevaluates the model and returns the loss.
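
A usage sketch in the same style as the QHAdamW example below (model, input, target and loss_fn are assumed to be defined):

>>> optimizer = Lamb(
...     model.parameters(),
...     lr=1e-3, weight_decay=0.01)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()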

Lookahead

class catalyst.contrib.nn.optimizers.lookahead.Lookahead(optimizer: torch.optim.optimizer.Optimizer, k: int = 5, alpha: float = 0.5)[source]

Bases: torch.optim.optimizer.Optimizer

Implements Lookahead algorithm.

It has been proposed in Lookahead Optimizer: k steps forward, 1 step back.

Adapted from: https://github.com/alphadl/lookahead.pytorch (MIT License)

__init__(optimizer: torch.optim.optimizer.Optimizer, k: int = 5, alpha: float = 0.5)[source]

@TODO: Docs. Contribution is welcome.

add_param_group(param_group)[source]

@TODO: Docs. Contribution is welcome.

classmethod get_from_params(params: Dict, base_optimizer_params: Dict = None, **kwargs) → catalyst.contrib.nn.optimizers.lookahead.Lookahead[source]

@TODO: Docs. Contribution is welcome.

load_state_dict(state_dict)[source]

@TODO: Docs. Contribution is welcome.

state_dict()[source]

@TODO: Docs. Contribution is welcome.

step(closure: Optional[Callable] = None)[source]

Makes optimizer step.

Parameters

closure (callable, optional) – A closure that reevaluates the model and returns the loss.

update(group)[source]

@TODO: Docs. Contribution is welcome.

update_lookahead()[source]

@TODO: Docs. Contribution is welcome.
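
A usage sketch (model, input, target and loss_fn are assumed to be defined):

>>> import torch
>>> base_optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
>>> optimizer = Lookahead(base_optimizer, k=5, alpha=0.5)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()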

QHAdamW

class catalyst.contrib.nn.optimizers.qhadamw.QHAdamW(params, lr=0.001, betas=(0.995, 0.999), nus=(0.7, 1.0), weight_decay=0.0, eps=1e-08)[source]

Bases: torch.optim.optimizer.Optimizer

Implements the QHAdamW algorithm.

Combines the QHAdam algorithm that was proposed in Quasi-hyperbolic momentum and Adam for deep learning with weight decay decoupling from the Decoupled Weight Decay Regularization paper.

Example

>>> optimizer = QHAdamW(
...     model.parameters(),
...     lr=3e-4, nus=(0.8, 1.0), betas=(0.99, 0.999))
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()

Adapted from: https://github.com/iprally/qhadamw-pytorch/blob/master/qhadamw.py (MIT License)

__init__(params, lr=0.001, betas=(0.995, 0.999), nus=(0.7, 1.0), weight_decay=0.0, eps=1e-08)[source]
Parameters
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

  • lr (float, optional) – learning rate (\(\alpha\) from the paper) (default: 1e-3)

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.995, 0.999))

  • nus (Tuple[float, float], optional) – immediate discount factors used to estimate the gradient and its square (default: (0.7, 1.0))

  • eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)

  • weight_decay (float, optional) – weight decay (L2 regularization coefficient, times two) (default: 0.0)

step(closure: Optional[Callable] = None)[source]

Makes optimizer step.

Parameters

closure (callable, optional) – A closure that reevaluates the model and returns the loss.

RAdam

class catalyst.contrib.nn.optimizers.radam.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]

Bases: torch.optim.optimizer.Optimizer

Implements RAdam algorithm.

It has been proposed in On the Variance of the Adaptive Learning Rate and Beyond.

@TODO: Docs (add Example). Contribution is welcome

Adapted from: https://github.com/LiyuanLucasLiu/RAdam (Apache-2.0 License)

__init__(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]

@TODO: Docs. Contribution is welcome.

step(closure: Optional[Callable] = None)[source]

Makes optimizer step.

Parameters

closure (callable, optional) – A closure that reevaluates the model and returns the loss.
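
Until the Example is contributed, a usage sketch (model, input, target and loss_fn are assumed to be defined):

>>> optimizer = RAdam(model.parameters(), lr=1e-3)
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()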

Ralamb

class catalyst.contrib.nn.optimizers.ralamb.Ralamb(params: Iterable, lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0)[source]

Bases: torch.optim.optimizer.Optimizer

RAdam optimizer with LARS/LAMB tricks.

Adapted from: https://github.com/mgrankin/over9000/blob/master/ralamb.py (Apache-2.0 License)

__init__(params: Iterable, lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0)[source]
Parameters
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

  • lr (float, optional) – learning rate (default: 1e-3)

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))

  • eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)

  • weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)

step(closure: Optional[Callable] = None)[source]

Makes optimizer step.

Parameters

closure (callable, optional) – A closure that reevaluates the model and returns the loss.

Schedulers

class catalyst.contrib.nn.schedulers.base.BaseScheduler(optimizer, last_epoch=-1)[source]

Bases: torch.optim.lr_scheduler._LRScheduler, abc.ABC

Base class for all schedulers with momentum update.

get_momentum() → List[float][source]

Function that returns the new momentum for optimizer.

Returns

calculated momentum for every param group

Return type

List[float]

step(epoch: Optional[int] = None) → None[source]

Make one scheduler step.

Parameters

epoch (int, optional) – current epoch num

class catalyst.contrib.nn.schedulers.base.BatchScheduler(optimizer, last_epoch=-1)[source]

Bases: catalyst.contrib.nn.schedulers.base.BaseScheduler, abc.ABC

@TODO: Docs. Contribution is welcome.

OneCycleLRWithWarmup

class catalyst.contrib.nn.schedulers.onecycle.OneCycleLRWithWarmup(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)[source]

Bases: catalyst.contrib.nn.schedulers.base.BatchScheduler

OneCycle scheduler with warm-up & lr decay stages.

The first stage, called warmup, increases lr from init_lr to max_lr and decreases momentum from init_momentum to min_momentum; it takes warmup_steps steps.

The second is the annealing stage: lr decreases from max_lr to min_lr while momentum increases from min_momentum to max_momentum.

The third, optional, stage is lr decay. A usage sketch follows the parameter list below.

__init__(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)[source]
Parameters
  • optimizer – PyTorch optimizer

  • num_steps (int) – total number of steps

  • lr_range – tuple with two or three elements (max_lr, min_lr, [final_lr])

  • init_lr (float, optional) – initial lr

  • warmup_steps (int) – count of steps for warm-up stage

  • warmup_fraction (float, optional) – fraction in [0; 1) to calculate number of warmup steps. Cannot be set together with warmup_steps

  • decay_steps (int) – count of steps for lr decay stage

  • decay_fraction (float, optional) – fraction in [0; 1) to calculate number of decay steps. Cannot be set together with decay_steps

  • momentum_range – tuple with two or three elements (min_momentum, max_momentum, [final_momentum])

  • init_momentum (float, optional) – initial momentum
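Example (a construction sketch with illustrative values; model and loader are assumed to be defined, and the scheduler is stepped once per batch since this is a BatchScheduler):

>>> import torch
>>> optimizer = torch.optim.SGD(model.parameters(), lr=1.0, momentum=0.9)
>>> scheduler = OneCycleLRWithWarmup(
...     optimizer, num_steps=1000, lr_range=(1.0, 0.005),
...     warmup_steps=100, momentum_range=(0.8, 0.99))
>>> for batch in loader:
...     ...  # forward, backward, optimizer.step()
...     scheduler.step()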

get_lr() → List[float][source]

Function that returns the new lr for optimizer.

Returns

calculated lr for every param group

Return type

List[float]

get_momentum() → List[float][source]

Function that returns the new momentum for optimizer.

Returns

calculated momentum for every param group

Return type

List[float]

recalculate(loader_len: int, current_step: int) → None[source]

Recalculates total num_steps for batch mode.

Parameters
  • loader_len (int) – total count of batches in an epoch

  • current_step (int) – current step

reset()[source]

@TODO: Docs. Contribution is welcome.

Models

Segmentation

Unet

class catalyst.contrib.models.cv.segmentation.unet.ResnetUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.models.cv.segmentation.unet.Unet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec

@TODO: Docs. Contribution is welcome.

Linknet

class catalyst.contrib.models.cv.segmentation.linknet.Linknet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.models.cv.segmentation.linknet.ResnetLinknet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec

@TODO: Docs. Contribution is welcome.

FPNnet

class catalyst.contrib.models.cv.segmentation.fpn.FPNUnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.models.cv.segmentation.fpn.ResnetFPNUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec

@TODO: Docs. Contribution is welcome.

PSPnet

class catalyst.contrib.models.cv.segmentation.psp.PSPnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec

@TODO: Docs. Contribution is welcome.

class catalyst.contrib.models.cv.segmentation.psp.ResnetPSPnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec

@TODO: Docs. Contribution is welcome.

Registry

catalyst subpackage registries

catalyst.contrib.registry.Criterion(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. Signature is flexible. See the usage sketch below; the same pattern applies to the other registry functions on this page.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)
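Example (a usage sketch; MyCriterion is a hypothetical class, and passing name= registers the factory under a custom name instead of its __name__):

>>> from torch import nn
>>> from catalyst.contrib.registry import Criterion
>>> @Criterion
... class MyCriterion(nn.Module):  # now registered as "MyCriterion"
...     pass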

catalyst.contrib.registry.Optimizer(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Scheduler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Module(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Model(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Sampler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Transform(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Experiment(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds a factory to the registry under its __name__ attribute or the provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

Tools

Tensorboard

Tensorboard readers:
exception catalyst.contrib.tools.tensorboard.EventReadingException[source]

Bases: Exception

An exception that corresponds to an event file reading error.

class catalyst.contrib.tools.tensorboard.EventsFileReader(events_file: BinaryIO)[source]

Bases: collections.abc.Iterable

An iterator over a Tensorboard events file.

__init__(events_file: BinaryIO)[source]

Initialize an iterator over an events file.

Parameters

events_file – An opened file-like object.

class catalyst.contrib.tools.tensorboard.SummaryItem(tag, step, wall_time, value, type)

Bases: tuple

property step

Alias for field number 1

property tag

Alias for field number 0

property type

Alias for field number 4

property value

Alias for field number 3

property wall_time

Alias for field number 2

class catalyst.contrib.tools.tensorboard.SummaryReader(logdir: Union[str, pathlib.Path], tag_filter: Optional[collections.abc.Iterable] = None, types: collections.abc.Iterable = ('scalar',))[source]

Bases: collections.abc.Iterable

Iterates over events in all the files in the current logdir.

Note

Only scalars are supported at the moment.

__init__(logdir: Union[str, pathlib.Path], tag_filter: Optional[collections.abc.Iterable] = None, types: collections.abc.Iterable = ('scalar',))[source]

Initialize a new summary reader.

Parameters
  • logdir – A directory with Tensorboard summary data

  • tag_filter – A list of tags to leave (None for all)

  • types – A list of types to get. Only "scalar" and "image" types are allowed at the moment.

Utilities

Argparse

catalyst.contrib.utils.argparse.boolean_flag(parser: argparse.ArgumentParser, name: str, default: Optional[bool] = False, help: str = None, shorthand: str = None) → None[source]

Add a boolean flag to a parser in-place.

Examples

>>> parser = argparse.ArgumentParser()
>>> boolean_flag(
>>>     parser, "flag", default=False, help="some flag", shorthand="f"
>>> )
Parameters
  • parser (argparse.ArgumentParser) – parser to add the flag to

  • name (str) – argument name; --<name> will enable the flag, while --no-<name> will disable it

  • default (bool, optional) – default value of the flag

  • help (str) – help string for the flag

  • shorthand (str) – shorthand string for the argument

Compression

catalyst.contrib.utils.compression.pack(data)

Serialize the data into bytes using pickle.

Parameters

data – a value

Returns

Returns a bytes object with the data serialized by pickle.

catalyst.contrib.utils.compression.pack_if_needed(data)

Serialize the data into bytes using pickle.

Parameters

data – a value

Returns

Returns a bytes object with the data serialized by pickle.

catalyst.contrib.utils.compression.unpack(data)

Deserialize bytes into an object using pickle.

Parameters

data (bytes) – a bytes object containing pickle-serialized data.

Returns

Returns a value deserialized from the bytes-like object.

catalyst.contrib.utils.compression.unpack_if_needed(data)

Deserialize bytes into an object using pickle.

Parameters

data (bytes) – a bytes object containing pickle-serialized data.

Returns

Returns a value deserialized from the bytes-like object.
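Example (a round-trip sketch):

>>> from catalyst.contrib.utils.compression import pack, unpack
>>> payload = {"epoch": 3, "loss": 0.17}
>>> blob = pack(payload)  # bytes
>>> unpack(blob) == payload
True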

Confusion Matrix

catalyst.contrib.utils.confusion_matrix.calculate_tp_fp_fn(confusion_matrix: numpy.ndarray) → numpy.ndarray[source]

@TODO: Docs. Contribution is welcome.

catalyst.contrib.utils.confusion_matrix.calculate_confusion_matrix_from_arrays(ground_truth: numpy.ndarray, prediction: numpy.ndarray, num_classes: int) → numpy.ndarray[source]

Calculate the confusion matrix for a given set of classes. If a ground-truth value is outside the range [0, num_classes), it is excluded.

Parameters
  • ground_truth (np.ndarray) – array of ground-truth labels

  • prediction (np.ndarray) – array of predicted labels

  • num_classes (int) – number of classes

@TODO: Docs. Contribution is welcome

catalyst.contrib.utils.confusion_matrix.calculate_confusion_matrix_from_tensors(y_pred_logits: torch.Tensor, y_true: torch.Tensor) → numpy.ndarray[source]

@TODO: Docs. Contribution is welcome.

Dataset

catalyst.contrib.utils.dataset.create_dataset(dirs: str, extension: str = None, process_fn: Callable[[str], object] = None, recursive: bool = False) → Dict[str, object][source]

Create dataset (dict like {key: [values]}) from vctk-like dataset:

dataset/
    cat/
        *.ext
    dog/
        *.ext
Parameters
  • dirs (str) – path to dirs, for example /home/user/data/**

  • extension (str) – data extension you are looking for

  • process_fn (Callable[[str], object]) – function(path_to_file) -> object, a process function applied to each found file (None by default)

  • recursive (bool) – enables recursive globbing

Returns

dataset

Return type

dict
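Example (a usage sketch for the vctk-like layout above; the paths and extension are illustrative):

>>> dataset = create_dataset(dirs="dataset/*", extension="ext")
>>> sorted(dataset.keys())
['cat', 'dog']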

catalyst.contrib.utils.dataset.create_dataframe(dataset: Dict[str, object], **dataframe_args) → pandas.core.frame.DataFrame[source]

Create pd.DataFrame from dict like {key: [values]}.

Parameters
  • dataset – dict like {key: [values]}

  • **dataframe_args –

    index : Index or array-like

    Index to use for the resulting frame. Will default to np.arange(n) if no indexing information is part of the input data and no index is provided

    columns : Index or array-like

    Column labels to use for the resulting frame. Will default to np.arange(n) if no column labels are provided

    dtype : dtype, default None

    Data type to force, otherwise infer

Returns

dataframe built from the given dataset

Return type

pd.DataFrame

catalyst.contrib.utils.dataset.split_dataset_train_test(dataset: pandas.core.frame.DataFrame, **train_test_split_args) → Tuple[Dict[str, object], Dict[str, object]][source]

Split a dataset into train and test parts.

Parameters
  • dataset – dict like dataset

  • **train_test_split_args –

    test_size : float, int, or None (default is None)

    If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is automatically set to the complement of the train size. If train size is also None, test size is set to 0.25.

    train_size : float, int, or None (default is None)

    If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.

    random_state : int or RandomState

    Pseudo-random number generator state used for random sampling.

    stratify : array-like or None (default is None)

    If not None, data is split in a stratified fashion, using this as the class labels.

Returns

train and test dicts

Misc

catalyst.contrib.utils.misc.args_are_not_none(*args: Optional[Any]) → bool[source]

Check that all arguments are not None.

Parameters

*args (Any) – values

Returns

True if all values are not None, False otherwise

Return type

bool

catalyst.contrib.utils.misc.make_tuple(tuple_like)[source]

Creates a tuple if the given tuple_like value isn't a list or a tuple.

Returns

tuple or list

catalyst.contrib.utils.misc.pairwise(iterable: Iterable[Any]) → Iterable[Any][source]

Iterate over a sequence by pairs.

Examples

>>> for i in pairwise([1, 2, 5, -3]):
>>>     print(i)
(1, 2)
(2, 5)
(5, -3)
Parameters

iterable – Any iterable sequence

Returns

pairwise iterator

catalyst.contrib.utils.misc.find_value_ids(it: Iterable[Any], value: Any) → List[int][source]
Parameters
  • it – list of any

  • value – query element

Returns: indices of all elements equal to value
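Example (illustrating the documented behavior):

>>> find_value_ids(it=[1, 2, 1, 3, 1], value=1)
[0, 2, 4]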

Pandas

catalyst.contrib.utils.pandas.dataframe_to_list(dataframe: pandas.core.frame.DataFrame) → List[dict][source]

Converts dataframe to a list of rows (without indexes).

Parameters

dataframe (DataFrame) – input dataframe

Returns

list of rows

Return type

(List[dict])

catalyst.contrib.utils.pandas.folds_to_list(folds: Union[list, str, pandas.core.series.Series]) → List[int][source]

This function formats a string or a list of numbers into a list of unique ints.

Examples

>>> folds_to_list("1,2,1,3,4,2,4,6")
[1, 2, 3, 4, 6]
>>> folds_to_list([1, 2, 3.0, 5])
[1, 2, 3, 5]
Parameters

folds (Union[list, str, pd.Series]) – Either list of numbers or one string with numbers separated by commas or pandas series

Returns

list of unique ints

Return type

List[int]

Raises

ValueError – if a value in the string or array cannot be cast to int

catalyst.contrib.utils.pandas.split_dataframe(dataframe: pandas.core.frame.DataFrame, train_folds: List[int], valid_folds: Optional[List[int]] = None, infer_folds: Optional[List[int]] = None, tag2class: Optional[Dict[str, int]] = None, tag_column: str = None, class_column: str = None, seed: int = 42, n_folds: int = 5) → Tuple[pandas.core.frame.DataFrame, pandas.core.frame.DataFrame, pandas.core.frame.DataFrame, pandas.core.frame.DataFrame][source]

Split a Pandas DataFrame into folds.

Parameters
  • dataframe (pd.DataFrame) – input dataframe

  • train_folds (List[int]) – train folds

  • valid_folds (List[int], optional) – valid folds. If None, takes all folds not included in train_folds

  • infer_folds (List[int], optional) – infer folds. If None, takes all folds not included in train_folds and valid_folds

  • tag2class (Dict[str, int], optional) – mapping from label names into int

  • tag_column (str, optional) – column with label names

  • class_column (str, optional) – column to use for split

  • seed (int) – seed for split

  • n_folds (int) – number of folds

Returns

tuple with 4 dataframes

whole dataframe, train part, valid part and infer part

Return type

(tuple)

catalyst.contrib.utils.pandas.split_dataframe_on_column_folds(dataframe: pandas.core.frame.DataFrame, column: str, random_state: int = 42, n_folds: int = 5) → pandas.core.frame.DataFrame[source]

Splits DataFrame into N folds.

Parameters
  • dataframe – a dataset

  • column – which column to use

  • random_state – seed for random shuffle

  • n_folds – number of result folds

Returns

new dataframe with fold column

Return type

pd.DataFrame

catalyst.contrib.utils.pandas.split_dataframe_on_folds(dataframe: pandas.core.frame.DataFrame, random_state: int = 42, n_folds: int = 5) → pandas.core.frame.DataFrame[source]

Splits DataFrame into N folds.

Parameters
  • dataframe – a dataset

  • random_state – seed for random shuffle

  • n_folds – number of result folds

Returns

new dataframe with fold column

Return type

pd.DataFrame

catalyst.contrib.utils.pandas.split_dataframe_on_stratified_folds(dataframe: pandas.core.frame.DataFrame, class_column: str, random_state: int = 42, n_folds: int = 5) → pandas.core.frame.DataFrame[source]

Splits DataFrame into N stratified folds.

Also see catalyst.data.sampler.BalanceClassSampler

Parameters
  • dataframe – a dataset

  • class_column – which column to use for split

  • random_state – seed for random shuffle

  • n_folds – number of result folds

Returns

new dataframe with fold column

Return type

pd.DataFrame

catalyst.contrib.utils.pandas.split_dataframe_train_test(dataframe: pandas.core.frame.DataFrame, **train_test_split_args) → Tuple[pandas.core.frame.DataFrame, pandas.core.frame.DataFrame][source]

Split a dataframe into train and test parts.

Parameters
  • dataframe – pd.DataFrame to split

  • **train_test_split_args –

    test_size : float, int, or None (default is None)

    If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split. If int, represents the absolute number of test samples. If None, the value is automatically set to the complement of the train size. If train size is also None, test size is set to 0.25.

    train_size : float, int, or None (default is None)

    If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split. If int, represents the absolute number of train samples. If None, the value is automatically set to the complement of the test size.

    random_state : int or RandomState

    Pseudo-random number generator state used for random sampling.

    stratify : array-like or None (default is None)

    If not None, data is split in a stratified fashion, using this as the class labels.

Returns

train and test DataFrames

Note

It exists because the sklearn split is overcomplicated.

catalyst.contrib.utils.pandas.separate_tags(dataframe: pandas.core.frame.DataFrame, tag_column: str = 'tag', tag_delim: str = ', ') → pandas.core.frame.DataFrame[source]

Separates values in the tag_column column.

Parameters
  • dataframe – a dataset

  • tag_column – column name to separate values

  • tag_delim – delimiter to separate values

Returns

new dataframe

Return type

pd.DataFrame

catalyst.contrib.utils.pandas.read_multiple_dataframes(in_csv_train: str = None, in_csv_valid: str = None, in_csv_infer: str = None, tag2class: Optional[Dict[str, int]] = None, class_column: str = None, tag_column: str = None) → Tuple[pandas.core.frame.DataFrame, pandas.core.frame.DataFrame, pandas.core.frame.DataFrame, pandas.core.frame.DataFrame][source]

This function reads train/valid/infer dataframes from the given paths.

Parameters
  • in_csv_train (str) – paths to train csv separated by commas

  • in_csv_valid (str) – paths to valid csv separated by commas

  • in_csv_infer (str) – paths to infer csv separated by commas

  • tag2class (Dict[str, int], optional) – mapping from label names into int

  • tag_column (str, optional) – column with label names

  • class_column (str, optional) – column to use for split

Returns

tuple with 4 dataframes

whole dataframe, train part, valid part and infer part

Return type

(tuple)

catalyst.contrib.utils.pandas.map_dataframe(dataframe: pandas.core.frame.DataFrame, tag_column: str, class_column: str, tag2class: Dict[str, int], verbose: bool = False) → pandas.core.frame.DataFrame[source]

This function maps tags from tag_column to ints into class_column using tag2class dictionary.

Parameters
  • dataframe (pd.DataFrame) – input dataframe

  • tag_column (str) – column with tags

  • class_column (str) – column to store the mapped class labels

  • tag2class (Dict[str, int]) – mapping from tags to class labels

  • verbose – if True, uses tqdm

Returns

updated dataframe with class_column

Return type

pd.DataFrame

catalyst.contrib.utils.pandas.get_dataset_labeling(dataframe: pandas.core.frame.DataFrame, tag_column: str) → Dict[str, int][source]

Prepares a mapping using unique values from tag_column.

{
    "class_name_0": 0,
    "class_name_1": 1,
    ...
    "class_name_N": N
}
Parameters
  • dataframe – a dataset

  • tag_column – which column to use

Returns

mapping from tag to labels

Return type

Dict[str, int]
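Example (a sketch of the expected mapping, assuming unique tags are enumerated in order):

>>> import pandas as pd
>>> df = pd.DataFrame({"tag": ["cat", "dog", "cat"]})
>>> get_dataset_labeling(df, tag_column="tag")
{'cat': 0, 'dog': 1}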

catalyst.contrib.utils.pandas.merge_multiple_fold_csv(fold_name: str, paths: Optional[str]) → pandas.core.frame.DataFrame[source]

Reads csv files into one DataFrame with a fold column.

Parameters
  • fold_name (str) – current fold name

  • paths (str) – paths to csv separated by commas

Returns

merged dataframe with column fold == fold_name

Return type

pd.DataFrame

catalyst.contrib.utils.pandas.read_csv_data(in_csv: str = None, train_folds: Optional[List[int]] = None, valid_folds: Optional[List[int]] = None, infer_folds: Optional[List[int]] = None, seed: int = 42, n_folds: int = 5, in_csv_train: str = None, in_csv_valid: str = None, in_csv_infer: str = None, tag2class: Optional[Dict[str, int]] = None, class_column: str = None, tag_column: str = None) → Tuple[pandas.core.frame.DataFrame, List[dict], List[dict], List[dict]][source]

Reads a dataframe from the given in_csv path and splits it into train/valid/infer folds, or reads independent folds from the several paths in_csv_train, in_csv_valid, in_csv_infer.

Note

This function can be used with different combinations of params.
The first block is used to get the dataset from a single csv:

in_csv, train_folds, valid_folds, infer_folds, seed, n_folds

The second includes paths to separate csv files for the train/valid/infer parts:

in_csv_train, in_csv_valid, in_csv_infer

The other params (tag2class, tag_column, class_column) are optional for either block.

Parameters
  • in_csv (str) – paths to whole dataset

  • train_folds (List[int]) – train folds

  • valid_folds (List[int], optional) – valid folds. If None, takes all folds not included in train_folds

  • infer_folds (List[int], optional) – infer folds. If None, takes all folds not included in train_folds and valid_folds

  • seed (int) – seed for split

  • n_folds (int) – number of folds

  • in_csv_train (str) – paths to train csv separated by commas

  • in_csv_valid (str) – paths to valid csv separated by commas

  • in_csv_infer (str) – paths to infer csv separated by commas

  • tag2class (Dict[str, int]) – mapping from label names into ints

  • tag_column (str) – column with label names

  • class_column (str) – column to use for split

Returns

tuple with 4 elements (whole dataframe, list with train data, list with valid data and list with infer data)

Return type

(Tuple[pd.DataFrame, List[dict], List[dict], List[dict]])

catalyst.contrib.utils.pandas.balance_classes(dataframe: pandas.core.frame.DataFrame, class_column: str = 'label', random_state: int = 42, how: str = 'downsampling') → pandas.core.frame.DataFrame[source]

Balance classes in dataframe by class_column.

See also catalyst.data.sampler.BalanceClassSampler.

Parameters
  • dataframe – a dataset

  • class_column – which column to use for split

  • random_state – seed for random shuffle

  • how – sampling strategy; must be one of ["downsampling", "upsampling"]

Returns

new dataframe with balanced class_column

Return type

pd.DataFrame
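Example (a downsampling sketch, assuming each class is reduced to the minority-class count):

>>> import pandas as pd
>>> df = pd.DataFrame({"label": ["a"] * 10 + ["b"] * 2})
>>> balanced = balance_classes(df, class_column="label", how="downsampling")
>>> len(balanced)  # 2 rows per class under the stated assumption
4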

Parallel

catalyst.contrib.utils.parallel.parallel_imap(func, args, pool: Union[multiprocessing.pool.Pool, catalyst.contrib.utils.parallel.DumbPool]) → List[T][source]

@TODO: Docs. Contribution is welcome.

catalyst.contrib.utils.parallel.tqdm_parallel_imap(func, args, pool: Union[multiprocessing.pool.Pool, catalyst.contrib.utils.parallel.DumbPool], total: int = None, pbar=<class 'tqdm.std.tqdm'>) → List[T][source]

@TODO: Docs. Contribution is welcome.

catalyst.contrib.utils.parallel.get_pool(workers: int) → Union[multiprocessing.pool.Pool, catalyst.contrib.utils.parallel.DumbPool][source]

@TODO: Docs. Contribution is welcome.

Plotly

catalyst.contrib.utils.plotly.plot_tensorboard_log(logdir: Union[str, pathlib.Path], step: Optional[str] = 'batch', metrics: Optional[List[str]] = None, height: Optional[int] = None, width: Optional[int] = None) → None[source]

@TODO: Docs. Contribution is welcome.

Adapted from https://github.com/belskikh/kekas/blob/v0.1.23/kekas/utils.py#L193

catalyst.contrib.utils.plotly.plot_metrics(logdir: Union[str, pathlib.Path], step: Optional[str] = 'epoch', metrics: Optional[List[str]] = None, height: Optional[int] = None, width: Optional[int] = None) → None[source]

Plots your learning results.

Parameters
  • logdir – the logdir that was specified during training.

  • step – 'batch' or 'epoch'; which logs to show: per-batch or per-epoch

  • metrics – list of metrics to plot. The loss should be specified as 'loss', the learning rate as '_base/lr', and other metrics by the names in the metrics dict that was specified during training

  • height – the height of the whole resulting plot

  • width – the width of the whole resulting plot

Serialization

catalyst.contrib.utils.serialization.serialize(data)

Serialize the data into bytes using pickle.

Parameters

data – a value

Returns

Returns a bytes object with the data serialized by pickle.

catalyst.contrib.utils.serialization.deserialize(data)

Deserialize bytes into an object using pickle.

Parameters

data (bytes) – a bytes object containing pickle-serialized data.

Returns

Returns a value deserialized from the bytes-like object.

Visualization

catalyst.contrib.utils.visualization.plot_confusion_matrix(cm, class_names=None, normalize=False, title='confusion matrix', fname=None, show=True, figsize=12, fontsize=32, colormap='Blues')[source]

Render the confusion matrix and return matplotlib's figure with it. Normalization can be applied by setting normalize=True.

catalyst.contrib.utils.visualization.render_figure_to_tensor(figure)[source]

@TODO: Docs. Contribution is welcome.

Computer Vision utilities

Image

catalyst.contrib.utils.cv.image.has_image_extension(uri) → bool[source]

Checks that the file has an image extension.

Parameters

uri (Union[str, pathlib.Path]) – the resource to load the file from

Returns

True if file has image extension, False otherwise

Return type

bool

catalyst.contrib.utils.cv.image.imread(uri, grayscale: bool = False, expand_dims: bool = True, rootpath: Union[str, pathlib.Path] = None, **kwargs) → numpy.ndarray[source]

Reads an image from the specified file.

Parameters
  • uri (str, pathlib.Path, bytes, file) – the resource to load the image from, e.g. a filename, pathlib.Path, http address or file object, see imageio.imread docs for more info

  • grayscale (bool) – if True, the image is read as grayscale

  • expand_dims (bool) – if True, appends a channel axis to grayscale images

  • rootpath (Union[str, pathlib.Path]) – path to the resource with image (allows using a relative path)

Returns

image

Return type

np.ndarray
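Example (a usage sketch; the filename is illustrative):

>>> image = imread("picture.jpg")  # HxWxC np.ndarray
>>> gray = imread("picture.jpg", grayscale=True)  # grayscale variant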

catalyst.contrib.utils.cv.image.imwrite(**kwargs)[source]

imwrite(uri, im, format=None, **kwargs)

Write an image to the specified file. Alias for imageio.imwrite.

Parameters

**kwargs – parameters for imageio.imwrite

catalyst.contrib.utils.cv.image.imsave(**kwargs)[source]

imsave(uri, im, format=None, **kwargs)

Write an image to the specified file. Alias for imageio.imsave.

Parameters

**kwargs – parameters for imageio.imsave

catalyst.contrib.utils.cv.image.mask_to_overlay_image(image: numpy.ndarray, masks: List[numpy.ndarray], threshold: float = 0, mask_strength: float = 0.5) → numpy.ndarray[source]

Draws every mask with some color over the image.

Parameters
  • image (np.ndarray) – RGB image used as underlay for masks

  • masks (List[np.ndarray]) – list of masks

  • threshold (float) – threshold for masks binarization

  • mask_strength (float) – opacity of colorized masks

Returns

HxWx3 image with overlay

Return type

np.ndarray
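Example (a shape-level sketch with random masks):

>>> import numpy as np
>>> image = np.zeros((64, 64, 3), dtype=np.uint8)  # black RGB underlay
>>> masks = [np.random.rand(64, 64), np.random.rand(64, 64)]
>>> overlay = mask_to_overlay_image(image, masks, threshold=0.5)
>>> overlay.shape
(64, 64, 3)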

catalyst.contrib.utils.cv.image.mimread(uri, clip_range: Tuple[int, int] = None, expand_dims: bool = True, rootpath: Union[str, pathlib.Path] = None, **kwargs) → numpy.ndarray[source]

Reads multiple images from the specified file.

Parameters
  • uri (str, pathlib.Path, bytes, file) – the resource to load the image from, e.g. a filename, pathlib.Path, http address or file object, see imageio.mimread docs for more info

  • clip_range (Tuple[int, int]) – lower and upper interval edges, image values outside the interval are clipped to the interval edges

  • expand_dims (bool) – if True, appends a channel axis to grayscale images

  • rootpath (Union[str, pathlib.Path]) – path to the resource with image (allows using a relative path)

Returns

image

Return type

np.ndarray

catalyst.contrib.utils.cv.image.mimwrite_with_meta(uri, ims, meta, **kwargs)[source]

@TODO: Docs. Contribution is welcome.

Tensor

catalyst.contrib.utils.cv.tensor.tensor_from_rgb_image(image: numpy.ndarray) → torch.Tensor[source]

@TODO: Docs. Contribution is welcome.

catalyst.contrib.utils.cv.tensor.tensor_to_ndimage(images: torch.Tensor, denormalize: bool = True, mean: Tuple[float, float, float] = (0.485, 0.456, 0.406), std: Tuple[float, float, float] = (0.229, 0.224, 0.225), move_channels_dim: bool = True, dtype=<class 'numpy.float32'>) → numpy.ndarray[source]

Convert float image(s) with standard normalization to an np.ndarray in [0..1] when dtype is np.float32 and in [0..255] when dtype is np.uint8.

Parameters
  • images (torch.Tensor) – [B]xCxHxW float tensor

  • denormalize (bool) – if True, multiply image(s) by std and add mean

  • mean (Tuple[float, float, float]) – per channel mean to add

  • std (Tuple[float, float, float]) – per channel std to multiply

  • move_channels_dim (bool) – if True, convert tensor to [B]xHxWxC format

  • dtype – result ndarray dtype. Only float32 and uint8 are supported

Returns

[B]xHxWxC np.ndarray of dtype
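Example (a shape-level sketch; denormalization is disabled so values stay in [0..1]):

>>> import torch
>>> batch = torch.rand(4, 3, 224, 224)  # [B]xCxHxW float tensor
>>> images = tensor_to_ndimage(batch, denormalize=False)
>>> images.shape
(4, 224, 224, 3)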

Natural Language Processing utilities

Text

catalyst.contrib.utils.nlp.text.tokenize_text(text: str, tokenizer, max_length: int, strip: bool = True, lowercase: bool = True, remove_punctuation: bool = True) → Dict[str, numpy.array][source]

Tokenizes the given text.

Parameters
  • text (str) – text to tokenize

  • tokenizer – Tokenizer instance from HuggingFace

  • max_length (int) – maximum length of tokens

  • strip (bool) – if True, strips the text before tokenizing

  • lowercase (bool) – if True, lowercases the text before tokenizing

  • remove_punctuation (bool) – if True, removes string.punctuation from the text before tokenizing
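Example (a usage sketch assuming a HuggingFace tokenizer; the model name is illustrative):

>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
>>> features = tokenize_text(
...     "Hello, Catalyst!", tokenizer, max_length=128)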

catalyst.contrib.utils.nlp.text.process_bert_output(bert_output, hidden_size: int, output_hidden_states: bool = False, pooling_groups: List[str] = None, mask: torch.Tensor = None, level: Union[int, str] = None)[source]

Processes the BERT output.