
Contrib

NN

Criterion

class catalyst.contrib.nn.criterion.ce.MaskCrossEntropyLoss(*args, target_name: str = 'targets', mask_name: str = 'mask', **kwargs)[source]

Bases: torch.nn.modules.loss.CrossEntropyLoss

forward(input, target_mask)[source]
class catalyst.contrib.nn.criterion.ce.SymmetricCrossEntropyLoss(alpha=1.0, beta=1.0)[source]

Bases: torch.nn.modules.module.Module

__init__(alpha=1.0, beta=1.0)[source]

Symmetric Cross Entropy paper: https://arxiv.org/abs/1908.06112

Parameters
  • alpha (float) – corresponds to the overfitting issue of CE

  • beta (float) – corresponds to the flexible exploration of the robustness of RCE

forward(input, target)[source]
Parameters
  • input – shape = [batch_size; num_classes]

  • target – shape = [batch_size], values of the vector correspond to class indices
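
A minimal usage sketch, following the shapes above (the batch size and class count are illustrative assumptions):

import torch
from catalyst.contrib.nn.criterion.ce import SymmetricCrossEntropyLoss

criterion = SymmetricCrossEntropyLoss(alpha=1.0, beta=1.0)
logits = torch.randn(16, 10)            # [batch_size; num_classes]
targets = torch.randint(0, 10, (16,))   # [batch_size], class indices
loss = criterion(logits, targets)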

class catalyst.contrib.nn.criterion.ce.NaiveCrossEntropyLoss(size_average=True)[source]

Bases: torch.nn.modules.module.Module

forward(input, target)[source]
class catalyst.contrib.nn.criterion.contrastive.ContrastiveEmbeddingLoss(margin=1.0, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

Contrastive embedding loss

paper: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf

__init__(margin=1.0, reduction='mean')[source]

Constructor method for the ContrastiveEmbeddingLoss class.

Parameters
  • margin – margin parameter

  • reduction – criterion reduction type

forward(embeddings_left, embeddings_right, distance_true)[source]

Forward propagation method for the contrastive loss.

Parameters
  • embeddings_left – left objects embeddings

  • embeddings_right – right objects embeddings

  • distance_true – true distances

Returns

loss
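
A minimal sketch of calling the criterion; the exact label convention for distance_true is not stated here, so the 0/1 pair labels below are an assumption:

import torch
from catalyst.contrib.nn.criterion.contrastive import ContrastiveEmbeddingLoss

criterion = ContrastiveEmbeddingLoss(margin=1.0, reduction="mean")
embeddings_left = torch.randn(32, 128)
embeddings_right = torch.randn(32, 128)
distance_true = torch.randint(0, 2, (32,)).float()  # assumption: 0/1 pair labels
loss = criterion(embeddings_left, embeddings_right, distance_true)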

class catalyst.contrib.nn.criterion.contrastive.ContrastiveDistanceLoss(margin=1.0, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

Contrastive distance loss

__init__(margin=1.0, reduction='mean')[source]

Constructor method for the ContrastiveDistanceLoss class.

Parameters
  • margin – margin parameter

  • reduction – criterion reduction type

forward(distance_pred, distance_true)[source]

Forward propagation method for the contrastive loss.

Parameters
  • distance_pred – predicted distances

  • distance_true – true distances

Returns

loss

class catalyst.contrib.nn.criterion.contrastive.ContrastivePairwiseEmbeddingLoss(margin=1.0, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

ContrastivePairwiseEmbeddingLoss – proof of concept criterion. Still work in progress.

__init__(margin=1.0, reduction='mean')[source]

Constructor method for the ContrastivePairwiseEmbeddingLoss class.

Parameters
  • margin – margin parameter

  • reduction – criterion reduction type

forward(embeddings_pred, embeddings_true)[source]

Work in progress.

Parameters
  • embeddings_pred – predicted embeddings

  • embeddings_true – true embeddings

Returns

loss

class catalyst.contrib.nn.criterion.dice.BCEDiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', bce_weight: float = 0.5, dice_weight: float = 0.5)[source]

Bases: torch.nn.modules.module.Module

forward(outputs, targets)[source]
class catalyst.contrib.nn.criterion.dice.DiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(logits, targets)[source]
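
A minimal sketch for the dice-based segmentation criteria above; the tensor shapes are illustrative assumptions:

import torch
from catalyst.contrib.nn.criterion.dice import BCEDiceLoss, DiceLoss

logits = torch.randn(4, 1, 64, 64)                 # [bs, 1, H, W] raw outputs
masks = (torch.rand(4, 1, 64, 64) > 0.5).float()   # binary ground-truth masks

dice_loss = DiceLoss()(logits, masks)
bce_dice_loss = BCEDiceLoss(bce_weight=0.5, dice_weight=0.5)(logits, masks)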
class catalyst.contrib.nn.criterion.focal.FocalLossBinary(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

Bases: torch.nn.modules.loss._Loss

__init__(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

Compute focal loss for a binary classification problem.

forward(logits, targets)[source]
Parameters
  • logits – [bs; …]

  • targets – [bs; …]

class catalyst.contrib.nn.criterion.focal.FocalLossMultiClass(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

Bases: catalyst.contrib.nn.criterion.focal.FocalLossBinary

Compute focal loss for a multi-class problem. Ignores targets with the -1 label.

forward(logits, targets)[source]
Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; …]
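
A minimal sketch with assumed shapes, following the signature above:

import torch
from catalyst.contrib.nn.criterion.focal import FocalLossMultiClass

criterion = FocalLossMultiClass(gamma=2.0, alpha=0.25)
logits = torch.randn(8, 5)            # [bs; num_classes]
targets = torch.randint(0, 5, (8,))   # [bs]; -1 marks ignored targets
loss = criterion(logits, targets)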

class catalyst.contrib.nn.criterion.gan.MeanOutputLoss[source]

Bases: torch.nn.modules.module.Module

Criterion to compute a simple mean of the output, completely ignoring the target (may be useful, e.g., for WGAN real/fake validity averaging).

forward(output, target)[source]

Compute criterion

class catalyst.contrib.nn.criterion.gan.GradientPenaltyLoss[source]

Bases: torch.nn.modules.module.Module

Criterion to compute gradient penalty.

WARNING: should not be run with CriterionCallback; use the special GradientPenaltyCallback instead.

forward(fake_data, real_data, critic, critic_condition_args)[source]

Compute gradient penalty

class catalyst.contrib.nn.criterion.huber.HuberLoss(clip_delta=1.0, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

forward(y_pred, y_true, weights=None)[source]
class catalyst.contrib.nn.criterion.iou.IoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: torch.nn.modules.module.Module

Intersection over union (Jaccard) loss

Parameters
  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

forward(outputs, targets)[source]
class catalyst.contrib.nn.criterion.iou.BCEIoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', reduction: str = 'mean')[source]

Bases: torch.nn.modules.module.Module

Intersection over union (Jaccard) with BCE loss

Parameters
  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

  • reduction (str) – Specifies the reduction to apply to the output of BCE

forward(outputs, targets)[source]

Lovasz-Softmax and Jaccard hinge loss in PyTorch. Maxim Berman, 2018, ESAT-PSI KU Leuven (MIT License). Paper: https://arxiv.org/abs/1705.08790

class catalyst.contrib.nn.criterion.lovasz.LovaszLossBinary(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

forward(logits, targets)[source]
Parameters
  • logits – [bs; …]

  • targets – [bs; …]

class catalyst.contrib.nn.criterion.lovasz.LovaszLossMultiClass(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

forward(logits, targets)[source]
Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; …]

class catalyst.contrib.nn.criterion.lovasz.LovaszLossMultiLabel(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

forward(logits, targets)[source]
Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; num_classes; …]

class catalyst.contrib.nn.criterion.margin.MarginLoss(alpha: float = 0.2, beta: float = 1.0, skip_labels: Union[int, List[int]] = -1)[source]

Bases: torch.nn.modules.module.Module

__init__(alpha: float = 0.2, beta: float = 1.0, skip_labels: Union[int, List[int]] = -1)[source]

Constructor method for the MarginLoss class (parameters: alpha, beta, skip_labels).

forward(embeddings, targets)[source]
class catalyst.contrib.nn.criterion.triplet.TripletLoss(margin=0.3)[source]

Bases: torch.nn.modules.module.Module

Triplet loss with hard positive/negative mining. Reference: Code imported from https://github.com/NegatioN/OnlineMiningTripletLoss.

Parameters

margin (float) – margin for triplet.

__init__(margin=0.3)[source]

Constructor method for the TripletLoss class.

Parameters

margin – margin parameter.

forward(embeddings, targets)[source]

Forward propagation method for the triplet loss.

Parameters
  • embeddings – tensor of shape (batch_size, embed_dim)

  • targets – labels of the batch, of size (batch_size,)

Returns

scalar tensor containing the triplet loss

Return type

triplet_loss
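
A minimal sketch with assumed shapes; the labels are laid out so that every class has several samples available for positive/negative mining:

import torch
from catalyst.contrib.nn.criterion.triplet import TripletLoss

criterion = TripletLoss(margin=0.3)
embeddings = torch.randn(32, 128)                # (batch_size, embed_dim)
targets = torch.arange(4).repeat_interleave(8)   # (batch_size,), 8 samples per class
loss = criterion(embeddings, targets)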

class catalyst.contrib.nn.criterion.triplet.TripletPairwiseEmbeddingLoss(margin=0.3, reduction='mean')[source]

Bases: torch.nn.modules.module.Module

TripletPairwiseEmbeddingLoss – proof of concept criterion. Still work in progress.

__init__(margin=0.3, reduction='mean')[source]

Constructor method for the TripletPairwiseEmbeddingLoss class.

Parameters
  • margin – margin parameter.

  • reduction – criterion reduction type.

forward(embeddings_pred, embeddings_true)[source]

Work in progress.

Parameters
  • embeddings_pred – predicted embeddings with shape [batch_size, embedding_size]

  • embeddings_true – true embeddings with shape [batch_size, embedding_size]

Returns

loss

Return type

torch.Tensor

class catalyst.contrib.nn.criterion.wing.WingLoss(width: int = 5, curvature: float = 0.5, reduction: str = 'mean')[source]

Bases: torch.nn.modules.module.Module

forward(outputs, targets)[source]

Modules

class catalyst.contrib.nn.modules.common.Flatten[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class catalyst.contrib.nn.modules.common.Lambda(lambda_fn)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class catalyst.contrib.nn.modules.common.Normalize(**normalize_kwargs)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
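
A minimal sketch combining the common modules above in a classification head; the feature-map shape is an illustrative assumption, and it is assumed here that Normalize forwards its kwargs to torch.nn.functional.normalize:

import torch
import torch.nn as nn
from catalyst.contrib.nn.modules.common import Flatten, Lambda, Normalize

head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    Flatten(),                   # [bs, 512, 1, 1] -> [bs, 512]
    Lambda(lambda x: 2.0 * x),   # arbitrary elementwise function
    Normalize(p=2, dim=1),       # assumption: kwargs are passed to F.normalize
    nn.Linear(512, 10),
)
logits = head(torch.randn(4, 512, 7, 7))  # hypothetical backbone features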
class catalyst.contrib.nn.modules.lama.TemporalLastPooling[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor, mask: torch.Tensor = None)[source]
class catalyst.contrib.nn.modules.lama.TemporalAvgPooling[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor, mask: torch.Tensor = None)[source]
class catalyst.contrib.nn.modules.lama.TemporalMaxPooling[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor, mask: torch.Tensor = None)[source]
class catalyst.contrib.nn.modules.lama.TemporalDropLastWrapper(net)[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor, mask: torch.Tensor = None)[source]
class catalyst.contrib.nn.modules.lama.TemporalAttentionPooling(in_features, activation=None, kernel_size=1, **params)[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor, mask: torch.Tensor = None)[source]
Parameters

x – [batch_size, history_len, feature_size]

Returns

name2activation = {'sigmoid': Sigmoid(), 'softmax': Softmax(dim=1), 'tanh': Tanh()}
class catalyst.contrib.nn.modules.lama.TemporalConcatPooling(in_features, history_len=1)[source]

Bases: torch.nn.modules.module.Module

forward(x: torch.Tensor, mask: torch.Tensor = None)[source]
Parameters

x – [batch_size, history_len, feature_size]

Returns

class catalyst.contrib.nn.modules.lama.LamaPooling(in_features, groups=None)[source]

Bases: torch.nn.modules.module.Module

available_groups = ['last', 'avg', 'avg_droplast', 'max', 'max_droplast', 'sigmoid', 'sigmoid_droplast', 'softmax', 'softmax_droplast', 'tanh', 'tanh_droplast']
forward(x: torch.Tensor, mask: torch.Tensor = None)[source]
Parameters

x – [batch_size, history_len, feature_size]

Returns
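
A minimal sketch of the temporal pooling modules above on a sequence of features; in_features, the group choice, and the pooled output size are illustrative assumptions:

import torch
from catalyst.contrib.nn.modules.lama import LamaPooling, TemporalAvgPooling

x = torch.randn(8, 20, 64)   # [batch_size, history_len, feature_size]

avg_features = TemporalAvgPooling()(x)   # average over the history axis

lama = LamaPooling(in_features=64, groups=["last", "avg", "max", "softmax"])
pooled = lama(x)   # assumption: feature size grows with the number of groups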

class catalyst.contrib.nn.modules.pooling.GlobalAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.nn.modules.pooling.GlobalAvgAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.nn.modules.pooling.GlobalAvgPool2d[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.nn.modules.pooling.GlobalConcatAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.nn.modules.pooling.GlobalConcatPool2d[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.nn.modules.pooling.GlobalMaxAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.nn.modules.pooling.GlobalMaxPool2d[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
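
A minimal sketch of the 2d pooling modules above; the input shape is an illustrative assumption, and the classifier width is taken from the module's out_features helper:

import torch
import torch.nn as nn
from catalyst.contrib.nn.modules.pooling import GlobalConcatPool2d

features = torch.randn(4, 512, 7, 7)   # hypothetical backbone output

pool = GlobalConcatPool2d()            # concatenates avg- and max-pooled features
pooled = pool(features).flatten(1)     # assumption: [4, out_features(512)]
classifier = nn.Linear(GlobalConcatPool2d.out_features(512), 10)
logits = classifier(pooled)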

Optimizers

class catalyst.contrib.nn.optimizers.lamb.Lamb(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0, adam=False)[source]

Bases: torch.optim.optimizer.Optimizer

Lamb optimizer

__init__(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0, adam=False)[source]

Implements Lamb algorithm from Training BERT in 76 minutes.

Parameters
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

  • lr (float, optional) – learning rate (default: 1e-3)

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))

  • eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)

  • weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)

  • adam (bool, optional) – always use trust ratio = 1, which turns this into Adam. Useful for comparison purposes.

step(closure=None)[source]

Makes a single optimization step
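
A minimal usage sketch; the model and hyperparameters are illustrative assumptions:

import torch
from catalyst.contrib.nn.optimizers.lamb import Lamb

model = torch.nn.Linear(10, 2)   # hypothetical model
optimizer = Lamb(model.parameters(), lr=1e-3, weight_decay=0.01)

loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()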

catalyst.contrib.nn.optimizers.lamb.log_lamb_rs(optimizer: torch.optim.optimizer.Optimizer, event_writer, token_count: int)[source]

Log a histogram of trust ratio scalars across layers.

class catalyst.contrib.nn.optimizers.lookahead.Lookahead(optimizer: torch.optim.optimizer.Optimizer, k: int = 5, alpha: float = 0.5)[source]

Bases: torch.optim.optimizer.Optimizer

__init__(optimizer: torch.optim.optimizer.Optimizer, k: int = 5, alpha: float = 0.5)[source]

Taken from: https://github.com/alphadl/lookahead.pytorch

add_param_group(param_group)[source]
classmethod get_from_params(params: Dict, base_optimizer_params: Dict = None, **kwargs) → catalyst.contrib.nn.optimizers.lookahead.Lookahead[source]
load_state_dict(state_dict)[source]
state_dict()[source]
step(closure=None)[source]
update(group)[source]
update_lookahead()[source]
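
A minimal sketch of wrapping a base optimizer with Lookahead; the base optimizer and model are illustrative assumptions:

import torch
from catalyst.contrib.nn.optimizers.lookahead import Lookahead

model = torch.nn.Linear(10, 2)   # hypothetical model
base_optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
optimizer = Lookahead(base_optimizer, k=5, alpha=0.5)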
class catalyst.contrib.nn.optimizers.qhadamw.QHAdamW(params, lr=0.001, betas=(0.995, 0.999), nus=(0.7, 1.0), weight_decay=0.0, eps=1e-08)[source]

Bases: torch.optim.optimizer.Optimizer

__init__(params, lr=0.001, betas=(0.995, 0.999), nus=(0.7, 1.0), weight_decay=0.0, eps=1e-08)[source]

Combines the weight decay decoupling from AdamW (Decoupled Weight Decay Regularization. Loshchilov and Hutter, 2019) with QHAdam (Quasi-hyperbolic momentum and Adam for deep learning. Ma and Yarats, 2019).

https://github.com/iprally/qhadamw-pytorch/blob/master/qhadamw.py

Parameters
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

  • lr (float, optional) – learning rate (\(\alpha\) from the paper) (default: 1e-3)

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.995, 0.999))

  • nus (Tuple[float, float], optional) – immediate discount factors used to estimate the gradient and its square (default: (0.7, 1.0))

  • eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)

  • weight_decay (float, optional) – weight decay (L2 regularization coefficient, times two) (default: 0.0)

Example

>>> optimizer = QHAdamW(
...     model.parameters(),
...     lr=3e-4, nus=(0.8, 1.0), betas=(0.99, 0.999))
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
QHAdam paper:
step(closure=None)[source]
class catalyst.contrib.nn.optimizers.radam.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]

Bases: torch.optim.optimizer.Optimizer

__init__(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)[source]

Taken from https://github.com/LiyuanLucasLiu/RAdam

step(closure=None)[source]
class catalyst.contrib.nn.optimizers.ralamb.Ralamb(params: Iterable, lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0)[source]

Bases: torch.optim.optimizer.Optimizer

RAdam optimizer with LARS/LAMB tricks. Taken from https://github.com/mgrankin/over9000/blob/master/ralamb.py

__init__(params: Iterable, lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0)[source]
Parameters
  • params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

  • lr (float, optional) – learning rate (default: 1e-3)

  • betas (Tuple[float, float], optional) – coefficients used for computing running averages of gradient and its square (default: (0.9, 0.999))

  • eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)

  • weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)

step(closure=None)[source]

Makes a single optimization step

Schedulers

class catalyst.contrib.nn.schedulers.base.BaseScheduler(optimizer, last_epoch=-1)[source]

Bases: torch.optim.lr_scheduler._LRScheduler, abc.ABC

Base class for all schedulers with momentum update

get_momentum() → List[float][source]

Function that returns the new momentum for the optimizer

Returns

calculated momentum for every param group

Return type

List[float]

step(epoch: Optional[int] = None) → None[source]

Make one scheduler step

Parameters

epoch (int, optional) – current epoch number

class catalyst.contrib.nn.schedulers.base.BatchScheduler(optimizer, last_epoch=-1)[source]

Bases: catalyst.contrib.nn.schedulers.base.BaseScheduler, abc.ABC

class catalyst.contrib.nn.schedulers.onecycle.OneCycleLRWithWarmup(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)[source]

Bases: catalyst.contrib.nn.schedulers.base.BatchScheduler

OneCycle scheduler with warm-up and lr-decay stages.

The first stage, warm-up, increases lr from init_lr to max_lr over warmup_steps steps, while decreasing momentum from init_momentum to min_momentum.

The second stage is annealing: lr decreases from max_lr to min_lr, while momentum increases from min_momentum to max_momentum.

The third, optional, stage is lr decay.

__init__(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)[source]
Parameters
  • optimizer – PyTorch optimizer

  • num_steps (int) – total number of steps

  • lr_range – tuple with two or three elements (max_lr, min_lr, [final_lr])

  • init_lr (float, optional) – initial lr

  • warmup_steps (int) – count of steps for warm-up stage

  • warmup_fraction (float, optional) – fraction in [0; 1) to calculate number of warmup steps. Cannot be set together with warmup_steps

  • decay_steps (int) – count of steps for lr decay stage

  • decay_fraction (float, optional) – fraction in [0; 1) to calculate number of decay steps. Cannot be set together with decay_steps

  • momentum_range – tuple with two or three elements (min_momentum, max_momentum, [final_momentum])

  • init_momentum (float, optional) – initial momentum

get_lr() → List[float][source]

Function that returns the new lr for the optimizer

Returns

calculated lr for every param group

Return type

List[float]

get_momentum() → List[float][source]

Function that returns the new momentum for the optimizer

Returns

calculated momentum for every param group

Return type

List[float]

recalculate(loader_len: int, current_step: int) → None[source]

Recalculates total num_steps for batch mode

Parameters
  • loader_len (int) – total count of batches in an epoch

  • current_step (int) – current step

reset()[source]
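
A minimal sketch of configuring the scheduler; all hyperparameters are illustrative assumptions, and as a batch scheduler it is expected to be stepped once per batch:

import torch
from catalyst.contrib.nn.schedulers.onecycle import OneCycleLRWithWarmup

model = torch.nn.Linear(10, 2)   # hypothetical model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

scheduler = OneCycleLRWithWarmup(
    optimizer,
    num_steps=1000,                  # total number of batches
    lr_range=(1e-2, 1e-4, 1e-5),     # (max_lr, min_lr, final_lr)
    warmup_fraction=0.1,             # first 10% of steps are warm-up
    momentum_range=(0.85, 0.99),     # (min_momentum, max_momentum)
)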

Models

Segmentation

class catalyst.contrib.models.cv.segmentation.unet.ResnetUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec

class catalyst.contrib.models.cv.segmentation.unet.Unet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec

class catalyst.contrib.models.cv.segmentation.linknet.Linknet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec

class catalyst.contrib.models.cv.segmentation.linknet.ResnetLinknet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec

class catalyst.contrib.models.cv.segmentation.fpn.FPNUnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec

class catalyst.contrib.models.cv.segmentation.fpn.ResnetFPNUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec

class catalyst.contrib.models.cv.segmentation.psp.PSPnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec

class catalyst.contrib.models.cv.segmentation.psp.ResnetPSPnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)[source]

Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec
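
A minimal sketch of instantiating one of the segmentation models above; the input resolution, pretrained flag, and output shape are illustrative assumptions:

import torch
from catalyst.contrib.models.cv.segmentation.unet import ResnetUnet

model = ResnetUnet(num_classes=1, arch="resnet18", pretrained=False)
images = torch.randn(2, 3, 256, 256)
logits = model(images)   # expected mask logits, assumed shape [2, 1, 256, 256]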

DL

Runner

class catalyst.contrib.dl.runner.alchemy.AlchemyRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]

Bases: catalyst.dl.core.runner.Runner

Runner wrapper with Alchemy integration hooks, powered by Catalyst.Ecosystem. Read about Alchemy here: https://alchemy.host

Example

from catalyst.dl import SupervisedAlchemyRunner

runner = SupervisedAlchemyRunner()

runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    logdir=logdir,
    num_epochs=num_epochs,
    verbose=True,
    monitoring_params={
        "token": "...", # your Alchemy token
        "project": "your_project_name",
        "experiment": "your_experiment_name",
        "group": "your_experiment_group_name"
    }
)
run_experiment(experiment: catalyst.dl.core.experiment.Experiment)[source]

Starts experiment

Parameters

experiment (Experiment) – experiment class

class catalyst.contrib.dl.runner.alchemy.SupervisedAlchemyRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None, input_key: Any = 'features', output_key: Any = 'logits', input_target_key: str = 'targets')[source]

Bases: catalyst.contrib.dl.runner.alchemy.AlchemyRunner, catalyst.dl.runner.supervised.SupervisedRunner

SupervisedRunner with Alchemy

class catalyst.contrib.dl.runner.neptune.NeptuneRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]

Bases: catalyst.dl.core.runner.Runner

Runner wrapper with Neptune integration hooks. Read about Neptune here: https://neptune.ai

Examples

Initialize runner:

from catalyst.dl import SupervisedNeptuneRunner
runner = SupervisedNeptuneRunner()

Pass monitoring_params and train model:

runner.train(
    model=model,
    criterion=criterion,
    optimizer=optimizer,
    loaders=loaders,
    logdir=logdir,
    num_epochs=num_epochs,
    verbose=True,
    monitoring_params={
        "init": {
           "project_qualified_name": "shared/catalyst-integration",
           "api_token": "ANONYMOUS",  # api key,
        },
        "create_experiment": {
            "name": "catalyst-example", # experiment name
            "params": {"epoch_nr":10}, # immutable
            "properties": {"data_source": "cifar10"} , # mutable
            "tags": ["resnet", "no-augmentations"],
            "upload_source_files": ["**/*.py"] # grep-like
        }
    })

You can see an example experiment here: https://ui.neptune.ai/o/shared/org/catalyst-integration/e/CAT-3/logs

You can log your experiments there without registering. Just use the “ANONYMOUS” token:

runner.train(
    ...
    monitoring_params={
        "init": {
           "project_qualified_name": "shared/catalyst-integration",
            "api_token": "ANONYMOUS",  # api key,
        },
        ...
    })
run_experiment(experiment: catalyst.dl.core.experiment.Experiment)[source]
class catalyst.contrib.dl.runner.neptune.SupervisedNeptuneRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None, input_key: Any = 'features', output_key: Any = 'logits', input_target_key: str = 'targets')[source]

Bases: catalyst.contrib.dl.runner.neptune.NeptuneRunner, catalyst.dl.runner.supervised.SupervisedRunner

class catalyst.contrib.dl.runner.wandb.WandbRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)[source]

Bases: catalyst.dl.core.runner.Runner

Runner wrapper with wandb integration hooks.

run_experiment(experiment: catalyst.dl.core.experiment.Experiment)[source]
class catalyst.contrib.dl.runner.wandb.SupervisedWandbRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None, input_key: Any = 'features', output_key: Any = 'logits', input_target_key: str = 'targets')[source]

Bases: catalyst.contrib.dl.runner.wandb.WandbRunner, catalyst.dl.runner.supervised.SupervisedRunner

Registry

catalyst subpackage registries

catalyst.contrib.registry.Criterion(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds factory to registry with its __name__ attribute or provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)
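
A minimal sketch of registering a custom criterion; since the registry function returns the first factory passed, it can be used as a decorator so the class can later be referenced by name (the custom class below is an illustrative assumption):

import torch.nn as nn
from catalyst.contrib.registry import Criterion

@Criterion
class MyCustomLoss(nn.Module):
    def forward(self, logits, targets):
        # toy example: mean absolute error between logits and targets
        return (logits - targets.float()).abs().mean()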

catalyst.contrib.registry.Optimizer(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds factory to registry with its __name__ attribute or provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Scheduler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds factory to registry with its __name__ attribute or provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Module(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds factory to registry with its __name__ attribute or provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Model(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds factory to registry with its __name__ attribute or provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Sampler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds factory to registry with its __name__ attribute or provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)

catalyst.contrib.registry.Transform(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]

Adds factory to registry with its __name__ attribute or provided name. Signature is flexible.

Parameters
  • factory – Factory instance

  • factories – More instances

  • name – Provided name for the first instance. Use only when passing a single instance.

  • named_factories – Factories and their names as kwargs

Returns

First factory passed

Return type

(Factory)