Contrib

Criterion

class catalyst.contrib.criterion.ce.NaiveCrossEntropyLoss(size_average=True)[source]

Bases: torch.nn.modules.module.Module

forward(input, target)[source]
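
A minimal usage sketch; treating target as a class-probability distribution with the same shape as input (a soft-target cross-entropy) is an assumption about this "naive" variant, not documented behavior:

    import torch
    from catalyst.contrib.criterion.ce import NaiveCrossEntropyLoss

    criterion = NaiveCrossEntropyLoss()
    logits = torch.randn(8, 10)                         # [bs, num_classes] raw scores
    target = torch.softmax(torch.randn(8, 10), dim=1)   # assumed: soft targets, same shape as input
    loss = criterion(logits, target)
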
class catalyst.contrib.criterion.center.CenterLoss(num_classes, feature_dim)[source]

Bases: torch.nn.modules.module.Module

forward(feature, label)[source]
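
A minimal sketch based on the forward(feature, label) signature: a batch of embeddings plus integer class labels.

    import torch
    from catalyst.contrib.criterion.center import CenterLoss

    criterion = CenterLoss(num_classes=10, feature_dim=128)
    features = torch.randn(32, 128)        # [bs, feature_dim] embeddings
    labels = torch.randint(0, 10, (32,))   # [bs] integer class labels
    loss = criterion(features, labels)
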
class catalyst.contrib.criterion.center.CenterLossFunc[source]

Bases: torch.autograd.function.Function

static backward(ctx, grad_output)[source]
static forward(ctx, feature, label, centers)[source]
class catalyst.contrib.criterion.contrastive.ContrastiveDistanceLoss(margin=1.0)[source]

Bases: torch.nn.modules.module.Module

Contrastive distance loss

forward(dist, y)[source]
class catalyst.contrib.criterion.contrastive.ContrastiveEmbeddingLoss(margin=1.0, reduction='elementwise_mean')[source]

Bases: torch.nn.modules.module.Module

Contrastive embedding loss

Paper: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf

forward(x0, x1, y)[source]
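
A minimal sketch following the linked Hadsell et al. formulation; which value of y marks similar pairs is an assumption to verify against the implementation.

    import torch
    from catalyst.contrib.criterion.contrastive import ContrastiveEmbeddingLoss

    criterion = ContrastiveEmbeddingLoss(margin=1.0)
    x0 = torch.randn(16, 64)                 # embeddings of the first item in each pair
    x1 = torch.randn(16, 64)                 # embeddings of the second item
    y = torch.randint(0, 2, (16,)).float()   # pair labels (similar vs. dissimilar)
    loss = criterion(x0, x1, y)
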
class catalyst.contrib.criterion.dice.BCEDiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', bce_weight: float = 0.5, dice_weight: float = 0.5)[source]

Bases: torch.nn.modules.module.Module

forward(outputs, targets)[source]
class catalyst.contrib.criterion.dice.DiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(logits, targets)[source]
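
A sketch for both dice criteria under the default activation='Sigmoid': raw logits in, binary masks as targets.

    import torch
    from catalyst.contrib.criterion.dice import BCEDiceLoss, DiceLoss

    outputs = torch.randn(4, 1, 64, 64)                  # raw logits for a binary mask
    targets = (torch.rand(4, 1, 64, 64) > 0.5).float()   # ground-truth mask

    dice = DiceLoss()(outputs, targets)                  # sigmoid applied internally
    combined = BCEDiceLoss(bce_weight=0.5, dice_weight=0.5)(outputs, targets)
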
class catalyst.contrib.criterion.focal.FocalLossBinary(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

Bases: torch.nn.modules.loss._Loss

__init__(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

Compute focal loss for a binary classification problem.

forward(logits, targets)[source]
Parameters
  • logits – [bs; …]

  • targets – [bs; …]
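
A minimal sketch of the binary variant: raw logits and {0, 1} targets of matching shape.

    import torch
    from catalyst.contrib.criterion.focal import FocalLossBinary

    criterion = FocalLossBinary(gamma=2.0, alpha=0.25)
    logits = torch.randn(8)                       # [bs; ...] raw scores
    targets = torch.randint(0, 2, (8,)).float()   # binary labels
    loss = criterion(logits, targets)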

class catalyst.contrib.criterion.focal.FocalLossMultiClass(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]

Bases: catalyst.contrib.criterion.focal.FocalLossBinary

Compute focal loss for a multi-class problem. Targets with label -1 are ignored.

forward(logits, targets)[source]
Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; …]

class catalyst.contrib.criterion.huber.HuberLoss(clip_delta=1.0, reduction='elementwise_mean')[source]

Bases: torch.nn.modules.module.Module

forward(y_pred, y_true, weights=None)[source]
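
A sketch based on the forward(y_pred, y_true, weights=None) signature; treating weights as per-element weights broadcastable against the inputs is an assumption.

    import torch
    from catalyst.contrib.criterion.huber import HuberLoss

    criterion = HuberLoss(clip_delta=1.0)
    y_pred = torch.randn(16, 1)
    y_true = torch.randn(16, 1)
    loss = criterion(y_pred, y_true)
    weighted = criterion(y_pred, y_true, weights=torch.rand(16, 1))  # assumed per-element weights
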
class catalyst.contrib.criterion.iou.BCEIoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', reduction: str = 'mean')[source]

Bases: torch.nn.modules.module.Module

Intersection over union (Jaccard) with BCE loss

Parameters
  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

  • reduction (str) – Specifies the reduction to apply to the output of BCE

forward(outputs, targets)[source]
class catalyst.contrib.criterion.iou.IoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]

Bases: torch.nn.modules.module.Module

Intersection over union (Jaccard) loss

Parameters
  • eps (float) – epsilon to avoid zero division

  • threshold (float) – threshold for outputs binarization

  • activation (str) – A torch.nn activation applied to the outputs. Must be one of [‘none’, ‘Sigmoid’, ‘Softmax2d’]

forward(outputs, targets)[source]
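
A minimal sketch mirroring the dice losses above; that the returned value is 1 - IoU (lower is better) is an assumption.

    import torch
    from catalyst.contrib.criterion.iou import BCEIoULoss, IoULoss

    outputs = torch.randn(4, 1, 32, 32)                  # raw logits
    targets = (torch.rand(4, 1, 32, 32) > 0.5).float()   # binary mask

    iou = IoULoss(activation="Sigmoid")(outputs, targets)
    combined = BCEIoULoss(reduction="mean")(outputs, targets)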

Lovasz-Softmax and Jaccard hinge loss in PyTorch. Maxim Berman, 2018, ESAT-PSI KU Leuven (MIT License). Paper: https://arxiv.org/abs/1705.08790

class catalyst.contrib.criterion.lovasz.LovaszLossBinary(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

forward(logits, targets)[source]
Parameters
  • logits – [bs; …]

  • targets – [bs; …]

class catalyst.contrib.criterion.lovasz.LovaszLossMultiClass(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

forward(logits, targets)[source]
Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; …]

class catalyst.contrib.criterion.lovasz.LovaszLossMultiLabel(per_image=False, ignore=None)[source]

Bases: torch.nn.modules.loss._Loss

forward(logits, targets)[source]
Parameters
  • logits – [bs; num_classes; …]

  • targets – [bs; num_classes; …]
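
The Lovasz losses follow the same calling convention as the focal losses above; a minimal sketch of the binary variant:

    import torch
    from catalyst.contrib.criterion.lovasz import LovaszLossBinary

    criterion = LovaszLossBinary()
    logits = torch.randn(4, 32, 32)                      # [bs; ...]
    targets = (torch.rand(4, 32, 32) > 0.5).float()
    loss = criterion(logits, targets)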

class catalyst.contrib.criterion.wing.WingLoss(width: int = 5, curvature: float = 0.5, reduction: str = 'mean')[source]

Bases: torch.nn.modules.module.Module

forward(outputs, targets)[source]
catalyst.contrib.criterion.wing.wing_loss(outputs: torch.Tensor, targets: torch.Tensor, width: int = 5, curvature: float = 0.5, reduction: str = 'mean')[source]

Paper: https://arxiv.org/pdf/1711.06753.pdf

Source: https://github.com/BloodAxe/pytorch-toolbelt (see its losses module for details).
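
A sketch of both the module and the functional form, e.g. for landmark-regression targets:

    import torch
    from catalyst.contrib.criterion.wing import WingLoss, wing_loss

    outputs = torch.randn(8, 10)   # e.g. predicted landmark coordinates
    targets = torch.randn(8, 10)

    loss = WingLoss(width=5, curvature=0.5)(outputs, targets)
    same = wing_loss(outputs, targets, width=5, curvature=0.5, reduction="mean")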

Models

Segmentation

class catalyst.contrib.models.segmentation.unet.ResnetUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]

Bases: catalyst.contrib.models.segmentation.core.ResnetUnetSpec

class catalyst.contrib.models.segmentation.unet.Unet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]

Bases: catalyst.contrib.models.segmentation.core.UnetSpec

class catalyst.contrib.models.segmentation.linknet.Linknet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]

Bases: catalyst.contrib.models.segmentation.core.UnetSpec

class catalyst.contrib.models.segmentation.linknet.ResnetLinknet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]

Bases: catalyst.contrib.models.segmentation.core.ResnetUnetSpec

class catalyst.contrib.models.segmentation.fpn.FPNUnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]

Bases: catalyst.contrib.models.segmentation.core.UnetSpec

class catalyst.contrib.models.segmentation.fpn.ResnetFPNUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]

Bases: catalyst.contrib.models.segmentation.core.ResnetUnetSpec

class catalyst.contrib.models.segmentation.psp.PSPnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]

Bases: catalyst.contrib.models.segmentation.core.UnetSpec

class catalyst.contrib.models.segmentation.psp.ResnetPSPnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]

Bases: catalyst.contrib.models.segmentation.core.ResnetUnetSpec
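
All of these models share the same calling convention; a minimal sketch with Unet and ResnetUnet (that the logits keep the input spatial size, i.e. [bs; num_classes; H; W], is an assumption):

    import torch
    from catalyst.contrib.models.segmentation.unet import ResnetUnet, Unet

    images = torch.randn(2, 3, 256, 256)         # [bs; in_channels; H; W]

    model = Unet(num_classes=2, in_channels=3)
    logits = model(images)                       # expected: [2, 2, 256, 256]

    resnet_model = ResnetUnet(num_classes=2, arch="resnet18", pretrained=False)
    resnet_logits = resnet_model(images)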

Modules

class catalyst.contrib.modules.common.Flatten[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class catalyst.contrib.modules.common.Lambda(lambda_fn)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
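
A sketch showing both helpers inside an nn.Sequential:

    import torch
    import torch.nn as nn
    from catalyst.contrib.modules.common import Flatten, Lambda

    net = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3),
        nn.AdaptiveAvgPool2d(1),
        Flatten(),                   # [bs, 8, 1, 1] -> [bs, 8]
        Lambda(lambda x: 2.0 * x),   # wraps an arbitrary callable as a module
        nn.Linear(8, 10),
    )
    out = net(torch.randn(2, 3, 32, 32))   # -> [2, 10]
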
class catalyst.contrib.modules.lama.TemporalLastPooling[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class catalyst.contrib.modules.lama.TemporalAvgPooling[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class catalyst.contrib.modules.lama.TemporalMaxPooling[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class catalyst.contrib.modules.lama.TemporalDropLastWrapper(net)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
class catalyst.contrib.modules.lama.TemporalAttentionPooling(features_in, activation=None, kernel_size=1, **params)[source]

Bases: torch.nn.modules.module.Module

forward(features)[source]
Parameters

features – [batch_size, history_len, feature_size]

name2activation = {'sigmoid': Sigmoid(), 'softmax': Softmax(dim=1), 'tanh': Tanh()}
class catalyst.contrib.modules.lama.TemporalConcatPooling(features_in, history_len=1)[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
Parameters

x – [batch_size, history_len, feature_size]

class catalyst.contrib.modules.lama.LamaPooling(features_in, groups=None)[source]

Bases: torch.nn.modules.module.Module

available_groups = ['last', 'avg', 'avg_droplast', 'max', 'max_droplast', 'sigmoid', 'sigmoid_droplast', 'softmax', 'softmax_droplast', 'tanh', 'tanh_droplast']
forward(x)[source]
Parameters

x – [batch_size, history_len, feature_size]

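A sketch of temporal pooling over a history of feature vectors; the concatenated output size (features_in * number of groups) is an assumption:

    import torch
    from catalyst.contrib.modules.lama import LamaPooling

    pooling = LamaPooling(features_in=16, groups=["last", "avg", "max"])
    x = torch.randn(8, 5, 16)   # [batch_size, history_len, feature_size]
    out = pooling(x)            # pooled groups concatenated, presumably [8, 16 * 3]
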
class catalyst.contrib.modules.noisy.NoisyFactorizedLinear(in_features, out_features, sigma_zero=0.4, bias=True)[source]

Bases: torch.nn.modules.linear.Linear

NoisyNet layer with factorized Gaussian noise

N.B. nn.Linear already initializes weight and bias to

forward(input)[source]
class catalyst.contrib.modules.noisy.NoisyLinear(in_features, out_features, sigma_init=0.017, bias=True)[source]

Bases: torch.nn.modules.linear.Linear

forward(input)[source]
reset_parameters()[source]
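
A sketch of these drop-in replacements for nn.Linear, as used for NoisyNet-style exploration; whether noise is resampled on every forward pass is implementation-dependent:

    import torch
    from catalyst.contrib.modules.noisy import NoisyFactorizedLinear, NoisyLinear

    x = torch.randn(32, 128)

    layer = NoisyLinear(in_features=128, out_features=64, sigma_init=0.017)
    y = layer(x)                                          # [32, 64], with learned Gaussian noise

    factorized = NoisyFactorizedLinear(128, 64, sigma_zero=0.4)
    y2 = factorized(x)
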
class catalyst.contrib.modules.pooling.GlobalAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.modules.pooling.GlobalAvgAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.modules.pooling.GlobalAvgPool2d[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.modules.pooling.GlobalConcatAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.modules.pooling.GlobalConcatPool2d[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.modules.pooling.GlobalMaxAttnPool2d(in_features, activation_fn='Sigmoid')[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
class catalyst.contrib.modules.pooling.GlobalMaxPool2d[source]

Bases: torch.nn.modules.module.Module

forward(x)[source]
static out_features(in_features)[source]
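
A sketch of the 2d global poolings; the exact output shapes, and out_features doubling the channel count for the concat variants, are assumptions:

    import torch
    from catalyst.contrib.modules.pooling import GlobalAvgPool2d, GlobalConcatPool2d

    x = torch.randn(4, 64, 7, 7)   # [bs, channels, h, w]

    avg = GlobalAvgPool2d()(x)     # global average over the spatial dims
    cat = GlobalConcatPool2d()(x)  # concatenation of avg- and max-pooled features

    GlobalConcatPool2d.out_features(64)   # presumably 128: channels are doubled
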
class catalyst.contrib.modules.real_nvp.SquashingLayer(squashing_fn=<class 'torch.nn.modules.activation.Tanh'>)[source]

Bases: torch.nn.modules.module.Module

__init__(squashing_fn=<class 'torch.nn.modules.activation.Tanh'>)[source]

Layer that squashes samples from some distribution into a bounded range.

forward(action, action_logprob)[source]
class catalyst.contrib.modules.real_nvp.CouplingLayer(action_size, layer_fn, activation_fn=<class 'torch.nn.modules.activation.ReLU'>, bias=True, parity='odd')[source]

Bases: torch.nn.modules.module.Module

__init__(action_size, layer_fn, activation_fn=<class 'torch.nn.modules.activation.ReLU'>, bias=True, parity='odd')[source]

Conditional affine coupling layer used in Real NVP Bijector.

Original paper: https://arxiv.org/abs/1605.08803
Adaptation to RL: https://arxiv.org/abs/1804.02808

Important notes:

1. State embeddings are supposed to have size (action_size * 2).
2. Scale and translation networks used in the Real NVP Bijector both have one hidden layer of (action_size) (activation_fn) units.
3. Parity (“odd” or “even”) determines which part of the input is being copied and which is being transformed.

forward(action, state_embedding, action_logprob)[source]

Optimizers

Schedulers

class catalyst.contrib.schedulers.base.BaseScheduler(optimizer, last_epoch=-1)[source]

Bases: torch.optim.lr_scheduler._LRScheduler, abc.ABC

Base class for all schedulers with momentum update

get_momentum() → List[float][source]

Function that returns the new momentum for optimizer

Returns

calculated momentum for every param group

Return type

List[float]

step(epoch: Optional[int] = None) → None[source]

Make one scheduler step

Parameters

epoch (int, optional) – current epoch number

class catalyst.contrib.schedulers.base.BatchScheduler(optimizer, last_epoch=-1)[source]

Bases: catalyst.contrib.schedulers.base.BaseScheduler, abc.ABC

class catalyst.contrib.schedulers.onecycle.OneCycleLRWithWarmup(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)[source]

Bases: catalyst.contrib.schedulers.base.BatchScheduler

OneCycle scheduler with warm-up and lr decay stages. The first stage, warm-up, increases lr from init_lr to max_lr and decreases momentum from init_momentum to min_momentum; it takes warmup_steps steps.

The second stage is annealing: lr decreases from max_lr to min_lr while momentum increases from min_momentum to max_momentum.

The third, optional stage is lr decay. See the usage sketch after the parameter list below.

__init__(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)[source]
Parameters
  • optimizer – PyTorch optimizer

  • num_steps (int) – total number of steps

  • lr_range – tuple with two or three elements (max_lr, min_lr, [final_lr])

  • init_lr (float, optional) – initial lr

  • warmup_steps (int) – count of steps for warm-up stage

  • warmup_fraction (float, optional) – fraction in [0; 1) to calculate number of warmup steps. Cannot be set together with warmup_steps

  • decay_steps (int) – count of steps for lr decay stage

  • decay_fraction (float, optional) – fraction in [0; 1) to calculate number of decay steps. Cannot be set together with decay_steps

  • momentum_range – tuple with two or three elements (min_momentum, max_momentum, [final_momentum])

  • init_momentum (float, optional) – initial momentum
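
The usage sketch referenced above; since this is a BatchScheduler, it assumes one scheduler.step() per batch:

    import torch
    from catalyst.contrib.schedulers.onecycle import OneCycleLRWithWarmup

    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    num_steps = 1000   # e.g. batches_per_epoch * num_epochs
    scheduler = OneCycleLRWithWarmup(
        optimizer,
        num_steps=num_steps,
        lr_range=(0.005, 0.0005),      # (max_lr, min_lr)
        warmup_fraction=0.1,           # first 10% of steps warm up to max_lr
        momentum_range=(0.85, 0.95),   # (min_momentum, max_momentum)
    )

    for step in range(num_steps):
        ...                # forward, backward, optimizer.step()
        scheduler.step()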

get_lr() → List[float][source]

Function that returns the new lr for optimizer

Returns

calculated lr for every param group

Return type

List[float]

get_momentum() → List[float][source]

Function that returns the new momentum for optimizer

Returns

calculated momentum for every param group

Return type

List[float]

recalculate(loader_len: int, current_step: int) → None[source]

Recalculates total num_steps for batch mode

Parameters
  • loader_len (int) – total count of batches in an epoch

  • current_step (int) – current step

reset()[source]