Contrib¶
Criterion¶
- class catalyst.contrib.criterion.ce.MaskCrossEntropyLoss(*args, target_name: str = 'targets', mask_name: str = 'mask', **kwargs)[source]¶
  Bases: torch.nn.modules.loss.CrossEntropyLoss
- class catalyst.contrib.criterion.ce.NaiveCrossEntropyLoss(size_average=True)[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.criterion.center.CenterLoss(num_classes, feature_dim)[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.criterion.center.CenterLossFunc[source]¶
  Bases: torch.autograd.function.Function
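A minimal usage sketch for CenterLoss (CenterLossFunc is its autograd backend and is not normally called directly). The criterion(features, labels) call convention here is an assumption based on common center-loss implementations, not a documented signature:

```python
import torch
from catalyst.contrib.criterion.center import CenterLoss

# 10 classes, 128-dim embeddings (constructor arguments documented above)
criterion = CenterLoss(num_classes=10, feature_dim=128)

features = torch.randn(32, 128)        # batch of embeddings
labels = torch.randint(0, 10, (32,))   # integer class labels

loss = criterion(features, labels)     # assumed call convention
loss.backward()                        # gradients flow into the learned centers
```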
- class catalyst.contrib.criterion.contrastive.ContrastiveEmbeddingLoss(margin=1.0, reduction='elementwise_mean')[source]¶
  Bases: torch.nn.modules.module.Module
  Contrastive embedding loss.
  Paper: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
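A hedged pairwise-usage sketch. The three-argument call (left embeddings, right embeddings, pair label) follows the Hadsell-Chopra-LeCun formulation referenced above and is an assumption, not a documented signature:

```python
import torch
from catalyst.contrib.criterion.contrastive import ContrastiveEmbeddingLoss

criterion = ContrastiveEmbeddingLoss(margin=1.0)

emb_left = torch.randn(16, 64, requires_grad=True)
emb_right = torch.randn(16, 64, requires_grad=True)
pair_label = torch.randint(0, 2, (16,)).float()  # assumed 0/1 pair encoding

loss = criterion(emb_left, emb_right, pair_label)  # assumed call convention
loss.backward()
```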
- class catalyst.contrib.criterion.contrastive.ContrastiveDistanceLoss(margin=1.0)[source]¶
  Bases: torch.nn.modules.module.Module
  Contrastive distance loss.
- class catalyst.contrib.criterion.dice.BCEDiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', bce_weight: float = 0.5, dice_weight: float = 0.5)[source]¶
  Bases: torch.nn.modules.module.Module
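A sketch for binary segmentation with BCEDiceLoss; the (outputs, targets) call convention is assumed, with raw logits passed in since the documented 'Sigmoid' activation is applied internally:

```python
import torch
from catalyst.contrib.criterion.dice import BCEDiceLoss

# equal weighting of the BCE and Dice terms (documented defaults)
criterion = BCEDiceLoss(bce_weight=0.5, dice_weight=0.5)

logits = torch.randn(4, 1, 256, 256, requires_grad=True)  # raw model outputs
masks = (torch.rand(4, 1, 256, 256) > 0.5).float()        # binary targets

loss = criterion(logits, masks)  # assumed (outputs, targets) convention
loss.backward()
```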
- class catalyst.contrib.criterion.dice.DiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.criterion.focal.FocalLossBinary(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]¶
  Bases: torch.nn.modules.loss._Loss
- class catalyst.contrib.criterion.focal.FocalLossMultiClass(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')[source]¶
  Bases: catalyst.contrib.criterion.focal.FocalLossBinary
  Computes focal loss for a multi-class problem. Targets with the label -1 are ignored.
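A sketch for FocalLossMultiClass, assuming the usual (logits, targets) convention; label -1 marks entries to ignore, as stated above:

```python
import torch
from catalyst.contrib.criterion.focal import FocalLossMultiClass

criterion = FocalLossMultiClass(gamma=2.0, alpha=0.25)

logits = torch.randn(8, 5, requires_grad=True)  # 5-class logits
targets = torch.randint(0, 5, (8,))
targets[0] = -1                                 # this sample is ignored

loss = criterion(logits, targets)               # assumed call convention
loss.backward()
```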
- class catalyst.contrib.criterion.huber.HuberLoss(clip_delta=1.0, reduction='elementwise_mean')[source]¶
  Bases: torch.nn.modules.module.Module
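A sketch for HuberLoss on a regression batch; the (outputs, targets) convention is an assumption:

```python
import torch
from catalyst.contrib.criterion.huber import HuberLoss

# quadratic for errors below clip_delta, linear above it
criterion = HuberLoss(clip_delta=1.0)

outputs = torch.randn(32, 1, requires_grad=True)
targets = torch.randn(32, 1)

loss = criterion(outputs, targets)  # assumed call convention
loss.backward()
```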
- class catalyst.contrib.criterion.iou.IoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')[source]¶
  Bases: torch.nn.modules.module.Module
  Intersection over union (Jaccard) loss.
  Parameters:
  - eps (float) – epsilon to avoid zero division
  - threshold (float) – threshold for outputs binarization
  - activation (str) – a torch.nn activation applied to the outputs; must be one of ['none', 'Sigmoid', 'Softmax2d']
- class catalyst.contrib.criterion.iou.BCEIoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', reduction: str = 'mean')[source]¶
  Bases: torch.nn.modules.module.Module
  Intersection over union (Jaccard) loss with BCE.
  Parameters:
  - eps (float) – epsilon to avoid zero division
  - threshold (float) – threshold for outputs binarization
  - activation (str) – a torch.nn activation applied to the outputs; must be one of ['none', 'Sigmoid', 'Softmax2d']
  - reduction (str) – specifies the reduction to apply to the BCE output
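IoULoss and BCEIoULoss presumably follow the same (outputs, targets) convention as the Dice losses above; a brief sketch:

```python
import torch
from catalyst.contrib.criterion.iou import IoULoss, BCEIoULoss

logits = torch.randn(4, 1, 128, 128, requires_grad=True)
masks = (torch.rand(4, 1, 128, 128) > 0.5).float()

iou_loss = IoULoss(eps=1e-7)(logits, masks)                 # assumed convention
bce_iou_loss = BCEIoULoss(reduction='mean')(logits, masks)  # assumed convention
```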
The Lovasz losses below are adapted from "Lovasz-Softmax and Jaccard hinge loss in PyTorch", Maxim Berman, 2018, ESAT-PSI KU Leuven (MIT License): https://arxiv.org/abs/1705.08790
- class catalyst.contrib.criterion.lovasz.LovaszLossBinary(per_image=False, ignore=None)[source]¶
  Bases: torch.nn.modules.loss._Loss
- class catalyst.contrib.criterion.lovasz.LovaszLossMultiClass(per_image=False, ignore=None)[source]¶
  Bases: torch.nn.modules.loss._Loss
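A sketch for LovaszLossBinary; per_image presumably averages the loss over individual images (as in Berman's reference implementation), and the (logits, targets) convention is assumed:

```python
import torch
from catalyst.contrib.criterion.lovasz import LovaszLossBinary

criterion = LovaszLossBinary(per_image=True)

logits = torch.randn(4, 1, 64, 64, requires_grad=True)
masks = (torch.rand(4, 1, 64, 64) > 0.5).float()

loss = criterion(logits, masks)  # assumed call convention
loss.backward()
```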
Models¶
Segmentation¶
- class catalyst.contrib.models.segmentation.unet.ResnetUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]¶
  Bases: catalyst.contrib.models.segmentation.core.ResnetUnetSpec
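An instantiation sketch for ResnetUnet built from the documented constructor; the 224x224 input size is an assumption (ResNet encoders typically need side lengths divisible by 32):

```python
import torch
from catalyst.contrib.models.segmentation.unet import ResnetUnet

# resnet18 encoder with ImageNet-pretrained weights, one output class
model = ResnetUnet(num_classes=1, arch='resnet18', pretrained=True)

images = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    logits = model(images)  # expected shape: (2, 1, 224, 224)
```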
- class catalyst.contrib.models.segmentation.unet.Unet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]¶
  Bases: catalyst.contrib.models.segmentation.core.UnetSpec
- class catalyst.contrib.models.segmentation.linknet.Linknet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]¶
  Bases: catalyst.contrib.models.segmentation.core.UnetSpec
- class catalyst.contrib.models.segmentation.linknet.ResnetLinknet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]¶
  Bases: catalyst.contrib.models.segmentation.core.ResnetUnetSpec
- class catalyst.contrib.models.segmentation.fpn.FPNUnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]¶
  Bases: catalyst.contrib.models.segmentation.core.UnetSpec
- class catalyst.contrib.models.segmentation.fpn.ResnetFPNUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]¶
  Bases: catalyst.contrib.models.segmentation.core.ResnetUnetSpec
- class catalyst.contrib.models.segmentation.psp.PSPnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]¶
  Bases: catalyst.contrib.models.segmentation.core.UnetSpec
- class catalyst.contrib.models.segmentation.psp.ResnetPSPnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None)[source]¶
  Bases: catalyst.contrib.models.segmentation.core.ResnetUnetSpec
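All the from-scratch variants above (Unet, Linknet, FPNUnet, PSPnet) share one constructor signature, as do the Resnet* variants, so architectures can be swapped with a single parameter set; a small sketch:

```python
from catalyst.contrib.models.segmentation.unet import Unet
from catalyst.contrib.models.segmentation.linknet import Linknet
from catalyst.contrib.models.segmentation.fpn import FPNUnet
from catalyst.contrib.models.segmentation.psp import PSPnet

# the four encoder-decoder variants share the same constructor signature
for model_cls in (Unet, Linknet, FPNUnet, PSPnet):
    model = model_cls(num_classes=2, in_channels=3, num_channels=32, num_blocks=4)
```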
Modules¶
- class catalyst.contrib.modules.common.Lambda(lambda_fn)[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.lama.TemporalLastPooling[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.lama.TemporalAvgPooling[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.lama.TemporalMaxPooling[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.lama.TemporalDropLastWrapper(net)[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.lama.TemporalAttentionPooling(features_in, activation=None, kernel_size=1, **params)[source]¶
  Bases: torch.nn.modules.module.Module
  - name2activation = {'sigmoid': Sigmoid(), 'softmax': Softmax(dim=1), 'tanh': Tanh()}¶
- class catalyst.contrib.modules.lama.TemporalConcatPooling(features_in, history_len=1)[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.lama.LamaPooling(features_in, groups=None)[source]¶
  Bases: torch.nn.modules.module.Module
  - available_groups = ['last', 'avg', 'avg_droplast', 'max', 'max_droplast', 'sigmoid', 'sigmoid_droplast', 'softmax', 'softmax_droplast', 'tanh', 'tanh_droplast']¶
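A sketch for LamaPooling; the (batch, history_len, features_in) input layout is an assumption based on the temporal poolings it aggregates:

```python
import torch
from catalyst.contrib.modules.lama import LamaPooling

# groups=None presumably enables all of available_groups; here a subset
pooling = LamaPooling(features_in=64, groups=['last', 'avg', 'max'])

x = torch.randn(8, 20, 64)  # (batch, time, features) -- assumed layout
features = pooling(x)       # pooled groups, presumably concatenated along features
```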
- class catalyst.contrib.modules.noisy.NoisyFactorizedLinear(in_features, out_features, sigma_zero=0.4, bias=True)[source]¶
  Bases: torch.nn.modules.linear.Linear
  NoisyNet layer with factorized Gaussian noise.
  N.B. nn.Linear already initializes weight and bias to
- class catalyst.contrib.modules.noisy.NoisyLinear(in_features, out_features, sigma_init=0.017, bias=True)[source]¶
  Bases: torch.nn.modules.linear.Linear
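Both noisy layers subclass nn.Linear, so they are drop-in replacements for ordinary linear layers; a minimal sketch:

```python
import torch
from catalyst.contrib.modules.noisy import NoisyLinear

layer = NoisyLinear(in_features=128, out_features=64, sigma_init=0.017)

x = torch.randn(32, 128)
y = layer(x)  # NoisyNet noise is applied inside forward
assert y.shape == (32, 64)
```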
- class catalyst.contrib.modules.pooling.GlobalAttnPool2d(in_features, activation_fn='Sigmoid')[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.pooling.GlobalAvgAttnPool2d(in_features, activation_fn='Sigmoid')[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.pooling.GlobalAvgPool2d[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.pooling.GlobalConcatAttnPool2d(in_features, activation_fn='Sigmoid')[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.pooling.GlobalConcatPool2d[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.pooling.GlobalMaxAttnPool2d(in_features, activation_fn='Sigmoid')[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.pooling.GlobalMaxPool2d[source]¶
  Bases: torch.nn.modules.module.Module
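A sketch for the 2d global poolings on an NCHW feature map; the output shapes in the comments are assumptions (GlobalConcatPool2d presumably concatenates average- and max-pooled features along the channel axis):

```python
import torch
from catalyst.contrib.modules.pooling import GlobalAvgPool2d, GlobalConcatPool2d

x = torch.randn(4, 256, 7, 7)   # NCHW feature map

avg = GlobalAvgPool2d()(x)      # assumed output: (4, 256, 1, 1)
cat = GlobalConcatPool2d()(x)   # assumed output: (4, 512, 1, 1), avg + max
```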
- class catalyst.contrib.modules.real_nvp.SquashingLayer(squashing_fn=<class 'torch.nn.modules.activation.Tanh'>)[source]¶
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.modules.real_nvp.CouplingLayer(action_size, layer_fn, activation_fn=<class 'torch.nn.modules.activation.ReLU'>, bias=True, parity='odd')[source]¶
  Bases: torch.nn.modules.module.Module
  - __init__(action_size, layer_fn, activation_fn=<class 'torch.nn.modules.activation.ReLU'>, bias=True, parity='odd')[source]¶
    Conditional affine coupling layer used in the Real NVP bijector.
    Original paper: https://arxiv.org/abs/1605.08803
    Adaptation to RL: https://arxiv.org/abs/1804.02808
    Important notes:
    1. State embeddings are supposed to have size (action_size * 2).
    2. The scale and translation networks used in the Real NVP bijector both have one hidden layer of (action_size) (activation_fn) units.
    3. Parity ('odd' or 'even') determines which part of the input is copied and which is transformed.
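A heavily hedged instantiation sketch for CouplingLayer; layer_fn is assumed to be a linear-layer constructor such as torch.nn.Linear (used to build the scale and translation networks from note 2), and no forward call is shown since the expected input layout beyond note 1 is not documented here:

```python
import torch.nn as nn
from catalyst.contrib.modules.real_nvp import CouplingLayer

action_size = 6

# assumption: layer_fn is a layer constructor such as nn.Linear
coupling = CouplingLayer(
    action_size=action_size,
    layer_fn=nn.Linear,
    activation_fn=nn.ReLU,
    parity='odd',
)
```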
Optimizers¶
Schedulers¶
- class catalyst.contrib.schedulers.base.BaseScheduler(optimizer, last_epoch=-1)[source]¶
  Bases: torch.optim.lr_scheduler._LRScheduler, abc.ABC
  Base class for all schedulers with momentum update.
- class catalyst.contrib.schedulers.base.BatchScheduler(optimizer, last_epoch=-1)[source]¶
  Bases: catalyst.contrib.schedulers.base.BaseScheduler, abc.ABC
- class catalyst.contrib.schedulers.onecycle.OneCycleLRWithWarmup(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)[source]¶
  Bases: catalyst.contrib.schedulers.base.BatchScheduler
  OneCycle scheduler with warm-up and lr decay stages. The first stage, called warmup, increases lr from init_lr to max_lr and decreases momentum from init_momentum to min_momentum; it takes warmup_steps steps. The second, annealing, stage decreases lr from max_lr to min_lr and increases momentum from min_momentum to max_momentum. The third, optional, stage is lr decay.
  - __init__(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)[source]¶
    Parameters:
    - optimizer – PyTorch optimizer
    - num_steps (int) – total number of steps
    - lr_range – tuple with two or three elements (max_lr, min_lr, [final_lr])
    - init_lr (float, optional) – initial lr
    - warmup_steps (int) – number of steps in the warm-up stage
    - warmup_fraction (float, optional) – fraction in [0; 1) used to calculate the number of warmup steps; cannot be set together with warmup_steps
    - decay_steps (int) – number of steps in the lr decay stage
    - decay_fraction (float, optional) – fraction in [0; 1) used to calculate the number of decay steps; cannot be set together with decay_steps
    - momentum_range – tuple with two or three elements (min_momentum, max_momentum, [final_momentum])
    - init_momentum (float, optional) – initial momentum
  - get_lr() → List[float][source]¶
    Returns the new lr for the optimizer.
    Returns: calculated lr for every param group
    Return type: List[float]
  - get_momentum() → List[float][source]¶
    Returns the new momentum for the optimizer.
    Returns: calculated momentum for every param group
    Return type: List[float]
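A usage sketch built from the documented constructor; since OneCycleLRWithWarmup is a BatchScheduler, it is presumably stepped once per batch rather than once per epoch:

```python
import torch
from catalyst.contrib.schedulers.onecycle import OneCycleLRWithWarmup

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1.0, momentum=0.9)

num_epochs, batches_per_epoch = 5, 100
scheduler = OneCycleLRWithWarmup(
    optimizer,
    num_steps=num_epochs * batches_per_epoch,  # total number of batch steps
    lr_range=(1.0, 0.005),                     # (max_lr, min_lr)
    warmup_fraction=0.1,                       # 10% of steps for warm-up
    momentum_range=(0.8, 0.99),                # (min_momentum, max_momentum)
)

for _ in range(num_epochs * batches_per_epoch):
    optimizer.step()                           # training step would go here
    scheduler.step()                           # assumed per-batch stepping
```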