Contrib
NN
Criterion
- class catalyst.contrib.nn.criterion.ce.MaskCrossEntropyLoss(*args, target_name: str = 'targets', mask_name: str = 'mask', **kwargs)
  Bases: torch.nn.modules.loss.CrossEntropyLoss
- class catalyst.contrib.nn.criterion.ce.SymmetricCrossEntropyLoss(alpha=1.0, beta=1.0)
  Bases: torch.nn.modules.module.Module

  __init__(alpha=1.0, beta=1.0)
  Symmetric Cross Entropy. Paper: https://arxiv.org/abs/1908.06112
  Parameters:
  - alpha (float) – weight of the CE term; addresses the overfitting issue of CE
  - beta (float) – weight of the RCE term; allows flexible exploration of the robustness of reverse cross entropy
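  A minimal usage sketch follows. It assumes the criterion takes raw logits and integer class targets, as torch.nn.CrossEntropyLoss does; check the source for the exact contract.

  import torch
  from catalyst.contrib.nn.criterion.ce import SymmetricCrossEntropyLoss

  # loss = alpha * CE + beta * RCE (per the paper)
  criterion = SymmetricCrossEntropyLoss(alpha=1.0, beta=1.0)
  logits = torch.randn(16, 10, requires_grad=True)  # [batch_size, num_classes]
  targets = torch.randint(0, 10, (16,))             # integer class labels
  loss = criterion(logits, targets)
  loss.backward()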
- class catalyst.contrib.nn.criterion.ce.NaiveCrossEntropyLoss(size_average=True)
  Bases: torch.nn.modules.module.Module
class
catalyst.contrib.nn.criterion.contrastive.
ContrastiveEmbeddingLoss
(margin=1.0, reduction='mean')[source]¶ Bases:
torch.nn.modules.module.Module
Contrastive embedding loss
paper: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
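  A sketch of the pairwise setup. The call signature shown here — two embedding batches plus a binary pair label — is an assumption based on the referenced paper; verify the argument order and label semantics against the source.

  import torch
  from catalyst.contrib.nn.criterion.contrastive import ContrastiveEmbeddingLoss

  criterion = ContrastiveEmbeddingLoss(margin=1.0, reduction='mean')
  left = torch.randn(8, 128, requires_grad=True)   # first element of each pair
  right = torch.randn(8, 128, requires_grad=True)  # second element of each pair
  pair_labels = torch.randint(0, 2, (8,)).float()  # binary similar/dissimilar label
  loss = criterion(left, right, pair_labels)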
- class catalyst.contrib.nn.criterion.contrastive.ContrastiveDistanceLoss(margin=1.0, reduction='mean')
  Bases: torch.nn.modules.module.Module
  Contrastive distance loss.
- class catalyst.contrib.nn.criterion.contrastive.ContrastivePairwiseEmbeddingLoss(margin=1.0, reduction='mean')
  Bases: torch.nn.modules.module.Module
  Proof-of-concept criterion; still a work in progress.
- class catalyst.contrib.nn.criterion.dice.BCEDiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', bce_weight: float = 0.5, dice_weight: float = 0.5)
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.criterion.dice.DiceLoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.criterion.focal.FocalLossBinary(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')
  Bases: torch.nn.modules.loss._Loss
- class catalyst.contrib.nn.criterion.focal.FocalLossMultiClass(ignore: int = None, reduced: bool = False, gamma: float = 2.0, alpha: float = 0.25, threshold: float = 0.5, reduction: str = 'mean')
  Bases: catalyst.contrib.nn.criterion.focal.FocalLossBinary
  Computes focal loss for multi-class problems. Targets with the -1 label are ignored.
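  Because targets labeled -1 are skipped, partially labeled batches need no extra masking. A hedged sketch, assuming a CrossEntropy-style logits/targets call:

  import torch
  from catalyst.contrib.nn.criterion.focal import FocalLossMultiClass

  criterion = FocalLossMultiClass(gamma=2.0, alpha=0.25)
  logits = torch.randn(6, 4, requires_grad=True)  # [batch_size, num_classes]
  targets = torch.tensor([0, 2, 1, -1, 3, -1])    # -1 entries are ignored
  loss = criterion(logits, targets)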
- class catalyst.contrib.nn.criterion.gan.MeanOutputLoss
  Bases: torch.nn.modules.module.Module
  Criterion that computes a simple mean of the output, completely ignoring the target (useful, e.g., for averaging WGAN real/fake validity).
- class catalyst.contrib.nn.criterion.gan.GradientPenaltyLoss
  Bases: torch.nn.modules.module.Module
  Criterion that computes the gradient penalty.
  WARNING: should not be run with CriterionCallback; use the dedicated GradientPenaltyCallback instead.
- class catalyst.contrib.nn.criterion.huber.HuberLoss(clip_delta=1.0, reduction='mean')
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.criterion.iou.IoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid')
  Bases: torch.nn.modules.module.Module
  Intersection-over-union (Jaccard) loss.
  Parameters:
  - eps (float) – epsilon to avoid zero division
  - threshold (float) – threshold for output binarization
  - activation (str) – a torch.nn activation applied to the outputs; must be one of ['none', 'Sigmoid', 'Softmax2d']
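  A sketch for a binary segmentation batch. With activation='Sigmoid' the criterion applies a sigmoid to the raw logits internally, per the parameter description above.

  import torch
  from catalyst.contrib.nn.criterion.iou import IoULoss

  criterion = IoULoss(eps=1e-7, activation='Sigmoid')
  logits = torch.randn(4, 1, 64, 64, requires_grad=True)  # raw model outputs
  masks = torch.randint(0, 2, (4, 1, 64, 64)).float()     # binary ground-truth masks
  loss = criterion(logits, masks)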
- class catalyst.contrib.nn.criterion.iou.BCEIoULoss(eps: float = 1e-07, threshold: float = None, activation: str = 'Sigmoid', reduction: str = 'mean')
  Bases: torch.nn.modules.module.Module
  Intersection-over-union (Jaccard) combined with BCE loss.
  Parameters:
  - eps (float) – epsilon to avoid zero division
  - threshold (float) – threshold for output binarization
  - activation (str) – a torch.nn activation applied to the outputs; must be one of ['none', 'Sigmoid', 'Softmax2d']
  - reduction (str) – specifies the reduction applied to the BCE output

The Lovasz criteria below implement the Lovasz-Softmax and Jaccard hinge losses in PyTorch (https://arxiv.org/abs/1705.08790; Maxim Berman, 2018, ESAT-PSI KU Leuven, MIT License).
- class catalyst.contrib.nn.criterion.lovasz.LovaszLossBinary(per_image=False, ignore=None)
  Bases: torch.nn.modules.loss._Loss
- class catalyst.contrib.nn.criterion.lovasz.LovaszLossMultiClass(per_image=False, ignore=None)
  Bases: torch.nn.modules.loss._Loss
- class catalyst.contrib.nn.criterion.lovasz.LovaszLossMultiLabel(per_image=False, ignore=None)
  Bases: torch.nn.modules.loss._Loss
- class catalyst.contrib.nn.criterion.margin.MarginLoss(alpha: float = 0.2, beta: float = 1.0, skip_labels: Union[int, List[int]] = -1)
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.criterion.triplet.TripletLoss(margin=0.3)
  Bases: torch.nn.modules.module.Module
  Triplet loss with hard positive/negative mining.
  Reference: code imported from https://github.com/NegatioN/OnlineMiningTripletLoss
  Parameters:
  - margin (float) – margin for the triplet loss
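  Because the hard positive/negative mining happens online, the criterion is typically fed a batch of embeddings together with class labels rather than pre-built triplets. This call pattern is an assumption based on the referenced OnlineMiningTripletLoss repository; verify it against the source.

  import torch
  from catalyst.contrib.nn.criterion.triplet import TripletLoss

  criterion = TripletLoss(margin=0.3)
  embeddings = torch.randn(32, 128, requires_grad=True)  # one embedding per sample
  labels = torch.randint(0, 8, (32,))                    # class ids used for mining
  loss = criterion(embeddings, labels)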
- class catalyst.contrib.nn.criterion.triplet.TripletPairwiseEmbeddingLoss(margin=0.3, reduction='mean')
  Bases: torch.nn.modules.module.Module
  Proof-of-concept criterion; still a work in progress.
Modules
- class catalyst.contrib.nn.modules.common.Lambda(lambda_fn)
  Bases: torch.nn.modules.module.Module
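  Lambda wraps an arbitrary function as an nn.Module, which makes it easy to drop tensor-shaping steps into nn.Sequential. A small sketch:

  import torch
  import torch.nn as nn
  from catalyst.contrib.nn.modules.common import Lambda

  model = nn.Sequential(
      nn.Conv2d(3, 8, kernel_size=3),
      nn.AdaptiveAvgPool2d(1),
      Lambda(lambda x: x.flatten(1)),  # lambda_fn is applied in forward()
      nn.Linear(8, 2),
  )
  out = model(torch.randn(4, 3, 32, 32))  # -> shape [4, 2]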
- class catalyst.contrib.nn.modules.common.Normalize(**normalize_kwargs)
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.lama.TemporalLastPooling
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.lama.TemporalAvgPooling
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.lama.TemporalMaxPooling
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.lama.TemporalDropLastWrapper(net)
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.lama.TemporalAttentionPooling(in_features, activation=None, kernel_size=1, **params)
  Bases: torch.nn.modules.module.Module

  forward(x: torch.Tensor, mask: torch.Tensor = None)
  Parameters:
  - x – tensor of shape [batch_size, history_len, feature_size]

  name2activation = {'sigmoid': Sigmoid(), 'softmax': Softmax(dim=1), 'tanh': Tanh()}
- class catalyst.contrib.nn.modules.lama.TemporalConcatPooling(in_features, history_len=1)
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.lama.LamaPooling(in_features, groups=None)
  Bases: torch.nn.modules.module.Module

  available_groups = ['last', 'avg', 'avg_droplast', 'max', 'max_droplast', 'sigmoid', 'sigmoid_droplast', 'softmax', 'softmax_droplast', 'tanh', 'tanh_droplast']
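  A sketch that pools a temporal feature tensor with a subset of the available groups. The assumption here is that the selected groups' results are concatenated along the feature axis (giving len(groups) * in_features output features); verify against the source.

  import torch
  from catalyst.contrib.nn.modules.lama import LamaPooling

  pooling = LamaPooling(in_features=64, groups=["last", "avg", "max"])
  features = torch.randn(8, 10, 64)  # [batch_size, history_len, in_features]
  pooled = pooling(features)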
- class catalyst.contrib.nn.modules.pooling.GlobalAttnPool2d(in_features, activation_fn='Sigmoid')
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.pooling.GlobalAvgAttnPool2d(in_features, activation_fn='Sigmoid')
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.pooling.GlobalAvgPool2d
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.pooling.GlobalConcatAttnPool2d(in_features, activation_fn='Sigmoid')
  Bases: torch.nn.modules.module.Module
- class catalyst.contrib.nn.modules.pooling.GlobalConcatPool2d
  Bases: torch.nn.modules.module.Module
Optimizers
- class catalyst.contrib.nn.optimizers.lamb.Lamb(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0, adam=False)
  Bases: torch.optim.optimizer.Optimizer
  Lamb optimizer.

  __init__(params, lr=0.001, betas=(0.9, 0.999), eps=1e-06, weight_decay=0, adam=False)
  Implements the Lamb algorithm from Training BERT in 76 minutes (https://arxiv.org/abs/1904.00962).
  Parameters:
  - params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
  - lr (float, optional) – learning rate (default: 1e-3)
  - betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999))
  - eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-6)
  - weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
  - adam (bool, optional) – always use trust ratio = 1, which turns this into Adam; useful for comparison purposes
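  Lamb is a drop-in torch.optim.Optimizer; a minimal training-step sketch:

  import torch
  import torch.nn.functional as F
  from catalyst.contrib.nn.optimizers.lamb import Lamb

  model = torch.nn.Linear(10, 2)
  optimizer = Lamb(model.parameters(), lr=1e-3, weight_decay=0.01)

  x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
  loss = F.cross_entropy(model(x), y)
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()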
- catalyst.contrib.nn.optimizers.lamb.log_lamb_rs(optimizer: torch.optim.optimizer.Optimizer, event_writer, token_count: int)
  Logs a histogram of trust-ratio scalars across layers.
- class catalyst.contrib.nn.optimizers.lookahead.Lookahead(optimizer: torch.optim.optimizer.Optimizer, k: int = 5, alpha: float = 0.5)
  Bases: torch.optim.optimizer.Optimizer

  __init__(optimizer: torch.optim.optimizer.Optimizer, k: int = 5, alpha: float = 0.5)
  Taken from https://github.com/alphadl/lookahead.pytorch
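  Lookahead wraps any base optimizer: the inner optimizer takes k fast steps, then the slow weights are interpolated toward the fast weights with factor alpha. A sketch, pairing it with RAdam (a combination often called Ranger):

  import torch
  from catalyst.contrib.nn.optimizers.lookahead import Lookahead
  from catalyst.contrib.nn.optimizers.radam import RAdam

  model = torch.nn.Linear(10, 2)
  base_optimizer = RAdam(model.parameters(), lr=1e-3)
  optimizer = Lookahead(base_optimizer, k=5, alpha=0.5)
  # use `optimizer` like any torch.optim.Optimizer from here on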
- class catalyst.contrib.nn.optimizers.qhadamw.QHAdamW(params, lr=0.001, betas=(0.995, 0.999), nus=(0.7, 1.0), weight_decay=0.0, eps=1e-08)
  Bases: torch.optim.optimizer.Optimizer

  __init__(params, lr=0.001, betas=(0.995, 0.999), nus=(0.7, 1.0), weight_decay=0.0, eps=1e-08)
  Combines the weight decay decoupling from AdamW (Decoupled Weight Decay Regularization; Loshchilov and Hutter, 2019) with QHAdam (Quasi-hyperbolic momentum and Adam for deep learning; Ma and Yarats, 2019).
  Reference implementation: https://github.com/iprally/qhadamw-pytorch/blob/master/qhadamw.py
  Parameters:
  - params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
  - lr (float, optional) – learning rate (α in the paper) (default: 1e-3)
  - betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.995, 0.999))
  - nus (Tuple[float, float], optional) – immediate discount factors used to estimate the gradient and its square (default: (0.7, 1.0))
  - eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
  - weight_decay (float, optional) – weight decay (L2 regularization coefficient, times two) (default: 0.0)

  Example:
  >>> optimizer = QHAdamW(
  ...     model.parameters(),
  ...     lr=3e-4, nus=(0.8, 1.0), betas=(0.99, 0.999))
  >>> optimizer.zero_grad()
  >>> loss_fn(model(input), target).backward()
  >>> optimizer.step()
- class catalyst.contrib.nn.optimizers.radam.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
  Bases: torch.optim.optimizer.Optimizer

  __init__(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0)
  Taken from https://github.com/LiyuanLucasLiu/RAdam
- class catalyst.contrib.nn.optimizers.ralamb.Ralamb(params: Iterable, lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0)
  Bases: torch.optim.optimizer.Optimizer
  RAdam optimizer with LARS/LAMB tricks. Taken from https://github.com/mgrankin/over9000/blob/master/ralamb.py

  __init__(params: Iterable, lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0)
  Parameters:
  - params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
  - lr (float, optional) – learning rate (default: 1e-3)
  - betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999))
  - eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
  - weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
Schedulers
- class catalyst.contrib.nn.schedulers.base.BaseScheduler(optimizer, last_epoch=-1)
  Bases: torch.optim.lr_scheduler._LRScheduler, abc.ABC
  Base class for all schedulers with momentum update.
- class catalyst.contrib.nn.schedulers.base.BatchScheduler(optimizer, last_epoch=-1)
  Bases: catalyst.contrib.nn.schedulers.base.BaseScheduler, abc.ABC
- class catalyst.contrib.nn.schedulers.onecycle.OneCycleLRWithWarmup(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)
  Bases: catalyst.contrib.nn.schedulers.base.BatchScheduler
  OneCycle scheduler with warm-up and lr decay stages.
  The first stage, called warmup, increases lr from init_lr to max_lr and decreases momentum from init_momentum to min_momentum; it takes warmup_steps steps.
  The second stage is annealing: lr decreases from max_lr to min_lr while momentum increases from min_momentum to max_momentum.
  The third, optional, stage is lr decay.

  __init__(optimizer: torch.optim.optimizer.Optimizer, num_steps: int, lr_range=(1.0, 0.005), init_lr: float = None, warmup_steps: int = 0, warmup_fraction: float = None, decay_steps: int = 0, decay_fraction: float = None, momentum_range=(0.8, 0.99, 0.999), init_momentum: float = None)
  Parameters:
  - optimizer – PyTorch optimizer
  - num_steps (int) – total number of steps
  - lr_range – tuple with two or three elements (max_lr, min_lr, [final_lr])
  - init_lr (float, optional) – initial lr
  - warmup_steps (int) – number of steps for the warm-up stage
  - warmup_fraction (float, optional) – fraction in [0; 1) used to compute the number of warm-up steps; cannot be set together with warmup_steps
  - decay_steps (int) – number of steps for the lr decay stage
  - decay_fraction (float, optional) – fraction in [0; 1) used to compute the number of decay steps; cannot be set together with decay_steps
  - momentum_range – tuple with two or three elements (min_momentum, max_momentum, [final_momentum])
  - init_momentum (float, optional) – initial momentum

  get_lr() → List[float]
  Returns the new lr for the optimizer.
  Returns: calculated lr for every param group
  Return type: List[float]

  get_momentum() → List[float]
  Returns the new momentum for the optimizer.
  Returns: calculated momentum for every param group
  Return type: List[float]
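  A wiring sketch with illustrative numbers. Since this is a BatchScheduler, the assumption is that it is stepped once per batch rather than once per epoch:

  import torch
  from catalyst.contrib.nn.schedulers.onecycle import OneCycleLRWithWarmup

  model = torch.nn.Linear(10, 2)
  optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
  scheduler = OneCycleLRWithWarmup(
      optimizer,
      num_steps=1000,               # total number of batches in the run
      lr_range=(0.005, 0.00005),    # (max_lr, min_lr)
      warmup_steps=100,             # ramp the lr up over the first 100 batches
      momentum_range=(0.85, 0.95),  # (min_momentum, max_momentum)
  )
  # in the training loop, after each batch:
  #     optimizer.step()
  #     scheduler.step()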
Models
Segmentation
- class catalyst.contrib.models.cv.segmentation.unet.ResnetUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)
  Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec
- class catalyst.contrib.models.cv.segmentation.unet.Unet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)
  Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec
- class catalyst.contrib.models.cv.segmentation.linknet.Linknet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)
  Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec
- class catalyst.contrib.models.cv.segmentation.linknet.ResnetLinknet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)
  Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec
- class catalyst.contrib.models.cv.segmentation.fpn.FPNUnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)
  Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec
- class catalyst.contrib.models.cv.segmentation.fpn.ResnetFPNUnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)
  Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec
- class catalyst.contrib.models.cv.segmentation.psp.PSPnet(num_classes: int = 1, in_channels: int = 3, num_channels: int = 32, num_blocks: int = 4, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)
  Bases: catalyst.contrib.models.cv.segmentation.core.UnetSpec
- class catalyst.contrib.models.cv.segmentation.psp.ResnetPSPnet(num_classes: int = 1, arch: str = 'resnet18', pretrained: bool = True, encoder_params: Dict = None, bridge_params: Dict = None, decoder_params: Dict = None, head_params: Dict = None, state_dict: Union[dict, str, pathlib.Path] = None)
  Bases: catalyst.contrib.models.cv.segmentation.core.ResnetUnetSpec
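These models are ordinary nn.Modules mapping an image batch to per-class logit maps. A minimal sketch (the output spatial size matching the input is an assumption, typical of U-Net-style decoders):

import torch
from catalyst.contrib.models.cv.segmentation.unet import Unet

model = Unet(num_classes=2, in_channels=3, num_channels=32, num_blocks=4)
images = torch.randn(4, 3, 256, 256)  # [batch, channels, height, width]
logits = model(images)                # expected: [4, 2, 256, 256]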
DL
Runner
- class catalyst.contrib.dl.runner.alchemy.AlchemyRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)
  Bases: catalyst.dl.core.runner.Runner
  Runner wrapper with Alchemy integration hooks. Read about Alchemy at https://alchemy.host (powered by Catalyst.Ecosystem).

  Example:

  from catalyst.dl import SupervisedAlchemyRunner

  runner = SupervisedAlchemyRunner()
  runner.train(
      model=model,
      criterion=criterion,
      optimizer=optimizer,
      loaders=loaders,
      logdir=logdir,
      num_epochs=num_epochs,
      verbose=True,
      monitoring_params={
          "token": "...",  # your Alchemy token
          "project": "your_project_name",
          "experiment": "your_experiment_name",
          "group": "your_experiment_group_name",
      },
  )
  run_experiment(experiment: catalyst.dl.core.experiment.Experiment)
  Starts the experiment.
  Parameters:
  - experiment (Experiment) – the experiment to run
- class catalyst.contrib.dl.runner.alchemy.SupervisedAlchemyRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None, input_key: Any = 'features', output_key: Any = 'logits', input_target_key: str = 'targets')
  Bases: catalyst.contrib.dl.runner.alchemy.AlchemyRunner, catalyst.dl.runner.supervised.SupervisedRunner
  SupervisedRunner with Alchemy integration.
- class catalyst.contrib.dl.runner.neptune.NeptuneRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)
  Bases: catalyst.dl.core.runner.Runner
  Runner wrapper with Neptune integration hooks. Read about Neptune at https://neptune.ai

  Examples
  Initialize the runner:

  from catalyst.dl import SupervisedNeptuneRunner

  runner = SupervisedNeptuneRunner()

  Pass monitoring_params and train the model:

  runner.train(
      model=model,
      criterion=criterion,
      optimizer=optimizer,
      loaders=loaders,
      logdir=logdir,
      num_epochs=num_epochs,
      verbose=True,
      monitoring_params={
          "init": {
              "project_qualified_name": "shared/catalyst-integration",
              "api_token": "ANONYMOUS",  # api key
          },
          "create_experiment": {
              "name": "catalyst-example",  # experiment name
              "params": {"epoch_nr": 10},  # immutable
              "properties": {"data_source": "cifar10"},  # mutable
              "tags": ["resnet", "no-augmentations"],
              "upload_source_files": ["**/*.py"],  # glob-like patterns
          },
      },
  )

  An example experiment: https://ui.neptune.ai/o/shared/org/catalyst-integration/e/CAT-3/logs
  You can log your experiments there without registering; just use the "ANONYMOUS" token:

  runner.train(
      ...
      monitoring_params={
          "init": {
              "project_qualified_name": "shared/catalyst-integration",
              "api_token": "ANONYMOUS",  # api key
          },
          ...
      },
  )
- class catalyst.contrib.dl.runner.neptune.SupervisedNeptuneRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None, input_key: Any = 'features', output_key: Any = 'logits', input_target_key: str = 'targets')
  Bases: catalyst.contrib.dl.runner.neptune.NeptuneRunner, catalyst.dl.runner.supervised.SupervisedRunner
- class catalyst.contrib.dl.runner.wandb.WandbRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None)
  Bases: catalyst.dl.core.runner.Runner
  Runner wrapper with Weights & Biases (wandb) integration hooks.
  run_experiment(experiment: catalyst.dl.core.experiment.Experiment)
  Starts the experiment.
  Parameters:
  - experiment (Experiment) – the experiment to run
- class catalyst.contrib.dl.runner.wandb.SupervisedWandbRunner(model: torch.nn.modules.module.Module = None, device: Union[str, torch.device] = None, input_key: Any = 'features', output_key: Any = 'logits', input_target_key: str = 'targets')
  Bases: catalyst.contrib.dl.runner.wandb.WandbRunner, catalyst.dl.runner.supervised.SupervisedRunner
  SupervisedRunner with Weights & Biases integration.
Registry
Catalyst subpackage registries.
- catalyst.contrib.registry.Criterion(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]
  Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.
  Parameters:
  - factory – factory instance
  - factories – more instances
  - name – name for the first instance; use only when passing a single instance
  - named_factories – factories and their names as kwargs
  Returns: the first factory passed
  Return type: Factory
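  Since the registry function returns the first factory passed, it works both as a decorator and as a plain call. A sketch with a hypothetical custom loss (MyHingeLoss is illustrative, not part of the library):

  import torch.nn as nn
  from catalyst.contrib.registry import Criterion

  @Criterion  # registered under its __name__, 'MyHingeLoss'
  class MyHingeLoss(nn.Module):
      def forward(self, outputs, targets):
          return (1 - outputs * targets).clamp(min=0).mean()

  # or, equivalently, register explicitly under a custom name:
  # Criterion(MyHingeLoss, name="my_hinge")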
- catalyst.contrib.registry.Optimizer(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]
  Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.
  Parameters:
  - factory – factory instance
  - factories – more instances
  - name – name for the first instance; use only when passing a single instance
  - named_factories – factories and their names as kwargs
  Returns: the first factory passed
  Return type: Factory
- catalyst.contrib.registry.Scheduler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]
  Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.
  Parameters:
  - factory – factory instance
  - factories – more instances
  - name – name for the first instance; use only when passing a single instance
  - named_factories – factories and their names as kwargs
  Returns: the first factory passed
  Return type: Factory
- catalyst.contrib.registry.Module(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]
  Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.
  Parameters:
  - factory – factory instance
  - factories – more instances
  - name – name for the first instance; use only when passing a single instance
  - named_factories – factories and their names as kwargs
  Returns: the first factory passed
  Return type: Factory
- catalyst.contrib.registry.Model(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]
  Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.
  Parameters:
  - factory – factory instance
  - factories – more instances
  - name – name for the first instance; use only when passing a single instance
  - named_factories – factories and their names as kwargs
  Returns: the first factory passed
  Return type: Factory
- catalyst.contrib.registry.Sampler(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]
  Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.
  Parameters:
  - factory – factory instance
  - factories – more instances
  - name – name for the first instance; use only when passing a single instance
  - named_factories – factories and their names as kwargs
  Returns: the first factory passed
  Return type: Factory
- catalyst.contrib.registry.Transform(factory: Union[Type, Callable[[...], Any]] = None, *factories: Union[Type, Callable[[...], Any]], name: str = None, **named_factories: Union[Type, Callable[[...], Any]]) → Union[Type, Callable[[...], Any]]
  Adds a factory to the registry under its __name__ attribute or the provided name. The signature is flexible.
  Parameters:
  - factory – factory instance
  - factories – more instances
  - name – name for the first instance; use only when passing a single instance
  - named_factories – factories and their names as kwargs
  Returns: the first factory passed
  Return type: Factory