Typing¶
All Catalyst custom types are defined in this module.
catalyst.typing.Model¶
alias of torch.nn.modules.module.Module
catalyst.typing.Criterion¶
alias of torch.nn.modules.module.Module
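Both aliases resolve to torch.nn.Module, so they are mainly useful for annotating user code. A minimal sketch; the compute_loss helper is hypothetical and not part of Catalyst:

import torch
from catalyst.typing import Model, Criterion

def compute_loss(model: Model, criterion: Criterion,
                 batch: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: forward pass followed by the loss computation.
    logits = model(batch)
    return criterion(logits, targets)

# Any torch.nn.Module satisfies both aliases.
model: Model = torch.nn.Linear(10, 2)
criterion: Criterion = torch.nn.CrossEntropyLoss()
loss = compute_loss(model, criterion, torch.randn(4, 10), torch.tensor([0, 1, 0, 1]))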
class catalyst.typing.Optimizer(params, defaults)[source]¶
Bases: object

Base class for all optimizers.

Warning

Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators over values of dictionaries.

Parameters
- params (iterable) – an iterable of torch.Tensors or dicts. Specifies what Tensors should be optimized.
- defaults (dict) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
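Concrete optimizers subclass this base and pass their per-group options through defaults. A minimal sketch, assuming plain gradient descent; PlainSGD is an illustrative name, not a Catalyst or PyTorch class:

import torch
from catalyst.typing import Optimizer

class PlainSGD(Optimizer):
    # Illustrative subclass (not part of Catalyst): plain gradient descent.
    def __init__(self, params, lr=0.01):
        # ``defaults`` fills in options that a param group does not specify.
        super().__init__(params, defaults={"lr": lr})

    def step(self, closure=None):
        loss = closure() if closure is not None else None
        with torch.no_grad():
            for group in self.param_groups:
                for p in group["params"]:
                    if p.grad is not None:
                        p.add_(p.grad, alpha=-group["lr"])
        return loss

model = torch.nn.Linear(4, 1)
optimizer = PlainSGD(model.parameters(), lr=0.05)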
 
add_param_group(param_group)[source]¶
Add a param group to the Optimizer’s param_groups.

This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.

Parameters
- param_group (dict) – Specifies what Tensors should be optimized along with group-specific optimization options.
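For example, a backbone that starts out frozen can later be handed to the optimizer with its own learning rate. A small sketch using torch.optim.SGD; the backbone/head split is made up for illustration:

import torch

backbone = torch.nn.Linear(16, 16)   # frozen at first (illustrative)
head = torch.nn.Linear(16, 2)

# Optimize only the head to begin with.
optimizer = torch.optim.SGD(head.parameters(), lr=0.1)

# Later in training, unfreeze the backbone and add it as a new group.
optimizer.add_param_group({"params": backbone.parameters(), "lr": 0.01})
print(len(optimizer.param_groups))  # 2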
 
 
load_state_dict(state_dict)[source]¶
Loads the optimizer state.

Parameters
- state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
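A typical restore from a previously saved checkpoint; the file name and the "optimizer" key are assumptions of this sketch:

import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# "checkpoint.pth" and the "optimizer" key are arbitrary choices for this sketch.
checkpoint = torch.load("checkpoint.pth")
optimizer.load_state_dict(checkpoint["optimizer"])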
 
state_dict()[source]¶
Returns the state of the optimizer as a dict.

It contains two entries:

- state – a dict holding current optimization state. Its content differs between optimizer classes.
- param_groups – a dict containing all parameter groups.
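The two entries can be inspected and saved directly; a minimal sketch (file name and checkpoint layout are arbitrary):

import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

model(torch.randn(2, 8)).sum().backward()
optimizer.step()

state = optimizer.state_dict()
print(sorted(state.keys()))  # ['param_groups', 'state']

# The dict is what load_state_dict() expects to receive back later.
torch.save({"optimizer": state}, "checkpoint.pth")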
 
step(closure)[source]¶
Performs a single optimization step (parameter update).

Parameters
- closure (callable) – A closure that reevaluates the model and returns the loss. Optional for most optimizers.

Note

Unless otherwise specified, this function should not modify the .grad field of the parameters.
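The closure matters for optimizers that evaluate the objective several times per step, such as torch.optim.LBFGS. A minimal sketch with synthetic data:

import torch

model = torch.nn.Linear(4, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)

inputs, targets = torch.randn(8, 4), torch.randn(8, 1)

def closure():
    # Re-evaluates the model and returns the loss, as the parameter requires.
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    return loss

optimizer.step(closure)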
zero_grad(set_to_none: bool = False)[source]¶
Sets the gradients of all optimized torch.Tensors to zero.

Parameters
- set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
  1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
  2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.
  3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
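The difference is visible directly on the .grad attribute; a small sketch:

import torch

model = torch.nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

model(torch.randn(2, 3)).sum().backward()
optimizer.zero_grad(set_to_none=False)
print(model.weight.grad)  # tensor of zeros, same shape as the weight

model(torch.randn(2, 3)).sum().backward()
optimizer.zero_grad(set_to_none=True)
print(model.weight.grad)  # None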
 
 
catalyst.typing.Scheduler¶
alias of torch.optim.lr_scheduler._LRScheduler
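Since the alias points at the _LRScheduler base class, every built-in learning-rate scheduler matches it; a minimal sketch:

import torch
from catalyst.typing import Optimizer, Scheduler

model = torch.nn.Linear(4, 2)
optimizer: Optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler: Scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

assert isinstance(scheduler, Scheduler)

# The usual ordering per iteration: optimizer.step() first, then scheduler.step().
optimizer.step()
scheduler.step()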
class catalyst.typing.Dataset[source]¶
Bases: typing.Generic

An abstract class representing a Dataset.

All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite __getitem__(), supporting fetching a data sample for a given key. Subclasses could also optionally overwrite __len__(), which is expected to return the size of the dataset by many Sampler implementations and the default options of DataLoader.

Note

DataLoader by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.
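A minimal map-style dataset that follows this contract; the SquaresDataset below is purely illustrative:

import torch
from torch.utils.data import DataLoader
from catalyst.typing import Dataset

class SquaresDataset(Dataset):
    # Illustrative map-style dataset: index -> (x, x**2) pairs.
    def __init__(self, size: int = 100):
        self.xs = torch.arange(size, dtype=torch.float32)

    def __getitem__(self, index):
        x = self.xs[index]
        return x, x ** 2

    def __len__(self):
        return len(self.xs)

loader = DataLoader(SquaresDataset(), batch_size=10)
xs, ys = next(iter(loader))  # two tensors of shape (10,)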