Tools

Main

Frozen Class

Frozen class. An example of usage can be found in catalyst.core.runner.IRunner.

class catalyst.tools.frozen_class.FrozenClass[source]

Bases: object

Class which prohibits __setattr__ on existing attributes.

Examples

>>> class IRunner(FrozenClass):
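
The example above is truncated in the source docstring. Below is a minimal sketch of the intended subclassing pattern; the Runner subclass and the _freeze helper are illustrative assumptions rather than documented API.

>>> from catalyst.tools.frozen_class import FrozenClass
>>> class Runner(FrozenClass):  # hypothetical subclass for illustration
...     def __init__(self):
...         self.stage = "train"  # define attributes first
...         self._freeze()  # assumed internal helper that locks the instance
>>> runner = Runner()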

Time Manager

Simple timer.

class catalyst.tools.time_manager.TimeManager[source]

Bases: object

@TODO: Docs. Contribution is welcome.

__init__()[source]

Initialization

reset() → None[source]

Reset all previous timers.

start(name: str) → None[source]

Starts the timer with the given name.

Parameters

name – name of a timer

stop(name: str) → None[source]

Stops the timer with the given name.

Parameters

name – name of a timer
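
A minimal usage sketch based only on the methods documented above (the timer name "epoch" is arbitrary):

>>> from catalyst.tools.time_manager import TimeManager
>>> timers = TimeManager()
>>> timers.start("epoch")  # start a named timer
>>> # ... run the work being timed ...
>>> timers.stop("epoch")   # stop it; the elapsed time is recorded under "epoch"
>>> timers.reset()         # drop all recorded timers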

Contrib

Tensorboard

Tensorboard readers:
exception catalyst.contrib.tools.tensorboard.EventReadingException[source]

Bases: Exception

An exception that corresponds to an event file reading error.

class catalyst.contrib.tools.tensorboard.EventsFileReader(events_file: BinaryIO)[source]

Bases: collections.abc.Iterable

An iterator over a Tensorboard events file.

__init__(events_file: BinaryIO)[source]

Initialize an iterator over an events file.

Parameters

events_file – An opened file-like object.
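
A minimal usage sketch; the events file name is hypothetical:

>>> from catalyst.contrib.tools.tensorboard import EventsFileReader
>>> with open("events.out.tfevents.example", "rb") as f:  # opened in binary mode
...     for event in EventsFileReader(f):  # yields raw event records
...         print(event)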

class catalyst.contrib.tools.tensorboard.SummaryItem(tag, step, wall_time, value, type)

Bases: tuple

property step

Alias for field number 1

property tag

Alias for field number 0

property type

Alias for field number 4

property value

Alias for field number 3

property wall_time

Alias for field number 2

class catalyst.contrib.tools.tensorboard.SummaryReader(logdir: Union[str, pathlib.Path], tag_filter: Optional[collections.abc.Iterable] = None, types: collections.abc.Iterable = ('scalar',))[source]

Bases: collections.abc.Iterable

Iterates over events in all the files in the current logdir.

Note

Only scalars are supported at the moment.

__init__(logdir: Union[str, pathlib.Path], tag_filter: Optional[collections.abc.Iterable] = None, types: collections.abc.Iterable = ('scalar',))[source]

Initialize a new summary reader.

Parameters
  • logdir – A directory with Tensorboard summary data

  • tag_filter – A list of tags to leave (None for all)

  • types – A list of types to get. Only "scalar" and "image" types are allowed at the moment.

Meters

The meters from torchnet.meters.

Every meter implements the catalyst.tools.meters.meter.Meter interface.

Meter

Meters provide a way to keep track of important statistics in an online manner.

class catalyst.tools.meters.meter.Meter[source]

Bases: object

This class is abstract, but provides a standard interface for all meters to follow.

add(value)[source]

Log a new value to the meter.

Parameters

value – Next result to include.

reset()[source]

Resets the meter to default settings.

value()[source]

Get the value of the meter in the current state.
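
A minimal sketch of a custom meter that follows this interface (a toy running-sum meter, for illustration only):

>>> from catalyst.tools.meters.meter import Meter
>>> class SumMeter(Meter):
...     def __init__(self):
...         self.reset()
...     def reset(self):  # resets the meter to default settings
...         self.sum = 0.0
...     def add(self, value):  # log a new value
...         self.sum += value
...     def value(self):  # current state of the meter
...         return self.sum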

AP Meter

The APMeter measures the average precision per class.

class catalyst.tools.meters.apmeter.APMeter[source]

Bases: catalyst.tools.meters.meter.Meter

The APMeter is designed to operate on NxK Tensors output and target, and optionally a Nx1 Tensor weight where:

1. The output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function).

2. The target contains only values 0 (for negative examples) and 1 (for positive examples).

3. The weight (> 0) represents the weight for each sample.

__init__()[source]

Constructor method for the APMeter class.

add(output: torch.Tensor, target: torch.Tensor, weight: torch.Tensor = None) → None[source]

Add a new observation.

Parameters
  • output – NxK tensor that for each of the N examples indicates the probability of the example belonging to each of the K classes, according to the model. The probabilities should sum to one over all classes

  • target – binary NxK tensor that encodes which of the K classes are associated with the N-th input (e.g. a row [0, 1, 0, 1] indicates that the example is associated with classes 2 and 4)

  • weight (optional, Tensor) – Nx1 tensor representing the weight for each example (each weight > 0)

reset()[source]

Resets the meter with empty member variables.

value() → torch.Tensor[source]

Returns the model's average precision for each class.

Returns

1xK tensor, with avg precision for each class k

Return type

torch.Tensor
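
A minimal usage sketch with made-up scores and labels:

>>> import torch
>>> from catalyst.tools.meters.apmeter import APMeter
>>> meter = APMeter()
>>> output = torch.softmax(torch.randn(4, 3), dim=1)  # NxK scores (N=4, K=3)
>>> target = torch.tensor([[1., 0., 0.],
...                        [0., 1., 0.],
...                        [0., 0., 1.],
...                        [1., 0., 0.]])  # NxK binary labels
>>> meter.add(output, target)
>>> meter.value()  # 1xK tensor of per-class average precision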

AUC Meter

The AUCMeter measures the area under the receiver-operating characteristic (ROC) curve for binary classification problems. The area under the curve (AUC) can be interpreted as the probability that, given a randomly selected positive example and a randomly selected negative example, the positive example is assigned a higher score by the classification model than the negative example.

class catalyst.tools.meters.aucmeter.AUCMeter[source]

Bases: catalyst.tools.meters.meter.Meter

The AUCMeter is designed to operate on one-dimensional Tensors output and target, where:

1. The output contains model output scores that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function)

2. The target contains only values 0 (for negative examples) and 1 (for positive examples).

__init__()[source]

Constructor method for the AUCMeter class.

add(output: torch.Tensor, target: torch.Tensor) → None[source]

Update stored scores and targets.

Parameters
  • output – one-dimensional tensor output

  • target – one-dimensional tensor target

reset() → None[source]

Reset stored scores and targets.

value()[source]

Return metric values of AUC, TPR and FPR.

Returns

(AUC, TPR, FPR)

Return type

tuple of floats
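
A minimal usage sketch with made-up scores and labels:

>>> import torch
>>> from catalyst.tools.meters.aucmeter import AUCMeter
>>> meter = AUCMeter()
>>> scores = torch.tensor([0.1, 0.9, 0.4, 0.8])  # one-dimensional model scores
>>> labels = torch.tensor([0, 1, 0, 1])          # binary targets
>>> meter.add(scores, labels)
>>> auc, tpr, fpr = meter.value()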

Average Value Meter

Average value meter

class catalyst.tools.meters.averagevaluemeter.AverageValueMeter[source]

Bases: catalyst.tools.meters.meter.Meter

Average value meter stores the mean and standard deviation for a population of input values. Meter updates are applied online, one value per update. Input values are not cached; only the last added value is kept.

__init__()[source]

Constructor method for the AverageValueMeter class.

add(value, batch_size) → None[source]

Add a new observation.

Mean and std are updated online using Welford's algorithm.

Parameters
  • value – value for update, can be scalar number or PyTorch tensor

  • batch_size – batch size for update

Note

Because of the algorithm design, you can update the meter with only one value at a time.

reset()[source]

Resets the meter to default settings.

value()[source]

Returns meter values.

Returns

tuple of mean and std that have been updated online.

Return type

Tuple[float, float]
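
A minimal usage sketch, feeding one scalar per update as the note above requires:

>>> from catalyst.tools.meters.averagevaluemeter import AverageValueMeter
>>> meter = AverageValueMeter()
>>> for loss in (0.9, 0.7, 0.6):       # e.g. per-batch loss values
...     meter.add(loss, batch_size=1)  # one value per update
>>> mean, std = meter.value()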

Class Error Meter

class catalyst.tools.meters.classerrormeter.ClassErrorMeter(topk=None, accuracy=False)[source]

Bases: catalyst.tools.meters.meter.Meter

@TODO: Docs. Contribution is welcome.

__init__(topk=None, accuracy=False)[source]

Constructor method for the ClassErrorMeter class.

add(output, target) → None[source]

@TODO: Docs. Contribution is welcome.

reset() → None[source]

@TODO: Docs. Contribution is welcome.

value(k=-1)[source]

@TODO: Docs. Contribution is welcome.
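
Since the docstrings are still to-do, here is a hedged usage sketch; the semantics (NxK scores, N integer targets, top-k accuracy) are assumed from the torchnet ClassErrorMeter this module derives from:

>>> import torch
>>> from catalyst.tools.meters.classerrormeter import ClassErrorMeter
>>> meter = ClassErrorMeter(topk=[1], accuracy=True)
>>> output = torch.tensor([[0.2, 0.8], [0.9, 0.1]])  # NxK class scores
>>> target = torch.tensor([1, 0])                    # N integer labels
>>> meter.add(output, target)
>>> meter.value(k=1)  # top-1 accuracy (assumed semantics)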

Confusion Meter

Maintains a confusion matrix for a given classification problem.

class catalyst.tools.meters.confusionmeter.ConfusionMeter(k: int, normalized: bool = False)[source]

Bases: catalyst.tools.meters.meter.Meter

ConfusionMeter constructs a confusion matrix for multi-class classification problems. It does not support multi-label, multi-class problems: for such problems, please use MultiLabelConfusionMeter.

__init__(k: int, normalized: bool = False)[source]
Parameters
  • k – number of classes in the classification problem

  • normalized – determines whether the confusion matrix is normalized

add(predicted: torch.Tensor, target: torch.Tensor) → None[source]

Computes the K x K confusion matrix, where K is the number of classes.

Parameters
  • predicted – Can be an N x K tensor of predicted scores obtained from the model for N examples and K classes or an N-tensor of integer values between 0 and K-1

  • target – Can be a N-tensor of integer values assumed to be integer values between 0 and K-1 or N x K tensor, where targets are assumed to be provided as one-hot vectors

reset() → None[source]

Reset confusion matrix, filling it with zeros.

value()[source]
Returns

Confusion matrix of K rows and K columns, where rows correspond to ground-truth targets and columns correspond to predicted targets.
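
A minimal usage sketch with integer predictions and targets:

>>> import torch
>>> from catalyst.tools.meters.confusionmeter import ConfusionMeter
>>> meter = ConfusionMeter(k=3)
>>> predicted = torch.tensor([0, 2, 1, 2])  # N integer predictions in [0, K-1]
>>> target = torch.tensor([0, 1, 1, 2])     # N ground-truth labels
>>> meter.add(predicted, target)
>>> meter.value()  # 3x3 matrix: rows are targets, columns are predictions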

mAP Meter

The mAP meter measures the mean average precision over all classes.

class catalyst.tools.meters.mapmeter.mAPMeter[source]

Bases: catalyst.tools.meters.meter.Meter

This meter is a wrapper for catalyst.tools.meters.apmeter.APMeter. The mAPMeter is designed to operate on NxK Tensors output and target, and optionally a Nx1 Tensor weight where:

1. The output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function)

2. The target contains only values 0 (for negative examples) and 1 (for positive examples)

3. The weight (> 0) represents the weight for each sample.

__init__()[source]

Constructor method for the mAPMeter class.

add(output: torch.Tensor, target: torch.Tensor, weight: Optional[torch.Tensor] = None) → None[source]

Update self.apmeter.

Parameters
  • output – Model output scores as NxK tensor

  • target – Target scores as NxK tensor

  • weight (Tensor, optional) – Weight values for each sample as Nx1 Tensor

reset() → None[source]

Reset self.apmeter.

value()[source]

Returns mean of self.apmeter value.

Returns

mAP scalar tensor

Return type

torch.Tensor
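
A minimal usage sketch with made-up scores and labels:

>>> import torch
>>> from catalyst.tools.meters.mapmeter import mAPMeter
>>> meter = mAPMeter()
>>> output = torch.rand(4, 3)                  # NxK model scores
>>> target = (torch.rand(4, 3) > 0.5).float()  # NxK binary labels
>>> meter.add(output, target)
>>> meter.value()  # scalar tensor: mean average precision over classes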

Moving Average Value Meter

Moving average meter calculates average for moving window of values.

class catalyst.tools.meters.movingaveragevaluemeter.MovingAverageValueMeter(windowsize)[source]

Bases: catalyst.tools.meters.meter.Meter

MovingAverageValueMeter stores the mean and standard deviation for a window of values that is handled like a queue during updates. The queue (window) is filled with zeros at the start by default. Meter updates are applied online, one value per update. The meter values are the moving average and the moving standard deviation.

__init__(windowsize)[source]
Parameters

windowsize – size of the window of values; the window is contiguous and ends at the most recently added element

add(value: float) → None[source]

Adds an observation sample.

Parameters

value – scalar

reset() → None[source]

Resets the sum, the number of elements, and the moving variance, and zeroes out the window.

value()[source]

Return mean and standard deviation of window.

Returns

(window mean, window std)

Return type

tuple of floats
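
A minimal usage sketch; after four updates with windowsize=3, the statistics cover the last three values:

>>> from catalyst.tools.meters.movingaveragevaluemeter import MovingAverageValueMeter
>>> meter = MovingAverageValueMeter(windowsize=3)
>>> for v in (1.0, 2.0, 3.0, 4.0):
...     meter.add(v)
>>> mean, std = meter.value()  # statistics over 2.0, 3.0, 4.0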

MSE Meter

MSE and RMSE meters.

class catalyst.tools.meters.msemeter.MSEMeter(root: bool = False)[source]

Bases: catalyst.tools.meters.meter.Meter

This meter can handle MSE and RMSE. Root calculation can be toggled (not calculated by default).

__init__(root: bool = False)[source]
Parameters

root – Toggle between calculation of RMSE (True) and MSE (False)

add(output: torch.Tensor, target: torch.Tensor) → None[source]

Updates the stored squared-error sum and the number of elements.

Parameters
  • output – Model output tensor or numpy array

  • target – Target tensor or numpy array

reset() → None[source]

Resets the meter's element count and squared-error sum.

value() → float[source]

Calculate MSE and return RMSE or MSE.

Returns

Root of MSE if self.root is True else MSE

Return type

float
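
A minimal usage sketch; the expected value follows from the documented RMSE semantics:

>>> import torch
>>> from catalyst.tools.meters.msemeter import MSEMeter
>>> meter = MSEMeter(root=True)  # RMSE; root=False (the default) gives MSE
>>> meter.add(torch.tensor([2.0, 4.0]), torch.tensor([1.0, 3.0]))
>>> meter.value()  # sqrt(((2 - 1)**2 + (4 - 3)**2) / 2) == 1.0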

Precision-Recall-F1 Meter

In this module, precision, recall, and F1-score calculations are defined as separate functions.

PrecisionRecallF1ScoreMeter keeps track of all three.

class catalyst.tools.meters.ppv_tpr_f1_meter.PrecisionRecallF1ScoreMeter(threshold=0.5)[source]

Bases: catalyst.tools.meters.meter.Meter

Keeps track of global true positives, false positives, and false negatives for each epoch and calculates precision, recall, and F1-score based on those counts. Currently, this meter works for binary cases only; for multi-label cases, please use multiple instances of this class.

__init__(threshold=0.5)[source]

Constructor method for the PrecisionRecallF1ScoreMeter class.

add(output: torch.Tensor, target: torch.Tensor) → None[source]

Thresholds predictions and calculates the true positives, false positives, and false negatives in comparison to the target.

Parameters
  • output – prediction after the activation function; shape should be (batch_size, …), but works with any shape

  • target – label (binary), shape should be the same as output’s shape

reset() → None[source]

Resets true positive, false positive and false negative counts to 0.

value()[source]

Calculates precision/recall/f1 based on the current stored tp/fp/fn counts.

Returns

(precision, recall, f1)

Return type

tuple of floats
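
A minimal usage sketch for the binary case, with made-up sigmoid outputs:

>>> import torch
>>> from catalyst.tools.meters.ppv_tpr_f1_meter import PrecisionRecallF1ScoreMeter
>>> meter = PrecisionRecallF1ScoreMeter(threshold=0.5)
>>> probs = torch.tensor([0.9, 0.3, 0.7, 0.2])   # outputs after sigmoid
>>> labels = torch.tensor([1.0, 0.0, 0.0, 0.0])  # binary targets
>>> meter.add(probs, labels)
>>> precision, recall, f1 = meter.value()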