Tools

Main

Frozen Class

Frozen class. An example of its usage can be found in catalyst.core.runner.IRunner.

class catalyst.tools.frozen_class.FrozenClass[source]

Bases: object

Class that prohibits __setattr__ on attributes that do not already exist, preventing accidental creation of new attributes once frozen.

Examples

>>> class IRunner(FrozenClass):
...     pass
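The frozen-class pattern can be sketched as follows. This is an illustrative reimplementation under assumed semantics (a `_freeze` helper and a mangled flag), not Catalyst's exact code:

```python
class FrozenClass:
    """After _freeze(), only attributes that already exist may be set."""

    __is_frozen = False  # class-level default; name-mangled per class

    def __setattr__(self, key, value):
        if self.__is_frozen and not hasattr(self, key):
            raise TypeError(f"{self!r} is a frozen class, cannot set {key!r}")
        object.__setattr__(self, key, value)

    def _freeze(self):
        self.__is_frozen = True


class IRunner(FrozenClass):
    def __init__(self):
        self.model = None
        self._freeze()  # no new attributes allowed past this point


runner = IRunner()
runner.model = "resnet"  # fine: the attribute already exists
try:
    runner.optimzer = "adam"  # typo -> new attribute -> TypeError
except TypeError as e:
    print(e)
```

This is why the pattern is useful for runner-style objects: a typo in an attribute name fails loudly instead of silently creating a new attribute.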

Time Manager

Simple timer.

class catalyst.tools.time_manager.TimeManager[source]

Bases: object

@TODO: Docs. Contributions are welcome.

__init__()[source]

Initialization

reset() → None[source]

Reset all previous timers.

start(name: str) → None[source]

Starts the timer with the given name.

Parameters

name – name of a timer

stop(name: str) → None[source]

Stops the timer with the given name.

Parameters

name – name of a timer
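The start/stop/reset interface above can be sketched with a minimal named-timer manager. This is an illustrative implementation, not Catalyst's; the `elapsed` dict used to expose results is an assumption:

```python
import time


class TimeManager:
    """Tracks named timers via start(name)/stop(name)/reset()."""

    def __init__(self):
        self.reset()

    def reset(self) -> None:
        """Reset all previous timers."""
        self._starts = {}
        self.elapsed = {}

    def start(self, name: str) -> None:
        """Start the timer with the given name."""
        self._starts[name] = time.perf_counter()

    def stop(self, name: str) -> None:
        """Stop the timer and record the elapsed seconds."""
        self.elapsed[name] = time.perf_counter() - self._starts.pop(name)


tm = TimeManager()
tm.start("batch")
time.sleep(0.01)
tm.stop("batch")
print(f"batch took {tm.elapsed['batch']:.3f}s")
```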

Contrib

Tensorboard

exception catalyst.contrib.tools.tensorboard.EventReadingException[source]

Bases: Exception

An exception that corresponds to an event file reading error.

class catalyst.contrib.tools.tensorboard.EventsFileReader(events_file: BinaryIO)[source]

Bases: collections.abc.Iterable

An iterator over a Tensorboard events file.

__init__(events_file: BinaryIO)[source]

Initialize an iterator over an events file.

Parameters

events_file – An opened file-like object.

class catalyst.contrib.tools.tensorboard.SummaryItem(tag, step, wall_time, value, type)

Bases: tuple

property step

Alias for field number 1

property tag

Alias for field number 0

property type

Alias for field number 4

property value

Alias for field number 3

property wall_time

Alias for field number 2
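The "Alias for field number N" properties above are the auto-generated docs of a namedtuple; the field order can be read straight off those indices. A sketch, assuming the values shown are hypothetical sample data:

```python
from collections import namedtuple

# Field order matches the documented aliases:
# tag=0, step=1, wall_time=2, value=3, type=4
SummaryItem = namedtuple("SummaryItem", ["tag", "step", "wall_time", "value", "type"])

item = SummaryItem(tag="loss", step=10, wall_time=1590000000.0, value=0.42, type="scalar")
# Attribute access and positional access are equivalent:
print(item.tag, item[0])  # loss loss
```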

class catalyst.contrib.tools.tensorboard.SummaryReader(logdir: Union[str, pathlib.Path], tag_filter: Optional[collections.abc.Iterable] = None, types: collections.abc.Iterable = ('scalar',))[source]

Bases: collections.abc.Iterable

Iterates over events in all the files in the current logdir.

Note

Only scalars are supported at the moment.

__init__(logdir: Union[str, pathlib.Path], tag_filter: Optional[collections.abc.Iterable] = None, types: collections.abc.Iterable = ('scalar',))[source]

Initialize a new summary reader.

Parameters
  • logdir – A directory with Tensorboard summary data

  • tag_filter – A list of tags to leave (None for all)

  • types – A list of types to get. Only "scalar" and "image" types are allowed at the moment.

Meters

The meters from torchnet.meters.

Every meter implements catalyst.tools.meters.meter.Meter interface.

Meter

Meters provide a way to keep track of important statistics in an online manner.

class catalyst.tools.meters.meter.Meter[source]

Bases: object

This class is abstract, but provides a standard interface for all meters to follow.

add(value)[source]

Log a new value to the meter.

Parameters

value – Next result to include.

reset()[source]

Resets the meter to default settings.

value()[source]

Get the value of the meter in the current state.
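The add/value/reset contract above can be sketched as an abstract base class plus a toy implementation. This mirrors the documented interface but is not Catalyst's code; MaxMeter is a hypothetical example meter:

```python
from abc import ABC, abstractmethod


class Meter(ABC):
    """Online-statistics interface: log values, query, reset."""

    @abstractmethod
    def add(self, value):
        """Log a new value to the meter."""

    @abstractmethod
    def value(self):
        """Get the value of the meter in the current state."""

    @abstractmethod
    def reset(self):
        """Reset the meter to default settings."""


class MaxMeter(Meter):
    """Toy meter tracking the running maximum."""

    def __init__(self):
        self.reset()

    def add(self, value):
        self._max = value if self._max is None else max(self._max, value)

    def value(self):
        return self._max

    def reset(self):
        self._max = None


m = MaxMeter()
for v in (3, 7, 5):
    m.add(v)
print(m.value())  # 7
```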

Average Value Meter

Average value meter

class catalyst.tools.meters.averagevaluemeter.AverageValueMeter[source]

Bases: catalyst.tools.meters.meter.Meter

Average value meter stores the mean and standard deviation for a population of input values. Meter updates are applied online, one value per update. Input values are not cached; only the last added value is stored.

__init__()[source]

Constructor method for the AverageValueMeter class.

add(value, batch_size) → None[source]

Add a new observation.

Mean and std are updated online using Welford's algorithm.

Parameters
  • value – value for update, can be scalar number or PyTorch tensor

  • batch_size – batch size for update

Note

Because of the algorithm design, you can update the meter with only one value at a time.

reset()[source]

Resets the meter to default settings.

value()[source]

Returns meter values.

Returns

tuple of mean and std that have been updated online.

Return type

Tuple[float, float]
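Welford's online update, as referenced above, can be sketched as follows. This is an illustrative single-value version that omits the batch_size weighting of the real meter, and the choice of sample std (n - 1) is an assumption:

```python
import math


class AverageValueMeter:
    """Online mean/std via Welford's algorithm (sketch)."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def add(self, value):
        # One value per update, as noted above.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def value(self):
        # Sample std (n - 1 denominator); undefined for fewer than 2 values.
        std = math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else float("nan")
        return self.mean, std


meter = AverageValueMeter()
for v in [2.0, 4.0, 6.0]:
    meter.add(v)
print(meter.value())  # (4.0, 2.0)
```

The point of Welford's formulation is numerical stability: it avoids the catastrophic cancellation of the naive sum-of-squares approach while still needing only O(1) state.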

Confusion Meter

Maintains a confusion matrix for a given classification problem.

class catalyst.tools.meters.confusionmeter.ConfusionMeter(k: int, normalized: bool = False)[source]

Bases: catalyst.tools.meters.meter.Meter

ConfusionMeter constructs a confusion matrix for a multiclass classification problem. It does not support multilabel, multiclass problems; for such problems, please use MultiLabelConfusionMeter.

__init__(k: int, normalized: bool = False)[source]
Parameters
  • k – number of classes in the classification problem

  • normalized – determines whether the confusion matrix is normalized

add(predicted: torch.Tensor, target: torch.Tensor) → None[source]

Computes the confusion matrix of size K x K, where K is the number of classes.

Parameters
  • predicted – Can be an N x K tensor of predicted scores obtained from the model for N examples and K classes or an N-tensor of integer values between 0 and K-1

  • target – Can be an N-tensor of integer values between 0 and K-1, or an N x K tensor where targets are assumed to be provided as one-hot vectors

reset() → None[source]

Reset confusion matrix, filling it with zeros.

value()[source]
Returns

Confusion matrix of K rows and K columns, where rows correspond to ground-truth targets and columns correspond to predicted targets.
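The row/column convention above (rows ground truth, columns predictions) can be sketched for integer labels. A minimal pure-Python version, not the tensor-based implementation:

```python
def confusion_matrix(predicted, target, k):
    """K x K matrix: rows index ground-truth classes, columns predictions."""
    matrix = [[0] * k for _ in range(k)]
    for p, t in zip(predicted, target):
        matrix[t][p] += 1  # row = true class, column = predicted class
    return matrix


# Two classes; the third example (true class 1) is misclassified as class 0.
print(confusion_matrix(predicted=[0, 1, 0], target=[0, 1, 1], k=2))
# [[1, 0], [1, 1]]
```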

Precision-Recall-F1 Meter

In this module, precision, recall, and F1-score calculations are defined in separate functions.

PrecisionRecallF1ScoreMeter keeps track of all three of these.

class catalyst.tools.meters.ppv_tpr_f1_meter.PrecisionRecallF1ScoreMeter(threshold=0.5)[source]

Bases: catalyst.tools.meters.meter.Meter

Keeps track of global true positives, false positives, and false negatives for each epoch and calculates precision, recall, and F1-score based on those counts. Currently, this meter works for binary cases only; please use multiple instances of this class for multilabel cases.

__init__(threshold=0.5)[source]

Constructor method for the ``PrecisionRecallF1ScoreMeter`` class.

add(output: torch.Tensor, target: torch.Tensor) → None[source]

Thresholds predictions and calculates the true positives, false positives, and false negatives in comparison to the target.

Parameters
  • output – prediction after activation function; shape should be (batch_size, …), but works with any shape

  • target – label (binary), shape should be the same as output’s shape

reset() → None[source]

Resets true positive, false positive and false negative counts to 0.

value()[source]

Calculates precision/recall/f1 based on the current stored tp/fp/fn counts.

Returns

(precision, recall, f1)

Return type

tuple of floats
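The threshold-then-count flow above, and the precision/recall/F1 computation from the stored tp/fp/fn, can be sketched as follows. This is an illustrative version; the eps guard against zero division is an assumption, not necessarily what the meter does:

```python
def precision_recall_f1(tp, fp, fn, eps=1e-8):
    # eps avoids division by zero when a count is empty (assumed convention).
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f1


def counts(output, target, threshold=0.5):
    """Threshold predictions, then count tp/fp/fn against binary targets."""
    preds = [1 if o >= threshold else 0 for o in output]
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, target))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, target))
    fn = sum(p == 0 and t == 1 for p, t in zip(preds, target))
    return tp, fp, fn


tp, fp, fn = counts([0.9, 0.2, 0.7, 0.4], [1, 0, 0, 1])
p, r, f1 = precision_recall_f1(tp, fp, fn)
print(tp, fp, fn)  # 1 1 1
```

Accumulating tp/fp/fn globally across batches (rather than averaging per-batch scores) is what makes the epoch-level precision/recall/F1 exact rather than an average of averages.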