Tools
Main
Frozen Class
Frozen class. An example of usage can be found in catalyst.core.runner.IRunner.
Time Manager
Simple timer.
Contrib
Tensorboard
Tensorboard readers:
exception catalyst.contrib.tools.tensorboard.EventReadingException
Bases: Exception
An exception that corresponds to an event file reading error.
class catalyst.contrib.tools.tensorboard.EventsFileReader(events_file: BinaryIO)
Bases: collections.abc.Iterable
An iterator over a Tensorboard events file.
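A minimal usage sketch: open an events file in binary mode and iterate over the reader (the file path is hypothetical, and each yielded item is assumed to be a parsed event record):

    from catalyst.contrib.tools.tensorboard import EventsFileReader

    # hypothetical path to a Tensorboard events file
    with open("logs/events.out.tfevents.12345", "rb") as events_file:
        for event in EventsFileReader(events_file):
            print(event)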
class catalyst.contrib.tools.tensorboard.SummaryItem(tag, step, wall_time, value, type)
Bases: tuple

property step
Alias for field number 1

property tag
Alias for field number 0

property type
Alias for field number 4

property value
Alias for field number 3

property wall_time
Alias for field number 2
class catalyst.contrib.tools.tensorboard.SummaryReader(logdir: Union[str, pathlib.Path], tag_filter: Optional[collections.abc.Iterable] = None, types: collections.abc.Iterable = ('scalar',))
Bases: collections.abc.Iterable
Iterates over events in all the files in the current logdir.

Note
Only scalars are supported at the moment.

__init__(logdir: Union[str, pathlib.Path], tag_filter: Optional[collections.abc.Iterable] = None, types: collections.abc.Iterable = ('scalar',))
Initialize a new summary reader.

Parameters
logdir – a directory with Tensorboard summary data
tag_filter – a list of tags to keep (None for all)
types – a list of event types to read; only "scalar" and "image" types are allowed at the moment
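A minimal usage sketch, assuming a logdir with scalar summaries (the directory name and the "loss" tag are hypothetical); each yielded item is a SummaryItem namedtuple:

    from catalyst.contrib.tools.tensorboard import SummaryReader

    reader = SummaryReader("logs", tag_filter=["loss"], types=("scalar",))
    for item in reader:
        print(item.tag, item.step, item.wall_time, item.value, item.type)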
Meters
The meters from torchnet.meters.
Every meter implements the catalyst.tools.meters.meter.Meter interface.
Meter
Meters provide a way to keep track of important statistics in an online manner.
AP Meter
The APMeter measures the average precision per class.
class catalyst.tools.meters.apmeter.APMeter
Bases: catalyst.tools.meters.meter.Meter
The APMeter is designed to operate on NxK Tensors output and target, and optionally an Nx1 Tensor weight, where:
1. The output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function).
2. The target contains only values 0 (for negative examples) and 1 (for positive examples).
3. The weight (> 0) represents the weight for each sample.
add(output: torch.Tensor, target: torch.Tensor, weight: torch.Tensor = None) → None
Add a new observation.

Parameters
output – NxK tensor that for each of the N examples indicates the probability of the example belonging to each of the K classes, according to the model. The probabilities should sum to one over all classes.
target – binary NxK tensor that encodes which of the K classes are associated with the N-th input (e.g., a row [0, 1, 0, 1] indicates that the example is associated with classes 2 and 4)
weight (optional, Tensor) – Nx1 tensor representing the weight for each example (each weight > 0)
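A minimal usage sketch (the random scores are for illustration only; meter.value() is assumed to return per-class average precision, following the torchnet convention):

    import torch
    from catalyst.tools.meters.apmeter import APMeter

    meter = APMeter()
    output = torch.softmax(torch.randn(4, 3), dim=1)  # N=4 examples, K=3 classes; rows sum to one
    target = torch.tensor([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]).float()
    meter.add(output, target)
    print(meter.value())  # assumed: per-class average precision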
AUC Meter
The AUCMeter measures the area under the receiver-operating characteristic (ROC) curve for binary classification problems. The area under the curve (AUC) can be interpreted as the probability that, given a randomly selected positive example and a randomly selected negative example, the positive example is assigned a higher score by the classification model than the negative example.
class catalyst.tools.meters.aucmeter.AUCMeter
Bases: catalyst.tools.meters.meter.Meter
The AUCMeter is designed to operate on one-dimensional Tensors output and target, where:
1. The output contains model output scores that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function).
2. The target contains only values 0 (for negative examples) and 1 (for positive examples).
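A minimal usage sketch (following the torchnet convention, meter.value() is assumed to return the AUC together with the TPR and FPR arrays):

    import torch
    from catalyst.tools.meters.aucmeter import AUCMeter

    meter = AUCMeter()
    scores = torch.tensor([0.9, 0.2, 0.8, 0.4])  # model confidence scores
    labels = torch.tensor([1, 0, 1, 0])          # binary targets
    meter.add(scores, labels)
    area, tpr, fpr = meter.value()  # assumed return: (AUC, TPR, FPR)
    print(area)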
Average Value Meter
Average value meter.
class catalyst.tools.meters.averagevaluemeter.AverageValueMeter
Bases: catalyst.tools.meters.meter.Meter
The average value meter stores the mean and standard deviation for a population of input values. Meter updates are applied online, one value for each update. Values are not cached; only the last added value is kept.
add(value, batch_size) → None
Add a new observation.
The mean and std are updated online using Welford's online algorithm.

Parameters
value – value for the update; can be a scalar number or a PyTorch tensor
batch_size – batch size for the update

Note
Because of the algorithm design, you can update meter values with only one value at a time.
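A minimal usage sketch, e.g. for tracking a per-batch loss (meter.value() is assumed to return a (mean, std) pair, following the torchnet convention):

    from catalyst.tools.meters.averagevaluemeter import AverageValueMeter

    meter = AverageValueMeter()
    for loss in (0.9, 0.7, 0.6):       # one value per update
        meter.add(loss, batch_size=1)
    mean, std = meter.value()          # assumed return: (mean, std)
    print(mean, std)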
Class Error Meter
class catalyst.tools.meters.classerrormeter.ClassErrorMeter(topk=None, accuracy=False)
Bases: catalyst.tools.meters.meter.Meter
@TODO: Docs. Contribution is welcome.
Confusion Meter
Maintains a confusion matrix for a given classification problem.
class catalyst.tools.meters.confusionmeter.ConfusionMeter(k: int, normalized: bool = False)
Bases: catalyst.tools.meters.meter.Meter
ConfusionMeter constructs a confusion matrix for a multi-class classification problem. It does not support multi-label, multi-class problems: for such problems, please use MultiLabelConfusionMeter.

__init__(k: int, normalized: bool = False)

Parameters
k – number of classes in the classification problem
normalized – determines whether or not the confusion matrix is normalized
add(predicted: torch.Tensor, target: torch.Tensor) → None
Computes the confusion matrix of K x K size, where K is the number of classes.

Parameters
predicted – can be an N x K tensor of predicted scores obtained from the model for N examples and K classes, or an N-tensor of integer values between 0 and K-1
target – can be an N-tensor of integer values assumed to be between 0 and K-1, or an N x K tensor where targets are provided as one-hot vectors
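A minimal usage sketch with integer class indices (meter.value() is assumed to return the K x K confusion matrix, following the torchnet convention):

    import torch
    from catalyst.tools.meters.confusionmeter import ConfusionMeter

    meter = ConfusionMeter(k=3)
    predicted = torch.tensor([0, 2, 1, 2])  # class indices in [0, K-1]
    target = torch.tensor([0, 1, 1, 2])
    meter.add(predicted, target)
    print(meter.value())  # assumed: K x K confusion matrix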
Map Meter
The mAP meter measures the mean average precision over all classes.
class catalyst.tools.meters.mapmeter.mAPMeter
Bases: catalyst.tools.meters.meter.Meter
This meter is a wrapper for catalyst.tools.meters.apmeter.APMeter. The mAPMeter is designed to operate on NxK Tensors output and target, and optionally an Nx1 Tensor weight, where:
1. The output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function).
2. The target contains only values 0 (for negative examples) and 1 (for positive examples).
3. The weight (> 0) represents the weight for each sample.
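A minimal usage sketch, mirroring the APMeter example (meter.value() is assumed to return a single mAP scalar):

    import torch
    from catalyst.tools.meters.mapmeter import mAPMeter

    meter = mAPMeter()
    output = torch.softmax(torch.randn(4, 3), dim=1)  # N=4, K=3 scores
    target = torch.tensor([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 0]]).float()
    meter.add(output, target)
    print(meter.value())  # assumed: scalar mean average precision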
Moving Average Value Meter
The moving average meter calculates the average over a moving window of values.
class catalyst.tools.meters.movingaveragevaluemeter.MovingAverageValueMeter(windowsize)
Bases: catalyst.tools.meters.meter.Meter
MovingAverageValueMeter stores the mean and standard deviation for a population of values that is handled like a queue during updates. The queue (window) is filled with zeros from the start by default. Meter updates are applied online, one value for each update. Meter values are the moving average and moving standard deviation.
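A minimal usage sketch (the single-value add(value) and the (mean, std) return of value() are assumptions, following the torchnet convention):

    from catalyst.tools.meters.movingaveragevaluemeter import MovingAverageValueMeter

    meter = MovingAverageValueMeter(windowsize=3)
    for loss in (1.0, 0.8, 0.6, 0.4):
        meter.add(loss)        # assumed single-value add, as in torchnet
    mean, std = meter.value()  # moving average and moving std over the window
    print(mean, std)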
MSE Meter
MSE and RMSE meters.
class catalyst.tools.meters.msemeter.MSEMeter(root: bool = False)
Bases: catalyst.tools.meters.meter.Meter
This meter can handle MSE and RMSE. Root calculation can be toggled (not calculated by default).

__init__(root: bool = False)

Parameters
root – toggles between calculation of RMSE (True) and MSE (False)
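A minimal usage sketch (add(output, target) is assumed, following the torchnet convention; value() is assumed to return the (R)MSE scalar):

    import torch
    from catalyst.tools.meters.msemeter import MSEMeter

    meter = MSEMeter(root=True)  # RMSE instead of MSE
    predictions = torch.tensor([2.5, 0.0, 2.0])
    targets = torch.tensor([3.0, -0.5, 2.0])
    meter.add(predictions, targets)  # assumed signature
    print(meter.value())             # assumed: scalar RMSE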
Precision-Recall-F1 Meter
This module defines precision, recall, and F1 score calculations in separate functions. PrecisionRecallF1ScoreMeter can keep track of all three.
class catalyst.tools.meters.ppv_tpr_f1_meter.PrecisionRecallF1ScoreMeter(threshold=0.5)
Bases: catalyst.tools.meters.meter.Meter
Keeps track of global true positives, false positives, and false negatives for each epoch and calculates precision, recall, and F1-score based on those counts. Currently, this meter works for binary cases only; please use multiple instances of this class for multi-label cases.
add(output: torch.Tensor, target: torch.Tensor) → None
Thresholds predictions and counts the true positives, false positives, and false negatives in comparison to the target.

Parameters
output – prediction after the activation function; shape should be (batch_size, …), but works with any shape
target – binary label; shape should be the same as output's shape
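A minimal usage sketch for the binary case (value() returning a (precision, recall, f1) triple is an assumption):

    import torch
    from catalyst.tools.meters.ppv_tpr_f1_meter import PrecisionRecallF1ScoreMeter

    meter = PrecisionRecallF1ScoreMeter(threshold=0.5)
    output = torch.tensor([0.8, 0.3, 0.6, 0.1])  # sigmoid outputs
    target = torch.tensor([1.0, 0.0, 0.0, 0.0])  # binary labels
    meter.add(output, target)
    print(meter.value())  # assumed: (precision, recall, f1)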