Examples¶
Basic Level¶
Create an instance from a config file with params.
Please note that Basic Fire Magic also allows your hero to cast fire spells at reduced cost.
# transform.yaml
_target_: torchvision.transforms.Normalize
mean: [0.5, 0.5, 0.5]
std: [0.5, 0.5, 0.5]
import hydra_slayer
import yaml
registry = hydra_slayer.Registry()
with open("transform.yaml") as stream:
    raw_config = yaml.safe_load(stream)
transform = registry.get_from_params(**raw_config)
transform
# Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
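Here ``get_from_params`` resolves the dotted ``_target_`` path and calls the resolved factory with the remaining keys. For intuition, a minimal stand-alone sketch of that idea (not hydra-slayer's actual implementation; the ``instantiate`` helper and the ``datetime.timedelta`` target are illustrations only):

```python
import importlib

def instantiate(config: dict):
    # Toy version of ``_target_``-style creation: import the dotted
    # path, then call the resolved factory with the remaining keys.
    params = dict(config)
    module_path, _, attr_name = params.pop("_target_").rpartition(".")
    factory = getattr(importlib.import_module(module_path), attr_name)
    return factory(**params)

delta = instantiate({"_target_": "datetime.timedelta", "days": 2, "hours": 3})
print(delta)  # 2 days, 3:00:00
```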
Advanced Level¶
Create a CIFAR100 dataset from a config file with params.
Please note that Advanced Fire Magic also allows your hero to cast fire spells at reduced cost and increased effectiveness.
# dataset.yaml
_target_: torchvision.datasets.CIFAR100
root: ./data
train: false
transform:
  _target_: torchvision.transforms.Compose
  transforms:
    - _target_: torchvision.transforms.ToTensor
    - _target_: torchvision.transforms.Normalize
      mean: [0.5, 0.5, 0.5]
      std: [0.5, 0.5, 0.5]
download: true
import hydra_slayer
import yaml
registry = hydra_slayer.Registry()
with open("dataset.yaml") as stream:
    config = yaml.safe_load(stream)
dataset = registry.get_from_params(**config)
dataset
# Dataset CIFAR100
#     Number of datapoints: 10000
#     Root location: ./data
#     Split: Test
#     StandardTransform
# Transform: Compose(
#                ToTensor()
#                Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
#            )
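Nested ``_target_`` entries (like ``Compose`` above) are resolved bottom-up: inner configs are instantiated first and the results are passed to the outer factory as ordinary arguments. A minimal sketch of that recursion, using only stdlib targets (``argparse.Namespace`` and ``datetime.timedelta`` stand in for the torchvision classes; this is an illustration, not hydra-slayer's code):

```python
import importlib

def instantiate(config):
    # Recurse into lists and dicts so that inner ``_target_`` configs
    # are built before the outer factory is called.
    if isinstance(config, list):
        return [instantiate(item) for item in config]
    if not isinstance(config, dict):
        return config
    params = {key: instantiate(value) for key, value in config.items()}
    if "_target_" not in params:
        return params
    module_path, _, attr_name = params.pop("_target_").rpartition(".")
    return getattr(importlib.import_module(module_path), attr_name)(**params)

config = {
    "_target_": "argparse.Namespace",
    "transform": {"_target_": "datetime.timedelta", "hours": 1},
}
ns = instantiate(config)
print(ns.transform)  # 1:00:00
```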
Expert Level¶
Read multiple CSV files as pandas dataframes and merge them.
Please note that Expert Fire Magic also allows your hero to cast fire spells at reduced cost and maximum effectiveness.
# dataset.yaml
dataframe:
  _target_: pandas.merge
  left:
    _target_: pandas.read_csv
    filepath_or_buffer: dataset/dataset_part1.csv
    # By default, hydra-slayer uses partial fit for functions
    # (which is useful with activation functions in neural networks).
    # But if we want to call the ``pandas.read_csv`` function instead,
    # then we should pass ``call_meta_factory`` manually.
    meta_factory: &call_function
      _target_: catalyst.tools.registry.call_meta_factory
  right:
    _target_: pandas.read_csv
    filepath_or_buffer: dataset/dataset_part2.csv
    meta_factory: *call_function
  how: inner
  on: user
  meta_factory: *call_function
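The ``meta_factory`` distinction boils down to "wrap the factory for a later call" versus "call it right now". A sketch of the two behaviors, assuming meta-factories of the form ``(factory, args, kwargs)`` (the actual hydra-slayer/catalyst signatures may differ; the lambdas are placeholders, not real config targets):

```python
import functools

def partial_meta_factory(factory, args, kwargs):
    # Default-like behavior: defer the call, which is handy when the
    # config describes e.g. an activation function applied later.
    return functools.partial(factory, *args, **kwargs)

def call_meta_factory(factory, args, kwargs):
    # Call immediately: what we want for ``pandas.read_csv`` above.
    return factory(*args, **kwargs)

scale = partial_meta_factory(lambda x, k: x * k, (), {"k": 10})
print(scale(4))  # 40
print(call_meta_factory(lambda x, k: x * k, (4,), {"k": 10}))  # 40
```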
import hydra_slayer
import yaml
registry = hydra_slayer.Registry()
with open("dataset.yaml") as stream:
    raw_config = yaml.safe_load(stream)
config = registry.get_from_params(**raw_config)
dataset = config["dataframe"]
dataset
# <class 'pandas.core.frame.DataFrame'>
# user country premium ...
# 0 1 USA True ...
# 1 2 USA False ...
# ... ... ... ...