virtex.utils.common
- virtex.utils.common.cycle(dataloader, device, start_iteration: int = 0)
A generator to yield batches of data from dataloader infinitely.
Internally, it sets the epoch for the dataloader's sampler to shuffle the examples. One may optionally provide the starting iteration to make sure the shuffling seed is different and continues naturally.
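A minimal sketch of such a generator, assuming batches are dictionaries of tensors and the sampler exposes a set_epoch method (as torch.utils.data.distributed.DistributedSampler does); this is an illustration, not the library's exact implementation:

    def cycle_sketch(dataloader, device, start_iteration: int = 0):
        """Yield batches from ``dataloader`` forever, reshuffling each pass."""
        iteration = start_iteration
        while True:
            # DistributedSampler-style samplers reshuffle based on the epoch;
            # seeding with the running iteration keeps the order different on
            # each pass and continues naturally from ``start_iteration`` when
            # a job is resumed.
            if hasattr(dataloader.sampler, "set_epoch"):
                dataloader.sampler.set_epoch(iteration)

            for batch in dataloader:
                # Assumes each batch is a dict of tensors; move them to device.
                yield {key: value.to(device) for key, value in batch.items()}
                iteration += 1

A training loop can then call next() on this generator for a fixed number of iterations instead of nesting epoch and batch loops.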
- virtex.utils.common.common_setup(_C: virtex.config.Config, _A: argparse.Namespace, job_type: str = 'pretrain')
Setup common stuff at the start of every pretraining or downstream evaluation job, all listed here to avoid code duplication. Basic steps:
- Fix random seeds and other PyTorch flags.
- Set up a serialization directory and loggers.
- Log important stuff such as config, process info (useful during distributed training).
- Save a copy of config to serialization directory.
Note
It is assumed that multiple processes for distributed training have already been launched from outside. Functions from the virtex.utils.distributed module are used to get process info.
- Parameters
_C – Config object with all the parameters.
_A – Argparse command line arguments.
job_type – Type of job for which setup is to be done; one of {"pretrain", "downstream"}.
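For illustration only, a simplified sketch of the steps above; attribute names such as _C.RANDOM_SEED and _A.serialization_dir, and the Config.dump method, are assumptions inferred from the description, not confirmed API:

    import os
    import random

    import numpy as np
    import torch


    def common_setup_sketch(_C, _A, job_type: str = "pretrain"):
        # 1. Fix random seeds and other PyTorch flags (names assumed).
        random.seed(_C.RANDOM_SEED)
        np.random.seed(_C.RANDOM_SEED)
        torch.manual_seed(_C.RANDOM_SEED)
        torch.backends.cudnn.deterministic = True

        # 2. Set up a serialization directory (flag name assumed).
        os.makedirs(_A.serialization_dir, exist_ok=True)

        # 3. Logging of config and process info would happen here; in the
        #    real code, rank and world size come from virtex.utils.distributed.

        # 4. Save a copy of the config to the serialization directory.
        config_path = os.path.join(_A.serialization_dir, f"{job_type}_config.yaml")
        _C.dump(config_path)  # assumes Config exposes a ``dump`` method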
- virtex.utils.common.common_parser(description: str = '') → argparse.ArgumentParser
Create an argument parser with some common arguments useful for any pretraining or downstream evaluation script.
- Parameters
description – Description to be used with the argument parser.
- Returns
A parser object with added arguments.
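A hedged usage sketch tying the two helpers together at the top of a script; the --config flag and the Config constructor argument are assumptions about which arguments the parser adds:

    from virtex.config import Config
    from virtex.utils.common import common_parser, common_setup

    parser = common_parser(description="Pretrain a VirTex model.")
    _A = parser.parse_args()

    # Assumed flag: the parser is expected to add a ``--config`` argument
    # pointing at a YAML file; check the parser's help for the real names.
    _C = Config(_A.config)

    common_setup(_C, _A, job_type="pretrain")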