virtex.utils.common.cycle(dataloader, device, start_iteration: int = 0)[source]

A generator to yield batches of data from dataloader infinitely.

Internally, it sets the epoch for the dataloader sampler to shuffle the examples. One may optionally provide the starting iteration so that the shuffling seed differs across resumptions and continues naturally.
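
A minimal sketch of how such an infinite generator can work. The sampler/epoch handling below is illustrative of the idea (advancing the shuffle seed each pass), not the exact virtex implementation, and `cycle_sketch` is a hypothetical name:

```python
import itertools

def cycle_sketch(dataloader, start_iteration: int = 0):
    # Track a running iteration count so the shuffling "epoch" seed keeps
    # advancing across restarts instead of resetting to zero.
    iteration = start_iteration
    while True:
        # For a sampler like torch's DistributedSampler, set_epoch would be
        # called here so each pass over the data is shuffled differently.
        sampler = getattr(dataloader, "sampler", None)
        if hasattr(sampler, "set_epoch"):
            sampler.set_epoch(iteration)
        for batch in dataloader:
            iteration += 1
            yield batch

# Usage with a plain list standing in for a DataLoader: the generator
# never raises StopIteration, so we slice a finite prefix.
batches = list(itertools.islice(cycle_sketch([1, 2, 3]), 7))
# batches == [1, 2, 3, 1, 2, 3, 1]
```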

virtex.utils.common.common_setup(_C: virtex.config.Config, _A: argparse.Namespace, job_type: str = 'pretrain')[source]

Set up common stuff at the start of every pretraining or downstream evaluation job, all listed here to avoid code duplication. Basic steps:

  1. Fix random seeds and other PyTorch flags.

  2. Set up a serialization directory and loggers.

  3. Log important stuff such as config, process info (useful during distributed training).

  4. Save a copy of config to serialization directory.


It is assumed that multiple processes for distributed training have already been launched from outside. Functions from the virtex.utils.distributed module are used to get process info.

Parameters

  • _C – Config object with all the parameters.

  • _A – Argparse command line arguments.

  • job_type – Type of job for which setup is to be done; one of {"pretrain", "downstream"}.
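
The four setup steps above can be sketched as follows. This is a simplified stand-in (a plain dict for the config, `random.seed` for the full set of PyTorch seed/flag fixes, and a hypothetical `common_setup_sketch` name), not the actual virtex function:

```python
import os
import random

def common_setup_sketch(config: dict, serialization_dir: str, seed: int = 0) -> str:
    # 1. Fix random seeds (the real job also sets torch/cudnn flags).
    random.seed(seed)
    # 2. Set up a serialization directory (loggers would attach to it).
    os.makedirs(serialization_dir, exist_ok=True)
    # 3. Log important stuff such as config and process info.
    print(f"Config: {config}")
    # 4. Save a copy of the config to the serialization directory.
    path = os.path.join(serialization_dir, "config.txt")
    with open(path, "w") as f:
        f.write(repr(config))
    return path

# Usage: run the setup into a temporary directory (illustrative).
import tempfile
saved_path = common_setup_sketch({"lr": 0.001}, tempfile.mkdtemp())
```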

virtex.utils.common.common_parser(description: str = '') → argparse.ArgumentParser[source]

Create an argument parser with some common arguments useful for any pretraining or downstream evaluation script.


Parameters

description – Description to be used with the argument parser.


Returns

A parser object with added arguments.
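
A sketch of what such a shared parser factory looks like. The flag names below (`--config`, `--config-override`, `--serialization-dir`) are illustrative of the kind of common arguments a pretraining/downstream script needs; the real flags in virtex may differ, and `common_parser_sketch` is a hypothetical name:

```python
import argparse

def common_parser_sketch(description: str = "") -> argparse.ArgumentParser:
    # A parser with arguments shared by pretraining and downstream scripts.
    parser = argparse.ArgumentParser(description=description)
    parser.add_argument("--config", help="Path to a config file.")
    parser.add_argument(
        "--config-override", nargs="*", default=[],
        help="Sequence of config keys and values to override.",
    )
    parser.add_argument(
        "--serialization-dir", default="/tmp/virtex",
        help="Directory to save checkpoints and logs.",
    )
    return parser

# Usage: parse a sample command line.
args = common_parser_sketch("pretrain").parse_args(
    ["--config", "configs/pretrain.yaml"]
)
```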