virtex.optim.lr_scheduler
- class virtex.optim.lr_scheduler.LinearWarmupNoDecayLR(optimizer: torch.optim.optimizer.Optimizer, total_steps: int, warmup_steps: int, last_epoch: int = -1)[source]
Bases: torch.optim.lr_scheduler.LambdaLR
A learning rate scheduler which linearly increases the learning rate from 0 during warmup, and then keeps it constant for the rest of training.
- Parameters
optimizer – Wrapped optimizer.
total_steps – Total epochs (or iterations) for training.
warmup_steps – Number of first few steps to do linear warmup.
last_epoch – The index of last step (epoch or iteration). We named it last_epoch instead of last_step to keep the naming consistent with other LR schedulers in PyTorch.
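A minimal usage sketch (the model, optimizer, and step counts below are illustrative placeholders, not values prescribed by VirTex): the scheduler is stepped once per training iteration, so the LR ramps up over the first warmup_steps updates and then stays at the optimizer's base LR.

```python
import torch
from virtex.optim.lr_scheduler import LinearWarmupNoDecayLR

model = torch.nn.Linear(16, 16)                          # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Warm up for the first 1000 of 10000 steps, then hold the LR constant.
scheduler = LinearWarmupNoDecayLR(optimizer, total_steps=10000, warmup_steps=1000)

for step in range(10000):
    # ... forward pass, loss.backward() ...
    optimizer.step()
    scheduler.step()        # advance the schedule once per iteration
```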
- class virtex.optim.lr_scheduler.LinearWarmupMultiStepLR(optimizer: torch.optim.optimizer.Optimizer, total_steps: int, warmup_steps: int, milestones: List[int], gamma: float = 0.1, last_epoch: int = -1)[source]
Bases: torch.optim.lr_scheduler.LambdaLR
A learning rate scheduler which linearly increases the learning rate from 0 during warmup, and then decreases it by gamma each time the number of steps reaches one of the milestones.
- Parameters
optimizer – Wrapped optimizer.
total_steps – Total epochs (or iterations) for training.
warmup_steps – Number of first few steps to do linear warmup.
milestones – List of step indices (epochs or iterations depending on context). Must be increasing.
gamma – Multiplicative factor of learning rate decay.
last_epoch – The index of last step (epoch or iteration). We named it last_epoch instead of last_step to keep the naming consistent with other LR schedulers in PyTorch.
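A hedged sketch of how milestones and gamma interact, assuming step-level scheduling; the milestone positions, gamma, and hyperparameters below are arbitrary examples:

```python
import torch
from virtex.optim.lr_scheduler import LinearWarmupMultiStepLR

model = torch.nn.Linear(16, 16)                          # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Warm up for 500 steps, then multiply the LR by gamma=0.1 at steps 4000 and 8000.
scheduler = LinearWarmupMultiStepLR(
    optimizer, total_steps=10000, warmup_steps=500,
    milestones=[4000, 8000], gamma=0.1,
)

for step in range(10000):
    optimizer.step()
    scheduler.step()
    if step in (499, 3999, 4000, 8000):
        print(step, scheduler.get_last_lr())             # inspect the LR around the warmup end and milestones
```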
- class virtex.optim.lr_scheduler.LinearWarmupLinearDecayLR(optimizer: torch.optim.optimizer.Optimizer, total_steps: int, warmup_steps: int, last_epoch: int = -1)[source]
Bases: torch.optim.lr_scheduler.LambdaLR
A learning rate scheduler which linearly increases the learning rate from 0 during warmup, and then decreases it linearly to zero by the end of training.
- Parameters
optimizer – Wrapped optimizer.
total_steps – Total epochs (or iterations) for training.
warmup_steps – Number of first few steps to do linear warmup.
last_epoch – The index of last step (epoch or iteration). We named it last_epoch instead of last_step to keep the naming consistent with other LR schedulers in PyTorch.
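The multiplier applied to the base LR follows a triangular shape. The helper below is an illustrative reconstruction of that shape, not necessarily the library's exact implementation: it rises linearly to 1 over the warmup steps and then falls linearly back to 0 at total_steps.

```python
def linear_warmup_linear_decay(step: int, total_steps: int, warmup_steps: int) -> float:
    """Plausible LR multiplier for the described schedule (illustrative only)."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)                            # linear warmup: 0 -> 1
    # linear decay: 1 -> 0 over the remaining steps
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# e.g. with total_steps=100, warmup_steps=10:
#   step 5 -> 0.5,  step 10 -> 1.0,  step 55 -> 0.5,  step 100 -> 0.0
```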
- class virtex.optim.lr_scheduler.LinearWarmupCosineAnnealingLR(optimizer: torch.optim.optimizer.Optimizer, total_steps: int, warmup_steps: int, last_epoch: int = -1)[source]
Bases: torch.optim.lr_scheduler.LambdaLR
A learning rate scheduler which linearly increases the learning rate from 0 during warmup, and then decreases it to zero by cosine decay. After linear warmup, the LR decays as:
\[\eta_t = \eta_{max}\cos^2\left(\frac{T_{cur} - T_{warm}}{T_{max} - T_{warm}} \cdot \frac{\pi}{2}\right)\]
- Parameters
optimizer – Wrapped optimizer.
total_steps – Total epochs (or iterations) for training.
warmup_steps – Number of first few steps to do linear warmup.
last_epoch – The index of last step (epoch or iteration). We named it last_epoch instead of last_step to keep the naming consistent with other LR schedulers in PyTorch.
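To make the formula above concrete, the helper below evaluates the documented multiplier directly (an illustrative sketch, not the library's exact code): warmup is linear, and afterwards the multiplier is the squared cosine of the normalized progress scaled by pi/2, which falls from 1 at the end of warmup to 0 at total_steps.

```python
import math

def linear_warmup_cosine_multiplier(step: int, total_steps: int, warmup_steps: int) -> float:
    """Evaluate the documented schedule: linear warmup, then squared-cosine decay."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)                            # linear warmup: 0 -> 1
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return math.cos(progress * math.pi / 2) ** 2                      # cos^2 decay: 1 -> 0

# e.g. with total_steps=100, warmup_steps=10:
#   step 10 -> 1.0,  step 55 -> cos(pi/4)^2 = 0.5,  step 100 -> 0.0
```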