probnmn.evaluators.module_training_evaluator
class probnmn.evaluators.module_training_evaluator.ModuleTrainingEvaluator(config: probnmn.config.Config, models: Dict[str, Type[torch.nn.modules.module.Module]], gpu_ids: List[int] = [0], cpu_workers: int = 0)[source]

Bases: probnmn.evaluators._evaluator._Evaluator

Performs evaluation for the "module_training" phase, using batches of evaluation examples from ModuleTrainingDataset.

- Parameters
  - config: Config
    A Config object with all the relevant configuration parameters.
  - models: Dict[str, Type[nn.Module]]
    All the models which interact with each other for evaluation. These should come from ModuleTrainingTrainer.
  - gpu_ids: List[int], optional (default = [0])
    List of GPU IDs to use for evaluation; set [-1] to use the CPU.
  - cpu_workers: int, optional (default = 0)
    Number of CPU workers to use for fetching batch examples in the dataloader.
Examples
To evaluate a pre-trained checkpoint:
>>> config = Config("config.yaml")  # PHASE must be "module_training"
>>> trainer = ModuleTrainingTrainer(config, serialization_dir="/tmp")
>>> trainer.load_checkpoint("/path/to/module_training_checkpoint.pth")
>>> evaluator = ModuleTrainingEvaluator(config, trainer.models)
>>> eval_metrics = evaluator.evaluate(num_batches=50)
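For intuition about what `evaluate(num_batches=...)` does with per-batch outputs, here is a hedged sketch of the aggregation idea. This is a simplified stand-in, not probnmn's actual implementation: `evaluate_sketch` and its plain-dict batches are hypothetical, and the real evaluator works with tensors from the dataloader.

```python
from typing import Any, Dict, Iterable


def evaluate_sketch(batches: Iterable[Dict[str, Any]], num_batches: int) -> Dict[str, float]:
    """Hypothetical sketch: average a per-batch "nmn" loss over num_batches,
    mimicking how an evaluator might accumulate validation metrics."""
    total_loss, seen = 0.0, 0
    for batch in batches:
        if seen >= num_batches:
            break
        # Stand-in for _do_iteration(batch); here each batch already carries a loss.
        iteration_output = {"nmn": {"loss": batch["loss"]}}
        total_loss += iteration_output["nmn"]["loss"]
        seen += 1
    return {"nmn_loss_mean": total_loss / max(seen, 1)}


metrics = evaluate_sketch([{"loss": 0.25}, {"loss": 0.75}, {"loss": 0.6}], num_batches=2)
print(metrics)  # {'nmn_loss_mean': 0.5}
```

Capping the loop at `num_batches` mirrors the keyword argument shown in the example above: evaluation runs over a fixed number of batches rather than the full validation split.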
- _do_iteration(self, batch: Dict[str, Any]) → Dict[str, Any][source]

  Perform one iteration, given a batch. Take a forward pass to accumulate metrics in NeuralModuleNetwork.

  - Parameters
    - batch: Dict[str, Any]
      A batch of evaluation examples sampled from the dataloader.
  - Returns
    - Dict[str, Any]
      A dictionary containing model predictions and/or batch validation losses of ProgramGenerator and NeuralModuleNetwork. Nested dict structure:

      {
          "program_generator": {"predictions"},
          "nmn": {"predictions", "loss"}
      }
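As a hedged illustration of consuming that nested structure: the function and all values below are dummies (real entries come from ProgramGenerator and NeuralModuleNetwork forward passes, as tensors), but the key layout matches the documented return shape.

```python
from typing import Any, Dict


def fake_do_iteration(batch: Dict[str, Any]) -> Dict[str, Any]:
    """Hypothetical stand-in for ModuleTrainingEvaluator._do_iteration:
    returns the documented nested structure with dummy values in place of
    real ProgramGenerator / NeuralModuleNetwork outputs."""
    return {
        "program_generator": {"predictions": [[3, 1, 4]]},  # dummy program token ids
        "nmn": {"predictions": [7], "loss": 0.25},          # dummy answer id + loss
    }


output = fake_do_iteration({"question": [[5, 9, 2]]})
# The "nmn" sub-dict carries both predictions and the batch validation loss;
# "program_generator" carries predictions only.
print(sorted(output.keys()))   # ['nmn', 'program_generator']
print(output["nmn"]["loss"])   # 0.25
```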