I'm using some ready-made scripts for distributed training of my model, and I don't fully understand the mechanics behind them. Basically, they use torch.distributed: the master script spawns multiple processes, but the updates all go into the same model (script). However, validation on the validation dataset is done separately in each process, so I don't get an "overall" validation loss (see here). How can I combine the validation results from the different processes to get an overall result?
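For context, here is a minimal sketch (not the actual ready-made scripts, and the `validate` helper and loss are just placeholders I made up) of the kind of per-process validation loop I mean. My guess is that something like `dist.all_reduce` over the per-process sums would give the overall loss, but I'm not sure this is the right approach:

```python
# Minimal sketch, assuming torch.distributed is already initialized
# (e.g. via init_process_group) and each process has its own val_loader shard.
import torch
import torch.distributed as dist

def validate(model, val_loader, device):
    model.eval()
    total_loss = torch.zeros(1, device=device)
    total_count = torch.zeros(1, device=device)
    with torch.no_grad():
        for inputs, targets in val_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = model(inputs)
            # Sum (not mean) so totals can be combined across processes later.
            loss = torch.nn.functional.cross_entropy(outputs, targets, reduction="sum")
            total_loss += loss
            total_count += targets.size(0)

    # Combine the per-process sums so every rank ends up with the same totals.
    dist.all_reduce(total_loss, op=dist.ReduceOp.SUM)
    dist.all_reduce(total_count, op=dist.ReduceOp.SUM)

    overall_loss = (total_loss / total_count).item()
    if dist.get_rank() == 0:
        print(f"overall validation loss: {overall_loss:.4f}")
    return overall_loss
```

Is this the standard way to do it, or is there a better mechanism built into torch.distributed?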