A Quick Recap of Distributed Training in PyTorch: DP → DDP

A log for myself — and anyone who wants a simple mental model of how multi-GPU training actually works.
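The core mental-model difference: `DataParallel` (DP) replicates the model across GPUs inside one process each step, while `DistributedDataParallel` (DDP) runs one process per device and all-reduces gradients during `backward()`. A minimal sketch of the DDP setup, assuming a single-process run on CPU with the `gloo` backend (the address, port, and rank/world-size values here are illustrative, not prescriptive):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Illustrative rendezvous settings for a single local process.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# One process, one rank; real jobs launch one process per GPU
# (e.g. via torchrun) and use the "nccl" backend.
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 2)
ddp_model = DDP(model)  # wraps the model; syncs grads in backward()

out = ddp_model(torch.randn(8, 4))
loss = out.sum()
loss.backward()  # gradients are all-reduced across ranks here

dist.destroy_process_group()
```

With `DataParallel` the same model would instead be wrapped as `torch.nn.DataParallel(model)` in a single process, which scatters inputs and gathers outputs every forward pass and is why it is generally slower than DDP.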
