[Issue Tracker] PyTorch distributed RPC #96
Labels:
- bug — Something isn't working
- distributed — Something related to distributed training
- pytorch — Something PyTorch related
- upstream — Something upstream related
This is an issue tracker for the upstream issues:

- Initialize RPC with large world size:
- Pass `nn.Module` and `nn.Parameter` as RPC argument: `nn.Parameter` as RPC argument automatically detaches from the computation graph (pytorch/pytorch#86525)