Remove composable API's fully_shard from torchtnt example and test (#944)

Summary:
Pull Request resolved: #944

The fully_shard name is now used by FSDP2 (torch.distributed._composable.fsdp.fully_shard), and the composable API's fully_shard (torch.distributed._composable.fully_shard) is being deprecated. Therefore, we want to remove torch.distributed._composable.fully_shard from torchtnt as well.
Deprecation message from PyTorch:
https://github.com/pytorch/pytorch/blob/main/torch/distributed/_composable/fully_shard.py#L41-L48
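
For context, a minimal before/after sketch of the migration, assuming a toy torch.nn.Linear model and an already-initialized distributed process group; the variable names here are illustrative and not taken from the torchtnt test:

import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Before (deprecated composable API removed by this commit):
#   from torch.distributed._composable import fully_shard
#   model = torch.nn.Linear(1, 1, device="cuda")
#   fully_shard(model)

# After: wrap the module with FSDP directly, matching the updated test below.
model = torch.nn.Linear(1, 1, device="cuda")
model = FSDP(model)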

Reviewed By: fegin

Differential Revision: D65702749

fbshipit-source-id: a755fe4f0c7800184d62d958466445e313ceb796
wz337 authored and facebook-github-bot committed Nov 11, 2024
1 parent 72df3db commit 753f05a
Showing 1 changed file with 1 addition and 3 deletions.
4 changes: 1 addition & 3 deletions tests/utils/test_prepare_module_gpu.py
@@ -9,8 +9,6 @@
 import unittest
 
 import torch
-
-from torch.distributed._composable import fully_shard
 from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
 from torch.distributed.fsdp.fully_sharded_data_parallel import MixedPrecision
 from torch.nn.parallel import DistributedDataParallel as DDP
@@ -93,7 +91,7 @@ def _test_is_fsdp_module() -> None:
         model = FSDP(torch.nn.Linear(1, 1, device=device))
         assert _is_fsdp_module(model)
         model = torch.nn.Linear(1, 1, device=device)
-        fully_shard(model)
+        model = FSDP(model)
         assert _is_fsdp_module(model)
 
     @skip_if_not_distributed
