Dino #238
Merged: 11 commits, Jul 26, 2024
1 change: 1 addition & 0 deletions .gitignore
@@ -55,3 +55,4 @@ benchmarks/voir
 benchmarks/*/base/
 benchmarks/lightning/lightning_logs/
 
+benchmarks/*/src/
50 changes: 44 additions & 6 deletions .pin/constraints-cuda-torch.txt

2 changes: 1 addition & 1 deletion .pin/constraints-hpu-torch.txt

2 changes: 1 addition & 1 deletion .pin/constraints-rocm-torch.txt

2 changes: 1 addition & 1 deletion .pin/constraints-xpu-torch.txt

2 changes: 1 addition & 1 deletion benchmarks/_templates/simple/requirements.in
@@ -1,2 +1,2 @@
-voir>=0.2.9,<0.3
+voir>=0.2.17,<0.3
 torch
49 changes: 31 additions & 18 deletions benchmarks/_templates/stdout/main.py
@@ -15,32 +15,45 @@ def criterion(*args, **kwargs):
     return random.normalvariate(0, 1)
 
 
-def main():
-    device = accelerator.fetch_device(0)  # <= This is your cuda device
+def prepare_voir():
+    from benchmate.observer import BenchObserver
+    from benchmate.monitor import bench_monitor
 
     observer = BenchObserver(
-        batch_size_fn=lambda batch: 1,
+        accelerator.Event,
+        earlystop=65,
+        batch_size_fn=lambda x: len(x[0]),
+        raise_stop_program=False,
+        stdout=True,
     )
 
+    return observer, bench_monitor
+
+
+def main():
+    device = accelerator.fetch_device(0)  # <= This is your cuda device
+
+    observer, monitor = prepare_voir()
+
     # optimizer = observer.optimizer(optimizer)
     # criterion = observer.criterion(criterion)
 
     dataloader = list(range(6000))
 
-    for epoch in range(10000):
-        for i in observer.iterate(dataloader):
-            # avoid .item()
-            # avoid torch.cuda; use accelerator from torchcompat instead
-            # avoid torch.cuda.synchronize or accelerator.synchronize
-
-            # y = model(i)
-            loss = criterion()
-            # loss.backward()
-            # optimizer.step()
-
-            observer.record_loss(loss)
-
-            time.sleep(0.1)
+    with monitor():
+        for epoch in range(10000):
+            for i in observer.iterate(dataloader):
+                # avoid .item()
+                # avoid torch.cuda; use accelerator from torchcompat instead
+                # avoid torch.cuda.synchronize or accelerator.synchronize
+
+                # y = model(i)
+                loss = criterion()
+                # loss.backward()
+                # optimizer.step()
+
+                observer.record_loss(loss)
+
+                time.sleep(0.1)
 
     assert epoch < 2, "milabench stopped the train script before the end of training"
     assert i < 72, "milabench stopped the train script before the end of training"
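To make the template change concrete, here is a hypothetical end-to-end sketch of the same pattern around a real model. The Linear model, SGD optimizer, random batches, and the torchcompat import path are placeholders or assumptions, and the observer.optimizer / observer.criterion wrappers follow the calls left commented out in the template above rather than a documented contract:

import torch
import torchcompat.core as accelerator  # assumed import path, as in milabench templates

from benchmate.observer import BenchObserver
from benchmate.monitor import bench_monitor


def train():
    device = accelerator.fetch_device(0)
    model = torch.nn.Linear(10, 2).to(device)

    observer = BenchObserver(
        accelerator.Event,
        earlystop=65,
        batch_size_fn=lambda batch: len(batch[0]),
        raise_stop_program=False,
        stdout=True,
    )
    # Wrapped versions let voir time each step and capture the loss
    # without calling .item() inside the training loop.
    optimizer = observer.optimizer(torch.optim.SGD(model.parameters(), lr=0.01))
    criterion = observer.criterion(torch.nn.CrossEntropyLoss())

    # Placeholder data standing in for a real dataloader.
    batches = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(64)]

    with bench_monitor():
        for x, y in observer.iterate(batches):
            x, y = x.to(device), y.to(device)
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()


if __name__ == "__main__":
    train()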
2 changes: 1 addition & 1 deletion benchmarks/_templates/stdout/requirements.in
@@ -1,2 +1,2 @@
-voir>=0.2.9,<0.3
+voir>=0.2.17,<0.3
 torch
2 changes: 1 addition & 1 deletion benchmarks/_templates/voir/requirements.in
@@ -1,2 +1,2 @@
-voir>=0.2.9,<0.3
+voir>=0.2.17,<0.3
 torch
6 changes: 0 additions & 6 deletions benchmarks/accelerate_opt/benchfile.py
@@ -12,12 +12,6 @@
 class AccelerateBenchmark(Package):
     base_requirements = "requirements.in"
 
-    def make_env(self):
-        env = super().make_env()
-        value = self.resolve_argument("--cpus_per_gpu", 8)
-        env["OMP_NUM_THREADS"] = str(value)
-        return env
-
     def build_prepare_plan(self):
         return CmdCommand(
             self,
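The deleted make_env hook was what propagated --cpus_per_gpu into OMP_NUM_THREADS. If a setup still depends on that cap, a hypothetical stopgap is to set the variable before the benchmark process starts; the value 8 mirrors the removed default:

import os

# Hypothetical stopgap: reproduce the removed default of 8 OpenMP threads.
# setdefault keeps any value already provided by the launcher or shell.
os.environ.setdefault("OMP_NUM_THREADS", "8")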
2 changes: 1 addition & 1 deletion benchmarks/accelerate_opt/requirements.cuda.txt

2 changes: 1 addition & 1 deletion benchmarks/accelerate_opt/requirements.hpu.txt

2 changes: 1 addition & 1 deletion benchmarks/accelerate_opt/requirements.rocm.txt

2 changes: 1 addition & 1 deletion benchmarks/accelerate_opt/requirements.xpu.txt

2 changes: 1 addition & 1 deletion benchmarks/brax/requirements.cuda.txt

2 changes: 1 addition & 1 deletion benchmarks/brax/requirements.hpu.txt
