Add new Dino | Update voir (#238)
* update voir

* dinov2 working

* dinov2 working

* dino-giant working

* Always set OMP_NUM_THREADS

* Add some batch size

* Update tests

* update template

* update lock file

* Allow overriding the number of CPUs seen by milabench

---------

Co-authored-by: pierre.delaunay <[email protected]>
Delaunay and pierre.delaunay authored Jul 26, 2024
1 parent ea3ff78 commit 2586e65
Showing 100 changed files with 1,308 additions and 362 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -55,3 +55,4 @@ benchmarks/voir
 benchmarks/*/base/
 benchmarks/lightning/lightning_logs/
 
+benchmarks/*/src/
50 changes: 44 additions & 6 deletions .pin/constraints-cuda-torch.txt

2 changes: 1 addition & 1 deletion .pin/constraints-hpu-torch.txt

2 changes: 1 addition & 1 deletion .pin/constraints-rocm-torch.txt

2 changes: 1 addition & 1 deletion .pin/constraints-xpu-torch.txt

2 changes: 1 addition & 1 deletion benchmarks/_templates/simple/requirements.in
@@ -1,2 +1,2 @@
-voir>=0.2.9,<0.3
+voir>=0.2.17,<0.3
 torch
49 changes: 31 additions & 18 deletions benchmarks/_templates/stdout/main.py
@@ -15,32 +15,45 @@ def criterion(*args, **kwargs):
     return random.normalvariate(0, 1)
 
 
-def main():
-    device = accelerator.fetch_device(0)  # <= This is your cuda device
+def prepare_voir():
+    from benchmate.observer import BenchObserver
+    from benchmate.monitor import bench_monitor
 
     observer = BenchObserver(
-        batch_size_fn=lambda batch: 1,
+        accelerator.Event,
+        earlystop=65,
+        batch_size_fn=lambda x: len(x[0]),
+        raise_stop_program=False,
+        stdout=True,
     )
 
+    return observer, bench_monitor
+
+
+def main():
+    device = accelerator.fetch_device(0)  # <= This is your cuda device
+
+    observer, monitor = prepare_voir()
+
     # optimizer = observer.optimizer(optimizer)
     # criterion = observer.criterion(criterion)
 
     dataloader = list(range(6000))
 
-    for epoch in range(10000):
-        for i in observer.iterate(dataloader):
-            # avoid .item()
-            # avoid torch.cuda; use accelerator from torchcompat instead
-            # avoid torch.cuda.synchronize or accelerator.synchronize
-
-            # y = model(i)
-            loss = criterion()
-            # loss.backward()
-            # optimizer.step()
-
-            observer.record_loss(loss)
-
-            time.sleep(0.1)
+    with monitor():
+        for epoch in range(10000):
+            for i in observer.iterate(dataloader):
+                # avoid .item()
+                # avoid torch.cuda; use accelerator from torchcompat instead
+                # avoid torch.cuda.synchronize or accelerator.synchronize
+
+                # y = model(i)
+                loss = criterion()
+                # loss.backward()
+                # optimizer.step()
+
+                observer.record_loss(loss)
+
+                time.sleep(0.1)
 
     assert epoch < 2, "milabench stopped the train script before the end of training"
     assert i < 72, "milabench stopped the train script before the end of training"
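For orientation, here is a minimal sketch of how this template pattern would look with a real model and optimizer, i.e. with the template's commented-out observer.optimizer / observer.criterion hooks actually used. It is not part of this commit: the torchcompat import path, the toy model, and the dataset are assumptions.

import torch
import torchcompat.core as accelerator  # assumed import path for the torchcompat accelerator

from benchmate.observer import BenchObserver
from benchmate.monitor import bench_monitor


def main():
    device = accelerator.fetch_device(0)

    # Hypothetical toy model and data, for illustration only.
    model = torch.nn.Linear(16, 2).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = torch.nn.CrossEntropyLoss()

    observer = BenchObserver(
        accelerator.Event,
        earlystop=65,
        batch_size_fn=lambda batch: len(batch[0]),
        raise_stop_program=False,
        stdout=True,
    )
    # Wrapping (per the template's commented-out lines) lets the observer
    # time optimizer steps and record losses without calling .item() in the loop.
    optimizer = observer.optimizer(optimizer)
    criterion = observer.criterion(criterion)

    dataloader = [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(64)]

    with bench_monitor():
        for epoch in range(10):
            for x, y in observer.iterate(dataloader):
                x, y = x.to(device), y.to(device)
                loss = criterion(model(x), y)
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()


if __name__ == "__main__":
    main()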
2 changes: 1 addition & 1 deletion benchmarks/_templates/stdout/requirements.in
@@ -1,2 +1,2 @@
-voir>=0.2.9,<0.3
+voir>=0.2.17,<0.3
 torch
2 changes: 1 addition & 1 deletion benchmarks/_templates/voir/requirements.in
@@ -1,2 +1,2 @@
-voir>=0.2.9,<0.3
+voir>=0.2.17,<0.3
 torch
6 changes: 0 additions & 6 deletions benchmarks/accelerate_opt/benchfile.py
@@ -12,12 +12,6 @@
 class AccelerateBenchmark(Package):
     base_requirements = "requirements.in"
 
-    def make_env(self):
-        env = super().make_env()
-        value = self.resolve_argument("--cpus_per_gpu", 8)
-        env["OMP_NUM_THREADS"] = str(value)
-        return env
-
     def build_prepare_plan(self):
         return CmdCommand(
            self,
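The deleted make_env hook above set OMP_NUM_THREADS per benchmark from a --cpus_per_gpu argument; per the commit message ("Always set OMP_NUM_THREADS"), that responsibility moves into milabench itself. The sketch below only illustrates the general idea of a centralized default; the function and parameter names are hypothetical, not milabench's actual code.

import os


def base_environment(visible_cpus: int, ngpu: int) -> dict:
    """Hypothetical central helper: build one env for every spawned process."""
    env = dict(os.environ)
    # Give each GPU worker an equal share of the CPUs milabench can see.
    env["OMP_NUM_THREADS"] = str(max(1, visible_cpus // max(1, ngpu)))
    return env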
2 changes: 1 addition & 1 deletion benchmarks/accelerate_opt/requirements.cuda.txt

2 changes: 1 addition & 1 deletion benchmarks/accelerate_opt/requirements.hpu.txt

2 changes: 1 addition & 1 deletion benchmarks/accelerate_opt/requirements.rocm.txt

2 changes: 1 addition & 1 deletion benchmarks/accelerate_opt/requirements.xpu.txt

2 changes: 1 addition & 1 deletion benchmarks/brax/requirements.cuda.txt

2 changes: 1 addition & 1 deletion benchmarks/brax/requirements.hpu.txt
