[CI/Docs/Examples] - Replace llama with llama2 model #2799
Workflow: gpu-ci.yml (triggered on `pull_request`)
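For context, a minimal sketch of what the trigger section of such a gpu-ci.yml could look like. Only the workflow filename, the `pull_request` trigger, and the job names come from this run summary; the job keys, runner labels, and steps are assumptions for illustration.

```yaml
# Hypothetical sketch of gpu-ci.yml.
# Real values from the run: trigger (pull_request) and job display names.
# Assumed values: job keys, runner labels, checkout step.
name: GPU CI
on: pull_request

jobs:
  concierge:
    name: GPU CI Concierge
    runs-on: ubuntu-latest          # assumed runner label
    steps:
      - uses: actions/checkout@v4   # assumed step

  python-interface:
    name: Check Python Interface
    runs-on: ubuntu-latest          # assumed; GPU jobs often use self-hosted runners
    steps:
      - uses: actions/checkout@v4

  training-tests:
    name: Training Tests
    runs-on: ubuntu-latest          # assumed
    steps:
      - uses: actions/checkout@v4
```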
Jobs:

- GPU CI Concierge: 14s
- Check Python Interface: 23m 29s
- Training Tests: 1h 6m
Artifacts
Produced during runtime
| Name | Size |
|---|---|
| output (Expired) | 3.5 KB |