[CI/Docs/Examples] - Replace llama with llama2 model #2292
Workflow: gpu-ci-skip.yml (triggered on: pull_request)

Jobs:
GPU CI Concierge: 0s
Check Python Interface: 0s
Training Tests: 0s
Annotations (1 error):
Training Tests: Canceling since a higher priority waiting request for 'gpu-ci-skip-fix_ci' exists
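The "Canceling since a higher priority waiting request" annotation is the message GitHub Actions emits when a newer run claims the same concurrency group, so older waiting (or, with cancel-in-progress, running) runs are canceled rather than failed. Below is a minimal sketch of the kind of concurrency block that would produce a group name like 'gpu-ci-skip-fix_ci'; it assumes the group is composed of the workflow name and the PR branch (here fix_ci), and the job names and steps are placeholders, not the actual contents of gpu-ci-skip.yml.

```yaml
# Hypothetical sketch, not the actual gpu-ci-skip.yml: a concurrency group keyed
# on the workflow name and the PR head branch, so a newer push to the same PR
# supersedes runs that are still waiting or in progress.
name: gpu-ci-skip

on: pull_request

concurrency:
  # For a PR from branch fix_ci this evaluates to "gpu-ci-skip-fix_ci",
  # matching the group named in the cancellation annotation above.
  group: gpu-ci-skip-${{ github.head_ref }}
  cancel-in-progress: true

jobs:
  gpu-ci-concierge:
    runs-on: ubuntu-latest
    steps:
      - run: echo "placeholder step"
```

With this setup, the canceled run counts as an error annotation on the superseded run even though no test actually failed, which matches the 0s job durations reported above.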