[CI/Docs/Examples] - Replace llama with llama2 model #2280
Workflow: gpu-ci-skip.yml (triggered on: pull_request)

- GPU CI Concierge — 0s
- Check Python Interface — 2s
- Training Tests — 0s