[CI/Docs/Examples] - Replace llama with llama2 model #2294
Checks from gpu-ci-skip.yml (triggered on: pull_request), each reporting 0s (skipped):
- GPU CI Concierge
- Check Python Interface
- Training Tests
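For context, here is a minimal sketch of what a skip-style workflow like gpu-ci-skip.yml typically looks like: a set of no-op jobs that register the same required check names as the real GPU workflow, so PRs that don't touch GPU-relevant code aren't blocked. The job names mirror the checks listed above, but the body of each job is an assumption, not taken from this PR.

```yaml
# Hypothetical sketch of gpu-ci-skip.yml; the actual contents are not shown in this PR.
# Each job is a fast no-op that satisfies a required status check of the same name.
name: GPU CI (skip)

on: pull_request

jobs:
  gpu-ci-concierge:
    name: GPU CI Concierge
    runs-on: ubuntu-latest
    steps:
      - run: echo "Skipped - no GPU-relevant changes in this PR"

  check-python-interface:
    name: Check Python Interface
    runs-on: ubuntu-latest
    steps:
      - run: echo "Skipped"

  training-tests:
    name: Training Tests
    runs-on: ubuntu-latest
    steps:
      - run: echo "Skipped"
```

This duplicate-workflow pattern is a common way to combine required checks with path filtering: the real workflow runs the GPU jobs when relevant paths change, and this counterpart reports instant success otherwise, which is consistent with the 0s durations above.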