[CI/Docs/Examples] - Replace llama with llama2 model #2798
gpu-ci.yml
on: pull_request

Jobs
- GPU CI Concierge: 15s
- Check Python Interface: 23m 24s
- Training Tests: 0s
Annotations (1 error)
- Inference Tests: Process completed with exit code 1.
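The jobs above come from the gpu-ci.yml workflow, which is triggered on pull_request. A minimal sketch of how such a workflow could be laid out is shown below; the job names are taken from this run, while the runner labels, step names, and script paths are assumptions for illustration only, not the repository's actual configuration.

```yaml
# Hypothetical sketch of gpu-ci.yml, not the repository's actual file.
# Job names match this run; runner labels and commands are assumptions.
name: GPU CI

on: pull_request

jobs:
  concierge:
    name: GPU CI Concierge
    runs-on: ubuntu-latest            # assumed lightweight gatekeeper job
    steps:
      - uses: actions/checkout@v4

  python-interface:
    name: Check Python Interface
    needs: concierge
    runs-on: [self-hosted, gpu]       # assumed self-hosted GPU runner
    steps:
      - uses: actions/checkout@v4
      - run: ./tests/run_python_interface.sh   # hypothetical script

  inference-tests:
    name: Inference Tests
    needs: concierge
    runs-on: [self-hosted, gpu]
    steps:
      - uses: actions/checkout@v4
      - run: ./tests/run_inference.sh   # hypothetical script; this job failed with exit code 1

  training-tests:
    name: Training Tests
    needs: concierge
    runs-on: [self-hosted, gpu]
    steps:
      - uses: actions/checkout@v4
      - run: ./tests/run_training.sh    # hypothetical script
```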
Artifacts
Produced during runtime

Name | Size
---|---
output (Expired) | 112 Bytes