
refactor(core): embed llama.cpp's server binary directly for LLM inference #360

Triggered via pull request: May 13, 2024 23:29
Status: Cancelled
Total duration: 53s

bloat.yml
on: pull_request
cargo_bloat (42s)

Annotations

2 errors and 4 warnings
cargo_bloat (error): Canceling since a higher priority waiting request for 'TabbyML/tabby/.github/workflows/bloat.yml@refs/pull/2113/merge-switch-to-llama-cpp-binary' exists
cargo_bloat (error): The operation was canceled.
cargo_bloat (warning, repeated 4 times): The `set-output` command is deprecated and will be disabled soon. Please upgrade to using Environment Files. For more information see: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/