
Revert "protyping the idea that supports running on CPU for a GGML_US… #16

Revert "protyping the idea that supports running on CPU for a GGML_US…

Revert "protyping the idea that supports running on CPU for a GGML_US… #16

Triggered via push on November 6, 2023 03:10
Status: Failure
Total duration: 8m 1s

docker.yml

on: push
Matrix: Push Docker image to Docker Hub

Annotations

11 errors
Push Docker image to Docker Hub (full-rocm, .devops/full-rocm.Dockerfile, linux/amd64,linux/arm64)
buildx failed with: ERROR: failed to solve: failed to push ghcr.io/ggerganov/llama.cpp:full-rocm-d0a81f4178f89aab8a5344cf1751c884f28d326b: unexpected status from POST request to https://ghcr.io/v2/ggerganov/llama.cpp/blobs/uploads/: 403 Forbidden
Push Docker image to Docker Hub (full, .devops/full.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "full-rocm__devops_full-ro" failed.
Push Docker image to Docker Hub (light-rocm, .devops/main-rocm.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "full-rocm__devops_full-ro" failed.
Push Docker image to Docker Hub (light, .devops/main.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "full-rocm__devops_full-ro" failed.
Push Docker image to Docker Hub (full-cuda, .devops/full-cuda.Dockerfile, linux/amd64)
The job was canceled because "full-rocm__devops_full-ro" failed.
Push Docker image to Docker Hub (light-cuda, .devops/main-cuda.Dockerfile, linux/amd64)
The job was canceled because "full-rocm__devops_full-ro" failed.
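The 403 Forbidden on the blob upload to ghcr.io in the first annotation typically means the workflow's token was not allowed to push packages to the GitHub Container Registry. Below is a minimal, hypothetical sketch — not the repository's actual docker.yml — of the job-level permissions and registry login a push to ghcr.io generally requires, using docker/login-action and docker/build-push-action; the job name, Dockerfile path, platforms, and tag are illustrative only.

```yaml
# Hypothetical sketch — not the actual .github/workflows/docker.yml.
# Shows the pieces a push to ghcr.io generally needs: the packages:write
# permission on the GITHUB_TOKEN and a registry login before the push step.
name: Publish Docker image

on: push

jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # without this, pushes to ghcr.io commonly fail with 403 Forbidden
    steps:
      - uses: actions/checkout@v4

      - name: Log in to the GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: .devops/full-rocm.Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/ggerganov/llama.cpp:full-rocm-${{ github.sha }}
```

If the permissions block is already present, the same 403 can also come from pushing to a package namespace the token cannot write to (for example, from a fork), in which case the fix lies in the repository or package settings rather than the workflow file.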