cuda : supports running on CPU for GGML_USE_CUBLAS=ON build (#3946) #17

Triggered via push November 7, 2023 06:58
Status: Failure
Total duration: 3m 37s
Artifacts

docker.yml

on: push
Matrix: Push Docker image to Docker Hub

Annotations

11 errors
Push Docker image to Docker Hub (light-cuda, .devops/main-cuda.Dockerfile, linux/amd64)
buildx failed with: ERROR: failed to solve: failed to push ghcr.io/ggerganov/llama.cpp:light-cuda-75fb6f2ba0930be1515757196a81d32a1c2ab8ff: unexpected status from POST request to https://ghcr.io/v2/ggerganov/llama.cpp/blobs/uploads/: 403 Forbidden
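A `403 Forbidden` on a blob upload to `ghcr.io` typically means the workflow's token lacks push rights to the package, either because the job is missing the `packages: write` permission or because the login step did not authenticate against `ghcr.io`. A minimal sketch of what the relevant part of `docker.yml` could look like; the action version, step names, and secret are assumptions for illustration, not taken from this run:

```yaml
# Hypothetical excerpt; the real docker.yml in the repository may differ.
permissions:
  contents: read
  packages: write   # required for the GITHUB_TOKEN to push to ghcr.io

steps:
  - name: Log in to GitHub Container Registry
    uses: docker/login-action@v3
    with:
      registry: ghcr.io
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}
```

Note that pushes triggered from forks run with a read-only token, which would also produce a 403 on push regardless of the declared permissions.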
Push Docker image to Docker Hub (full, .devops/full.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "light-cuda__devops_main-c" failed.
Push Docker image to Docker Hub (light, .devops/main.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "light-cuda__devops_main-c" failed.
Push Docker image to Docker Hub (full-cuda, .devops/full-cuda.Dockerfile, linux/amd64)
The job was canceled because "light-cuda__devops_main-c" failed.
Push Docker image to Docker Hub (full-rocm, .devops/full-rocm.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "light-cuda__devops_main-c" failed.
Push Docker image to Docker Hub (light-rocm, .devops/main-rocm.Dockerfile, linux/amd64,linux/arm64)
The job was canceled because "light-cuda__devops_main-c" failed.