CUDA error: CUBLAS_STATUS_NOT_SUPPORTED #37
Comments
sgemm is already implemented. Please check whether ZLUDA BLAS is loaded and being used (cublas.dll on Windows, libcublas.so on Linux). On Windows you can check with:
tasklist /m cublas.dll
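To check from inside the running Python process instead, here is a minimal sketch (assumptions: the library is identified by the names above, and the check runs in the same process as the application; this is not part of ZLUDA itself):

import sys

if sys.platform == "win32":
    import ctypes
    import ctypes.wintypes as wt
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.GetModuleHandleW.restype = wt.HMODULE
    kernel32.GetModuleHandleW.argtypes = [wt.LPCWSTR]
    kernel32.GetModuleFileNameW.restype = wt.DWORD
    kernel32.GetModuleFileNameW.argtypes = [wt.HMODULE, wt.LPWSTR, wt.DWORD]
    handle = kernel32.GetModuleHandleW("cublas.dll")
    if handle:
        buf = ctypes.create_unicode_buffer(wt.MAX_PATH)
        kernel32.GetModuleFileNameW(handle, buf, wt.MAX_PATH)
        # The printed path should point at the ZLUDA copy, not an original CUDA install
        print("cublas.dll loaded from:", buf.value)
    else:
        print("cublas.dll is not loaded in this process")
else:
    # Linux: scan the memory map of the current process for libcublas
    with open("/proc/self/maps") as maps:
        paths = sorted({line.split()[-1] for line in maps if "libcublas" in line})
    print("\n".join(paths) or "libcublas.so is not loaded in this process")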
What application are you trying to run?
Fooocus
Follow the second paragraph of the ZLUDA PyTorch instructions.
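If that paragraph is about disabling torch backends that ZLUDA does not implement (an assumption here, not a quote from those instructions), the flags in question look like this:

import torch

# Assumed content of the referenced instructions; the actual ZLUDA
# PyTorch guide takes precedence over this sketch.
torch.backends.cudnn.enabled = False
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_math_sdp(True)
torch.backends.cuda.enable_mem_efficient_sdp(False)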
Here you can find an install guide for Fooocus with ZLUDA. I tested it a few minutes ago.
Error loading caffe2_nvrtc.dll or its dependencies after replacing the three DLLs.
Make sure that you have
That error is caused by a Python version installed through the Microsoft Store. Open a cmd, type where python, and verify that the path to Python 3.10.11 is at the top.
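A quick way to confirm from Python itself which interpreter is actually running and which one cmd picks up first (hypothetical check, not from the guide):

import shutil
import sys

# Both should point at the Python 3.10.11 install,
# not the Microsoft Store stub under WindowsApps.
print("Running interpreter:", sys.executable)
print("First python on PATH:", shutil.which("python"))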
It's a little better, but new errors emerge:
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.1.ff.net.2.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.1.attn2.to_q.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.1.attn2.to_k.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.1.attn2.to_v.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.5.1.transformer_blocks.1.attn2.to_out.0.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.5.1.proj_out.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.5.2.conv.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.6.0.in_layers.2.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.6.0.emb_layers.1.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.6.0.out_layers.3.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.6.0.skip_connection.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.7.0.in_layers.2.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.7.0.emb_layers.1.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.7.0.out_layers.3.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.7.0.skip_connection.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.8.0.in_layers.2.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.8.0.emb_layers.1.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.8.0.out_layers.3.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)
ERROR diffusion_model.output_blocks.8.0.skip_connection.weight CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling
cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)