chore: switch back llama.cpp to b3027 to avoid deepseekcoder 6.7 regression (#2323)
Showing 1 changed file with 1 addition and 1 deletion.
Submodule llama.cpp updated 21 files:

+0 −4     CMakeLists.txt
+5 −5     README-sycl.md
+1 −2     README.md
+2 −26    examples/llama-bench/llama-bench.cpp
+1 −3     ggml-cuda.cu
+22 −88   ggml-cuda/concat.cu
+0 −6     ggml-cuda/norm.cu
+10 −8    ggml-cuda/rope.cu
+1 −0     ggml-kompute.cpp
+1 −4     ggml-metal.m
+10 −6    ggml-metal.metal
+68 −56   ggml-sycl.cpp
+48 −69   ggml.c
+1 −14    ggml.h
+3 −1     ggml_vk_generate_shaders.py
+174 −100 llama.cpp
+4 −0     scripts/sync-ggml-am.sh
+1 −1     scripts/sync-ggml.last
+2 −0     scripts/sync-ggml.sh
+32 −78   tests/test-backend-ops.cpp
+7 −13    tests/test-tokenizer-random.py
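For context, a one-line submodule rollback like this commit is typically produced with plain git commands. A minimal sketch, assuming `b3027` is a tag in the upstream llama.cpp repository and the submodule is checked out at `llama.cpp/` in the superproject:

```sh
# Inside the superproject checkout: move the submodule to the b3027 tag.
cd llama.cpp
git fetch --tags origin
git checkout b3027

# Back in the superproject, stage the new submodule pointer and commit it.
# This is the single-line change the diffstat above reflects.
cd ..
git add llama.cpp
git commit -m "chore: switch back llama.cpp to b3027"
```

The superproject only records the submodule's commit hash, which is why reverting 21 files of upstream changes still shows up as 1 addition and 1 deletion here.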