Releases · NeoZhangJianyu/llama.cpp
b4176
b4174
vulkan: Fix a vulkan-shaders-gen argument parsing error (#10484) vulkan-shaders-gen was not handling the --no-clean argument correctly: the previous code only parsed arguments that take a value, and --no-clean takes none, so it was silently ignored. This commit makes the parser handle valueless arguments correctly.
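The shape of the fix can be sketched as follows. This is a hypothetical illustration, not the actual vulkan-shaders-gen code: the idea is that known valueless flags (like --no-clean) are recorded as booleans, while every other option consumes the next token as its value.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch only; names and structure are assumptions,
// not the real vulkan-shaders-gen parser.
static std::map<std::string, std::string> parse_args(const std::vector<std::string> & args) {
    std::map<std::string, std::string> opts;
    for (size_t i = 0; i < args.size(); ++i) {
        const std::string & arg = args[i];
        if (arg == "--no-clean") {
            opts[arg] = "1";           // flag without a value: don't consume the next token
        } else if (i + 1 < args.size()) {
            opts[arg] = args[++i];     // option followed by its value
        }
    }
    return opts;
}
```

Under the old scheme (every argument assumed to have a value), a trailing --no-clean would either swallow the following option or be dropped entirely; special-casing valueless flags avoids both failure modes.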
b4164
Merge pull request #3 from NeoZhangJianyu/fix_win_package fix build package for 2025.0
b4158
flake.lock: Update (#10470) Flake lock file updates: • Updated input 'nixpkgs': 'github:NixOS/nixpkgs/5e4fbfb6b3de1aa2872b76d49fafc942626e2add?narHash=sha256-OZiZ3m8SCMfh3B6bfGC/Bm4x3qc1m2SVEAlkV6iY7Yg%3D' (2024-11-15) → 'github:NixOS/nixpkgs/23e89b7da85c3640bbc2173fe04f4bd114342367?narHash=sha256-y/MEyuJ5oBWrWAic/14LaIr/u5E0wRVzyYsouYY3W6w%3D' (2024-11-19) Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
b4145
vulkan: predicate max operation in soft_max shaders (#10437) Fixes #10434
b4127
sycl: Revert MUL_MAT_OP support changes (#10385)
b3943
llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745)
* refactor llama_batch_get_one
* adapt all examples
* fix simple.cpp
* fix llama_bench
* fix context shifting
* free batch before return
* use common_batch_add, reuse llama_batch in loop
* null terminated seq_id list
* fix save-load-state example
* fix perplexity
* correct token pos in llama_batch_allocr
b3942
rpc : backend refactoring (#9912)
* rpc : refactor backend — use structs for RPC request/response messages
* rpc : refactor server
b3831
Enable use of the ReBAR feature to upload buffers to the device. (#9251)
b3828
[SYCL] add missing DLL file in package (#9577)
* update oneAPI to 2024.2
* use 2024.1
Co-authored-by: arthw <[email protected]>