v0.13.1
⚠️ Notice
- This is a patch release; please also check the full release notes for 0.13.
🧰 Fixes and Improvements
- Bumped llama.cpp to version b3334, adding support for the DeepSeek V2 series of models.
- Enabled flash attention for the Qwen2-1.5B model to fix a quantization error.
- Properly set the number of GPU layers to zero when the selected device is CPU.
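The last fix amounts to forcing zero GPU offload on CPU-only setups. A minimal sketch of that logic (a hypothetical helper, not the project's actual code; the function and parameter names are assumptions):

```python
def resolve_gpu_layers(device: str, requested_layers: int) -> int:
    """Return how many model layers to offload to the GPU.

    Hypothetical illustration of the fix: on a CPU device, no layers
    may be offloaded, so the count is clamped to zero regardless of
    what was requested; on other devices the request is honored.
    """
    if device.lower() == "cpu":
        return 0
    return requested_layers
```

Previously, a nonzero requested layer count could leak through on CPU devices; clamping it at the point where the device is resolved keeps the backend from attempting GPU offload that cannot succeed.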