Releases: TabbyML/tabby
v0.14.0-rc.1
v0.14.0-rc.0
v0.13.1
⚠️ Notice
- This is a patch release; please also check the full release notes for 0.13.
🧰 Fixes and Improvements
- Bump llama.cpp version to b3334, supporting the DeepSeek-V2 series of models.
- Turn on fast attention for Qwen2-1.5B model to fix the quantization error.
- Properly set number of GPU layers (to zero) when device is CPU.
v0.13.1-rc.9
v0.13.1-rc.8
v0.13.1-rc.7
v0.13.1-rc.6
v0.13.1-rc.0
v0.13.0
⚠️ Notice
- WizardCoder-3B is no longer supported; for a model with a small parameter count, consider trying Qwen2-1.5B-Instruct.
🚀 Features
- Introduced a new Home page featuring the Answer Engine, which activates when the chat model is loaded.
- Enhanced the Answer Engine's context by indexing issues and pull requests.
- Supports web page crawling to further enrich the Answer Engine's context.
- Enabled navigation through various git trees in the git browser.
🧰 Fixes and Improvements
- Turn on sha256 checksum verification for model downloading.
- Added an environment variable `TABBY_HUGGINGFACE_HOST_OVERRIDE` to override `huggingface.co` with compatible mirrors (e.g., `hf-mirror.com`) for model downloading.
- Bumped `llama.cpp` version to b3166.
- Improved logging for the `llama.cpp` backend.
- Added support for triggering background jobs in the admin UI.
- Enhanced logging for backend jobs in the admin UI.
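As a minimal sketch of how the mirror override above is used: export the variable before starting the server, and subsequent model downloads go through the mirror. `hf-mirror.com` is the example mirror named in the notes; the model name in the commented command is illustrative, not a recommendation.

```shell
# Route model downloads through a compatible Hugging Face mirror.
# hf-mirror.com is the example mirror from the release notes above.
export TABBY_HUGGINGFACE_HOST_OVERRIDE=hf-mirror.com

# Then launch the server as usual, e.g. (model name is illustrative):
# tabby serve --model StarCoder-1B --device cpu
echo "downloads will use: ${TABBY_HUGGINGFACE_HOST_OVERRIDE}"
```

The override only changes the download host; model identifiers and the rest of the configuration stay the same.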
💫 New Contributors
- @TennyZhuang made their first contribution in #2355
- @woutermans made their first contribution in #2378
Full Changelog: v0.12.0...v0.13.0
v0.13.0-rc.4