Releases: Vali-98/ChatterUI
v0.8.0-beta4
- Fixed sliders not being editable via text input
- Added the Mistral template to instruct presets
- Generating text now only ever overwrites the related swipe instead of whichever swipe is currently loaded
Update v0.8.0-beta4b:
- Fixed UI issues with continuing generations
v0.8.0-beta3
Features:
- Synchronized llama.cpp - This update should allow devices that can run the 4_0_4_8 quant to also run 4_0_4_4
- Added the editing popup to the chat and character lists
- Finished up the new chat history manager
- You can now clone chats and characters
- Character search is now functional
- Sort buttons for character lists can now sort in ascending or descending order
v0.8.0-beta2
Added {{date}}, {{time}} and {{weekday}} macros, and added last_output_prefix to instruct settings.
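As a rough illustration of the new macros, prompt text could be expanded with plain string replacement at build time; the helper below is a hypothetical sketch, not ChatterUI's actual implementation.

```ts
// Hypothetical expansion of the {{date}}, {{time}} and {{weekday}} macros;
// the real implementation in ChatterUI may differ.
const expandMacros = (text: string, now: Date = new Date()): string =>
    text
        .replaceAll('{{date}}', now.toLocaleDateString())
        .replaceAll('{{time}}', now.toLocaleTimeString())
        .replaceAll('{{weekday}}', now.toLocaleDateString(undefined, { weekday: 'long' }));

// "It is {{weekday}}, {{date}}, {{time}}." -> e.g. "It is Monday, 1/6/2025, 10:30:00 AM."
console.log(expandMacros('It is {{weekday}}, {{date}}, {{time}}.'));
```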
v0.8.0-beta1
This beta features some incomplete UI changes and updated llama.cpp.
v0.7.10
This is a cumulative update for all recent changes from 0.7.9a-0.7.9g, with a few additions.
Features:
- Added the Gemma 2 instruct template to defaults
- Added DRY Sampler to koboldcpp
- Added XTC sampler to local inferencing
- Added support for Gemma 2, Nemotron, Minitron and Minitron-Width to local inferencing
- Added new supported APIs: Generic Text Completions and Cohere
- Added a default card, AI Bot
Changes:
- Changed local inferencing to properly use tokenizer from currently used model
- Changed icons to allow for adaptive and monochrome colors on Android
- Changed the default app hue to blue
- Changed chat editor modal buttons to be bigger and less finicky to press
- Changed several default sampler options to be disabled, as they were causing deterministic outputs on specific APIs
- top_k can now be set to 0, which disables it for local inferencing and several other APIs (note: some APIs will not accept 0 here); see the sketch below
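As an illustration of the top_k change, a request payload might carry top_k: 0 to disable top-k sampling and drop the field for backends that reject 0. The field names below follow common llama.cpp-style sampler settings and are assumptions, not ChatterUI's exact schema.

```ts
// Illustrative sampler settings; field names are assumed, not ChatterUI's exact schema.
interface SamplerSettings {
    temperature: number;
    top_k: number; // 0 now means "disabled" for local inferencing
    top_p: number;
}

const samplers: SamplerSettings = { temperature: 0.9, top_k: 0, top_p: 0.95 };

// Some remote APIs reject top_k = 0, so omit the field for those instead of sending 0.
const buildPayload = (s: SamplerSettings, apiAcceptsZeroTopK: boolean) => {
    const payload: Record<string, number> = { ...s };
    if (s.top_k === 0 && !apiAcceptsZeroTopK) delete payload.top_k;
    return payload;
};
```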
Fixes:
- Fixed instruct suffixes never being added
- Fixed the tokenizer calculation being very inaccurate, which caused local models to overflow their context and fall back to halving the cache, leading to massive reprocessing times. This also fixes issues with context shifting failing.
- Example messages are no longer re-added once the first message has been shifted out of context; re-adding them was causing massive reprocessing for local models (see the sketch after this list).
- Fixed issues with the TTS engine when using custom sherpa-onnx models
- Fixed specific popups to properly shift with keyboard instead of remaining in place
- Fixed initial default preset always being broken
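The example-message fix above can be pictured roughly as below; this is a hypothetical sketch of the rule with invented types, not the actual code.

```ts
// Hypothetical sketch: once the first chat message has fallen out of context,
// example messages are no longer prepended, so the cached prompt prefix stays
// stable and a full reprocess is avoided.
interface PromptParts {
    systemPrompt: string;
    exampleMessages: string[];
    chatMessages: string[]; // oldest first
}

const buildContext = (parts: PromptParts, firstMessageShiftedOut: boolean): string[] => {
    const examples = firstMessageShiftedOut ? [] : parts.exampleMessages;
    return [parts.systemPrompt, ...examples, ...parts.chatMessages];
};
```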
Dev:
- Massive refactors of several screens. Re-rendering was already fairly minimal, but this improves it further by sectioning off more components and behaviors, and by reducing zustand selectors in chat items (see the selector sketch after this list).
- Added sqlite-vec library to expo-sqlite in preparation for future embedding models
- Added fixes to cui-llama.rn for embedding support
- Finally removed a lot of deprecated screens and todos.
- Changed file structure to fix broken interactions between zustand and Fast Refresh. If you are adding a new component, be sure it is contained within the /app directory.
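On the re-rendering point, a common zustand pattern is to have each chat item subscribe to the narrowest slice it needs rather than to the whole store, so it only re-renders when its own data changes. The store shape below is invented for illustration and is not ChatterUI's actual state.

```ts
import { create } from 'zustand';

// Hypothetical chat store shape, for illustration only.
interface ChatState {
    messages: { id: string; text: string }[];
    editMessage: (id: string, text: string) => void;
}

const useChatStore = create<ChatState>((set) => ({
    messages: [],
    editMessage: (id, text) =>
        set((state) => ({
            messages: state.messages.map((m) => (m.id === id ? { ...m, text } : m)),
        })),
}));

// A chat item that selects only its own message text re-renders only when that
// text changes, not on every store update.
const useMessageText = (id: string) =>
    useChatStore((state) => state.messages.find((m) => m.id === id)?.text);
```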
v0.7.9g
v0.7.9f
Features:
- Added a default card, AI Bot. This simple card aims to give new users an example of what a card is and will automatically be generated for new installs. For existing installs, you can also generate this card from the settings menu if needed.
Fixes:
- Fixed issues with invalid TTS voice lists causing crashes.