Preface:
Merry Christmas and Happy New Year! This will probably be the last release for this year (aside from critical bug fixes), and I just want to say thanks to everyone who has supported the project! ChatterUI has grown from a simple curiosity to an awesome tool used by thousands, and it wouldn't have been possible without your support and feedback.
v0.8.3a
Fixes:
- SliderInput text not updating when changing Sampler presets.
- Custom API Templates incorrectly adding duplicate entries from the base OpenAI template.
v0.8.3
BREAKING CHANGE: Llama.cpp no longer supports Q4_0_4_4, Q4_0_4_8 and Q4_0_8_8. Instead, Q4_0 will automatically be aligned for ARM-optimized kernels, so use Q4_0 from now on!
This is a cumulative update for all beta features over the last month, including the major migration to Expo SDK 52 and the new React Native architecture.
This update introduces the new experimental API Manager and API Configuration Templates! These let you add APIs that ChatterUI is missing, to a degree. They are most useful for APIs that are nearly compliant with the OpenAI spec but expose extra or fewer sampler options. Configurations are shareable JSON files, so you can pass them around to add new API compatibility to ChatterUI!
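As a rough illustration, a configuration template like this would typically describe the endpoint, authentication, and which sampler fields the backend accepts. The TypeScript shape below is a hypothetical sketch; the field names are assumptions for illustration only, not ChatterUI's actual schema.

```typescript
// Hypothetical sketch of what an API configuration template could contain.
// Field names are illustrative assumptions, not ChatterUI's real schema.
interface CustomAPITemplate {
    name: string                 // display name shown in the API Manager
    endpoint: string             // base URL of the OpenAI-like backend
    chatCompletionsPath: string  // e.g. '/v1/chat/completions'
    authHeader?: string          // header used for the API key, if any
    samplers: {
        // only the fields this backend actually understands
        temperature?: boolean
        top_p?: boolean
        top_k?: boolean
        min_p?: boolean
    }
}

// Example instance that could be exported and shared as JSON:
const example: CustomAPITemplate = {
    name: 'My Local Backend',
    endpoint: 'http://192.168.0.10:8080',
    chatCompletionsPath: '/v1/chat/completions',
    authHeader: 'Authorization',
    samplers: { temperature: true, top_p: true, min_p: true },
}
```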
Known Issue:
- Softlock: If the animated ellipsis is visible during a generation and you open a menu that triggers an alert, the alert will be invisible.
Features:
- Updated to Expo SDK 52. This is mostly an under-the-hood change, but it should make the app feel a lot more responsive: some screens load much faster than before, and app startup should feel a lot quicker.
- Added a new API Manager!
- This API Manager functions similarly to the recently added Model Manager.
- If you prefer the old API Manager, simply go to Settings > Use Legacy API
- You can now have multiple connection presets to the same API type
- You can now create your own API Configuration Templates! Refer to this discussion to give feedback and suggestions.
- Added an option for landscape mode in settings. This is automatically enabled for tablets.
- Added a unique component for editing string arrays such as `stop_sequence`.
- Added a `Lock App` feature which requires user authentication via PIN/Pattern/Biometrics when starting the app (see the first sketch after this list).
    - This is disabled by default and can be enabled in the Settings menu.
- Added `Notifications` on completion, which let you know when generations are complete while the app is in the background (see the second sketch after this list).
    - This is disabled by default and can be enabled in the Settings menu.
    - You can toggle showing the message and character name in the notification, or just show a generic 'Completion completed' notification.
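For context, the sketch below shows how an app lock like this can be gated with Expo's expo-local-authentication module. It is a minimal illustration of the general approach, not ChatterUI's actual implementation.

```typescript
// Minimal sketch of an app-lock gate using expo-local-authentication.
// Illustrates the general pattern only; not ChatterUI's actual code.
import * as LocalAuthentication from 'expo-local-authentication'

export async function unlockApp(): Promise<boolean> {
    // Allow entry if the device has no biometrics/PIN enrolled at all
    const hasHardware = await LocalAuthentication.hasHardwareAsync()
    const isEnrolled = await LocalAuthentication.isEnrolledAsync()
    if (!hasHardware || !isEnrolled) return true

    // Prompts for biometrics, falling back to the device PIN/pattern
    const result = await LocalAuthentication.authenticateAsync({
        promptMessage: 'Unlock ChatterUI',
    })
    return result.success
}
```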
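Similarly, a completion notification can be posted with expo-notifications. This is a hedged sketch of the pattern rather than the app's real code; the title and body strings are placeholders.

```typescript
// Sketch of a "generation complete" notification using expo-notifications.
// Strings and trigger choice are illustrative; not ChatterUI's actual code.
import * as Notifications from 'expo-notifications'

export async function notifyCompletion(characterName?: string) {
    const { granted } = await Notifications.requestPermissionsAsync()
    if (!granted) return

    await Notifications.scheduleNotificationAsync({
        content: {
            title: 'Generation complete',
            // Optionally include the character name, mirroring the toggle above
            body: characterName
                ? `${characterName} has replied.`
                : 'Your response is ready.',
        },
        trigger: null, // a null trigger fires the notification immediately
    })
}
```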
Changes:
- Updated llama.cpp: This brings in a new feature which requantizes Q4_0 models into Q4_0_X_X for optimized ARM kernels when the model is loaded, removing the need for special quantizations.
- Dry Sampling is now available in Local Mode!
- Added a new dropdown bottom sheet. This will replace a few old dropdown menus in the future.
- The Instruct menu now uses the new dropdown and a popup menu for button functions.
- Characters and Chats will no longer have `last_modified` updated when accessed, only when edited or chatted with.
- Checkboxes have been changed to a new custom component.
- Changed the animated ellipsis shown during generations.
- Updated to a new Markdown formatter! This formatter should be far better when dealing with mixed bullet types and nested lists.
- All inferencing is now done in a background task. This fixes an issue where generations would pause during the prompt-building phase if you tabbed out of the app too quickly after starting a generation.
Fixes:
- Incorrect Gemma2 Instruct format.
- Exporting strings resulting in broken base64 files.
- Cohere API being completely broken.
- Fixed a softlock on the Character List when closing the search bar while a filter removes all characters from the list.
- Fixed a crash on startup due to importing an unsupported model.