Releases · your-papa/obsidian-Smart2Brain
1.3.0
1.2.0
1.1.1
What's Changed
- chore(github): update issue templates by @nicobrauchtgit in #90
- Add Spanish translations with ChatGPT help by @oldlastman in #93
- feat(language): add spanish to i18n file by @Leo310 in #95
- Add Turkish translation by @ege-adam in #97
- feat(language): add turkish to i18n file by @Leo310 in #98
- feat: Add French language by @flemzord in #107
- feat: Add Simplified Chinese language by @probe301 in #111
New Contributors
- @oldlastman made their first contribution in #93
- @ege-adam made their first contribution in #97
- @flemzord made their first contribution in #107
- @probe301 made their first contribution in #111
Full Changelog: 1.0.2...1.0.3
Bigger changes are on the way!
Unfortunately, we are really busy with university right now, so it may take some time. 👨‍🎓
1.0.2
Closes #81
What's Changed
- Update Documentation: OLLAMA_ORIGINS='app://obsidian.md' by @nicobrauchtgit in #82
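For context, OLLAMA_ORIGINS is the environment variable the Ollama server reads to allow cross-origin requests; setting it to app://obsidian.md lets the plugin reach the local server from inside Obsidian. A minimal connectivity check along those lines could look like the sketch below (port 11434 and the /api/tags endpoint are Ollama defaults; the helper name is illustrative and not part of the plugin):

```typescript
// Hypothetical helper (not the plugin's actual code): check whether a local
// Ollama server is reachable from Obsidian. Requests from Obsidian carry the
// origin app://obsidian.md, so the server must be started with
// OLLAMA_ORIGINS='app://obsidian.md' or the fetch is blocked by CORS.
async function checkOllamaReachable(baseUrl = "http://localhost:11434"): Promise<boolean> {
  try {
    // GET /api/tags lists the locally available models.
    const res = await fetch(`${baseUrl}/api/tags`);
    return res.ok;
  } catch (err) {
    console.warn(
      "Could not reach Ollama. If it is running, make sure it was started with " +
        "OLLAMA_ORIGINS='app://obsidian.md' so requests from Obsidian are allowed.",
      err
    );
    return false;
  }
}
```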
Full Changelog: 1.0.1...1.0.2
Release 1.0.1
What's Changed
- docs: UI Improvement Template by @nicobrauchtgit in #74
- style(onboarding): better ux and logo fix by @Leo310 in #75
Full Changelog: 1.0.0...1.0.1
1.0.0
🐙 MVP is ready!
🌟 Features
📝 Chat with your Notes
- RAG pipeline: All your notes are embedded as vectors; the notes most similar to your query are retrieved and used to generate an answer grounded in those notes (see the sketch after this list)
- Get reference links to notes: Because answers are generated from your retrieved notes, we can trace where the information comes from and cite its origin in the answer as Obsidian links
- Chat with the LLM: You can turn off answering based on your notes; answers are then generated solely from the chosen LLM’s training knowledge
- Save chats: You can save your chats and resume the conversation later
- Different chat views: You can choose between two chat views: ‘comfy’ and ‘compact’
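To make the retrieval step concrete, here is a minimal sketch of the general RAG flow described above. It is not the plugin's actual implementation; the use of Ollama's /api/embeddings endpoint, the embedding model name, and all helper names are assumptions for illustration:

```typescript
// Illustrative RAG flow: embed notes, rank them by cosine similarity to the
// query, and keep the top matches as context for the answer.
type EmbeddedNote = { path: string; embedding: number[]; text: string };

async function embed(text: string, baseUrl = "http://localhost:11434"): Promise<number[]> {
  // Ollama's embeddings endpoint; the model name is an assumption.
  const res = await fetch(`${baseUrl}/api/embeddings`, {
    method: "POST",
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  const data = await res.json();
  return data.embedding as number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function retrieve(query: string, index: EmbeddedNote[], k = 3): Promise<EmbeddedNote[]> {
  const q = await embed(query);
  return index
    .map((note) => ({ note, score: cosine(q, note.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((scored) => scored.note);
}

// The retrieved notes are passed to the LLM as context, and their paths can be
// rendered as Obsidian links ([[...]]) so the answer references its sources.
```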
🤖 Choose ANY preferred Large Language Model (LLM)
- Ollama to integrate LLMs: Ollama is a tool for running LLMs locally. Its usage is similar to Docker, but it is designed specifically for LLMs. You can use it as an interactive shell, through its REST API, or via its Python library (see the sketch after this list).
- Quickly switch between LLMs: Comfortably switch between LLMs for different purposes, for example from a model suited to scientific writing to one suited to persuasive writing.
- Use ChatGPT: Although our focus is a privacy-focused AI assistant, you can still leverage OpenAI’s models and their advanced capabilities.
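As an illustration of the REST API mentioned above, a request to a locally running Ollama server could look like the sketch below; the model names are examples and the helper is not part of the plugin:

```typescript
// Hypothetical helper: ask a locally running Ollama model a question via its
// REST API. Switching LLMs is just a matter of changing the `model` field.
async function askOllama(
  prompt: string,
  model = "llama2",
  baseUrl = "http://localhost:11434"
): Promise<string> {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  const data = await res.json();
  // With stream: false, Ollama returns the full completion in `response`.
  return data.response as string;
}

// Example: use one model for scientific writing and another for persuasive writing.
// const summary = await askOllama("Summarize my lab notes.", "llama2");
// const pitch = await askOllama("Write a persuasive intro.", "mistral");
```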
0.6.1
- style(logo): final mvp version
- refactor: vector store files in separate directory
0.6.0
What's Changed
- Modals to interact with Ollama
- New embedding model
- Some fixes
Full Changelog: 0.5.1...0.6.0
0.5.1
Please clear the plugin data and reindex your vault after updating.
What's Changed
Full Changelog: 0.5.0...0.5.1
0.5.0
- Better chat colors
- Tooltips in the quick settings drawer
- Ability to abort indexing