Ollama, Mistral AI, etc. #7296
Merged

Conversation
haraldschilly force-pushed the support-ollama-api branch 2 times, most recently from 932140d to fda426d on February 22, 2024 15:07
haraldschilly force-pushed the support-ollama-api branch 2 times, most recently from e06b06e to 5b78197 on February 23, 2024 14:11
haraldschilly force-pushed the support-ollama-api branch from 5b78197 to b6f4c87 on February 23, 2024 14:20
haraldschilly force-pushed the support-ollama-api branch from dff1ce5 to 71140d3 on February 23, 2024 16:08
haraldschilly force-pushed the support-ollama-api branch from d3126cd to 3413932 on February 23, 2024 17:45
haraldschilly force-pushed the support-ollama-api branch from 57a72b4 to d6c763e on February 26, 2024 15:44
haraldschilly force-pushed the support-ollama-api branch 2 times, most recently from 34824c9 to fbd281f on February 27, 2024 18:38
…add ollama description, and fix bug with ai-formula (no project context)
haraldschilly force-pushed the support-ollama-api branch from fbd281f to 35abc06 on February 27, 2024 18:41
haraldschilly force-pushed the support-ollama-api branch 6 times, most recently from 37b0791 to bd8e6e1 on March 8, 2024 17:06
…etimes it is interesting
… enable/disable in course projects
…r; throw proper error when querying client/llm without an enabled LLM for a specific tag
… course project specific limitations
haraldschilly force-pushed the support-ollama-api branch 2 times, most recently from 69ca9a2 to 783ce97 on March 15, 2024 15:43
haraldschilly force-pushed the support-ollama-api branch from 783ce97 to cb66d65 on March 15, 2024 16:08
haraldschilly force-pushed the support-ollama-api branch from 206a108 to e4ed26c on March 15, 2024 18:46
haraldschilly force-pushed the support-ollama-api branch 2 times, most recently from 45d995a to 8e45e47 on March 17, 2024 13:05
haraldschilly force-pushed the support-ollama-api branch 7 times, most recently from dab80a5 to 055cbb9 on March 17, 2024 19:00
haraldschilly force-pushed the support-ollama-api branch from 055cbb9 to 28b989a on March 17, 2024 19:08
This PR got super long and I stumbled over many obstacles. However, I'm now at a point where I think it's fine to merge. I didn't do all the refactoring I wanted to do, but at least a lot of things are collected in one place now. Further details are in the first message.
Description

New models are addressed with an `ollama-[model]` prefix in the model name.

Testing

Chat with GPT-4 Turbo (deliberately bad language on my side!) and testing reply continuation. I can also select an LLM in the Slate editor, in a chat input, and in a course.
Ollama
This is the main part before this PR escalated: there is now a config parameter which contains a dict of

`{[model name]: {config params}, ... }`

The format is described in the description of that admin field and comes with some detailed checks. The main point: you can chat with any model that is available via the Ollama API, you can run several Ollama instances, and each instance can serve several models.

Configuration example
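The authoritative schema is documented in that admin field's description; as a rough illustration only (the field names below are hypothetical placeholders, not the actual schema), such a dict could look like:

```typescript
// Hypothetical shape only -- the real schema is documented in the admin field.
const ollamaConfig = {
  gemma: {
    baseUrl: "http://ollama-1.internal:11434", // which Ollama server to talk to
    model: "gemma:7b",                         // model name on that server
    display: "Gemma",                          // name shown in the UI
    desc: "Google's Gemma, served by a local Ollama instance",
    enabled: true,
  },
  mistral7b: {
    baseUrl: "http://ollama-2.internal:11434", // a second, independent instance
    model: "mistral:7b-instruct",
    display: "Mistral 7B",
    desc: "Mistral 7B instruct on a second Ollama server",
    enabled: true,
  },
};
```

Two entries pointing at two different `baseUrl` values is how "several Ollama instances, each serving several models" falls out of a single flat dict.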
With that in place, there is a @gemma chat with a displayed name and a description, and of course it streams answers like the other models.
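For context, streaming from an Ollama server boils down to reading the NDJSON stream of its `/api/chat` endpoint. A minimal standalone sketch of that endpoint (not the code path this PR adds, which goes through CoCalc's LLM client layer; `baseUrl` and `model` are placeholders):

```typescript
// Stream a chat reply from an Ollama server: POST /api/chat returns one
// JSON object per line, each carrying a chunk of the assistant message.
async function streamOllamaChat(baseUrl: string, model: string, prompt: string) {
  const res = await fetch(`${baseUrl}/api/chat`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
  if (!res.ok || res.body == null) {
    throw new Error(`ollama request failed: ${res.status}`);
  }
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    // each complete line in the buffer is one JSON object
    let idx: number;
    while ((idx = buffered.indexOf("\n")) >= 0) {
      const line = buffered.slice(0, idx).trim();
      buffered = buffered.slice(idx + 1);
      if (!line) continue;
      const chunk = JSON.parse(line);
      if (chunk.message?.content) process.stdout.write(chunk.message.content);
      if (chunk.done) return; // final object of the stream
    }
  }
}

// usage: await streamOllamaChat("http://localhost:11434", "gemma", "hello");
```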
Status

OK, the latest with this PR: everything is tested and works, but there seems to be a problem with the mistral lib itself, which modifies what the global `fetch` does. This is a bit ugly. This seems to be the root cause: mistralai/client-js#42
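To illustrate the class of problem (a generic sketch, not the mistral client's actual code): once a dependency replaces the global `fetch`, every other request in the same process silently inherits its changes.

```typescript
// Generic illustration of why patching the global fetch is problematic;
// this is NOT the mistral client's actual code.
const originalFetch = globalThis.fetch;

globalThis.fetch = async (input: RequestInfo | URL, init?: RequestInit) => {
  // The library injects its own behavior into *every* request...
  const headers = {
    ...(init?.headers as Record<string, string>),
    "x-patched": "1", // hypothetical header, for illustration
  };
  return originalFetch(input, { ...init, headers });
};

// ...so a completely unrelated request elsewhere in the app now runs
// through the library's wrapper as well -- a global side effect that is
// hard to track down from the call site.
await fetch("https://example.com/api");
```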
Checklist: