Bugfix Issue 259 - WIP HARD Fix using local Ollama usage #306
base: main
Conversation
It's a nice thought, but I run Ollama on another machine, so it'd be better to keep it the way it is.
Here are my recent comments on the fix for the Ollama model. I will now double-check the faulty baseUrl, which might have been ignored if the only issue was the actual model selection.
It should now be good with whatever IP address and model you want to use.
Taking a look at this today.
Works for me in combination with removing the
You identified that *.local in .dockerignore is likely causing some Docker-related issues. Please see #329 for details, and feel free to provide feedback there.
It doesn't work for me; I still get an error, even though the connection to list my local models works just fine, and if I use it in OpenWebUI it works flawlessly.
Hi, I have a problem when I use Ollama: when I send it a message, it gives me this error: There was an error processing your request: An error occurred.
Same for me: Failed to get Ollama models: TypeError: fetch failed
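The "fetch failed" errors above usually mean the Ollama server isn't reachable from wherever the app is running (for example, inside a Docker container where 127.0.0.1 points at the container itself). As a minimal connectivity sketch, assuming Ollama's standard /api/tags endpoint and a hypothetical OLLAMA_API_BASE_URL environment variable (adjust both to your setup):

```ts
// Quick check that the Ollama server is reachable and can list local models.
// OLLAMA_API_BASE_URL is an assumed env var; the default mirrors the URL
// hardcoded in this PR.
const OLLAMA_BASE_URL = process.env.OLLAMA_API_BASE_URL ?? 'http://127.0.0.1:11434';

async function listOllamaModels(): Promise<string[]> {
  // /api/tags returns the models pulled on the local Ollama instance.
  const res = await fetch(`${OLLAMA_BASE_URL}/api/tags`);

  if (!res.ok) {
    throw new Error(`Failed to get Ollama models: HTTP ${res.status}`);
  }

  const data = (await res.json()) as { models: { name: string }[] };
  return data.models.map((m) => m.name);
}

listOllamaModels()
  .then((models) => console.log('Available models:', models))
  .catch((err) => console.error('Failed to get Ollama models:', err));
```

If this fails from inside the container but works on the host, the problem is the base URL (or Docker networking), not the model selection.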
This fixes the issue of not being able to use local Ollama, but it is a hard fix: there is still work to be done to allow the model to be chosen dynamically from the dropdown, as that logic was taken from Claude. For now the model is set to "llama3.1:8b" and the Ollama endpoint is hardcoded to 127.0.0.1:11434.
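A possible follow-up to remove the hardcoding could look like the sketch below. This is only an illustration, not code from this PR: getOllamaConfig, OLLAMA_API_BASE_URL, and OLLAMA_MODEL are hypothetical names, and the defaults mirror the values hardcoded here.

```ts
// Defaults matching the values hardcoded in this PR.
const DEFAULT_OLLAMA_URL = 'http://127.0.0.1:11434';
const DEFAULT_OLLAMA_MODEL = 'llama3.1:8b';

interface OllamaConfig {
  baseUrl: string;
  model: string;
}

// Resolves the Ollama base URL and model from the environment, or from a
// model name chosen in the UI dropdown, falling back to the hardcoded defaults.
function getOllamaConfig(selectedModel?: string): OllamaConfig {
  return {
    baseUrl: process.env.OLLAMA_API_BASE_URL ?? DEFAULT_OLLAMA_URL,
    model: selectedModel ?? process.env.OLLAMA_MODEL ?? DEFAULT_OLLAMA_MODEL,
  };
}

// Example: a model picked in the dropdown overrides the env/default values.
const config = getOllamaConfig('qwen2.5-coder:7b');
console.log(`Using ${config.model} at ${config.baseUrl}`);
```

Wiring the dropdown selection through a helper like this would let the base URL and model vary per user without touching the provider code again.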