I have Ollama installed, along with the qwen2.5-coder:14b model, and I created a Modelfile named modelFile:

```
FROM qwen2.5-coder:14b
PARAMETER num_ctx 32768
```

Ollama is running on its default port.

Q1: Why is it requesting localhost:1234/v1/models?

After starting the server and opening a chat, it shows the error:

There was an error processing your request: An error occurred.

Link to the Bolt URL that caused the error: http://localhost:3000/chat/8

Searching the code, I found where port 1234 comes from:

```ts
async function getLMStudioModels(
  _apiKeys?: Record<string, string>,
  settings?: IProviderSetting,
): Promise<ModelInfo[]> {
  try {
    const baseUrl = settings?.baseUrl || import.meta.env.LMSTUDIO_API_BASE_URL || 'http://localhost:1234';
    const response = await fetch(`${baseUrl}/v1/models`);
    const data = (await response.json()) as any;

    return data.data.map((model: any) => ({
      name: model.id,
      label: model.id,
      provider: 'LMStudio',
    }));
  } catch (e: any) {
    logStore.logError('Failed to get LMStudio models', e, { baseUrl: settings?.baseUrl });
    return [];
  }
}
```

Can this be fixed?
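For context, localhost:1234 is the default port of LM Studio's local server, not Ollama's; Ollama listens on localhost:11434 by default and lists its locally installed models at /api/tags. Below is a minimal sketch of fetching that list, assuming the default port; the getOllamaModels name, the OllamaModelInfo type, and the response typing are illustrative assumptions, not code taken from bolt.diy:

```ts
// Minimal sketch: list locally installed Ollama models via its /api/tags endpoint.
// Assumes Ollama's default base URL (http://localhost:11434); names and types
// here are illustrative only.
interface OllamaModelInfo {
  name: string; // e.g. "qwen2.5-coder:14b"
  provider: string;
}

async function getOllamaModels(baseUrl = 'http://localhost:11434'): Promise<OllamaModelInfo[]> {
  try {
    const response = await fetch(`${baseUrl}/api/tags`);
    const data = (await response.json()) as { models: Array<{ name: string }> };

    return data.models.map((model) => ({
      name: model.name,
      provider: 'Ollama',
    }));
  } catch {
    // An unreachable Ollama server should not break the chat UI.
    return [];
  }
}
```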
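One possible direction for a fix, sketched purely as an illustration rather than as the project's actual solution: skip the LM Studio request unless a base URL has actually been configured, so an unused provider never probes localhost:1234. The early-return guard and the getLMStudioModelsGuarded name are assumptions; IProviderSetting, ModelInfo, and logStore are the same types and logger used by the original function quoted above:

```ts
// Sketch of a guarded version of getLMStudioModels: if neither the provider
// settings nor LMSTUDIO_API_BASE_URL supply a base URL, return early instead
// of falling back to http://localhost:1234.
async function getLMStudioModelsGuarded(
  _apiKeys?: Record<string, string>,
  settings?: IProviderSetting,
): Promise<ModelInfo[]> {
  const baseUrl = settings?.baseUrl || import.meta.env.LMSTUDIO_API_BASE_URL;

  if (!baseUrl) {
    // LM Studio was never configured, so don't probe a port nothing listens on.
    return [];
  }

  try {
    const response = await fetch(`${baseUrl}/v1/models`);
    const data = (await response.json()) as any;

    return data.data.map((model: any) => ({
      name: model.id,
      label: model.id,
      provider: 'LMStudio',
    }));
  } catch (e: any) {
    logStore.logError('Failed to get LMStudio models', e, { baseUrl });
    return [];
  }
}
```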
Platform
Provider Used
No response
Model Used
No response
Additional context
No response