I encountered the following issues while using LM Studio and would like some assistance. I am running bolt.diy directly on WSL without using Docker.
Enabling the CORS setting
As shown in the attached image, I enabled the "CORS" option in LM Studio's server settings. After doing so, I was able to select the LLM (Large Language Model) from the dropdown menu.
API request error
When I selected LM Studio's LLM and attempted to generate code, I received the following error message:
```
There was an error processing your request: An error occurred.
```
Ollama-related warnings
Despite not using the Ollama server, I see these warning messages in the terminal:
```
WARN Constants Failed to get Ollama models: fetch failed
WARN Constants Failed to get Ollama models: fetch failed
```
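For what it's worth, my rough understanding of why this warning can appear even without Ollama installed is sketched below. This is only an illustration, not bolt.diy's actual code, and the default Ollama URL is an assumption:

```ts
// Sketch: bolt.diy builds its model list by querying each configured provider.
// If no Ollama server is listening on the default port, the fetch fails, the
// failure is logged as a warning, and the provider is skipped.
async function getOllamaModels(baseUrl = 'http://127.0.0.1:11434'): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`); // Ollama's model-list endpoint
    const data = (await res.json()) as { models: { name: string }[] };
    return data.models.map((m) => m.name);
  } catch {
    console.warn('Failed to get Ollama models: fetch failed'); // matches the warning above
    return [];
  }
}
```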
Environment Details:
WSL (Windows Subsystem for Linux): Running bolt.diy directly (without Docker).
I would appreciate any guidance on:
Why the API request to LM Studio's server fails.
Why Ollama warnings appear even though I am not using it.
Thank you for your support!
I am getting the following error on the LM Studio server:
2024-12-18 21:50:37 [ERROR] Unexpected endpoint or method. (GET /api/health). Returning 200 anyway
I suspect the URL that bolt.diy requests from LM Studio's API is different from the endpoints LM Studio actually serves.
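To narrow this down, the server can be probed directly. This is only a diagnostic sketch (not part of bolt.diy); it assumes LM Studio's default port 1234 and compares a documented endpoint with the /api/health path from the error above:

```ts
// Diagnostic sketch: check which paths LM Studio's local server actually answers.
// /v1/models is part of its OpenAI-compatible API; /api/health is the path that
// triggered the "Unexpected endpoint or method" log entry.
async function probe(base = 'http://localhost:1234') {
  for (const path of ['/v1/models', '/api/health']) {
    try {
      const res = await fetch(`${base}${path}`);
      console.log(path, '->', res.status);
    } catch (err) {
      console.log(path, '-> request failed:', err);
    }
  }
}

probe().catch(console.error);
```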
In LM Studio, try enabling "Serve on Local Network".
If that does not work, also try using the local IP address of the computer that LM Studio is running on.
In this video I used Docker for bolt.diy and Ollama, but it might still help: https://youtu.be/TMvA10zwTbI
Sorry.
I re-cloned the repository this morning.
I also enabled serving LM Studio on the local network, set that URL in .env.local, and started it up.
It now works on WSL without using Docker.
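For anyone hitting the same thing, the relevant setting is the LM Studio base URL that bolt.diy reads from .env.local. Below is a minimal connectivity check, assuming the variable is named LMSTUDIO_API_BASE_URL (verify the exact name against .env.example in your clone) and that LM Studio listens on its default port 1234:

```ts
// Sketch: confirm that the base URL configured for bolt.diy is reachable from WSL.
// Example .env.local entry (assumed variable name):
//   LMSTUDIO_API_BASE_URL=http://<lan-ip-of-the-windows-host>:1234
const baseUrl = process.env.LMSTUDIO_API_BASE_URL ?? 'http://localhost:1234';

async function checkLmStudio() {
  const res = await fetch(`${baseUrl}/v1/models`); // lists the models LM Studio has loaded
  if (!res.ok) throw new Error(`LM Studio not reachable at ${baseUrl}: HTTP ${res.status}`);
  console.log(await res.json());
}

checkLmStudio().catch(console.error);
```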
Link to the Bolt URL that caused the error
http://localhost:5173/chat/10
Steps to reproduce
I entered a prompt, but I got an error.
Expected behavior
In the debug info screenshot, the URL used for LM Studio's chat API seemed different from the one LM Studio itself shows.
Screen Recording / Screenshot
Platform
Provider Used
No response
Model Used
No response
Additional context
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
2024-12-18 20:52:33 [INFO]
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] Supported endpoints:
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/embeddings
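As a sanity check against the endpoints listed above, a direct request to the chat completions endpoint should succeed when a model is loaded. This is a generic OpenAI-compatible request sketch, bypassing bolt.diy; the model id is a placeholder and should be replaced with one returned by GET /v1/models:

```ts
// Sketch: send one chat completion request straight to LM Studio to confirm
// that the server itself responds on /v1/chat/completions.
async function chatOnce() {
  const res = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'your-loaded-model', // placeholder: use an id from GET /v1/models
      messages: [{ role: 'user', content: 'Say hello in one sentence.' }],
    }),
  });
  console.log(res.status, await res.json());
}

chatOnce().catch(console.error);
```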