diff --git a/.env.example b/.env.example
index 83f29aedc..32789eb40 100644
--- a/.env.example
+++ b/.env.example
@@ -32,12 +32,18 @@ OLLAMA_API_BASE_URL=
# You only need this environment variable set if you want to use OpenAI Like models
OPENAI_LIKE_API_BASE_URL=
+# You only need this environment variable set if you want to use LM Studio models
+LM_STUDIO_API_BASE_URL=
+
# You only need this environment variable set if you want to use DeepSeek models through their API
DEEPSEEK_API_KEY=
# Get your OpenAI Like API Key
OPENAI_LIKE_API_KEY=
+# Get your LM Studio API Key
+LM_STUDIO_API_KEY=
+
# Get your Mistral API Key by following these instructions -
# https://console.mistral.ai/api-keys/
# You only need this environment variable set if you want to use Mistral models
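LM Studio's bundled server is OpenAI-compatible and listens on `http://localhost:1234` by default, so the base URL only needs to be set when that default does not apply. Below is a minimal sketch of how a server-side helper could consume these two variables; the helper name, the config shape, and the fallback values are assumptions for illustration, not code from this change:

```
// Hypothetical helper — not part of this diff. Shows one way the two
// new env vars could be read server-side.
interface LMStudioConfig {
  baseURL: string;
  apiKey: string;
}

function getLMStudioConfig(env: Record<string, string | undefined>): LMStudioConfig {
  return {
    // LM Studio's local server defaults to port 1234; override via .env.
    baseURL: env.LM_STUDIO_API_BASE_URL || 'http://localhost:1234/v1',
    // LM Studio does not enforce a key locally, so any placeholder works.
    apiKey: env.LM_STUDIO_API_KEY || 'lm-studio',
  };
}
```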
diff --git a/Dockerfile b/Dockerfile
index 3b5a74cde..d88da0152 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -24,6 +24,8 @@ ARG ANTHROPIC_API_KEY
ARG OPEN_ROUTER_API_KEY
ARG GOOGLE_GENERATIVE_AI_API_KEY
ARG OLLAMA_API_BASE_URL
+ARG LM_STUDIO_API_BASE_URL
+ARG LM_STUDIO_API_KEY
ARG VITE_LOG_LEVEL=debug
ENV WRANGLER_SEND_METRICS=false \
@@ -33,6 +35,8 @@ ENV WRANGLER_SEND_METRICS=false \
OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY} \
GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY} \
OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL} \
+ LM_STUDIO_API_BASE_URL=${LM_STUDIO_API_BASE_URL} \
+ LM_STUDIO_API_KEY=${LM_STUDIO_API_KEY} \
VITE_LOG_LEVEL=${VITE_LOG_LEVEL}
# Pre-configure wrangler to disable metrics
@@ -53,6 +57,8 @@ ARG ANTHROPIC_API_KEY
ARG OPEN_ROUTER_API_KEY
ARG GOOGLE_GENERATIVE_AI_API_KEY
ARG OLLAMA_API_BASE_URL
+ARG LM_STUDIO_API_BASE_URL
+ARG LM_STUDIO_API_KEY
ARG VITE_LOG_LEVEL=debug
ENV GROQ_API_KEY=${GROQ_API_KEY} \
@@ -61,6 +67,8 @@ ENV GROQ_API_KEY=${GROQ_API_KEY} \
OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY} \
GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY} \
OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL} \
+ LM_STUDIO_API_BASE_URL=${LM_STUDIO_API_BASE_URL} \
+ LM_STUDIO_API_KEY=${LM_STUDIO_API_KEY} \
VITE_LOG_LEVEL=${VITE_LOG_LEVEL}
RUN mkdir -p ${WORKDIR}/run
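Because the `ARG`/`ENV` pair is declared in both build stages, the values have to be supplied at build time with `--build-arg`. A hedged usage example follows; the image tag is a placeholder, and `host.docker.internal` stands in for the host machine because `localhost` inside a container will not reach an LM Studio server running on the host:

```
docker build \
  --build-arg LM_STUDIO_API_BASE_URL=http://host.docker.internal:1234/v1 \
  --build-arg LM_STUDIO_API_KEY=lm-studio \
  -t bolt-any-llm .
```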
diff --git a/README.md b/README.md
index 54ae824ed..a866bd2be 100644
--- a/README.md
+++ b/README.md
@@ -25,7 +25,7 @@ This fork of Bolt.new allows you to choose the LLM that you use for each prompt!
- ⬜ **HIGH PRIORITY** - Load local projects into the app
- ⬜ **HIGH PRIORITY** - Attach images to prompts
- ⬜ **HIGH PRIORITY** - Run agents in the backend as opposed to a single model call
-- ⬜ LM Studio Integration
+- ✅ LM Studio Integration
- ⬜ Together Integration
- ⬜ Azure OpenAI API Integration
- ⬜ HuggingFace Integration
@@ -68,9 +68,9 @@ Many of you are new users to installing software from Github. If you have any in
1. Install Git from https://git-scm.com/downloads
-2. Install Node.js from https://nodejs.org/en/download/
+2. Install Node.js from https://nodejs.org/en/download/
-Pay attention to the installer notes after completion.
+Pay attention to the installer notes after completion.
On all operating systems, the path to Node.js should automatically be added to your system path. But you can check your path if you want to be sure. On Windows, you can search for "edit the system environment variables" in your system, select "Environment Variables..." once you are in the system properties, and then check for a path to Node in your "Path" system variable. On a Mac or Linux machine, it will tell you to check if /usr/local/bin is in your $PATH. To determine if /usr/local/bin is included in $PATH, open your Terminal and run:
@@ -200,7 +200,7 @@ FROM [Ollama model ID such as qwen2.5-coder:7b]
PARAMETER num_ctx 32768
```
-- Run the command:
+- Run the command:
```
ollama create -f Modelfile [your new model ID, can be whatever you want (example: qwen2.5-coder-extra-ctx:7b)]
@@ -211,7 +211,7 @@ You'll see this new model in the list of Ollama models along with all the others
## Adding New LLMs:
-To make new LLMs available to use in this version of Bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
+To make new LLMs available to use in this version of Bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
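Following the shape described above, a new LM Studio entry in MODEL_LIST would look roughly like this; the model ID and label are placeholders, and the exact TypeScript type in `app/utils/constants.ts` may differ:

```
// Illustrative entry only; the name/label/provider fields mirror the
// description above, and the model ID is a placeholder.
const exampleModel = {
  name: 'qwen2.5-coder-7b-instruct', // model ID from the provider's API
  label: 'Qwen 2.5 Coder 7B (LM Studio)', // text shown in the model dropdown
  provider: 'LMStudio', // must match the provider name used by the app
};
```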
diff --git a/app/components/chat/BaseChat.tsx b/app/components/chat/BaseChat.tsx
index e91254a17..f33114739 100644
--- a/app/components/chat/BaseChat.tsx
+++ b/app/components/chat/BaseChat.tsx
@@ -49,6 +49,9 @@ const ModelSelector = ({ model, setModel, provider, setProvider, modelList, prov
+
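Only the first added line of this hunk is shown above. As a hedged sketch of the pattern a selector with these props commonly uses — every name below beyond the props in the hunk header is an assumption, not the actual added code:

```
// Sketch only: narrow the model dropdown to the active provider.
interface ModelInfo {
  name: string;
  label: string;
  provider: string;
}

function modelsForProvider(modelList: ModelInfo[], provider: string): ModelInfo[] {
  return modelList.filter((m) => m.provider === provider);
}
```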