Merge branch 'main' into patch-2
rio2dev authored Dec 13, 2024
2 parents f60b54f + f150355 commit bd524e2
Showing 102 changed files with 1,326 additions and 447 deletions.
3 changes: 2 additions & 1 deletion .env.example
@@ -119,6 +119,7 @@ BINGAI_TOKEN=user_provided
# BEDROCK_AWS_DEFAULT_REGION=us-east-1 # A default region must be provided
# BEDROCK_AWS_ACCESS_KEY_ID=someAccessKey
# BEDROCK_AWS_SECRET_ACCESS_KEY=someSecretAccessKey
# BEDROCK_AWS_SESSION_TOKEN=someSessionToken

# Note: This example list is not meant to be exhaustive. If omitted, all known, supported model IDs will be included for you.
# BEDROCK_AWS_MODELS=anthropic.claude-3-5-sonnet-20240620-v1:0,meta.llama3-1-8b-instruct-v1:0
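
For context on the new `BEDROCK_AWS_SESSION_TOKEN` line: a session token accompanies temporary (STS-issued) AWS credentials. The following is a hedged sketch only, not LibreChat's actual wiring, showing how such a token typically rides along with the key pair in the AWS SDK for JavaScript v3:

```js
// Illustrative sketch (assumed usage, not taken from this repository):
// temporary AWS credentials pair an access key and secret with a session token.
const { BedrockRuntimeClient } = require('@aws-sdk/client-bedrock-runtime');

const client = new BedrockRuntimeClient({
  region: process.env.BEDROCK_AWS_DEFAULT_REGION,
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY,
    // Only present for temporary credentials; omit for long-lived IAM keys.
    sessionToken: process.env.BEDROCK_AWS_SESSION_TOKEN,
  },
});
```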
@@ -140,7 +141,7 @@ GOOGLE_KEY=user_provided
# GOOGLE_REVERSE_PROXY=

# Gemini API (AI Studio)
# GOOGLE_MODELS=gemini-exp-1121,gemini-exp-1114,gemini-1.5-flash-latest,gemini-1.0-pro,gemini-1.0-pro-001,gemini-1.0-pro-latest,gemini-1.0-pro-vision-latest,gemini-1.5-pro-latest,gemini-pro,gemini-pro-vision
# GOOGLE_MODELS=gemini-2.0-flash-exp,gemini-exp-1121,gemini-exp-1114,gemini-1.5-flash-latest,gemini-1.0-pro,gemini-1.0-pro-001,gemini-1.0-pro-latest,gemini-1.0-pro-vision-latest,gemini-1.5-pro-latest,gemini-pro,gemini-pro-vision

# Vertex AI
# GOOGLE_MODELS=gemini-1.5-flash-preview-0514,gemini-1.5-pro-preview-0514,gemini-1.0-pro-vision-001,gemini-1.0-pro-002,gemini-1.0-pro-001,gemini-pro-vision,gemini-1.0-pro
3 changes: 2 additions & 1 deletion .vscode/launch.json
@@ -10,7 +10,8 @@
"env": {
"NODE_ENV": "production"
},
"console": "integratedTerminal"
"console": "integratedTerminal",
"envFile": "${workspaceFolder}/.env"
}
]
}
99 changes: 66 additions & 33 deletions README.md
@@ -38,42 +38,75 @@
</a>
</p>

# 📃 Features

- 🖥️ UI matching ChatGPT, including Dark mode, Streaming, and latest updates
- 🤖 AI model selection:
- Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, BingAI, ChatGPT, Google Vertex AI, Plugins, Assistants API (including Azure Assistants)
- ✅ Compatible across both **[Remote & Local AI services](https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints):**
- groq, Ollama, Cohere, Mistral AI, Apple MLX, koboldcpp, OpenRouter, together.ai, Perplexity, ShuttleAI, and more
- 🪄 Generative UI with **[Code Artifacts](https://youtu.be/GfTj7O4gmd0?si=WJbdnemZpJzBrJo3)**
- Create React, HTML code, and Mermaid diagrams right in chat
- 💾 Create, Save, & Share Custom Presets
- 🔀 Switch between AI Endpoints and Presets, mid-chat
- 🔄 Edit, Resubmit, and Continue Messages with Conversation branching
- 🌿 Fork Messages & Conversations for Advanced Context control
- 💬 Multimodal Chat:
- Upload and analyze images with Claude 3, GPT-4 (including `gpt-4o` and `gpt-4o-mini`), and Gemini Vision 📸
- Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, & Google. 🗃️
- Advanced Agents with Files, Code Interpreter, Tools, and API Actions 🔦
- Available through the [OpenAI Assistants API](https://platform.openai.com/docs/assistants/overview) 🌤️
- Non-OpenAI Agents in Active Development 🚧
- 🌎 Multilingual UI:
- English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro,
# ✨ Features

- 🖥️ **UI & Experience** inspired by ChatGPT with enhanced design and features

- 🤖 **AI Model Selection**:
- Anthropic (Claude), AWS Bedrock, OpenAI, Azure OpenAI, Google, Vertex AI, OpenAI Assistants API (incl. Azure)
- [Custom Endpoints](https://www.librechat.ai/docs/quick_start/custom_endpoints): Use any OpenAI-compatible API with LibreChat, no proxy required
- Compatible with [Local & Remote AI Providers](https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints):
- Ollama, groq, Cohere, Mistral AI, Apple MLX, koboldcpp, together.ai,
- OpenRouter, Perplexity, ShuttleAI, Deepseek, Qwen, and more

- 🔧 **[Code Interpreter API](https://www.librechat.ai/docs/features/code_interpreter)**:
- Secure, Sandboxed Execution in Python, Node.js (JS/TS), Go, C/C++, Java, PHP, Rust, and Fortran
- Seamless File Handling: Upload, process, and download files directly
- No Privacy Concerns: Fully isolated and secure execution

- 🔦 **Agents & Tools Integration**:
- **[LibreChat Agents](https://www.librechat.ai/docs/features/agents)**:
- No-Code Custom Assistants: Build specialized, AI-driven helpers without coding
- Flexible & Extensible: Attach tools like DALL-E-3, file search, code execution, and more
- Compatible with Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, and more
- Use LibreChat Agents and OpenAI Assistants with Files, Code Interpreter, Tools, and API Actions

- 🪄 **Generative UI with Code Artifacts**:
- [Code Artifacts](https://youtu.be/GfTj7O4gmd0?si=WJbdnemZpJzBrJo3) allow creation of React, HTML, and Mermaid diagrams directly in chat

- 💾 **Presets & Context Management**:
- Create, Save, & Share Custom Presets
- Switch between AI Endpoints and Presets mid-chat
- Edit, Resubmit, and Continue Messages with Conversation branching
- [Fork Messages & Conversations](https://www.librechat.ai/docs/features/fork) for Advanced Context control

- 💬 **Multimodal & File Interactions**:
- Upload and analyze images with Claude 3, GPT-4o, o1, Llama-Vision, and Gemini 📸
- Chat with Files using Custom Endpoints, OpenAI, Azure, Anthropic, AWS Bedrock, & Google 🗃️

- 🌎 **Multilingual UI**:
- English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro
- Русский, 日本語, Svenska, 한국어, Tiếng Việt, 繁體中文, العربية, Türkçe, Nederlands, עברית
- 🎨 Customizable Dropdown & Interface: Adapts to both power users and newcomers
- 📧 Verify your email to ensure secure access
- 🗣️ Chat hands-free with Speech-to-Text and Text-to-Speech magic
- Automatically send and play Audio

- 🎨 **Customizable Interface**:
- Customizable Dropdown & Interface that adapts to both power users and newcomers

- 📧 **Secure Access**:
- Verify your email to ensure secure access

- 🗣️ **Speech & Audio**:
- Chat hands-free with Speech-to-Text and Text-to-Speech
- Automatically send and play Audio
- Supports OpenAI, Azure OpenAI, and Elevenlabs
- 📥 Import Conversations from LibreChat, ChatGPT, Chatbot UI
- 📤 Export conversations as screenshots, markdown, text, json
- 🔍 Search all messages/conversations
- 🔌 Plugins, including web access, image generation with DALL-E-3 and more
- 👥 Multi-User, Secure Authentication with Moderation and Token spend tools
- ⚙️ Configure Proxy, Reverse Proxy, Docker, & many Deployment options:

- 📥 **Import & Export Conversations**:
- Import Conversations from LibreChat, ChatGPT, Chatbot UI
- Export conversations as screenshots, markdown, text, json

- 🔍 **Search & Discovery**:
- Search all messages/conversations

- 👥 **Multi-User & Secure**:
- Multi-User, Secure Authentication with OAuth2 & Email Login Support
- Built-in Moderation, and Token spend tools

- ⚙️ **Configuration & Deployment**:
- Configure Proxy, Reverse Proxy, Docker, & many Deployment options
- Use completely local or deploy on the cloud
- 📖 Completely Open-Source & Built in Public
- 🧑‍🤝‍🧑 Community-driven development, support, and feedback

- 📖 **Open-Source & Community**:
- Completely Open-Source & Built in Public
- Community-driven development, support, and feedback

[For a thorough review of our features, see our docs here](https://docs.librechat.ai/) 📚

5 changes: 5 additions & 0 deletions api/app/clients/OpenAIClient.js
@@ -1324,6 +1324,11 @@ ${convo}
/** @type {(value: void | PromiseLike<void>) => void} */
let streamResolve;

if (this.isO1Model === true && this.azure && modelOptions.stream) {
delete modelOptions.stream;
delete modelOptions.stop;
}

if (modelOptions.stream) {
streamPromise = new Promise((resolve) => {
streamResolve = resolve;
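
The guard added in this hunk drops `stream` and `stop` for o1-series deployments on Azure, which at the time did not accept those parameters. A minimal standalone sketch of the same idea (illustrative only; the flags mirror the hunk, not a documented API):

```js
// Sketch: strip request options that Azure o1 deployments reject,
// leaving other models untouched.
function sanitizeModelOptions(modelOptions, { isO1Model, isAzure }) {
  if (isO1Model && isAzure) {
    const { stream, stop, ...rest } = modelOptions;
    return rest;
  }
  return modelOptions;
}
```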
26 changes: 21 additions & 5 deletions api/config/parsers.js
@@ -187,17 +187,33 @@ const debugTraverse = winston.format.printf(({ level, message, timestamp, ...met
});

const jsonTruncateFormat = winston.format((info) => {
const truncateLongStrings = (str, maxLength) => {
return str.length > maxLength ? str.substring(0, maxLength) + '...' : str;
};

const seen = new WeakSet();

const truncateObject = (obj) => {
if (typeof obj !== 'object' || obj === null) {
return obj;
}

// Handle circular references
if (seen.has(obj)) {
return '[Circular]';
}
seen.add(obj);

if (Array.isArray(obj)) {
return obj.map(item => truncateObject(item));
}

const newObj = {};
Object.entries(obj).forEach(([key, value]) => {
if (typeof value === 'string') {
newObj[key] = truncateLongStrings(value, 255);
} else if (Array.isArray(value)) {
newObj[key] = value.map(condenseArray);
} else if (typeof value === 'object' && value !== null) {
newObj[key] = truncateObject(value);
} else {
newObj[key] = value;
newObj[key] = truncateObject(value);
}
});
return newObj;
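
Because the rendered hunk interleaves removed and added lines, here is a reconstruction of how the truncation helper reads after the change, a sketch assembled from the hunk rather than the verbatim file:

```js
// Reconstruction: recursively truncate long strings, breaking cycles with a WeakSet.
const truncateLongStrings = (str, maxLength) =>
  str.length > maxLength ? str.substring(0, maxLength) + '...' : str;

const seen = new WeakSet();

const truncateObject = (obj) => {
  if (typeof obj !== 'object' || obj === null) {
    return obj;
  }
  // Handle circular references
  if (seen.has(obj)) {
    return '[Circular]';
  }
  seen.add(obj);

  if (Array.isArray(obj)) {
    return obj.map((item) => truncateObject(item));
  }

  const newObj = {};
  Object.entries(obj).forEach(([key, value]) => {
    newObj[key] =
      typeof value === 'string' ? truncateLongStrings(value, 255) : truncateObject(value);
  });
  return newObj;
};
```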
4 changes: 2 additions & 2 deletions api/lib/db/connectDb.js
@@ -25,9 +25,9 @@ async function connectDb() {
const disconnected = cached.conn && cached.conn?._readyState !== 1;
if (!cached.promise || disconnected) {
const opts = {
useNewUrlParser: true,
useUnifiedTopology: true,
bufferCommands: false,
// useNewUrlParser: true,
// useUnifiedTopology: true,
// bufferMaxEntries: 0,
// useFindAndModify: true,
// useCreateIndex: true
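
`useNewUrlParser` and `useUnifiedTopology` have been no-ops since Mongoose 6 (the bundled MongoDB driver always behaves that way now), so commenting them out changes nothing at runtime. A minimal sketch of the trimmed connection call, assuming a `MONGO_URI` environment variable:

```js
// Sketch: only bufferCommands remains an active option after the change.
const mongoose = require('mongoose');

async function connectDb(uri = process.env.MONGO_URI) {
  return mongoose.connect(uri, { bufferCommands: false });
}
```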
1 change: 1 addition & 0 deletions api/models/Agent.js
@@ -200,6 +200,7 @@ const getListAgents = async (searchParameter) => {
avatar: 1,
author: 1,
projectIds: 1,
description: 1,
isCollaborative: 1,
}).lean()
).map((agent) => {
4 changes: 4 additions & 0 deletions api/models/schema/assistant.js
@@ -28,6 +28,10 @@ const assistantSchema = mongoose.Schema(
},
file_ids: { type: [String], default: undefined },
actions: { type: [String], default: undefined },
append_current_datetime: {
type: Boolean,
default: false,
},
},
{
timestamps: true,
1 change: 1 addition & 0 deletions api/models/tx.js
@@ -71,6 +71,7 @@ const tokenValues = Object.assign(
/* cohere doesn't have rates for the older command models,
so this was from https://artificialanalysis.ai/models/command-light/providers */
command: { prompt: 0.38, completion: 0.38 },
'gemini-2.0': { prompt: 0, completion: 0 }, // https://ai.google.dev/pricing
'gemini-1.5': { prompt: 7, completion: 21 }, // May 2nd, 2024 pricing
gemini: { prompt: 0.5, completion: 1.5 }, // May 2nd, 2024 pricing
},
2 changes: 1 addition & 1 deletion api/package.json
@@ -56,7 +56,7 @@
"cors": "^2.8.5",
"dedent": "^1.5.3",
"dotenv": "^16.0.3",
"express": "^4.21.1",
"express": "^4.21.2",
"express-mongo-sanitize": "^2.2.0",
"express-rate-limit": "^7.4.1",
"express-session": "^1.18.1",
9 changes: 9 additions & 0 deletions api/server/controllers/EndpointController.js
@@ -27,6 +27,15 @@ async function endpointController(req, res) {
capabilities,
};
}
if (mergedConfig[EModelEndpoint.agents] && req.app.locals?.[EModelEndpoint.agents]) {
const { disableBuilder, capabilities, ..._rest } = req.app.locals[EModelEndpoint.agents];

mergedConfig[EModelEndpoint.agents] = {
...mergedConfig[EModelEndpoint.agents],
disableBuilder,
capabilities,
};
}

if (
mergedConfig[EModelEndpoint.azureAssistants] &&
31 changes: 11 additions & 20 deletions api/server/controllers/assistants/chatV1.js
@@ -1,5 +1,6 @@
const { v4 } = require('uuid');
const {
Time,
Constants,
RunStatus,
CacheKeys,
@@ -24,6 +25,7 @@ const validateAuthor = require('~/server/middleware/assistants/validateAuthor');
const { formatMessage, createVisionPrompt } = require('~/app/clients/prompts');
const { createRun, StreamRunManager } = require('~/server/services/Runs');
const { addTitle } = require('~/server/services/Endpoints/assistants');
const { createRunBody } = require('~/server/services/createRunBody');
const { getTransactions } = require('~/models/Transaction');
const checkBalance = require('~/models/checkBalance');
const { getConvo } = require('~/models/Conversation');
@@ -32,8 +34,6 @@ const { getModelMaxTokens } = require('~/utils');
const { getOpenAIClient } = require('./helpers');
const { logger } = require('~/config');

const ten_minutes = 1000 * 60 * 10;

/**
* @route POST /
* @desc Chat with an assistant
@@ -59,6 +59,7 @@ const chatV1 = async (req, res) => {
messageId: _messageId,
conversationId: convoId,
parentMessageId: _parentId = Constants.NO_PARENT,
clientTimestamp,
} = req.body;

/** @type {OpenAIClient} */
@@ -304,24 +305,14 @@ const chatV1 = async (req, res) => {
};

/** @type {CreateRunBody | undefined} */
const body = {
const body = createRunBody({
assistant_id,
model,
};

if (promptPrefix) {
body.additional_instructions = promptPrefix;
}

if (typeof endpointOption.artifactsPrompt === 'string' && endpointOption.artifactsPrompt) {
body.additional_instructions = `${body.additional_instructions ?? ''}\n${
endpointOption.artifactsPrompt
}`.trim();
}

if (instructions) {
body.instructions = instructions;
}
promptPrefix,
instructions,
endpointOption,
clientTimestamp,
});

const getRequestFileIds = async () => {
let thread_file_ids = [];
@@ -518,7 +509,7 @@ const chatV1 = async (req, res) => {
});

run_id = run.id;
await cache.set(cacheKey, `${thread_id}:${run_id}`, ten_minutes);
await cache.set(cacheKey, `${thread_id}:${run_id}`, Time.TEN_MINUTES);
sendInitialResponse();

// todo: retry logic
@@ -529,7 +520,7 @@
/** @type {{[AssistantStreamEvents.ThreadRunCreated]: (event: ThreadRunCreated) => Promise<void>}} */
const handlers = {
[AssistantStreamEvents.ThreadRunCreated]: async (event) => {
await cache.set(cacheKey, `${thread_id}:${event.data.id}`, ten_minutes);
await cache.set(cacheKey, `${thread_id}:${event.data.id}`, Time.TEN_MINUTES);
run_id = event.data.id;
sendInitialResponse();
},
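
The inline body construction removed above was factored into `createRunBody`. Reconstructed from the deleted lines (not the verbatim `~/server/services/createRunBody` module; how `clientTimestamp` is used is not visible in this hunk), the helper plausibly looks like:

```js
// Reconstruction from the removed inline logic; clientTimestamp handling omitted
// because it does not appear in the visible diff.
function createRunBody({ assistant_id, model, promptPrefix, instructions, endpointOption }) {
  const body = { assistant_id, model };

  if (promptPrefix) {
    body.additional_instructions = promptPrefix;
  }

  if (typeof endpointOption?.artifactsPrompt === 'string' && endpointOption.artifactsPrompt) {
    body.additional_instructions = `${body.additional_instructions ?? ''}\n${
      endpointOption.artifactsPrompt
    }`.trim();
  }

  if (instructions) {
    body.instructions = instructions;
  }

  return body;
}
```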