**Example Code**

```python
from langchain_mistralai.chat_models import ChatMistralAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser

llm = ChatMistralAI(mistral_api_key=mistral_api_key, model="mistral-large-2411")

system_message = "..."
human_message = "..."

context = ChatPromptTemplate.from_messages([
    ("system", system_message),  # prompt
    ("human", human_message)     # user query
])

chain = (
    context
    | llm
    | StrOutputParser()
)

response = ""
for chunk in chain.stream({"para": para}):
    response += chunk
```

**Description**

I've noticed that Mistral has changed their prompt format in version 2411.

Previous format (2407):

```
<s>[INST] user message[/INST] assistant message</s>[INST] system prompt + "\n\n" + user message[/INST]
```

New format (2411):

```
<s>[SYSTEM_PROMPT] system prompt[/SYSTEM_PROMPT][INST] user message[/INST] assistant message</s>[INST] user message[/INST]
```

Currently, I'm using LangChain's ChatPromptTemplate with Mistral (please refer to my sample code).

Questions:
**Additional Context**

**What I've Tried**

**Expected Behavior**

Clear guidance on:

**System Info**
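For illustration, the raw 2411 serialization quoted in the description can be assembled like this. This is a sketch of the quoted format only; in practice the Mistral API applies its chat template server-side from role/content messages, and the helper name is purely illustrative:

```python
# Sketch only: assembles the raw 2411-style prompt string exactly as
# quoted above. The Mistral API builds this server-side from the
# role/content messages; you never construct it by hand.
def build_2411_prompt(system_prompt, user_message):
    return (
        f"<s>[SYSTEM_PROMPT] {system_prompt}[/SYSTEM_PROMPT]"
        f"[INST] {user_message}[/INST]"
    )

prompt = build_2411_prompt("You are a helpful assistant.", "Summarize this paragraph.")
print(prompt)
```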
---

I'm following up on my own question with our findings after extensive testing and implementation in production.

**TL;DR:** The current LangChain implementation works correctly with Mistral-Large-2411 through its API abstraction layer. No changes are needed in the code structure.

**Detailed Findings**

Through our testing with a custom HTTP client interceptor, we confirmed that LangChain correctly handles the new prompt format when interacting with Mistral's API. The messages are properly formatted as:
```json
{
  "messages": [
    {
      "role": "system",
      "content": "system_message"
    },
    {
      "role": "user",
      "content": "user_message"
    }
  ],
  "model": "mistral-large-2411"
}
```
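The mapping from LangChain's `(role, content)` tuples to that payload can be sketched in plain Python. This is a minimal stdlib illustration of the mapping, not LangChain's internal code; the helper and alias table are assumptions for demonstration ("human" is LangChain's alias for the API role "user"):

```python
import json

# Illustrative role aliases: LangChain's "human"/"ai" map to the
# API roles "user"/"assistant" (assumption for this sketch).
ROLE_ALIASES = {"system": "system", "human": "user", "ai": "assistant"}

def to_api_payload(messages, model="mistral-large-2411"):
    """Hypothetical helper mirroring how chat tuples become the API payload."""
    return {
        "messages": [
            {"role": ROLE_ALIASES.get(role, role), "content": content}
            for role, content in messages
        ],
        "model": model,
    }

payload = to_api_payload([
    ("system", "system_message"),
    ("human", "user_message"),
])
print(json.dumps(payload, indent=2))
```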
The `ChatPromptTemplate` code structure therefore remains unchanged:

```python
context = ChatPromptTemplate.from_messages([
    ("system", system_message),
    ("human", human_message)
])
```
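The "custom HTTP client interceptor" we used for verification follows a simple wrap-and-record pattern, sketched here with illustrative names (this is not a LangChain or httpx API; in our tests we wrapped the real transport the same way):

```python
# Generic interceptor sketch: wrap whatever callable actually sends the
# HTTP request so every JSON payload is recorded before being forwarded.
# This pattern is how we confirmed the exact messages sent to Mistral.
class InterceptingClient:
    def __init__(self, inner_send):
        self.inner_send = inner_send
        self.captured = []  # every payload that passed through

    def send(self, payload):
        self.captured.append(payload)    # record for later inspection
        return self.inner_send(payload)  # forward unchanged

# Stand-in transport returning a canned response instead of hitting the API.
def fake_send(payload):
    return {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}

client = InterceptingClient(fake_send)
client.send({
    "model": "mistral-large-2411",
    "messages": [
        {"role": "system", "content": "system_message"},
        {"role": "user", "content": "user_message"},
    ],
})
print(client.captured[0]["model"])
```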
I've documented our complete technical journey and findings here: Medium article link

Marking this as resolved since we've confirmed the compatibility and optimal implementation approach. 🎉