-
Hey @Rudra9214! How can I assist you today? The error you're encountering, `TypeError: Expected mapping type as input to ChatPromptTemplate. Received <class 'str'>.`, means the prompt template was invoked with a plain string instead of a mapping. To resolve this, ensure that when you invoke the model, you pass a dictionary with the key that matches the placeholder in your prompt. In your case, the placeholder is `{input}`:

# Invoke the model with a user question
response = few_shot_structured_llm.invoke({"input": "connect me to an agent"})
# Print the structured response
print(response)

This change should align the input with the expected format for the `ChatPromptTemplate`.
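The underlying rule mirrors ordinary template substitution: a template with a named placeholder can only be filled from a mapping of placeholder names to values. A minimal standard-library sketch, using `string.Template` as a stand-in for `ChatPromptTemplate` (this is an analogy, not LangChain's implementation):

```python
from string import Template

# A template with a named placeholder, analogous to "{input}" in the prompt
tmpl = Template("Human: $input")

# Substitution requires a mapping from placeholder names to values
print(tmpl.substitute({"input": "connect me to an agent"}))
# -> Human: connect me to an agent

# A bare string carries no placeholder name, so the template
# cannot know which slot it should fill.
```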
-
from typing import List, Dict
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
import json # Import JSON for formatting
load_dotenv()
llm = ChatOpenAI(model="gpt-4o")
# Pydantic model for a single structured response item
class ResponseItem(BaseModel):
TYPE: str = Field(description="Type of the response")
PAYLOAD: Dict[str, str] = Field(description="Payload containing question and answer")
# Pydantic model for the full response structure
class ResponseSchema(BaseModel):
responses: List[ResponseItem] = Field(description="List of responses")
# System message for the chatbot
system_prompt = """You are a highly skilled JSON creator specializing in converting user input into structured JSON formats.
Your task is to process lines of text representing test cases and return their structured JSON equivalents based on a predefined schema.
The output JSON should be formatted as follows:
[
{
"TYPE": "QNA",
"PAYLOAD": {
"QUESTION": "<user's question>",
"ANSWER": ""
}
},
{
"TYPE": "HUMAN_HANDOFF",
"PAYLOAD": {}
}
]
Here are some examples of how to structure the output:
example_user: Convert the following line to JSON: "What is the capital of France?"
example_assistant: [
{
"TYPE": "QNA",
"PAYLOAD": {
"QUESTION": "What is the capital of France?",
"ANSWER": "Paris"
}
}
]
example_user: Convert the following line to JSON: "Say hello in Spanish"
example_assistant: [
{
"TYPE": "GREETING",
"PAYLOAD": {
"QUESTION": "Say hello in Spanish",
"ANSWER": "Hola"
}
}
]
example_user: Convert the following line to JSON: "How to contact customer support?"
example_assistant: [
{
"TYPE": "HUMAN_HANDOFF",
"PAYLOAD": {
"QUESTION": "How to contact customer support?",
"ANSWER": "You can reach customer support at [email protected] or call 123-456-7890."
}
}
]
example_user: Convert the following line to JSON: "What are the features of the iPhone 14?"
example_assistant: [
{
"TYPE": "PRODUCT_INFO",
"PAYLOAD": {
"QUESTION": "What are the features of the iPhone 14?",
"ANSWER": "The iPhone 14 features a 6.1-inch display, A15 Bionic chip, and improved battery life."
}
}
]"""
prompt = ChatPromptTemplate.from_messages([("system", system_prompt), ("human", "{input}")])
# LLM with structured output
structured_llm = llm.with_structured_output(ResponseSchema)
# Combine prompt and LLM
few_shot_structured_llm = prompt | structured_llm
# Invoke the model with a user question
# Format the structured JSON response
print(few_shot_structured_llm.invoke("connect me to an agent"))
I'm getting the error below:

raise TypeError(
TypeError: Expected mapping type as input to ChatPromptTemplate. Received <class 'str'>.
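For what it's worth, the failure can be reproduced without an API key. The sketch below is a hypothetical stand-in for the type check that the prompt template performs on its input (not LangChain's actual source); it shows why `invoke("connect me to an agent")` raises while `invoke({"input": ...})` succeeds:

```python
from collections.abc import Mapping


def invoke(prompt_input):
    # Stand-in for ChatPromptTemplate's input validation:
    # the input must be a mapping of placeholder names to values.
    if not isinstance(prompt_input, Mapping):
        raise TypeError(
            "Expected mapping type as input to ChatPromptTemplate. "
            f"Received {type(prompt_input)}."
        )
    # Fill the "{input}" placeholder from the mapping
    return "Human: {input}".format(**prompt_input)


print(invoke({"input": "connect me to an agent"}))
# -> Human: connect me to an agent

# invoke("connect me to an agent")  # raises TypeError, as in the traceback
```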