-
Yes, it is possible to combine a query engine or `query_engine_tools` with AutoGen agents. Below is sample code demonstrating how to achieve this:

```python
from llama_index.core import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    StorageContext,
    load_index_from_storage,
)
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

# Load the indices from disk, or rebuild them if they have not been persisted yet
try:
    storage_context = StorageContext.from_defaults(persist_dir="./storage/lyft")
    lyft_index = load_index_from_storage(storage_context)
    storage_context = StorageContext.from_defaults(persist_dir="./storage/uber")
    uber_index = load_index_from_storage(storage_context)
    index_loaded = True
except Exception:
    index_loaded = False

if not index_loaded:
    # Load data
    lyft_docs = SimpleDirectoryReader(
        input_files=["./data/10k/lyft_2021.pdf"]
    ).load_data()
    uber_docs = SimpleDirectoryReader(
        input_files=["./data/10k/uber_2021.pdf"]
    ).load_data()

    # Build the indices
    lyft_index = VectorStoreIndex.from_documents(lyft_docs)
    uber_index = VectorStoreIndex.from_documents(uber_docs)

    # Persist the indices
    lyft_index.storage_context.persist(persist_dir="./storage/lyft")
    uber_index.storage_context.persist(persist_dir="./storage/uber")

# Create query engines
lyft_engine = lyft_index.as_query_engine(similarity_top_k=3)
uber_engine = uber_index.as_query_engine(similarity_top_k=3)

# Define query engine tools
query_engine_tools = [
    QueryEngineTool(
        query_engine=lyft_engine,
        metadata=ToolMetadata(
            name="lyft_10k",
            description=(
                "Provides information about Lyft financials for year 2021. "
                "Use a detailed plain text question as input to the tool."
            ),
            return_direct=False,
        ),
    ),
    QueryEngineTool(
        query_engine=uber_engine,
        metadata=ToolMetadata(
            name="uber_10k",
            description=(
                "Provides information about Uber financials for year 2021. "
                "Use a detailed plain text question as input to the tool."
            ),
            return_direct=False,
        ),
    ),
]

# Set up the ReAct agent
llm = OpenAI(model="gpt-3.5-turbo-0613")
agent = ReActAgent.from_tools(query_engine_tools, llm=llm, verbose=True)

# Example query
response = agent.chat("What was Lyft's revenue growth in 2021?")
print(str(response))
```

This code sets up a ReAct agent that uses query engine tools to answer questions about Lyft's and Uber's financials for the year 2021.

Additionally, you can explore the `MultiDocumentAgentsPack`:

```python
from llama_index.core import Document
from llama_index.packs.multi_document_agents.base import MultiDocumentAgentsPack

# Load documents
docs = [
    Document(text="Content of Lyft 2021 financial report", metadata={"title": "Lyft 2021"}),
    Document(text="Content of Uber 2021 financial report", metadata={"title": "Uber 2021"}),
]
doc_titles = ["Lyft 2021", "Uber 2021"]
doc_descriptions = ["Lyft financials for year 2021", "Uber financials for year 2021"]

# Initialize MultiDocumentAgentsPack
multi_doc_agents_pack = MultiDocumentAgentsPack(docs, doc_titles, doc_descriptions)

# Example query
response = multi_doc_agents_pack.run("What was Lyft's revenue growth in 2021?")
print(str(response))
```

This approach leverages the `MultiDocumentAgentsPack`, which builds a dedicated agent per document plus a top-level agent to route queries between them.
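To then let an AutoGen agent take the LlamaIndex output for further analysis, one simple pattern is to run the query first and hand the answer to the agent as a message. The sketch below is duck-typed and `analyze_with_agent` is a hypothetical helper, not library code: it only assumes the engine has a `.query()` method (as LlamaIndex query engines do) and the agent a `.generate_reply(messages=...)` method (the signature AutoGen's `ConversableAgent` provides).

```python
def analyze_with_agent(query_engine, agent, question):
    """Run a LlamaIndex-style query, then pass the answer to an
    AutoGen-style agent for further analysis.

    Duck-typed sketch: `query_engine` needs a .query() method and
    `agent` needs a .generate_reply(messages=...) method.
    """
    # Step 1: retrieve a grounded answer from the index
    answer = str(query_engine.query(question))
    # Step 2: forward that answer to the agent as a user message
    return agent.generate_reply(
        messages=[{"role": "user", "content": f"Analyze this finding:\n{answer}"}]
    )
```

With real objects this would be called as, e.g., `analyze_with_agent(lyft_engine, assistant, "What was Lyft's revenue growth in 2021?")`.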
-
I think you can use LLamaIndexConversableAgent, so you can try replacing the query engine with a chat engine. Their differences are mentioned here. If you don't want chat-history side effects, or the query engine's library doesn't provide a chat engine implementation, you can create a new class. A simple way to do this is to build a chat engine on top of the query engine and reset its chat history every time it is called. |
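The reset-on-every-call idea can be sketched as a thin wrapper. `StatelessChatEngine` is a hypothetical class name; the sketch only assumes the wrapped engine exposes `chat()` and `reset()` methods, as LlamaIndex chat engines do:

```python
class StatelessChatEngine:
    """Wrap a chat engine so every call starts from a clean history.

    Hypothetical sketch: assumes the wrapped engine exposes chat() and
    reset(), as LlamaIndex chat engines do.
    """

    def __init__(self, chat_engine):
        self._engine = chat_engine

    def chat(self, message):
        self._engine.reset()  # drop any accumulated history first
        return self._engine.chat(message)
```

Each call then behaves like a one-shot query while still going through the chat-engine interface.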
-
I did see a notebook about AutoGen + LlamaIndex. However, I use query_engine for most of my analysis. Is it possible to combine query_engine output with AutoGen and allow an AutoGen agent to take my LlamaIndex output for further analysis? Thoughts or sample code would be greatly appreciated.
https://microsoft.github.io/autogen/docs/notebooks/agentchat_group_chat_with_llamaindex_agents