Commit
Initial falkordb blog (plus corrections to reasoning agent)
Signed-off-by: Mark Sze <[email protected]>
marklysze committed Dec 6, 2024
1 parent af7d69e commit f2065b4
Showing 5 changed files with 209 additions and 4 deletions.
6 changes: 3 additions & 3 deletions website/blog/2024-12-02-ReasoningAgent2/index.mdx
@@ -1,5 +1,5 @@
---
title: ReasoningAgent - Tree of Thoughts with Beam Search in AutoGen
title: ReasoningAgent - Tree of Thoughts with Beam Search in AG2
authors:
- Hk669
- skzhang1
@@ -10,13 +10,13 @@ tags: [LLM, GPT, research]
![Tree of Thoughts](img/tree-of-thoughts.png)

**TL;DR:**
* We introduce **ReasoningAgent**, an AutoGen agent that implements tree-of-thought reasoning with beam search to solve complex problems.
* We introduce **ReasoningAgent**, an AG2 agent that implements tree-of-thought reasoning with beam search to solve complex problems.
* ReasoningAgent explores multiple reasoning paths in parallel and uses a grader agent to evaluate and select the most promising paths.
* The exploration trajectory and thought tree can be saved locally for further analysis. These logs can even be saved as SFT dataset and preference dataset for DPO and PPO training.

## Introduction

Large language models (LLMs) have shown impressive capabilities in various tasks, but they can still struggle with complex reasoning problems that require exploring multiple solution paths. To address this limitation, we introduce ReasoningAgent, an AutoGen agent that implements tree-of-thought reasoning with beam search.
Large language models (LLMs) have shown impressive capabilities in various tasks, but they can still struggle with complex reasoning problems that require exploring multiple solution paths. To address this limitation, we introduce ReasoningAgent, an AG2 agent that implements tree-of-thought reasoning with beam search.

The key idea behind ReasoningAgent is to:
1. Generate multiple possible reasoning steps at each point
3 changes: 3 additions & 0 deletions website/blog/2024-12-06-FalkorDB-Structured/img/falkordb.png
199 changes: 199 additions & 0 deletions website/blog/2024-12-06-FalkorDB-Structured/index.mdx
@@ -0,0 +1,199 @@
---
title: Knowledgeable Agents with FalkorDB Graph RAG
authors:
- marklysze
- sternakt
- davorrunje
tags: [RAG, Graph RAG, Structured Outputs, swarm, nested chat]
---

![FalkorDB Web](img/falkordb.png)

**TL;DR:**
* We introduce a new ability for AG2 agents, Graph RAG with FalkorDB, providing the power of knowledge graphs
* Structured outputs, using OpenAI models, provide strict adherence to data models, improving the reliability of agentic flows
* Nested chats are now available with a Swarm

## FalkorDB Graph RAG

Commonly, RAG uses vector databases, which store information as embeddings, mathematical representations of data points. When a query is received, it's also converted into an embedding, and the vector database retrieves the most similar embeddings based on distance metrics.

Graph-based RAG, on the other hand, leverages graph databases, which represent knowledge as a network of interconnected entities and relationships. When a query is received, Graph RAG traverses the graph to find relevant information based on the query's structure and semantics.
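
To make the contrast concrete, here is a minimal, illustrative sketch of the vector-database side of that comparison (toy vectors standing in for a real embedding model): retrieval is simply a nearest-neighbour search by cosine similarity over isolated document chunks, whereas Graph RAG replaces this purely geometric lookup with traversal over explicit entities and relationships.

```python
from math import sqrt

# Toy embeddings - in a real system these would come from an embedding model
documents = {
    "Neo is played by Keanu Reeves": [0.90, 0.10, 0.30],
    "The Matrix was released in 1999": [0.20, 0.80, 0.50],
    "Trinity is played by Carrie-Anne Moss": [0.85, 0.20, 0.35],
}
query_embedding = [0.88, 0.15, 0.30]  # toy embedding of "Who acts in The Matrix?"

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Vector RAG: rank document chunks by similarity to the query embedding
ranked = sorted(documents, key=lambda d: cosine_similarity(query_embedding, documents[d]), reverse=True)
print(ranked[:2])  # the two most similar chunks, retrieved with no notion of how they relate to each other
```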

### Advantages of Graph RAG:

1. Enhanced Contextual Understanding
Graph RAG captures the relationships between entities in the knowledge graph, providing richer context for LLMs. This enables more accurate and nuanced responses compared to traditional RAG, which often retrieves isolated facts.

2. Improved Reasoning Abilities
The interconnected nature of graph databases allows Graph RAG to perform reasoning and inference over the knowledge. This is crucial for tasks requiring complex understanding and logical deductions, such as question answering and knowledge discovery.

3. Handling Complex Relationships
Graph RAG excels at representing and leveraging intricate relationships between entities, allowing it to tackle complex queries that involve multiple entities and their connections. This makes it suitable for domains with rich interconnected data, like healthcare or finance.

4. Explainable Retrieval
The graph traversal process in Graph RAG provides a clear path for understanding why specific information was retrieved (see the short sketch after this list). This transparency is valuable for visualizing, debugging, and building trust in the system's outputs.
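
As a concrete illustration of points 3 and 4, here is a small, hypothetical sketch using the `falkordb` Python client against a local database; the toy graph and query are invented for this example and are not part of the blog's workflow. Because the Cypher pattern names the exact relationships being traversed, every retrieved answer comes with an explanation of why it matched.

```python
from falkordb import FalkorDB  # assumes the falkordb Python client is installed

# Connect to a locally running FalkorDB instance (default port)
db = FalkorDB(host="localhost", port=6379)
g = db.select_graph("matrix_demo")

# Build a tiny, illustrative knowledge graph of actors and a movie
g.query(
    """
    CREATE (m:Movie {title: 'The Matrix'}),
           (:Actor {name: 'Keanu Reeves'})-[:ACTED_IN]->(m),
           (:Actor {name: 'Carrie-Anne Moss'})-[:ACTED_IN]->(m)
    """
)

# The traversal path (Actor)-[:ACTED_IN]->(Movie) is explicit, so each result
# can be explained by the relationships followed to reach it
result = g.query("MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {title: 'The Matrix'}) RETURN a.name")
for row in result.result_set:
    print(row[0])
```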

### FalkorDB Graph RAG capabilities

FalkorDB is a high-performance graph database that enables knowledge graph queries with reduced hallucinations.

In release 0.5, AG2 adds the ability to equip an agent with FalkorDB Graph RAG querying capabilities. These agents behave like any other agent in an orchestration, but they query the FalkorDB knowledge graph and return the results as their response.

An LLM is incorporated into this capability, allowing data to be classified during ingestion, queries to be optimised, and results to be provided back in natural language.

See the [FalkorDB docs](https://docs.falkordb.com/) for how to get a database set up.

Below is a simple example of creating a FalkorDB Graph RAG agent in AG2. A data file containing a web page about the movie The Matrix ([data file here](https://raw.githubusercontent.com/ag2ai/ag2/refs/heads/main/test/agentchat/contrib/graph_rag/the_matrix.txt)) is ingested into the database, and the knowledge graph is created automatically before being queried:

```python
import os
import autogen

config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")
os.environ["OPENAI_API_KEY"] = config_list[0]["api_key"]  # Utilised by the FalkorGraphQueryEngine

from autogen import ConversableAgent, UserProxyAgent
from autogen.agentchat.contrib.graph_rag.document import Document, DocumentType
from autogen.agentchat.contrib.graph_rag.falkor_graph_query_engine import FalkorGraphQueryEngine
from autogen.agentchat.contrib.graph_rag.falkor_graph_rag_capability import FalkorGraphRagCapability

# Auto generate graph schema from unstructured data
input_path = "../test/agentchat/contrib/graph_rag/the_matrix.txt"
input_documents = [Document(doctype=DocumentType.TEXT, path_or_url=input_path)]

# Create FalkorGraphQueryEngine
query_engine = FalkorGraphQueryEngine(
    name="The_Matrix_Auto",
    host="172.18.0.3",  # Change
    port=6379,          # if needed
)

# Ingest data and initialize the database
query_engine.init_db(input_doc=input_documents)

# Create a ConversableAgent
graph_rag_agent = ConversableAgent(
    name="matrix_agent",
    human_input_mode="NEVER",
)

# Associate the capability with the agent
graph_rag_capability = FalkorGraphRagCapability(query_engine)
graph_rag_capability.add_to_agent(graph_rag_agent)

# Create a user proxy agent to converse with our RAG agent
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",
)

user_proxy.initiate_chat(
    graph_rag_agent,
    message="Name a few actors who've played in 'The Matrix'",
)
```

Here's the output, showing the FalkorDB Graph RAG agent, matrix_agent, finding relevant actors and then, when queried further, confirming that it has no information about any other actors in the movie.

```bash
user_proxy (to matrix_agent):

Name a few actors who've played in 'The Matrix'
--------------------------------------------------------------------------------
matrix_agent (to user_proxy):
Keanu Reeves, Laurence Fishburne, Carrie-Anne Moss, and Hugo Weaving are a few actors who've played in 'The Matrix'.

--------------------------------------------------------------------------------
user_proxy (to matrix_agent):

Who else acted in The Matrix?

--------------------------------------------------------------------------------
matrix_agent (to user_proxy):

Based on the provided information, there is no additional data about other actors who acted in 'The Matrix' outside of Keanu Reeves, Laurence Fishburne, Carrie-Anne Moss, and Hugo Weaving.

--------------------------------------------------------------------------------
```

For a more in-depth example, [see this notebook](https://ag2ai.github.io/ag2/docs/notebooks/agentchat_swarm_graphrag_trip_planner/) where we create the Trip Planner workflow shown below.
![Trip Planner](img/tripplanner.png)

## Structured Outputs

Also featured in the Trip Planner example above, AG2 now enables your agents to respond with a structured output, aligned with a Pydantic model.

This capability enforces strict responses: the LLM returns its data in a structure that you define. This allows you to interpret and validate the information precisely, adding robustness to an LLM-based workflow.

This is available when using OpenAI models that support Structured Outputs (gpt-4o-mini, gpt-4o-2024-08-06, and later) and is set in the LLM configuration:

```python
import os

import autogen
from pydantic import BaseModel

# Here is our model
class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str

# response_format is added to our configuration
llm_config = {
    "config_list": [
        {
            "api_type": "openai",
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
            "response_format": MathReasoning,
        }
    ]
}

# This agent's responses will now be based on the MathReasoning model
assistant = autogen.AssistantAgent(
    name="Math_solver",
    llm_config=llm_config,
)
```
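
To produce a response like the sample below, you chat with the agent as usual; this is a minimal sketch, with the user proxy agent added here purely for illustration.

```python
# A simple requester that sends one message and collects the structured reply
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

result = user_proxy.initiate_chat(
    assistant,
    message="how can I solve 8x + 7 = -23",
    max_turns=1,
)
```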

A sample response to `how can I solve 8x + 7 = -23` would be:
```json
{
  "steps": [
    {
      "explanation": "To isolate the term with x, we first subtract 7 from both sides of the equation.",
      "output": "8x + 7 - 7 = -23 - 7 -> 8x = -30."
    },
    {
      "explanation": "Now that we have 8x = -30, we divide both sides by 8 to solve for x.",
      "output": "x = -30 / 8 -> x = -3.75."
    }
  ],
  "final_answer": "x = -3.75"
}
```
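
Because the reply conforms to the `MathReasoning` model, it can be validated and used as a typed object rather than parsed as free text. Below is a minimal sketch using Pydantic directly; the JSON string is just the sample response above, trimmed for brevity.

```python
from pydantic import BaseModel

class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str

# The agent's reply content is a JSON string that matches the model
reply_json = """
{
  "steps": [
    {"explanation": "Subtract 7 from both sides.", "output": "8x = -30."},
    {"explanation": "Divide both sides by 8.", "output": "x = -3.75."}
  ],
  "final_answer": "x = -3.75"
}
"""

reasoning = MathReasoning.model_validate_json(reply_json)  # raises if the structure is invalid
print(reasoning.final_answer)         # x = -3.75
print(len(reasoning.steps), "steps")  # 2 steps
```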

See the [Trip Planner](https://ag2ai.github.io/ag2/docs/notebooks/agentchat_swarm_graphrag_trip_planner/) and [Structured Output](https://ag2ai.github.io/ag2/docs/notebooks/agentchat_structured_outputs/) notebooks to start using Structured Outputs.


## Nested Chats in Swarms

Building on the capability of Swarms, AG2 now allows you to utilise a nested chat within a swarm. This lets you perform sub-tasks or tackle more complex tasks while keeping the swarm setup simple.

Additionally, a carryover configuration lets you control what information from the swarm messages is carried over to the nested chat. Options include carrying over the context from all messages, the last message only, an LLM summary of the messages, or the output of your own custom function.

See the [Swarm documentation](https://ag2ai.github.io/ag2/docs/topics/swarm#registering-handoffs-to-a-nested-chat) for more information.
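
As a rough sketch of the shape of this feature, the snippet below registers a nested chat as a handoff target inside a swarm. The class and parameter names here are an assumption based on the swarm handoff pattern and may differ from the actual API; check the Swarm documentation linked above for the exact usage.

```python
# Illustrative sketch only - exact class and parameter names may differ;
# see the Swarm documentation linked above for the actual API.
import os
from autogen import ON_CONDITION, SwarmAgent

llm_config = {"config_list": [{"api_type": "openai", "model": "gpt-4o-mini", "api_key": os.getenv("OPENAI_API_KEY")}]}

writer = SwarmAgent(name="writer", llm_config=llm_config)
reviewer = SwarmAgent(name="reviewer", llm_config=llm_config)

# A standard nested chat queue, handled by the reviewer agent
nested_review = {
    "chat_queue": [
        {
            "recipient": reviewer,
            "message": "Review the draft produced so far.",
            "summary_method": "last_msg",  # carryover/summary options control what flows back
            "max_turns": 1,
        }
    ],
}

# Hand off from the writer to the nested chat when a review is needed
writer.register_hand_off(
    hand_to=[ON_CONDITION(target=nested_review, condition="The draft is ready for review.")]
)
```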

## For Further Reading

* [Documentation about FalkorDB](https://docs.falkordb.com/)
* [FalkorDB example notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_swarm_graphrag_trip_planner.ipynb)
* [OpenAI's Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)

*Do you have interesting use cases for FalkorDB / RAG? Would you like to see more features or improvements? Please join our [Discord](https://discord.com/invite/pAbnFJrkgZ) server for discussion.*
2 changes: 1 addition & 1 deletion website/blog/authors.yml
Original file line number Diff line number Diff line change
@@ -137,7 +137,7 @@ Hk669:

marklysze:
name: Mark Sze
title: AI Freelancer
title: Software Engineer at AG2.ai
url: https://github.com/marklysze
image_url: https://github.com/marklysze.png

