Commit 663eee5

Merge remote-tracking branch 'origin/main' into geminitoolcalling

Signed-off-by: Mark Sze <[email protected]>
marklysze committed Dec 10, 2024
2 parents 49d11b3 + 9338c7a commit 663eee5
Showing 86 changed files with 9,256 additions and 421 deletions.
46 changes: 46 additions & 0 deletions .github/workflows/contrib-graph-rag-tests.yml
@@ -64,3 +64,49 @@ jobs:
         with:
           file: ./coverage.xml
           flags: unittests
+
+  GraphRagIntegrationTest-Neo4j-Llmaindex-Ubuntu:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: ["3.10", "3.11"]
+    services:
+      neo4j:
+        image: neo4j:latest
+        ports:
+          - 7687:7687
+          - 7474:7474
+        env:
+          NEO4J_AUTH: neo4j/password
+    steps:
+      - uses: actions/checkout@v4
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install packages and dependencies for all tests
+        run: |
+          python -m pip install --upgrade pip wheel
+          pip install pytest
+      - name: Install Neo4j and Llama-index when on linux
+        run: |
+          pip install -e .[neo4j_graph_rag]
+      - name: Set AUTOGEN_USE_DOCKER based on OS
+        shell: bash
+        run: |
+          echo "AUTOGEN_USE_DOCKER=False" >> $GITHUB_ENV
+      - name: Coverage
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
+          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
+          OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
+        run: |
+          pip install pytest-cov>=5
+          pytest test/agentchat/contrib/graph_rag/test_neo4j_graph_rag.py --skip-openai
+      - name: Upload coverage to Codecov
+        uses: codecov/codecov-action@v3
+        with:
+          file: ./coverage.xml
+          flags: unittests
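The service container above sets `NEO4J_AUTH: neo4j/password`, which Neo4j interprets as `<user>/<password>`. A minimal sketch of how test code might split this into a credentials tuple for a driver; the helper name and the driver call in the comment are illustrative, not part of this PR:

```python
import os

def neo4j_auth(auth: str = None) -> tuple:
    # NEO4J_AUTH has the form "<user>/<password>"
    auth = auth or os.environ.get("NEO4J_AUTH", "neo4j/password")
    user, _, password = auth.partition("/")
    return user, password

# e.g. GraphDatabase.driver("bolt://localhost:7687", auth=neo4j_auth())
```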
27 changes: 13 additions & 14 deletions .github/workflows/python-package.yml
@@ -43,11 +43,21 @@ jobs:
       # - name: Conda list
       #   shell: pwsh
       #   run: conda list
-      - name: Build autogen
+      - name: Build pyautogen
        shell: pwsh
        run: |
          pip install twine
-          python setup.py sdist bdist_wheel --name "autogen"
+          python setup.py sdist bdist_wheel
+      - name: Publish pyautogen to PyPI
+        env:
+          TWINE_USERNAME: ${{ secrets.PYAUTOGEN_PYPI_USERNAME }}
+          TWINE_PASSWORD: ${{ secrets.PYAUTOGEN_PYPI_PASSWORD }}
+        shell: pwsh
+        run: twine upload dist/*pyautogen*
+      - name: Build autogen
+        shell: pwsh
+        run: |
+          python setup_autogen.py sdist bdist_wheel
       - name: Publish autogen to PyPI
        env:
          TWINE_USERNAME: ${{ secrets.AUTOGEN_PYPI_USERNAME }}
@@ -57,21 +67,10 @@ jobs:
       - name: Build ag2
        shell: pwsh
        run: |
-          pip install twine
-          python setup.py sdist bdist_wheel --name "ag2"
+          python setup_ag2.py sdist bdist_wheel
       - name: Publish ag2 to PyPI
        env:
          TWINE_USERNAME: ${{ secrets.AUTOGEN_PYPI_USERNAME }}
          TWINE_PASSWORD: ${{ secrets.AUTOGEN_PYPI_PASSWORD }}
        shell: pwsh
        run: twine upload dist/ag2*
-      - name: Build pyautogen
-        shell: pwsh
-        run: |
-          python setup.py sdist bdist_wheel --name "pyautogen"
-      - name: Publish pyautogen to PyPI
-        env:
-          TWINE_USERNAME: ${{ secrets.PYAUTOGEN_PYPI_USERNAME }}
-          TWINE_PASSWORD: ${{ secrets.PYAUTOGEN_PYPI_PASSWORD }}
-        shell: pwsh
-        run: twine upload dist/*pyautogen*
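The upload steps rely on shell globbing to pick the right artifacts out of a shared `dist/` directory: `dist/ag2*` and `dist/*pyautogen*` select disjoint sets of files, so each package's publish step only uploads its own build. A quick sketch of the matching (the file names are illustrative):

```python
from fnmatch import fnmatch

# Hypothetical contents of dist/ after all three builds
dist = ["ag2-0.3.2.tar.gz", "pyautogen-0.3.2.tar.gz", "autogen-0.3.2.tar.gz"]

# Same patterns as the twine upload steps, minus the dist/ prefix
ag2_files = [f for f in dist if fnmatch(f, "ag2*")]
pyautogen_files = [f for f in dist if fnmatch(f, "*pyautogen*")]

print(ag2_files)        # ['ag2-0.3.2.tar.gz']
print(pyautogen_files)  # ['pyautogen-0.3.2.tar.gz']
```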
5 changes: 4 additions & 1 deletion .pre-commit-config.yaml
@@ -47,7 +47,10 @@ repos:
         website/docs/tutorial/code-executors.ipynb |
         website/docs/topics/code-execution/custom-executor.ipynb |
         website/docs/topics/non-openai-models/cloud-gemini.ipynb |
-        notebook/.*
+        notebook/.* |
+        test/agentchat/contrib/graph_rag/trip_planner_data/.* |
+        test/agentchat/contrib/graph_rag/paul_graham_essay.txt
       )$
   # See https://jaredkhan.com/blog/mypy-pre-commit
   - repo: local
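The exclude block above is a verbose (multi-line) regex; the new entries keep the Graph RAG test fixtures out of the hook. A small sketch of how this style of exclusion matches paths, abridged to the entries touched here:

```python
import re

# Abridged version of the exclude pattern, covering only the entries shown above
exclude = re.compile(
    r"""(?x)^(
        notebook/.* |
        test/agentchat/contrib/graph_rag/trip_planner_data/.* |
        test/agentchat/contrib/graph_rag/paul_graham_essay.txt
    )$"""
)

# Excluded fixture vs. regular source file
print(bool(exclude.match("test/agentchat/contrib/graph_rag/paul_graham_essay.txt")))  # True
print(bool(exclude.match("autogen/agentchat/assistant_agent.py")))                    # False
```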
4 changes: 3 additions & 1 deletion MAINTAINERS.md
@@ -21,13 +21,15 @@
 | Rudy Wu | [rudyalways](https://github.com/rudyalways) | Google | all, group chats, sequential chats |
 | Haiyang Li | [ohdearquant](https://github.com/ohdearquant) | - | all, sequential chats, structured output, low-level|
 | Eric Moore | [emooreatx](https://github.com/emooreatx) | IBM | all|
+| Evan David | [evandavid1](https://github.com/evandavid1) | - | all |
+| Tvrtko Sternak | [sternakt](https://github.com/sternakt) | airt.ai | structured output |


 **Pending Maintainers list (Marked with \*, Waiting for explicit approval from the maintainers)**
 | Name | GitHub Handle | Organization | Features |
 |-----------------|------------------------------------------------------------|------------------------|-----------------------------------------|
 | Olaoluwa Ademola Salami * | [olaoluwasalami](https://github.com/olaoluwasalami) | DevOps Engineer | |
 | Rajan Chari * | [rajan-chari](https://github.com/rajan-chari) | Microsoft Research | CAP |
-| Evan David * | [evandavid1](https://github.com/evandavid1) | - | gpt assistant, group chat, rag, autobuild |

 ## I would like to join this list. How can I help the project?
 > We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://ag2ai.github.io/ag2/docs/contributor-guide/contributing) guide.
12 changes: 2 additions & 10 deletions README.md
@@ -25,19 +25,11 @@
 > We invite collaborators from all organizations and individuals to join the development.

-:fire: :tada: AG2 is available via `ag2` (or its alias `autogen` or `pyautogen`) on PyPI! Starting with version 0.3.2, you can now install AG2 using:
-```
-pip install ag2
-```
-or
+:fire: :tada: AG2 is available via `pyautogen` (or its alias `autogen` or `ag2`) on PyPI!

 ```
 pip install pyautogen
 ```
-or
-```
-pip install autogen
-```


 📄 **License:**
 We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-source collaboration while providing additional protections for contributors and users alike.
64 changes: 59 additions & 5 deletions autogen/agentchat/contrib/graph_rag/falkor_graph_query_engine.py
@@ -3,9 +3,10 @@
 # SPDX-License-Identifier: Apache-2.0

 import os
-from dataclasses import dataclass, field
+import warnings
 from typing import List

+from falkordb import FalkorDB, Graph
 from graphrag_sdk import KnowledgeGraph, Source
 from graphrag_sdk.model_config import KnowledgeGraphModelConfig
 from graphrag_sdk.models import GenerativeModel
@@ -35,6 +36,8 @@ def __init__(
         Initialize a FalkorDB knowledge graph.
         Please also refer to https://github.com/FalkorDB/GraphRAG-SDK/blob/main/graphrag_sdk/kg.py

+        TODO: Fix LLM API cost calculation for FalkorDB usages.
+
         Args:
             name (str): Knowledge graph name.
             host (str): FalkorDB hostname.
@@ -53,8 +56,38 @@ def __init__(
         self.model = model
         self.model_config = KnowledgeGraphModelConfig.with_model(model)
         self.ontology = ontology
+        self.knowledge_graph = None
+        self.falkordb = FalkorDB(host=self.host, port=self.port, username=self.username, password=self.password)
+
+    def connect_db(self):
+        """
+        Connect to an existing knowledge graph.
+        """
+        if self.name in self.falkordb.list_graphs():
+            try:
+                self.ontology = self._load_ontology_from_db(self.name)
+            except Exception:
+                warnings.warn("Graph Ontology is not loaded.")
+
+            if self.ontology is None:
+                raise ValueError(f"Ontology of the knowledge graph '{self.name}' can't be None.")
+
+            self.knowledge_graph = KnowledgeGraph(
+                name=self.name,
+                host=self.host,
+                port=self.port,
+                username=self.username,
+                password=self.password,
+                model_config=self.model_config,
+                ontology=self.ontology,
+            )
+
+            # Establishing a chat session will maintain the history
+            self._chat_session = self.knowledge_graph.chat_session()
+        else:
+            raise ValueError(f"Knowledge graph '{self.name}' does not exist")
+
-    def init_db(self, input_doc: List[Document] | None):
+    def init_db(self, input_doc: List[Document]):
         """
         Build the knowledge graph with input documents.
         """
@@ -81,10 +114,16 @@ def init_db(self, input_doc: List[Document] | None):
                 ontology=self.ontology,
             )

-            # Establish a chat session, this will maintain the history
-            self._chat_session = self.knowledge_graph.chat_session()
             self.knowledge_graph.process_sources(sources)

+            # Establishing a chat session will maintain the history
+            self._chat_session = self.knowledge_graph.chat_session()
+
+            # Save Ontology to graph for future access.
+            self._save_ontology_to_db(self.name, self.ontology)
+        else:
+            raise ValueError("No input documents could be loaded.")

     def add_records(self, new_records: List) -> bool:
         raise NotImplementedError("This method is not supported by FalkorDB SDK yet.")
@@ -101,11 +140,26 @@ def query(self, question: str, n_results: int = 1, **kwargs) -> GraphStoreQueryResult:
         Returns: FalkorGraphQueryResult
         """
         if self.knowledge_graph is None:
-            raise ValueError("Knowledge graph is not created.")
+            raise ValueError("Knowledge graph has not been selected or created.")

         response = self._chat_session.send_message(question)

+        # History will be considered when querying by setting the last_answer
+        self._chat_session.last_answer = response["response"]
+
         return GraphStoreQueryResult(answer=response["response"], results=[])
+
+    def __get_ontology_storage_graph(self, graph_name: str) -> Graph:
+        ontology_table_name = graph_name + "_ontology"
+        return self.falkordb.select_graph(ontology_table_name)
+
+    def _save_ontology_to_db(self, graph_name: str, ontology: Ontology):
+        """
+        Save the graph ontology to a separate graph named {graph_name}_ontology.
+        """
+        graph = self.__get_ontology_storage_graph(graph_name)
+        ontology.save_to_graph(graph)
+
+    def _load_ontology_from_db(self, graph_name: str) -> Ontology:
+        graph = self.__get_ontology_storage_graph(graph_name)
+        return Ontology.from_graph(graph)
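The new `connect_db` path hinges on a naming convention: each knowledge graph `<name>` stores its ontology in a companion graph `<name>_ontology`, so a later session can rebuild the `KnowledgeGraph` without re-ingesting documents. A stubbed sketch of that decision logic; the in-memory store below stands in for FalkorDB and nothing here calls the real SDK:

```python
class OntologyStore:
    """In-memory stand-in for FalkorDB graph storage, for illustration only."""

    def __init__(self):
        self._graphs = {}  # graph name -> stored ontology (or None)

    def save(self, graph_name, ontology):
        # Mirrors _save_ontology_to_db: the ontology lives in "<name>_ontology"
        self._graphs[graph_name] = None
        self._graphs[graph_name + "_ontology"] = ontology

    def connect(self, graph_name):
        # Mirrors connect_db: refuse unknown graphs, then load the companion ontology
        if graph_name not in self._graphs:
            raise ValueError(f"Knowledge graph '{graph_name}' does not exist")
        return self._graphs[graph_name + "_ontology"]

store = OntologyStore()
store.save("trip_planner", {"entities": ["City", "Attraction"]})
print(store.connect("trip_planner"))  # {'entities': ['City', 'Attraction']}
```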
42 changes: 33 additions & 9 deletions autogen/agentchat/contrib/graph_rag/falkor_graph_rag_capability.py
@@ -55,6 +55,8 @@ def _reply_using_falkordb_query(
         Query FalkorDB and return the message. Internally, it utilises OpenAI to generate a reply based on the given messages.
         The history with FalkorDB is also logged and updated.
+        The agent's system message will be incorporated into the query, if it's not blank.

         If no results are found, a default message is returned: "I'm sorry, I don't have an answer for that."

         Args:
@@ -66,16 +68,38 @@ def _reply_using_falkordb_query(
         Returns:
             A tuple containing a boolean indicating success and the assistant's reply.
         """
-        question = self._get_last_question(messages[-1])
+        # question = self._get_last_question(messages[-1])
+        question = self._messages_summary(messages, recipient.system_message)
         result: GraphStoreQueryResult = self.query_engine.query(question)

         return True, result.answer if result.answer else "I'm sorry, I don't have an answer for that."

-    def _get_last_question(self, message: Union[Dict, str]):
-        """Retrieves the last message from the conversation history."""
-        if isinstance(message, str):
-            return message
-        if isinstance(message, Dict):
-            if "content" in message:
-                return message["content"]
-        return None
+    def _messages_summary(self, messages: Union[Dict, str], system_message: str) -> str:
+        """Summarize the messages in the conversation history, excluding any message with 'tool_calls' or 'tool_responses'.
+        Includes the 'name' (if it exists) and the 'content', with a new line between each one, like:
+        customer:
+        <content>
+
+        agent:
+        <content>
+        """
+        if isinstance(messages, str):
+            if system_message:
+                return f"IMPORTANT: {system_message}\nContext:\n\n{messages}"
+            else:
+                return messages
+        elif isinstance(messages, List):
+            summary = ""
+            for message in messages:
+                if "content" in message and "tool_calls" not in message and "tool_responses" not in message:
+                    summary += f"{message.get('name', '')}: {message.get('content', '')}\n\n"
+            if system_message:
+                summary = f"IMPORTANT: {system_message}\nContext:\n\n{summary}"
+            return summary
+        else:
+            raise ValueError("Invalid messages format. Must be a list of messages or a string.")
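A standalone sketch of the same transformation, showing the output shape the query engine receives; the names and message contents are illustrative:

```python
def messages_summary(messages, system_message=""):
    """Concatenate name/content pairs, skipping tool call/response messages."""
    if isinstance(messages, str):
        summary = messages
    else:
        summary = ""
        for message in messages:
            # Skip any message carrying tool calls or tool responses
            if "content" in message and "tool_calls" not in message and "tool_responses" not in message:
                summary += f"{message.get('name', '')}: {message.get('content', '')}\n\n"
    if system_message:
        summary = f"IMPORTANT: {system_message}\nContext:\n\n{summary}"
    return summary

msgs = [
    {"name": "customer", "content": "Plan a 3-day trip to Rome."},
    {"name": "agent", "tool_calls": [], "content": "..."},  # skipped
    {"name": "agent", "content": "Day 1: Colosseum."},
]
print(messages_summary(msgs, "You are a trip planner."))
```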