
Commit

Merge remote-tracking branch 'origin/main' into telemetryphase1
Signed-off-by: Mark Sze <[email protected]>
marklysze committed Dec 27, 2024
2 parents cd0251a + 6d888b2 commit 14afa3b
Showing 211 changed files with 979 additions and 816 deletions.
2 changes: 1 addition & 1 deletion .devcontainer/README.md
@@ -26,7 +26,7 @@ These configurations can be used with Codespaces and locally.
- **Usage**: Recommended for developers who are contributing to the AutoGen project.
- **Building the Image**: Run `docker build -f dev/Dockerfile -t ag2_dev_img .`.
- **Using with Codespaces**: `Code > Codespaces > Click on ...> New with options > Choose "dev" as devcontainer configuration`. This image may require a Codespace with at least 64GB of disk space.
- **Before using**: We highly encourage all potential contributors to read the [AutoGen Contributing](https://ag2ai.github.io/ag2/docs/Contribute) page prior to submitting any pull requests.
- **Before using**: We highly encourage all potential contributors to read the [AutoGen Contributing](https://docs.ag2.ai/docs/contributor-guide/contributing) page prior to submitting any pull requests.


## Customizing Dockerfiles
7 changes: 3 additions & 4 deletions .devcontainer/dev/Dockerfile
@@ -30,11 +30,10 @@ RUN sudo pip install --upgrade pip && \
# Install pre-commit hooks
RUN pre-commit install

# Setup Docusaurus and Yarn for the documentation website
RUN sudo npm install --global yarn
RUN sudo pip install pydoc-markdown
# Setup Mintlify for the documentation website
RUN sudo pip install pydoc-markdown pyyaml termcolor nbclient
RUN cd website
RUN yarn install --frozen-lockfile --ignore-engines
RUN npm install

RUN arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) && \
wget -q https://github.com/quarto-dev/quarto-cli/releases/download/v1.5.23/quarto-1.5.23-linux-${arch}.tar.gz && \
4 changes: 2 additions & 2 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -1,4 +1,4 @@
<!-- Thank you for your contribution! Please review https://ag2ai.github.io/ag2/docs/Contribute before opening a pull request. -->
<!-- Thank you for your contribution! Please review https://docs.ag2.ai/docs/contributor-guide/contributing before opening a pull request. -->

<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->

@@ -12,6 +12,6 @@

## Checks

- [ ] I've included any doc changes needed for https://ag2ai.github.io/ag2/. See https://ag2ai.github.io/ag2/docs/Contribute#documentation to build and test documentation locally.
- [ ] I've included any doc changes needed for https://docs.ag2.ai/. See https://docs.ag2.ai/docs/contributor-guide/documentation to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [ ] I've made sure all auto checks have passed.
4 changes: 2 additions & 2 deletions .github/workflows/deploy-website-mintlify.yml
@@ -40,7 +40,7 @@ jobs:
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install pydoc-markdown pyyaml termcolor nbconvert
pip install pydoc-markdown pyyaml termcolor nbclient
# Pin databind packages as version 4.5.0 is not compatible with pydoc-markdown.
pip install databind.core==4.4.2 databind.json==4.4.2
@@ -86,7 +86,7 @@ jobs:
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install pydoc-markdown pyyaml termcolor nbconvert
pip install pydoc-markdown pyyaml termcolor nbclient
# Pin databind packages as version 4.5.0 is not compatible with pydoc-markdown.
pip install databind.core==4.4.2 databind.json==4.4.2
2 changes: 1 addition & 1 deletion MAINTAINERS.md
@@ -32,7 +32,7 @@
| Rajan Chari * | [rajan-chari](https://github.com/rajan-chari) | Microsoft Research | CAP |

## I would like to join this list. How can I help the project?
> We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://ag2ai.github.io/ag2/docs/contributor-guide/contributing) guide.
> We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://docs.ag2.ai/docs/contributor-guide/contributing) guide.

## Are you missing from this list?
32 changes: 16 additions & 16 deletions README.md
@@ -43,7 +43,7 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-

:tada: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/).

<!-- :tada: Apr 26, 2024: [AutoGen.NET](https://ag2ai.github.io/ag2-for-net/) is available for .NET developers! -->
<!-- :tada: Apr 26, 2024: [AutoGen.NET](https://docs.ag2.ai/ag2-for-net/) is available for .NET developers! -->

:tada: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26).

@@ -55,7 +55,7 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-

:tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023).

<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/ag2ai/ag2/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://ag2ai.github.io/ag2/docs/Installation#python). -->
<!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/ag2ai/ag2/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://docs.ag2.ai/docs/installation/Installation). -->

<!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperable with other AutoGen agents! Check out our [blogpost](https://docs.ag2.ai/blog/2023-11-13-OAI-assistants) for details and examples. -->

@@ -74,7 +74,7 @@ We adopt the Apache 2.0 license from v0.3. This enhances our commitment to open-
<!--
:fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
:fire: [autogen](https://ag2ai.github.io/ag2/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
:fire: [autogen](https://docs.ag2.ai/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
:fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/). -->

@@ -117,11 +117,11 @@ The easiest way to start playing is
</a>
</p>

## [Installation](https://ag2ai.github.io/ag2/docs/Installation)
## [Installation](https://docs.ag2.ai/docs/installation/Installation)

### Option 1. Install and Run AG2 in Docker

Find detailed instructions for users [here](https://ag2ai.github.io/ag2/docs/installation/Docker#step-1-install-docker), and for developers [here](https://ag2ai.github.io/ag2/docs/Contribute#docker-for-development).
Find detailed instructions for users [here](https://docs.ag2.ai/docs/installation/Docker#step-1-install-docker), and for developers [here](https://docs.ag2.ai/docs/contributor-guide/docker).

### Option 2. Install AG2 Locally

@@ -138,13 +138,13 @@ Minimal dependencies are installed without extra options. You can install extra
pip install "autogen[blendsearch]"
``` -->

Find more options in [Installation](https://ag2ai.github.io/ag2/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).
Find more options in [Installation](https://docs.ag2.ai/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).

<!-- Each of the [`notebook examples`](https://github.com/ag2ai/ag2/tree/main/notebook) may require a specific option to be installed. -->

Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://ag2ai.github.io/ag2/docs/FAQ/#code-execution) in docker. Find more instructions and how to change the default behaviour [here](https://ag2ai.github.io/ag2/docs/Installation#code-execution-with-docker-(default)).
Even if you are installing and running AG2 locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-in-docker) in docker. Find more instructions and how to change the default behaviour [here](https://docs.ag2.ai/docs/FAQ#if-you-want-to-run-code-execution-locally).

For LLM inference configurations, check the [FAQs](https://ag2ai.github.io/ag2/docs/FAQ#set-your-api-endpoints).
For LLM inference configurations, check the [FAQs](https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints).

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -154,7 +154,7 @@

## Multi-Agent Conversation Framework

AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://ag2ai.github.io/ag2/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
AG2 enables the next-gen LLM applications with a generic [multi-agent conversation](https://docs.ag2.ai/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:
@@ -168,7 +168,7 @@ For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# Load LLM inference endpoints from an env variable or a file
# See https://ag2ai.github.io/ag2/docs/FAQ#set-your-api-endpoints
# See https://docs.ag2.ai/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
# You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4o', 'api_key': '<your OpenAI API key here>'},]
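# A minimal illustrative continuation of this two-agent flow (a sketch only; the
# linked test/twoagent.py holds the full example):
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding", "use_docker": False},  # set True to execute generated code in Docker
)
# The user proxy sends the task and runs any code block the assistant writes back.
user_proxy.initiate_chat(assistant, message="Write and run Python code that prints the first 10 Fibonacci numbers.")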
@@ -191,7 +191,7 @@ The figure below shows an example conversation flow with AG2.


Alternatively, the [sample code](https://github.com/ag2ai/build-with-ag2/blob/main/samples/simple_chat.py) here allows a user to chat with an AG2 agent in ChatGPT style.
Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#automated-multi-agent-chat) for this feature.
Please find more [code examples](https://docs.ag2.ai/docs/Examples#automated-multi-agent-chat) for this feature.

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -201,7 +201,7 @@

## Enhanced LLM Inferences

AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://ag2ai.github.io/ag2/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
AG2 also helps maximize the utility out of the expensive LLMs such as gpt-4o. It offers [enhanced LLM inference](https://docs.ag2.ai/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference and templating.
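As a rough sketch (assuming an `OAI_CONFIG_LIST` file with one or more model entries), multi-config fallback and response caching can be exercised through the `OpenAIWrapper` client:

```python
from autogen import OpenAIWrapper, config_list_from_json

# Load several model configurations; the wrapper tries them in order and
# falls back to the next entry if a call errors out.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
client = OpenAIWrapper(config_list=config_list)

# cache_seed enables on-disk caching, so an identical repeated request is
# served from cache rather than triggering another paid API call.
response = client.create(
    messages=[{"role": "user", "content": "Give one sentence on why caching LLM calls helps."}],
    cache_seed=42,
)
print(client.extract_text_or_completion_object(response))
```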

<!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
@@ -220,7 +220,7 @@ config, analysis = autogen.Completion.tune(
response = autogen.Completion.create(context=test_instance, **config)
```
Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#tune-gpt-models) for this feature. -->
Please find more [code examples](https://docs.ag2.ai/docs/Examples#tune-gpt-models) for this feature. -->

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -230,15 +230,15 @@ Please find more [code examples](https://ag2ai.github.io/ag2/docs/Examples#tune-

## Documentation

You can find detailed documentation about AG2 [here](https://ag2ai.github.io/ag2/).
You can find detailed documentation about AG2 [here](https://docs.ag2.ai/).

In addition, you can find:

- [Research](https://ag2ai.github.io/ag2/docs/Research), [blogposts](https://docs.ag2.ai/blog) around AG2, and [Transparency FAQs](https://github.com/ag2ai/ag2/blob/main/TRANSPARENCY_FAQS.md)
- [Research](https://docs.ag2.ai/docs/Research), [blogposts](https://docs.ag2.ai/blog) around AG2, and [Transparency FAQs](https://github.com/ag2ai/ag2/blob/main/TRANSPARENCY_FAQS.md)

- [Discord](https://discord.gg/pAbnFJrkgZ)

- [Contributing guide](https://ag2ai.github.io/ag2/docs/Contribute)
- [Contributing guide](https://docs.ag2.ai/docs/contributor-guide/contributing)

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
2 changes: 1 addition & 1 deletion autogen/agentchat/contrib/captainagent.py
@@ -163,7 +163,7 @@ def __init__(
agent_lib (str): the path or a JSON file of the agent library for retrieving the nested chat instantiated by CaptainAgent.
tool_lib (str): the path to the tool library for retrieving the tools used in the nested chat instantiated by CaptainAgent.
nested_config (dict): the configuration for the nested chat instantiated by CaptainAgent.
A full list of keys and their functionalities can be found in [docs](https://ag2ai.github.io/ag2/docs/topics/captainagent/configurations).
A full list of keys and their functionalities can be found in [docs](https://docs.ag2.ai/docs/topics/captainagent/configurations).
agent_config_save_path (str): the path to save the generated or retrieved agent configuration.
**kwargs (dict): Please refer to other kwargs in
[ConversableAgent](https://github.com/ag2ai/ag2/blob/main/autogen/agentchat/conversable_agent.py#L74).
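For orientation only, a hedged construction sketch (the library paths and save path below are hypothetical placeholders, not values taken from this diff):

```python
from autogen import config_list_from_json
from autogen.agentchat.contrib.captainagent import CaptainAgent

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# Hypothetical agent/tool library locations for illustration.
captain = CaptainAgent(
    name="captain",
    llm_config={"config_list": config_list},
    agent_lib="agent_library.json",            # agent library for the nested chat
    tool_lib="tools",                          # tool library for the nested chat
    agent_config_save_path="./captain_agent_configs",
)
```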
2 changes: 1 addition & 1 deletion autogen/agentchat/contrib/swarm_agent.py
@@ -60,7 +60,7 @@ class ON_CONDITION:
Args:
target: The agent to hand off to or the nested chat configuration. Can be a SwarmAgent or a Dict.
If a Dict, it should follow the convention of the nested chat configuration, with the exception of a carryover configuration which is unique to Swarms.
Swarm Nested chat documentation: https://ag2ai.github.io/ag2/docs/topics/swarm#registering-handoffs-to-a-nested-chat
Swarm Nested chat documentation: https://docs.ag2.ai/docs/topics/swarm#registering-handoffs-to-a-nested-chat
condition: The condition for transitioning to the target agent, evaluated by the LLM to determine whether to call the underlying function/tool which does the transition.
available: Optional condition to determine if this ON_CONDITION is available. Can be a Callable or a string.
If a string, it will look up the value of the context variable with that name, which should be a bool.
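A sketch of how these fields might be used when registering a handoff (assuming the `SwarmAgent.register_hand_off` helper; the agent names, condition text, and context-variable name are illustrative):

```python
from autogen import config_list_from_json
from autogen.agentchat.contrib.swarm_agent import ON_CONDITION, SwarmAgent

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
triage = SwarmAgent(name="triage", llm_config={"config_list": config_list})
billing = SwarmAgent(name="billing", llm_config={"config_list": config_list})

# Hand off from triage to billing when the LLM judges the condition to apply,
# but only while the 'billing_enabled' context variable is truthy.
triage.register_hand_off(
    hand_to=[
        ON_CONDITION(
            target=billing,
            condition="The user is asking about an invoice or a refund.",
            available="billing_enabled",
        )
    ]
)
```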