Commit

update
qingyun-wu committed Nov 11, 2024
1 parent 27cda7d commit 2410316
Showing 1 changed file with 26 additions and 24 deletions.
README.md: 50 changes (26 additions & 24 deletions)
@@ -21,20 +21,22 @@
> :fire: :tada: Nov 11, 2024: We are evolving AutoGen into AG2! A new organization, [ag2labs](https://github.com/ag2labs), has been created to host the development of AG2 and related projects with open governance. We invite collaborators from all organizations and individuals to join the development.

- :fire: :tada: Sep 06, 2024: AG2 is available via `ag2` or `pyautogen` on PyPI! Starting with version 0.3.3, you can now install AG2 using:
+ :fire: :tada: Sep 06, 2024: AG2 is available via `ag2` (or its aliases `autogen` and `pyautogen`) on PyPI! Starting with version 0.3.3, you can now install AG2 using:
```
pip install ag2
```
or

```
pip install pyautogen
```
+ or
+ ```
+ pip install autogen
+ ```

**Note:** The previous package name `pyautogen` will remain valid for a transitional period. However, we encourage users to switch to the new, more intuitive `ag2` package name, as `pyautogen` will eventually be deprecated.

- 📄 **License Change:**
- With this new release and package name, we are officially switching to the Apache 2.0 license. This enhances our commitment to open-source collaboration while providing additional protections for contributors and users alike.
+ 📄 **License:**
+ We have adopted the Apache 2.0 license starting from v0.3. This enhances our commitment to open-source collaboration while providing additional protections for contributors and users alike.


:tada: May 29, 2024: DeepLearning.ai launched a new short course [AI Agentic Design Patterns with AutoGen](https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen), made in collaboration with Microsoft and Penn State University, and taught by AutoGen creators [Chi Wang](https://github.com/sonichi) and [Qingyun Wu](https://github.com/qingyun-wu).
@@ -45,21 +47,21 @@ With this new release and package name, we are officially switching to the Apach

:tada: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/).

- <!-- :tada: Apr 26, 2024: [AutoGen.NET](https://ag2labs.github.io/autogen-for-net/) is available for .NET developers! -->
+ <!-- :tada: Apr 26, 2024: [AutoGen.NET](https://ag2labs.github.io/ag2-for-net/) is available for .NET developers! -->

:tada: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26).

- :tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://ag2labs.github.io/autogen/blog/2024/03/03/AutoGen-Update); 📺[YouTube](https://www.youtube.com/watch?v=j_mtwQiaLGU).
+ :tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://ag2labs.github.io/ag2/blog/2024/03/03/AutoGen-Update); 📺[YouTube](https://www.youtube.com/watch?v=j_mtwQiaLGU).

<!-- :tada: Mar 1, 2024: the first AutoGen multi-agent experiment on the challenging [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark achieved the No. 1 accuracy in all the three levels. -->

<!-- :tada: Jan 30, 2024: AutoGen is highlighted by Peter Lee in Microsoft Research Forum [Keynote](https://t.co/nUBSjPDjqD). -->

:tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023).

- <!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/microsoft/autogen/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://ag2labs.github.io/autogen/docs/Installation#python). -->
+ <!-- :fire: Nov 24: pyautogen [v0.2](https://github.com/microsoft/autogen/releases/tag/v0.2.0) is released with many updates and new features compared to v0.1.1. It switches to using openai-python v1. Please read the [migration guide](https://ag2labs.github.io/ag2/docs/Installation#python). -->

- <!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperable with other AutoGen agents! Check out our [blogpost](https://ag2labs.github.io/autogen/blog/2023/11/13/OAI-assistants) for details and examples. -->
+ <!-- :fire: Nov 11: OpenAI's Assistants are available in AutoGen and interoperable with other AutoGen agents! Check out our [blogpost](https://ag2labs.github.io/ag2/blog/2023/11/13/OAI-assistants) for details and examples. -->

:tada: Nov 8, 2023: AutoGen is selected into [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html) 35 days after spinoff from [FLAML](https://github.com/microsoft/FLAML).

@@ -76,7 +78,7 @@ With this new release and package name, we are officially switching to the Apach
<!--
:fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).
- :fire: [autogen](https://ag2labs.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
+ :fire: [autogen](https://ag2labs.github.io/ag2/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).
:fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/). -->

@@ -120,17 +122,17 @@ The easiest way to start playing is
</a>
</p>

- ## [Installation](https://ag2labs.github.io/autogen/docs/Installation)
+ ## [Installation](https://ag2labs.github.io/ag2/docs/Installation)
### Option 1. Install and Run AG2 in Docker

- Find detailed instructions for users [here](https://ag2labs.github.io/autogen/docs/installation/Docker#step-1-install-docker), and for developers [here](https://ag2labs.github.io/autogen/docs/Contribute#docker-for-development).
+ Find detailed instructions for users [here](https://ag2labs.github.io/ag2/docs/installation/Docker#step-1-install-docker), and for developers [here](https://ag2labs.github.io/ag2/docs/Contribute#docker-for-development).

### Option 2. Install AG2 Locally

AG2 requires **Python version >= 3.8, < 3.13**. It can be installed via pip:

```bash
- pip install autogen
+ pip install ag2
```

Minimal dependencies are installed without extra options. You can install extra options based on the features you need.
@@ -140,13 +142,13 @@ Minimal dependencies are installed without extra options. You can install extra
pip install "autogen[blendsearch]"
``` -->

- Find more options in [Installation](https://ag2labs.github.io/autogen/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).
+ Find more options in [Installation](https://ag2labs.github.io/ag2/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).

<!-- Each of the [`notebook examples`](https://github.com/ag2labs/ag2/tree/main/notebook) may require a specific option to be installed. -->

- Even if you are installing and running AG2 locally outside of Docker, the recommended and default behavior of agents is to perform [code execution](https://ag2labs.github.io/autogen/docs/FAQ/#code-execution) in Docker. Find more instructions and how to change the default behavior [here](https://ag2labs.github.io/autogen/docs/Installation#code-execution-with-docker-(default)).
+ Even if you are installing and running AG2 locally outside of Docker, the recommended and default behavior of agents is to perform [code execution](https://ag2labs.github.io/ag2/docs/FAQ/#code-execution) in Docker. Find more instructions and how to change the default behavior [here](https://ag2labs.github.io/ag2/docs/Installation#code-execution-with-docker-(default)).

- For LLM inference configurations, check the [FAQs](https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints).
+ For LLM inference configurations, check the [FAQs](https://ag2labs.github.io/ag2/docs/FAQ#set-your-api-endpoints).
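
As a rough illustration of the Docker default mentioned above, the `code_execution_config` of a `UserProxyAgent` can opt out of containerized execution; this is a minimal sketch, and the `work_dir` value is illustrative:

```python
from autogen import UserProxyAgent

# By default, agents execute generated code inside a Docker container.
# Setting "use_docker" to False switches to local execution on the host.
user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)
```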

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -156,7 +158,7 @@ For LLM inference configurations, check the [FAQs](https://ag2labs.github.io/aut

## Multi-Agent Conversation Framework

- AG2 enables next-gen LLM applications with a generic [multi-agent conversation](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
+ AG2 enables next-gen LLM applications with a generic [multi-agent conversation](https://ag2labs.github.io/ag2/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code.

Features of this use case include:
@@ -170,7 +172,7 @@ For [example](https://github.com/ag2labs/ag2/blob/main/test/twoagent.py),
```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# Load LLM inference endpoints from an env variable or a file
- # See https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints
+ # See https://ag2labs.github.io/ag2/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
# You can also set config_list directly as a list, for example, config_list = [{'model': 'gpt-4', 'api_key': '<your OpenAI API key here>'},]
@@ -191,7 +193,7 @@ The figure below shows an example conversation flow with AG2.
![Agent Chat Example](https://github.com/ag2labs/ag2/blob/main/website/static/img/chat_example.png)

Alternatively, the [sample code](https://github.com/ag2labs/build-with-autogen/blob/main/samples/simple_chat.py) here allows a user to chat with an AG2 agent in ChatGPT style.
- Please find more [code examples](https://ag2labs.github.io/autogen/docs/Examples#automated-multi-agent-chat) for this feature.
+ Please find more [code examples](https://ag2labs.github.io/ag2/docs/Examples#automated-multi-agent-chat) for this feature.
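
For reference, a minimal end-to-end sketch of the two-agent pattern described above; it assumes an `OAI_CONFIG_LIST` file as in the FAQ, and the task message is purely illustrative:

```python
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

# Load LLM inference endpoints from an env variable or a file.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# The assistant proposes code and answers; the user proxy executes the code.
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# Start an automated conversation between the two agents to solve the task.
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TESLA stock price change YTD.",
)
```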

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -201,7 +203,7 @@ Please find more [code examples](https://ag2labs.github.io/autogen/docs/Examples

## Enhanced LLM Inferences

- AG2 also helps maximize the utility of expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://ag2labs.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference, and templating.
+ AG2 also helps maximize the utility of expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://ag2labs.github.io/ag2/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inference, and templating.
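
As a rough sketch of what this looks like in code (assuming the `OpenAIWrapper` client from the `autogen` package; the prompt and cache seed are illustrative):

```python
from autogen import OpenAIWrapper, config_list_from_json

# With several configs, the wrapper tries them in order and falls back on errors.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
client = OpenAIWrapper(config_list=config_list)

# cache_seed enables response caching, so identical requests are not re-billed.
response = client.create(
    messages=[{"role": "user", "content": "Explain multi-agent conversation in one sentence."}],
    cache_seed=42,
)
print(client.extract_text_or_completion_object(response))
```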

<!-- For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
@@ -220,7 +222,7 @@ config, analysis = autogen.Completion.tune(
response = autogen.Completion.create(context=test_instance, **config)
```
- Please find more [code examples](https://ag2labs.github.io/autogen/docs/Examples#tune-gpt-models) for this feature. -->
+ Please find more [code examples](https://ag2labs.github.io/ag2/docs/Examples#tune-gpt-models) for this feature. -->

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">
@@ -230,15 +232,15 @@ Please find more [code examples](https://ag2labs.github.io/autogen/docs/Examples

## Documentation

- You can find detailed documentation about AG2 [here](https://ag2labs.github.io/autogen/).
+ You can find detailed documentation about AG2 [here](https://ag2labs.github.io/ag2/).

In addition, you can find:

- - [Research](https://ag2labs.github.io/autogen/docs/Research), [blogposts](https://ag2labs.github.io/autogen/blog) around AG2, and [Transparency FAQs](https://github.com/ag2labs/ag2/blob/main/TRANSPARENCY_FAQS.md)
+ - [Research](https://ag2labs.github.io/ag2/docs/Research), [blogposts](https://ag2labs.github.io/ag2/blog) around AG2, and [Transparency FAQs](https://github.com/ag2labs/ag2/blob/main/TRANSPARENCY_FAQS.md)

- [Discord](https://discord.gg/pAbnFJrkgZ)

- - [Contributing guide](https://ag2labs.github.io/autogen/docs/Contribute)
+ - [Contributing guide](https://ag2labs.github.io/ag2/docs/Contribute)

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;">