diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile
index 64391862c4f2..5604047f9ef0 100644
--- a/.devcontainer/Dockerfile
+++ b/.devcontainer/Dockerfile
@@ -1,7 +1,7 @@
#-------------------------------------------------------------------------------------------------------------
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
# SPDX-License-Identifier: Apache-2.0
-# Contributions to this project, i.e., https://github.com/autogen-ai/autogen, are licensed under the Apache License, Version 2.0 (Apache-2.0).
+# Contributions to this project, i.e., https://github.com/autogenhub/autogen, are licensed under the Apache License, Version 2.0 (Apache-2.0).
# Portions derived from https://github.com/microsoft/autogen under the MIT License.
# SPDX-License-Identifier: MIT
diff --git a/.devcontainer/README.md b/.devcontainer/README.md
index 74974078b88e..40bb98fb236d 100644
--- a/.devcontainer/README.md
+++ b/.devcontainer/README.md
@@ -26,7 +26,7 @@ These configurations can be used with Codespaces and locally.
- **Usage**: Recommended for developers who are contributing to the AutoGen project.
- **Building the Image**: Run `docker build -f dev/Dockerfile -t autogen_ai_dev_img .`.
- **Using with Codespaces**: `Code > Codespaces > Click on ...> New with options > Choose "dev" as devcontainer configuration`. This image may require a Codespace with at least 64GB of disk space.
-- **Before using**: We highly encourage all potential contributors to read the [AutoGen Contributing](https://autogen-ai.github.io/autogen/docs/Contribute) page prior to submitting any pull requests.
+- **Before using**: We highly encourage all potential contributors to read the [AutoGen Contributing](https://autogenhub.github.io/autogen/docs/Contribute) page prior to submitting any pull requests.
## Customizing Dockerfiles
diff --git a/.devcontainer/dev/Dockerfile b/.devcontainer/dev/Dockerfile
index dd46421a06a3..ca9b5abdb3a4 100644
--- a/.devcontainer/dev/Dockerfile
+++ b/.devcontainer/dev/Dockerfile
@@ -10,18 +10,18 @@ RUN apt-get update && apt-get -y update
RUN apt-get install -y sudo git npm vim nano curl wget git-lfs
# Setup a non-root user 'autogen' with sudo access
-RUN adduser --home /home/autogen-ai --disabled-password --gecos '' autogen
+RUN adduser --home /home/autogenhub --disabled-password --gecos '' autogen
RUN adduser autogen sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER autogen
-WORKDIR /home/autogen-ai
+WORKDIR /home/autogenhub
# Set environment variable
# ENV OPENAI_API_KEY="{OpenAI-API-Key}"
# Clone the AutoGen repository
-RUN git clone https://github.com/autogen-ai/autogen.git /home/autogen-ai/autogen
-WORKDIR /home/autogen-ai/autogen
+RUN git clone https://github.com/autogenhub/autogen.git /home/autogenhub/autogen
+WORKDIR /home/autogenhub/autogen
# Install AutoGen in editable mode with extra components
RUN sudo pip install --upgrade pip && \
@@ -39,11 +39,11 @@ RUN yarn install --frozen-lockfile --ignore-engines
RUN arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) && \
wget -q https://github.com/quarto-dev/quarto-cli/releases/download/v1.5.23/quarto-1.5.23-linux-${arch}.tar.gz && \
- mkdir -p /home/autogen-ai/quarto/ && \
- tar -xzf quarto-1.5.23-linux-${arch}.tar.gz --directory /home/autogen-ai/quarto/ && \
+ mkdir -p /home/autogenhub/quarto/ && \
+ tar -xzf quarto-1.5.23-linux-${arch}.tar.gz --directory /home/autogenhub/quarto/ && \
rm quarto-1.5.23-linux-${arch}.tar.gz
-ENV PATH="${PATH}:/home/autogen-ai/quarto/quarto-1.5.23/bin/"
+ENV PATH="${PATH}:/home/autogenhub/quarto/quarto-1.5.23/bin/"
# Exposes the Yarn port for Docusaurus
EXPOSE 3000
diff --git a/.devcontainer/full/Dockerfile b/.devcontainer/full/Dockerfile
index 7fb38e416f5f..a59cd985aa64 100644
--- a/.devcontainer/full/Dockerfile
+++ b/.devcontainer/full/Dockerfile
@@ -11,11 +11,11 @@ RUN apt-get update \
&& rm -rf /var/lib/apt/lists/*
# Setup a non-root user 'autogen' with sudo access
-RUN adduser --home /home/autogen-ai --disabled-password --gecos '' autogen
+RUN adduser --home /home/autogenhub --disabled-password --gecos '' autogen
RUN adduser autogen sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER autogen
-WORKDIR /home/autogen-ai
+WORKDIR /home/autogenhub
# Set environment variable if needed
# ENV OPENAI_API_KEY="{OpenAI-API-Key}"
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 2edd7e33c60a..e56751d1369b 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,4 +1,4 @@
-
+
@@ -12,6 +12,6 @@
## Checks
-- [ ] I've included any doc changes needed for https://autogen-ai.github.io/autogen/. See https://autogen-ai.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
+- [ ] I've included any doc changes needed for https://autogenhub.github.io/autogen/. See https://autogenhub.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [ ] I've made sure all auto checks have passed.
diff --git a/LICENSE b/LICENSE
index 01659b2dc03e..d7d09047d691 100644
--- a/LICENSE
+++ b/LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.
- Copyright AutoGen-AI organization, i.e., https://github.com/autogen-ai, owners.
+ Copyright autogenhub organization, i.e., https://github.com/autogenhub, owners.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
diff --git a/MAINTAINERS.md b/MAINTAINERS.md
index 603731e6a149..236de0e5e7e5 100644
--- a/MAINTAINERS.md
+++ b/MAINTAINERS.md
@@ -26,7 +26,7 @@
| Rajan Chari * | [rajan-chari](https://github.com/rajan-chari) | Microsoft Research | CAP |
## I would like to join this list. How can I help the project?
-> We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://autogen-ai.github.io/autogen/docs/contributor-guide/contributing) guide.
+> We're always looking for new contributors to join our team and help improve the project. For more information, please refer to our [CONTRIBUTING](https://autogenhub.github.io/autogen/docs/contributor-guide/contributing) guide.
## Are you missing from this list?
diff --git a/NOTICE.md b/NOTICE.md
index 638931c6cdc5..63310f668712 100644
--- a/NOTICE.md
+++ b/NOTICE.md
@@ -1,13 +1,13 @@
## NOTICE
-Copyright (c) 2023-2024, Owners of https://github.com/autogen-ai
+Copyright (c) 2023-2024, Owners of https://github.com/autogenhub
This project is a fork of https://github.com/microsoft/autogen.
The [original project](https://github.com/microsoft/autogen) is licensed under the MIT License as detailed in [LICENSE_original_MIT](./license_original/LICENSE_original_MIT). The fork was created from version v0.2.35 of the original project.
-This project, i.e., https://github.com/autogen-ai/autogen, is licensed under the Apache License, Version 2.0 as detailed in [LICENSE](./LICENSE)
+This project, i.e., https://github.com/autogenhub/autogen, is licensed under the Apache License, Version 2.0 as detailed in [LICENSE](./LICENSE)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
diff --git a/README.md b/README.md
index 2ea360cf29f6..8b24541acd21 100644
--- a/README.md
+++ b/README.md
@@ -1,18 +1,18 @@
[![PyPI version](https://badge.fury.io/py/autogen.svg)](https://badge.fury.io/py/autogen)
-[![Build](https://github.com/autogen-ai/autogen/actions/workflows/python-package.yml/badge.svg)](https://github.com/autogen-ai/autogen/actions/workflows/python-package.yml)
+[![Build](https://github.com/autogenhub/autogen/actions/workflows/python-package.yml/badge.svg)](https://github.com/autogenhub/autogen/actions/workflows/python-package.yml)
![Python Version](https://img.shields.io/badge/3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-blue)
[![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://discord.gg/pAbnFJrkgZ)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40Chi_Wang_)](https://x.com/Chi_Wang_)
[![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core)
-# [AutoGen](https://github.com/autogen-ai/autogen)
+# [AutoGen](https://github.com/autogenhub/autogen)
[📚 Cite paper](#related-papers).
:fire: :tada: Sep 06, 2024: AutoGen now available as `autogen` on PyPI! We're excited to announce a more convenient package name for AutoGen: Starting with version 0.3.0, you can now install AutoGen using:
@@ -26,7 +26,7 @@ We extend our sincere gratitude to the original owner of `autogen` pypi package
📄 **License Change:**
With this new release and package name, we are officially switching to the Apache 2.0 license. This enhances our commitment to open-source collaboration while providing additional protections for contributors and users alike.
-:fire: Aug 24, 2024: A new organization [autogen-ai](https://github.com/autogen-ai) is created to host the development of AutoGen and related projects with open governance. We invite collaborators from all organizations and individuals.
+:fire: Oct 20, 2024: A new organization [autogenhub](https://github.com/autogenhub) has been created to host the development of AutoGen and related projects with open governance. We invite collaborators from all organizations and individuals.
:tada: May 29, 2024: DeepLearning.ai launched a new short course [AI Agentic Design Patterns with AutoGen](https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen), made in collaboration with Microsoft and Penn State University, and taught by AutoGen creators [Chi Wang](https://github.com/sonichi) and [Qingyun Wu](https://github.com/qingyun-wu).
@@ -36,11 +36,11 @@ With this new release and package name, we are officially switching to the Apach
:tada: May 11, 2024: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation](https://openreview.net/pdf?id=uAjxFFing2) received the best paper award at the [ICLR 2024 LLM Agents Workshop](https://llmagents.github.io/).
-:tada: Apr 26, 2024: [AutoGen.NET](https://autogen-ai.github.io/autogen-for-net/) is available for .NET developers!
+:tada: Apr 26, 2024: [AutoGen.NET](https://autogenhub.github.io/autogen-for-net/) is available for .NET developers!
:tada: Apr 17, 2024: Andrew Ng cited AutoGen in [The Batch newsletter](https://www.deeplearning.ai/the-batch/issue-245/) and [What's next for AI agentic workflows](https://youtu.be/sal78ACtGTc?si=JduUzN_1kDnMq0vF) at Sequoia Capital's AI Ascent (Mar 26).
-:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://autogen-ai.github.io/autogen/blog/2024/03/03/AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU).
+:tada: Mar 3, 2024: What's new in AutoGen? 📰[Blog](https://autogenhub.github.io/autogen/blog/2024/03/03/AutoGen-Update); 📺[Youtube](https://www.youtube.com/watch?v=j_mtwQiaLGU).
@@ -48,9 +48,9 @@ With this new release and package name, we are officially switching to the Apach
:tada: Dec 31, 2023: [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155) is selected by [TheSequence: My Five Favorite AI Papers of 2023](https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023).
-
+
-
+
:tada: Nov 8, 2023: AutoGen is selected into [Open100: Top 100 Open Source achievements](https://www.benchcouncil.org/evaluation/opencs/annual.html) 35 days after spinoff from [FLAML](https://github.com/microsoft/FLAML).
@@ -67,7 +67,7 @@ With this new release and package name, we are officially switching to the Apach
@@ -80,16 +80,16 @@ AutoGen is an open-source programming framework for building AI agents and facil
The project is currently maintained by a [dynamic group of volunteers](MAINTAINERS.md) from several organizations. Contact project administrators Chi Wang and Qingyun Wu via auto-gen@outlook.com if you are interested in becoming a maintainer.
-![AutoGen Overview](https://github.com/autogen-ai/autogen/blob/main/website/static/img/autogen_agentchat.png)
+![AutoGen Overview](https://github.com/autogenhub/autogen/blob/main/website/static/img/autogen_agentchat.png)
-AutoGen is created out of collaborative [research](https://autogen-ai.github.io/autogen/docs/Research) from Microsoft, Penn State University, and the University of Washington.
+AutoGen is created out of collaborative [research](https://autogenhub.github.io/autogen/docs/Research) from Microsoft, Penn State University, and the University of Washington.
-## [Installation](https://autogen-ai.github.io/autogen/docs/Installation)
+## [Installation](https://autogenhub.github.io/autogen/docs/Installation)
### Option 1. Install and Run AutoGen in Docker
-Find detailed instructions for users [here](https://autogen-ai.github.io/autogen/docs/installation/Docker#step-1-install-docker), and for developers [here](https://autogen-ai.github.io/autogen/docs/Contribute#docker-for-development).
+Find detailed instructions for users [here](https://autogenhub.github.io/autogen/docs/installation/Docker#step-1-install-docker), and for developers [here](https://autogenhub.github.io/autogen/docs/Contribute#docker-for-development).
### Option 2. Install AutoGen Locally
@@ -138,13 +138,13 @@ Minimal dependencies are installed without extra options. You can install extra
pip install "autogen[blendsearch]"
``` -->
-Find more options in [Installation](https://autogen-ai.github.io/autogen/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).
+Find more options in [Installation](https://autogenhub.github.io/autogen/docs/Installation#option-2-install-autogen-locally-using-virtual-environment).
-
+
-Even if you are installing and running AutoGen locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://autogen-ai.github.io/autogen/docs/FAQ/#code-execution) in docker. Find more instructions and how to change the default behaviour [here](https://autogen-ai.github.io/autogen/docs/Installation#code-execution-with-docker-(default)).
+Even if you are installing and running AutoGen locally outside of docker, the recommendation and default behavior of agents is to perform [code execution](https://autogenhub.github.io/autogen/docs/FAQ/#code-execution) in docker. Find more instructions and how to change the default behavior [here](https://autogenhub.github.io/autogen/docs/Installation#code-execution-with-docker-(default)).
-For LLM inference configurations, check the [FAQs](https://autogen-ai.github.io/autogen/docs/FAQ#set-your-api-endpoints).
+For LLM inference configurations, check the [FAQs](https://autogenhub.github.io/autogen/docs/FAQ#set-your-api-endpoints).
\n",
"\n",
- ":fire: Heads-up: We have migrated [AutoGen](https://autogen-ai.github.io/autogen/) into a dedicated [github repository](https://github.com/autogen-ai/autogen). Alongside this move, we have also launched a dedicated [Discord](https://discord.gg/pAbnFJrkgZ) server and a [website](https://autogen-ai.github.io/autogen/) for comprehensive documentation.\n",
+ ":fire: Heads-up: We have migrated [AutoGen](https://autogenhub.github.io/autogen/) into a dedicated [github repository](https://github.com/autogenhub/autogen). Alongside this move, we have also launched a dedicated [Discord](https://discord.gg/pAbnFJrkgZ) server and a [website](https://autogenhub.github.io/autogen/) for comprehensive documentation.\n",
"\n",
- ":fire: The automated multi-agent chat framework in [AutoGen](https://autogen-ai.github.io/autogen/) is in preview from v2.0.0.\n",
+ ":fire: The automated multi-agent chat framework in [AutoGen](https://autogenhub.github.io/autogen/) is in preview from v2.0.0.\n",
"\n",
":fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).\n",
"\n",
- ":fire: [autogen](https://autogen-ai.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).\n",
+ ":fire: [autogen](https://autogenhub.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).\n",
"\n",
":fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/).\n",
"\n",
@@ -308,7 +308,7 @@
"pip install flaml\n",
"```\n",
"\n",
- "Minimal dependencies are installed without extra options. You can install extra options based on the feature you need. For example, use the following to install the dependencies needed by the [`autogen`](https://autogen-ai.github.io/autogen/) package.\n",
+ "Minimal dependencies are installed without extra options. You can install extra options based on the feature you need. For example, use the following to install the dependencies needed by the [`autogen`](https://autogenhub.github.io/autogen/) package.\n",
"\n",
"```bash\n",
"pip install \"flaml[autogen]\"\n",
@@ -319,7 +319,7 @@
"\n",
"## Quickstart\n",
"\n",
- "- (New) The [autogen](https://autogen-ai.github.io/autogen/) package enables the next-gen GPT-X applications with a generic multi-agent conversation framework.\n",
+ "- (New) The [autogen](https://autogenhub.github.io/autogen/) package enables the next-gen GPT-X applications with a generic multi-agent conversation framework.\n",
" It offers customizable and conversable agents which integrate LLMs, tools and human.\n",
" By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For example,\n",
"\n",
diff --git a/notebook/agentchat_agentoptimizer.ipynb b/notebook/agentchat_agentoptimizer.ipynb
index d762750f436e..9f5cd110fb35 100644
--- a/notebook/agentchat_agentoptimizer.ipynb
+++ b/notebook/agentchat_agentoptimizer.ipynb
@@ -7,7 +7,7 @@
"# AgentOptimizer: An Agentic Way to Train Your LLM Agent\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In traditional ML pipeline, we train a model by updating its parameter according to the loss on the training set, while in the era of LLM agents, how should we train an agent? Here, we take an initial step towards the agent training. Inspired by the [function calling](https://platform.openai.com/docs/guides/function-calling) capabilities provided by OpenAI, we draw an analogy between model parameters and agent functions/skills, and update agent’s functions/skills based on its historical performance on the training set. As an agentic way of training an agent, our approach help enhance the agents’ abilities without requiring access to the LLMs parameters.\n",
"\n",
@@ -16,7 +16,7 @@
"Specifically, given a set of training data, AgentOptimizer would iteratively prompt the LLM to optimize the existing function list of the AssistantAgent and UserProxyAgent with code implementation if necessary. It also includes two strategies, roll-back, and early-stop, to streamline the training process.\n",
"In the example scenario, we test the proposed AgentOptimizer in solving problems from the [MATH dataset](https://github.com/hendrycks/math). \n",
"\n",
- "![AgentOptimizer](https://media.githubusercontent.com/media/autogen-ai/autogen/main/website/blog/2023-12-23-AgentOptimizer/img/agentoptimizer.png)\n",
+ "![AgentOptimizer](https://media.githubusercontent.com/media/autogenhub/autogen/main/website/blog/2023-12-23-AgentOptimizer/img/agentoptimizer.png)\n",
"\n",
"More information could be found in the [paper](https://arxiv.org/abs/2402.11359).\n",
"\n",
@@ -53,7 +53,7 @@
"source": [
"# MathUserProxy with function_call\n",
"\n",
- "This agent is a customized MathUserProxy inherits from its [parent class](https://github.com/autogen-ai/autogen/blob/main/autogen/agentchat/contrib/math_user_proxy_agent.py).\n",
+ "This agent is a customized MathUserProxy that inherits from its [parent class](https://github.com/autogenhub/autogen/blob/main/autogen/agentchat/contrib/math_user_proxy_agent.py).\n",
"\n",
"It supports using both function_call and python to solve math problems.\n"
]
diff --git a/notebook/agentchat_cost_token_tracking.ipynb b/notebook/agentchat_cost_token_tracking.ipynb
index 0f83bfb0dbe3..623a0b070b2a 100644
--- a/notebook/agentchat_cost_token_tracking.ipynb
+++ b/notebook/agentchat_cost_token_tracking.ipynb
@@ -53,7 +53,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
]
},
{
@@ -98,7 +98,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogen-ai/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogenhub/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
diff --git a/notebook/agentchat_custom_model.ipynb b/notebook/agentchat_custom_model.ipynb
index 6b3d2baec32d..fa91959cf971 100644
--- a/notebook/agentchat_custom_model.ipynb
+++ b/notebook/agentchat_custom_model.ipynb
@@ -210,7 +210,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
"\n",
"It first looks for an environment variable of a specified name (\"OAI_CONFIG_LIST\" in this example), which needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by models (you can filter by other keys as well).\n",
"\n",
diff --git a/notebook/agentchat_databricks_dbrx.ipynb b/notebook/agentchat_databricks_dbrx.ipynb
index cc3de76ed648..37d4be93ad39 100644
--- a/notebook/agentchat_databricks_dbrx.ipynb
+++ b/notebook/agentchat_databricks_dbrx.ipynb
@@ -10,7 +10,7 @@
"\n",
"In March 2024, Databricks released [DBRX](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm), a general-purpose LLM that sets a new standard for open LLMs. While available as an open-source model on Hugging Face ([databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct/tree/main) and [databricks/dbrx-base](https://huggingface.co/databricks/dbrx-base) ), customers of Databricks can also tap into the [Foundation Model APIs](https://docs.databricks.com/en/machine-learning/model-serving/score-foundation-models.html#query-a-chat-completion-model), which make DBRX available through an OpenAI-compatible, autoscaling REST API.\n",
"\n",
- "[Autogen](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat) is becoming a popular standard for agent creation. Built to support any \"LLM as a service\" that implements the OpenAI SDK, it can easily be extended to integrate with powerful open source models. \n",
+ "[Autogen](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat) is becoming a popular standard for agent creation. Built to support any \"LLM as a service\" that implements the OpenAI SDK, it can easily be extended to integrate with powerful open source models. \n",
"\n",
"This notebook will demonstrate a few basic examples of Autogen with DBRX, including the use of `AssistantAgent`, `UserProxyAgent`, and `ConversableAgent`. These demos are not intended to be exhaustive - feel free to use them as a base to build upon!\n",
"\n",
@@ -76,7 +76,7 @@
"source": [
"## Setup DBRX config list\n",
"\n",
- "See Autogen docs for more inforation on the use of `config_list`: [LLM Configuration](https://autogen-ai.github.io/autogen/docs/topics/llm_configuration#why-is-it-a-list)"
+ "See Autogen docs for more information on the use of `config_list`: [LLM Configuration](https://autogenhub.github.io/autogen/docs/topics/llm_configuration#why-is-it-a-list)"
]
},
{
@@ -116,7 +116,7 @@
"source": [
"## Hello World Example\n",
"\n",
- "Our first example will be with a simple `UserProxyAgent` asking a question to an `AssistantAgent`. This is based on the tutorial demo [here](https://autogen-ai.github.io/autogen/docs/tutorial/introduction).\n",
+ "Our first example will be with a simple `UserProxyAgent` asking a question to an `AssistantAgent`. This is based on the tutorial demo [here](https://autogenhub.github.io/autogen/docs/tutorial/introduction).\n",
"\n",
"After sending the question and seeing a response, you can type `exit` to end the chat or continue to converse."
]
@@ -207,7 +207,7 @@
"source": [
"## Simple Coding Agent\n",
"\n",
- "In this example, we will implement a \"coding agent\" that can execute code. You will see how this code is run alongside your notebook in your current workspace, taking advantage of the performance benefits of Databricks clusters. This is based off the demo [here](https://autogen-ai.github.io/autogen/docs/topics/non-openai-models/cloud-mistralai/).\n",
+ "In this example, we will implement a \"coding agent\" that can execute code. You will see how this code is run alongside your notebook in your current workspace, taking advantage of the performance benefits of Databricks clusters. This is based on the demo [here](https://autogenhub.github.io/autogen/docs/topics/non-openai-models/cloud-mistralai/).\n",
"\n",
"First, set up a directory: "
]
@@ -430,7 +430,7 @@
"source": [
"## Conversable Bots\n",
"\n",
- "We can also implement the [two-agent chat pattern](https://autogen-ai.github.io/autogen/docs/tutorial/conversation-patterns/#two-agent-chat-and-chat-result) using DBRX to \"talk to itself\" in a teacher/student exchange:"
+ "We can also implement the [two-agent chat pattern](https://autogenhub.github.io/autogen/docs/tutorial/conversation-patterns/#two-agent-chat-and-chat-result) using DBRX to \"talk to itself\" in a teacher/student exchange:"
]
},
{
@@ -498,7 +498,7 @@
"\n",
"It can be useful to display chat logs to the notebook for debugging, and then persist those logs to a Delta table. The following section demonstrates how to extend the default AutoGen logging libraries.\n",
"\n",
- "First, we will implement a Python `class` that extends the capabilities of `autogen.runtime_logging` [docs](https://autogen-ai.github.io/autogen/docs/notebooks/agentchat_logging):"
+ "First, we will implement a Python `class` that extends the capabilities of `autogen.runtime_logging` [docs](https://autogenhub.github.io/autogen/docs/notebooks/agentchat_logging):"
]
},
{
diff --git a/notebook/agentchat_function_call.ipynb b/notebook/agentchat_function_call.ipynb
index d9317f59de2f..e7b6db2db5c5 100644
--- a/notebook/agentchat_function_call.ipynb
+++ b/notebook/agentchat_function_call.ipynb
@@ -8,7 +8,7 @@
"source": [
"# Auto Generated Agent Chat: Task Solving with Provided Tools as Functions\n",
"\n",
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
"\n",
@@ -38,7 +38,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_function_call_async.ipynb b/notebook/agentchat_function_call_async.ipynb
index a6acbd6eb5d5..77e0610d1eb2 100644
--- a/notebook/agentchat_function_call_async.ipynb
+++ b/notebook/agentchat_function_call_async.ipynb
@@ -14,7 +14,7 @@
"id": "9a71fa36",
"metadata": {},
"source": [
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
"\n",
diff --git a/notebook/agentchat_function_call_currency_calculator.ipynb b/notebook/agentchat_function_call_currency_calculator.ipynb
index f8ff390c4ff8..4c77a66e34ef 100644
--- a/notebook/agentchat_function_call_currency_calculator.ipynb
+++ b/notebook/agentchat_function_call_currency_calculator.ipynb
@@ -15,7 +15,7 @@
"id": "9a71fa36",
"metadata": {},
"source": [
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
"\n",
@@ -45,7 +45,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat.ipynb b/notebook/agentchat_groupchat.ipynb
index abf418ee03c6..11b1312ada4c 100644
--- a/notebook/agentchat_groupchat.ipynb
+++ b/notebook/agentchat_groupchat.ipynb
@@ -8,9 +8,9 @@
"# Group Chat\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
- "This notebook is modified based on https://github.com/autogen-ai/FLAML/blob/4ea686af5c3e8ff24d9076a7a626c8b28ab5b1d7/notebook/autogen_multiagent_roleplay_chat.ipynb\n",
+ "This notebook is modified based on https://github.com/autogenhub/FLAML/blob/4ea686af5c3e8ff24d9076a7a626c8b28ab5b1d7/notebook/autogen_multiagent_roleplay_chat.ipynb\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
@@ -31,7 +31,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat_RAG.ipynb b/notebook/agentchat_groupchat_RAG.ipynb
index 7b2f1e453c6b..6a9e8c01b3fc 100644
--- a/notebook/agentchat_groupchat_RAG.ipynb
+++ b/notebook/agentchat_groupchat_RAG.ipynb
@@ -8,7 +8,7 @@
"# Group Chat with Retrieval Augmented Generation\n",
"\n",
"AutoGen supports conversable agents powered by LLMs, tools, or humans, performing tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
@@ -30,7 +30,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat_customized.ipynb b/notebook/agentchat_groupchat_customized.ipynb
index e439da12b89b..2ffa24ede027 100644
--- a/notebook/agentchat_groupchat_customized.ipynb
+++ b/notebook/agentchat_groupchat_customized.ipynb
@@ -8,7 +8,7 @@
"# Group Chat with Customized Speaker Selection Method\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to pass a cumstomized agent selection method to GroupChat. The customized function looks like this:\n",
"\n",
@@ -56,7 +56,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat_finite_state_machine.ipynb b/notebook/agentchat_groupchat_finite_state_machine.ipynb
index 7587b5e6bffe..d4291dd12f59 100644
--- a/notebook/agentchat_groupchat_finite_state_machine.ipynb
+++ b/notebook/agentchat_groupchat_finite_state_machine.ipynb
@@ -8,7 +8,7 @@
"# FSM - User can input speaker transition constraints\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"This notebook is about using graphs to define the transition paths amongst speakers.\n",
"\n",
diff --git a/notebook/agentchat_groupchat_research.ipynb b/notebook/agentchat_groupchat_research.ipynb
index a5954bdca245..a60b89c3ab15 100644
--- a/notebook/agentchat_groupchat_research.ipynb
+++ b/notebook/agentchat_groupchat_research.ipynb
@@ -8,7 +8,7 @@
"# Perform Research with Multi-Agent Group Chat\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"## Requirements\n",
"\n",
@@ -31,7 +31,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat_stateflow.ipynb b/notebook/agentchat_groupchat_stateflow.ipynb
index 969bbca51096..55749d4cefed 100644
--- a/notebook/agentchat_groupchat_stateflow.ipynb
+++ b/notebook/agentchat_groupchat_stateflow.ipynb
@@ -29,7 +29,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
@@ -62,7 +62,7 @@
"## A workflow for research\n",
"\n",
"\n",
diff --git a/notebook/agentchat_groupchat_vis.ipynb b/notebook/agentchat_groupchat_vis.ipynb
index b97513e0173c..e342da9e5168 100644
--- a/notebook/agentchat_groupchat_vis.ipynb
+++ b/notebook/agentchat_groupchat_vis.ipynb
@@ -8,7 +8,7 @@
"# Group Chat with Coder and Visualization Critic\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
@@ -29,7 +29,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_human_feedback.ipynb b/notebook/agentchat_human_feedback.ipynb
index 5df9d9e724c0..fcf62c2f67eb 100644
--- a/notebook/agentchat_human_feedback.ipynb
+++ b/notebook/agentchat_human_feedback.ipynb
@@ -12,7 +12,7 @@
"# Auto Generated Agent Chat: Task Solving with Code Generation, Execution, Debugging & Human Feedback\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to solve a challenging math problem with human feedback. Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `AssistantAgent`. By setting `human_input_mode` properly, the `UserProxyAgent` can also prompt the user for feedback to `AssistantAgent`. For example, when `human_input_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `AssistantAgent`. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `AssistantAgent` and return the execution results (success or failure and corresponding outputs) to `AssistantAgent`.\n",
"\n",
@@ -47,7 +47,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_inception_function.ipynb b/notebook/agentchat_inception_function.ipynb
index a2bd3f4242b7..c011a3c87785 100644
--- a/notebook/agentchat_inception_function.ipynb
+++ b/notebook/agentchat_inception_function.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Auto Generated Agent Chat: Function Inception\n",
"\n",
- "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to give them the ability to auto-extend the list of functions the model may call. Functions need to be registered to `UserProxyAgent`, which will be responsible for executing any function calls made by `AssistantAgent`. The assistant also needs to know the signature of functions that may be called. A special `define_function` function is registered, which registers a new function in `UserProxyAgent` and updates the configuration of the assistant.\n",
"\n",
diff --git a/notebook/agentchat_langchain.ipynb b/notebook/agentchat_langchain.ipynb
index 5253bfbba294..8d6b755df417 100644
--- a/notebook/agentchat_langchain.ipynb
+++ b/notebook/agentchat_langchain.ipynb
@@ -10,7 +10,7 @@
"source": [
"# Auto Generated Agent Chat: Task Solving with Langchain Provided Tools as Functions\n",
"\n",
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participants through multi-agent conversation. Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participants through multi-agent conversation. Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613) with a set of Langchain-provided tools and toolkits, to demonstrate how to leverage the 35+ tools available. \n",
"A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
@@ -49,7 +49,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_models`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
+ "The [`config_list_from_models`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
"\n",
"- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
"- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
@@ -128,7 +128,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogen-ai/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogenhub/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
diff --git a/notebook/agentchat_microsoft_fabric.ipynb b/notebook/agentchat_microsoft_fabric.ipynb
index 20cf771b440e..1a2b9573866e 100644
--- a/notebook/agentchat_microsoft_fabric.ipynb
+++ b/notebook/agentchat_microsoft_fabric.ipynb
@@ -13,8 +13,8 @@
"source": [
"## Use AutoGen in Microsoft Fabric\n",
"\n",
- "[AutoGen](https://github.com/autogen-ai/autogen) offers conversable LLM agents, which can be used to solve various tasks with human or automatic feedback, including tasks that require using tools via code.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "[AutoGen](https://github.com/autogenhub/autogen) offers conversable LLM agents, which can be used to solve various tasks with human or automatic feedback, including tasks that require using tools via code.\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"[Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. Its pre-built AI models include GPT-x models such as `gpt-4o`, `gpt-4-turbo`, `gpt-4`, `gpt-4-8k`, `gpt-4-32k`, `gpt-35-turbo`, `gpt-35-turbo-16k` and `gpt-35-turbo-instruct`, etc. It's important to note that the Azure Open AI service is not supported on trial SKUs and only paid SKUs (F64 or higher, or P1 or higher) are supported.\n",
"\n",
@@ -282,7 +282,7 @@
"http_client = get_openai_httpx_sync_client() # http_client is needed for openai>1\n",
"http_client.__deepcopy__ = types.MethodType(\n",
" lambda self, memo: self, http_client\n",
- ") # https://autogen-ai.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy\\n\",\n",
+ ") # https://autogenhub.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy\\n\",\n",
"\n",
"config_list = [\n",
" {\n",
@@ -447,7 +447,7 @@
"http_client = get_openai_httpx_sync_client() # http_client is needed for openai>1\n",
"http_client.__deepcopy__ = types.MethodType(\n",
" lambda self, memo: self, http_client\n",
- ") # https://autogen-ai.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy\n",
+ ") # https://autogenhub.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy\n",
"\n",
"config_list = [\n",
" {\n",
@@ -708,7 +708,7 @@
"### Example 2\n",
"How to use `AssistantAgent` and `RetrieveUserProxyAgent` to do Retrieval Augmented Generation (RAG) for QA and Code Generation.\n",
"\n",
- "Check out this [blog](https://autogen-ai.github.io/autogen/blog/2023/10/18/RetrieveChat) for more details."
+ "Check out this [blog](https://autogenhub.github.io/autogen/blog/2023/10/18/RetrieveChat) for more details."
]
},
{
@@ -3229,7 +3229,7 @@
"### Example 3\n",
"How to use `MultimodalConversableAgent` to chat with images.\n",
"\n",
- "Check out this [blog](https://autogen-ai.github.io/autogen/blog/2023/11/06/LMM-Agent) for more details."
+ "Check out this [blog](https://autogenhub.github.io/autogen/blog/2023/11/06/LMM-Agent) for more details."
]
},
{
diff --git a/notebook/agentchat_oai_assistant_groupchat.ipynb b/notebook/agentchat_oai_assistant_groupchat.ipynb
index a110f788929d..4bd70fc4b846 100644
--- a/notebook/agentchat_oai_assistant_groupchat.ipynb
+++ b/notebook/agentchat_oai_assistant_groupchat.ipynb
@@ -7,7 +7,7 @@
"# Auto Generated Agent Chat: Group Chat with GPTAssistantAgent\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to get multiple `GPTAssistantAgent` converse through group chat.\n",
"\n",
@@ -32,7 +32,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
@@ -139,12 +139,12 @@
"text": [
"\u001b[33mUser_proxy\u001b[0m (to chat_manager):\n",
"\n",
- "Get the number of issues and pull requests for the repository 'autogen-ai/autogen' over the past three weeks and offer analyzes to the data. You should print the data in csv format grouped by weeks.\n",
+ "Get the number of issues and pull requests for the repository 'autogenhub/autogen' over the past three weeks and offer analyzes to the data. You should print the data in csv format grouped by weeks.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to chat_manager):\n",
"\n",
- "To gather the number of issues and pull requests for the repository 'autogen-ai/autogen' over the past three weeks and to offer an analysis of the data, we'll need to modify the previous script.\n",
+ "To gather the number of issues and pull requests for the repository 'autogenhub/autogen' over the past three weeks and to offer an analysis of the data, we'll need to modify the previous script.\n",
"\n",
"We will enhance the script to gather data from the past three weeks, separated by each week, and then output the data in CSV format, grouped by the week during which the issues and pull requests were created. This will require us to make multiple API calls for each week and aggregate the data accordingly.\n",
"\n",
@@ -467,7 +467,7 @@
"source": [
"user_proxy.initiate_chat(\n",
" manager,\n",
- " message=\"Get the number of issues and pull requests for the repository 'autogen-ai/autogen' over the past three weeks and offer analysis to the data. You should print the data in csv format grouped by weeks.\",\n",
+ " message=\"Get the number of issues and pull requests for the repository 'autogenhub/autogen' over the past three weeks and offer analysis to the data. You should print the data in csv format grouped by weeks.\",\n",
")\n",
"# type exit to terminate the chat"
]
diff --git a/notebook/agentchat_oai_assistant_retrieval.ipynb b/notebook/agentchat_oai_assistant_retrieval.ipynb
index 2a050ba7fc0a..594bda97fa59 100644
--- a/notebook/agentchat_oai_assistant_retrieval.ipynb
+++ b/notebook/agentchat_oai_assistant_retrieval.ipynb
@@ -6,7 +6,7 @@
"source": [
"# RAG OpenAI Assistants in AutoGen\n",
"\n",
- "This notebook shows an example of the [`GPTAssistantAgent`](https://github.com/autogen-ai/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py) with retrieval augmented generation. `GPTAssistantAgent` is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n",
+ "This notebook shows an example of the [`GPTAssistantAgent`](https://github.com/autogenhub/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py) with retrieval augmented generation. `GPTAssistantAgent` is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n",
"`UserProxyAgent` in AutoGen."
]
},
diff --git a/notebook/agentchat_oai_assistant_twoagents_basic.ipynb b/notebook/agentchat_oai_assistant_twoagents_basic.ipynb
index 1b24fa12b9dc..6bb1d51ca10c 100644
--- a/notebook/agentchat_oai_assistant_twoagents_basic.ipynb
+++ b/notebook/agentchat_oai_assistant_twoagents_basic.ipynb
@@ -6,7 +6,7 @@
"source": [
"# OpenAI Assistants in AutoGen\n",
"\n",
- "This notebook shows a very basic example of the [`GPTAssistantAgent`](https://github.com/autogen-ai/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py), which is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n",
+ "This notebook shows a very basic example of the [`GPTAssistantAgent`](https://github.com/autogenhub/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py), which is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n",
"`UserProxyAgent` in AutoGen."
]
},
diff --git a/notebook/agentchat_oai_code_interpreter.ipynb b/notebook/agentchat_oai_code_interpreter.ipynb
index 351b45d464c8..c5200caeebc8 100644
--- a/notebook/agentchat_oai_code_interpreter.ipynb
+++ b/notebook/agentchat_oai_code_interpreter.ipynb
@@ -28,7 +28,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_planning.ipynb b/notebook/agentchat_planning.ipynb
index 6dc2983ee56d..bf337ef9c63f 100644
--- a/notebook/agentchat_planning.ipynb
+++ b/notebook/agentchat_planning.ipynb
@@ -12,7 +12,7 @@
"# Auto Generated Agent Chat: Collaborative Task Solving with Coding and Planning Agent\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use multiple agents to work together and accomplish a task that requires finding info from the web and coding. `AssistantAgent` is an LLM-based agent that can write and debug Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `AssistantAgent`. We further create a planning agent for the assistant agent to consult. The planning agent is a variation of the LLM-based `AssistantAgent` with a different system message.\n",
"\n",
@@ -47,7 +47,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for an environment variable with a specified name. The value of the environment variable needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for an environment variable with a specified name. The value of the environment variable needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
"\n",
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base.\n"
]
@@ -97,7 +97,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogen-ai/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n",
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogenhub/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n",
"\n",
"## Construct Agents\n",
"\n",
diff --git a/notebook/agentchat_stream.ipynb b/notebook/agentchat_stream.ipynb
index 09a2819c0df3..45701fc9085f 100644
--- a/notebook/agentchat_stream.ipynb
+++ b/notebook/agentchat_stream.ipynb
@@ -12,7 +12,7 @@
"# Interactive LLM Agent Dealing with Data Stream\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use customized agents to continuously acquire news from the web and ask for investment suggestions.\n",
"\n",
@@ -47,7 +47,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
]
},
{
@@ -94,7 +94,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogen-ai/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogenhub/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
diff --git a/notebook/agentchat_surfer.ipynb b/notebook/agentchat_surfer.ipynb
index a69838d00d29..8bb75bc195fc 100644
--- a/notebook/agentchat_surfer.ipynb
+++ b/notebook/agentchat_surfer.ipynb
@@ -35,7 +35,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
"\n",
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n",
"\n",
diff --git a/notebook/agentchat_teachability.ipynb b/notebook/agentchat_teachability.ipynb
index c531ef3457da..38139a3167cc 100644
--- a/notebook/agentchat_teachability.ipynb
+++ b/notebook/agentchat_teachability.ipynb
@@ -13,7 +13,7 @@
"\n",
"In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n",
"\n",
- "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/autogen-ai/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n",
+ "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/autogenhub/autogen/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n",
"\n",
"## Requirements\n",
"\n",
@@ -37,7 +37,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_teachable_oai_assistants.ipynb b/notebook/agentchat_teachable_oai_assistants.ipynb
index 8181c5a46026..67e2bede4ce6 100644
--- a/notebook/agentchat_teachable_oai_assistants.ipynb
+++ b/notebook/agentchat_teachable_oai_assistants.ipynb
@@ -14,7 +14,7 @@
"In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n",
"\n",
"This notebook demonstrates how `Teachability` can be added to instances of `GPTAssistantAgent`\n",
- "so that they can learn facts, preferences, and skills from users. As explained [here](https://autogen-ai.github.io/autogen/docs/topics/openai-assistant/gpt_assistant_agent), each instance of `GPTAssistantAgent` wraps an OpenAI Assistant that can be given a set of tools including functions, code interpreter, and retrieval. Assistants with these tools are demonstrated in separate standalone sections below, which can be run independently.\n",
+ "so that they can learn facts, preferences, and skills from users. As explained [here](https://autogenhub.github.io/autogen/docs/topics/openai-assistant/gpt_assistant_agent), each instance of `GPTAssistantAgent` wraps an OpenAI Assistant that can be given a set of tools including functions, code interpreter, and retrieval. Assistants with these tools are demonstrated in separate standalone sections below, which can be run independently.\n",
"\n",
"## Requirements\n",
"\n",
@@ -41,7 +41,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_teaching.ipynb b/notebook/agentchat_teaching.ipynb
index fe7e5a69a0c0..319c5403496c 100644
--- a/notebook/agentchat_teaching.ipynb
+++ b/notebook/agentchat_teaching.ipynb
@@ -10,9 +10,9 @@
"TODO: Implement advanced teachability based on this example.\n",
"\n",
"AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework makes it easy to build many advanced applications of LLMs.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
- "This notebook demonstrates how AutoGen enables a user to teach AI new skills via natural agent interactions, without requiring knowledge of programming language. It is modified based on https://github.com/autogen-ai/FLAML/blob/evaluation/notebook/research_paper/teaching.ipynb and https://github.com/autogen-ai/FLAML/blob/evaluation/notebook/research_paper/teaching_recipe_reuse.ipynb.\n",
+ "This notebook demonstrates how AutoGen enables a user to teach AI new skills via natural agent interactions, without requiring knowledge of programming language. It is modified based on https://github.com/autogenhub/FLAML/blob/evaluation/notebook/research_paper/teaching.ipynb and https://github.com/autogenhub/FLAML/blob/evaluation/notebook/research_paper/teaching_recipe_reuse.ipynb.\n",
"\n",
"## Requirements\n",
"\n",
diff --git a/notebook/agentchat_two_users.ipynb b/notebook/agentchat_two_users.ipynb
index cd0b2bbcf2dc..1ba6a627668e 100644
--- a/notebook/agentchat_two_users.ipynb
+++ b/notebook/agentchat_two_users.ipynb
@@ -11,7 +11,7 @@
"source": [
"# Auto Generated Agent Chat: Collaborative Task Solving with Multiple Agents and Human Users\n",
"\n",
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate an application involving multiple agents and human users to work together and accomplish a task. `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `AssistantAgent`. We create multiple `UserProxyAgent` instances that can represent different human users.\n",
"\n",
@@ -46,7 +46,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
"\n",
"It first looks for an environment variable of a specified name (\"OAI_CONFIG_LIST\" in this example), which needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by models (you can filter by other keys as well).\n",
"\n",
@@ -74,7 +74,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogen-ai/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogenhub/autogen/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
diff --git a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb
index 2fdf07542acd..a3171978d8e1 100644
--- a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb
+++ b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb
@@ -8,7 +8,7 @@
"# Translating Video audio using Whisper and GPT-3.5-turbo\n",
"\n",
"In this notebook, we demonstrate how to use whisper and GPT-3.5-turbo with `AssistantAgent` and `UserProxyAgent` to recognize and translate\n",
- "the speech sound from a video file and add the timestamp like a subtitle file based on [agentchat_function_call.ipynb](https://github.com/autogen-ai/autogen/blob/main/notebook/agentchat_function_call.ipynb)\n"
+ "the speech sound from a video file and add the timestamp like a subtitle file based on [agentchat_function_call.ipynb](https://github.com/autogenhub/autogen/blob/main/notebook/agentchat_function_call.ipynb)\n"
]
},
{
diff --git a/notebook/agentchat_web_info.ipynb b/notebook/agentchat_web_info.ipynb
index 488024b3b291..4cfac9cfa231 100644
--- a/notebook/agentchat_web_info.ipynb
+++ b/notebook/agentchat_web_info.ipynb
@@ -12,7 +12,7 @@
"# Auto Generated Agent Chat: Solving Tasks Requiring Web Info\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to perform tasks which require acquiring info from the web:\n",
"* discuss a paper based on its URL.\n",
@@ -51,7 +51,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
]
},
{
@@ -108,7 +108,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogen-ai/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/autogenhub/autogen/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods."
]
},
{
diff --git a/notebook/agentchat_websockets.ipynb b/notebook/agentchat_websockets.ipynb
index aad1d0f72be9..10bed59a14ac 100644
--- a/notebook/agentchat_websockets.ipynb
+++ b/notebook/agentchat_websockets.ipynb
@@ -8,16 +8,16 @@
"source": [
"# Websockets: Streaming input and output using websockets\n",
"\n",
- "This notebook demonstrates how to use the [`IOStream`](https://autogen-ai.github.io/autogen/docs/reference/io/base/IOStream) class to stream both input and output using websockets. The use of websockets allows you to build web clients that are more responsive than the one using web methods. The main difference is that the webosockets allows you to push data while you need to poll the server for new response using web mothods.\n",
+ "This notebook demonstrates how to use the [`IOStream`](https://autogenhub.github.io/autogen/docs/reference/io/base/IOStream) class to stream both input and output using websockets. The use of websockets allows you to build web clients that are more responsive than the one using web methods. The main difference is that the webosockets allows you to push data while you need to poll the server for new response using web mothods.\n",
"\n",
"\n",
- "In this guide, we explore the capabilities of the [`IOStream`](https://autogen-ai.github.io/autogen/docs/reference/io/base/IOStream) class. It is specifically designed to enhance the development of clients such as web clients which use websockets for streaming both input and output. The [`IOStream`](https://autogen-ai.github.io/autogen/docs/reference/io/base/IOStream) stands out by enabling a more dynamic and interactive user experience for web applications.\n",
+ "In this guide, we explore the capabilities of the [`IOStream`](https://autogenhub.github.io/autogen/docs/reference/io/base/IOStream) class. It is specifically designed to enhance the development of clients such as web clients which use websockets for streaming both input and output. The [`IOStream`](https://autogenhub.github.io/autogen/docs/reference/io/base/IOStream) stands out by enabling a more dynamic and interactive user experience for web applications.\n",
"\n",
"Websockets technology is at the core of this functionality, offering a significant advancement over traditional web methods by allowing data to be \"pushed\" to the client in real-time. This is a departure from the conventional approach where clients must repeatedly \"poll\" the server to check for any new responses. By employing the underlining [websockets](https://websockets.readthedocs.io/) library, the IOStream class facilitates a continuous, two-way communication channel between the server and client. This ensures that updates are received instantly, without the need for constant polling, thereby making web clients more efficient and responsive.\n",
"\n",
- "The real power of websockets, leveraged through the [`IOStream`](https://autogen-ai.github.io/autogen/docs/reference/io/base/IOStream) class, lies in its ability to create highly responsive web clients. This responsiveness is critical for applications requiring real-time data updates such as chat applications. By integrating the [`IOStream`](https://autogen-ai.github.io/autogen/docs/reference/io/base/IOStream) class into your web application, you not only enhance user experience through immediate data transmission but also reduce the load on your server by eliminating unnecessary polling.\n",
+ "The real power of websockets, leveraged through the [`IOStream`](https://autogenhub.github.io/autogen/docs/reference/io/base/IOStream) class, lies in its ability to create highly responsive web clients. This responsiveness is critical for applications requiring real-time data updates such as chat applications. By integrating the [`IOStream`](https://autogenhub.github.io/autogen/docs/reference/io/base/IOStream) class into your web application, you not only enhance user experience through immediate data transmission but also reduce the load on your server by eliminating unnecessary polling.\n",
"\n",
- "In essence, the transition to using websockets through the [`IOStream`](https://autogen-ai.github.io/autogen/docs/reference/io/base/IOStream) class marks a significant enhancement in web client development. This approach not only streamlines the data exchange process between clients and servers but also opens up new possibilities for creating more interactive and engaging web applications. By following this guide, developers can harness the full potential of websockets and the [`IOStream`](https://autogen-ai.github.io/autogen/docs/reference/io/base/IOStream) class to push the boundaries of what is possible with web client responsiveness and interactivity.\n",
+ "In essence, the transition to using websockets through the [`IOStream`](https://autogenhub.github.io/autogen/docs/reference/io/base/IOStream) class marks a significant enhancement in web client development. This approach not only streamlines the data exchange process between clients and servers but also opens up new possibilities for creating more interactive and engaging web applications. By following this guide, developers can harness the full potential of websockets and the [`IOStream`](https://autogenhub.github.io/autogen/docs/reference/io/base/IOStream) class to push the boundaries of what is possible with web client responsiveness and interactivity.\n",
"\n",
"## Requirements\n",
"\n",
@@ -42,7 +42,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
@@ -92,7 +92,7 @@
"An `on_connect` function is a crucial part of applications that utilize websockets, acting as an event handler that is called whenever a new client connection is established. This function is designed to initiate any necessary setup, communication protocols, or data exchange procedures specific to the newly connected client. Essentially, it lays the groundwork for the interactive session that follows, configuring how the server and the client will communicate and what initial actions are to be taken once a connection is made. Now, let's delve into the details of how to define this function, especially in the context of using the AutoGen framework with websockets.\n",
"\n",
"\n",
- "Upon a client's connection to the websocket server, the server automatically initiates a new instance of the [`IOWebsockets`](https://autogen-ai.github.io/autogen/docs/reference/io/websockets/IOWebsockets) class. This instance is crucial for managing the data flow between the server and the client. The `on_connect` function leverages this instance to set up the communication protocol, define interaction rules, and initiate any preliminary data exchanges or configurations required for the client-server interaction to proceed smoothly.\n"
+ "Upon a client's connection to the websocket server, the server automatically initiates a new instance of the [`IOWebsockets`](https://autogenhub.github.io/autogen/docs/reference/io/websockets/IOWebsockets) class. This instance is crucial for managing the data flow between the server and the client. The `on_connect` function leverages this instance to set up the communication protocol, define interaction rules, and initiate any preliminary data exchanges or configurations required for the client-server interaction to proceed smoothly.\n"
]
},
{
diff --git a/notebook/agenteval_cq_math.ipynb b/notebook/agenteval_cq_math.ipynb
index d13eec50be57..152d243961d0 100644
--- a/notebook/agenteval_cq_math.ipynb
+++ b/notebook/agenteval_cq_math.ipynb
@@ -15,9 +15,9 @@
"\n",
"- `quantify_criteria`: This function quantifies the performance of any sample task based on the criteria generated in the `generate_criteria` step in the following way: $(c_1=a_1, \\dots, c_n=a_n)$\n",
"\n",
- "![AgentEval](https://media.githubusercontent.com/media/autogen-ai/autogen/main/website/blog/2023-11-20-AgentEval/img/agenteval-CQ.png)\n",
+ "![AgentEval](https://media.githubusercontent.com/media/autogenhub/autogen/main/website/blog/2023-11-20-AgentEval/img/agenteval-CQ.png)\n",
"\n",
- "For more detailed explanations, please refer to the accompanying [blog post](https://autogen-ai.github.io/autogen/blog/2023/11/20/AgentEval)\n",
+ "For more detailed explanations, please refer to the accompanying [blog post](https://autogenhub.github.io/autogen/blog/2023/11/20/AgentEval)\n",
"\n",
"## Requirements\n",
"\n",
diff --git a/notebook/autobuild_basic.ipynb b/notebook/autobuild_basic.ipynb
index 1420efff709c..249066951d91 100644
--- a/notebook/autobuild_basic.ipynb
+++ b/notebook/autobuild_basic.ipynb
@@ -9,10 +9,10 @@
"source": [
"# AutoBuild\n",
"By: [Linxin Song](https://linxins97.github.io/), [Jieyu Zhang](https://jieyuz2.github.io/)\n",
- "Reference: [Agent AutoBuild](https://autogen-ai.github.io/autogen/blog/2023/11/26/Agent-AutoBuild/)\n",
+ "Reference: [Agent AutoBuild](https://autogenhub.github.io/autogen/blog/2023/11/26/Agent-AutoBuild/)\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://autogen-ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://autogenhub.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we introduce a new class, `AgentBuilder`, to help user build an automatic task solving process powered by multi-agent system. Specifically, in `build()`, we prompt a LLM to create multiple participant agent and initialize a group chat, and specify whether this task need programming to solve. AgentBuilder also support open-source LLMs by [vLLM](https://docs.vllm.ai/en/latest/index.html) and [Fastchat](https://github.com/lm-sys/FastChat). Check the supported model list [here](https://docs.vllm.ai/en/latest/models/supported_models.html)."
]
diff --git a/notebook/autogen_uniformed_api_calling.ipynb b/notebook/autogen_uniformed_api_calling.ipynb
index 0ae7fd9da529..f356e0d3f2a7 100644
--- a/notebook/autogen_uniformed_api_calling.ipynb
+++ b/notebook/autogen_uniformed_api_calling.ipynb
@@ -7,7 +7,7 @@
"# A Uniform interface to call different LLMs\n",
"\n",
"Autogen provides a uniform interface for API calls to different LLMs, and creating LLM agents from them.\n",
- "Through setting up a configuration file, you can easily switch between different LLMs by just changing the model name, while enjoying all the [enhanced features](https://autogen-ai.github.io/autogen/docs/topics/llm-caching) such as [caching](https://autogen-ai.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary) and [cost calculation](https://autogen-ai.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary)!\n",
+ "Through setting up a configuration file, you can easily switch between different LLMs by just changing the model name, while enjoying all the [enhanced features](https://autogenhub.github.io/autogen/docs/topics/llm-caching) such as [caching](https://autogenhub.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary) and [cost calculation](https://autogenhub.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary)!\n",
"\n",
"In this notebook, we will show you how to use AutoGen to call different LLMs and create LLM agents from them.\n",
"\n",
@@ -22,7 +22,7 @@
"\n",
"... and more to come!\n",
"\n",
- "You can also [plug in your local deployed LLM](https://autogen-ai.github.io/autogen/blog/2024/01/26/Custom-Models) into AutoGen if needed."
+ "You can also [plug in your local deployed LLM](https://autogenhub.github.io/autogen/blog/2024/01/26/Custom-Models) into AutoGen if needed."
]
},
{
diff --git a/notebook/config_loader_utility_functions.ipynb b/notebook/config_loader_utility_functions.ipynb
index b6b31345fa22..cbb51b3aa5ce 100644
--- a/notebook/config_loader_utility_functions.ipynb
+++ b/notebook/config_loader_utility_functions.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Config loader utility functions\n",
"\n",
- "For an introduction to configuring LLMs, refer to the [main configuration docs](https://autogen-ai.github.io/autogen/docs/topics/llm_configuration). This guide will run through examples of the more advanced utility functions for managing API configurations.\n",
+ "For an introduction to configuring LLMs, refer to the [main configuration docs](https://autogenhub.github.io/autogen/docs/topics/llm_configuration). This guide will run through examples of the more advanced utility functions for managing API configurations.\n",
"\n",
"Managing API configurations can be tricky, especially when dealing with multiple models and API versions. The provided utility functions assist users in managing these configurations effectively. Ensure your API keys and other sensitive data are stored securely. You might store keys in `.txt` or `.env` files or environment variables for local development. Never expose your API keys publicly. If you insist on storing your key files locally on your repo (you shouldn't), ensure the key file path is added to the `.gitignore` file.\n",
"\n",
diff --git a/notebook/oai_chatgpt_gpt4.ipynb b/notebook/oai_chatgpt_gpt4.ipynb
index b588763f6c13..2149daf7a40e 100644
--- a/notebook/oai_chatgpt_gpt4.ipynb
+++ b/notebook/oai_chatgpt_gpt4.ipynb
@@ -17,8 +17,8 @@
}
},
"source": [
- "Contributions to this project, i.e., https://github.com/autogen-ai/autogen, are licensed under the Apache License, Version 2.0 (Apache-2.0).\n",
- "Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai\n",
+ "Contributions to this project, i.e., https://github.com/autogenhub/autogen, are licensed under the Apache License, Version 2.0 (Apache-2.0).\n",
+ "Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub\n",
"SPDX-License-Identifier: Apache-2.0\n",
"Portions derived from https://github.com/microsoft/autogen under the MIT License.\n",
"SPDX-License-Identifier: MIT\n",
@@ -33,7 +33,7 @@
"\n",
"In this notebook, we tune OpenAI ChatGPT (both GPT-3.5 and GPT-4) models for math problem solving. We use [the MATH benchmark](https://crfm.stanford.edu/helm/latest/?group=math_chain_of_thought) for measuring mathematical problem solving on competition math problems with chain-of-thoughts style reasoning.\n",
"\n",
- "Related link: [Blogpost](https://autogen-ai.github.io/autogen/blog/2023/04/21/LLM-tuning-math) based on this experiment.\n",
+ "Related link: [Blogpost](https://autogenhub.github.io/autogen/blog/2023/04/21/LLM-tuning-math) based on this experiment.\n",
"\n",
"## Requirements\n",
"\n",
@@ -98,7 +98,7 @@
"source": [
"### Set your API Endpoint\n",
"\n",
- "The [`config_list_openai_aoai`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_openai_aoai) function tries to create a list of Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
+ "The [`config_list_openai_aoai`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_openai_aoai) function tries to create a list of Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
"\n",
"- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
"- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
diff --git a/notebook/oai_completion.ipynb b/notebook/oai_completion.ipynb
index e283eb0b248c..1a7fb4957745 100644
--- a/notebook/oai_completion.ipynb
+++ b/notebook/oai_completion.ipynb
@@ -17,8 +17,8 @@
}
},
"source": [
- "Contributions to this project, i.e., https://github.com/autogen-ai/autogen, are licensed under the Apache License, Version 2.0 (Apache-2.0).\n",
- "Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai\n",
+ "Contributions to this project, i.e., https://github.com/autogenhub/autogen, are licensed under the Apache License, Version 2.0 (Apache-2.0).\n",
+ "Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub\n",
"SPDX-License-Identifier: Apache-2.0\n",
"Portions derived from https://github.com/microsoft/autogen under the MIT License.\n",
"SPDX-License-Identifier: MIT\n",
@@ -64,11 +64,11 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "* The [`config_list_openai_aoai`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_openai_aoai) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
+ "* The [`config_list_openai_aoai`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_openai_aoai) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
" - OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
" - Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
" - Azure OpenAI API base: os.environ[\"AZURE_OPENAI_API_BASE\"] or `aoai_api_base_file=\"base_aoai.txt\"`. Multiple bases can be stored, one per line.\n",
- "* The [`config_list_from_json`](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for the environment variable `env_or_file`, which must be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
+ "* The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for the environment variable `env_or_file`, which must be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
"\n",
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base. If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choosing \"upload file\" icon.\n"
]
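The notebook cell above describes the lookup order used by `config_list_from_json`: environment variable first (holding a JSON string), then a JSON file of the same name, then filtering by `filter_dict`. As a minimal stdlib sketch of that behavior (not the library's actual implementation — the real function also accepts `file_location` and richer filter semantics):

```python
import json
import os


def load_config_list(env_or_file, filter_dict=None):
    """Sketch of the lookup order described above: check the environment
    variable first (it must hold a valid JSON string), fall back to a
    JSON file of the same name, then filter entries by filter_dict."""
    raw = os.environ.get(env_or_file)
    if raw is not None:
        configs = json.loads(raw)
    else:
        with open(env_or_file) as f:
            configs = json.load(f)
    if filter_dict:
        # Keep only configs whose value for each key is in the allowed list.
        configs = [
            c for c in configs
            if all(c.get(k) in allowed for k, allowed in filter_dict.items())
        ]
    return configs


# Example: a config list stored in an environment variable.
os.environ["OAI_CONFIG_LIST"] = json.dumps(
    [
        {"model": "gpt-4", "api_key": "sk-aaa"},
        {"model": "gpt-3.5-turbo", "api_key": "sk-bbb"},
    ]
)
config_list = load_config_list("OAI_CONFIG_LIST", filter_dict={"model": ["gpt-4"]})
print(config_list)  # only the gpt-4 entry survives the filter
```

The filtering step mirrors how `filter_dict` narrows a mixed OpenAI/Azure config list down to the models an agent should actually use.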
diff --git a/setup.py b/setup.py
index 4ca0587e7412..64004e772435 100644
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -125,7 +125,7 @@
description="A programming framework for agentic AI",
long_description=long_description,
long_description_content_type="text/markdown",
- url="https://github.com/autogen-ai/autogen",
+ url="https://github.com/autogenhub/autogen",
packages=setuptools.find_packages(include=["autogen*"], exclude=["test"]),
install_requires=install_requires,
extras_require=extra_require,
diff --git a/test/agentchat/contrib/agent_eval/test_agent_eval.py b/test/agentchat/contrib/agent_eval/test_agent_eval.py
index 05c547d25efe..57023662d258 100644
--- a/test/agentchat/contrib/agent_eval/test_agent_eval.py
+++ b/test/agentchat/contrib/agent_eval/test_agent_eval.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/agent_eval/test_criterion.py b/test/agentchat/contrib/agent_eval/test_criterion.py
index 0befdf6d0416..a065ee09fbbb 100644
--- a/test/agentchat/contrib/agent_eval/test_criterion.py
+++ b/test/agentchat/contrib/agent_eval/test_criterion.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/agent_eval/test_task.py b/test/agentchat/contrib/agent_eval/test_task.py
index 5489a60b5a8a..2beca045d780 100644
--- a/test/agentchat/contrib/agent_eval/test_task.py
+++ b/test/agentchat/contrib/agent_eval/test_task.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py b/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py
index 46f02337d66a..c3a57824db30 100755
--- a/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py
+++ b/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -28,7 +28,7 @@
def create_teachable_agent(reset_db=False):
"""Instantiates a teachable agent using the settings from the top of this file."""
# Load LLM inference endpoints from an env variable or a file
- # See https://autogen-ai.github.io/autogen/docs/FAQ#set-your-api-endpoints
+ # See https://autogenhub.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file=OAI_CONFIG_LIST, filter_dict=filter_dict, file_location=KEY_LOC)
diff --git a/test/agentchat/contrib/capabilities/test_image_generation_capability.py b/test/agentchat/contrib/capabilities/test_image_generation_capability.py
index 9703c3b2f49e..7efa394924aa 100644
--- a/test/agentchat/contrib/capabilities/test_image_generation_capability.py
+++ b/test/agentchat/contrib/capabilities/test_image_generation_capability.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/test_teachable_agent.py b/test/agentchat/contrib/capabilities/test_teachable_agent.py
index d148f511bd2d..05de50a7ceed 100755
--- a/test/agentchat/contrib/capabilities/test_teachable_agent.py
+++ b/test/agentchat/contrib/capabilities/test_teachable_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -40,7 +40,7 @@
def create_teachable_agent(reset_db=False, verbosity=0):
"""Instantiates a teachable agent using the settings from the top of this file."""
# Load LLM inference endpoints from an env variable or a file
- # See https://autogen-ai.github.io/autogen/docs/FAQ#set-your-api-endpoints
+ # See https://autogenhub.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file=OAI_CONFIG_LIST, filter_dict=filter_dict, file_location=KEY_LOC)
diff --git a/test/agentchat/contrib/capabilities/test_transform_messages.py b/test/agentchat/contrib/capabilities/test_transform_messages.py
index 831ee13277cb..cbd8be24b0b1 100644
--- a/test/agentchat/contrib/capabilities/test_transform_messages.py
+++ b/test/agentchat/contrib/capabilities/test_transform_messages.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/test_transforms.py b/test/agentchat/contrib/capabilities/test_transforms.py
index 20233c77e456..6868ad1c825f 100644
--- a/test/agentchat/contrib/capabilities/test_transforms.py
+++ b/test/agentchat/contrib/capabilities/test_transforms.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/test_transforms_util.py b/test/agentchat/contrib/capabilities/test_transforms_util.py
index ebe49e167de3..a2efe049ef8c 100644
--- a/test/agentchat/contrib/capabilities/test_transforms_util.py
+++ b/test/agentchat/contrib/capabilities/test_transforms_util.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/test_vision_capability.py b/test/agentchat/contrib/capabilities/test_vision_capability.py
index 24238fc5731e..3d215a540a53 100644
--- a/test/agentchat/contrib/capabilities/test_vision_capability.py
+++ b/test/agentchat/contrib/capabilities/test_vision_capability.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py b/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py
index e0809867b099..d8fc0263ccef 100644
--- a/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py
+++ b/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py b/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py
index aeda74cf4127..0b92acb8fec5 100755
--- a/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py
+++ b/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/retrievechat/test_retrievechat.py b/test/agentchat/contrib/retrievechat/test_retrievechat.py
index 92b256b93f75..882593467226 100755
--- a/test/agentchat/contrib/retrievechat/test_retrievechat.py
+++ b/test/agentchat/contrib/retrievechat/test_retrievechat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_agent_builder.py b/test/agentchat/contrib/test_agent_builder.py
index 4612577c46ec..9a4ad6b55ea8 100755
--- a/test/agentchat/contrib/test_agent_builder.py
+++ b/test/agentchat/contrib/test_agent_builder.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_agent_optimizer.py b/test/agentchat/contrib/test_agent_optimizer.py
index 9f1d8cc01a7f..1dc6ce405d64 100644
--- a/test/agentchat/contrib/test_agent_optimizer.py
+++ b/test/agentchat/contrib/test_agent_optimizer.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_gpt_assistant.py b/test/agentchat/contrib/test_gpt_assistant.py
index 7ff0e7285cef..5617991bab2e 100755
--- a/test/agentchat/contrib/test_gpt_assistant.py
+++ b/test/agentchat/contrib/test_gpt_assistant.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_img_utils.py b/test/agentchat/contrib/test_img_utils.py
index 7057585c29fe..f62014e6f1cd 100755
--- a/test/agentchat/contrib/test_img_utils.py
+++ b/test/agentchat/contrib/test_img_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_llamaindex_conversable_agent.py b/test/agentchat/contrib/test_llamaindex_conversable_agent.py
index 96e0b8eda2e2..deca47926fb6 100644
--- a/test/agentchat/contrib/test_llamaindex_conversable_agent.py
+++ b/test/agentchat/contrib/test_llamaindex_conversable_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_llava.py b/test/agentchat/contrib/test_llava.py
index d3b644cfc2a2..1bb19a633a70 100755
--- a/test/agentchat/contrib/test_llava.py
+++ b/test/agentchat/contrib/test_llava.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_lmm.py b/test/agentchat/contrib/test_lmm.py
index 5d2d45aaff43..f174855bfbeb 100755
--- a/test/agentchat/contrib/test_lmm.py
+++ b/test/agentchat/contrib/test_lmm.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_society_of_mind_agent.py b/test/agentchat/contrib/test_society_of_mind_agent.py
index c96702d7c14a..a70131c7c434 100755
--- a/test/agentchat/contrib/test_society_of_mind_agent.py
+++ b/test/agentchat/contrib/test_society_of_mind_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_web_surfer.py b/test/agentchat/contrib/test_web_surfer.py
index 508ebb72e487..0d8c60e0eed8 100755
--- a/test/agentchat/contrib/test_web_surfer.py
+++ b/test/agentchat/contrib/test_web_surfer.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -21,7 +21,7 @@
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
from test_assistant_agent import KEY_LOC, OAI_CONFIG_LIST # noqa: E402
-BLOG_POST_URL = "https://autogen-ai.github.io/autogen/blog/2023/04/21/LLM-tuning-math"
+BLOG_POST_URL = "https://autogenhub.github.io/autogen/blog/2023/04/21/LLM-tuning-math"
BLOG_POST_TITLE = "Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH | AutoGen"
BING_QUERY = "Microsoft"
diff --git a/test/agentchat/contrib/vectordb/test_chromadb.py b/test/agentchat/contrib/vectordb/test_chromadb.py
index daf51f371e71..b14c906d50c1 100644
--- a/test/agentchat/contrib/vectordb/test_chromadb.py
+++ b/test/agentchat/contrib/vectordb/test_chromadb.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/vectordb/test_mongodb.py b/test/agentchat/contrib/vectordb/test_mongodb.py
index 51279848784b..387380139502 100644
--- a/test/agentchat/contrib/vectordb/test_mongodb.py
+++ b/test/agentchat/contrib/vectordb/test_mongodb.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/vectordb/test_pgvectordb.py b/test/agentchat/contrib/vectordb/test_pgvectordb.py
index 01ad3dc99035..462c2c4f5b6d 100644
--- a/test/agentchat/contrib/vectordb/test_pgvectordb.py
+++ b/test/agentchat/contrib/vectordb/test_pgvectordb.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/vectordb/test_qdrant.py b/test/agentchat/contrib/vectordb/test_qdrant.py
index 056f7e89b9a5..a2a284373c96 100644
--- a/test/agentchat/contrib/vectordb/test_qdrant.py
+++ b/test/agentchat/contrib/vectordb/test_qdrant.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/vectordb/test_vectordb_utils.py b/test/agentchat/contrib/vectordb/test_vectordb_utils.py
index 71be547b58be..b85680f08890 100644
--- a/test/agentchat/contrib/vectordb/test_vectordb_utils.py
+++ b/test/agentchat/contrib/vectordb/test_vectordb_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/extensions/tsp.py b/test/agentchat/extensions/tsp.py
index 11877abedd08..4d1e68cdb6d0 100644
--- a/test/agentchat/extensions/tsp.py
+++ b/test/agentchat/extensions/tsp.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/extensions/tsp_api.py b/test/agentchat/extensions/tsp_api.py
index 4ceb93b56c2c..67176162ba26 100644
--- a/test/agentchat/extensions/tsp_api.py
+++ b/test/agentchat/extensions/tsp_api.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agent_file_logging.py b/test/agentchat/test_agent_file_logging.py
index 9c014b09a55e..181c8c143433 100644
--- a/test/agentchat/test_agent_file_logging.py
+++ b/test/agentchat/test_agent_file_logging.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agent_logging.py b/test/agentchat/test_agent_logging.py
index 0ae39eaa65e0..f94ae64a00af 100644
--- a/test/agentchat/test_agent_logging.py
+++ b/test/agentchat/test_agent_logging.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agent_setup_with_use_docker_settings.py b/test/agentchat/test_agent_setup_with_use_docker_settings.py
index 61e2073d9ebc..04d4c01aa9c4 100644
--- a/test/agentchat/test_agent_setup_with_use_docker_settings.py
+++ b/test/agentchat/test_agent_setup_with_use_docker_settings.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agent_usage.py b/test/agentchat/test_agent_usage.py
index 781727c52483..d9906b09b2de 100755
--- a/test/agentchat/test_agent_usage.py
+++ b/test/agentchat/test_agent_usage.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agentchat_utils.py b/test/agentchat/test_agentchat_utils.py
index cfdc609d59c2..530dcf5f8ffb 100644
--- a/test/agentchat/test_agentchat_utils.py
+++ b/test/agentchat/test_agentchat_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_assistant_agent.py b/test/agentchat/test_assistant_agent.py
index b4cd995b5156..1854752f8f35 100755
--- a/test/agentchat/test_assistant_agent.py
+++ b/test/agentchat/test_assistant_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_async.py b/test/agentchat/test_async.py
index 07f7a75b07e2..01fee6bbde5d 100755
--- a/test/agentchat/test_async.py
+++ b/test/agentchat/test_async.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_async_chats.py b/test/agentchat/test_async_chats.py
index 7d83464d66f9..4749e1d46e5a 100755
--- a/test/agentchat/test_async_chats.py
+++ b/test/agentchat/test_async_chats.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_async_get_human_input.py b/test/agentchat/test_async_get_human_input.py
index 80cd929b6a79..8ff215ccd8d0 100755
--- a/test/agentchat/test_async_get_human_input.py
+++ b/test/agentchat/test_async_get_human_input.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_cache_agent.py b/test/agentchat/test_cache_agent.py
index 4c6d75cd3416..9a16eaac7d9c 100644
--- a/test/agentchat/test_cache_agent.py
+++ b/test/agentchat/test_cache_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_chats.py b/test/agentchat/test_chats.py
index ec15ce1789a0..318d52674c53 100755
--- a/test/agentchat/test_chats.py
+++ b/test/agentchat/test_chats.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_conversable_agent.py b/test/agentchat/test_conversable_agent.py
index da1ad92f5d4c..5f79f72ab02b 100755
--- a/test/agentchat/test_conversable_agent.py
+++ b/test/agentchat/test_conversable_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_function_and_tool_calling.py b/test/agentchat/test_function_and_tool_calling.py
index ed3977ff6704..ffe57dd8da0b 100644
--- a/test/agentchat/test_function_and_tool_calling.py
+++ b/test/agentchat/test_function_and_tool_calling.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_function_call.py b/test/agentchat/test_function_call.py
index c8f73b4f4ad7..deb554ff9978 100755
--- a/test/agentchat/test_function_call.py
+++ b/test/agentchat/test_function_call.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_function_call_groupchat.py b/test/agentchat/test_function_call_groupchat.py
index 0e4bc9586413..fbffebc9ff28 100755
--- a/test/agentchat/test_function_call_groupchat.py
+++ b/test/agentchat/test_function_call_groupchat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_groupchat.py b/test/agentchat/test_groupchat.py
index c716ef51758a..5efb839317cd 100755
--- a/test/agentchat/test_groupchat.py
+++ b/test/agentchat/test_groupchat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_human_input.py b/test/agentchat/test_human_input.py
index 5c2e99f2431d..5f848e1bd19c 100755
--- a/test/agentchat/test_human_input.py
+++ b/test/agentchat/test_human_input.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_math_user_proxy_agent.py b/test/agentchat/test_math_user_proxy_agent.py
index e2baed12bae5..214295e984e7 100755
--- a/test/agentchat/test_math_user_proxy_agent.py
+++ b/test/agentchat/test_math_user_proxy_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_nested.py b/test/agentchat/test_nested.py
index e9cad254f413..150a35e60721 100755
--- a/test/agentchat/test_nested.py
+++ b/test/agentchat/test_nested.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_tool_calls.py b/test/agentchat/test_tool_calls.py
index cdb428cae4ca..99b19725f023 100755
--- a/test/agentchat/test_tool_calls.py
+++ b/test/agentchat/test_tool_calls.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_cache.py b/test/cache/test_cache.py
index fc7653590eb9..602f1c87fc91 100755
--- a/test/cache/test_cache.py
+++ b/test/cache/test_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_cosmos_db_cache.py b/test/cache/test_cosmos_db_cache.py
index 8ff466dd6fa9..34bac89037fa 100644
--- a/test/cache/test_cosmos_db_cache.py
+++ b/test/cache/test_cosmos_db_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_disk_cache.py b/test/cache/test_disk_cache.py
index 4724503c53d7..b91c8ca80f8c 100755
--- a/test/cache/test_disk_cache.py
+++ b/test/cache/test_disk_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_in_memory_cache.py b/test/cache/test_in_memory_cache.py
index 9f8878de622b..3d447d3377c9 100644
--- a/test/cache/test_in_memory_cache.py
+++ b/test/cache/test_in_memory_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_redis_cache.py b/test/cache/test_redis_cache.py
index 120f2bc58d52..25b85bb23532 100755
--- a/test/cache/test_redis_cache.py
+++ b/test/cache/test_redis_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/conftest.py b/test/conftest.py
index 242a2ac693cb..df6a650d0c5c 100644
--- a/test/conftest.py
+++ b/test/conftest.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/io/test_base.py b/test/io/test_base.py
index 21c59813dead..d9ac8947e343 100644
--- a/test/io/test_base.py
+++ b/test/io/test_base.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/io/test_console.py b/test/io/test_console.py
index eb6ed0fff34f..d7e04ca2a148 100644
--- a/test/io/test_console.py
+++ b/test/io/test_console.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/io/test_websockets.py b/test/io/test_websockets.py
index 335bade6e4fd..340188a134bb 100644
--- a/test/io/test_websockets.py
+++ b/test/io/test_websockets.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/_test_completion.py b/test/oai/_test_completion.py
index 592dc1ce5bb2..fd6a6752b7ef 100755
--- a/test/oai/_test_completion.py
+++ b/test/oai/_test_completion.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_anthropic.py b/test/oai/test_anthropic.py
index 5b7e0a9b4387..9ccba3f3c25c 100644
--- a/test/oai/test_anthropic.py
+++ b/test/oai/test_anthropic.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_client.py b/test/oai/test_client.py
index b9bff919246f..288dfbb6b732 100755
--- a/test/oai/test_client.py
+++ b/test/oai/test_client.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_client_stream.py b/test/oai/test_client_stream.py
index c658104d4d55..407cfad9948d 100755
--- a/test/oai/test_client_stream.py
+++ b/test/oai/test_client_stream.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_client_utils.py b/test/oai/test_client_utils.py
index 2fb7f1ea719d..504519e7e53c 100644
--- a/test/oai/test_client_utils.py
+++ b/test/oai/test_client_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_cohere.py b/test/oai/test_cohere.py
index a8f55c22883f..6e3038f31f03 100644
--- a/test/oai/test_cohere.py
+++ b/test/oai/test_cohere.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_custom_client.py b/test/oai/test_custom_client.py
index d4dc987d2786..48bcc728ebde 100644
--- a/test/oai/test_custom_client.py
+++ b/test/oai/test_custom_client.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_gemini.py b/test/oai/test_gemini.py
index f2fe15684dfd..8ce0d3cda64c 100644
--- a/test/oai/test_gemini.py
+++ b/test/oai/test_gemini.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_groq.py b/test/oai/test_groq.py
index aca4282b5d85..de2385a237d7 100644
--- a/test/oai/test_groq.py
+++ b/test/oai/test_groq.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_mistral.py b/test/oai/test_mistral.py
index 188a975dd909..9663a2d0191a 100644
--- a/test/oai/test_mistral.py
+++ b/test/oai/test_mistral.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_together.py b/test/oai/test_together.py
index 838a7e908bff..000729c40de9 100644
--- a/test/oai/test_together.py
+++ b/test/oai/test_together.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_utils.py b/test/oai/test_utils.py
index ca2ad57959ac..adf9fc2c9e78 100755
--- a/test/oai/test_utils.py
+++ b/test/oai/test_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_browser_utils.py b/test/test_browser_utils.py
index 1986d4fb4e02..c47fd7fe8ebf 100755
--- a/test/test_browser_utils.py
+++ b/test/test_browser_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -16,7 +16,7 @@
import requests
from agentchat.test_assistant_agent import KEY_LOC # noqa: E402
-BLOG_POST_URL = "https://autogen-ai.github.io/autogen/blog/2023/04/21/LLM-tuning-math"
+BLOG_POST_URL = "https://autogenhub.github.io/autogen/blog/2023/04/21/LLM-tuning-math"
BLOG_POST_TITLE = "Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH | AutoGen"
BLOG_POST_STRING = "Large language models (LLMs) are powerful tools that can generate natural language texts for various applications, such as chatbots, summarization, translation, and more. GPT-4 is currently the state of the art LLM in the world. Is model selection irrelevant? What about inference parameters?"
diff --git a/test/test_code_utils.py b/test/test_code_utils.py
index 354fd1cc1a8a..c10262d70efd 100755
--- a/test/test_code_utils.py
+++ b/test/test_code_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_function_utils.py b/test/test_function_utils.py
index 0c23a74899dc..bb7a6d44807e 100644
--- a/test/test_function_utils.py
+++ b/test/test_function_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_graph_utils.py b/test/test_graph_utils.py
index 020bc0a3151e..9b4537bd3898 100644
--- a/test/test_graph_utils.py
+++ b/test/test_graph_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_logging.py b/test/test_logging.py
index 531cda1181dc..0e26574771c9 100644
--- a/test/test_logging.py
+++ b/test/test_logging.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_notebook.py b/test/test_notebook.py
index 17d57e22dcdb..f961525fd0ab 100755
--- a/test/test_notebook.py
+++ b/test/test_notebook.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_pydantic.py b/test/test_pydantic.py
index 5cea6574d89e..d496fe0b7dbc 100644
--- a/test/test_pydantic.py
+++ b/test/test_pydantic.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_retrieve_utils.py b/test/test_retrieve_utils.py
index 243d2d5cadff..83be2f4425fc 100755
--- a/test/test_retrieve_utils.py
+++ b/test/test_retrieve_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_token_count.py b/test/test_token_count.py
index ee096c16cbd8..ffd71d068ef8 100755
--- a/test/test_token_count.py
+++ b/test/test_token_count.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/twoagent.py b/test/twoagent.py
index 20e9b59e60c0..96e6af63e5b8 100644
--- a/test/twoagent.py
+++ b/test/twoagent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
+# Copyright (c) 2023 - 2024, Owners of https://github.com/autogenhub
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -7,7 +7,7 @@
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# Load LLM inference endpoints from an env variable or a file
-# See https://autogen-ai.github.io/autogen/docs/FAQ#set-your-api-endpoints
+# See https://autogenhub.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
diff --git a/website/blog/2023-10-18-RetrieveChat/index.mdx b/website/blog/2023-10-18-RetrieveChat/index.mdx
index a95632ab71b9..188e762e635d 100644
--- a/website/blog/2023-10-18-RetrieveChat/index.mdx
+++ b/website/blog/2023-10-18-RetrieveChat/index.mdx
@@ -82,7 +82,7 @@ from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProx
2. Create an 'AssistantAgent' instance named "assistant" and an 'RetrieveUserProxyAgent' instance named "ragproxyagent"
-Refer to the [doc](https://autogen-ai.github.io/autogen/docs/reference/agentchat/contrib/retrieve_user_proxy_agent)
+Refer to the [doc](https://autogenhub.github.io/autogen/docs/reference/agentchat/contrib/retrieve_user_proxy_agent)
for more information on the detailed configurations.
```python
diff --git a/website/blog/2023-10-26-TeachableAgent/index.mdx b/website/blog/2023-10-26-TeachableAgent/index.mdx
index ee292cf6a186..adbbbbc08b58 100644
--- a/website/blog/2023-10-26-TeachableAgent/index.mdx
+++ b/website/blog/2023-10-26-TeachableAgent/index.mdx
@@ -54,7 +54,7 @@ from autogen import ConversableAgent # As an example
```python
# Load LLM inference endpoints from an env variable or a file
-# See https://autogen-ai.github.io/autogen/docs/FAQ#set-your-api-endpoints
+# See https://autogenhub.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
filter_dict = {"model": ["gpt-4"]} # GPT-3.5 is less reliable than GPT-4 at learning from user feedback.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST", filter_dict=filter_dict)
diff --git a/website/blog/2023-11-06-LMM-Agent/index.mdx b/website/blog/2023-11-06-LMM-Agent/index.mdx
index f8af3f9c298d..b60b876a4d26 100644
--- a/website/blog/2023-11-06-LMM-Agent/index.mdx
+++ b/website/blog/2023-11-06-LMM-Agent/index.mdx
@@ -9,8 +9,8 @@ tags: [LMM, multimodal]
**In Brief:**
* Introducing the **Multimodal Conversable Agent** and the **LLaVA Agent** to enhance LMM functionalities.
* Users can input text and images simultaneously using the `` tag to specify image loading.
-* Demonstrated through the [GPT-4V notebook](https://github.com/autogen-ai/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb).
-* Demonstrated through the [LLaVA notebook](https://github.com/autogen-ai/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb).
+* Demonstrated through the [GPT-4V notebook](https://github.com/autogenhub/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb).
+* Demonstrated through the [LLaVA notebook](https://github.com/autogenhub/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb).
## Introduction
Large multimodal models (LMMs) augment large language models (LLMs) with the ability to process multi-sensory data.
@@ -62,7 +62,7 @@ The `MultimodalConversableAgent` interprets the input prompt, extracting images
## Advanced Usage
Similar to other AutoGen agents, multimodal agents support multi-round dialogues with other agents, code generation, factual queries, and management via a GroupChat interface.
-For example, the `FigureCreator` in our [GPT-4V notebook](https://github.com/autogen-ai/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) and [LLaVA notebook](https://github.com/autogen-ai/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) integrates two agents: a coder (an AssistantAgent) and critics (a multimodal agent).
+For example, the `FigureCreator` in our [GPT-4V notebook](https://github.com/autogenhub/autogen/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) and [LLaVA notebook](https://github.com/autogenhub/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb) integrates two agents: a coder (an AssistantAgent) and critics (a multimodal agent).
The coder drafts Python code for visualizations, while the critics provide insights for enhancement. Collaboratively, these agents aim to refine visual outputs.
With `human_input_mode=ALWAYS`, you can also contribute suggestions for better visualizations.
@@ -72,6 +72,6 @@ With `human_input_mode=ALWAYS`, you can also contribute suggestions for better v
## Future Enhancements
-For further inquiries or suggestions, please open an issue in the [AutoGen repository](https://github.com/autogen-ai/autogen/) or contact me directly at beibin.li@microsoft.com.
+For further inquiries or suggestions, please open an issue in the [AutoGen repository](https://github.com/autogenhub/autogen/) or contact me directly at beibin.li@microsoft.com.
AutoGen will continue to evolve, incorporating more multimodal functionalities such as DALLE model integration, audio interaction, and video comprehension. Stay tuned for these exciting developments.
diff --git a/website/blog/2023-11-13-OAI-assistants/index.mdx b/website/blog/2023-11-13-OAI-assistants/index.mdx
index 63dcfc97ae75..128c8724b1e8 100644
--- a/website/blog/2023-11-13-OAI-assistants/index.mdx
+++ b/website/blog/2023-11-13-OAI-assistants/index.mdx
@@ -9,12 +9,12 @@ tags: [openai-assistant]
## TL;DR
-OpenAI assistants are now integrated into AutoGen via [`GPTAssistantAgent`](https://github.com/autogen-ai/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py).
+OpenAI assistants are now integrated into AutoGen via [`GPTAssistantAgent`](https://github.com/autogenhub/autogen/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py).
This enables multiple OpenAI assistants, which form the backend of the now popular GPTs, to collaborate and tackle complex tasks.
Checkout example notebooks for reference:
-* [Basic example](https://github.com/autogen-ai/autogen/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb)
-* [Code interpreter](https://github.com/autogen-ai/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb)
-* [Function calls](https://github.com/autogen-ai/autogen/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb)
+* [Basic example](https://github.com/autogenhub/autogen/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb)
+* [Code interpreter](https://github.com/autogenhub/autogen/blob/main/notebook/agentchat_oai_code_interpreter.ipynb)
+* [Function calls](https://github.com/autogenhub/autogen/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb)
## Introduction
@@ -100,7 +100,7 @@ user_proxy = UserProxyAgent(name="user_proxy",
user_proxy.initiate_chat(gpt_assistant, message="Print hello world")
```
-Checkout more examples [here](https://github.com/autogen-ai/autogen/tree/main/notebook).
+Check out more examples [here](https://github.com/autogenhub/autogen/tree/main/notebook).
## Limitations and Future Work
diff --git a/website/blog/2023-11-20-AgentEval/index.mdx b/website/blog/2023-11-20-AgentEval/index.mdx
index b0761b2cdac6..20c0c4ad6e01 100644
--- a/website/blog/2023-11-20-AgentEval/index.mdx
+++ b/website/blog/2023-11-20-AgentEval/index.mdx
@@ -14,7 +14,7 @@ tags: [LLM, GPT, evaluation, task utility]
**TL;DR:**
* As a developer of an LLM-powered application, how can you assess the utility it brings to end users while helping them with their tasks?
* To shed light on the question above, we introduce `AgentEval` — the first version of the framework to assess the utility of any LLM-powered application crafted to assist users in specific tasks. AgentEval aims to simplify the evaluation process by automatically proposing a set of criteria tailored to the unique purpose of your application. This allows for a comprehensive assessment, quantifying the utility of your application against the suggested criteria.
-* We demonstrate how `AgentEval` work using [math problems dataset](https://autogen-ai.github.io/autogen/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc).
+* We demonstrate how `AgentEval` works using the [math problems dataset](https://autogenhub.github.io/autogen/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc).
## Introduction
diff --git a/website/blog/2023-12-01-AutoGenStudio/index.mdx b/website/blog/2023-12-01-AutoGenStudio/index.mdx
index 6d7cdd513e8a..0e1628369f75 100644
--- a/website/blog/2023-12-01-AutoGenStudio/index.mdx
+++ b/website/blog/2023-12-01-AutoGenStudio/index.mdx
@@ -26,9 +26,9 @@ To help you rapidly prototype multi-agent solutions for your tasks, we are intro
- Publish your sessions to a local gallery.
-See the official AutoGen Studio documentation [here](https://autogen-ai.github.io/autogen/docs/autogen-studio/getting-started) for more details.
+See the official AutoGen Studio documentation [here](https://autogenhub.github.io/autogen/docs/autogen-studio/getting-started) for more details.
-AutoGen Studio is open source [code here](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/apps/autogen-studio), and can be installed via pip. Give it a try!
+AutoGen Studio is open source [code here](https://github.com/autogenhub/build-with-autogen/blob/main/samples/apps/autogen-studio), and can be installed via pip. Give it a try!
```bash
pip install autogenstudio
@@ -36,7 +36,7 @@ pip install autogenstudio
## Introduction
-The accelerating pace of technology has ushered us into an era where digital assistants (or agents) are becoming integral to our lives. [AutoGen](https://github.com/autogen-ai/autogen/tree/main/autogen) has emerged as a leading framework for orchestrating the power of agents. In the spirit of expanding this frontier and democratizing this capability, we are thrilled to introduce a new user-friendly interface: **AutoGen Studio**.
+The accelerating pace of technology has ushered us into an era where digital assistants (or agents) are becoming integral to our lives. [AutoGen](https://github.com/autogenhub/autogen/tree/main/autogen) has emerged as a leading framework for orchestrating the power of agents. In the spirit of expanding this frontier and democratizing this capability, we are thrilled to introduce a new user-friendly interface: **AutoGen Studio**.
With AutoGen Studio, users can rapidly create, manage, and interact with agents that can learn, adapt, and collaborate. As we release this interface into the open-source community, our ambition is not only to enhance productivity but to inspire a level of personalized interaction between humans and agents.
@@ -48,7 +48,7 @@ The following guide will help you get AutoGen Studio up and running on your syst
### Configuring an LLM Provider
-To get started, you need access to a language model. You can get this set up by following the steps in the AutoGen documentation [here](https://autogen-ai.github.io/autogen/docs/FAQ#set-your-api-endpoints). Configure your environment with either `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY`.
+To get started, you need access to a language model. You can get this set up by following the steps in the AutoGen documentation [here](https://autogenhub.github.io/autogen/docs/FAQ#set-your-api-endpoints). Configure your environment with either `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY`.
For example, in your terminal, you would set the API key like this:
@@ -104,7 +104,7 @@ There are two ways to install AutoGen Studio - from PyPi or from source. We **re
yarn build
```
- For Windows users, to build the frontend, you may need alternative commands provided in the [autogen studio readme](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/apps/autogen-studio).
+ For Windows users, to build the frontend, you may need alternative commands provided in the [autogen studio readme](https://github.com/autogenhub/build-with-autogen/blob/main/samples/apps/autogen-studio).
### Running the Application
@@ -139,7 +139,7 @@ This section focuses on defining the properties of agents and agent workflows. I
-**Agents**: This provides an interface to declaratively specify properties for an AutoGen agent (mirrors most of the members of a base [AutoGen conversable agent](https://github.com/autogen-ai/autogen/blob/main/autogen/agentchat/conversable_agent.py) class).
+**Agents**: This provides an interface to declaratively specify properties for an AutoGen agent (mirrors most of the members of a base [AutoGen conversable agent](https://github.com/autogenhub/autogen/blob/main/autogen/agentchat/conversable_agent.py) class).
**Agent Workflows**: An agent workflow is a specification of a set of agents that can work together to accomplish a task. The simplest version of this is a setup with two agents – a user proxy agent (that represents a user i.e. it compiles code and prints result) and an assistant that can address task requests (e.g., generating plans, writing code, evaluating responses, proposing error recovery steps, etc.). A more complex flow could be a group chat where even more agents work towards a solution.
@@ -168,7 +168,7 @@ AutoGen Studio comes with 3 example skills: `fetch_profile`, `find_papers`, `gen
## The AutoGen Studio API
-While AutoGen Studio is a web interface, it is powered by an underlying python API that is reusable and modular. Importantly, we have implemented an API where agent workflows can be declaratively specified (in JSON), loaded and run. An example of the current API is shown below. Please consult the [AutoGen Studio repo](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/apps/autogen-studio) for more details.
+While AutoGen Studio is a web interface, it is powered by an underlying python API that is reusable and modular. Importantly, we have implemented an API where agent workflows can be declaratively specified (in JSON), loaded and run. An example of the current API is shown below. Please consult the [AutoGen Studio repo](https://github.com/autogenhub/build-with-autogen/blob/main/samples/apps/autogen-studio) for more details.
```python
import json
@@ -201,7 +201,7 @@ As we continue to develop and refine AutoGen Studio, the road map below outlines
We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project:
-- Review the overall AutoGen project [AutoGen](https://github.com/autogen-ai/autogen).
+- Review the overall AutoGen project [AutoGen](https://github.com/autogenhub/autogen).
- Please review the AutoGen Studio [roadmap](https://github.com/microsoft/autogen/issues/737) to get a sense of the current priorities for the project. Help is appreciated especially with Studio issues tagged with `help-wanted`.
- Please initiate a discussion on the roadmap issue or a new issue to discuss your proposed contribution.
- Submit a pull request with your contribution!
@@ -219,7 +219,7 @@ A: To reset your conversation history, you can delete the `database.sqlite` file
A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages.
**Q: Where can I find documentation and support for AutoGen Studio?**
-A: We are constantly working to improve AutoGen Studio. For the latest updates, please refer to the [AutoGen Studio Readme](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/apps/autogen-studio). For additional support, please open an issue on [GitHub](https://github.com/autogen-ai/autogen) or ask questions on [Discord](https://aka.ms/autogen-dc).
+A: We are constantly working to improve AutoGen Studio. For the latest updates, please refer to the [AutoGen Studio Readme](https://github.com/autogenhub/build-with-autogen/blob/main/samples/apps/autogen-studio). For additional support, please open an issue on [GitHub](https://github.com/autogenhub/autogen) or ask questions on [Discord](https://aka.ms/autogen-dc).
**Q: Can I use Other Models with AutoGen Studio?**
Yes. AutoGen standardizes on the openai model api format, and you can use any api server that offers an openai compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details including `model name`, `api key`, `base url`, `model type` and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model name` is the deployment id or engine, and the `model type` is "azure".
diff --git a/website/blog/2023-12-23-AgentOptimizer/index.mdx b/website/blog/2023-12-23-AgentOptimizer/index.mdx
index ea13ad2447b1..38ad81d917dd 100644
--- a/website/blog/2023-12-23-AgentOptimizer/index.mdx
+++ b/website/blog/2023-12-23-AgentOptimizer/index.mdx
@@ -36,7 +36,7 @@ It contains three main methods:
This method records the conversation history and performance of the agents in solving one problem.
It includes two inputs: conversation_history (List[Dict]) and is_satisfied (bool).
-conversation_history is a list of dictionaries which could be got from chat_messages_for_summary in the [AgentChat](https://autogen-ai.github.io/autogen/docs/reference/agentchat/agentchat/) class.
+conversation_history is a list of dictionaries, which can be obtained from chat_messages_for_summary in the [AgentChat](https://autogenhub.github.io/autogen/docs/reference/agentchat/agentchat/) class.
is_satisfied is a bool value that represents whether the user is satisfied with the solution. If it is none, the user will be asked to input the satisfaction.
Example:
diff --git a/website/blog/2023-12-29-AgentDescriptions/index.mdx b/website/blog/2023-12-29-AgentDescriptions/index.mdx
index 3f8580b47c6b..50477caa9857 100644
--- a/website/blog/2023-12-29-AgentDescriptions/index.mdx
+++ b/website/blog/2023-12-29-AgentDescriptions/index.mdx
@@ -8,7 +8,7 @@ tags: [AutoGen]
## TL;DR
-AutoGen 0.2.2 introduces a [description](https://autogen-ai.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__) field to ConversableAgent (and all subclasses), and changes GroupChat so that it uses agent `description`s rather than `system_message`s when choosing which agents should speak next.
+AutoGen 0.2.2 introduces a [description](https://autogenhub.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__) field to ConversableAgent (and all subclasses), and changes GroupChat so that it uses agent `description`s rather than `system_message`s when choosing which agents should speak next.
This is expected to simplify GroupChat’s job, improve orchestration, and make it easier to implement new GroupChat or GroupChat-like alternatives.
@@ -18,9 +18,9 @@ However, if you were struggling with getting GroupChat to work, you can now try
## Introduction
-As AutoGen matures and developers build increasingly complex combinations of agents, orchestration is becoming an important capability. At present, [GroupChat](https://autogen-ai.github.io/autogen/docs/reference/agentchat/groupchat#groupchat-objects) and the [GroupChatManager](https://autogen-ai.github.io/autogen/docs/reference/agentchat/groupchat#groupchatmanager-objects) are the main built-in tools for orchestrating conversations between 3 or more agents. For orchestrators like GroupChat to work well, they need to know something about each agent so that they can decide who should speak and when. Prior to AutoGen 0.2.2, GroupChat relied on each agent's `system_message` and `name` to learn about each participating agent. This is likely fine when the system prompt is short and sweet, but can lead to problems when the instructions are very long (e.g., with the [AssistantAgent](https://autogen-ai.github.io/autogen/docs/reference/agentchat/assistant_agent)), or non-existent (e.g., with the [UserProxyAgent](https://autogen-ai.github.io/autogen/docs/reference/agentchat/user_proxy_agent)).
+As AutoGen matures and developers build increasingly complex combinations of agents, orchestration is becoming an important capability. At present, [GroupChat](https://autogenhub.github.io/autogen/docs/reference/agentchat/groupchat#groupchat-objects) and the [GroupChatManager](https://autogenhub.github.io/autogen/docs/reference/agentchat/groupchat#groupchatmanager-objects) are the main built-in tools for orchestrating conversations between 3 or more agents. For orchestrators like GroupChat to work well, they need to know something about each agent so that they can decide who should speak and when. Prior to AutoGen 0.2.2, GroupChat relied on each agent's `system_message` and `name` to learn about each participating agent. This is likely fine when the system prompt is short and sweet, but can lead to problems when the instructions are very long (e.g., with the [AssistantAgent](https://autogenhub.github.io/autogen/docs/reference/agentchat/assistant_agent)), or non-existent (e.g., with the [UserProxyAgent](https://autogenhub.github.io/autogen/docs/reference/agentchat/user_proxy_agent)).
-AutoGen 0.2.2 introduces a [description](https://autogen-ai.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__) field to all agents, and replaces the use of the `system_message` for orchestration in GroupChat and all future orchestrators. The `description` field defaults to the `system_message` to ensure backwards compatibility, so you may not need to change anything with your code if things are working well for you. However, if you were struggling with GroupChat, give setting the `description` field a try.
+AutoGen 0.2.2 introduces a [description](https://autogenhub.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__) field to all agents, and replaces the use of the `system_message` for orchestration in GroupChat and all future orchestrators. The `description` field defaults to the `system_message` to ensure backwards compatibility, so you may not need to change anything with your code if things are working well for you. However, if you were struggling with GroupChat, give setting the `description` field a try.
The remainder of this post provides an example of how using the `description` field simplifies GroupChat's job, provides some evidence of its effectiveness, and provides tips for writing good descriptions.
diff --git a/website/blog/2024-01-23-Code-execution-in-docker/index.mdx b/website/blog/2024-01-23-Code-execution-in-docker/index.mdx
index c6cb0c690da9..4aaf0e7cd193 100644
--- a/website/blog/2024-01-23-Code-execution-in-docker/index.mdx
+++ b/website/blog/2024-01-23-Code-execution-in-docker/index.mdx
@@ -55,8 +55,8 @@ user_proxy = autogen.UserProxyAgent(name="user_proxy", llm_config=llm_config,
## Related documentation
-- [Code execution with docker](https://autogen-ai.github.io/autogen/docs/Installation#code-execution-with-docker-default)
-- [How to disable code execution in docker](https://autogen-ai.github.io/autogen/docs/FAQ#agents-are-throwing-due-to-docker-not-running-how-can-i-resolve-this)
+- [Code execution with docker](https://autogenhub.github.io/autogen/docs/Installation#code-execution-with-docker-default)
+- [How to disable code execution in docker](https://autogenhub.github.io/autogen/docs/FAQ#agents-are-throwing-due-to-docker-not-running-how-can-i-resolve-this)
## Conclusion
diff --git a/website/blog/2024-01-25-AutoGenBench/index.mdx b/website/blog/2024-01-25-AutoGenBench/index.mdx
index c20db0836c86..7967958e1b1e 100644
--- a/website/blog/2024-01-25-AutoGenBench/index.mdx
+++ b/website/blog/2024-01-25-AutoGenBench/index.mdx
@@ -21,8 +21,8 @@ Today we are releasing AutoGenBench - a tool for evaluating AutoGen agents and w
AutoGenBench is a standalone command line tool, installable from PyPI, which handles downloading, configuring, running, and reporting supported benchmarks. AutoGenBench works best when run alongside Docker, since it uses Docker to isolate tests from one another.
-- See the [AutoGenBench README](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/tools/autogenbench/README.md) for information on installation and running benchmarks.
-- See the [AutoGenBench CONTRIBUTING guide](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) for information on developing or contributing benchmark datasets.
+- See the [AutoGenBench README](https://github.com/autogenhub/build-with-autogen/blob/main/samples/tools/autogenbench/README.md) for information on installation and running benchmarks.
+- See the [AutoGenBench CONTRIBUTING guide](https://github.com/autogenhub/build-with-autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) for information on developing or contributing benchmark datasets.
### Quick Start
@@ -42,7 +42,7 @@ autogenbench tabulate Results/human_eval_two_agents
## Introduction
-Measurement and evaluation are core components of every major AI or ML research project. The same is true for AutoGen. To this end, today we are releasing AutoGenBench, a standalone command line tool that we have been using to guide development of AutoGen. Conveniently, AutoGenBench handles: downloading, configuring, running, and reporting results of agents on various public benchmark datasets. In addition to reporting top-line numbers, each AutoGenBench run produces a comprehensive set of logs and telemetry that can be used for debugging, profiling, computing custom metrics, and as input to [AgentEval](https://autogen-ai.github.io/autogen/blog/2023/11/20/AgentEval). In the remainder of this blog post, we outline core design principles for AutoGenBench (key to understanding its operation); present a guide to installing and running AutoGenBench; outline a roadmap for evaluation; and conclude with an open call for contributions.
+Measurement and evaluation are core components of every major AI or ML research project. The same is true for AutoGen. To this end, today we are releasing AutoGenBench, a standalone command line tool that we have been using to guide development of AutoGen. Conveniently, AutoGenBench handles: downloading, configuring, running, and reporting results of agents on various public benchmark datasets. In addition to reporting top-line numbers, each AutoGenBench run produces a comprehensive set of logs and telemetry that can be used for debugging, profiling, computing custom metrics, and as input to [AgentEval](https://autogenhub.github.io/autogen/blog/2023/11/20/AgentEval). In the remainder of this blog post, we outline core design principles for AutoGenBench (key to understanding its operation); present a guide to installing and running AutoGenBench; outline a roadmap for evaluation; and conclude with an open call for contributions.
## Design Principles
@@ -52,7 +52,7 @@ AutoGenBench is designed around three core design principles. Knowing these prin
- **Isolation:** Agents interact with their worlds in both subtle and overt ways. For example an agent may install a python library or write a file to disk. This can lead to ordering effects that can impact future measurements. Consider, for example, comparing two agents on a common benchmark. One agent may appear more efficient than the other simply because it ran second, and benefitted from the hard work the first agent did in installing and debugging necessary Python libraries. To address this, AutoGenBench isolates each task in its own Docker container. This ensures that all runs start with the same initial conditions. (Docker is also a _much safer way to run agent-produced code_, in general.)
-- **Instrumentation:** While top-line metrics are great for comparing agents or models, we often want much more information about how the agents are performing, where they are getting stuck, and how they can be improved. We may also later think of new research questions that require computing a different set of metrics. To this end, AutoGenBench is designed to log everything, and to compute metrics from those logs. This ensures that one can always go back to the logs to answer questions about what happened, run profiling software, or feed the logs into tools like [AgentEval](https://autogen-ai.github.io/autogen/blog/2023/11/20/AgentEval).
+- **Instrumentation:** While top-line metrics are great for comparing agents or models, we often want much more information about how the agents are performing, where they are getting stuck, and how they can be improved. We may also later think of new research questions that require computing a different set of metrics. To this end, AutoGenBench is designed to log everything, and to compute metrics from those logs. This ensures that one can always go back to the logs to answer questions about what happened, run profiling software, or feed the logs into tools like [AgentEval](https://autogenhub.github.io/autogen/blog/2023/11/20/AgentEval).
## Installing and Running AutoGenBench
@@ -125,7 +125,7 @@ Please do not cite these values in academic work without first inspecting and ve
From this output we can see the results of the three separate repetitions of each task, and final summary statistics of each run. In this case, the results were generated via GPT-4 (as defined in the OAI_CONFIG_LIST that was provided), and used the `TwoAgents` template. **It is important to remember that AutoGenBench evaluates _specific_ end-to-end configurations of agents (as opposed to evaluating a model or cognitive framework more generally).**
-Finally, complete execution traces and logs can be found in the `Results` folder. See the [AutoGenBench README](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/tools/autogenbench/README.md) for more details about command-line options and output formats. Each of these commands also offers extensive in-line help via:
+Finally, complete execution traces and logs can be found in the `Results` folder. See the [AutoGenBench README](https://github.com/autogenhub/build-with-autogen/blob/main/samples/tools/autogenbench/README.md) for more details about command-line options and output formats. Each of these commands also offers extensive in-line help via:
- `autogenbench --help`
- `autogenbench clone --help`
@@ -145,4 +145,4 @@ For an up to date tracking of our work items on this project, please see [AutoGe
## Call for Participation
-Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel!
+Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent and has much room for improvement. New benchmarks are constantly being published and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/autogenhub/build-with-autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel!
diff --git a/website/blog/2024-02-02-AutoAnny/index.mdx b/website/blog/2024-02-02-AutoAnny/index.mdx
index 7b59761f0b8a..860331a9e5f0 100644
--- a/website/blog/2024-02-02-AutoAnny/index.mdx
+++ b/website/blog/2024-02-02-AutoAnny/index.mdx
@@ -16,7 +16,7 @@ import AutoAnnyLogo from './img/AutoAnnyLogo.jpg';
## TL;DR
We are adding a new sample app called Anny-- a simple Discord bot powered
-by AutoGen that's intended to assist AutoGen Devs. See [`samples/apps/auto-anny`](https://github.com/autogen-ai/build-with-autogen/tree/main/samples/apps/auto-anny) for details.
+by AutoGen that's intended to assist AutoGen Devs. See [`samples/apps/auto-anny`](https://github.com/autogenhub/build-with-autogen/tree/main/samples/apps/auto-anny) for details.
## Introduction
@@ -41,7 +41,7 @@ The current version of Anny is pretty simple -- it uses the Discord API and Auto
For example, it supports commands like `/heyanny help` for command listing, `/heyanny ghstatus` for
GitHub activity summary, `/heyanny ghgrowth` for GitHub repo growth indicators, and `/heyanny ghunattended` for listing unattended issues and PRs. Most of these commands use multiple AutoGen agents to accomplish these tasks.
-To use Anny, please follow instructions in [`samples/apps/auto-anny`](https://github.com/autogen-ai/build-with-autogen/tree/main/samples/apps/auto-anny).
+To use Anny, please follow instructions in [`samples/apps/auto-anny`](https://github.com/autogenhub/build-with-autogen/tree/main/samples/apps/auto-anny).
## It's Not Just for AutoGen
If you're an open-source developer managing your own project, you can probably relate to our challenges. We invite you to check out Anny and contribute to its development and roadmap.
diff --git a/website/blog/2024-02-11-FSM-GroupChat/index.mdx b/website/blog/2024-02-11-FSM-GroupChat/index.mdx
index d96d7d993c2b..7ab50021bbd5 100644
--- a/website/blog/2024-02-11-FSM-GroupChat/index.mdx
+++ b/website/blog/2024-02-11-FSM-GroupChat/index.mdx
@@ -285,4 +285,4 @@ pip install autogen[graph]
```
## Notebook examples
-More examples can be found in the [notebook](https://autogen-ai.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine/). The notebook includes more examples of possible transition paths such as (1) hub and spoke, (2) sequential team operations, and (3) think aloud and debate. It also uses the function `visualize_speaker_transitions_dict` from `autogen.graph_utils` to visualize the various graphs.
+More examples can be found in the [notebook](https://autogenhub.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine/). The notebook includes more examples of possible transition paths such as (1) hub and spoke, (2) sequential team operations, and (3) think aloud and debate. It also uses the function `visualize_speaker_transitions_dict` from `autogen.graph_utils` to visualize the various graphs.
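
As a hedged illustration of the transition constraints the notebook visualizes, a speaker-transition graph can be sketched as a plain dict mapping each agent to its allowed successors. This is only a sketch: the real `GroupChat` parameter takes agent objects rather than strings, and the agent names here are invented for illustration.

```python
# Hypothetical sketch of an FSM speaker-transition graph, in the
# dict-of-lists shape that GroupChat's speaker-transition parameter uses
# (real AutoGen keys/values are Agent objects; strings are used here
# purely for illustration).

planner, engineer, critic = "Planner", "Engineer", "Critic"

# Hub-and-spoke: the Planner may hand off to anyone; others report back.
allowed_transitions = {
    planner: [engineer, critic],
    engineer: [planner],
    critic: [planner],
}

def next_speakers(current: str) -> list[str]:
    """Return which agents are allowed to speak after `current`."""
    return allowed_transitions.get(current, [])

print(next_speakers(planner))  # agents permitted to follow the Planner
```

The same dict shape also expresses the other patterns mentioned above (sequential teams, debate) by changing which successors each key lists.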
diff --git a/website/blog/2024-05-24-Agent/index.mdx b/website/blog/2024-05-24-Agent/index.mdx
index 3aaaca23df84..0e063ea44804 100644
--- a/website/blog/2024-05-24-Agent/index.mdx
+++ b/website/blog/2024-05-24-Agent/index.mdx
@@ -143,7 +143,7 @@ better with low cost. [EcoAssistant](/blog/2023/11/09/EcoAssistant) is a good ex
There are certainly tradeoffs to make. The large design space of multi-agents offers these tradeoffs and opens up new opportunities for optimization.
-> Over a year since the debut of Ask AT&T, the generative AI platform to which we’ve onboarded over 80,000 users, AT&T has been enhancing its capabilities by incorporating 'AI Agents'. These agents, powered by the Autogen framework pioneered by Microsoft (https://autogen-ai.github.io/autogen/blog/2023/12/01/AutoGenStudio/), are designed to tackle complicated workflows and tasks that traditional language models find challenging. To drive collaboration, AT&T is contributing back to the open-source project by introducing features that facilitate enhanced security and role-based access for various projects and data.
+> Over a year since the debut of Ask AT&T, the generative AI platform to which we’ve onboarded over 80,000 users, AT&T has been enhancing its capabilities by incorporating 'AI Agents'. These agents, powered by the Autogen framework pioneered by Microsoft (https://autogenhub.github.io/autogen/blog/2023/12/01/AutoGenStudio/), are designed to tackle complicated workflows and tasks that traditional language models find challenging. To drive collaboration, AT&T is contributing back to the open-source project by introducing features that facilitate enhanced security and role-based access for various projects and data.
>
> > Andy Markus, Chief Data Officer at AT&T
diff --git a/website/blog/2024-06-21-AgentEval/index.mdx b/website/blog/2024-06-21-AgentEval/index.mdx
index 12babeec3e63..ce2ee4192951 100644
--- a/website/blog/2024-06-21-AgentEval/index.mdx
+++ b/website/blog/2024-06-21-AgentEval/index.mdx
@@ -15,13 +15,13 @@ tags: [LLM, GPT, evaluation, task utility]
TL;DR:
* As a developer, how can you assess the utility and effectiveness of an LLM-powered application in helping end users with their tasks?
-* To shed light on the question above, we previously introduced [`AgentEval`](https://autogen-ai.github.io/autogen/blog/2023/11/20/AgentEval/) — a framework to assess the multi-dimensional utility of any LLM-powered application crafted to assist users in specific tasks. We have now embedded it as part of the AutoGen library to ease developer adoption.
+* To shed light on the question above, we previously introduced [`AgentEval`](https://autogenhub.github.io/autogen/blog/2023/11/20/AgentEval/) — a framework to assess the multi-dimensional utility of any LLM-powered application crafted to assist users in specific tasks. We have now embedded it as part of the AutoGen library to ease developer adoption.
* Here, we introduce an updated version of AgentEval that includes a verification process to estimate the robustness of the QuantifierAgent. More details can be found in [this paper](https://arxiv.org/abs/2405.02178).
## Introduction
-Previously introduced [`AgentEval`](https://autogen-ai.github.io/autogen/blog/2023/11/20/AgentEval/) is a comprehensive framework designed to bridge the gap in assessing the utility of LLM-powered applications. It leverages recent advancements in LLMs to offer a scalable and cost-effective alternative to traditional human evaluations. The framework comprises three main agents: `CriticAgent`, `QuantifierAgent`, and `VerifierAgent`, each playing a crucial role in assessing the task utility of an application.
+Previously introduced [`AgentEval`](https://autogenhub.github.io/autogen/blog/2023/11/20/AgentEval/) is a comprehensive framework designed to bridge the gap in assessing the utility of LLM-powered applications. It leverages recent advancements in LLMs to offer a scalable and cost-effective alternative to traditional human evaluations. The framework comprises three main agents: `CriticAgent`, `QuantifierAgent`, and `VerifierAgent`, each playing a crucial role in assessing the task utility of an application.
**CriticAgent: Defining the Criteria**
diff --git a/website/blog/2024-06-24-AltModels-Classes/index.mdx b/website/blog/2024-06-24-AltModels-Classes/index.mdx
index 251faebbe716..29ec6a563418 100644
--- a/website/blog/2024-06-24-AltModels-Classes/index.mdx
+++ b/website/blog/2024-06-24-AltModels-Classes/index.mdx
@@ -48,7 +48,7 @@ AutoGen's ability to associate specific configurations to each agent means you c
The common requirements of text generation and function/tool calling are supported by these client classes.
-Multi-modal support, such as for image/audio/video, is an area of active development. The [Google Gemini](https://autogen-ai.github.io/autogen/docs/topics/non-openai-models/cloud-gemini) client class can be
+Multi-modal support, such as for image/audio/video, is an area of active development. The [Google Gemini](https://autogenhub.github.io/autogen/docs/topics/non-openai-models/cloud-gemini) client class can be
used to create a multimodal agent.
## Tips
@@ -58,9 +58,9 @@ Here are some tips when working with these client classes:
- **Most to least capable** - start with larger models and get your workflow working, then iteratively try smaller models.
- **Right model** - choose one that's suited to your task, whether it's coding, function calling, knowledge, or creative writing.
- **Agent names** - these cloud providers do not use the `name` field on a message, so be sure to use your agent's name in their `system_message` and `description` fields, as well as instructing the LLM to 'act as' them. This is particularly important for "auto" speaker selection in group chats as we need to guide the LLM to choose the next agent based on a name, so tweak `select_speaker_message_template`, `select_speaker_prompt_template`, and `select_speaker_auto_multiple_template` with more guidance.
-- **Context length** - as your conversation gets longer, models need to support larger context lengths, be mindful of what the model supports and consider using [Transform Messages](https://autogen-ai.github.io/autogen/docs/topics/handling_long_contexts/intro_to_transform_messages) to manage context size.
-- **Provider parameters** - providers have parameters you can set such as temperature, maximum tokens, top-k, top-p, and safety. See each client class in AutoGen's [API Reference](https://autogen-ai.github.io/autogen/docs/reference/oai/gemini) or [documentation](https://autogen-ai.github.io/autogen/docs/topics/non-openai-models/cloud-gemini) for details.
-- **Prompts** - prompt engineering is critical in guiding smaller LLMs to do what you need. [ConversableAgent](https://autogen-ai.github.io/autogen/docs/reference/agentchat/conversable_agent), [GroupChat](https://autogen-ai.github.io/autogen/docs/reference/agentchat/groupchat), [UserProxyAgent](https://autogen-ai.github.io/autogen/docs/reference/agentchat/user_proxy_agent), and [AssistantAgent](https://autogen-ai.github.io/autogen/docs/reference/agentchat/assistant_agent) all have customizable prompt attributes that you can tailor. Here are some prompting tips from [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview)([+Library](https://docs.anthropic.com/en/prompt-library/library)), [Mistral AI](https://docs.mistral.ai/guides/prompting_capabilities/), [Together.AI](https://docs.together.ai/docs/examples), and [Meta](https://llama.meta.com/docs/how-to-guides/prompting/).
+- **Context length** - as your conversation gets longer, models need to support larger context lengths. Be mindful of what the model supports and consider using [Transform Messages](https://autogenhub.github.io/autogen/docs/topics/handling_long_contexts/intro_to_transform_messages) to manage context size.
+- **Provider parameters** - providers have parameters you can set such as temperature, maximum tokens, top-k, top-p, and safety. See each client class in AutoGen's [API Reference](https://autogenhub.github.io/autogen/docs/reference/oai/gemini) or [documentation](https://autogenhub.github.io/autogen/docs/topics/non-openai-models/cloud-gemini) for details.
+- **Prompts** - prompt engineering is critical in guiding smaller LLMs to do what you need. [ConversableAgent](https://autogenhub.github.io/autogen/docs/reference/agentchat/conversable_agent), [GroupChat](https://autogenhub.github.io/autogen/docs/reference/agentchat/groupchat), [UserProxyAgent](https://autogenhub.github.io/autogen/docs/reference/agentchat/user_proxy_agent), and [AssistantAgent](https://autogenhub.github.io/autogen/docs/reference/agentchat/assistant_agent) all have customizable prompt attributes that you can tailor. Here are some prompting tips from [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview)([+Library](https://docs.anthropic.com/en/prompt-library/library)), [Mistral AI](https://docs.mistral.ai/guides/prompting_capabilities/), [Together.AI](https://docs.together.ai/docs/examples), and [Meta](https://llama.meta.com/docs/how-to-guides/prompting/).
- **Help!** - reach out on the AutoGen [Discord](https://discord.gg/pAbnFJrkgZ) or [log an issue](https://github.com/microsoft/autogen/issues) if you need help with or can help improve these client classes.
Now it's time to try them out.
@@ -109,7 +109,7 @@ Add your model configurations to the `OAI_CONFIG_LIST`. Ensure you specify the `
### Usage
-The `[config_list_from_json](https://autogen-ai.github.io/autogen/docs/reference/oai/openai_utils/#config_list_from_json)` function loads a list of configurations from an environment variable or a json file.
+The [`config_list_from_json`](https://autogenhub.github.io/autogen/docs/reference/oai/openai_utils/#config_list_from_json) function loads a list of configurations from an environment variable or a JSON file.
```py
import autogen
@@ -150,7 +150,7 @@ user_proxy.intiate_chat(assistant, message="Write python code to print Hello Wor
```
-**NOTE: To integrate this setup into GroupChat, follow the [tutorial](https://autogen-ai.github.io/autogen/docs/notebooks/agentchat_groupchat) with the same config as above.**
+**NOTE: To integrate this setup into GroupChat, follow the [tutorial](https://autogenhub.github.io/autogen/docs/notebooks/agentchat_groupchat) with the same config as above.**
## Function Calls
@@ -390,4 +390,4 @@ So we can see how Anthropic's Sonnet is able to suggest multiple tools in a sing
## More tips and tricks
-For an interesting chess game between Anthropic's Sonnet and Mistral's Mixtral, we've put together a sample notebook that highlights some of the tips and tricks for working with non-OpenAI LLMs. [See the notebook here](https://autogen-ai.github.io/autogen/docs/notebooks/agentchat_nested_chats_chess_altmodels).
+For an interesting chess game between Anthropic's Sonnet and Mistral's Mixtral, we've put together a sample notebook that highlights some of the tips and tricks for working with non-OpenAI LLMs. [See the notebook here](https://autogenhub.github.io/autogen/docs/notebooks/agentchat_nested_chats_chess_altmodels).
diff --git a/website/blog/2024-07-25-AgentOps/index.mdx b/website/blog/2024-07-25-AgentOps/index.mdx
index 49cad634428d..6be18abb2968 100644
--- a/website/blog/2024-07-25-AgentOps/index.mdx
+++ b/website/blog/2024-07-25-AgentOps/index.mdx
@@ -28,7 +28,7 @@ Agent observability, in its most basic form, allows you to monitor, troubleshoot
## Why AgentOps?
-AutoGen has simplified the process of building agents, yet we recognized the need for an easy-to-use, native tool for observability. We've previously discussed AgentOps, and now we're excited to partner with AgentOps as our official agent observability tool. Integrating AgentOps with AutoGen simplifies your workflow and boosts your agents' performance through clear observability, ensuring they operate optimally. For more details, check out our [AgentOps documentation](https://autogen-ai.github.io/autogen/docs/notebooks/agentchat_agentops/).
+AutoGen has simplified the process of building agents, yet we recognized the need for an easy-to-use, native tool for observability. We've previously discussed AgentOps, and now we're excited to partner with AgentOps as our official agent observability tool. Integrating AgentOps with AutoGen simplifies your workflow and boosts your agents' performance through clear observability, ensuring they operate optimally. For more details, check out our [AgentOps documentation](https://autogenhub.github.io/autogen/docs/notebooks/agentchat_agentops/).
diff --git a/website/docs/Examples.md b/website/docs/Examples.md
index af3d5cb10213..9f5f9424d8f0 100644
--- a/website/docs/Examples.md
+++ b/website/docs/Examples.md
@@ -40,7 +40,7 @@ Links to notebook examples:
- Automated Continual Learning from New Data - [View Notebook](/docs/notebooks/agentchat_stream)
-- [AutoAnny](https://github.com/autogen-ai/build-with-autogen/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen
+- [AutoAnny](https://github.com/autogenhub/build-with-autogen/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen
### Tool Use
@@ -59,7 +59,7 @@ Links to notebook examples:
### Human Involvement
-- Simple example in ChatGPT style [View example](https://github.com/autogen-ai/build-with-autogen/blob/main/samples/simple_chat.py)
+- Simple example in ChatGPT style [View example](https://github.com/autogenhub/build-with-autogen/blob/main/samples/simple_chat.py)
- Auto Code Generation, Execution, Debugging and **Human Feedback** - [View Notebook](/docs/notebooks/agentchat_human_feedback)
- Automated Task Solving with GPT-4 + **Multiple Human Users** - [View Notebook](/docs/notebooks/agentchat_two_users)
- Agent Chat with **Async Human Inputs** - [View Notebook](/docs/notebooks/async_human_input)
@@ -91,7 +91,7 @@ Links to notebook examples:
### Long Context Handling
-
+
- Long Context Handling as A Capability - [View Notebook](/docs/notebooks/agentchat_transform_messages)
### Evaluation and Assessment
@@ -111,7 +111,7 @@ Links to notebook examples:
### Utilities
-- API Unification - [View Documentation with Code Example](https://autogen-ai.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification)
+- API Unification - [View Documentation with Code Example](https://autogenhub.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification)
- Utility Functions to Help Managing API configurations effectively - [View Notebook](/docs/topics/llm_configuration)
### Inference Hyperparameters Tuning
@@ -120,5 +120,5 @@ AutoGen offers a cost-effective hyperparameter optimization technique [EcoOptiGe
Please find documentation about this feature [here](/docs/Use-Cases/enhanced_inference).
Links to notebook examples:
-* [Optimize for Code Generation](https://github.com/autogen-ai/autogen/blob/main/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/autogen-ai/autogen/blob/main/notebook/oai_completion.ipynb)
-* [Optimize for Math](https://github.com/autogen-ai/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/autogen-ai/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb)
+* [Optimize for Code Generation](https://github.com/autogenhub/autogen/blob/main/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/autogenhub/autogen/blob/main/notebook/oai_completion.ipynb)
+* [Optimize for Math](https://github.com/autogenhub/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/autogenhub/autogen/blob/main/notebook/oai_chatgpt_gpt4.ipynb)
diff --git a/website/docs/FAQ.mdx b/website/docs/FAQ.mdx
index 07b18e9d896d..842920c78f50 100644
--- a/website/docs/FAQ.mdx
+++ b/website/docs/FAQ.mdx
@@ -34,8 +34,8 @@ In version >=1, OpenAI renamed their `api_base` parameter to `base_url`. So for
Yes. You currently have two options:
-- Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check https://autogen-ai.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
-- You can supply your own custom model implementation and use it with Autogen. Please check https://autogen-ai.github.io/autogen/blog/2024/01/26/Custom-Models for more information.
+- Autogen can work with any API endpoint that complies with the OpenAI-compatible RESTful API - e.g., serving a local LLM via FastChat or LM Studio. Please check https://autogenhub.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
+- You can supply your own custom model implementation and use it with Autogen. Please check https://autogenhub.github.io/autogen/blog/2024/01/26/Custom-Models for more information.
## Handle Rate Limit Error and Timeout Error
@@ -52,9 +52,9 @@ When you call `initiate_chat` the conversation restarts by default. You can use
## `max_consecutive_auto_reply` vs `max_turn` vs `max_round`
-- [`max_consecutive_auto_reply`](https://autogen-ai.github.io/autogen/docs/reference/agentchat/conversable_agent#max_consecutive_auto_reply) the maximum number of consecutive auto replie (a reply from an agent without human input is considered an auto reply). It plays a role when `human_input_mode` is not "ALWAYS".
-- [`max_turns` in `ConversableAgent.initiate_chat`](https://autogen-ai.github.io/autogen/docs/reference/agentchat/conversable_agent#initiate_chat) limits the number of conversation turns between two conversable agents (without differentiating auto-reply and reply/input from human)
-- [`max_round` in GroupChat](https://autogen-ai.github.io/autogen/docs/reference/agentchat/groupchat#groupchat-objects) specifies the maximum number of rounds in a group chat session.
+- [`max_consecutive_auto_reply`](https://autogenhub.github.io/autogen/docs/reference/agentchat/conversable_agent#max_consecutive_auto_reply) specifies the maximum number of consecutive auto replies (a reply from an agent without human input is considered an auto reply). It plays a role when `human_input_mode` is not "ALWAYS".
+- [`max_turns` in `ConversableAgent.initiate_chat`](https://autogenhub.github.io/autogen/docs/reference/agentchat/conversable_agent#initiate_chat) limits the number of conversation turns between two conversable agents (without differentiating auto-reply and reply/input from human).
+- [`max_round` in GroupChat](https://autogenhub.github.io/autogen/docs/reference/agentchat/groupchat#groupchat-objects) specifies the maximum number of rounds in a group chat session.
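
As a minimal sketch of how a `max_turns`-style cap bounds an exchange (plain Python, not the library's code; one "turn" here is one round trip, matching the `initiate_chat` semantics, and regardless of whether a message came from auto-reply or a human):

```python
def run_chat(agent_a, agent_b, opening: str, max_turns: int = 3) -> list[str]:
    """Toy two-party loop: each turn is one round trip (a reply from
    each side), so the transcript is bounded by the turn cap alone."""
    transcript = [opening]
    message = opening
    for _ in range(max_turns):
        message = agent_b(message)   # B replies to the last message
        transcript.append(message)
        message = agent_a(message)   # A replies back, closing the turn
        transcript.append(message)
    return transcript

# Stand-in "agents": trivial functions mapping a message to a reply.
echo = lambda m: f"echo({m})"
upper = lambda m: m.upper()
log = run_chat(upper, echo, "hi", max_turns=2)
print(len(log))  # opening + 2 turns * 2 messages = 5
```

`max_consecutive_auto_reply` and `max_round` cap different quantities (per-agent auto replies and group-chat rounds, respectively), but the bounding pattern is analogous.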
## How do we decide what LLM is used for each agent? How many agents can be used? How do we decide how many agents in the group?
@@ -106,7 +106,7 @@ for each code-execution agent, or set `AUTOGEN_USE_DOCKER` to `False` as an
environment variable.
You can also develop your AutoGen application in a docker container.
-For example, when developing in [GitHub codespace](https://codespaces.new/autogen-ai/autogen?quickstart=1),
+For example, when developing in [GitHub codespace](https://codespaces.new/autogenhub/autogen?quickstart=1),
AutoGen runs in a docker container.
If you are not developing in GitHub Codespaces,
follow instructions [here](installation/Docker.md#option-1-install-and-run-autogen-in-docker)
@@ -159,7 +159,7 @@ Explanation: Per [this gist](https://gist.github.com/defulmere/8b9695e415a442710
(from [issue #478](https://github.com/microsoft/autogen/issues/478))
-See here https://autogen-ai.github.io/autogen/docs/reference/agentchat/conversable_agent/#register_reply
+See here https://autogenhub.github.io/autogen/docs/reference/agentchat/conversable_agent/#register_reply
For example, you can register a reply function that gets called when `generate_reply` is called for an agent.
@@ -188,11 +188,11 @@ In the above, we register a `print_messages` function that is called each time t
## How to get last message ?
-Refer to https://autogen-ai.github.io/autogen/docs/reference/agentchat/conversable_agent/#last_message
+Refer to https://autogenhub.github.io/autogen/docs/reference/agentchat/conversable_agent/#last_message
## How to get each agent message ?
-Please refer to https://autogen-ai.github.io/autogen/docs/reference/agentchat/conversable_agent#chat_messages
+Please refer to https://autogenhub.github.io/autogen/docs/reference/agentchat/conversable_agent#chat_messages
## When using autogen docker, is it always necessary to reinstall modules?
diff --git a/website/docs/Getting-Started.mdx b/website/docs/Getting-Started.mdx
index a66838c36024..5510b4f0443c 100644
--- a/website/docs/Getting-Started.mdx
+++ b/website/docs/Getting-Started.mdx
@@ -121,7 +121,7 @@ Learn more about configuring LLMs for agents [here](/docs/topics/llm_configurati
#### Multi-Agent Conversation Framework
Autogen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.
-By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/autogen-ai/autogen/blob/main/test/twoagent.py),
+By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/autogenhub/autogen/blob/main/test/twoagent.py),
The figure below shows an example conversation flow with AutoGen.
@@ -138,10 +138,10 @@ The figure below shows an example conversation flow with AutoGen.
- Follow on [Twitter](https://twitter.com/Chi_Wang_)
- See our [roadmaps](https://aka.ms/autogen-roadmap)
-If you like our project, please give it a [star](https://github.com/autogen-ai/autogen/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/contributor-guide/contributing).
+If you like our project, please give it a [star](https://github.com/autogenhub/autogen/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/contributor-guide/contributing).