\n",
"\n",
- ":fire: Heads-up: We have migrated [AutoGen](https://ag2labs.github.io/autogen/) into a dedicated [github repository](https://github.com/ag2labs/ag2). Alongside this move, we have also launched a dedicated [Discord](https://discord.gg/pAbnFJrkgZ) server and a [website](https://ag2labs.github.io/autogen/) for comprehensive documentation.\n",
+ ":fire: Heads-up: We have migrated [AutoGen](https://ag2ai.github.io/autogen/) into a dedicated [github repository](https://github.com/ag2ai/ag2). Alongside this move, we have also launched a dedicated [Discord](https://discord.gg/pAbnFJrkgZ) server and a [website](https://ag2ai.github.io/autogen/) for comprehensive documentation.\n",
"\n",
- ":fire: The automated multi-agent chat framework in [AutoGen](https://ag2labs.github.io/autogen/) is in preview from v2.0.0.\n",
+ ":fire: The automated multi-agent chat framework in [AutoGen](https://ag2ai.github.io/autogen/) is in preview from v2.0.0.\n",
"\n",
":fire: FLAML is highlighted in OpenAI's [cookbook](https://github.com/openai/openai-cookbook#related-resources-from-around-the-web).\n",
"\n",
- ":fire: [autogen](https://ag2labs.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).\n",
+ ":fire: [autogen](https://ag2ai.github.io/autogen/) is released with support for ChatGPT and GPT-4, based on [Cost-Effective Hyperparameter Optimization for Large Language Model Generation Inference](https://arxiv.org/abs/2303.04673).\n",
"\n",
":fire: FLAML supports Code-First AutoML & Tuning – Private Preview in [Microsoft Fabric Data Science](https://learn.microsoft.com/en-us/fabric/data-science/).\n",
"\n",
@@ -308,7 +308,7 @@
"pip install flaml\n",
"```\n",
"\n",
- "Minimal dependencies are installed without extra options. You can install extra options based on the feature you need. For example, use the following to install the dependencies needed by the [`autogen`](https://ag2labs.github.io/autogen/) package.\n",
+ "Minimal dependencies are installed without extra options. You can install extra options based on the feature you need. For example, use the following to install the dependencies needed by the [`autogen`](https://ag2ai.github.io/autogen/) package.\n",
"\n",
"```bash\n",
"pip install \"flaml[autogen]\"\n",
@@ -319,7 +319,7 @@
"\n",
"## Quickstart\n",
"\n",
- "- (New) The [autogen](https://ag2labs.github.io/autogen/) package enables the next-gen GPT-X applications with a generic multi-agent conversation framework.\n",
+ "- (New) The [autogen](https://ag2ai.github.io/autogen/) package enables the next-gen GPT-X applications with a generic multi-agent conversation framework.\n",
" It offers customizable and conversable agents which integrate LLMs, tools and human.\n",
" By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For example,\n",
"\n",
diff --git a/notebook/agentchat_agentoptimizer.ipynb b/notebook/agentchat_agentoptimizer.ipynb
index 56fbc69bf3..3e81b4ac47 100644
--- a/notebook/agentchat_agentoptimizer.ipynb
+++ b/notebook/agentchat_agentoptimizer.ipynb
@@ -7,7 +7,7 @@
"# AgentOptimizer: An Agentic Way to Train Your LLM Agent\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In traditional ML pipeline, we train a model by updating its parameter according to the loss on the training set, while in the era of LLM agents, how should we train an agent? Here, we take an initial step towards the agent training. Inspired by the [function calling](https://platform.openai.com/docs/guides/function-calling) capabilities provided by OpenAI, we draw an analogy between model parameters and agent functions/skills, and update agent’s functions/skills based on its historical performance on the training set. As an agentic way of training an agent, our approach help enhance the agents’ abilities without requiring access to the LLMs parameters.\n",
"\n",
@@ -16,7 +16,7 @@
"Specifically, given a set of training data, AgentOptimizer would iteratively prompt the LLM to optimize the existing function list of the AssistantAgent and UserProxyAgent with code implementation if necessary. It also includes two strategies, roll-back, and early-stop, to streamline the training process.\n",
"In the example scenario, we test the proposed AgentOptimizer in solving problems from the [MATH dataset](https://github.com/hendrycks/math). \n",
"\n",
- "![AgentOptimizer](https://media.githubusercontent.com/media/ag2labs/ag2/main/website/blog/2023-12-23-AgentOptimizer/img/agentoptimizer.png)\n",
+ "![AgentOptimizer](https://media.githubusercontent.com/media/ag2ai/ag2/main/website/blog/2023-12-23-AgentOptimizer/img/agentoptimizer.png)\n",
"\n",
"More information could be found in the [paper](https://arxiv.org/abs/2402.11359).\n",
"\n",
@@ -53,7 +53,7 @@
"source": [
"# MathUserProxy with function_call\n",
"\n",
- "This agent is a customized MathUserProxy inherits from its [parent class](https://github.com/ag2labs/ag2/blob/main/autogen/agentchat/contrib/math_user_proxy_agent.py).\n",
+ "This agent is a customized MathUserProxy inherits from its [parent class](https://github.com/ag2ai/ag2/blob/main/autogen/agentchat/contrib/math_user_proxy_agent.py).\n",
"\n",
"It supports using both function_call and python to solve math problems.\n"
]
diff --git a/notebook/agentchat_cost_token_tracking.ipynb b/notebook/agentchat_cost_token_tracking.ipynb
index a19bf8073e..cd4ddac30a 100644
--- a/notebook/agentchat_cost_token_tracking.ipynb
+++ b/notebook/agentchat_cost_token_tracking.ipynb
@@ -53,7 +53,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
]
},
{
@@ -98,7 +98,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2labs/ag2/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2ai/ag2/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
]
},
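To make the `config_list_from_json` cells above concrete, here is a minimal stdlib-only sketch of the JSON shape the helper expects and of the model filtering it performs. The model names and file name are illustrative placeholders, and the by-hand filter only mirrors what the real helper does.

```python
import json
import os
import tempfile

# A sketch of the JSON the "OAI_CONFIG_LIST" env var or file holds:
# a list of per-model configurations. Keys/values here are examples only.
configs = [
    {"model": "gpt-4", "api_key": "sk-..."},
    {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
]

path = os.path.join(tempfile.mkdtemp(), "OAI_CONFIG_LIST")
with open(path, "w") as f:
    json.dump(configs, f)

# `config_list_from_json` can filter by keys such as "model";
# the same effect, done by hand with stdlib json:
with open(path) as f:
    loaded = json.load(f)
gpt4_only = [c for c in loaded if c["model"] in {"gpt-4"}]
print([c["model"] for c in gpt4_only])  # → ['gpt-4']
```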
{
diff --git a/notebook/agentchat_custom_model.ipynb b/notebook/agentchat_custom_model.ipynb
index f8d0f590cc..3f13efef23 100644
--- a/notebook/agentchat_custom_model.ipynb
+++ b/notebook/agentchat_custom_model.ipynb
@@ -210,7 +210,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
"\n",
"It first looks for an environment variable of a specified name (\"OAI_CONFIG_LIST\" in this example), which needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by models (you can filter by other keys as well).\n",
"\n",
diff --git a/notebook/agentchat_databricks_dbrx.ipynb b/notebook/agentchat_databricks_dbrx.ipynb
index 457394197c..63a57d8130 100644
--- a/notebook/agentchat_databricks_dbrx.ipynb
+++ b/notebook/agentchat_databricks_dbrx.ipynb
@@ -10,7 +10,7 @@
"\n",
"In March 2024, Databricks released [DBRX](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm), a general-purpose LLM that sets a new standard for open LLMs. While available as an open-source model on Hugging Face ([databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct/tree/main) and [databricks/dbrx-base](https://huggingface.co/databricks/dbrx-base) ), customers of Databricks can also tap into the [Foundation Model APIs](https://docs.databricks.com/en/machine-learning/model-serving/score-foundation-models.html#query-a-chat-completion-model), which make DBRX available through an OpenAI-compatible, autoscaling REST API.\n",
"\n",
- "[Autogen](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat) is becoming a popular standard for agent creation. Built to support any \"LLM as a service\" that implements the OpenAI SDK, it can easily be extended to integrate with powerful open source models. \n",
+ "[Autogen](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat) is becoming a popular standard for agent creation. Built to support any \"LLM as a service\" that implements the OpenAI SDK, it can easily be extended to integrate with powerful open source models. \n",
"\n",
"This notebook will demonstrate a few basic examples of Autogen with DBRX, including the use of `AssistantAgent`, `UserProxyAgent`, and `ConversableAgent`. These demos are not intended to be exhaustive - feel free to use them as a base to build upon!\n",
"\n",
@@ -76,7 +76,7 @@
"source": [
"## Setup DBRX config list\n",
"\n",
- "See Autogen docs for more inforation on the use of `config_list`: [LLM Configuration](https://ag2labs.github.io/autogen/docs/topics/llm_configuration#why-is-it-a-list)"
+ "See Autogen docs for more inforation on the use of `config_list`: [LLM Configuration](https://ag2ai.github.io/autogen/docs/topics/llm_configuration#why-is-it-a-list)"
]
},
{
@@ -116,7 +116,7 @@
"source": [
"## Hello World Example\n",
"\n",
- "Our first example will be with a simple `UserProxyAgent` asking a question to an `AssistantAgent`. This is based on the tutorial demo [here](https://ag2labs.github.io/autogen/docs/tutorial/introduction).\n",
+ "Our first example will be with a simple `UserProxyAgent` asking a question to an `AssistantAgent`. This is based on the tutorial demo [here](https://ag2ai.github.io/autogen/docs/tutorial/introduction).\n",
"\n",
"After sending the question and seeing a response, you can type `exit` to end the chat or continue to converse."
]
@@ -207,7 +207,7 @@
"source": [
"## Simple Coding Agent\n",
"\n",
- "In this example, we will implement a \"coding agent\" that can execute code. You will see how this code is run alongside your notebook in your current workspace, taking advantage of the performance benefits of Databricks clusters. This is based off the demo [here](https://ag2labs.github.io/autogen/docs/topics/non-openai-models/cloud-mistralai/).\n",
+ "In this example, we will implement a \"coding agent\" that can execute code. You will see how this code is run alongside your notebook in your current workspace, taking advantage of the performance benefits of Databricks clusters. This is based off the demo [here](https://ag2ai.github.io/autogen/docs/topics/non-openai-models/cloud-mistralai/).\n",
"\n",
"First, set up a directory: "
]
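Creating the working directory and describing it to the executing agent can be sketched as follows; the directory name and the `code_execution_config` keys shown are assumptions modeled on common AutoGen usage, not a definitive setup.

```python
import os
import tempfile

# Create a scratch directory for generated code files.
workdir = os.path.join(tempfile.gettempdir(), "coding")
os.makedirs(workdir, exist_ok=True)

# A dict of this shape is typically passed to the code-executing agent.
code_execution_config = {
    "work_dir": workdir,
    "use_docker": False,  # run locally; set True if Docker is available
}
print(os.path.isdir(workdir))  # → True
```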
@@ -430,7 +430,7 @@
"source": [
"## Conversable Bots\n",
"\n",
- "We can also implement the [two-agent chat pattern](https://ag2labs.github.io/autogen/docs/tutorial/conversation-patterns/#two-agent-chat-and-chat-result) using DBRX to \"talk to itself\" in a teacher/student exchange:"
+ "We can also implement the [two-agent chat pattern](https://ag2ai.github.io/autogen/docs/tutorial/conversation-patterns/#two-agent-chat-and-chat-result) using DBRX to \"talk to itself\" in a teacher/student exchange:"
]
},
{
@@ -498,7 +498,7 @@
"\n",
"It can be useful to display chat logs to the notebook for debugging, and then persist those logs to a Delta table. The following section demonstrates how to extend the default AutoGen logging libraries.\n",
"\n",
- "First, we will implement a Python `class` that extends the capabilities of `autogen.runtime_logging` [docs](https://ag2labs.github.io/autogen/docs/notebooks/agentchat_logging):"
+ "First, we will implement a Python `class` that extends the capabilities of `autogen.runtime_logging` [docs](https://ag2ai.github.io/autogen/docs/notebooks/agentchat_logging):"
]
},
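The core of such an extension is turning chat events into uniform rows that a Delta table can accept. The pure-Python sketch below illustrates only that flattening step; the field names are illustrative and not AutoGen's actual log schema.

```python
import json
from datetime import datetime, timezone

def to_rows(events):
    """Flatten chat-event dicts into uniform rows for tabular storage.

    Hypothetical helper: keys below are assumptions, not AutoGen's schema.
    """
    rows = []
    for e in events:
        rows.append({
            "ts": e.get("timestamp") or datetime.now(timezone.utc).isoformat(),
            "sender": e.get("sender", "unknown"),
            "content": json.dumps(e.get("message", "")),  # store as JSON text
        })
    return rows

rows = to_rows([{"sender": "teacher", "message": "What is 2+2?"}])
print(rows[0]["sender"])  # → teacher
```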
{
diff --git a/notebook/agentchat_function_call.ipynb b/notebook/agentchat_function_call.ipynb
index 581147264b..394105a852 100644
--- a/notebook/agentchat_function_call.ipynb
+++ b/notebook/agentchat_function_call.ipynb
@@ -8,7 +8,7 @@
"source": [
"# Auto Generated Agent Chat: Task Solving with Provided Tools as Functions\n",
"\n",
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
"\n",
@@ -38,7 +38,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_function_call_async.ipynb b/notebook/agentchat_function_call_async.ipynb
index 65a2627692..9dbf3b0e2e 100644
--- a/notebook/agentchat_function_call_async.ipynb
+++ b/notebook/agentchat_function_call_async.ipynb
@@ -14,7 +14,7 @@
"id": "9a71fa36",
"metadata": {},
"source": [
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
"\n",
diff --git a/notebook/agentchat_function_call_currency_calculator.ipynb b/notebook/agentchat_function_call_currency_calculator.ipynb
index 33f008f811..06b922cec2 100644
--- a/notebook/agentchat_function_call_currency_calculator.ipynb
+++ b/notebook/agentchat_function_call_currency_calculator.ipynb
@@ -15,7 +15,7 @@
"id": "9a71fa36",
"metadata": {},
"source": [
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
"\n",
@@ -45,7 +45,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat.ipynb b/notebook/agentchat_groupchat.ipynb
index ad8ade28c8..9036964a69 100644
--- a/notebook/agentchat_groupchat.ipynb
+++ b/notebook/agentchat_groupchat.ipynb
@@ -8,9 +8,9 @@
"# Group Chat\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
- "This notebook is modified based on https://github.com/ag2labs/FLAML/blob/4ea686af5c3e8ff24d9076a7a626c8b28ab5b1d7/notebook/autogen_multiagent_roleplay_chat.ipynb\n",
+ "This notebook is modified based on https://github.com/ag2ai/FLAML/blob/4ea686af5c3e8ff24d9076a7a626c8b28ab5b1d7/notebook/autogen_multiagent_roleplay_chat.ipynb\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
@@ -31,7 +31,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat_RAG.ipynb b/notebook/agentchat_groupchat_RAG.ipynb
index 12bc95170a..85e02c9ec1 100644
--- a/notebook/agentchat_groupchat_RAG.ipynb
+++ b/notebook/agentchat_groupchat_RAG.ipynb
@@ -8,7 +8,7 @@
"# Group Chat with Retrieval Augmented Generation\n",
"\n",
"AutoGen supports conversable agents powered by LLMs, tools, or humans, performing tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
@@ -30,7 +30,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat_customized.ipynb b/notebook/agentchat_groupchat_customized.ipynb
index e94412b66a..dc8b3c51dd 100644
--- a/notebook/agentchat_groupchat_customized.ipynb
+++ b/notebook/agentchat_groupchat_customized.ipynb
@@ -8,7 +8,7 @@
"# Group Chat with Customized Speaker Selection Method\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to pass a cumstomized agent selection method to GroupChat. The customized function looks like this:\n",
"\n",
@@ -56,7 +56,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat_finite_state_machine.ipynb b/notebook/agentchat_groupchat_finite_state_machine.ipynb
index e6f0e56bcd..b6bd300bf3 100644
--- a/notebook/agentchat_groupchat_finite_state_machine.ipynb
+++ b/notebook/agentchat_groupchat_finite_state_machine.ipynb
@@ -8,7 +8,7 @@
"# FSM - User can input speaker transition constraints\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"This notebook is about using graphs to define the transition paths amongst speakers.\n",
"\n",
diff --git a/notebook/agentchat_groupchat_research.ipynb b/notebook/agentchat_groupchat_research.ipynb
index 45c972802b..e484a2e419 100644
--- a/notebook/agentchat_groupchat_research.ipynb
+++ b/notebook/agentchat_groupchat_research.ipynb
@@ -8,7 +8,7 @@
"# Perform Research with Multi-Agent Group Chat\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"## Requirements\n",
"\n",
@@ -31,7 +31,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_groupchat_stateflow.ipynb b/notebook/agentchat_groupchat_stateflow.ipynb
index db856471aa..77efb476cb 100644
--- a/notebook/agentchat_groupchat_stateflow.ipynb
+++ b/notebook/agentchat_groupchat_stateflow.ipynb
@@ -29,7 +29,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
@@ -62,7 +62,7 @@
"## A workflow for research\n",
"\n",
"\n",
diff --git a/notebook/agentchat_groupchat_vis.ipynb b/notebook/agentchat_groupchat_vis.ipynb
index 55bd07ced4..a26768ee12 100644
--- a/notebook/agentchat_groupchat_vis.ipynb
+++ b/notebook/agentchat_groupchat_vis.ipynb
@@ -8,7 +8,7 @@
"# Group Chat with Coder and Visualization Critic\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
@@ -29,7 +29,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_human_feedback.ipynb b/notebook/agentchat_human_feedback.ipynb
index e7db04a887..af058a8eb8 100644
--- a/notebook/agentchat_human_feedback.ipynb
+++ b/notebook/agentchat_human_feedback.ipynb
@@ -12,7 +12,7 @@
"# Auto Generated Agent Chat: Task Solving with Code Generation, Execution, Debugging & Human Feedback\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to solve a challenging math problem with human feedback. Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `AssistantAgent`. By setting `human_input_mode` properly, the `UserProxyAgent` can also prompt the user for feedback to `AssistantAgent`. For example, when `human_input_mode` is set to \"ALWAYS\", the `UserProxyAgent` will always prompt the user for feedback. When user feedback is provided, the `UserProxyAgent` will directly pass the feedback to `AssistantAgent`. When no user feedback is provided, the `UserProxyAgent` will execute the code written by `AssistantAgent` and return the execution results (success or failure and corresponding outputs) to `AssistantAgent`.\n",
"\n",
@@ -47,7 +47,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_inception_function.ipynb b/notebook/agentchat_inception_function.ipynb
index a9b357a01a..e2c6756e42 100644
--- a/notebook/agentchat_inception_function.ipynb
+++ b/notebook/agentchat_inception_function.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Auto Generated Agent Chat: Function Inception\n",
"\n",
- "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to give them the ability to auto-extend the list of functions the model may call. Functions need to be registered to `UserProxyAgent`, which will be responsible for executing any function calls made by `AssistantAgent`. The assistant also needs to know the signature of functions that may be called. A special `define_function` function is registered, which registers a new function in `UserProxyAgent` and updates the configuration of the assistant.\n",
"\n",
diff --git a/notebook/agentchat_langchain.ipynb b/notebook/agentchat_langchain.ipynb
index 41546a5722..0da9b36088 100644
--- a/notebook/agentchat_langchain.ipynb
+++ b/notebook/agentchat_langchain.ipynb
@@ -10,7 +10,7 @@
"source": [
"# Auto Generated Agent Chat: Task Solving with Langchain Provided Tools as Functions\n",
"\n",
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participants through multi-agent conversation. Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participants through multi-agent conversation. Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
 "In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls using the function-calling feature of OpenAI models (model version 0613) with a set of Langchain-provided tools and toolkits, demonstrating how to leverage the 35+ tools available. \n",
"A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
@@ -49,7 +49,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_models`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
+ "The [`config_list_from_models`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_models) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints for the provided list of models. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
"\n",
"- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
"- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
@@ -128,7 +128,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2labs/ag2/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2ai/ag2/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
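The config-loading behavior described in the hunks above (try the environment variable first, fall back to a JSON file of the same name, then filter by `filter_dict`) can be sketched as follows. This is a hypothetical simplification, not AutoGen's actual `config_list_from_json` implementation; the keys shown are placeholders.

```python
import json
import os

# Hypothetical sketch of the lookup order described above: environment
# variable first, then a JSON file with the same name, then filtering.
def load_config_list(name, filter_dict=None):
    raw = os.environ.get(name)
    if raw is None and os.path.exists(name):
        with open(name) as f:
            raw = f.read()
    configs = json.loads(raw) if raw else []
    if filter_dict:
        configs = [
            c for c in configs
            if all(c.get(k) in v for k, v in filter_dict.items())
        ]
    return configs

# Placeholder configs; real API keys would come from your environment.
os.environ["OAI_CONFIG_LIST"] = json.dumps(
    [{"model": "gpt-4", "api_key": "sk-..."},
     {"model": "gpt-35-turbo", "api_key": "sk-..."}]
)
print(load_config_list("OAI_CONFIG_LIST", {"model": ["gpt-4"]}))
```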
diff --git a/notebook/agentchat_microsoft_fabric.ipynb b/notebook/agentchat_microsoft_fabric.ipynb
index 3a7af51816..6fa0d5aac3 100644
--- a/notebook/agentchat_microsoft_fabric.ipynb
+++ b/notebook/agentchat_microsoft_fabric.ipynb
@@ -13,8 +13,8 @@
"source": [
"## Use AutoGen in Microsoft Fabric\n",
"\n",
- "[AutoGen](https://github.com/ag2labs/ag2) offers conversable LLM agents, which can be used to solve various tasks with human or automatic feedback, including tasks that require using tools via code.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "[AutoGen](https://github.com/ag2ai/ag2) offers conversable LLM agents, which can be used to solve various tasks with human or automatic feedback, including tasks that require using tools via code.\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"[Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligence. It offers a comprehensive suite of services, including data lake, data engineering, and data integration, all in one place. Its pre-built AI models include GPT-x models such as `gpt-4o`, `gpt-4-turbo`, `gpt-4`, `gpt-4-8k`, `gpt-4-32k`, `gpt-35-turbo`, `gpt-35-turbo-16k` and `gpt-35-turbo-instruct`, etc. It's important to note that the Azure Open AI service is not supported on trial SKUs and only paid SKUs (F64 or higher, or P1 or higher) are supported.\n",
"\n",
@@ -282,7 +282,7 @@
"http_client = get_openai_httpx_sync_client() # http_client is needed for openai>1\n",
"http_client.__deepcopy__ = types.MethodType(\n",
" lambda self, memo: self, http_client\n",
- ") # https://ag2labs.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy\\n\",\n",
+ ") # https://ag2ai.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy\\n\",\n",
"\n",
"config_list = [\n",
" {\n",
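The `__deepcopy__` patch in the hunk above works because `copy.deepcopy` consults the instance for a `__deepcopy__` attribute before falling back to its generic copier. A minimal standalone sketch, using a stand-in class instead of a real `httpx` client:

```python
import copy
import types

class HTTPClient:
    """Stand-in for an httpx client that must not be deep-copied."""

# llm_config gets deep-copied inside AutoGen, but a live network client
# cannot be duplicated, so make deepcopy return the same instance.
client = HTTPClient()
client.__deepcopy__ = types.MethodType(lambda self, memo: self, client)

assert copy.deepcopy(client) is client
# The patched client also survives inside nested structures:
cfg = {"model": "gpt-4", "http_client": client}
assert copy.deepcopy(cfg)["http_client"] is client
```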
@@ -447,7 +447,7 @@
"http_client = get_openai_httpx_sync_client() # http_client is needed for openai>1\n",
"http_client.__deepcopy__ = types.MethodType(\n",
" lambda self, memo: self, http_client\n",
- ") # https://ag2labs.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy\n",
+ ") # https://ag2ai.github.io/autogen/docs/topics/llm_configuration#adding-http-client-in-llm_config-for-proxy\n",
"\n",
"config_list = [\n",
" {\n",
@@ -708,7 +708,7 @@
"### Example 2\n",
"How to use `AssistantAgent` and `RetrieveUserProxyAgent` to do Retrieval Augmented Generation (RAG) for QA and Code Generation.\n",
"\n",
- "Check out this [blog](https://ag2labs.github.io/autogen/blog/2023/10/18/RetrieveChat) for more details."
+ "Check out this [blog](https://ag2ai.github.io/autogen/blog/2023/10/18/RetrieveChat) for more details."
]
},
{
@@ -3229,7 +3229,7 @@
"### Example 3\n",
"How to use `MultimodalConversableAgent` to chat with images.\n",
"\n",
- "Check out this [blog](https://ag2labs.github.io/autogen/blog/2023/11/06/LMM-Agent) for more details."
+ "Check out this [blog](https://ag2ai.github.io/autogen/blog/2023/11/06/LMM-Agent) for more details."
]
},
{
diff --git a/notebook/agentchat_oai_assistant_groupchat.ipynb b/notebook/agentchat_oai_assistant_groupchat.ipynb
index d7ba4809a3..ced5e4c50c 100644
--- a/notebook/agentchat_oai_assistant_groupchat.ipynb
+++ b/notebook/agentchat_oai_assistant_groupchat.ipynb
@@ -7,7 +7,7 @@
"# Auto Generated Agent Chat: Group Chat with GPTAssistantAgent\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to get multiple `GPTAssistantAgent` converse through group chat.\n",
"\n",
@@ -32,7 +32,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
@@ -139,12 +139,12 @@
"text": [
"\u001b[33mUser_proxy\u001b[0m (to chat_manager):\n",
"\n",
- "Get the number of issues and pull requests for the repository 'ag2labs/ag2' over the past three weeks and offer analyzes to the data. You should print the data in csv format grouped by weeks.\n",
+ "Get the number of issues and pull requests for the repository 'ag2ai/ag2' over the past three weeks and offer an analysis of the data. You should print the data in CSV format grouped by week.\n",
"\n",
"--------------------------------------------------------------------------------\n",
"\u001b[33mCoder\u001b[0m (to chat_manager):\n",
"\n",
- "To gather the number of issues and pull requests for the repository 'ag2labs/ag2' over the past three weeks and to offer an analysis of the data, we'll need to modify the previous script.\n",
+ "To gather the number of issues and pull requests for the repository 'ag2ai/ag2' over the past three weeks and to offer an analysis of the data, we'll need to modify the previous script.\n",
"\n",
"We will enhance the script to gather data from the past three weeks, separated by each week, and then output the data in CSV format, grouped by the week during which the issues and pull requests were created. This will require us to make multiple API calls for each week and aggregate the data accordingly.\n",
"\n",
@@ -467,7 +467,7 @@
"source": [
"user_proxy.initiate_chat(\n",
" manager,\n",
- " message=\"Get the number of issues and pull requests for the repository 'ag2labs/ag2' over the past three weeks and offer analysis to the data. You should print the data in csv format grouped by weeks.\",\n",
+ "    message=\"Get the number of issues and pull requests for the repository 'ag2ai/ag2' over the past three weeks and offer an analysis of the data. You should print the data in CSV format grouped by week.\",\n",
")\n",
"# type exit to terminate the chat"
]
diff --git a/notebook/agentchat_oai_assistant_retrieval.ipynb b/notebook/agentchat_oai_assistant_retrieval.ipynb
index ef94570182..61f05f0002 100644
--- a/notebook/agentchat_oai_assistant_retrieval.ipynb
+++ b/notebook/agentchat_oai_assistant_retrieval.ipynb
@@ -6,7 +6,7 @@
"source": [
"# RAG OpenAI Assistants in AutoGen\n",
"\n",
- "This notebook shows an example of the [`GPTAssistantAgent`](https://github.com/ag2labs/ag2/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py) with retrieval augmented generation. `GPTAssistantAgent` is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n",
+ "This notebook shows an example of the [`GPTAssistantAgent`](https://github.com/ag2ai/ag2/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py) with retrieval augmented generation. `GPTAssistantAgent` is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n",
"`UserProxyAgent` in AutoGen."
]
},
diff --git a/notebook/agentchat_oai_assistant_twoagents_basic.ipynb b/notebook/agentchat_oai_assistant_twoagents_basic.ipynb
index 5d30e1df57..356e53e72d 100644
--- a/notebook/agentchat_oai_assistant_twoagents_basic.ipynb
+++ b/notebook/agentchat_oai_assistant_twoagents_basic.ipynb
@@ -6,7 +6,7 @@
"source": [
"# OpenAI Assistants in AutoGen\n",
"\n",
- "This notebook shows a very basic example of the [`GPTAssistantAgent`](https://github.com/ag2labs/ag2/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py), which is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n",
+ "This notebook shows a very basic example of the [`GPTAssistantAgent`](https://github.com/ag2ai/ag2/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py), which is an experimental AutoGen agent class that leverages the [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) for conversational capabilities, working with\n",
"`UserProxyAgent` in AutoGen."
]
},
diff --git a/notebook/agentchat_oai_code_interpreter.ipynb b/notebook/agentchat_oai_code_interpreter.ipynb
index d969f953dc..80f317d54f 100644
--- a/notebook/agentchat_oai_code_interpreter.ipynb
+++ b/notebook/agentchat_oai_code_interpreter.ipynb
@@ -28,7 +28,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_planning.ipynb b/notebook/agentchat_planning.ipynb
index b7f5e5b05b..7ebb88191c 100644
--- a/notebook/agentchat_planning.ipynb
+++ b/notebook/agentchat_planning.ipynb
@@ -12,7 +12,7 @@
"# Auto Generated Agent Chat: Collaborative Task Solving with Coding and Planning Agent\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use multiple agents to work together and accomplish a task that requires finding info from the web and coding. `AssistantAgent` is an LLM-based agent that can write and debug Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `AssistantAgent`. We further create a planning agent for the assistant agent to consult. The planning agent is a variation of the LLM-based `AssistantAgent` with a different system message.\n",
"\n",
@@ -47,7 +47,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for an environment variable with a specified name. The value of the environment variable needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for an environment variable with a specified name. The value of the environment variable needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
"\n",
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base.\n"
]
@@ -97,7 +97,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2labs/ag2/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n",
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2ai/ag2/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods.\n",
"\n",
"## Construct Agents\n",
"\n",
diff --git a/notebook/agentchat_stream.ipynb b/notebook/agentchat_stream.ipynb
index 49d7c74e5c..fbb8bcab08 100644
--- a/notebook/agentchat_stream.ipynb
+++ b/notebook/agentchat_stream.ipynb
@@ -12,7 +12,7 @@
"# Interactive LLM Agent Dealing with Data Stream\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use customized agents to continuously acquire news from the web and ask for investment suggestions.\n",
"\n",
@@ -47,7 +47,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
]
},
{
@@ -94,7 +94,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2labs/ag2/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2ai/ag2/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
diff --git a/notebook/agentchat_surfer.ipynb b/notebook/agentchat_surfer.ipynb
index 525fde29a1..2fe6fda882 100644
--- a/notebook/agentchat_surfer.ipynb
+++ b/notebook/agentchat_surfer.ipynb
@@ -35,7 +35,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
"\n",
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n",
"\n",
diff --git a/notebook/agentchat_swarm.ipynb b/notebook/agentchat_swarm.ipynb
index 207e88530c..56ac6f30de 100644
--- a/notebook/agentchat_swarm.ipynb
+++ b/notebook/agentchat_swarm.ipynb
@@ -34,7 +34,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
@@ -229,7 +229,7 @@
"metadata": {},
"source": [
"> With AutoGen, you don't need to write schemas for functions. You can add decorators to the functions to register a function schema to an LLM-based agent, where the schema is automatically generated.\n",
- "See more details in this [doc](https://ag2labs.github.io/autogen/docs/tutorial/tool-use)"
+ "See more details in this [doc](https://ag2ai.github.io/autogen/docs/tutorial/tool-use)"
]
},
{
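The decorator-based registration mentioned in the hunk above can be illustrated with a toy registry. This is a hypothetical sketch of the idea, not AutoGen's actual `register_for_llm` implementation: the JSON schema is derived from the function's type hints rather than written by hand.

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON-schema type names.
_TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}
REGISTRY = {}

def register_tool(func):
    """Build a function schema from the signature and store it."""
    hints = get_type_hints(func)
    params = {
        name: {"type": _TYPE_MAP.get(hints.get(name, str), "string")}
        for name in inspect.signature(func).parameters
    }
    REGISTRY[func.__name__] = {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": params},
    }
    return func

@register_tool
def get_weather(city: str, days: int):
    """Return a weather forecast for a city."""
    return f"{days}-day forecast for {city}"

print(REGISTRY["get_weather"])
```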
@@ -360,7 +360,7 @@
"See the overall architecture of the example in the image below:\n",
"\n",
"\n",
diff --git a/notebook/agentchat_teachability.ipynb b/notebook/agentchat_teachability.ipynb
index dfb88619a9..338dd5971d 100644
--- a/notebook/agentchat_teachability.ipynb
+++ b/notebook/agentchat_teachability.ipynb
@@ -13,7 +13,7 @@
"\n",
"In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n",
"\n",
- "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/ag2labs/ag2/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n",
+ "This notebook demonstrates how `Teachability` can be added to an agent so that it can learn facts, preferences, and skills from users. To chat with a teachable agent yourself, run [chat_with_teachable_agent.py](https://github.com/ag2ai/ag2/blob/main/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py).\n",
"\n",
"## Requirements\n",
"\n",
@@ -37,7 +37,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_teachable_oai_assistants.ipynb b/notebook/agentchat_teachable_oai_assistants.ipynb
index 4263606266..b43dadea25 100644
--- a/notebook/agentchat_teachable_oai_assistants.ipynb
+++ b/notebook/agentchat_teachable_oai_assistants.ipynb
@@ -14,7 +14,7 @@
"In making decisions about memo storage and retrieval, `Teachability` calls an instance of `TextAnalyzerAgent` to analyze pieces of text in several different ways. This adds extra LLM calls involving a relatively small number of tokens. These calls can add a few seconds to the time a user waits for a response.\n",
"\n",
"This notebook demonstrates how `Teachability` can be added to instances of `GPTAssistantAgent`\n",
- "so that they can learn facts, preferences, and skills from users. As explained [here](https://ag2labs.github.io/autogen/docs/topics/openai-assistant/gpt_assistant_agent), each instance of `GPTAssistantAgent` wraps an OpenAI Assistant that can be given a set of tools including functions, code interpreter, and retrieval. Assistants with these tools are demonstrated in separate standalone sections below, which can be run independently.\n",
+ "so that they can learn facts, preferences, and skills from users. As explained [here](https://ag2ai.github.io/autogen/docs/topics/openai-assistant/gpt_assistant_agent), each instance of `GPTAssistantAgent` wraps an OpenAI Assistant that can be given a set of tools including functions, code interpreter, and retrieval. Assistants with these tools are demonstrated in separate standalone sections below, which can be run independently.\n",
"\n",
"## Requirements\n",
"\n",
@@ -41,7 +41,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
diff --git a/notebook/agentchat_teaching.ipynb b/notebook/agentchat_teaching.ipynb
index 50889f24b8..8a42616e51 100644
--- a/notebook/agentchat_teaching.ipynb
+++ b/notebook/agentchat_teaching.ipynb
@@ -10,9 +10,9 @@
"TODO: Implement advanced teachability based on this example.\n",
"\n",
"AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework makes it easy to build many advanced applications of LLMs.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
- "This notebook demonstrates how AutoGen enables a user to teach AI new skills via natural agent interactions, without requiring knowledge of programming language. It is modified based on https://github.com/ag2labs/FLAML/blob/evaluation/notebook/research_paper/teaching.ipynb and https://github.com/ag2labs/FLAML/blob/evaluation/notebook/research_paper/teaching_recipe_reuse.ipynb.\n",
+ "This notebook demonstrates how AutoGen enables a user to teach AI new skills via natural agent interactions, without requiring knowledge of a programming language. It is adapted from https://github.com/ag2ai/FLAML/blob/evaluation/notebook/research_paper/teaching.ipynb and https://github.com/ag2ai/FLAML/blob/evaluation/notebook/research_paper/teaching_recipe_reuse.ipynb.\n",
"\n",
"## Requirements\n",
"\n",
diff --git a/notebook/agentchat_two_users.ipynb b/notebook/agentchat_two_users.ipynb
index 225366e9d9..5886c2f8b8 100644
--- a/notebook/agentchat_two_users.ipynb
+++ b/notebook/agentchat_two_users.ipynb
@@ -11,7 +11,7 @@
"source": [
"# Auto Generated Agent Chat: Collaborative Task Solving with Multiple Agents and Human Users\n",
"\n",
- "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate an application involving multiple agents and human users to work together and accomplish a task. `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for a user to execute the code written by `AssistantAgent`. We create multiple `UserProxyAgent` instances that can represent different human users.\n",
"\n",
@@ -46,7 +46,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
"\n",
"It first looks for an environment variable of a specified name (\"OAI_CONFIG_LIST\" in this example), which needs to be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by models (you can filter by other keys as well).\n",
"\n",
@@ -74,7 +74,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2labs/ag2/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2ai/ag2/blob/main/website/docs/topics/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
diff --git a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb
index b057762f9b..2ed544dd9b 100644
--- a/notebook/agentchat_video_transcript_translate_with_whisper.ipynb
+++ b/notebook/agentchat_video_transcript_translate_with_whisper.ipynb
@@ -8,7 +8,7 @@
"# Translating Video audio using Whisper and GPT-3.5-turbo\n",
"\n",
"In this notebook, we demonstrate how to use whisper and GPT-3.5-turbo with `AssistantAgent` and `UserProxyAgent` to recognize and translate\n",
+ "the speech in a video file and add timestamps to produce a subtitle file, building on [agentchat_function_call.ipynb](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_function_call.ipynb)\n"
+ "the speech sound from a video file and add the timestamp like a subtitle file based on [agentchat_function_call.ipynb](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_function_call.ipynb)\n"
]
},
{
diff --git a/notebook/agentchat_web_info.ipynb b/notebook/agentchat_web_info.ipynb
index cc5601ae1f..bb9dfe037e 100644
--- a/notebook/agentchat_web_info.ipynb
+++ b/notebook/agentchat_web_info.ipynb
@@ -12,7 +12,7 @@
"# Auto Generated Agent Chat: Solving Tasks Requiring Web Info\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to perform tasks which require acquiring info from the web:\n",
"* discuss a paper based on its URL.\n",
@@ -51,7 +51,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n"
]
},
{
@@ -108,7 +108,7 @@
"]\n",
"```\n",
"\n",
- "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2labs/ag2/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods."
+ "You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/ag2ai/ag2/blob/main/notebook/oai_openai_utils.ipynb) for full code examples of the different methods."
]
},
{
diff --git a/notebook/agentchat_websockets.ipynb b/notebook/agentchat_websockets.ipynb
index a0aa5da650..33aa1a7c01 100644
--- a/notebook/agentchat_websockets.ipynb
+++ b/notebook/agentchat_websockets.ipynb
@@ -8,16 +8,16 @@
"source": [
"# Websockets: Streaming input and output using websockets\n",
"\n",
- "This notebook demonstrates how to use the [`IOStream`](https://ag2labs.github.io/autogen/docs/reference/io/base/IOStream) class to stream both input and output using websockets. The use of websockets allows you to build web clients that are more responsive than the one using web methods. The main difference is that the webosockets allows you to push data while you need to poll the server for new response using web mothods.\n",
+ "This notebook demonstrates how to use the [`IOStream`](https://ag2ai.github.io/autogen/docs/reference/io/base/IOStream) class to stream both input and output using websockets. Websockets let you build web clients that are more responsive than those using plain web methods: the server can push data to the client, whereas with web methods the client must poll the server for new responses.\n",
"\n",
"\n",
- "In this guide, we explore the capabilities of the [`IOStream`](https://ag2labs.github.io/autogen/docs/reference/io/base/IOStream) class. It is specifically designed to enhance the development of clients such as web clients which use websockets for streaming both input and output. The [`IOStream`](https://ag2labs.github.io/autogen/docs/reference/io/base/IOStream) stands out by enabling a more dynamic and interactive user experience for web applications.\n",
+ "In this guide, we explore the capabilities of the [`IOStream`](https://ag2ai.github.io/autogen/docs/reference/io/base/IOStream) class. It is specifically designed to enhance the development of clients, such as web clients, that use websockets for streaming both input and output. The [`IOStream`](https://ag2ai.github.io/autogen/docs/reference/io/base/IOStream) stands out by enabling a more dynamic and interactive user experience for web applications.\n",
"\n",
"Websockets technology is at the core of this functionality, offering a significant advancement over traditional web methods by allowing data to be \"pushed\" to the client in real-time. This is a departure from the conventional approach where clients must repeatedly \"poll\" the server to check for any new responses. By employing the underlining [websockets](https://websockets.readthedocs.io/) library, the IOStream class facilitates a continuous, two-way communication channel between the server and client. This ensures that updates are received instantly, without the need for constant polling, thereby making web clients more efficient and responsive.\n",
"\n",
- "The real power of websockets, leveraged through the [`IOStream`](https://ag2labs.github.io/autogen/docs/reference/io/base/IOStream) class, lies in its ability to create highly responsive web clients. This responsiveness is critical for applications requiring real-time data updates such as chat applications. By integrating the [`IOStream`](https://ag2labs.github.io/autogen/docs/reference/io/base/IOStream) class into your web application, you not only enhance user experience through immediate data transmission but also reduce the load on your server by eliminating unnecessary polling.\n",
+ "The real power of websockets, leveraged through the [`IOStream`](https://ag2ai.github.io/autogen/docs/reference/io/base/IOStream) class, lies in its ability to create highly responsive web clients. This responsiveness is critical for applications requiring real-time data updates such as chat applications. By integrating the [`IOStream`](https://ag2ai.github.io/autogen/docs/reference/io/base/IOStream) class into your web application, you not only enhance user experience through immediate data transmission but also reduce the load on your server by eliminating unnecessary polling.\n",
"\n",
- "In essence, the transition to using websockets through the [`IOStream`](https://ag2labs.github.io/autogen/docs/reference/io/base/IOStream) class marks a significant enhancement in web client development. This approach not only streamlines the data exchange process between clients and servers but also opens up new possibilities for creating more interactive and engaging web applications. By following this guide, developers can harness the full potential of websockets and the [`IOStream`](https://ag2labs.github.io/autogen/docs/reference/io/base/IOStream) class to push the boundaries of what is possible with web client responsiveness and interactivity.\n",
+ "In essence, the transition to using websockets through the [`IOStream`](https://ag2ai.github.io/autogen/docs/reference/io/base/IOStream) class marks a significant enhancement in web client development. This approach not only streamlines the data exchange process between clients and servers but also opens up new possibilities for creating more interactive and engaging web applications. By following this guide, developers can harness the full potential of websockets and the [`IOStream`](https://ag2ai.github.io/autogen/docs/reference/io/base/IOStream) class to push the boundaries of what is possible with web client responsiveness and interactivity.\n",
"\n",
"## Requirements\n",
"\n",
@@ -42,7 +42,7 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
+ "The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file."
]
},
{
@@ -92,7 +92,7 @@
"An `on_connect` function is a crucial part of applications that utilize websockets, acting as an event handler that is called whenever a new client connection is established. This function is designed to initiate any necessary setup, communication protocols, or data exchange procedures specific to the newly connected client. Essentially, it lays the groundwork for the interactive session that follows, configuring how the server and the client will communicate and what initial actions are to be taken once a connection is made. Now, let's delve into the details of how to define this function, especially in the context of using the AutoGen framework with websockets.\n",
"\n",
"\n",
- "Upon a client's connection to the websocket server, the server automatically initiates a new instance of the [`IOWebsockets`](https://ag2labs.github.io/autogen/docs/reference/io/websockets/IOWebsockets) class. This instance is crucial for managing the data flow between the server and the client. The `on_connect` function leverages this instance to set up the communication protocol, define interaction rules, and initiate any preliminary data exchanges or configurations required for the client-server interaction to proceed smoothly.\n"
+ "Upon a client's connection to the websocket server, the server automatically initiates a new instance of the [`IOWebsockets`](https://ag2ai.github.io/autogen/docs/reference/io/websockets/IOWebsockets) class. This instance is crucial for managing the data flow between the server and the client. The `on_connect` function leverages this instance to set up the communication protocol, define interaction rules, and initiate any preliminary data exchanges or configurations required for the client-server interaction to proceed smoothly.\n"
]
},
{
diff --git a/notebook/agenteval_cq_math.ipynb b/notebook/agenteval_cq_math.ipynb
index aca39dc251..e9dc5ca030 100644
--- a/notebook/agenteval_cq_math.ipynb
+++ b/notebook/agenteval_cq_math.ipynb
@@ -15,9 +15,9 @@
"\n",
"- `quantify_criteria`: This function quantifies the performance of any sample task based on the criteria generated in the `generate_criteria` step in the following way: $(c_1=a_1, \\dots, c_n=a_n)$\n",
"\n",
- "![AgentEval](https://media.githubusercontent.com/media/ag2labs/ag2/main/website/blog/2023-11-20-AgentEval/img/agenteval-CQ.png)\n",
+ "![AgentEval](https://media.githubusercontent.com/media/ag2ai/ag2/main/website/blog/2023-11-20-AgentEval/img/agenteval-CQ.png)\n",
"\n",
- "For more detailed explanations, please refer to the accompanying [blog post](https://ag2labs.github.io/autogen/blog/2023/11/20/AgentEval)\n",
+ "For more detailed explanations, please refer to the accompanying [blog post](https://ag2ai.github.io/autogen/blog/2023/11/20/AgentEval)\n",
"\n",
"## Requirements\n",
"\n",
diff --git a/notebook/autobuild_basic.ipynb b/notebook/autobuild_basic.ipynb
index 49b6ba303b..42d39532d3 100644
--- a/notebook/autobuild_basic.ipynb
+++ b/notebook/autobuild_basic.ipynb
@@ -9,10 +9,10 @@
"source": [
"# AutoBuild\n",
"By: [Linxin Song](https://linxins97.github.io/), [Jieyu Zhang](https://jieyuz2.github.io/)\n",
- "Reference: [Agent AutoBuild](https://ag2labs.github.io/autogen/blog/2023/11/26/Agent-AutoBuild/)\n",
+ "Reference: [Agent AutoBuild](https://ag2ai.github.io/autogen/blog/2023/11/26/Agent-AutoBuild/)\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool, or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
- "Please find documentation about this feature [here](https://ag2labs.github.io/autogen/docs/Use-Cases/agent_chat).\n",
+ "Please find documentation about this feature [here](https://ag2ai.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"In this notebook, we introduce a new class, `AgentBuilder`, to help user build an automatic task solving process powered by multi-agent system. Specifically, in `build()`, we prompt a LLM to create multiple participant agent and initialize a group chat, and specify whether this task need programming to solve. AgentBuilder also support open-source LLMs by [vLLM](https://docs.vllm.ai/en/latest/index.html) and [Fastchat](https://github.com/lm-sys/FastChat). Check the supported model list [here](https://docs.vllm.ai/en/latest/models/supported_models.html)."
]
diff --git a/notebook/autogen_uniformed_api_calling.ipynb b/notebook/autogen_uniformed_api_calling.ipynb
index 6928e782d1..4521674153 100644
--- a/notebook/autogen_uniformed_api_calling.ipynb
+++ b/notebook/autogen_uniformed_api_calling.ipynb
@@ -7,7 +7,7 @@
"# A Uniform interface to call different LLMs\n",
"\n",
"Autogen provides a uniform interface for API calls to different LLMs, and creating LLM agents from them.\n",
- "Through setting up a configuration file, you can easily switch between different LLMs by just changing the model name, while enjoying all the [enhanced features](https://ag2labs.github.io/autogen/docs/topics/llm-caching) such as [caching](https://ag2labs.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary) and [cost calculation](https://ag2labs.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary)!\n",
+ "Through setting up a configuration file, you can easily switch between different LLMs by just changing the model name, while enjoying all the [enhanced features](https://ag2ai.github.io/autogen/docs/topics/llm-caching) such as [caching](https://ag2ai.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary) and [cost calculation](https://ag2ai.github.io/autogen/docs/Use-Cases/enhanced_inference/#usage-summary)!\n",
"\n",
"In this notebook, we will show you how to use AutoGen to call different LLMs and create LLM agents from them.\n",
"\n",
@@ -22,7 +22,7 @@
"\n",
"... and more to come!\n",
"\n",
- "You can also [plug in your local deployed LLM](https://ag2labs.github.io/autogen/blog/2024/01/26/Custom-Models) into AutoGen if needed."
+ "You can also [plug in your local deployed LLM](https://ag2ai.github.io/autogen/blog/2024/01/26/Custom-Models) into AutoGen if needed."
]
},
{
diff --git a/notebook/config_loader_utility_functions.ipynb b/notebook/config_loader_utility_functions.ipynb
index 3892d72400..5bd6004bec 100644
--- a/notebook/config_loader_utility_functions.ipynb
+++ b/notebook/config_loader_utility_functions.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Config loader utility functions\n",
"\n",
- "For an introduction to configuring LLMs, refer to the [main configuration docs](https://ag2labs.github.io/autogen/docs/topics/llm_configuration). This guide will run through examples of the more advanced utility functions for managing API configurations.\n",
+ "For an introduction to configuring LLMs, refer to the [main configuration docs](https://ag2ai.github.io/autogen/docs/topics/llm_configuration). This guide will run through examples of the more advanced utility functions for managing API configurations.\n",
"\n",
"Managing API configurations can be tricky, especially when dealing with multiple models and API versions. The provided utility functions assist users in managing these configurations effectively. Ensure your API keys and other sensitive data are stored securely. You might store keys in `.txt` or `.env` files or environment variables for local development. Never expose your API keys publicly. If you insist on storing your key files locally on your repo (you shouldn't), ensure the key file path is added to the `.gitignore` file.\n",
"\n",
diff --git a/notebook/oai_chatgpt_gpt4.ipynb b/notebook/oai_chatgpt_gpt4.ipynb
index 4167a2cda8..3d3e65f45d 100644
--- a/notebook/oai_chatgpt_gpt4.ipynb
+++ b/notebook/oai_chatgpt_gpt4.ipynb
@@ -17,8 +17,8 @@
}
},
"source": [
- "Contributions to this project, i.e., https://github.com/ag2labs/ag2, are licensed under the Apache License, Version 2.0 (Apache-2.0).\n",
- "Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs\n",
+ "Contributions to this project, i.e., https://github.com/ag2ai/ag2, are licensed under the Apache License, Version 2.0 (Apache-2.0).\n",
+ "Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai\n",
"SPDX-License-Identifier: Apache-2.0\n",
"Portions derived from https://github.com/microsoft/autogen under the MIT License.\n",
"SPDX-License-Identifier: MIT\n",
@@ -33,7 +33,7 @@
"\n",
"In this notebook, we tune OpenAI ChatGPT (both GPT-3.5 and GPT-4) models for math problem solving. We use [the MATH benchmark](https://crfm.stanford.edu/helm/latest/?group=math_chain_of_thought) for measuring mathematical problem solving on competition math problems with chain-of-thoughts style reasoning.\n",
"\n",
- "Related link: [Blogpost](https://ag2labs.github.io/autogen/blog/2023/04/21/LLM-tuning-math) based on this experiment.\n",
+ "Related link: [Blogpost](https://ag2ai.github.io/autogen/blog/2023/04/21/LLM-tuning-math) based on this experiment.\n",
"\n",
"## Requirements\n",
"\n",
@@ -98,7 +98,7 @@
"source": [
"### Set your API Endpoint\n",
"\n",
- "The [`config_list_openai_aoai`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_openai_aoai) function tries to create a list of Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
+ "The [`config_list_openai_aoai`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_openai_aoai) function tries to create a list of Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
"\n",
"- OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
"- Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
diff --git a/notebook/oai_completion.ipynb b/notebook/oai_completion.ipynb
index 3a42ed3de6..3fabbca5e2 100644
--- a/notebook/oai_completion.ipynb
+++ b/notebook/oai_completion.ipynb
@@ -17,8 +17,8 @@
}
},
"source": [
- "Contributions to this project, i.e., https://github.com/ag2labs/ag2, are licensed under the Apache License, Version 2.0 (Apache-2.0).\n",
- "Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs\n",
+ "Contributions to this project, i.e., https://github.com/ag2ai/ag2, are licensed under the Apache License, Version 2.0 (Apache-2.0).\n",
+ "Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai\n",
"SPDX-License-Identifier: Apache-2.0\n",
"Portions derived from https://github.com/microsoft/autogen under the MIT License.\n",
"SPDX-License-Identifier: MIT\n",
@@ -64,11 +64,11 @@
"source": [
"## Set your API Endpoint\n",
"\n",
- "* The [`config_list_openai_aoai`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_openai_aoai) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
+ "* The [`config_list_openai_aoai`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_openai_aoai) function tries to create a list of configurations using Azure OpenAI endpoints and OpenAI endpoints. It assumes the api keys and api bases are stored in the corresponding environment variables or local txt files:\n",
" - OpenAI API key: os.environ[\"OPENAI_API_KEY\"] or `openai_api_key_file=\"key_openai.txt\"`.\n",
" - Azure OpenAI API key: os.environ[\"AZURE_OPENAI_API_KEY\"] or `aoai_api_key_file=\"key_aoai.txt\"`. Multiple keys can be stored, one per line.\n",
" - Azure OpenAI API base: os.environ[\"AZURE_OPENAI_API_BASE\"] or `aoai_api_base_file=\"base_aoai.txt\"`. Multiple bases can be stored, one per line.\n",
- "* The [`config_list_from_json`](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for the environment variable `env_or_file`, which must be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
+ "* The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. It first looks for the environment variable `env_or_file`, which must be a valid json string. If that variable is not found, it looks for a json file with the same name. It filters the configs by filter_dict.\n",
"\n",
"It's OK to have only the OpenAI API key, or only the Azure OpenAI API key + base. If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choosing \"upload file\" icon.\n"
]
diff --git a/test/agentchat/contrib/agent_eval/test_agent_eval.py b/test/agentchat/contrib/agent_eval/test_agent_eval.py
index 9622fbd371..e871b9e347 100644
--- a/test/agentchat/contrib/agent_eval/test_agent_eval.py
+++ b/test/agentchat/contrib/agent_eval/test_agent_eval.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/agent_eval/test_criterion.py b/test/agentchat/contrib/agent_eval/test_criterion.py
index b218c66aec..f36ccdfd24 100644
--- a/test/agentchat/contrib/agent_eval/test_criterion.py
+++ b/test/agentchat/contrib/agent_eval/test_criterion.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/agent_eval/test_task.py b/test/agentchat/contrib/agent_eval/test_task.py
index 5bb27fcf79..a8e637f56f 100644
--- a/test/agentchat/contrib/agent_eval/test_task.py
+++ b/test/agentchat/contrib/agent_eval/test_task.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py b/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py
index 6b22fc0f8e..58a3a38a4e 100755
--- a/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py
+++ b/test/agentchat/contrib/capabilities/chat_with_teachable_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -28,7 +28,7 @@
def create_teachable_agent(reset_db=False):
"""Instantiates a teachable agent using the settings from the top of this file."""
# Load LLM inference endpoints from an env variable or a file
- # See https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints
+ # See https://ag2ai.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file=OAI_CONFIG_LIST, filter_dict=filter_dict, file_location=KEY_LOC)
diff --git a/test/agentchat/contrib/capabilities/test_image_generation_capability.py b/test/agentchat/contrib/capabilities/test_image_generation_capability.py
index daa846a1c2..c0cb6fc1a9 100644
--- a/test/agentchat/contrib/capabilities/test_image_generation_capability.py
+++ b/test/agentchat/contrib/capabilities/test_image_generation_capability.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/test_teachable_agent.py b/test/agentchat/contrib/capabilities/test_teachable_agent.py
index 76a6bb9ff3..6f705d6537 100755
--- a/test/agentchat/contrib/capabilities/test_teachable_agent.py
+++ b/test/agentchat/contrib/capabilities/test_teachable_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -40,7 +40,7 @@
def create_teachable_agent(reset_db=False, verbosity=0):
"""Instantiates a teachable agent using the settings from the top of this file."""
# Load LLM inference endpoints from an env variable or a file
- # See https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints
+ # See https://ag2ai.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file=OAI_CONFIG_LIST, filter_dict=filter_dict, file_location=KEY_LOC)
diff --git a/test/agentchat/contrib/capabilities/test_transform_messages.py b/test/agentchat/contrib/capabilities/test_transform_messages.py
index b52d187e37..9121a8d8bb 100644
--- a/test/agentchat/contrib/capabilities/test_transform_messages.py
+++ b/test/agentchat/contrib/capabilities/test_transform_messages.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/test_transforms.py b/test/agentchat/contrib/capabilities/test_transforms.py
index 865ef7102c..744727f65e 100644
--- a/test/agentchat/contrib/capabilities/test_transforms.py
+++ b/test/agentchat/contrib/capabilities/test_transforms.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/test_transforms_util.py b/test/agentchat/contrib/capabilities/test_transforms_util.py
index 473a1fd2dc..31c5ac223e 100644
--- a/test/agentchat/contrib/capabilities/test_transforms_util.py
+++ b/test/agentchat/contrib/capabilities/test_transforms_util.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/capabilities/test_vision_capability.py b/test/agentchat/contrib/capabilities/test_vision_capability.py
index d31dcb923f..9ba6dd9ec9 100644
--- a/test/agentchat/contrib/capabilities/test_vision_capability.py
+++ b/test/agentchat/contrib/capabilities/test_vision_capability.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py b/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py
index 739733bb03..3d64b1dc76 100644
--- a/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py
+++ b/test/agentchat/contrib/retrievechat/test_pgvector_retrievechat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py b/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py
index 3677bb6fa1..cca4d8029b 100755
--- a/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py
+++ b/test/agentchat/contrib/retrievechat/test_qdrant_retrievechat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/retrievechat/test_retrievechat.py b/test/agentchat/contrib/retrievechat/test_retrievechat.py
index 184fcc8957..76879f8765 100755
--- a/test/agentchat/contrib/retrievechat/test_retrievechat.py
+++ b/test/agentchat/contrib/retrievechat/test_retrievechat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_agent_builder.py b/test/agentchat/contrib/test_agent_builder.py
index f3a08db0d2..9dee05766e 100755
--- a/test/agentchat/contrib/test_agent_builder.py
+++ b/test/agentchat/contrib/test_agent_builder.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_agent_optimizer.py b/test/agentchat/contrib/test_agent_optimizer.py
index 5292c344b4..88323f56d8 100644
--- a/test/agentchat/contrib/test_agent_optimizer.py
+++ b/test/agentchat/contrib/test_agent_optimizer.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_gpt_assistant.py b/test/agentchat/contrib/test_gpt_assistant.py
index fcad4e6db0..c67130f77a 100755
--- a/test/agentchat/contrib/test_gpt_assistant.py
+++ b/test/agentchat/contrib/test_gpt_assistant.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_img_utils.py b/test/agentchat/contrib/test_img_utils.py
index ebb520e501..49efad1013 100755
--- a/test/agentchat/contrib/test_img_utils.py
+++ b/test/agentchat/contrib/test_img_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_llamaindex_conversable_agent.py b/test/agentchat/contrib/test_llamaindex_conversable_agent.py
index f80a5a989b..6fd74d4d18 100644
--- a/test/agentchat/contrib/test_llamaindex_conversable_agent.py
+++ b/test/agentchat/contrib/test_llamaindex_conversable_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_llava.py b/test/agentchat/contrib/test_llava.py
index af5f916ffc..01935f99ad 100755
--- a/test/agentchat/contrib/test_llava.py
+++ b/test/agentchat/contrib/test_llava.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_lmm.py b/test/agentchat/contrib/test_lmm.py
index d4f293da31..43ee8d88a3 100755
--- a/test/agentchat/contrib/test_lmm.py
+++ b/test/agentchat/contrib/test_lmm.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_society_of_mind_agent.py b/test/agentchat/contrib/test_society_of_mind_agent.py
index 1f76495392..376bddfd70 100755
--- a/test/agentchat/contrib/test_society_of_mind_agent.py
+++ b/test/agentchat/contrib/test_society_of_mind_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/test_web_surfer.py b/test/agentchat/contrib/test_web_surfer.py
index b1f570666a..94c3013005 100755
--- a/test/agentchat/contrib/test_web_surfer.py
+++ b/test/agentchat/contrib/test_web_surfer.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -21,7 +21,7 @@
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
from test_assistant_agent import KEY_LOC, OAI_CONFIG_LIST # noqa: E402
-BLOG_POST_URL = "https://ag2labs.github.io/autogen/blog/2023/04/21/LLM-tuning-math"
+BLOG_POST_URL = "https://ag2ai.github.io/autogen/blog/2023/04/21/LLM-tuning-math"
BLOG_POST_TITLE = "Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH | AutoGen"
BING_QUERY = "Microsoft"
diff --git a/test/agentchat/contrib/vectordb/test_chromadb.py b/test/agentchat/contrib/vectordb/test_chromadb.py
index e16fdbd7ab..7b7992f717 100644
--- a/test/agentchat/contrib/vectordb/test_chromadb.py
+++ b/test/agentchat/contrib/vectordb/test_chromadb.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/vectordb/test_mongodb.py b/test/agentchat/contrib/vectordb/test_mongodb.py
index 13de00dd33..536da417fc 100644
--- a/test/agentchat/contrib/vectordb/test_mongodb.py
+++ b/test/agentchat/contrib/vectordb/test_mongodb.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/vectordb/test_pgvectordb.py b/test/agentchat/contrib/vectordb/test_pgvectordb.py
index 2465967d63..e158cf1678 100644
--- a/test/agentchat/contrib/vectordb/test_pgvectordb.py
+++ b/test/agentchat/contrib/vectordb/test_pgvectordb.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/vectordb/test_qdrant.py b/test/agentchat/contrib/vectordb/test_qdrant.py
index 431e496ed8..c8b3b9fdbf 100644
--- a/test/agentchat/contrib/vectordb/test_qdrant.py
+++ b/test/agentchat/contrib/vectordb/test_qdrant.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/contrib/vectordb/test_vectordb_utils.py b/test/agentchat/contrib/vectordb/test_vectordb_utils.py
index 88d279e918..7f9758d4a5 100644
--- a/test/agentchat/contrib/vectordb/test_vectordb_utils.py
+++ b/test/agentchat/contrib/vectordb/test_vectordb_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/extensions/tsp.py b/test/agentchat/extensions/tsp.py
index b61272e25a..7b1e7600f3 100644
--- a/test/agentchat/extensions/tsp.py
+++ b/test/agentchat/extensions/tsp.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/extensions/tsp_api.py b/test/agentchat/extensions/tsp_api.py
index 21b94f3160..f2b09a0c9b 100644
--- a/test/agentchat/extensions/tsp_api.py
+++ b/test/agentchat/extensions/tsp_api.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agent_file_logging.py b/test/agentchat/test_agent_file_logging.py
index b2714aae45..d68c5dea9c 100644
--- a/test/agentchat/test_agent_file_logging.py
+++ b/test/agentchat/test_agent_file_logging.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agent_logging.py b/test/agentchat/test_agent_logging.py
index 295a7b2bec..4e17487382 100644
--- a/test/agentchat/test_agent_logging.py
+++ b/test/agentchat/test_agent_logging.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agent_setup_with_use_docker_settings.py b/test/agentchat/test_agent_setup_with_use_docker_settings.py
index 018ae810ac..8846d5499d 100644
--- a/test/agentchat/test_agent_setup_with_use_docker_settings.py
+++ b/test/agentchat/test_agent_setup_with_use_docker_settings.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agent_usage.py b/test/agentchat/test_agent_usage.py
index 250d116546..88b686a1a2 100755
--- a/test/agentchat/test_agent_usage.py
+++ b/test/agentchat/test_agent_usage.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_agentchat_utils.py b/test/agentchat/test_agentchat_utils.py
index 63d84805be..805411f9c2 100644
--- a/test/agentchat/test_agentchat_utils.py
+++ b/test/agentchat/test_agentchat_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_assistant_agent.py b/test/agentchat/test_assistant_agent.py
index 6a9602d3e5..672ff59bd6 100755
--- a/test/agentchat/test_assistant_agent.py
+++ b/test/agentchat/test_assistant_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_async.py b/test/agentchat/test_async.py
index 2fd7ba0bac..cc748ff738 100755
--- a/test/agentchat/test_async.py
+++ b/test/agentchat/test_async.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_async_chats.py b/test/agentchat/test_async_chats.py
index 58ba3b9d6a..d2587ff273 100755
--- a/test/agentchat/test_async_chats.py
+++ b/test/agentchat/test_async_chats.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_async_get_human_input.py b/test/agentchat/test_async_get_human_input.py
index b512db6b0b..555ed63866 100755
--- a/test/agentchat/test_async_get_human_input.py
+++ b/test/agentchat/test_async_get_human_input.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_cache_agent.py b/test/agentchat/test_cache_agent.py
index af980080bf..723355b6e1 100644
--- a/test/agentchat/test_cache_agent.py
+++ b/test/agentchat/test_cache_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_chats.py b/test/agentchat/test_chats.py
index 10e139ddc7..8f243c1664 100755
--- a/test/agentchat/test_chats.py
+++ b/test/agentchat/test_chats.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_conversable_agent.py b/test/agentchat/test_conversable_agent.py
index 0c0fa6d836..8790c84db1 100755
--- a/test/agentchat/test_conversable_agent.py
+++ b/test/agentchat/test_conversable_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_function_and_tool_calling.py b/test/agentchat/test_function_and_tool_calling.py
index ab6a26ff23..cd7064cd7b 100644
--- a/test/agentchat/test_function_and_tool_calling.py
+++ b/test/agentchat/test_function_and_tool_calling.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_function_call.py b/test/agentchat/test_function_call.py
index 50c999d0be..525a24cc17 100755
--- a/test/agentchat/test_function_call.py
+++ b/test/agentchat/test_function_call.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_function_call_groupchat.py b/test/agentchat/test_function_call_groupchat.py
index 2badb1288e..7e09bbd365 100755
--- a/test/agentchat/test_function_call_groupchat.py
+++ b/test/agentchat/test_function_call_groupchat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_groupchat.py b/test/agentchat/test_groupchat.py
index 7364360700..de5f8bab11 100755
--- a/test/agentchat/test_groupchat.py
+++ b/test/agentchat/test_groupchat.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_human_input.py b/test/agentchat/test_human_input.py
index 78488aa767..beca99033c 100755
--- a/test/agentchat/test_human_input.py
+++ b/test/agentchat/test_human_input.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_math_user_proxy_agent.py b/test/agentchat/test_math_user_proxy_agent.py
index 0d87c0686f..83c6662ce2 100755
--- a/test/agentchat/test_math_user_proxy_agent.py
+++ b/test/agentchat/test_math_user_proxy_agent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_nested.py b/test/agentchat/test_nested.py
index e59aa58fcb..9995aa6ed6 100755
--- a/test/agentchat/test_nested.py
+++ b/test/agentchat/test_nested.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/agentchat/test_tool_calls.py b/test/agentchat/test_tool_calls.py
index faa8ef7fde..eb2cbe7c35 100755
--- a/test/agentchat/test_tool_calls.py
+++ b/test/agentchat/test_tool_calls.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_cache.py b/test/cache/test_cache.py
index a71e4ec173..2635c05e8e 100755
--- a/test/cache/test_cache.py
+++ b/test/cache/test_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_cosmos_db_cache.py b/test/cache/test_cosmos_db_cache.py
index 290d2cc93b..5f24bc80d1 100644
--- a/test/cache/test_cosmos_db_cache.py
+++ b/test/cache/test_cosmos_db_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_disk_cache.py b/test/cache/test_disk_cache.py
index 8e814f3cfa..5edfb910e0 100755
--- a/test/cache/test_disk_cache.py
+++ b/test/cache/test_disk_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_in_memory_cache.py b/test/cache/test_in_memory_cache.py
index 022d583b3a..59a24d0a85 100644
--- a/test/cache/test_in_memory_cache.py
+++ b/test/cache/test_in_memory_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/cache/test_redis_cache.py b/test/cache/test_redis_cache.py
index 63a831b433..3cef95ddab 100755
--- a/test/cache/test_redis_cache.py
+++ b/test/cache/test_redis_cache.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/conftest.py b/test/conftest.py
index e1afbd5431..5f98689fda 100644
--- a/test/conftest.py
+++ b/test/conftest.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/io/test_base.py b/test/io/test_base.py
index 169ef80e00..8083f0d811 100644
--- a/test/io/test_base.py
+++ b/test/io/test_base.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/io/test_console.py b/test/io/test_console.py
index ea459fcd98..653ccca0fc 100644
--- a/test/io/test_console.py
+++ b/test/io/test_console.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/io/test_websockets.py b/test/io/test_websockets.py
index 88e68ac335..c6d8494461 100644
--- a/test/io/test_websockets.py
+++ b/test/io/test_websockets.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/_test_completion.py b/test/oai/_test_completion.py
index 81678caa18..5e92149d41 100755
--- a/test/oai/_test_completion.py
+++ b/test/oai/_test_completion.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_anthropic.py b/test/oai/test_anthropic.py
index 36e944e819..4a081b3ef6 100644
--- a/test/oai/test_anthropic.py
+++ b/test/oai/test_anthropic.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_client.py b/test/oai/test_client.py
index 8cd80b07d0..4b37166eba 100755
--- a/test/oai/test_client.py
+++ b/test/oai/test_client.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_client_stream.py b/test/oai/test_client_stream.py
index 76cc9384b8..abb7e18c72 100755
--- a/test/oai/test_client_stream.py
+++ b/test/oai/test_client_stream.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_client_utils.py b/test/oai/test_client_utils.py
index f610355a03..8334b74f7f 100644
--- a/test/oai/test_client_utils.py
+++ b/test/oai/test_client_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_cohere.py b/test/oai/test_cohere.py
index 82a52225b1..34bb6f6114 100644
--- a/test/oai/test_cohere.py
+++ b/test/oai/test_cohere.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_custom_client.py b/test/oai/test_custom_client.py
index 4a10f0da6c..5976b7a46f 100644
--- a/test/oai/test_custom_client.py
+++ b/test/oai/test_custom_client.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_gemini.py b/test/oai/test_gemini.py
index e3ff2e3324..b5b84cd028 100644
--- a/test/oai/test_gemini.py
+++ b/test/oai/test_gemini.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_groq.py b/test/oai/test_groq.py
index 24aab48841..5e331f85aa 100644
--- a/test/oai/test_groq.py
+++ b/test/oai/test_groq.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_mistral.py b/test/oai/test_mistral.py
index b2314086f8..588b24392a 100644
--- a/test/oai/test_mistral.py
+++ b/test/oai/test_mistral.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_together.py b/test/oai/test_together.py
index 1bb2efe1bc..bff18d1b7a 100644
--- a/test/oai/test_together.py
+++ b/test/oai/test_together.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/oai/test_utils.py b/test/oai/test_utils.py
index b86eaaf1ab..599254b47d 100755
--- a/test/oai/test_utils.py
+++ b/test/oai/test_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_browser_utils.py b/test/test_browser_utils.py
index 32c091bdec..30ce662388 100755
--- a/test/test_browser_utils.py
+++ b/test/test_browser_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -16,7 +16,7 @@
import requests
from agentchat.test_assistant_agent import KEY_LOC # noqa: E402
-BLOG_POST_URL = "https://ag2labs.github.io/autogen/blog/2023/04/21/LLM-tuning-math"
+BLOG_POST_URL = "https://ag2ai.github.io/autogen/blog/2023/04/21/LLM-tuning-math"
BLOG_POST_TITLE = "Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH | AutoGen"
BLOG_POST_STRING = "Large language models (LLMs) are powerful tools that can generate natural language texts for various applications, such as chatbots, summarization, translation, and more. GPT-4 is currently the state of the art LLM in the world. Is model selection irrelevant? What about inference parameters?"
diff --git a/test/test_code_utils.py b/test/test_code_utils.py
index ac9d566bd7..8fb2f44a97 100755
--- a/test/test_code_utils.py
+++ b/test/test_code_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_function_utils.py b/test/test_function_utils.py
index 9794df04ad..fce7e819b8 100644
--- a/test/test_function_utils.py
+++ b/test/test_function_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_graph_utils.py b/test/test_graph_utils.py
index 034e30688a..ea25a67f5c 100644
--- a/test/test_graph_utils.py
+++ b/test/test_graph_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_logging.py b/test/test_logging.py
index 8a8989e091..97481b5f7f 100644
--- a/test/test_logging.py
+++ b/test/test_logging.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_notebook.py b/test/test_notebook.py
index 06d94a1332..d6db43d711 100755
--- a/test/test_notebook.py
+++ b/test/test_notebook.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_pydantic.py b/test/test_pydantic.py
index 461b5bb2c3..256b30e335 100644
--- a/test/test_pydantic.py
+++ b/test/test_pydantic.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_retrieve_utils.py b/test/test_retrieve_utils.py
index 096e1aeb97..18bb1ea23f 100755
--- a/test/test_retrieve_utils.py
+++ b/test/test_retrieve_utils.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/test_token_count.py b/test/test_token_count.py
index 07d989e5de..e37324932c 100755
--- a/test/test_token_count.py
+++ b/test/test_token_count.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
diff --git a/test/twoagent.py b/test/twoagent.py
index ff354f7a42..75883ba946 100644
--- a/test/twoagent.py
+++ b/test/twoagent.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2labs
+# Copyright (c) 2023 - 2024, Owners of https://github.com/ag2ai
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -7,7 +7,7 @@
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
# Load LLM inference endpoints from an env variable or a file
-# See https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints
+# See https://ag2ai.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
diff --git a/website/blog/2023-10-18-RetrieveChat/index.mdx b/website/blog/2023-10-18-RetrieveChat/index.mdx
index e65054fa32..4396511c93 100644
--- a/website/blog/2023-10-18-RetrieveChat/index.mdx
+++ b/website/blog/2023-10-18-RetrieveChat/index.mdx
@@ -82,7 +82,7 @@ from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProx
2. Create an 'AssistantAgent' instance named "assistant" and an 'RetrieveUserProxyAgent' instance named "ragproxyagent"
-Refer to the [doc](https://ag2labs.github.io/autogen/docs/reference/agentchat/contrib/retrieve_user_proxy_agent)
+Refer to the [doc](https://ag2ai.github.io/autogen/docs/reference/agentchat/contrib/retrieve_user_proxy_agent)
for more information on the detailed configurations.
```python
diff --git a/website/blog/2023-10-26-TeachableAgent/index.mdx b/website/blog/2023-10-26-TeachableAgent/index.mdx
index 6a9bacf535..bd9ac6d656 100644
--- a/website/blog/2023-10-26-TeachableAgent/index.mdx
+++ b/website/blog/2023-10-26-TeachableAgent/index.mdx
@@ -54,7 +54,7 @@ from autogen import ConversableAgent # As an example
```python
# Load LLM inference endpoints from an env variable or a file
-# See https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints
+# See https://ag2ai.github.io/autogen/docs/FAQ#set-your-api-endpoints
# and OAI_CONFIG_LIST_sample
filter_dict = {"model": ["gpt-4"]} # GPT-3.5 is less reliable than GPT-4 at learning from user feedback.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST", filter_dict=filter_dict)
diff --git a/website/blog/2023-11-06-LMM-Agent/index.mdx b/website/blog/2023-11-06-LMM-Agent/index.mdx
index 44b778aa61..667628e68c 100644
--- a/website/blog/2023-11-06-LMM-Agent/index.mdx
+++ b/website/blog/2023-11-06-LMM-Agent/index.mdx
@@ -9,8 +9,8 @@ tags: [LMM, multimodal]
**In Brief:**
* Introducing the **Multimodal Conversable Agent** and the **LLaVA Agent** to enhance LMM functionalities.
* Users can input text and images simultaneously using the `` tag to specify image loading.
-* Demonstrated through the [GPT-4V notebook](https://github.com/ag2labs/ag2/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb).
-* Demonstrated through the [LLaVA notebook](https://github.com/ag2labs/ag2/blob/main/notebook/agentchat_lmm_llava.ipynb).
+* Demonstrated through the [GPT-4V notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb).
+* Demonstrated through the [LLaVA notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_lmm_llava.ipynb).
## Introduction
Large multimodal models (LMMs) augment large language models (LLMs) with the ability to process multi-sensory data.
@@ -62,7 +62,7 @@ The `MultimodalConversableAgent` interprets the input prompt, extracting images
## Advanced Usage
Similar to other AutoGen agents, multimodal agents support multi-round dialogues with other agents, code generation, factual queries, and management via a GroupChat interface.
-For example, the `FigureCreator` in our [GPT-4V notebook](https://github.com/ag2labs/ag2/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) and [LLaVA notebook](https://github.com/ag2labs/ag2/blob/main/notebook/agentchat_lmm_llava.ipynb) integrates two agents: a coder (an AssistantAgent) and critics (a multimodal agent).
+For example, the `FigureCreator` in our [GPT-4V notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_lmm_gpt-4v.ipynb) and [LLaVA notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_lmm_llava.ipynb) integrates two agents: a coder (an AssistantAgent) and critics (a multimodal agent).
The coder drafts Python code for visualizations, while the critics provide insights for enhancement. Collaboratively, these agents aim to refine visual outputs.
With `human_input_mode=ALWAYS`, you can also contribute suggestions for better visualizations.
@@ -72,6 +72,6 @@ With `human_input_mode=ALWAYS`, you can also contribute suggestions for better v
## Future Enhancements
-For further inquiries or suggestions, please open an issue in the [AutoGen repository](https://github.com/ag2labs/ag2/) or contact me directly at beibin.li@microsoft.com.
+For further inquiries or suggestions, please open an issue in the [AutoGen repository](https://github.com/ag2ai/ag2/) or contact me directly at beibin.li@microsoft.com.
AutoGen will continue to evolve, incorporating more multimodal functionalities such as DALLE model integration, audio interaction, and video comprehension. Stay tuned for these exciting developments.
diff --git a/website/blog/2023-11-13-OAI-assistants/index.mdx b/website/blog/2023-11-13-OAI-assistants/index.mdx
index 6294d7287b..32027eb786 100644
--- a/website/blog/2023-11-13-OAI-assistants/index.mdx
+++ b/website/blog/2023-11-13-OAI-assistants/index.mdx
@@ -9,12 +9,12 @@ tags: [openai-assistant]
## TL;DR
-OpenAI assistants are now integrated into AutoGen via [`GPTAssistantAgent`](https://github.com/ag2labs/ag2/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py).
+OpenAI assistants are now integrated into AutoGen via [`GPTAssistantAgent`](https://github.com/ag2ai/ag2/blob/main/autogen/agentchat/contrib/gpt_assistant_agent.py).
This enables multiple OpenAI assistants, which form the backend of the now popular GPTs, to collaborate and tackle complex tasks.
Checkout example notebooks for reference:
-* [Basic example](https://github.com/ag2labs/ag2/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb)
-* [Code interpreter](https://github.com/ag2labs/ag2/blob/main/notebook/agentchat_oai_code_interpreter.ipynb)
-* [Function calls](https://github.com/ag2labs/ag2/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb)
+* [Basic example](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_oai_assistant_twoagents_basic.ipynb)
+* [Code interpreter](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_oai_code_interpreter.ipynb)
+* [Function calls](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_oai_assistant_function_call.ipynb)
## Introduction
@@ -100,7 +100,7 @@ user_proxy = UserProxyAgent(name="user_proxy",
user_proxy.initiate_chat(gpt_assistant, message="Print hello world")
```
-Checkout more examples [here](https://github.com/ag2labs/ag2/tree/main/notebook).
+Check out more examples [here](https://github.com/ag2ai/ag2/tree/main/notebook).
## Limitations and Future Work
diff --git a/website/blog/2023-11-20-AgentEval/index.mdx b/website/blog/2023-11-20-AgentEval/index.mdx
index 0620bc2336..b40d774810 100644
--- a/website/blog/2023-11-20-AgentEval/index.mdx
+++ b/website/blog/2023-11-20-AgentEval/index.mdx
@@ -14,7 +14,7 @@ tags: [LLM, GPT, evaluation, task utility]
**TL;DR:**
* As a developer of an LLM-powered application, how can you assess the utility it brings to end users while helping them with their tasks?
* To shed light on the question above, we introduce `AgentEval` — the first version of the framework to assess the utility of any LLM-powered application crafted to assist users in specific tasks. AgentEval aims to simplify the evaluation process by automatically proposing a set of criteria tailored to the unique purpose of your application. This allows for a comprehensive assessment, quantifying the utility of your application against the suggested criteria.
-* We demonstrate how `AgentEval` work using [math problems dataset](https://ag2labs.github.io/autogen/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc).
+* We demonstrate how `AgentEval` works using the [math problems dataset](https://ag2ai.github.io/autogen/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/microsoft/autogen/blob/main/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc).
## Introduction
diff --git a/website/blog/2023-12-01-AutoGenStudio/index.mdx b/website/blog/2023-12-01-AutoGenStudio/index.mdx
index 411a7670a7..0ceb7edd8f 100644
--- a/website/blog/2023-12-01-AutoGenStudio/index.mdx
+++ b/website/blog/2023-12-01-AutoGenStudio/index.mdx
@@ -26,9 +26,9 @@ To help you rapidly prototype multi-agent solutions for your tasks, we are intro
- Publish your sessions to a local gallery.
-See the official AutoGen Studio documentation [here](https://ag2labs.github.io/autogen/docs/autogen-studio/getting-started) for more details.
+See the official AutoGen Studio documentation [here](https://ag2ai.github.io/autogen/docs/autogen-studio/getting-started) for more details.
-AutoGen Studio is open source [code here](https://github.com/ag2labs/build-with-autogen/blob/main/samples/apps/autogen-studio), and can be installed via pip. Give it a try!
+AutoGen Studio is open source ([code here](https://github.com/ag2ai/build-with-autogen/blob/main/samples/apps/autogen-studio)) and can be installed via pip. Give it a try!
```bash
pip install autogenstudio
@@ -36,7 +36,7 @@ pip install autogenstudio
## Introduction
-The accelerating pace of technology has ushered us into an era where digital assistants (or agents) are becoming integral to our lives. [AutoGen](https://github.com/ag2labs/ag2/tree/main/autogen) has emerged as a leading framework for orchestrating the power of agents. In the spirit of expanding this frontier and democratizing this capability, we are thrilled to introduce a new user-friendly interface: **AutoGen Studio**.
+The accelerating pace of technology has ushered us into an era where digital assistants (or agents) are becoming integral to our lives. [AutoGen](https://github.com/ag2ai/ag2/tree/main/autogen) has emerged as a leading framework for orchestrating the power of agents. In the spirit of expanding this frontier and democratizing this capability, we are thrilled to introduce a new user-friendly interface: **AutoGen Studio**.
With AutoGen Studio, users can rapidly create, manage, and interact with agents that can learn, adapt, and collaborate. As we release this interface into the open-source community, our ambition is not only to enhance productivity but to inspire a level of personalized interaction between humans and agents.
@@ -48,7 +48,7 @@ The following guide will help you get AutoGen Studio up and running on your syst
### Configuring an LLM Provider
-To get started, you need access to a language model. You can get this set up by following the steps in the AutoGen documentation [here](https://ag2labs.github.io/autogen/docs/FAQ#set-your-api-endpoints). Configure your environment with either `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY`.
+To get started, you need access to a language model. You can get this set up by following the steps in the AutoGen documentation [here](https://ag2ai.github.io/autogen/docs/FAQ#set-your-api-endpoints). Configure your environment with either `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY`.
For example, in your terminal, you would set the API key like this:
@@ -104,7 +104,7 @@ There are two ways to install AutoGen Studio - from PyPi or from source. We **re
yarn build
```
- For Windows users, to build the frontend, you may need alternative commands provided in the [autogen studio readme](https://github.com/ag2labs/build-with-autogen/blob/main/samples/apps/autogen-studio).
+ For Windows users, to build the frontend, you may need alternative commands provided in the [autogen studio readme](https://github.com/ag2ai/build-with-autogen/blob/main/samples/apps/autogen-studio).
### Running the Application
@@ -139,7 +139,7 @@ This section focuses on defining the properties of agents and agent workflows. I
-**Agents**: This provides an interface to declaratively specify properties for an AutoGen agent (mirrors most of the members of a base [AutoGen conversable agent](https://github.com/ag2labs/ag2/blob/main/autogen/agentchat/conversable_agent.py) class).
+**Agents**: This provides an interface to declaratively specify properties for an AutoGen agent (mirrors most of the members of a base [AutoGen conversable agent](https://github.com/ag2ai/ag2/blob/main/autogen/agentchat/conversable_agent.py) class).
**Agent Workflows**: An agent workflow is a specification of a set of agents that can work together to accomplish a task. The simplest version of this is a setup with two agents – a user proxy agent (that represents a user, i.e., it compiles code and prints results) and an assistant that can address task requests (e.g., generating plans, writing code, evaluating responses, proposing error recovery steps, etc.). A more complex flow could be a group chat where even more agents work towards a solution.
@@ -168,7 +168,7 @@ AutoGen Studio comes with 3 example skills: `fetch_profile`, `find_papers`, `gen
## The AutoGen Studio API
-While AutoGen Studio is a web interface, it is powered by an underlying python API that is reusable and modular. Importantly, we have implemented an API where agent workflows can be declaratively specified (in JSON), loaded and run. An example of the current API is shown below. Please consult the [AutoGen Studio repo](https://github.com/ag2labs/build-with-autogen/blob/main/samples/apps/autogen-studio) for more details.
+While AutoGen Studio is a web interface, it is powered by an underlying python API that is reusable and modular. Importantly, we have implemented an API where agent workflows can be declaratively specified (in JSON), loaded and run. An example of the current API is shown below. Please consult the [AutoGen Studio repo](https://github.com/ag2ai/build-with-autogen/blob/main/samples/apps/autogen-studio) for more details.
```python
import json
@@ -201,7 +201,7 @@ As we continue to develop and refine AutoGen Studio, the road map below outlines
We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project:
-- Review the overall AutoGen project [AutoGen](https://github.com/ag2labs/ag2).
+- Review the overall AutoGen project [AutoGen](https://github.com/ag2ai/ag2).
- Please review the AutoGen Studio [roadmap](https://github.com/microsoft/autogen/issues/737) to get a sense of the current priorities for the project. Help is appreciated especially with Studio issues tagged with `help-wanted`.
- Please initiate a discussion on the roadmap issue or a new issue to discuss your proposed contribution.
- Submit a pull request with your contribution!
@@ -219,7 +219,7 @@ A: To reset your conversation history, you can delete the `database.sqlite` file
A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages.
**Q: Where can I find documentation and support for AutoGen Studio?**
-A: We are constantly working to improve AutoGen Studio. For the latest updates, please refer to the [AutoGen Studio Readme](https://github.com/ag2labs/build-with-autogen/blob/main/samples/apps/autogen-studio). For additional support, please open an issue on [GitHub](https://github.com/ag2labs/ag2) or ask questions on [Discord](https://aka.ms/autogen-dc).
+A: We are constantly working to improve AutoGen Studio. For the latest updates, please refer to the [AutoGen Studio Readme](https://github.com/ag2ai/build-with-autogen/blob/main/samples/apps/autogen-studio). For additional support, please open an issue on [GitHub](https://github.com/ag2ai/ag2) or ask questions on [Discord](https://aka.ms/autogen-dc).
**Q: Can I use Other Models with AutoGen Studio?**
A: Yes. AutoGen standardizes on the OpenAI model API format, and you can use any API server that offers an OpenAI-compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details, including `model name`, `api key`, `base url`, `model type`, and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model name` is the deployment id or engine, and the `model type` is "azure".
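As an illustration of the endpoint details described above, a config of roughly this shape is typical for an Azure OpenAI deployment. The exact JSON key names (`api_type`, `base_url`, etc.) are assumptions based on common AutoGen versions and may differ in yours; the values are placeholders.

```python
# Hypothetical llm_config for an Azure OpenAI deployment; key names are
# assumptions for a typical OpenAI-compatible AutoGen client, and all
# values below are placeholders.
llm_config = {
    "config_list": [
        {
            "model": "my-gpt4-deployment",           # for Azure: the deployment id/engine
            "api_key": "YOUR_AZURE_OPENAI_KEY",
            "base_url": "https://my-resource.openai.azure.com/",
            "api_type": "azure",                     # the "model type" field in the UI
            "api_version": "2024-02-01",
        }
    ]
}
print(llm_config["config_list"][0]["api_type"])  # azure
```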
diff --git a/website/blog/2023-12-23-AgentOptimizer/index.mdx b/website/blog/2023-12-23-AgentOptimizer/index.mdx
index 3c6a8f2edf..a48ee28e94 100644
--- a/website/blog/2023-12-23-AgentOptimizer/index.mdx
+++ b/website/blog/2023-12-23-AgentOptimizer/index.mdx
@@ -36,7 +36,7 @@ It contains three main methods:
This method records the conversation history and performance of the agents in solving one problem.
It includes two inputs: conversation_history (List[Dict]) and is_satisfied (bool).
-conversation_history is a list of dictionaries which could be got from chat_messages_for_summary in the [AgentChat](https://ag2labs.github.io/autogen/docs/reference/agentchat/agentchat/) class.
+conversation_history is a list of dictionaries which could be got from chat_messages_for_summary in the [AgentChat](https://ag2ai.github.io/autogen/docs/reference/agentchat/agentchat/) class.
is_satisfied is a bool value that represents whether the user is satisfied with the solution. If it is none, the user will be asked to input the satisfaction.
Example:
diff --git a/website/blog/2023-12-29-AgentDescriptions/index.mdx b/website/blog/2023-12-29-AgentDescriptions/index.mdx
index 205f4502e8..301026e927 100644
--- a/website/blog/2023-12-29-AgentDescriptions/index.mdx
+++ b/website/blog/2023-12-29-AgentDescriptions/index.mdx
@@ -8,7 +8,7 @@ tags: [AutoGen]
## TL;DR
-AutoGen 0.2.2 introduces a [description](https://ag2labs.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__) field to ConversableAgent (and all subclasses), and changes GroupChat so that it uses agent `description`s rather than `system_message`s when choosing which agents should speak next.
+AutoGen 0.2.2 introduces a [description](https://ag2ai.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__) field to ConversableAgent (and all subclasses), and changes GroupChat so that it uses agent `description`s rather than `system_message`s when choosing which agents should speak next.
This is expected to simplify GroupChat’s job, improve orchestration, and make it easier to implement new GroupChat or GroupChat-like alternatives.
@@ -18,9 +18,9 @@ However, if you were struggling with getting GroupChat to work, you can now try
## Introduction
-As AutoGen matures and developers build increasingly complex combinations of agents, orchestration is becoming an important capability. At present, [GroupChat](https://ag2labs.github.io/autogen/docs/reference/agentchat/groupchat#groupchat-objects) and the [GroupChatManager](https://ag2labs.github.io/autogen/docs/reference/agentchat/groupchat#groupchatmanager-objects) are the main built-in tools for orchestrating conversations between 3 or more agents. For orchestrators like GroupChat to work well, they need to know something about each agent so that they can decide who should speak and when. Prior to AutoGen 0.2.2, GroupChat relied on each agent's `system_message` and `name` to learn about each participating agent. This is likely fine when the system prompt is short and sweet, but can lead to problems when the instructions are very long (e.g., with the [AssistantAgent](https://ag2labs.github.io/autogen/docs/reference/agentchat/assistant_agent)), or non-existent (e.g., with the [UserProxyAgent](https://ag2labs.github.io/autogen/docs/reference/agentchat/user_proxy_agent)).
+As AutoGen matures and developers build increasingly complex combinations of agents, orchestration is becoming an important capability. At present, [GroupChat](https://ag2ai.github.io/autogen/docs/reference/agentchat/groupchat#groupchat-objects) and the [GroupChatManager](https://ag2ai.github.io/autogen/docs/reference/agentchat/groupchat#groupchatmanager-objects) are the main built-in tools for orchestrating conversations between 3 or more agents. For orchestrators like GroupChat to work well, they need to know something about each agent so that they can decide who should speak and when. Prior to AutoGen 0.2.2, GroupChat relied on each agent's `system_message` and `name` to learn about each participating agent. This is likely fine when the system prompt is short and sweet, but can lead to problems when the instructions are very long (e.g., with the [AssistantAgent](https://ag2ai.github.io/autogen/docs/reference/agentchat/assistant_agent)), or non-existent (e.g., with the [UserProxyAgent](https://ag2ai.github.io/autogen/docs/reference/agentchat/user_proxy_agent)).
-AutoGen 0.2.2 introduces a [description](https://ag2labs.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__) field to all agents, and replaces the use of the `system_message` for orchestration in GroupChat and all future orchestrators. The `description` field defaults to the `system_message` to ensure backwards compatibility, so you may not need to change anything with your code if things are working well for you. However, if you were struggling with GroupChat, give setting the `description` field a try.
+AutoGen 0.2.2 introduces a [description](https://ag2ai.github.io/autogen/docs/reference/agentchat/conversable_agent#__init__) field to all agents, and replaces the use of the `system_message` for orchestration in GroupChat and all future orchestrators. The `description` field defaults to the `system_message` to ensure backwards compatibility, so you may not need to change anything with your code if things are working well for you. However, if you were struggling with GroupChat, give setting the `description` field a try.
The remainder of this post provides an example of how using the `description` field simplifies GroupChat's job, provides some evidence of its effectiveness, and provides tips for writing good descriptions.
diff --git a/website/blog/2024-01-23-Code-execution-in-docker/index.mdx b/website/blog/2024-01-23-Code-execution-in-docker/index.mdx
index c143f86519..d2582ac8e6 100644
--- a/website/blog/2024-01-23-Code-execution-in-docker/index.mdx
+++ b/website/blog/2024-01-23-Code-execution-in-docker/index.mdx
@@ -55,8 +55,8 @@ user_proxy = autogen.UserProxyAgent(name="user_proxy", llm_config=llm_config,
## Related documentation
-- [Code execution with docker](https://ag2labs.github.io/autogen/docs/Installation#code-execution-with-docker-default)
-- [How to disable code execution in docker](https://ag2labs.github.io/autogen/docs/FAQ#agents-are-throwing-due-to-docker-not-running-how-can-i-resolve-this)
+- [Code execution with docker](https://ag2ai.github.io/autogen/docs/Installation#code-execution-with-docker-default)
+- [How to disable code execution in docker](https://ag2ai.github.io/autogen/docs/FAQ#agents-are-throwing-due-to-docker-not-running-how-can-i-resolve-this)
## Conclusion
diff --git a/website/blog/2024-01-25-AutoGenBench/index.mdx b/website/blog/2024-01-25-AutoGenBench/index.mdx
index 4d4bfd89b2..1f2c2209c0 100644
--- a/website/blog/2024-01-25-AutoGenBench/index.mdx
+++ b/website/blog/2024-01-25-AutoGenBench/index.mdx
@@ -21,8 +21,8 @@ Today we are releasing AutoGenBench - a tool for evaluating AutoGen agents and w
AutoGenBench is a standalone command line tool, installable from PyPI, which handles downloading, configuring, running, and reporting supported benchmarks. AutoGenBench works best when run alongside Docker, since it uses Docker to isolate tests from one another.
-- See the [AutoGenBench README](https://github.com/ag2labs/build-with-autogen/blob/main/samples/tools/autogenbench/README.md) for information on installation and running benchmarks.
-- See the [AutoGenBench CONTRIBUTING guide](https://github.com/ag2labs/build-with-autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) for information on developing or contributing benchmark datasets.
+- See the [AutoGenBench README](https://github.com/ag2ai/build-with-autogen/blob/main/samples/tools/autogenbench/README.md) for information on installation and running benchmarks.
+- See the [AutoGenBench CONTRIBUTING guide](https://github.com/ag2ai/build-with-autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) for information on developing or contributing benchmark datasets.
### Quick Start
@@ -42,7 +42,7 @@ autogenbench tabulate Results/human_eval_two_agents
## Introduction
-Measurement and evaluation are core components of every major AI or ML research project. The same is true for AutoGen. To this end, today we are releasing AutoGenBench, a standalone command line tool that we have been using to guide development of AutoGen. Conveniently, AutoGenBench handles: downloading, configuring, running, and reporting results of agents on various public benchmark datasets. In addition to reporting top-line numbers, each AutoGenBench run produces a comprehensive set of logs and telemetry that can be used for debugging, profiling, computing custom metrics, and as input to [AgentEval](https://ag2labs.github.io/autogen/blog/2023/11/20/AgentEval). In the remainder of this blog post, we outline core design principles for AutoGenBench (key to understanding its operation); present a guide to installing and running AutoGenBench; outline a roadmap for evaluation; and conclude with an open call for contributions.
+Measurement and evaluation are core components of every major AI or ML research project. The same is true for AutoGen. To this end, today we are releasing AutoGenBench, a standalone command line tool that we have been using to guide development of AutoGen. Conveniently, AutoGenBench handles: downloading, configuring, running, and reporting results of agents on various public benchmark datasets. In addition to reporting top-line numbers, each AutoGenBench run produces a comprehensive set of logs and telemetry that can be used for debugging, profiling, computing custom metrics, and as input to [AgentEval](https://ag2ai.github.io/autogen/blog/2023/11/20/AgentEval). In the remainder of this blog post, we outline core design principles for AutoGenBench (key to understanding its operation); present a guide to installing and running AutoGenBench; outline a roadmap for evaluation; and conclude with an open call for contributions.
## Design Principles
@@ -52,7 +52,7 @@ AutoGenBench is designed around three core design principles. Knowing these prin
- **Isolation:** Agents interact with their worlds in both subtle and overt ways. For example, an agent may install a Python library or write a file to disk. This can lead to ordering effects that can impact future measurements. Consider, for example, comparing two agents on a common benchmark. One agent may appear more efficient than the other simply because it ran second, and benefitted from the hard work the first agent did in installing and debugging necessary Python libraries. To address this, AutoGenBench isolates each task in its own Docker container. This ensures that all runs start with the same initial conditions. (Docker is also a _much safer way to run agent-produced code_, in general.)
-- **Instrumentation:** While top-line metrics are great for comparing agents or models, we often want much more information about how the agents are performing, where they are getting stuck, and how they can be improved. We may also later think of new research questions that require computing a different set of metrics. To this end, AutoGenBench is designed to log everything, and to compute metrics from those logs. This ensures that one can always go back to the logs to answer questions about what happened, run profiling software, or feed the logs into tools like [AgentEval](https://ag2labs.github.io/autogen/blog/2023/11/20/AgentEval).
+- **Instrumentation:** While top-line metrics are great for comparing agents or models, we often want much more information about how the agents are performing, where they are getting stuck, and how they can be improved. We may also later think of new research questions that require computing a different set of metrics. To this end, AutoGenBench is designed to log everything, and to compute metrics from those logs. This ensures that one can always go back to the logs to answer questions about what happened, run profiling software, or feed the logs into tools like [AgentEval](https://ag2ai.github.io/autogen/blog/2023/11/20/AgentEval).
## Installing and Running AutoGenBench
@@ -125,7 +125,7 @@ Please do not cite these values in academic work without first inspecting and ve
From this output we can see the results of the three separate repetitions of each task, and final summary statistics of each run. In this case, the results were generated via GPT-4 (as defined in the OAI_CONFIG_LIST that was provided), and used the `TwoAgents` template. **It is important to remember that AutoGenBench evaluates _specific_ end-to-end configurations of agents (as opposed to evaluating a model or cognitive framework more generally).**
-Finally, complete execution traces and logs can be found in the `Results` folder. See the [AutoGenBench README](https://github.com/ag2labs/build-with-autogen/blob/main/samples/tools/autogenbench/README.md) for more details about command-line options and output formats. Each of these commands also offers extensive in-line help via:
+Finally, complete execution traces and logs can be found in the `Results` folder. See the [AutoGenBench README](https://github.com/ag2ai/build-with-autogen/blob/main/samples/tools/autogenbench/README.md) for more details about command-line options and output formats. Each of these commands also offers extensive in-line help via:
- `autogenbench --help`
- `autogenbench clone --help`
@@ -145,4 +145,4 @@ For an up to date tracking of our work items on this project, please see [AutoGe
## Call for Participation
-Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/ag2labs/build-with-autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel!
+Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/ag2ai/build-with-autogen/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel!
diff --git a/website/blog/2024-02-02-AutoAnny/index.mdx b/website/blog/2024-02-02-AutoAnny/index.mdx
index c346891ad1..5debefa2e9 100644
--- a/website/blog/2024-02-02-AutoAnny/index.mdx
+++ b/website/blog/2024-02-02-AutoAnny/index.mdx
@@ -16,7 +16,7 @@ import AutoAnnyLogo from './img/AutoAnnyLogo.jpg';
## TL;DR
We are adding a new sample app called Anny -- a simple Discord bot powered
-by AutoGen that's intended to assist AutoGen Devs. See [`samples/apps/auto-anny`](https://github.com/ag2labs/build-with-autogen/tree/main/samples/apps/auto-anny) for details.
+by AutoGen that's intended to assist AutoGen Devs. See [`samples/apps/auto-anny`](https://github.com/ag2ai/build-with-autogen/tree/main/samples/apps/auto-anny) for details.
## Introduction
@@ -41,7 +41,7 @@ The current version of Anny is pretty simple -- it uses the Discord API and Auto
For example, it supports commands like `/heyanny help` for command listing, `/heyanny ghstatus` for
GitHub activity summary, `/heyanny ghgrowth` for GitHub repo growth indicators, and `/heyanny ghunattended` for listing unattended issues and PRs. Most of these commands use multiple AutoGen agents to accomplish these tasks.
-To use Anny, please follow instructions in [`samples/apps/auto-anny`](https://github.com/ag2labs/build-with-autogen/tree/main/samples/apps/auto-anny).
+To use Anny, please follow instructions in [`samples/apps/auto-anny`](https://github.com/ag2ai/build-with-autogen/tree/main/samples/apps/auto-anny).
## It's Not Just for AutoGen
If you're an open-source developer managing your own project, you can probably relate to our challenges. We invite you to check out Anny and contribute to its development and roadmap.
diff --git a/website/blog/2024-02-11-FSM-GroupChat/index.mdx b/website/blog/2024-02-11-FSM-GroupChat/index.mdx
index fd00f16215..86ecba25b2 100644
--- a/website/blog/2024-02-11-FSM-GroupChat/index.mdx
+++ b/website/blog/2024-02-11-FSM-GroupChat/index.mdx
@@ -285,4 +285,4 @@ pip install autogen[graph]
```
## Notebook examples
-More examples can be found in the [notebook](https://ag2labs.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine/). The notebook includes more examples of possible transition paths such as (1) hub and spoke, (2) sequential team operations, and (3) think aloud and debate. It also uses the function `visualize_speaker_transitions_dict` from `autogen.graph_utils` to visualize the various graphs.
+More examples can be found in the [notebook](https://ag2ai.github.io/autogen/docs/notebooks/agentchat_groupchat_finite_state_machine/). The notebook includes more examples of possible transition paths such as (1) hub and spoke, (2) sequential team operations, and (3) think aloud and debate. It also uses the function `visualize_speaker_transitions_dict` from `autogen.graph_utils` to visualize the various graphs.
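As a hedged illustration (not taken from the notebook), a speaker-transition graph of the hub-and-spoke kind mentioned above can be sketched as a plain dictionary; string names stand in for the `Agent` objects the real `GroupChat` API expects.

```python
# Hub-and-spoke speaker transitions: keys are speakers, values are the
# agents allowed to speak next. Agent names here are illustrative only.
allowed_transitions = {
    "Planner": ["Engineer", "Critic"],  # the hub can hand off to either spoke
    "Engineer": ["Planner"],            # spokes always report back to the hub
    "Critic": ["Planner"],
}

# Sanity check: every transition target is itself a known speaker.
assert all(
    target in allowed_transitions
    for targets in allowed_transitions.values()
    for target in targets
)
```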
diff --git a/website/blog/2024-05-24-Agent/index.mdx b/website/blog/2024-05-24-Agent/index.mdx
index 9f9dc5f35a..15c5c718ec 100644
--- a/website/blog/2024-05-24-Agent/index.mdx
+++ b/website/blog/2024-05-24-Agent/index.mdx
@@ -143,7 +143,7 @@ better with low cost. [EcoAssistant](/blog/2023/11/09/EcoAssistant) is a good ex
There are certainly tradeoffs to make. The large design space of multi-agents offers these tradeoffs and opens up new opportunities for optimization.
-> Over a year since the debut of Ask AT&T, the generative AI platform to which we’ve onboarded over 80,000 users, AT&T has been enhancing its capabilities by incorporating 'AI Agents'. These agents, powered by the Autogen framework pioneered by Microsoft (https://ag2labs.github.io/autogen/blog/2023/12/01/AutoGenStudio/), are designed to tackle complicated workflows and tasks that traditional language models find challenging. To drive collaboration, AT&T is contributing back to the open-source project by introducing features that facilitate enhanced security and role-based access for various projects and data.
+> Over a year since the debut of Ask AT&T, the generative AI platform to which we’ve onboarded over 80,000 users, AT&T has been enhancing its capabilities by incorporating 'AI Agents'. These agents, powered by the Autogen framework pioneered by Microsoft (https://ag2ai.github.io/autogen/blog/2023/12/01/AutoGenStudio/), are designed to tackle complicated workflows and tasks that traditional language models find challenging. To drive collaboration, AT&T is contributing back to the open-source project by introducing features that facilitate enhanced security and role-based access for various projects and data.
>
> > Andy Markus, Chief Data Officer at AT&T
diff --git a/website/blog/2024-06-21-AgentEval/index.mdx b/website/blog/2024-06-21-AgentEval/index.mdx
index 8d88343afd..874c7fe060 100644
--- a/website/blog/2024-06-21-AgentEval/index.mdx
+++ b/website/blog/2024-06-21-AgentEval/index.mdx
@@ -15,13 +15,13 @@ tags: [LLM, GPT, evaluation, task utility]
TL;DR:
* As a developer, how can you assess the utility and effectiveness of an LLM-powered application in helping end users with their tasks?
-* To shed light on the question above, we previously introduced [`AgentEval`](https://ag2labs.github.io/autogen/blog/2023/11/20/AgentEval/) — a framework to assess the multi-dimensional utility of any LLM-powered application crafted to assist users in specific tasks. We have now embedded it as part of the AutoGen library to ease developer adoption.
+* To shed light on the question above, we previously introduced [`AgentEval`](https://ag2ai.github.io/autogen/blog/2023/11/20/AgentEval/) — a framework to assess the multi-dimensional utility of any LLM-powered application crafted to assist users in specific tasks. We have now embedded it as part of the AutoGen library to ease developer adoption.
* Here, we introduce an updated version of AgentEval that includes a verification process to estimate the robustness of the QuantifierAgent. More details can be found in [this paper](https://arxiv.org/abs/2405.02178).
## Introduction
-Previously introduced [`AgentEval`](https://ag2labs.github.io/autogen/blog/2023/11/20/AgentEval/) is a comprehensive framework designed to bridge the gap in assessing the utility of LLM-powered applications. It leverages recent advancements in LLMs to offer a scalable and cost-effective alternative to traditional human evaluations. The framework comprises three main agents: `CriticAgent`, `QuantifierAgent`, and `VerifierAgent`, each playing a crucial role in assessing the task utility of an application.
+The previously introduced [`AgentEval`](https://ag2ai.github.io/autogen/blog/2023/11/20/AgentEval/) is a comprehensive framework designed to bridge the gap in assessing the utility of LLM-powered applications. It leverages recent advancements in LLMs to offer a scalable and cost-effective alternative to traditional human evaluations. The framework comprises three main agents: `CriticAgent`, `QuantifierAgent`, and `VerifierAgent`, each playing a crucial role in assessing the task utility of an application.
**CriticAgent: Defining the Criteria**
diff --git a/website/blog/2024-06-24-AltModels-Classes/index.mdx b/website/blog/2024-06-24-AltModels-Classes/index.mdx
index 1defd586a8..aae3d2c531 100644
--- a/website/blog/2024-06-24-AltModels-Classes/index.mdx
+++ b/website/blog/2024-06-24-AltModels-Classes/index.mdx
@@ -48,7 +48,7 @@ AutoGen's ability to associate specific configurations to each agent means you c
The common requirements of text generation and function/tool calling are supported by these client classes.
-Multi-modal support, such as for image/audio/video, is an area of active development. The [Google Gemini](https://ag2labs.github.io/autogen/docs/topics/non-openai-models/cloud-gemini) client class can be
+Multi-modal support, such as for image/audio/video, is an area of active development. The [Google Gemini](https://ag2ai.github.io/autogen/docs/topics/non-openai-models/cloud-gemini) client class can be
used to create a multimodal agent.
## Tips
@@ -58,9 +58,9 @@ Here are some tips when working with these client classes:
- **Most to least capable** - start with larger models and get your workflow working, then iteratively try smaller models.
- **Right model** - choose one that's suited to your task, whether it's coding, function calling, knowledge, or creative writing.
- **Agent names** - these cloud providers do not use the `name` field on a message, so be sure to use your agent's name in their `system_message` and `description` fields, as well as instructing the LLM to 'act as' them. This is particularly important for "auto" speaker selection in group chats as we need to guide the LLM to choose the next agent based on a name, so tweak `select_speaker_message_template`, `select_speaker_prompt_template`, and `select_speaker_auto_multiple_template` with more guidance.
-- **Context length** - as your conversation gets longer, models need to support larger context lengths, be mindful of what the model supports and consider using [Transform Messages](https://ag2labs.github.io/autogen/docs/topics/handling_long_contexts/intro_to_transform_messages) to manage context size.
-- **Provider parameters** - providers have parameters you can set such as temperature, maximum tokens, top-k, top-p, and safety. See each client class in AutoGen's [API Reference](https://ag2labs.github.io/autogen/docs/reference/oai/gemini) or [documentation](https://ag2labs.github.io/autogen/docs/topics/non-openai-models/cloud-gemini) for details.
-- **Prompts** - prompt engineering is critical in guiding smaller LLMs to do what you need. [ConversableAgent](https://ag2labs.github.io/autogen/docs/reference/agentchat/conversable_agent), [GroupChat](https://ag2labs.github.io/autogen/docs/reference/agentchat/groupchat), [UserProxyAgent](https://ag2labs.github.io/autogen/docs/reference/agentchat/user_proxy_agent), and [AssistantAgent](https://ag2labs.github.io/autogen/docs/reference/agentchat/assistant_agent) all have customizable prompt attributes that you can tailor. Here are some prompting tips from [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview)([+Library](https://docs.anthropic.com/en/prompt-library/library)), [Mistral AI](https://docs.mistral.ai/guides/prompting_capabilities/), [Together.AI](https://docs.together.ai/docs/examples), and [Meta](https://llama.meta.com/docs/how-to-guides/prompting/).
+- **Context length** - as your conversation gets longer, models need to support larger context lengths, be mindful of what the model supports and consider using [Transform Messages](https://ag2ai.github.io/autogen/docs/topics/handling_long_contexts/intro_to_transform_messages) to manage context size.
+- **Provider parameters** - providers have parameters you can set such as temperature, maximum tokens, top-k, top-p, and safety. See each client class in AutoGen's [API Reference](https://ag2ai.github.io/autogen/docs/reference/oai/gemini) or [documentation](https://ag2ai.github.io/autogen/docs/topics/non-openai-models/cloud-gemini) for details.
+- **Prompts** - prompt engineering is critical in guiding smaller LLMs to do what you need. [ConversableAgent](https://ag2ai.github.io/autogen/docs/reference/agentchat/conversable_agent), [GroupChat](https://ag2ai.github.io/autogen/docs/reference/agentchat/groupchat), [UserProxyAgent](https://ag2ai.github.io/autogen/docs/reference/agentchat/user_proxy_agent), and [AssistantAgent](https://ag2ai.github.io/autogen/docs/reference/agentchat/assistant_agent) all have customizable prompt attributes that you can tailor. Here are some prompting tips from [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview)([+Library](https://docs.anthropic.com/en/prompt-library/library)), [Mistral AI](https://docs.mistral.ai/guides/prompting_capabilities/), [Together.AI](https://docs.together.ai/docs/examples), and [Meta](https://llama.meta.com/docs/how-to-guides/prompting/).
- **Help!** - reach out on the AutoGen [Discord](https://discord.gg/pAbnFJrkgZ) or [log an issue](https://github.com/microsoft/autogen/issues) if you need help with or can help improve these client classes.
Now it's time to try them out.
@@ -109,7 +109,7 @@ Add your model configurations to the `OAI_CONFIG_LIST`. Ensure you specify the `
### Usage
-The `[config_list_from_json](https://ag2labs.github.io/autogen/docs/reference/oai/openai_utils/#config_list_from_json)` function loads a list of configurations from an environment variable or a json file.
+The [`config_list_from_json`](https://ag2ai.github.io/autogen/docs/reference/oai/openai_utils/#config_list_from_json) function loads a list of configurations from an environment variable or a json file.
```py
import autogen
@@ -150,7 +150,7 @@ user_proxy.intiate_chat(assistant, message="Write python code to print Hello Wor
```
-**NOTE: To integrate this setup into GroupChat, follow the [tutorial](https://ag2labs.github.io/autogen/docs/notebooks/agentchat_groupchat) with the same config as above.**
+**NOTE: To integrate this setup into GroupChat, follow the [tutorial](https://ag2ai.github.io/autogen/docs/notebooks/agentchat_groupchat) with the same config as above.**
## Function Calls
@@ -390,4 +390,4 @@ So we can see how Anthropic's Sonnet is able to suggest multiple tools in a sing
## More tips and tricks
-For an interesting chess game between Anthropic's Sonnet and Mistral's Mixtral, we've put together a sample notebook that highlights some of the tips and tricks for working with non-OpenAI LLMs. [See the notebook here](https://ag2labs.github.io/autogen/docs/notebooks/agentchat_nested_chats_chess_altmodels).
+For an interesting chess game between Anthropic's Sonnet and Mistral's Mixtral, we've put together a sample notebook that highlights some of the tips and tricks for working with non-OpenAI LLMs. [See the notebook here](https://ag2ai.github.io/autogen/docs/notebooks/agentchat_nested_chats_chess_altmodels).
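The `config_list_from_json` loading and filtering described in this post can be sketched without the `autogen` package itself. The following is an illustrative, standard-library-only approximation of the `filter_dict` behavior; the config values and the `filter_config` helper are hypothetical stand-ins, not the real API.

```python
import json
import tempfile

# Hypothetical config entries in the OAI_CONFIG_LIST shape; keys mirror the
# fields the docs describe (model, api_key, optional api_type).
config_list = [
    {"model": "gpt-4", "api_key": "sk-example-1"},
    {"model": "gpt-3.5-turbo", "api_key": "sk-example-2"},
    {"model": "mistral-large", "api_key": "example-3", "api_type": "mistral"},
]

def filter_config(configs, filter_dict):
    """Keep only configs whose value for every key appears in the allowed list."""
    return [
        c for c in configs
        if all(c.get(k) in allowed for k, allowed in filter_dict.items())
    ]

# Round-trip through a JSON file, as loading from OAI_CONFIG_LIST would.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(config_list, f)
    path = f.name

with open(path) as f:
    loaded = json.load(f)

gpt4_only = filter_config(loaded, {"model": ["gpt-4"]})
print([c["model"] for c in gpt4_only])  # ['gpt-4']
```

In the real library the same filtering is requested by passing `filter_dict` to `config_list_from_json`, so each agent can be handed only the model configurations it should use.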
diff --git a/website/blog/2024-07-25-AgentOps/index.mdx b/website/blog/2024-07-25-AgentOps/index.mdx
index 520d3d6797..2a2fc122b7 100644
--- a/website/blog/2024-07-25-AgentOps/index.mdx
+++ b/website/blog/2024-07-25-AgentOps/index.mdx
@@ -28,7 +28,7 @@ Agent observability, in its most basic form, allows you to monitor, troubleshoot
## Why AgentOps?
-AutoGen has simplified the process of building agents, yet we recognized the need for an easy-to-use, native tool for observability. We've previously discussed AgentOps, and now we're excited to partner with AgentOps as our official agent observability tool. Integrating AgentOps with AutoGen simplifies your workflow and boosts your agents' performance through clear observability, ensuring they operate optimally. For more details, check out our [AgentOps documentation](https://ag2labs.github.io/autogen/docs/notebooks/agentchat_agentops/).
+AutoGen has simplified the process of building agents, yet we recognized the need for an easy-to-use, native tool for observability. We've previously discussed AgentOps, and now we're excited to partner with AgentOps as our official agent observability tool. Integrating AgentOps with AutoGen simplifies your workflow and boosts your agents' performance through clear observability, ensuring they operate optimally. For more details, check out our [AgentOps documentation](https://ag2ai.github.io/autogen/docs/notebooks/agentchat_agentops/).
diff --git a/website/docs/Examples.md b/website/docs/Examples.md
index 051989a024..6e66940484 100644
--- a/website/docs/Examples.md
+++ b/website/docs/Examples.md
@@ -40,7 +40,7 @@ Links to notebook examples:
- Automated Continual Learning from New Data - [View Notebook](/docs/notebooks/agentchat_stream)
-- [AutoAnny](https://github.com/ag2labs/build-with-autogen/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen
+- [AutoAnny](https://github.com/ag2ai/build-with-autogen/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen
### Tool Use
@@ -59,7 +59,7 @@ Links to notebook examples:
### Human Involvement
-- Simple example in ChatGPT style [View example](https://github.com/ag2labs/build-with-autogen/blob/main/samples/simple_chat.py)
+- Simple example in ChatGPT style [View example](https://github.com/ag2ai/build-with-autogen/blob/main/samples/simple_chat.py)
- Auto Code Generation, Execution, Debugging and **Human Feedback** - [View Notebook](/docs/notebooks/agentchat_human_feedback)
- Automated Task Solving with GPT-4 + **Multiple Human Users** - [View Notebook](/docs/notebooks/agentchat_two_users)
- Agent Chat with **Async Human Inputs** - [View Notebook](/docs/notebooks/async_human_input)
@@ -91,7 +91,7 @@ Links to notebook examples:
### Long Context Handling
-
+
- Long Context Handling as A Capability - [View Notebook](/docs/notebooks/agentchat_transform_messages)
### Evaluation and Assessment
@@ -111,7 +111,7 @@ Links to notebook examples:
### Utilities
-- API Unification - [View Documentation with Code Example](https://ag2labs.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification)
+- API Unification - [View Documentation with Code Example](https://ag2ai.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification)
- Utility Functions to Help Managing API configurations effectively - [View Notebook](/docs/topics/llm_configuration)
### Inference Hyperparameters Tuning
@@ -120,5 +120,5 @@ AutoGen offers a cost-effective hyperparameter optimization technique [EcoOptiGe
Please find documentation about this feature [here](/docs/Use-Cases/enhanced_inference).
Links to notebook examples:
-* [Optimize for Code Generation](https://github.com/ag2labs/ag2/blob/main/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/ag2labs/ag2/blob/main/notebook/oai_completion.ipynb)
-* [Optimize for Math](https://github.com/ag2labs/ag2/blob/main/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/ag2labs/ag2/blob/main/notebook/oai_chatgpt_gpt4.ipynb)
+* [Optimize for Code Generation](https://github.com/ag2ai/ag2/blob/main/notebook/oai_completion.ipynb) | [Open in colab](https://colab.research.google.com/github/ag2ai/ag2/blob/main/notebook/oai_completion.ipynb)
+* [Optimize for Math](https://github.com/ag2ai/ag2/blob/main/notebook/oai_chatgpt_gpt4.ipynb) | [Open in colab](https://colab.research.google.com/github/ag2ai/ag2/blob/main/notebook/oai_chatgpt_gpt4.ipynb)
diff --git a/website/docs/FAQ.mdx b/website/docs/FAQ.mdx
index da7dff2480..f07780c638 100644
--- a/website/docs/FAQ.mdx
+++ b/website/docs/FAQ.mdx
@@ -34,8 +34,8 @@ In version >=1, OpenAI renamed their `api_base` parameter to `base_url`. So for
Yes. You currently have two options:
-- Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check https://ag2labs.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
-- You can supply your own custom model implementation and use it with Autogen. Please check https://ag2labs.github.io/autogen/blog/2024/01/26/Custom-Models for more information.
+- AutoGen can work with any API endpoint that complies with OpenAI-compatible RESTful APIs, e.g., serving a local LLM via FastChat or LM Studio. See https://ag2ai.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
+- You can supply your own custom model implementation and use it with AutoGen. See https://ag2ai.github.io/autogen/blog/2024/01/26/Custom-Models for more information.
## Handle Rate Limit Error and Timeout Error
@@ -52,9 +52,9 @@ When you call `initiate_chat` the conversation restarts by default. You can use
## `max_consecutive_auto_reply` vs `max_turn` vs `max_round`
-- [`max_consecutive_auto_reply`](https://ag2labs.github.io/autogen/docs/reference/agentchat/conversable_agent#max_consecutive_auto_reply) the maximum number of consecutive auto replie (a reply from an agent without human input is considered an auto reply). It plays a role when `human_input_mode` is not "ALWAYS".
-- [`max_turns` in `ConversableAgent.initiate_chat`](https://ag2labs.github.io/autogen/docs/reference/agentchat/conversable_agent#initiate_chat) limits the number of conversation turns between two conversable agents (without differentiating auto-reply and reply/input from human)
-- [`max_round` in GroupChat](https://ag2labs.github.io/autogen/docs/reference/agentchat/groupchat#groupchat-objects) specifies the maximum number of rounds in a group chat session.
+- [`max_consecutive_auto_reply`](https://ag2ai.github.io/autogen/docs/reference/agentchat/conversable_agent#max_consecutive_auto_reply) sets the maximum number of consecutive auto replies (a reply from an agent without human input is considered an auto reply). It plays a role when `human_input_mode` is not "ALWAYS".
+- [`max_turns` in `ConversableAgent.initiate_chat`](https://ag2ai.github.io/autogen/docs/reference/agentchat/conversable_agent#initiate_chat) limits the number of conversation turns between two conversable agents (without distinguishing between auto-replies and replies/input from humans).
+- [`max_round` in GroupChat](https://ag2ai.github.io/autogen/docs/reference/agentchat/groupchat#groupchat-objects) specifies the maximum number of rounds in a group chat session.
## How do we decide what LLM is used for each agent? How many agents can be used? How do we decide how many agents in the group?
@@ -106,7 +106,7 @@ for each code-execution agent, or set `AUTOGEN_USE_DOCKER` to `False` as an
environment variable.
You can also develop your AutoGen application in a docker container.
-For example, when developing in [GitHub codespace](https://codespaces.new/ag2labs/ag2?quickstart=1),
+For example, when developing in [GitHub codespace](https://codespaces.new/ag2ai/ag2?quickstart=1),
AutoGen runs in a docker container.
If you are not developing in GitHub Codespaces,
follow instructions [here](installation/Docker.md#option-1-install-and-run-autogen-in-docker)
@@ -159,7 +159,7 @@ Explanation: Per [this gist](https://gist.github.com/defulmere/8b9695e415a442710
(from [issue #478](https://github.com/microsoft/autogen/issues/478))
-See here https://ag2labs.github.io/autogen/docs/reference/agentchat/conversable_agent/#register_reply
+See here https://ag2ai.github.io/autogen/docs/reference/agentchat/conversable_agent/#register_reply
For example, you can register a reply function that gets called when `generate_reply` is called for an agent.
@@ -188,11 +188,11 @@ In the above, we register a `print_messages` function that is called each time t
## How to get last message ?
-Refer to https://ag2labs.github.io/autogen/docs/reference/agentchat/conversable_agent/#last_message
+Refer to https://ag2ai.github.io/autogen/docs/reference/agentchat/conversable_agent/#last_message
## How to get each agent message ?
-Please refer to https://ag2labs.github.io/autogen/docs/reference/agentchat/conversable_agent#chat_messages
+Please refer to https://ag2ai.github.io/autogen/docs/reference/agentchat/conversable_agent#chat_messages
## When using autogen docker, is it always necessary to reinstall modules?
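The FAQ's distinction between the three termination caps can be made concrete with a toy simulation. This is plain Python, not the autogen API: the loop below only mimics the stopping behavior the FAQ describes, with every agent reply counted as an auto reply (i.e., `human_input_mode` never interrupts).

```python
# Toy simulation (not autogen) of the three termination caps:
#   max_turns                  - total messages exchanged between two agents
#   max_consecutive_auto_reply - auto replies in a row without human input
#   max_round                  - rounds in a group chat session
def run_chat(max_turns=None, max_consecutive_auto_reply=None, max_round=None):
    turns = 0             # one turn = one message from either agent
    consecutive_auto = 0  # resets on human input; never resets here
    rounds = 0            # group-chat rounds
    transcript = []
    while True:
        if max_round is not None and rounds >= max_round:
            transcript.append("stopped: max_round reached")
            break
        if max_turns is not None and turns >= max_turns:
            transcript.append("stopped: max_turns reached")
            break
        if (max_consecutive_auto_reply is not None
                and consecutive_auto >= max_consecutive_auto_reply):
            transcript.append("stopped: max_consecutive_auto_reply reached")
            break
        transcript.append(f"auto reply #{turns + 1}")
        turns += 1
        consecutive_auto += 1  # no human input in this simulation
        rounds += 1
    return transcript

print(run_chat(max_turns=3))                   # stops after 3 messages
print(run_chat(max_consecutive_auto_reply=2))  # stops after 2 auto replies
print(run_chat(max_round=4))                   # stops after 4 rounds
```

With a human replying in between, `consecutive_auto` would reset, which is why `max_consecutive_auto_reply` only bites when `human_input_mode` is not "ALWAYS".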
diff --git a/website/docs/Getting-Started.mdx b/website/docs/Getting-Started.mdx
index 7611757e03..a3ad36956c 100644
--- a/website/docs/Getting-Started.mdx
+++ b/website/docs/Getting-Started.mdx
@@ -121,7 +121,7 @@ Learn more about configuring LLMs for agents [here](/docs/topics/llm_configurati
#### Multi-Agent Conversation Framework
Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.
-By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/ag2labs/ag2/blob/main/test/twoagent.py),
+By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
The figure below shows an example conversation flow with AutoGen.
@@ -138,10 +138,10 @@ The figure below shows an example conversation flow with AutoGen.
- Follow on [Twitter](https://twitter.com/Chi_Wang_)
- See our [roadmaps](https://aka.ms/autogen-roadmap)
-If you like our project, please give it a [star](https://github.com/ag2labs/ag2/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/contributor-guide/contributing).
+If you like our project, please give it a [star](https://github.com/ag2ai/ag2/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/contributor-guide/contributing).
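The two-agent conversation flow that `twoagent.py` demonstrates can be sketched with mock objects. The `ToyAgent` class and both reply functions below are hypothetical stand-ins for `AssistantAgent` and `UserProxyAgent`: messages bounce between the two agents until the assistant signals `TERMINATE`, which is the same back-and-forth pattern `initiate_chat` automates.

```python
# Toy sketch of the two-agent loop (not the real autogen classes).
class ToyAgent:
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn
        self.chat_messages = []  # mirrors the chat_messages history idea

    def receive(self, message, sender):
        self.chat_messages.append({"from": sender.name, "content": message})
        return self.reply_fn(message)

def assistant_reply(message):
    # Pretend the LLM answers once, then signals completion.
    return "TERMINATE" if "result" in message else "Here is the result: 42"

def proxy_reply(message):
    # A real user proxy would execute code; this one just acknowledges.
    return f"Executed. Output of '{message}' received."

assistant = ToyAgent("assistant", assistant_reply)
user_proxy = ToyAgent("user_proxy", proxy_reply)

# Equivalent in spirit to user_proxy.initiate_chat(assistant, message=...).
message = "Write python code to print Hello World!"
sender, recipient = user_proxy, assistant
while message != "TERMINATE":
    message = recipient.receive(message, sender)
    sender, recipient = recipient, sender

print(len(assistant.chat_messages), len(user_proxy.chat_messages))  # 2 1
```

In AutoGen proper, the same loop is driven by `initiate_chat`, with termination controlled by the caps discussed in the FAQ rather than a literal `TERMINATE` string comparison in user code.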