diff --git a/website/README.md b/website/README.md
index f65890830a..9cd4647010 100644
--- a/website/README.md
+++ b/website/README.md
@@ -1,40 +1,13 @@
-# Website
+## Development
 
-This website is built using [Docusaurus 3](https://docusaurus.io/), a modern static website generator.
+Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview documentation changes locally. To install it, run the following command:
 
-## Prerequisites
-
-To build and test documentation locally, begin by downloading and installing [Node.js](https://nodejs.org/en/download/), and then installing [Yarn](https://classic.yarnpkg.com/en/).
-On Windows, you can install via the npm package manager (npm) which comes bundled with Node.js:
-
-```console
-npm install --global yarn
 ```
-
-## Installation
-
-```console
-pip install pydoc-markdown pyyaml colored
-cd website
-yarn install
+npm install
 ```
 
-### Install Quarto
-
-`quarto` is used to render notebooks.
-
-Install it [here](https://github.com/quarto-dev/quarto-cli/releases).
+Run the following command at the root of the documentation (where `mint.json` lives):
 
-> Note: Ensure that your `quarto` version is `1.5.23` or higher.
-
-## Local Development
-
-Navigate to the `website` folder and run:
-
-```console
-pydoc-markdown
-python ./process_notebooks.py render
-yarn start
 ```
-
-This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
+npm run mintlify:dev
+```
diff --git a/website/blog/2023-04-21-LLM-tuning-math/index.md b/website/blog/2023-04-21-LLM-tuning-math/index.mdx
similarity index 100%
rename from website/blog/2023-04-21-LLM-tuning-math/index.md
rename to website/blog/2023-04-21-LLM-tuning-math/index.mdx
diff --git a/website/blog/2023-07-14-Local-LLMs/index.md b/website/blog/2023-07-14-Local-LLMs/index.mdx
similarity index 98%
rename from website/blog/2023-07-14-Local-LLMs/index.md
rename to website/blog/2023-07-14-Local-LLMs/index.mdx
index 2747509885..e650c73988 100644
--- a/website/blog/2023-07-14-Local-LLMs/index.md
+++ b/website/blog/2023-07-14-Local-LLMs/index.mdx
@@ -64,7 +64,7 @@ class CompletionResponseStreamChoice(BaseModel):
 ```
 
-## Interact with model using `oai.Completion` (requires openai<1)
+## Interact with model using `oai.Completion` (requires openai{'<'}1)
 
 Now the models can be directly accessed through openai-python library as well as `autogen.oai.Completion` and `autogen.oai.ChatCompletion`.
 
diff --git a/website/blog/2024-02-02-AutoAnny/index.mdx b/website/blog/2024-02-02-AutoAnny/index.mdx
index 0968a1feec..fb03d00964 100644
--- a/website/blog/2024-02-02-AutoAnny/index.mdx
+++ b/website/blog/2024-02-02-AutoAnny/index.mdx
@@ -5,10 +5,8 @@ authors:
   tags: [AutoGen]
 ---
 
-import AutoAnnyLogo from './img/AutoAnnyLogo.jpg';
-
-  AutoAnny Logo
+  AutoAnny Logo

Anny is a Discord bot powered by AutoGen to help AutoGen's Discord server.

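The Local-LLMs post renamed in this diff covers serving local models behind an OpenAI-compatible endpoint (FastChat or LM Studio). A minimal sketch of the consumer-side configuration for such an endpoint follows; the model name, port, and placeholder key are illustrative assumptions, not values taken from the post:

```python
# Sketch: an AutoGen-style config list pointing at a locally served,
# OpenAI-compatible endpoint. All concrete values here are placeholders.
config_list = [
    {
        "model": "chatglm2-6b",                  # whatever model the local server exposes
        "base_url": "http://localhost:8000/v1",  # the local OpenAI-compatible endpoint
        "api_key": "NULL",                       # most local servers ignore the key
    }
]

# Every entry should carry the fields an OpenAI-compatible client expects.
for entry in config_list:
    assert {"model", "base_url", "api_key"} <= entry.keys()
```

The same dictionary shape works for any server that speaks the OpenAI REST protocol; only `base_url` and `model` change.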
diff --git a/website/blog/2024-03-03-AutoGen-Update/index.mdx b/website/blog/2024-03-03-AutoGen-Update/index.mdx
index d8dc0eefcd..3a9a700496 100644
--- a/website/blog/2024-03-03-AutoGen-Update/index.mdx
+++ b/website/blog/2024-03-03-AutoGen-Update/index.mdx
@@ -38,10 +38,10 @@ Many users have deep understanding of the value in different dimensions, such as
 
 > The same reason autogen is significant is the same reason OOP is a good idea. Autogen packages up all that complexity into an agent I can create in one line, or modify with another.
 
-
+*/}
 
 Over time, more and more users share their experiences in using or contributing to autogen.
 
diff --git a/website/docs/Examples.md b/website/docs/Examples.mdx
similarity index 97%
rename from website/docs/Examples.md
rename to website/docs/Examples.mdx
index c4e0bce998..896de486f6 100644
--- a/website/docs/Examples.md
+++ b/website/docs/Examples.mdx
@@ -42,7 +42,7 @@ Links to notebook examples:
 ### Applications
 
 - Automated Continual Learning from New Data - [View Notebook](/docs/notebooks/agentchat_stream)
-
+{/* - [OptiGuide](https://github.com/microsoft/optiguide) - Coding, Tool Using, Safeguarding & Question Answering for Supply Chain Optimization */}
 - [AutoAnny](https://github.com/ag2ai/build-with-ag2/tree/main/samples/apps/auto-anny) - A Discord bot built using AutoGen
 
 ### RAG
@@ -98,7 +98,7 @@ Links to notebook examples:
 ### Long Context Handling
 
-
+{/* - Conversations with Chat History Compression Enabled - [View Notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agentchat_compression.ipynb) */}
 - Long Context Handling as A Capability - [View Notebook](/docs/notebooks/agentchat_transform_messages)
 
 ### Evaluation and Assessment
diff --git a/website/docs/FAQ.mdx b/website/docs/FAQ.mdx
index 5d6152bcc8..aa12938b9e 100644
--- a/website/docs/FAQ.mdx
+++ b/website/docs/FAQ.mdx
@@ -1,8 +1,7 @@
-import TOCInline from "@theme/TOCInline";
-
-# Frequently Asked Questions
-
-
+---
+title: Frequently Asked Questions
+sidebarTitle: FAQ
+---
 
 ## Install the correct package - `autogen`
 
@@ -34,8 +33,8 @@ In version >=1, OpenAI renamed their `api_base` parameter to `base_url`. So for
 
 Yes. You currently have two options:
 
-- Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check https://ag2ai.github.io/ag2/blog/2023/07/14/Local-LLMs for an example.
-- You can supply your own custom model implementation and use it with Autogen. Please check https://ag2ai.github.io/ag2/blog/2024/01/26/Custom-Models for more information.
+- Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check [here](/blog/2023-07-14-Local-LLMs) for an example.
+- You can supply your own custom model implementation and use it with Autogen. Please check [here](/blog/2024-01-26-Custom-Models) for more information.
 
 ## Handle Rate Limit Error and Timeout Error
 
@@ -52,9 +51,9 @@ When you call `initiate_chat` the conversation restarts by default. You can use
 
 ## `max_consecutive_auto_reply` vs `max_turn` vs `max_round`
 
-- [`max_consecutive_auto_reply`](https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent#max_consecutive_auto_reply) the maximum number of consecutive auto replie (a reply from an agent without human input is considered an auto reply). It plays a role when `human_input_mode` is not "ALWAYS".
-- [`max_turns` in `ConversableAgent.initiate_chat`](https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent#initiate_chat) limits the number of conversation turns between two conversable agents (without differentiating auto-reply and reply/input from human)
-- [`max_round` in GroupChat](https://ag2ai.github.io/ag2/docs/reference/agentchat/groupchat#groupchat-objects) specifies the maximum number of rounds in a group chat session.
+- [`max_consecutive_auto_reply`](/docs/reference/agentchat/conversable_agent#max_consecutive_auto_reply) sets the maximum number of consecutive auto replies (a reply from an agent without human input is considered an auto reply). It plays a role when `human_input_mode` is not "ALWAYS".
+- [`max_turns` in `ConversableAgent.initiate_chat`](/docs/reference/agentchat/conversable_agent#initiate_chat) limits the number of conversation turns between two conversable agents (without differentiating auto-reply and reply/input from human).
+- [`max_round` in GroupChat](/docs/reference/agentchat/groupchat#groupchat-objects) specifies the maximum number of rounds in a group chat session.
 
 ## How do we decide what LLM is used for each agent? How many agents can be used? How do we decide how many agents in the group?
 
@@ -159,7 +158,7 @@ Explanation: Per [this gist](https://gist.github.com/defulmere/8b9695e415a442710
 
 (from [issue #478](https://github.com/microsoft/autogen/issues/478))
 
-See here https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent/#register_reply
+See [here](/docs/reference/agentchat/conversable_agent/#register_reply).
 
 For example, you can register a reply function that gets called when `generate_reply` is called for an agent.
 
@@ -188,11 +187,11 @@ In the above, we register a `print_messages` function that is called each time t
 
 ## How to get last message ?
 
-Refer to https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent/#last_message
+Refer to [`last_message`](/docs/reference/agentchat/conversable_agent/#last_message).
 
 ## How to get each agent message ?
 
-Please refer to https://ag2ai.github.io/ag2/docs/reference/agentchat/conversable_agent#chat_messages
+Please refer to [`chat_messages`](/docs/reference/agentchat/conversable_agent#chat_messages).
 
 ## When using autogen docker, is it always necessary to reinstall modules?
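The FAQ hunk above keeps the description of registering a `print_messages` function via `register_reply`. A minimal sketch of such a reply function follows; the four-argument signature and the `(final, reply)` return convention are taken from the `conversable_agent#register_reply` reference linked in that section, so verify against that page before relying on them:

```python
# Sketch of a reply function suitable for `agent.register_reply(...)`.
# It is invoked whenever `generate_reply` runs for the agent it is
# registered on; returning (False, None) means "not final", letting the
# agent's remaining reply functions produce the actual reply.
def print_messages(recipient, messages, sender, config):
    if messages:
        latest = messages[-1].get("content", "")
        print(f"Message from {sender} to {recipient}: {latest}")
    return False, None
```

It would then be attached with something like `agent.register_reply([Agent, None], reply_func=print_messages, config={})`; treat that exact trigger list as an assumption and check the linked reference.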
@@ -285,4 +284,4 @@ RUN apt-get clean && \
     apt-get install sudo git npm # and whatever packages need to be installed in this specific version of the devcontainer
 ```
 
-This is a combination of StackOverflow suggestions [here](https://stackoverflow.com/a/48777773/2114580) and [here](https://stackoverflow.com/a/76092743/2114580).
+This is a combination of StackOverflow suggestions [here](https://stackoverflow.com/a/48777773/2114580) and [here](https://stackoverflow.com/a/76092743/2114580).
\ No newline at end of file
diff --git a/website/docs/Gallery.mdx b/website/docs/Gallery.mdx
index 8a5cd334ba..2f00bf1cf7 100644
--- a/website/docs/Gallery.mdx
+++ b/website/docs/Gallery.mdx
@@ -2,8 +2,8 @@
 hide_table_of_contents: true
 ---
 
-import GalleryPage from '../src/components/GalleryPage';
-import galleryData from "../src/data/gallery.json";
+import GalleryPage from "/snippets/components/GalleryPage.js";
+import galleryData from "/snippets/data/gallery.json";
 
 # Gallery
diff --git a/website/docs/Getting-Started.mdx b/website/docs/Getting-Started.mdx
index 4a75431597..427ce74be6 100644
--- a/website/docs/Getting-Started.mdx
+++ b/website/docs/Getting-Started.mdx
@@ -1,7 +1,6 @@
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-# Getting Started
+---
+title: "Getting Started"
+---
 
 AG2 (formerly AutoGen) is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AG2 aims to provide an easy-to-use
@@ -10,7 +9,7 @@ like PyTorch for Deep Learning. It offers features such as agents that can converse
 with other agents, LLM and tool use support, autonomous and human-in-the-loop workflows,
 and multi-agent conversation patterns.
 
-![AG2 Overview](/img/autogen_agentchat.png)
+![AG2 Overview](/static/img/autogen_agentchat.png)
 
 ### Main Features
@@ -37,12 +36,15 @@ Microsoft, Penn State University, and University of Washington.
 ```sh
 pip install autogen
 ```
-:::tip
-You can also install with different [optional dependencies](/docs/installation/Optional-Dependencies).
-:::
+
+
+  You can also install with different [optional
+  dependencies](/website/docs/installation/Optional-Dependencies).
+
+
- + ```python import os @@ -59,12 +61,14 @@ user_proxy.initiate_chat( ) ``` - - + + -:::warning -When asked, be sure to check the generated code before continuing to ensure it is safe to run. -::: +
+
+  When asked, be sure to check the generated code before continuing to ensure it is safe to run.
+
+
```python import os @@ -85,8 +89,8 @@ user_proxy.initiate_chat( ) ``` -
- + + ```python import os @@ -110,13 +114,16 @@ with autogen.coding.DockerCommandLineCodeExecutor(work_dir="coding") as code_exe Open `coding/plot.png` to see the generated plot. - +
-:::tip -Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration). -::: +
+
+  Learn more about configuring LLMs for agents
+  [here](/website/docs/topics/llm_configuration).
+
+
 #### Multi-Agent Conversation Framework
 
@@ -125,7 +132,7 @@ By automating chat among multiple capable agents, one can easily make them colle
 
 The figure below shows an example conversation flow with AG2.
 
-![Agent Chat Example](/img/chat_example.png)
+![Agent Chat Example](/static/img/chat_example.png)
 
 ### Where to Go Next?
 
@@ -141,7 +148,7 @@ The figure below shows an example conversation flow with AG2.
 
 If you like our project, please give it a [star](https://github.com/ag2ai/ag2) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/contributor-guide/contributing).
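The FAQ touched in this diff names a "Handle Rate Limit Error and Timeout Error" section without showing the mechanism. The usual pattern behind such handling is retry with exponential backoff; a stdlib-only sketch of that pattern (a generic illustration, not AG2's own retry configuration):

```python
import random
import time

def retry_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on any exception, sleeping exponentially longer each time."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the original error
            # Exponential backoff with a little jitter: ~1s, 2s, 4s, ...
            time.sleep(base_delay * 2**attempt + random.uniform(0, 0.1))
```

In practice one would catch only the client's rate-limit and timeout exception types rather than bare `Exception`, and wrap the API call in `call`.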