
Merge pull request #168 from ag2ai/fix_links
fix broken links
skzhang1 authored Dec 7, 2024
2 parents 9478921 + 9fb8d68 commit 9338c7a
Showing 13 changed files with 33 additions and 33 deletions.
2 changes: 1 addition & 1 deletion website/blog/2023-04-21-LLM-tuning-math/index.md
@@ -71,4 +71,4 @@ The need for model selection, parameter tuning and cost saving is not specific t
* [Research paper about the tuning technique](https://arxiv.org/abs/2303.04673)
* [Documentation about inference tuning](/docs/Use-Cases/enhanced_inference)

-*Do you have any experience to share about LLM applications? Do you like to see more support or research of LLM optimization or automation? Please join our [Discord](https://aka.ms/autogen-dc) server for discussion.*
+*Do you have any experience to share about LLM applications? Do you like to see more support or research of LLM optimization or automation? Please join our [Discord](https://discord.gg/pAbnFJrkgZ) server for discussion.*
2 changes: 1 addition & 1 deletion website/blog/2023-05-18-GPT-adaptive-humaneval/index.mdx
@@ -170,7 +170,7 @@ There are many directions of extensions in research and development:
- Automate the process of optimizing the configurations.
- Build adaptive agents for different applications.

-_Do you find this approach applicable to your use case? Do you have any other challenge to share about LLM applications? Do you like to see more support or research of LLM optimization or automation? Please join our [Discord](https://aka.ms/autogen-dc) server for discussion._
+_Do you find this approach applicable to your use case? Do you have any other challenge to share about LLM applications? Do you like to see more support or research of LLM optimization or automation? Please join our [Discord](https://discord.gg/pAbnFJrkgZ) server for discussion._

## For Further Reading

2 changes: 1 addition & 1 deletion website/blog/2023-06-28-MathChat/index.mdx
@@ -93,4 +93,4 @@ Further work can be done to enhance this framework or math problem-solving in ge
- [Research paper of MathChat](https://arxiv.org/abs/2306.01337)
- [Documentation about `autogen`](/docs/Getting-Started)

-_Are you working on applications that involve math problem-solving? Would you appreciate additional research or support on the application of LLM-based agents for math problem-solving? Please join our [Discord](https://aka.ms/autogen-dc) server for discussion._
+_Are you working on applications that involve math problem-solving? Would you appreciate additional research or support on the application of LLM-based agents for math problem-solving? Please join our [Discord](https://discord.gg/pAbnFJrkgZ) server for discussion._
2 changes: 1 addition & 1 deletion website/blog/2023-10-26-TeachableAgent/index.mdx
@@ -394,4 +394,4 @@ The solution is: 8 * 3 + 7 * 5

## Conclusion

-`Teachability` is still under active research and development. For any problems you find or improvements you have in mind, please join our discussions in this repo and on our [Discord channel](https://aka.ms/autogen-dc). We look forward to seeing how you and the rest of the community can use and improve teachable agents in AutoGen!
+`Teachability` is still under active research and development. For any problems you find or improvements you have in mind, please join our discussions in this repo and on our [Discord channel](https://discord.gg/pAbnFJrkgZ). We look forward to seeing how you and the rest of the community can use and improve teachable agents in AutoGen!
2 changes: 1 addition & 1 deletion website/blog/2023-11-13-OAI-assistants/index.mdx
@@ -21,7 +21,7 @@ Checkout example notebooks for reference:
Earlier last week, OpenAI introduced [GPTs](https://openai.com/blog/introducing-gpts), giving users ability to create custom ChatGPTs tailored for them.
*But what if these individual GPTs could collaborate to do even more?*
Fortunately, because of AutoGen, this is now a reality!
-AutoGen has been pioneering agents and supporting [multi-agent workflows](https://aka.ms/autogen-pdf) since earlier this year, and now (starting with version 0.2.0b5) we are introducing compatibility with the [Assistant API](https://openai.com/blog/introducing-gpts), which is currently in beta preview.
+AutoGen has been pioneering agents and supporting [multi-agent workflows](https://openreview.net/pdf?id=BAakY1hNKS) since earlier this year, and now (starting with version 0.2.0b5) we are introducing compatibility with the [Assistant API](https://openai.com/blog/introducing-gpts), which is currently in beta preview.

To accomplish this, we've added a new (experimental) agent called the `GPTAssistantAgent` that
lets you seamlessly add these new OpenAI assistants into AutoGen-based multi-agent workflows.
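As a rough sketch of how that looks in code (the contrib import path, instructions, model, and task shown here are assumptions for illustration and may differ by version):

```python
# Rough sketch: pair an OpenAI-Assistant-backed agent with a local user proxy.
# Import path and constructor arguments follow the contrib module; details may
# differ across AutoGen versions.
from autogen import UserProxyAgent
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": "<your-openai-api-key>"}],
}

assistant = GPTAssistantAgent(
    name="coder",
    instructions="You are a helpful coding assistant.",
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The user proxy drives the chat and executes any code the assistant writes.
user_proxy.initiate_chat(assistant, message="Print 'Hello, Assistant API!' in Python.")
```

The assistant itself runs on OpenAI's side via the Assistant API, while the local `UserProxyAgent` executes any code it produces, so the pair slots into AutoGen workflows like any other two-agent chat.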
4 changes: 2 additions & 2 deletions website/blog/2023-11-20-AgentEval/index.mdx
@@ -14,7 +14,7 @@ tags: [LLM, GPT, evaluation, task utility]
**TL;DR:**
* As a developer of an LLM-powered application, how can you assess the utility it brings to end users while helping them with their tasks?
* To shed light on the question above, we introduce `AgentEval` — the first version of the framework to assess the utility of any LLM-powered application crafted to assist users in specific tasks. AgentEval aims to simplify the evaluation process by automatically proposing a set of criteria tailored to the unique purpose of your application. This allows for a comprehensive assessment, quantifying the utility of your application against the suggested criteria.
-* We demonstrate how `AgentEval` work using [math problems dataset](https://ag2ai.github.io/ag2/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc).
+* We demonstrate how `AgentEval` work using [math problems dataset](https://ag2ai.github.io/ag2/blog/2023/06/28/MathChat) as an example in the [following notebook](https://github.com/ag2ai/ag2/blob/main/notebook/agenteval_cq_math.ipynb). Any feedback would be useful for future development. Please contact us on our [Discord](https://discord.gg/pAbnFJrkgZ).


## Introduction
@@ -110,7 +110,7 @@ To mitigate the limitations mentioned above, we are working on VerifierAgent, wh
## Summary
`CriticAgent` and `QuantifierAgent` can be applied to the logs of any type of application, providing you with an in-depth understanding of the utility your solution brings to the user for a given task.

-We would love to hear about how AgentEval works for your application. Any feedback would be useful for future development. Please contact us on our [Discord](http://aka.ms/autogen-dc).
+We would love to hear about how AgentEval works for your application. Any feedback would be useful for future development. Please contact us on our [Discord](https://discord.gg/pAbnFJrkgZ).


## Previous Research
2 changes: 1 addition & 1 deletion website/blog/2023-12-01-AutoGenStudio/index.mdx
@@ -219,7 +219,7 @@ A: To reset your conversation history, you can delete the `database.sqlite` file
A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages.
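For example, one quick way to peek at that file is a few lines of standard-library Python (a sketch; the table layout varies by AutoGen Studio version, so this just lists the tables):

```python
import sqlite3

# Open the AutoGen Studio database and list its tables; then query whichever
# table holds the message history in your version of the schema.
con = sqlite3.connect("database.sqlite")
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)
con.close()
```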

**Q: Where can I find documentation and support for AutoGen Studio?**
-A: We are constantly working to improve AutoGen Studio. For the latest updates, please refer to the [AutoGen Studio Readme](https://github.com/ag2ai/build-with-ag2/blob/main/samples/apps/autogen-studio). For additional support, please open an issue on [GitHub](https://github.com/ag2ai/ag2) or ask questions on [Discord](https://aka.ms/autogen-dc).
+A: We are constantly working to improve AutoGen Studio. For the latest updates, please refer to the [AutoGen Studio Readme](https://github.com/ag2ai/build-with-ag2/blob/main/samples/apps/autogen-studio). For additional support, please open an issue on [GitHub](https://github.com/ag2ai/ag2) or ask questions on [Discord](https://discord.gg/pAbnFJrkgZ).

**Q: Can I use Other Models with AutoGen Studio?**
Yes. AutoGen standardizes on the openai model api format, and you can use any api server that offers an openai compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details including `model name`, `api key`, `base url`, `model type` and `api version`. For Azure OpenAI models, you can find these details in the Azure portal. Note that for Azure OpenAI, the `model name` is the deployment id or engine, and the `model type` is "azure".
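As an illustrative sketch (all values are placeholders), those UI fields map onto a model configuration entry roughly like this:

```python
# Illustrative mapping of the AutoGen Studio UI fields to a model config entry.
# Values are placeholders; for Azure OpenAI the model name is the deployment id.
azure_model = {
    "model": "my-gpt4-deployment",                            # model name
    "api_key": "<your-azure-api-key>",                        # api key
    "base_url": "https://<your-resource>.openai.azure.com/",  # base url
    "api_type": "azure",                                      # model type
    "api_version": "2024-02-01",                              # api version
}

# An agent's llm_config then references one or more such entries.
llm_config = {"config_list": [azure_model]}
```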
2 changes: 1 addition & 1 deletion website/blog/2024-01-25-AutoGenBench/index.mdx
@@ -145,4 +145,4 @@ For an up to date tracking of our work items on this project, please see [AutoGe

## Call for Participation

-Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/ag2ai/build-with-ag2/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://aka.ms/autogen-dc) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel!
+Finally, we want to end this blog post with an open call for contributions. AutoGenBench is still nascent, and has much opportunity for improvement. New benchmarks are constantly being published, and will need to be added. Everyone may have their own distinct set of metrics that they care most about optimizing, and these metrics should be onboarded. To this end, we welcome any and all contributions to this corner of the AutoGen project. If contributing is something that interests you, please see the [contributor’s guide](https://github.com/ag2ai/build-with-ag2/blob/main/samples/tools/autogenbench/CONTRIBUTING.md) and join our [Discord](https://discord.gg/pAbnFJrkgZ) discussion in the [#autogenbench](https://discord.com/channels/1153072414184452236/1199851779328847902) channel!
6 changes: 3 additions & 3 deletions website/blog/2024-03-03-AutoGen-Update/index.mdx
@@ -96,8 +96,8 @@ AutoGen is used or contributed by companies, organizations, universities from A

AutoGen has a large and active community of developers, researchers and AI practitioners.

-- 22K+ stars on [GitHub](https://aka.ms/autogen-gh), 3K+ forks
-- 14K+ members on [Discord](https://aka.ms/autogen-dc)
+- 22K+ stars on [GitHub](https://github.com/ag2ai/ag2), 3K+ forks
+- 14K+ members on [Discord](https://discord.gg/pAbnFJrkgZ)
- 100K+ downloads per months
- 3M+ views on Youtube (400+ community-generated videos)
- 100+ citations on [Google Scholar](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=IiSNwnAAAAAJ&citation_for_view=IiSNwnAAAAAJ:zCpYd49hD24C)
@@ -184,6 +184,6 @@ Despite all the exciting progress, there are tons of open problems, issues and f
We need more help to tackle the challenging problems and accelerate the development.
You're all welcome to join our community and define the future of AI agents together.

-_Do you find this update helpful? Would you like to join force? Please join our [Discord](https://aka.ms/autogen-dc) server for discussion._
+_Do you find this update helpful? Would you like to join force? Please join our [Discord](https://discord.gg/pAbnFJrkgZ) server for discussion._

![contributors](img/contributors.png)
28 changes: 14 additions & 14 deletions website/docs/Getting-Started.mdx
@@ -3,33 +3,33 @@ import TabItem from "@theme/TabItem";

# Getting Started

-AutoGen is an open-source programming framework for building AI agents and facilitating
-cooperation among multiple agents to solve tasks. AutoGen aims to provide an easy-to-use
+AG2 (formerly AutoGen) is an open-source programming framework for building AI agents and facilitating
+cooperation among multiple agents to solve tasks. AG2 aims to provide an easy-to-use
and flexible framework for accelerating development and research on agentic AI,
like PyTorch for Deep Learning. It offers features such as agents that can converse
with other agents, LLM and tool use support, autonomous and human-in-the-loop workflows,
and multi-agent conversation patterns.

-![AutoGen Overview](/img/autogen_agentchat.png)
+![AG2 Overview](/img/autogen_agentchat.png)

### Main Features

-- AutoGen enables building next-gen LLM applications based on [multi-agent
+- AG2 enables building next-gen LLM applications based on [multi-agent
conversations](/docs/Use-Cases/agent_chat) with minimal effort. It simplifies
the orchestration, automation, and optimization of a complex LLM workflow. It
maximizes the performance of LLM models and overcomes their weaknesses.
- It supports [diverse conversation
patterns](/docs/Use-Cases/agent_chat#supporting-diverse-conversation-patterns)
for complex workflows. With customizable and conversable agents, developers can
-use AutoGen to build a wide range of conversation patterns concerning
+use AG2 to build a wide range of conversation patterns concerning
conversation autonomy, the number of agents, and agent conversation topology.
- It provides a collection of working systems with different complexities. These
systems span a [wide range of
applications](/docs/Use-Cases/agent_chat#diverse-applications-implemented-with-autogen)
-from various domains and complexities. This demonstrates how AutoGen can
+from various domains and complexities. This demonstrates how AG2 can
easily support diverse conversation patterns.

-AutoGen is powered by collaborative [research studies](/docs/Research) from
+AG2 is powered by collaborative [research studies](/docs/Research) from
Microsoft, Penn State University, and University of Washington.

### Quickstart
@@ -120,25 +120,25 @@ Learn more about configuring LLMs for agents [here](/docs/topics/llm_configurati

#### Multi-Agent Conversation Framework

-Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.
+AG2 enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools, and humans.
By automating chat among multiple capable agents, one can easily make them collectively perform tasks autonomously or with human feedback, including tasks that require using tools via code. For [example](https://github.com/ag2ai/ag2/blob/main/test/twoagent.py),
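a minimal two-agent script in this spirit looks roughly like the sketch below (the model configuration file, working directory, and task are illustrative; the linked `twoagent.py` is the maintained version):

```python
# Minimal sketch: an LLM-backed assistant plus a user proxy that executes
# the code the assistant writes.
from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

# Load model/API-key entries from an OAI_CONFIG_LIST file or environment variable.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The user proxy kicks off the chat and runs generated code until the task ends.
user_proxy.initiate_chat(
    assistant,
    message="Plot a chart of NVDA and TESLA stock price change YTD.",
)
```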

-The figure below shows an example conversation flow with AutoGen.
+The figure below shows an example conversation flow with AG2.

![Agent Chat Example](/img/chat_example.png)

### Where to Go Next?

-- Go through the [tutorial](/docs/tutorial/introduction) to learn more about the core concepts in AutoGen
+- Go through the [tutorial](/docs/tutorial/introduction) to learn more about the core concepts in AG2
- Read the examples and guides in the [notebooks section](/docs/notebooks)
- Understand the use cases for [multi-agent conversation](/docs/Use-Cases/agent_chat) and [enhanced LLM inference](/docs/Use-Cases/enhanced_inference)
- Read the [API](/docs/reference/agentchat/conversable_agent/) docs
-- Learn about [research](/docs/Research) around AutoGen
-- Chat on [Discord](https://aka.ms/autogen-dc)
+- Learn about [research](/docs/Research) around AG2
+- Chat on [Discord](https://discord.gg/pAbnFJrkgZ)
- Follow on [Twitter](https://twitter.com/Chi_Wang_)
-- See our [roadmaps](https://aka.ms/autogen-roadmap)
+- See our [roadmaps](https://github.com/ag2ai/ag2/issues?q=is%3Aissue+is%3Aopen+label%3Aroadmap)

-If you like our project, please give it a [star](https://github.com/ag2ai/ag2/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/contributor-guide/contributing).
+If you like our project, please give it a [star](https://github.com/ag2ai/ag2) on GitHub. If you are interested in contributing, please read [Contributor's Guide](/docs/contributor-guide/contributing).

<iframe
src="https://ghbtns.com/github-btn.html?user=ag2ai&amp;repo=autogen&amp;type=star&amp;count=true&amp;size=large"
2 changes: 1 addition & 1 deletion website/docs/contributor-guide/docker.md
@@ -48,4 +48,4 @@ docker run -it -p 8081:3000 -v /home/AutoGenDeveloper/autogen-newcode:newstuff/
## Develop in Remote Container

If you use vscode, you can open the ag2 folder in a [Container](https://code.visualstudio.com/docs/remote/containers).
-We have provided the configuration in [devcontainer](https://github.com/ag2ai/ag2/blob/main/.devcontainer). They can be used in GitHub codespace too. Developing AutoGen in dev containers is recommended.
+We have provided the configuration in [devcontainer](https://github.com/ag2ai/ag2/blob/main/.devcontainer). They can be used in GitHub codespace too. Developing AG2 in dev containers is recommended.
6 changes: 3 additions & 3 deletions website/docs/installation/Installation.mdx
@@ -5,7 +5,7 @@ import TabItem from '@theme/TabItem';

## Create a virtual environment (optional)

-When installing AutoGen locally, we recommend using a virtual environment for the installation. This will ensure that the dependencies for AutoGen are isolated from the rest of your system.
+When installing AG2 locally, we recommend using a virtual environment for the installation. This will ensure that the dependencies for AG2 are isolated from the rest of your system.

<Tabs>
<TabItem value="venv" label="venv" default>
@@ -64,9 +64,9 @@ When installing AutoGen locally, we recommend using a virtual environment for th
</TabItem>
</Tabs>

-## Install AutoGen
+## Install AG2

-AutoGen requires **Python version >= 3.8, < 3.14**. It can be installed from pip:
+AG2 requires **Python version >= 3.8, < 3.14**. It can be installed from pip:

```bash
pip install autogen
6 changes: 3 additions & 3 deletions website/docs/installation/Optional-Dependencies.md
@@ -2,7 +2,7 @@

## Different LLMs

-AutoGen installs OpenAI package by default. To use LLMs by other providers, you can install the following packages:
+AG2 installs OpenAI package by default. To use LLMs by other providers, you can install the following packages:

```bash
pip install autogen[gemini,anthropic,mistral,together,groq,cohere]
@@ -86,7 +86,7 @@ Example notebooks:

## Teachability

-To use Teachability, please install AutoGen with the [teachable] option.
+To use Teachability, please install AG2 with the [teachable] option.

```bash
pip install "autogen[teachable]"
@@ -126,7 +126,7 @@ Example notebook: [Finite State Machine graphs to set speaker transition constra

## Long Context Handling

-AutoGen includes support for handling long textual contexts by leveraging the LLMLingua library for text compression. To enable this functionality, please install AutoGen with the `[long-context]` option:
+AG2 includes support for handling long textual contexts by leveraging the LLMLingua library for text compression. To enable this functionality, please install AutoGen with the `[long-context]` option:

```bash
pip install "autogen[long-context]"
