
Commit

Merge pull request #317 from harishmohanraj/main
[Docs] Fix Broken Links and Improve Example Formatting in Docstrings
davorrunje authored Dec 30, 2024
2 parents 07a3d25 + 5a7aa42 commit 767d539
Showing 35 changed files with 131 additions and 88 deletions.
7 changes: 5 additions & 2 deletions .devcontainer/dev/Dockerfile
@@ -30,10 +30,13 @@ RUN sudo pip install --upgrade pip && \
# Install pre-commit hooks
RUN pre-commit install

# Setup Mintlify for the documentation website
# Install Python packages for documentation
RUN sudo pip install pydoc-markdown pyyaml termcolor nbclient
RUN cd website

# Install npm packages for documentation
WORKDIR /home/autogen/ag2/website
RUN npm install
WORKDIR /home/autogen/ag2

RUN arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) && \
wget -q https://github.com/quarto-dev/quarto-cli/releases/download/v1.5.23/quarto-1.5.23-linux-${arch}.tar.gz && \
10 changes: 9 additions & 1 deletion .muffet-excluded-links.txt
@@ -1,8 +1,16 @@
http://localhost
linkedin.com
x.com
twitter.com
example.com
rapidapi.com
https://platform.openai.com/docs/
https://platform.openai.com
https://openai.com
https://code.visualstudio.com/docs/devcontainers/containers
https://thesequence.substack.com/p/my-five-favorite-ai-papers-of-2023
https://www.llama.com/docs/how-to-guides/prompting/
https://azure.microsoft.com/en-us/get-started/azure-portal
https://github.com/pgvector/pgvector?tab=readme-ov-file
https://github.com/ag2ai/ag2/blob/b1adac515931bf236ac59224269eeec683a162ba/test/oai/test_client.py
https://github.com/ag2ai/ag2/blob/main/notebook/contributing.md
https://github.com/openai/openai-python/blob/d231d1fa783967c1d3a1db3ba1b52647fff148ac/src/openai/resources/completions.py
2 changes: 1 addition & 1 deletion autogen/agentchat/chat.py
@@ -164,7 +164,7 @@ def initiate_chats(chat_queue: list[dict[str, Any]]) -> list[ChatResult]:
chat_queue (List[Dict]): A list of dictionaries containing the information about the chats.
Each dictionary should contain the input arguments for
[`ConversableAgent.initiate_chat`](/docs/reference/agentchat/conversable_agent#initiate_chat).
[`ConversableAgent.initiate_chat`](/docs/reference/agentchat/conversable_agent#initiate-chat).
For example:
- `"sender"` - the sender agent.
- `"recipient"` - the recipient agent.
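For context on the `chat_queue` format documented above, a minimal sketch of calling the module-level `initiate_chats` might look like this (the agent names, message text, and `llm_config` values are illustrative assumptions, not part of this commit):

```python
from autogen import ConversableAgent, initiate_chats

# Hypothetical LLM configuration; replace with a real model and credentials.
llm_config = {"model": "gpt-4o", "api_key": "sk-..."}

assistant = ConversableAgent("assistant", llm_config=llm_config)
user = ConversableAgent("user", human_input_mode="NEVER", llm_config=False)

chat_queue = [
    {
        "sender": user,          # "sender" - the sender agent
        "recipient": assistant,  # "recipient" - the recipient agent
        "message": "Summarize the key points of the AG2 documentation.",
        "max_turns": 2,
        "summary_method": "last_msg",
    },
]

results = initiate_chats(chat_queue)  # one ChatResult per finished chat
```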
10 changes: 6 additions & 4 deletions autogen/agentchat/contrib/capabilities/vision_capability.py
@@ -141,22 +141,24 @@ def process_last_received_message(self, content: Union[str, list[dict]]) -> str:
(Content is a string without an image, remains unchanged.)
- Input as String, with image location:
content = "What's weather in this cool photo: <img http://example.com/photo.jpg>"
Output: "What's weather in this cool photo: <img http://example.com/photo.jpg> in case you can not see, the caption of this image is:
content = "What's weather in this cool photo: `<img http://example.com/photo.jpg>`"
Output: "What's weather in this cool photo: `<img http://example.com/photo.jpg>` in case you can not see, the caption of this image is:
A beautiful sunset over the mountains\n"
(Caption added after the image)
- Input as List with Text Only:
content = [{"type": "text", "text": "Here's an interesting fact."}]
content = `[{"type": "text", "text": "Here's an interesting fact."}]`
Output: "Here's an interesting fact."
(No images in the content, it remains unchanged.)
- Input as List with Image URL:
```python
content = [
{"type": "text", "text": "What's weather in this cool photo:"},
{"type": "image_url", "image_url": {"url": "http://example.com/photo.jpg"}}
]
Output: "What's weather in this cool photo: <img http://example.com/photo.jpg> in case you can not see, the caption of this image is:
```
Output: "What's weather in this cool photo: `<img http://example.com/photo.jpg>` in case you can not see, the caption of this image is:
A beautiful sunset over the mountains\n"
(Caption added after the image)
"""
3 changes: 2 additions & 1 deletion autogen/agentchat/contrib/graph_rag/graph_rag_capability.py
@@ -20,6 +20,7 @@ class GraphRagCapability(AgentCapability):
3. generate answers from retrieved information and send messages back.
For example,
```python
graph_query_engine = GraphQueryEngine(...)
graph_query_engine.init_db([Document(doc1), Document(doc2), ...])
@@ -50,7 +51,7 @@ class GraphRagCapability(AgentCapability):
# - Hugo Weaving',
# 'role': 'user_proxy'},
# ...)
```
"""

def __init__(self, query_engine: GraphQueryEngine):
12 changes: 8 additions & 4 deletions autogen/agentchat/contrib/img_utils.py
@@ -112,7 +112,7 @@ def llava_formatter(prompt: str, order_image_tokens: bool = False) -> tuple[str,
Formats the input prompt by replacing image tags and returns the new prompt along with image locations.
Parameters:
- prompt (str): The input string that may contain image tags like <img ...>.
- prompt (str): The input string that may contain image tags like `<img ...>`.
- order_image_tokens (bool, optional): Whether to order the image tokens with numbers.
It will be useful for GPT-4V. Defaults to False.
@@ -194,7 +194,7 @@ def gpt4v_formatter(prompt: str, img_format: str = "uri") -> list[Union[str, dic
Formats the input prompt by replacing image tags and returns a list of text and images.
Args:
- prompt (str): The input string that may contain image tags like <img ...>.
- prompt (str): The input string that may contain image tags like `<img ...>`.
- img_format (str): what image format should be used. One of "uri", "url", "pil".
Returns:
@@ -293,24 +293,28 @@ def message_formatter_pil_to_b64(messages: list[dict]) -> list[dict]:
'image_url' key converted to base64 encoded data URIs.
Example Input:
```python
[
{'content': [{'type': 'text', 'text': 'You are a helpful AI assistant.'}], 'role': 'system'},
{'content': [
{'type': 'text', 'text': "What's the breed of this dog here? \n"},
{'type': 'text', 'text': "What's the breed of this dog here?"},
{'type': 'image_url', 'image_url': {'url': a PIL.Image.Image}},
{'type': 'text', 'text': '.'}],
'role': 'user'}
]
```
Example Output:
```python
[
{'content': [{'type': 'text', 'text': 'You are a helpful AI assistant.'}], 'role': 'system'},
{'content': [
{'type': 'text', 'text': "What's the breed of this dog here? \n"},
{'type': 'text', 'text': "What's the breed of this dog here?"},
{'type': 'image_url', 'image_url': {'url': a B64 Image}},
{'type': 'text', 'text': '.'}],
'role': 'user'}
]
```
"""
new_messages = []
for message in messages:
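As a usage note for `message_formatter_pil_to_b64` documented above, a minimal sketch (the image path and message text are hypothetical; this snippet is not part of the commit):

```python
from PIL import Image

from autogen.agentchat.contrib.img_utils import message_formatter_pil_to_b64

pil_img = Image.open("dog.jpg")  # hypothetical local image file

messages = [
    {"content": [{"type": "text", "text": "You are a helpful AI assistant."}], "role": "system"},
    {
        "content": [
            {"type": "text", "text": "What's the breed of this dog here?"},
            {"type": "image_url", "image_url": {"url": pil_img}},
        ],
        "role": "user",
    },
]

# PIL images under 'image_url' entries are converted to base64-encoded data URIs.
b64_messages = message_formatter_pil_to_b64(messages)
```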
8 changes: 5 additions & 3 deletions autogen/agentchat/contrib/vectordb/chromadb.py
@@ -39,11 +39,11 @@ def __init__(
Args:
client: chromadb.Client | The client object of the vector database. Default is None.
If provided, it will use the client object directly and ignore other arguments.
path: str | The path to the vector database. Default is `tmp/db`. The default was `None` for version <=0.2.24.
path: str | The path to the vector database. Default is `tmp/db`. The default was `None` for version `<=0.2.24`.
embedding_function: Callable | The embedding function used to generate the vector representation
of the documents. Default is None, SentenceTransformerEmbeddingFunction("all-MiniLM-L6-v2") will be used.
metadata: dict | The metadata of the vector database. Default is None. If None, it will use this
setting: {"hnsw:space": "ip", "hnsw:construction_ef": 30, "hnsw:M": 32}. For more details of
setting: `{"hnsw:space": "ip", "hnsw:construction_ef": 30, "hnsw:M": 32}`. For more details of
the metadata, please refer to [distances](https://github.com/nmslib/hnswlib#supported-distances),
[hnsw](https://github.com/chroma-core/chroma/blob/566bc80f6c8ee29f7d99b6322654f32183c368c4/chromadb/segment/impl/vector/local_hnsw.py#L184),
and [ALGO_PARAMS](https://github.com/nmslib/hnswlib/blob/master/ALGO_PARAMS.md).
@@ -248,7 +248,7 @@ def retrieve_docs(
collection_name: str | The name of the collection. Default is None.
n_results: int | The number of relevant documents to return. Default is 10.
distance_threshold: float | The threshold for the distance score, only distance smaller than it will be
returned. Don't filter with it if < 0. Default is -1.
returned. Don't filter with it if `< 0`. Default is -1.
kwargs: Dict | Additional keyword arguments.
Returns:
@@ -279,6 +279,7 @@ def _chroma_get_results_to_list_documents(data_dict) -> list[Document]:
List[Document] | The list of Document.
Example:
```python
data_dict = {
"key1s": [1, 2, 3],
"key2s": ["a", "b", "c"],
@@ -291,6 +292,7 @@ def _chroma_get_results_to_list_documents(data_dict) -> list[Document]:
{"key1": 2, "key2": "b", "key4": "y"},
{"key1": 3, "key2": "c", "key4": "z"},
]
```
"""

results = []
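For reference on the `retrieve_docs` parameters documented above, a minimal sketch (the collection name, query, and threshold value are illustrative; it assumes an already-populated collection and is not part of this commit):

```python
from autogen.agentchat.contrib.vectordb.chromadb import ChromaVectorDB

db = ChromaVectorDB(path="tmp/db")  # default path per the docstring above

results = db.retrieve_docs(
    queries=["How do I configure the vector database?"],
    collection_name="docs",  # assumed to exist and contain documents
    n_results=5,
    distance_threshold=0.3,  # keep only results with distance < 0.3; pass -1 to disable filtering
)
```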
4 changes: 2 additions & 2 deletions autogen/agentchat/contrib/vectordb/pgvectordb.py
@@ -606,7 +606,7 @@ def __init__(
Models can be chosen from:
https://huggingface.co/models?library=sentence-transformers
metadata: dict | The metadata of the vector database. Default is None. If None, it will use this
setting: {"hnsw:space": "ip", "hnsw:construction_ef": 30, "hnsw:M": 16}. Creates Index on table
setting: `{"hnsw:space": "ip", "hnsw:construction_ef": 30, "hnsw:M": 16}`. Creates Index on table
using hnsw (embedding vector_l2_ops) WITH (m = hnsw:M) ef_construction = "hnsw:construction_ef".
For more info: https://github.com/pgvector/pgvector?tab=readme-ov-file#hnsw
Returns:
@@ -917,7 +917,7 @@ def retrieve_docs(
collection_name: str | The name of the collection. Default is None.
n_results: int | The number of relevant documents to return. Default is 10.
distance_threshold: float | The threshold for the distance score, only distance smaller than it will be
returned. Don't filter with it if < 0. Default is -1.
returned. Don't filter with it if `< 0`. Default is -1.
kwargs: Dict | Additional keyword arguments.
Returns:
4 changes: 2 additions & 2 deletions autogen/agentchat/contrib/vectordb/qdrant.py
@@ -56,7 +56,7 @@ def __init__(
Defaults to None.
**kwargs: Additional options to pass to fastembed.TextEmbedding
Raises:
ValueError: If the model_name is not in the format <org>/<model> e.g. BAAI/bge-small-en-v1.5.
ValueError: If the model_name is not in the format `<org>/<model>` e.g. BAAI/bge-small-en-v1.5.
"""
try:
from fastembed import TextEmbedding
@@ -229,7 +229,7 @@ def retrieve_docs(
collection_name: str | The name of the collection. Default is None.
n_results: int | The number of relevant documents to return. Default is 10.
distance_threshold: float | The threshold for the distance score, only distance smaller than it will be
returned. Don't filter with it if < 0. Default is 0.
returned. Don't filter with it if `< 0`. Default is 0.
kwargs: Dict | Additional keyword arguments.
Returns:
2 changes: 2 additions & 0 deletions autogen/agentchat/contrib/vectordb/utils.py
@@ -78,6 +78,7 @@ def chroma_results_to_query_results(data_dict: dict[str, list[list[Any]]], speci
special_key.
Example:
```python
data_dict = {
"key1s": [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
"key2s": [["a", "b", "c"], ["c", "d", "e"], ["e", "f", "g"]],
@@ -103,6 +104,7 @@ def chroma_results_to_query_results(data_dict: dict[str, list[list[Any]]], speci
({"key1": 9, "key2": "g", "key4": "6"}, 0.9),
],
]
```
"""

keys = [
4 changes: 2 additions & 2 deletions autogen/agentchat/conversable_agent.py
@@ -1048,7 +1048,7 @@ def initiate_chat(
silent (bool or None): (Experimental) whether to print the messages for this conversation. Default is False.
cache (AbstractCache or None): the cache client to be used for this conversation. Default is None.
max_turns (int or None): the maximum number of turns for the chat between the two agents. One turn means one conversation round trip. Note that this is different from
[max_consecutive_auto_reply](#max-consecutive-auto-reply) which is the maximum number of consecutive auto replies; and it is also different from [max_rounds in GroupChat](./groupchat#groupchat-objects) which is the maximum number of rounds in a group chat session.
[max_consecutive_auto_reply](#max-consecutive-auto-reply) which is the maximum number of consecutive auto replies; and it is also different from [max_rounds in GroupChat](./groupchat) which is the maximum number of rounds in a group chat session.
If max_turns is set to None, the chat will continue until a termination condition is met. Default is None.
summary_method (str or callable): a method to get a summary from the chat. Default is DEFAULT_SUMMARY_METHOD, i.e., "last_msg".
@@ -1376,7 +1376,7 @@ def initiate_chats(self, chat_queue: list[dict[str, Any]]) -> list[ChatResult]:
Args:
chat_queue (List[Dict]): a list of dictionaries containing the information of the chats.
Each dictionary should contain the input arguments for [`initiate_chat`](conversable_agent#initiate_chat)
Each dictionary should contain the input arguments for [`initiate_chat`](conversable_agent#initiate-chat)
Returns: a list of ChatResult objects corresponding to the finished chats in the chat_queue.
"""
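To illustrate the `max_turns` semantics described above, a minimal sketch of a two-agent chat (the agent names, model, and message are illustrative assumptions, not part of this commit):

```python
from autogen import ConversableAgent

# Hypothetical LLM configuration; replace with a real model and credentials.
llm_config = {"model": "gpt-4o", "api_key": "sk-..."}

student = ConversableAgent("student", llm_config=llm_config)
teacher = ConversableAgent("teacher", llm_config=llm_config)

# max_turns bounds the whole conversation (one turn = one round trip),
# independently of each agent's max_consecutive_auto_reply setting.
result = student.initiate_chat(
    teacher,
    message="Explain what a vector database is.",
    max_turns=2,
    summary_method="last_msg",
)
print(result.summary)
```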
(Diffs for the remaining changed files are not shown.)