From a191e5f6db9da2be3b006cdb0b2b06a2e4adf56c Mon Sep 17 00:00:00 2001 From: "Moore, Eric" Date: Fri, 29 Nov 2024 13:50:59 -0600 Subject: [PATCH 1/5] Add watsonx on litellm --- .../cloud-litellm-watsonx.ipynb | 265 ++++++++++++++++++ 1 file changed, 265 insertions(+) create mode 100644 website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb diff --git a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb new file mode 100644 index 0000000000..0fa8288d2b --- /dev/null +++ b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb @@ -0,0 +1,265 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# LiteLLM with WatsonX \n", + "\n", + "LiteLLM is an open-source, locally run proxy server providing an OpenAI-compatible API. It supports various LLM providers, including IBM's WatsonX, enabling seamless integration with tools like AutoGen.\n", + "\n", + "Running LiteLLM with WatsonX requires the following installations:\n", + "\n", + "1. **AutoGen** – A framework for building and orchestrating AI agents.\n", + "2. **LiteLLM** – An OpenAI-compatible proxy for bridging non-compliant APIs.\n", + "3. **IBM WatsonX** – LLM service requiring specific session token authentication.\n", + "\n", + "### Prerequisites\n", + "\n", + "Before setting up, ensure **Docker** is installed. Refer to the [Docker installation guide](https://docs.docker.com/get-docker/). Optionally, consider using **Postman** to easily test API requests.\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Installing WatsonX \n", + "\n", + "To set up WatsonX, follow these steps:\n", + "\n", + "1. **Access WatsonX:**\n", + " - Sign up for [WatsonX.ai](https://www.ibm.com/watsonx).\n", + " - Create an API_KEY and PROJECT_ID.\n", + "\n", + "2. 
**Validate WatsonX API Access:**\n", + " - Verify access using the following commands:\n" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "vscode": { + "languageId": "shellscript" + } + }, + "outputs": [ + { + "data": { + "text/plain": [ + "'bash\\ncurl -L \"https://iam.cloud.ibm.com/identity/token?=null\" -H \"Content-Type: application/x-www-form-urlencoded\" -d \"grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey\" -d \"apikey=\"\\n'" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "'''bash\n", + "curl -L \"https://iam.cloud.ibm.com/identity/token?=null\" -H \"Content-Type: application/x-www-form-urlencoded\" -d \"grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey\" -d \"apikey=\"\n", + "'''" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "vscode": { + "languageId": "shellscript" + } + }, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "\n", + " ```\n", + "\n", + " ```bash\n", + " curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/foundation_model_specs?version=2024-09-16&project_id=1eeb4112-5f6e-4a81-9b61-8eac7f9653b4&filters=function_text_generation%2C%21lifecycle_withdrawn%3Aand&limit=200\" \\\n", + " -H \"Authorization: Bearer \"\n", + "\n", + " ```\n", + "\n", + " ```bash\n", + " # Test querying the LLM\n", + " curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-02\" \\\n", + " -H \"Content-Type: application/json\" \\\n", + " -H \"Accept: application/json\" \\\n", + " -H \"Authorization: Bearer \" \\\n", + " -d \"{\n", + " \\\"model_id\\\": \\\"google/flan-t5-xxl\\\",\n", + " \\\"input\\\": \\\"What is the capital of Arkansas?:\\\",\n", + " \\\"parameters\\\": {\n", + " \\\"max_new_tokens\\\": 100,\n", + " \\\"time_limit\\\": 1000\n", + " },\n", + " \\\"project_id\\\": \\\"\"\n", + " }\"\n", + "\n", + " ```\n", + "\n", 
+ "3. **Install WatsonX Python SDK:**\n", + "\n", + " ```bash\n", + " pip install watsonx\n", + " ```\n", + "\n", + " For detailed instructions, visit the [WatsonX SDK documentation](https://ibm.github.io/watsonx-ai-python-sdk/install.html).\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Installing LiteLLM \n", + "\n", + "1. **Download LiteLLM Docker Image:**\n", + "\n", + " ```bash\n", + " docker pull ghcr.io/berriai/litellm:main-latest\n", + " ```\n", + "\n", + " **(ALTERNATIVE). Install LiteLLM Python Library:**\n", + "\n", + " ```bash\n", + " pip install 'litellm[proxy]'\n", + " ```\n", + "\n", + "\n", + "\n", + "---\n", + "\n", + "2. **Create a LiteLLM Configuration File:**\n", + "\n", + " - Save as `litellm_config.yaml` in a local directory.\n", + " - Example content for WatsonX:\n", + "\n", + " ```yaml\n", + " model_list:\n", + " - model_name: llama-3-8b\n", + " litellm_params:\n", + " # all params accepted by litellm.completion()\n", + " model: watsonx/meta-llama/llama-3-8b-instruct\n", + " api_key: \"os.environ/WATSONX_API_KEY\" \n", + " project_id: \"os.environ/WX_PROJECT_ID\"\n", + "\n", + " ```\n", + "'''yaml\n", + " - model_name: \"llama_3_2_90\"\n", + " litellm_params:\n", + " model: watsonx/meta-llama/llama-3-2-90b-vision-instruct\n", + " api_key: os.environ[\"WATSONX_APIKEY\"] = \"\" # IBM cloud API key\n", + " max_new_tokens: 4000\n", + "'''\n", + "3. 
**Start LiteLLM Container:**\n", + "\n", + " ```bash\n", + " docker run -v \\litellm_config.yaml:/app/config.yaml -e WATSONX_API_KEY= -e WATSONX_URL=https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-02 -e WX_PROJECT_ID= -p 4000:4000 ghcr.io/berriai/litellm:main-latest --config /app/config.yaml --detailed_debug\n", + " ```\n", + "\n", + " ---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Installing AutoGen \n", + "\n", + "AutoGen simplifies orchestration and communication between agents. To install:\n", + "\n", + "1. Open a terminal with administrator rights.\n", + "2. Run the following command:\n", + "\n", + " ```bash\n", + " pip install ag2\n", + " ```\n", + "\n", + "Once installed, AutoGen agents can leverage WatsonX APIs via LiteLLM.\n", + "\n", + "---\n", + "\n", + "phi1 = {\n", + " \"config_list\": [\n", + " {\n", + " \"model\": \"llama-3-8b\",\n", + " \"base_url\": \"http://localhost:4000\",\n", + " \"api_key\":\"watsonx\",\n", + " \"price\" : [0,0]\n", + " },\n", + " ],\n", + " \"cache_seed\": None, # Disable caching.\n", + "}\n", + "\n", + "\n", + "\n", + "\n", + "phi2 = {\n", + " \"config_list\": [\n", + " {\n", + " \"model\": \"llama-3-8b\",\n", + " \"base_url\": \"http://localhost:4000\",\n", + " \"api_key\":\"watsonx\",\n", + " \"price\" : [0,0]\n", + " },\n", + " ],\n", + " \"cache_seed\": None, # Disable caching.\n", + "}\n", + "\n", + "from autogen import ConversableAgent, AssistantAgent\n", + "\n", + "jack = ConversableAgent(\n", + " \"Jack (Phi-2)\",\n", + " llm_config=phi2,\n", + " system_message=\"Your name is Jack and you are a comedian in a two-person comedy show.\",\n", + ")\n", + "\n", + "emma = ConversableAgent(\n", + " \"Emma (Gemma)\",\n", + " llm_config=phi1, \n", + " system_message=\"Your name is Emma and you are a comedian in two-person comedy show.\",\n", + ")\n", + "\n", + "#autogen\n", + "chat_result = jack.initiate_chat(emma, message=\"Emma, tell me a joke.\", max_turns=2)\n" + 
] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.5" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 183ba39d66672b3300968ea7436cbd07e314cf56 Mon Sep 17 00:00:00 2001 From: "Moore, Eric" Date: Wed, 11 Dec 2024 12:18:36 -0600 Subject: [PATCH 2/5] Initial commit upstream --- .../cloud-litellm-watsonx.ipynb | 99 +++++-------------- 1 file changed, 26 insertions(+), 73 deletions(-) diff --git a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb index 0fa8288d2b..83c4685154 100644 --- a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb +++ b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb @@ -37,85 +37,38 @@ " - Verify access using the following commands:\n" ] }, - { - "cell_type": "code", - "execution_count": 9, - "metadata": { - "vscode": { - "languageId": "shellscript" - } - }, - "outputs": [ - { - "data": { - "text/plain": [ - "'bash\\ncurl -L \"https://iam.cloud.ibm.com/identity/token?=null\" -H \"Content-Type: application/x-www-form-urlencoded\" -d \"grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey\" -d \"apikey=\"\\n'" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "'''bash\n", - "curl -L \"https://iam.cloud.ibm.com/identity/token?=null\" -H \"Content-Type: application/x-www-form-urlencoded\" -d \"grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey\" -d \"apikey=\"\n", - "'''" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "vscode": { - "languageId": "shellscript" - } - }, - "outputs": [], - "source": [] - }, { 
"cell_type": "markdown", "metadata": {}, "source": [ + "Tip: Verify access to watsonX APIs before installing LiteLLM\n", + "Get Session Token:\n", "\n", - " \n", - "\n", - " ```\n", - "\n", - " ```bash\n", - " curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/foundation_model_specs?version=2024-09-16&project_id=1eeb4112-5f6e-4a81-9b61-8eac7f9653b4&filters=function_text_generation%2C%21lifecycle_withdrawn%3Aand&limit=200\" \\\n", - " -H \"Authorization: Bearer \"\n", - "\n", - " ```\n", - "\n", - " ```bash\n", - " # Test querying the LLM\n", - " curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-02\" \\\n", - " -H \"Content-Type: application/json\" \\\n", - " -H \"Accept: application/json\" \\\n", - " -H \"Authorization: Bearer \" \\\n", - " -d \"{\n", - " \\\"model_id\\\": \\\"google/flan-t5-xxl\\\",\n", - " \\\"input\\\": \\\"What is the capital of Arkansas?:\\\",\n", - " \\\"parameters\\\": {\n", - " \\\"max_new_tokens\\\": 100,\n", - " \\\"time_limit\\\": 1000\n", - " },\n", - " \\\"project_id\\\": \\\"\"\n", - " }\"\n", - "\n", - " ```\n", - "\n", - "3. 
**Install WatsonX Python SDK:**\n", - "\n", - " ```bash\n", - " pip install watsonx\n", - " ```\n", - "\n", - " For detailed instructions, visit the [WatsonX SDK documentation](https://ibm.github.io/watsonx-ai-python-sdk/install.html).\n", + "curl -L \"https://iam.cloud.ibm.com/identity/token?=null\" -H \"Content-Type: application/x-www-form-urlencoded\" -d \"grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey\" -d \"apikey=\"\n", "\n", - "---" + "Get list of LLMs:\n", + " \n", + "curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/foundation_model_specs?version=2024-09-16&project_id=1eeb4112-5f6e-4a81-9b61-8eac7f9653b4&filters=function_text_generation%2C%21lifecycle_withdrawn%3Aand&limit=200\" -H \"Authorization: Bearer \"\n", + "\n", + "\n", + "Ask the LLM a question:\n", + " \n", + "curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-02\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -H \"Authorization: Bearer \" \\\n", + "-d \"{\n", + " \\\"model_id\\\": \\\"google/flan-t5-xxl\\\",\n", + " \\\"input\\\": \\\"What is the capital of Arkansas?:\\\",\n", + " \\\"parameters\\\": {\n", + " \\\"max_new_tokens\\\": 100,\n", + " \\\"time_limit\\\": 1000\n", + " },\n", + " \\\"project_id\\\": \\\"\"\n", + "}\"\n", + "\n", + "\n", + "2.\tWith access to watsonX API’s validated you can install the python library\n", + " \n", + " \n", + "From \n" ] }, { From fdca90d6e15b961963420e8cfdb5b21e73cd8746 Mon Sep 17 00:00:00 2001 From: Eric Date: Thu, 12 Dec 2024 13:40:52 -0600 Subject: [PATCH 3/5] Formatting fixes --- .../cloud-litellm-watsonx.ipynb | 122 +++++++++--------- 1 file changed, 64 insertions(+), 58 deletions(-) diff --git a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb index 83c4685154..5f6f31b546 100644 --- a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb +++ 
b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb
@@ -25,35 +25,44 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Installing WatsonX \n",
+    "## Installing WatsonX\n",
     "\n",
     "To set up WatsonX, follow these steps:\n",
-    "\n",
     "1. **Access WatsonX:**\n",
-    " - Sign up for [WatsonX.ai](https://www.ibm.com/watsonx).\n",
-    " - Create an API_KEY and PROJECT_ID.\n",
-    "\n",
+    "   - Sign up for [WatsonX.ai](https://www.ibm.com/watsonx).\n",
+    "   - Create an API_KEY and PROJECT_ID.\n",
+    "<br>\n",
+    "<br>\n",
     "2. **Validate WatsonX API Access:**\n",
-    " - Verify access using the following commands:\n"
+    " - Verify access using the following commands:"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Tip: Verify access to watsonX APIs before installing LiteLLM\n",
-    "Get Session Token:\n",
-    "\n",
-    "curl -L \"https://iam.cloud.ibm.com/identity/token?=null\" -H \"Content-Type: application/x-www-form-urlencoded\" -d \"grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey\" -d \"apikey=\"\n",
-    "\n",
-    "Get list of LLMs:\n",
-    " \n",
-    "curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/foundation_model_specs?version=2024-09-16&project_id=1eeb4112-5f6e-4a81-9b61-8eac7f9653b4&filters=function_text_generation%2C%21lifecycle_withdrawn%3Aand&limit=200\" -H \"Authorization: Bearer \"\n",
-    "\n",
-    "\n",
-    "Ask the LLM a question:\n",
-    " \n",
-    "curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-02\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -H \"Authorization: Bearer \" \\\n",
+    "Tip: Verify access to watsonX APIs before installing LiteLLM.\n",
+    "<br><br>\n",
+    "Get Session Token:<br>\n",
+    "```bash\n",
+    "curl -L \"https://iam.cloud.ibm.com/identity/token\" \\\n",
+    "-H \"Content-Type: application/x-www-form-urlencoded\" \\\n",
+    "-d \"grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey\" \\\n",
+    "-d \"apikey=<API_KEY>\"\n",
+    "```\n",
+    "\n",
+    "Get list of LLMs:<br>\n",
+    "```bash\n",
+    "curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/foundation_model_specs?version=2024-09-16&project_id=<PROJECT_ID>&filters=function_text_generation%2C%21lifecycle_withdrawn%3Aand&limit=200\" \\\n",
+    "-H \"Authorization: Bearer <ACCESS_TOKEN>\"\n",
+    "```\n",
+    "\n",
+    "Ask the LLM a question:<br>\n",
+    "```bash\n",
+    "curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-02\" \\\n",
+    "-H \"Content-Type: application/json\" \\\n",
+    "-H \"Accept: application/json\" \\\n",
+    "-H \"Authorization: Bearer <ACCESS_TOKEN>\" \\\n",
     "-d \"{\n",
     "    \\\"model_id\\\": \\\"google/flan-t5-xxl\\\",\n",
     "    \\\"input\\\": \\\"What is the capital of Arkansas?:\\\",\n",
@@ -61,72 +70,66 @@
     "    \\\"max_new_tokens\\\": 100,\n",
     "    \\\"time_limit\\\": 1000\n",
     "  },\n",
-    "  \\\"project_id\\\": \\\"\"\n",
-    "}\"\n",
+    "  \\\"project_id\\\": \\\"<PROJECT_ID>\\\"}\"\n",
+    "```\n",
     "\n",
     "\n",
-    "2.\tWith access to watsonX API’s validated you can install the python library\n",
-    " \n",
-    " \n",
-    "From \n"
+    "With access to the watsonX APIs validated, you can install the Python library from the [watsonx SDK documentation](https://ibm.github.io/watsonx-ai-python-sdk/install.html).\n",
+    "\n",
+    "---"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": []
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "## Installing LiteLLM \n",
-    "\n",
+    "To install LiteLLM, follow these steps:\n",
     "1. **Download LiteLLM Docker Image:**\n",
     "\n",
     "   ```bash\n",
     "   docker pull ghcr.io/berriai/litellm:main-latest\n",
     "   ```\n",
     "\n",
-    "   **(ALTERNATIVE). Install LiteLLM Python Library:**\n",
+    "   OR\n",
+    "\n",
+    "\n",
+    "   **Install LiteLLM Python Library:**\n",
     "\n",
     "   ```bash\n",
     "   pip install 'litellm[proxy]'\n",
     "   ```\n",
     "\n",
     "\n",
-    "\n",
-    "---\n",
-    "\n",
     "2. 
**Create a LiteLLM Configuration File:**\n",
     "\n",
     "   - Save as `litellm_config.yaml` in a local directory.\n",
     "   - Example content for WatsonX:\n",
     "\n",
-    "     ```yaml\n",
-    "     model_list:\n",
-    "       - model_name: llama-3-8b\n",
-    "         litellm_params:\n",
-    "           # all params accepted by litellm.completion()\n",
-    "           model: watsonx/meta-llama/llama-3-8b-instruct\n",
-    "           api_key: \"os.environ/WATSONX_API_KEY\" \n",
-    "           project_id: \"os.environ/WX_PROJECT_ID\"\n",
+    "     ```yaml\n",
+    "     model_list:\n",
+    "       - model_name: llama-3-8b\n",
+    "         litellm_params:\n",
+    "           # all params accepted by litellm.completion()\n",
+    "           model: watsonx/meta-llama/llama-3-8b-instruct\n",
+    "           api_key: \"os.environ/WATSONX_API_KEY\" \n",
+    "           project_id: \"os.environ/WX_PROJECT_ID\"\n",
     "\n",
     "     ```\n",
-    "'''yaml\n",
-    "  - model_name: \"llama_3_2_90\"\n",
-    "    litellm_params:\n",
-    "      model: watsonx/meta-llama/llama-3-2-90b-vision-instruct\n",
-    "      api_key: os.environ[\"WATSONX_APIKEY\"] = \"\" # IBM cloud API key\n",
-    "      max_new_tokens: 4000\n",
-    "'''\n",
+    "     ```yaml\n",
+    "       - model_name: \"llama_3_2_90\"\n",
+    "         litellm_params:\n",
+    "           model: watsonx/meta-llama/llama-3-2-90b-vision-instruct\n",
+    "           api_key: \"os.environ/WATSONX_APIKEY\"  # IBM Cloud API key\n",
+    "           max_new_tokens: 4000\n",
+    "     ```\n",
"3. 
**Start LiteLLM Container:**\n",
     "\n",
     "   ```bash\n",
     "   docker run -v $(pwd)/litellm_config.yaml:/app/config.yaml -e WATSONX_API_KEY=<API_KEY> -e WATSONX_URL=https://us-south.ml.cloud.ibm.com -e WX_PROJECT_ID=<PROJECT_ID> -p 4000:4000 ghcr.io/berriai/litellm:main-latest --config /app/config.yaml --detailed_debug\n",
     "   ```\n",
     "\n",
-    "   ---"
+    "---"
    ]
   },
   {
@@ -147,12 +150,12 @@
     "Once installed, AutoGen agents can leverage WatsonX APIs via LiteLLM.\n",
     "\n",
     "---\n",
-    "\n",
+    "```python\n",
     "phi1 = {\n",
     "    \"config_list\": [\n",
     "        {\n",
     "            \"model\": \"llama-3-8b\",\n",
-    "            \"base_url\": \"http://localhost:4000\",\n",
+    "            \"base_url\": \"http://localhost:4000\", #use http://0.0.0.0:4000 for Macs\n",
     "            \"api_key\":\"watsonx\",\n",
     "            \"price\" : [0,0]\n",
     "        },\n",
@@ -160,14 +163,11 @@
     "    \"cache_seed\": None, # Disable caching.\n",
     "}\n",
     "\n",
-    "\n",
-    "\n",
-    "\n",
     "phi2 = {\n",
     "    \"config_list\": [\n",
     "        {\n",
     "            \"model\": \"llama-3-8b\",\n",
-    "            \"base_url\": \"http://localhost:4000\",\n",
+    "            \"base_url\": \"http://localhost:4000\", #use http://0.0.0.0:4000 for Macs\n",
     "            \"api_key\":\"watsonx\",\n",
     "            \"price\" : [0,0]\n",
     "        },\n",
@@ -190,8 +190,14 @@
     ")\n",
     "\n",
     "#autogen\n",
-    "chat_result = jack.initiate_chat(emma, message=\"Emma, tell me a joke.\", max_turns=2)\n"
+    "chat_result = jack.initiate_chat(emma, message=\"Emma, tell me a joke.\", max_turns=2)\n",
+    "```"
    ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": []
   }
  ],
  "metadata": {
From ab5947b63c4d778f4df5549f6fa0fd9e655784a0 Mon Sep 17 00:00:00 2001
From: "Moore, Eric"
Date: Fri, 13 Dec 2024 10:18:47 -0600
Subject: [PATCH 4/5] Changed br to newline
---
 .../non-openai-models/cloud-litellm-watsonx.ipynb | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb
index 5f6f31b546..fc14743858 100644
--- 
a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb
+++ b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb
@@ -31,8 +31,8 @@
     "1. **Access WatsonX:**\n",
     "   - Sign up for [WatsonX.ai](https://www.ibm.com/watsonx).\n",
     "   - Create an API_KEY and PROJECT_ID.\n",
-    "<br>\n",
-    "<br>\n",
+    "\n\n",
+    "\n\n",
     "2. **Validate WatsonX API Access:**\n",
     " - Verify access using the following commands:"
    ]
@@ -42,8 +42,8 @@
    "metadata": {},
    "source": [
     "Tip: Verify access to watsonX APIs before installing LiteLLM.\n",
-    "<br><br>\n",
-    "Get Session Token:<br>\n",
+    "\n\n\n",
+    "Get Session Token: \n\n",
     "```bash\n",
     "curl -L \"https://iam.cloud.ibm.com/identity/token\" \\\n",
     "-H \"Content-Type: application/x-www-form-urlencoded\" \\\n",
     "-d \"grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey\" \\\n",
@@ -51,13 +51,13 @@
     "-d \"apikey=<API_KEY>\"\n",
     "```\n",
     "\n",
-    "Get list of LLMs:<br>\n",
+    "Get list of LLMs: \n\n",
     "```bash\n",
     "curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/foundation_model_specs?version=2024-09-16&project_id=<PROJECT_ID>&filters=function_text_generation%2C%21lifecycle_withdrawn%3Aand&limit=200\" \\\n",
     "-H \"Authorization: Bearer <ACCESS_TOKEN>\"\n",
     "```\n",
     "\n",
-    "Ask the LLM a question:<br>\n",
+    "Ask the LLM a question: \n\n",
     "```bash\n",
     "curl -L \"https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-02\" \\\n",
     "-H \"Content-Type: application/json\" \\\n",
From 1a30b9e7412bec5be04b17bb5cdf6db957fe280d Mon Sep 17 00:00:00 2001
From: "Moore, Eric"
Date: Fri, 13 Dec 2024 10:25:32 -0600
Subject: [PATCH 5/5] Autogen to AG2
---
 .../non-openai-models/cloud-litellm-watsonx.ipynb | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb
index fc14743858..42a3c388c0 100644
--- a/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb
+++ b/website/docs/topics/non-openai-models/cloud-litellm-watsonx.ipynb
@@ -6,11 +6,11 @@
    "source": [
     "# LiteLLM with WatsonX \n",
     "\n",
-    "LiteLLM is an open-source, locally run proxy server providing an OpenAI-compatible API. It supports various LLM providers, including IBM's WatsonX, enabling seamless integration with tools like AutoGen.\n",
+    "LiteLLM is an open-source, locally run proxy server providing an OpenAI-compatible API. It supports various LLM providers, including IBM's WatsonX, enabling seamless integration with tools like AG2.\n",
     "\n",
     "Running LiteLLM with WatsonX requires the following installations:\n",
     "\n",
-    "1. **AutoGen** – A framework for building and orchestrating AI agents.\n",
+    "1. **AG2** – A framework for building and orchestrating AI agents.\n",
     "2. **LiteLLM** – An OpenAI-compatible proxy for bridging non-compliant APIs.\n",
     "3. **IBM WatsonX** – LLM service requiring specific session token authentication.\n",
     "\n",
@@ -136,9 +136,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Installing AutoGen \n",
+    "## Installing AG2 \n",
     "\n",
-    "AutoGen simplifies orchestration and communication between agents. To install:\n",
+    "AG2 simplifies orchestration and communication between agents. 
To install:\n",
     "\n",
     "1. Open a terminal with administrator rights.\n",
     "2. Run the following command:\n",
     "\n",
     "   ```bash\n",
     "   pip install ag2\n",
     "   ```\n",
     "\n",
-    "Once installed, AutoGen agents can leverage WatsonX APIs via LiteLLM.\n",
+    "Once installed, AG2 agents can leverage WatsonX APIs via LiteLLM.\n",
     "\n",
     "---\n",
     "```python\n",
@@ -175,7 +175,7 @@
     "    \"cache_seed\": None, # Disable caching.\n",
     "}\n",
     "\n",
-    "from autogen import ConversableAgent, AssistantAgent\n",
+    "from autogen import ConversableAgent, AssistantAgent  # the ag2 package keeps the autogen import name\n",
     "\n",
     "jack = ConversableAgent(\n",
     "    \"Jack (Phi-2)\",\n",
@@ -189,7 +189,7 @@
     "    system_message=\"Your name is Emma and you are a comedian in a two-person comedy show.\",\n",
     ")\n",
     "\n",
-    "#autogen\n",
+    "#AG2\n",
     "chat_result = jack.initiate_chat(emma, message=\"Emma, tell me a joke.\", max_turns=2)\n",
     "```"
    ]
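
---

Note for reviewers: the `litellm_config.yaml` in this series maps OpenAI-style model aliases (what the agent configs put in `"model"`) to watsonx model ids. A minimal sketch of that alias resolution in plain Python may help when reviewing the config; this mimics the routing idea only and is not LiteLLM's actual router (which also handles retries, load balancing, and provider auth):

```python
# Sketch of the alias -> provider-model mapping set up by litellm_config.yaml.
# The two entries below are the ones defined in the patch.
model_list = [
    {
        "model_name": "llama-3-8b",
        "litellm_params": {"model": "watsonx/meta-llama/llama-3-8b-instruct"},
    },
    {
        "model_name": "llama_3_2_90",
        "litellm_params": {"model": "watsonx/meta-llama/llama-3-2-90b-vision-instruct"},
    },
]

def resolve_alias(alias: str) -> str:
    """Return the underlying watsonx model id for an OpenAI-style alias."""
    for entry in model_list:
        if entry["model_name"] == alias:
            return entry["litellm_params"]["model"]
    raise KeyError(f"unknown model alias: {alias}")

print(resolve_alias("llama-3-8b"))  # -> watsonx/meta-llama/llama-3-8b-instruct
```

This is why the agent configs can say `"model": "llama-3-8b"` while the proxy actually calls `watsonx/meta-llama/llama-3-8b-instruct` behind the scenes.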