
Commit

Merge branch 'main' into rachelb/2024-12-17-001
rachelbakkke authored Dec 19, 2024
2 parents 1a5588a + 495fcec commit 0501e32
Showing 47 changed files with 4,584 additions and 454 deletions.
39 changes: 39 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,44 @@
# ChangeLog

## [2024-12-18]

### `llama-index-core` [0.12.7]

- fix: add a timeout to langchain callback handler (#17296)
- fix: make Document serialization event more backward compatible (#17312)
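The timeout added in #17296 guards against a callback handler that never returns. A minimal stdlib sketch of that pattern (the helper name, the 5-second default, and the fallback behavior are illustrative assumptions, not llama-index-core's actual implementation):

```python
import concurrent.futures


def call_with_timeout(fn, *args, timeout=5.0, default=None):
    # Run the callback in a worker thread so a hang cannot block the caller.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args)
    try:
        return future.result(timeout=timeout)
    except concurrent.futures.TimeoutError:
        # Give up and return a fallback instead of waiting forever.
        return default
    finally:
        # Don't block on a still-running worker when cleaning up.
        pool.shutdown(wait=False)


print(call_with_timeout(lambda: 2 + 2, timeout=1.0))  # → 4
```

Note that `shutdown(wait=False)` cannot kill a truly hung thread; it only stops the caller from waiting on it.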

### `llama-index-embeddings-voyageai` [0.3.4]

- Exposing additional keyword arguments for VoyageAI's embedding model (#17315)
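Exposing additional keyword arguments usually means forwarding `**kwargs` through the wrapper to the underlying API request untouched. A generic sketch of that pattern (the `truncation` and `output_dimension` names are illustrative assumptions, not necessarily the kwargs #17315 exposes):

```python
def build_embed_request(texts, model="voyage-3", **extra_kwargs):
    # Anything the wrapper does not recognize is passed straight through,
    # so new provider options work without waiting for a wrapper release.
    request = {"input": list(texts), "model": model}
    request.update(extra_kwargs)
    return request


req = build_embed_request(["hello"], truncation=False, output_dimension=512)
print(req["output_dimension"])  # → 512
```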

### `llama-index-llms-keywordsai` [0.1.0]

- Added KeywordsAI LLM (#16860)

### `llama-index-llms-oci-genai` [0.4.0]

- Add OCI Generative AI tool calling support (#16888)

### `llama-index-llms-openai` [0.3.11]

- support new o1 models (#17307)

### `llama-index-postprocessor-voyageai-rerank` [0.3.1]

- VoyageAI Reranker optional API Key (#17310)

### `llama-index-vector-stores-azureaisearch` [0.3.1]

- improve async search client handling (#17319)
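"Async search client handling" typically means creating the client lazily and guaranteeing it is closed even when a query raises. A stdlib-only sketch of that lifecycle (the stub stands in for the real Azure SDK client, which this does not use):

```python
import asyncio


class SearchClientStub:
    # Stand-in for an async search client; not the Azure SDK.
    def __init__(self):
        self.closed = False

    async def search(self, query):
        return [f"result for {query!r}"]

    async def close(self):
        self.closed = True


class AsyncClientManager:
    # Create the client on entry and always close it on exit, even on errors.
    def __init__(self, factory):
        self._factory = factory
        self.client = None

    async def __aenter__(self):
        self.client = self._factory()
        return self.client

    async def __aexit__(self, exc_type, exc, tb):
        await self.client.close()
        return False  # never swallow exceptions


async def main():
    async with AsyncClientManager(SearchClientStub) as client:
        return await client.search("llama")


print(asyncio.run(main()))  # → ["result for 'llama'"]
```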

### `llama-index-vector-stores-azurecosmosmongo` [0.4.0]

- CosmosDB insertion timestamp bugfix (#17290)
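Insertion-timestamp bugs in document stores are commonly a naive-vs-aware datetime mix-up; the actual defect behind #17290 may differ. A sketch of the safe pattern, always storing a timezone-aware UTC timestamp (the field name `insertion_ts` is an illustrative assumption):

```python
from datetime import datetime, timezone


def with_insertion_timestamp(doc: dict) -> dict:
    # Store an aware UTC timestamp; naive datetimes shift meaning across zones.
    return {**doc, "insertion_ts": datetime.now(timezone.utc).isoformat()}


record = with_insertion_timestamp({"id": "doc-1"})
print(record["insertion_ts"].endswith("+00:00"))  # → True
```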

### `llama-index-vector-stores-azurecosmosnosql` [1.3.0]

- CosmosDB insertion timestamp bugfix (#17290)

## [2024-12-17]

### `llama-index-core` [0.12.6]
41 changes: 40 additions & 1 deletion docs/docs/CHANGELOG.md
@@ -1,10 +1,49 @@
# ChangeLog

## [2024-12-18]

### `llama-index-core` [0.12.7]

- fix: add a timeout to langchain callback handler (#17296)
- fix: make Document serialization event more backward compatible (#17312)

### `llama-index-embeddings-voyageai` [0.3.4]

- Exposing additional keyword arguments for VoyageAI's embedding model (#17315)

### `llama-index-llms-keywordsai` [0.1.0]

- Added KeywordsAI LLM (#16860)

### `llama-index-llms-oci-genai` [0.4.0]

- Add OCI Generative AI tool calling support (#16888)

### `llama-index-llms-openai` [0.3.11]

- support new o1 models (#17307)

### `llama-index-postprocessor-voyageai-rerank` [0.3.1]

- VoyageAI Reranker optional API Key (#17310)

### `llama-index-vector-stores-azureaisearch` [0.3.1]

- improve async search client handling (#17319)

### `llama-index-vector-stores-azurecosmosmongo` [0.4.0]

- CosmosDB insertion timestamp bugfix (#17290)

### `llama-index-vector-stores-azurecosmosnosql` [1.3.0]

- CosmosDB insertion timestamp bugfix (#17290)

## [2024-12-17]

### `llama-index-core` [0.12.6]

- - [bug fix] Ensure that StopEvent gets cleared from Context._in_progress["_done"] after a Workflow run (#17300)
+ - [bug fix] Ensure that StopEvent gets cleared from Context.\_in_progress["_done"] after a Workflow run (#17300)
- fix: add a timeout to langchain callback handler (#17296)
- tweak User vs tool in react prompts (#17273)
- refact: Refactor Document to be natively multimodal (#17204)


59 changes: 50 additions & 9 deletions docs/docs/examples/llm/oci_genai.ipynb
Original file line number Diff line number Diff line change
@@ -1,14 +1,5 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "6d1ca9ac",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/docs/examples/llm/bedrock.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"id": "9e3a8796-edc8-43f2-94ad-fe4fb20d70ed",
@@ -360,6 +351,56 @@
"resp = llm.chat(messages)\n",
"print(resp)"
]
},
{
"cell_type": "markdown",
"id": "acd73b3d",
"metadata": {},
"source": [
"## Basic tool calling in LlamaIndex\n",
"\n",
"Only Cohere models support tool calling for now."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5546c661",
"metadata": {},
"outputs": [],
"source": [
"from llama_index.llms.oci_genai import OCIGenAI\n",
"from llama_index.core.tools import FunctionTool\n",
"\n",
"llm = OCIGenAI(\n",
"    model=\"MY_MODEL\",\n",
"    service_endpoint=\"https://inference.generativeai.us-chicago-1.oci.oraclecloud.com\",\n",
"    compartment_id=\"MY_OCID\",\n",
")\n",
"\n",
"\n",
"def multiply(a: int, b: int) -> int:\n",
"    \"\"\"Multiply two integers and return the result.\"\"\"\n",
"    return a * b\n",
"\n",
"\n",
"def add(a: int, b: int) -> int:\n",
"    \"\"\"Add two integers and return the result.\"\"\"\n",
"    return a + b\n",
"\n",
"\n",
"add_tool = FunctionTool.from_defaults(fn=add)\n",
"multiply_tool = FunctionTool.from_defaults(fn=multiply)\n",
"\n",
"response = llm.chat_with_tools(\n",
"    tools=[add_tool, multiply_tool],\n",
"    user_msg=\"What is 3 * 12? Also, what is 11 + 49?\",\n",
")\n",
"\n",
"print(response)\n",
"tool_calls = response.message.additional_kwargs.get(\"tool_calls\", [])\n",
"print(tool_calls)"
]
}
],
"metadata": {
