Merge branch 'main' into pgvector-test
masci authored Dec 9, 2024
2 parents 3094eae + 4032456 commit 4f1f454
Showing 123 changed files with 4,016 additions and 2,189 deletions.
100 changes: 100 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,105 @@
# ChangeLog

## [2024-12-08]

### `llama-index-core` [0.12.4]

- Fix sync and async structured streaming (#17194)
- unpin pydantic to allow 2.8 or greater (#17193)
- Update core structured predict streaming, add ollama structured predict (#17188)
- bump tenacity dependency in llama-index-core (#17178)

### `llama-index-indices-managed-vectara` [0.3.1]

- Add Verbose to Vectara `as_query_engine` (#17176)

### `llama-index-llms-ollama` [0.5.0]

- Update core structured predict streaming, add ollama structured predict (#17188)

### `llama-index-llms-perplexity` [0.3.2]

- Fix message format for perplexity (#17182)

### `llama-index-readers-web` [0.3.1]

- Add possibility to use URI as doc id in WholeSiteReader (#17187)

### `llama-index-vector-stores-chroma` [0.4.1]

- BUG FIX: llama-index-vectorstore-chromadb to work with chromadb v0.5.17 (#17184)

## [2024-12-06]

### `llama-index-core` [0.12.3]

- cover SimpleDirectoryReader with unit tests (#17156)
- docs: rewrite openai image reasoning example without multimodal LLM (#17148)
- fix(metrics): fixed NDCG calculation and added comprehensive test cases (#17126)
- feat: improve ImageBlock (#17111)
- Remove forgotten print in ChatMemoryBuffer (#17114)
- [FIX] Move JSONalyzeQueryEngine to experimental (#17110)

### `llama-index-embeddings-clip` [0.3.1]

- Unrestrict clip models to use (#17162)

### `llama-index-embeddings-openai` [0.3.1]

- fix/openai-embbeding-retry (#17072)

### `llama-index-embeddings-text-embeddings-inference` [0.3.1]

- proper auth token in TEI (#17158)

### `llama-index-indices-managed-llama-cloud` [0.6.3]

- chore: fix httpx_client typo in LlamaCloudRetriever (#17101)
- fix: wrong project id variable in LlamaCloudRetriever (#17086)

### `llama-index-llms-bedrock-converse` [0.4.1]

- Adding AWS Nova models to Bedrock Converse (#17139)

### `llama-index-llms-ollama` [0.4.2]

- Ollama LLM: Added TypeError exception to `_get_response_token_counts` (#17150)

### `llama-index-llms-sambanovasystems` [0.4.3]

- changes in openai identification in url (#17161)

### `llama-index-memory-mem0` [0.2.1]

- Fix mem0 version check (#17159)

### `llama-index-multi-modal-llms-openai` [0.4.0]

- fix: make OpenAIMultiModal work with new ChatMessage (#17138)

### `llama-index-postprocessor-bedrock-rerank` [0.3.0]

- Add AWS Bedrock Reranker (#17134)

### `llama-index-readers-file` [0.4.1]

- update doc id for unstructured reader (#17160)

### `llama-index-retrievers-duckdb-retriever` [0.4.0]

- fix: use prepared statement in DuckDBRetriever (#17092)

### `llama-index-vector-stores-postgres` [0.3.2]

- Create tables for pgvector regardless of schema status (#17100)

### `llama-index-vector-stores-weaviate` [1.2.4]

- make alpha not none in weaviate (#17163)
- Make Weaviate Vector Store integration work with complex properties (#17129)
- Add support for `IS_EMPTY` metadata filters to Weaviate Vector Store integration (#17128)
- Make Weaviate Vector Store integration support nested metadata filtering (#17107)

## [2024-11-26]

### `llama-index-core` [0.12.2]
100 changes: 100 additions & 0 deletions docs/docs/CHANGELOG.md
@@ -1,5 +1,105 @@
# ChangeLog

## [2024-12-08]

### `llama-index-core` [0.12.4]

- Fix sync and async structured streaming (#17194)
- unpin pydantic to allow 2.8 or greater (#17193)
- Update core structured predict streaming, add ollama structured predict (#17188)
- bump tenacity dependency in llama-index-core (#17178)

### `llama-index-indices-managed-vectara` [0.3.1]

- Add Verbose to Vectara `as_query_engine` (#17176)

### `llama-index-llms-ollama` [0.5.0]

- Update core structured predict streaming, add ollama structured predict (#17188)

### `llama-index-llms-perplexity` [0.3.2]

- Fix message format for perplexity (#17182)

### `llama-index-readers-web` [0.3.1]

- Add possibility to use URI as doc id in WholeSiteReader (#17187)

### `llama-index-vector-stores-chroma` [0.4.1]

- BUG FIX: llama-index-vectorstore-chromadb to work with chromadb v0.5.17 (#17184)

## [2024-12-06]

### `llama-index-core` [0.12.3]

- cover SimpleDirectoryReader with unit tests (#17156)
- docs: rewrite openai image reasoning example without multimodal LLM (#17148)
- fix(metrics): fixed NDCG calculation and added comprehensive test cases (#17126)
- feat: improve ImageBlock (#17111)
- Remove forgotten print in ChatMemoryBuffer (#17114)
- [FIX] Move JSONalyzeQueryEngine to experimental (#17110)

### `llama-index-embeddings-clip` [0.3.1]

- Unrestrict clip models to use (#17162)

### `llama-index-embeddings-openai` [0.3.1]

- fix/openai-embbeding-retry (#17072)

### `llama-index-embeddings-text-embeddings-inference` [0.3.1]

- proper auth token in TEI (#17158)

### `llama-index-indices-managed-llama-cloud` [0.6.3]

- chore: fix httpx_client typo in LlamaCloudRetriever (#17101)
- fix: wrong project id variable in LlamaCloudRetriever (#17086)

### `llama-index-llms-bedrock-converse` [0.4.1]

- Adding AWS Nova models to Bedrock Converse (#17139)

### `llama-index-llms-ollama` [0.4.2]

- Ollama LLM: Added TypeError exception to `_get_response_token_counts` (#17150)

### `llama-index-llms-sambanovasystems` [0.4.3]

- changes in openai identification in url (#17161)

### `llama-index-memory-mem0` [0.2.1]

- Fix mem0 version check (#17159)

### `llama-index-multi-modal-llms-openai` [0.4.0]

- fix: make OpenAIMultiModal work with new ChatMessage (#17138)

### `llama-index-postprocessor-bedrock-rerank` [0.3.0]

- Add AWS Bedrock Reranker (#17134)

### `llama-index-readers-file` [0.4.1]

- update doc id for unstructured reader (#17160)

### `llama-index-retrievers-duckdb-retriever` [0.4.0]

- fix: use prepared statement in DuckDBRetriever (#17092)

### `llama-index-vector-stores-postgres` [0.3.2]

- Create tables for pgvector regardless of schema status (#17100)

### `llama-index-vector-stores-weaviate` [1.2.4]

- make alpha not none in weaviate (#17163)
- Make Weaviate Vector Store integration work with complex properties (#17129)
- Add support for `IS_EMPTY` metadata filters to Weaviate Vector Store integration (#17128)
- Make Weaviate Vector Store integration support nested metadata filtering (#17107)

## [2024-11-26]

### `llama-index-core` [0.12.2]
4 changes: 4 additions & 0 deletions docs/docs/api_reference/callbacks/argilla.md
@@ -0,0 +1,4 @@
::: llama_index.callbacks.argilla
options:
members:
- argilla_callback_handler
4 changes: 4 additions & 0 deletions docs/docs/api_reference/postprocessor/bedrock_rerank.md
@@ -0,0 +1,4 @@
::: llama_index.postprocessor.bedrock_rerank
options:
members:
- BedrockRerank
2 changes: 1 addition & 1 deletion docs/docs/examples/llm/bedrock_converse.ipynb
@@ -274,7 +274,7 @@
"from llama_index.llms.bedrock_converse import BedrockConverse\n",
"\n",
"llm = BedrockConverse(\n",
" model=\"anthropic.claude-3-haiku-20240307-v1:0\",\n",
" model=\"us.amazon.nova-lite-v1:0\",\n",
" aws_access_key_id=\"AWS Access Key ID to use\",\n",
" aws_secret_access_key=\"AWS Secret Access Key to use\",\n",
" aws_session_token=\"AWS Session Token to use\",\n",
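In the `bedrock_converse.ipynb` hunk above, the documented example model changes from Claude 3 Haiku to Amazon Nova Lite, matching the `llama-index-llms-bedrock-converse` [0.4.1] changelog entry ("Adding AWS Nova models to Bedrock Converse", #17139). A minimal usage sketch, assuming valid AWS credentials and Bedrock access to Nova in the chosen region; the `region_name` value and the trailing `complete()` call are illustrative additions, not part of the commit:

```python
# A minimal sketch, assuming valid AWS credentials and Bedrock model access;
# the region and the completion call are illustrative, not from the diff.
from llama_index.llms.bedrock_converse import BedrockConverse

llm = BedrockConverse(
    model="us.amazon.nova-lite-v1:0",  # Nova model documented in this release
    aws_access_key_id="AWS Access Key ID to use",
    aws_secret_access_key="AWS Secret Access Key to use",
    aws_session_token="AWS Session Token to use",
    region_name="us-east-1",  # assumed region; Nova availability varies
)

resp = llm.complete("Paul Graham is ")
print(resp)
```

The `us.` prefix in the model ID refers to a US cross-region inference profile, which is why the sketch assumes a US region.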
57 changes: 51 additions & 6 deletions docs/docs/examples/llm/ollama.ipynb
Expand Up @@ -349,7 +349,7 @@
"source": [
"## Structured Outputs\n",
"\n",
"We can also attach a pyndatic class to the LLM to ensure structured outputs"
"We can also attach a pyndatic class to the LLM to ensure structured outputs. This will use Ollama's builtin structured output capabilities for a given pydantic class."
]
},
{
@@ -360,7 +360,6 @@
"outputs": [],
"source": [
"from llama_index.core.bridge.pydantic import BaseModel\n",
"from llama_index.core.tools import FunctionTool\n",
"\n",
"\n",
"class Song(BaseModel):\n",
@@ -392,11 +391,13 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{\"name\": \"Yesterday\", \"artist\": \"The Beatles\"}\n"
"{\"name\":\"Radioactive\",\"artist\":\"Imagine Dragons\"}\n"
]
}
],
"source": [
"from llama_index.core.llms import ChatMessage\n",
"\n",
"response = sllm.chat([ChatMessage(role=\"user\", content=\"Name a random song!\")])\n",
"print(response.message.content)"
]
@@ -419,7 +420,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{\"name\": \"Happy Birthday to You\", \"artist\": \"Traditional\"}\n"
"{\"name\":\"Lose Yourself\",\"artist\":\"Eminem\"}\n"
]
}
],
@@ -432,10 +433,54 @@
},
{
"cell_type": "markdown",
"id": "c4b224c6",
"id": "cdad7904",
"metadata": {},
"source": [
"Currently, Ollama does not support streaming structured objects. But hopefully soon!"
"You can also stream structured outputs! Streaming a structured output is a little different than streaming a normal string. It will yield a generator of the most up to date structured object."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5c40157",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\"name\":null,\"artist\":null}\n",
"{\"name\":null,\"artist\":null}\n",
"{\"name\":null,\"artist\":null}\n",
"{\"name\":null,\"artist\":null}\n",
"{\"name\":null,\"artist\":null}\n",
"{\"name\":null,\"artist\":null}\n",
"{\"name\":null,\"artist\":null}\n",
"{\"name\":null,\"artist\":\"\"}\n",
"{\"name\":null,\"artist\":\"The\"}\n",
"{\"name\":null,\"artist\":\"The Black\"}\n",
"{\"name\":null,\"artist\":\"The Black Keys\"}\n",
"{\"name\":null,\"artist\":\"The Black Keys\"}\n",
"{\"name\":null,\"artist\":\"The Black Keys\"}\n",
"{\"name\":null,\"artist\":\"The Black Keys\"}\n",
"{\"name\":null,\"artist\":\"The Black Keys\"}\n",
"{\"name\":null,\"artist\":\"The Black Keys\"}\n",
"{\"name\":\"\",\"artist\":\"The Black Keys\"}\n",
"{\"name\":\"Lon\",\"artist\":\"The Black Keys\"}\n",
"{\"name\":\"Lonely\",\"artist\":\"The Black Keys\"}\n",
"{\"name\":\"Lonely Boy\",\"artist\":\"The Black Keys\"}\n",
"{\"name\":\"Lonely Boy\",\"artist\":\"The Black Keys\"}\n",
"{\"name\":\"Lonely Boy\",\"artist\":\"The Black Keys\"}\n",
"{\"name\":\"Lonely Boy\",\"artist\":\"The Black Keys\"}\n"
]
}
],
"source": [
"response_gen = sllm.stream_chat(\n",
" [ChatMessage(role=\"user\", content=\"Name a random song!\")]\n",
")\n",
"for r in response_gen:\n",
" print(r.message.content)"
]
}
],
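Pulling together the pieces of the truncated `ollama.ipynb` diff above, here is a consolidated sketch of the structured-output flow it documents. The `Ollama(...)` construction, the model tag, and the `as_structured_llm(Song)` wiring are elided by the diff view and reconstructed here as assumptions; the `Song` class and the `chat`/`stream_chat` calls come from the diff itself:

```python
# A consolidated sketch of the structured-output flow shown in the notebook
# diff above. Assumes a local Ollama server with the named model pulled; the
# Ollama(...) setup and as_structured_llm() call are assumptions, since the
# truncated diff elides them.
from llama_index.core.bridge.pydantic import BaseModel
from llama_index.core.llms import ChatMessage
from llama_index.llms.ollama import Ollama


class Song(BaseModel):
    """A song with a name and an artist."""

    name: str
    artist: str


llm = Ollama(model="llama3.1:latest", request_timeout=120.0)  # assumed model tag
sllm = llm.as_structured_llm(Song)

# Blocking call: returns one complete structured object.
response = sllm.chat([ChatMessage(role="user", content="Name a random song!")])
print(response.message.content)

# Streaming call: each iteration yields the most up-to-date partial object,
# so early chunks may hold null fields that fill in as tokens arrive.
response_gen = sllm.stream_chat(
    [ChatMessage(role="user", content="Name a random song!")]
)
for r in response_gen:
    print(r.message.content)
```

As the cell output in the diff shows, consumers of the stream should treat each yielded object as provisional (fields start as `null` and fill in progressively) until the generator is exhausted.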
