Releases: deepset-ai/haystack

v2.2.1-rc1

05 Jun 16:10

Release Notes

v2.2.1-rc1

⬆️ Upgrade Notes

  • trafilatura must now be manually installed with pip install trafilatura to use the HTMLToDocument component.

⚡️ Enhancement Notes

  • Remove trafilatura as a direct dependency and make it a lazily imported one.

v1.26.1

05 Jun 13:32

Release Notes

v1.26.1

🚀 New Features

  • Re-add the previously removed fetch_archive_from_http utility function to fetch zip and gzip archives from a URL, as shown in the sketch below.
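
A minimal usage sketch, assuming the Haystack 1.x signature of the function (the URL and output directory are illustrative):

```python
from haystack.utils import fetch_archive_from_http

# Downloads and unpacks a .zip or .gz archive into output_dir.
# Returns False without fetching if output_dir already contains files.
fetch_archive_from_http(
    url="https://example.com/data/documents.zip",  # illustrative URL
    output_dir="data/documents",
)
```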

v1.26.0

04 Jun 14:08
4188bf9

Release Notes

v1.26.0

Prelude

We are announcing that Haystack 1.26 is the final minor release for Haystack 1.x. Although we will continue to release bug fixes for this version, we will neither be adding nor removing any functionalities. Instead, we will focus our efforts on Haystack 2.x. Haystack 1.26 will reach its end-of-life on March 11, 2025.

The utility functions fetch_archive_from_http, build_pipeline and add_example_data were removed from Haystack.

This release changes the PDFToTextConverter so that it no longer supports PyMuPDF. The converter now always uses xpdf by default.

⬆️ Upgrade Notes

  • We recommend replacing calls to the fetch_archive_from_http function with the tools available in the Python standard library or in your operating system; see the sketch after this list.
  • To keep using PyMuPDF, you must create a custom node; you can use the previous Haystack version for inspiration.
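
If you only need to fetch and unpack a zip archive, the standard library is enough. A minimal sketch (fetch_zip is a hypothetical helper; the URL and paths are illustrative):

```python
import io
import urllib.request
import zipfile
from pathlib import Path

def fetch_zip(url: str, output_dir: str) -> None:
    """Download a zip archive and extract it into output_dir."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as response:
        archive = io.BytesIO(response.read())
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out)

fetch_zip("https://example.com/data/documents.zip", "data/documents")
```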

⚡️ Enhancement Notes

  • Add a raise_on_failure flag to the BaseConverter class so that long-running processes can optionally continue instead of aborting on exceptions.

  • Support for Llama3 models on AWS Bedrock.

  • Support for MistralAI and new Claude 3 models on AWS Bedrock.

  • Upgrade Transformers to the latest version 4.37.2. This version adds support for the Phi-2 and Qwen2 models and improves support for quantization.

  • Upgrade transformers to version 4.39.3 so that Haystack can support the new Cohere Command R models.

  • Add support for latest OpenAI embedding models text-embedding-3-large and text-embedding-3-small.

  • API_BASE can now be passed as an optional parameter in the getting_started sample. Only the openai provider is supported in this set of changes. PromptNode and PromptModel were enhanced to allow passing this parameter. This enables RAG against a local endpoint (e.g., http://localhost:1234/v1) as long as it is OpenAI-compatible (such as LM Studio); see the sketch after this list.

    Logging in the getting_started sample was made more verbose, to make it easier for people to see what is happening under the covers.

  • Added the new option split_by="page" to the preprocessor so documents can be chunked by page break.

  • Review and update context windows for OpenAI GPT models.

  • Support gated repositories for Hugging Face inference.

  • Add a check to verify that the embedding dimensions set in the FAISS Document Store and the retriever match before running embedding calculations.
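
A sketch of the API_BASE enhancement above, assuming api_base is accepted directly by PromptNode as the note describes (the model name, key, and endpoint are illustrative):

```python
from haystack.nodes import PromptNode

# Point PromptNode at a local OpenAI-compatible server such as LM Studio.
# Local servers usually ignore the API key, but the client still requires one.
prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="not-needed-for-local",
    api_base="http://localhost:1234/v1",
)
print(prompt_node("What is the capital of France?"))
```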

🐛 Bug Fixes

  • Fixed a pipeline run error that occurred when using the FileTypeClassifier with the raise_on_error: True option. Instead of returning an unexpected NoneType, the file is now routed to a dead-end edge.

  • Ensure that the crawled files are downloaded to the output_dir directory, as specified in the Crawler constructor. Previously, some files were incorrectly downloaded to the current working directory.

  • Fixed the SearchEngineDocumentStore.get_metadata_values_by_key method to use self.index if no index is provided.

  • Fixed OutputParser usage in PromptTemplate after the invocation context was made immutable in #7510.

  • When using a Pipeline with a JoinNode (e.g. JoinDocuments), all information from the previous nodes was lost except for a few select fields (e.g. documents), because the JoinNode did not properly pass on the information from the previous nodes. This has been fixed: all information from the previous nodes is now passed on to the next node in the pipeline.

    For example, the following pipeline rewrites the query during execution, combined with a hybrid retrieval setup that requires a JoinDocuments node. The first prompt node rewrites the query to fix all spelling errors, and the rewritten query is used for retrieval. The JoinDocuments node now passes on the rewritten query so it can be used by the QAPromptNode, whereas before it passed on the original query.

```python
from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import (
    BM25Retriever,
    EmbeddingRetriever,
    JoinDocuments,
    PromptNode,
    PromptTemplate,
    Shaper,
)

document_store = InMemoryDocumentStore(use_bm25=True)
dicts = [
    {"content": "The capital of Germany is Berlin."},
    {"content": "The capital of France is Paris."},
]
document_store.write_documents(dicts)

# Rewrites the incoming query to fix spelling errors.
query_prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="",  # add your OpenAI API key
    default_prompt_template=PromptTemplate(
        "You are a spell checker. Given a user query return the same query "
        "with all spelling errors fixed.\nUser Query: {query}\nSpell Checked Query:"
    ),
)
# Turns the list of results into a single string used as the new query.
shaper = Shaper(func="join_strings", inputs={"strings": "results"}, outputs=["query"])
qa_prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="",  # add your OpenAI API key
    default_prompt_template=PromptTemplate("Answer the user query. Query: {query}"),
)
sparse_retriever = BM25Retriever(document_store=document_store, top_k=2)
dense_retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="intfloat/e5-base-v2",
    model_format="sentence_transformers",
    top_k=2,
)
document_store.update_embeddings(dense_retriever)

pipeline = Pipeline()
pipeline.add_node(component=query_prompt_node, name="QueryPromptNode", inputs=["Query"])
pipeline.add_node(component=shaper, name="ListToString", inputs=["QueryPromptNode"])
pipeline.add_node(component=sparse_retriever, name="BM25", inputs=["ListToString"])
pipeline.add_node(component=dense_retriever, name="Embedding", inputs=["ListToString"])
pipeline.add_node(
    component=JoinDocuments(join_mode="concatenate"), name="Join", inputs=["BM25", "Embedding"]
)
pipeline.add_node(component=qa_prompt_node, name="QAPromptNode", inputs=["Join"])

out = pipeline.run(query="What is the captial of Grmny?", debug=True)
print(out["invocation_context"])

# Before the fix:
# {'query': 'What is the captial of Grmny?',  <-- original query
#  'results': ['The capital of Germany is Berlin.'],
#  'prompts': ['Answer the user query. Query: What is the captial of Grmny?'], ...}

# After the fix:
# {'query': 'What is the capital of Germany?',  <-- rewritten query
#  'results': ['The capital of Germany is Berlin.'],
#  'prompts': ['Answer the user query. Query: What is the capital of Germany?'], ...}
```

  • When passing empty inputs (such as query="") to PromptNode, the node would raise an error. This has been fixed.

  • Change the dummy vector used internally in the Pinecone Document Store. A recent change to the Pinecone API no longer allows the use of vectors filled with zeros, which is what the previous dummy vector was.

  • The types of metadata values accepted by RouteDocuments were unnecessarily restricted to strings. This caused validation errors (for example, when loading from a YAML file) if a user tried to use a boolean, for example. We add boolean and int as valid types for metadata_values.

  • Fixed a bug that made it impossible to write Documents to Weaviate when some of the fields were empty lists (e.g. split_overlap for preprocessed documents).

v2.2.0

03 Jun 13:51
19e4766

Release Notes

v2.2.0

Highlights

The Multiplexer component proved hard to explain and to understand. After reviewing its use cases, the documentation was rewritten and the component was renamed to BranchJoiner to better convey its functionality.

Add the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables to the OpenAI components.

⬆️ Upgrade Notes

  • BranchJoiner has the very same interface as Multiplexer. To upgrade your code, just rename any occurrence of Multiplexer to BranchJoiner and adjust the imports accordingly, as in the sketch below.
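
A minimal before/after sketch of the rename; the import paths below assume the Haystack 2.x module layout, so verify them against your installed version:

```python
# Before (deprecated):
# from haystack.components.others import Multiplexer
# joiner = Multiplexer(type_=str)

# After:
from haystack.components.joiners import BranchJoiner

joiner = BranchJoiner(type_=str)
```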

🚀 New Features

  • Add BranchJoiner to eventually replace Multiplexer.
  • AzureOpenAIGenerator and AzureOpenAIChatGenerator can now be configured by passing a timeout for the underlying AzureOpenAI client.

⚡️ Enhancement Notes

  • ChatPromptBuilder now supports changing its template at runtime. This allows you to define a default template and then override it as needed.
  • If an LLM-based evaluator (e.g., Faithfulness or ContextRelevance) is initialised with raise_on_failure=False, and a call to an LLM fails or the LLM outputs invalid JSON, the score of the sample is set to NaN instead of raising an exception. The user is notified with a warning indicating the number of requests that failed.
  • Adds inference mode to the model call of the ExtractiveReader. This prevents PyTorch from calculating gradients at inference time.
  • The DocumentCleaner class now has an optional keep_id attribute that, if set to True, keeps the document IDs unchanged after cleanup.
  • DocumentSplitter now has an optional split_threshold parameter. Use it if you would rather not split inputs that are only slightly longer than the allowed split_length. If a chunk produced while splitting is smaller than split_threshold, it is concatenated with the previous one. This avoids chunks too small to be meaningful.
  • Re-implement InMemoryDocumentStore BM25 search with incremental indexing, avoiding re-creating the entire inverted index for every new query. This change also removes the dependency on haystack_bm25. Please refer to PR #7549 for the full context.
  • Improved MIME type management by directly setting MIME types on ByteStreams, enhancing the overall handling and routing of different file types. This update makes MIME type data more consistently accessible and simplifies the process of working with various document formats.
  • PromptBuilder now supports changing its template at runtime (e.g. for prompt engineering). This allows you to define a default template and then override it as needed.
  • You can now set the timeout and max_retries parameters on OpenAI components by setting the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables or by passing them at __init__; see the sketch after this list.
  • The DocumentJoiner component's run method now accepts a top_k parameter, allowing users to specify the maximum number of documents to return at query time. This fixes issue #7702.
  • Enforce JSON mode on OpenAI LLM-based evaluators so that they always return valid JSON output. This ensures the output is always in a consistent format, regardless of the input.
  • Make warm_up() usage consistent across the codebase.
  • Create a class hierarchy for pipeline classes, and move the run logic into the child class. Preparation work for introducing multiple run strategies.
  • Make SerperDevWebSearch more robust when the snippet field is not present in the response.
  • Make SparseEmbedding a dataclass; this makes the class easier to use with Pydantic.
  • `HTMLToDocument`: change the HTML conversion backend from boilerpy3 to trafilatura, which is more robust and better maintained.
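
A minimal sketch of the timeout and retry configuration described above, either via environment variables or constructor arguments (the values are illustrative, and OPENAI_API_KEY is assumed to be set):

```python
import os

from haystack.components.generators import OpenAIGenerator

# Option 1: environment variables read by the OpenAI components.
os.environ["OPENAI_TIMEOUT"] = "30"      # seconds
os.environ["OPENAI_MAX_RETRIES"] = "5"

# Option 2: pass the values explicitly at construction time.
generator = OpenAIGenerator(model="gpt-3.5-turbo", timeout=30, max_retries=5)
```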

⚠️ Deprecation Notes

  • Multiplexer is now deprecated.
  • DynamicChatPromptBuilder has been deprecated as ChatPromptBuilder fully covers its functionality. Use ChatPromptBuilder instead.
  • DynamicPromptBuilder has been deprecated as PromptBuilder fully covers its functionality. Use PromptBuilder instead.
  • The following parameters of HTMLToDocument are ignored and will be removed in Haystack 2.4.0: extractor_type and try_others.

🐛 Bug Fixes

  • FaithfulnessEvaluator and ContextRelevanceEvaluator now return 0 instead of NaN when applied to an empty context or empty statements.
  • Fixed the Azure generator components; they were missing the @component decorator.
  • Updates the from_dict method of SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder, NamedEntityExtractor, SentenceTransformersDiversityRanker and LocalWhisperTranscriber to allow None as a valid value for device when deserializing from a YAML file. This allows a deserialized pipeline to automatically determine which device to use via the ComponentDevice.resolve_device logic.
  • Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
  • Fix NamedEntityExtractor crashing in Python 3.12 if constructed using a string backend argument.
  • Fixed the PdfMinerToDocument converter's outputs to be properly wired up to 'documents'.
  • Add to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.
  • Improves/fixes type serialization of PEP 585 types (e.g. list[Document] and their nested versions). This improvement enables better serialization of generics and nested types and improves/fixes matching of list[X] and List[X] types in component connections after serialization.
  • Fixed (de)serialization of NamedEntityExtractor. Includes updated tests verifying these fixes when NamedEntityExtractor is used in pipelines.
  • The include_outputs_from parameter in Pipeline.run correctly returns outputs of components with multiple outputs.
  • Return an empty list of answers when ExtractiveReader receives an empty list of documents instead of raising an exception.

v1.26.0-rc1

03 Jun 15:21
e66327a

Release Notes

v1.26.0-rc1

Prelude

The utility functions fetch_archive_from_http, build_pipeline and add_example_data were removed from Haystack.

This release changes the PDFToTextConverter so that it no longer supports PyMuPDF. The converter now always uses xpdf by default.

⬆️ Upgrade Notes

  • We recommend replacing calls to the fetch_archive_from_http function with the tools available in the Python standard library or in your operating system.
  • To keep using PyMuPDF, you must create a custom node; you can use the previous Haystack version for inspiration.

⚡️ Enhancement Notes

  • Support for Llama3 models on AWS Bedrock.
  • Support for MistralAI and new Claude 3 models on AWS Bedrock.
  • Upgrade transformers to version 4.39.3 so that Haystack can support the new Cohere Command R models.
  • Review and update context windows for OpenAI GPT models.
  • Support gated repos for Huggingface inference.
  • Add a check to verify that the embedding dimension set in the FAISS Document Store and retriever are equal before running embedding calculations.

🐛 Bug Fixes

  • Fixed a pipeline run error that occurred when using the FileTypeClassifier with the raise_on_error: True option. Instead of returning an unexpected NoneType, the file is now routed to a dead-end edge.

  • Ensure that the crawled files are downloaded to the output_dir directory, as specified in the Crawler constructor. Previously, some files were incorrectly downloaded to the current working directory.

  • Fixed the SearchEngineDocumentStore.get_metadata_values_by_key method to use self.index if no index is provided.

  • Fixed OutputParser usage in PromptTemplate after the invocation context was made immutable in #7510.

  • When using a Pipeline with a JoinNode (e.g. JoinDocuments), all information from the previous nodes was lost except for a few select fields (e.g. documents), because the JoinNode did not properly pass on the information from the previous nodes. This has been fixed: all information from the previous nodes is now passed on to the next node in the pipeline.

    For example, the following pipeline rewrites the query during execution, combined with a hybrid retrieval setup that requires a JoinDocuments node. The first prompt node rewrites the query to fix all spelling errors, and the rewritten query is used for retrieval. The JoinDocuments node now passes on the rewritten query so it can be used by the QAPromptNode, whereas before it passed on the original query.

```python
from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import (
    BM25Retriever,
    EmbeddingRetriever,
    JoinDocuments,
    PromptNode,
    PromptTemplate,
    Shaper,
)

document_store = InMemoryDocumentStore(use_bm25=True)
dicts = [
    {"content": "The capital of Germany is Berlin."},
    {"content": "The capital of France is Paris."},
]
document_store.write_documents(dicts)

# Rewrites the incoming query to fix spelling errors.
query_prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="",  # add your OpenAI API key
    default_prompt_template=PromptTemplate(
        "You are a spell checker. Given a user query return the same query "
        "with all spelling errors fixed.\nUser Query: {query}\nSpell Checked Query:"
    ),
)
# Turns the list of results into a single string used as the new query.
shaper = Shaper(func="join_strings", inputs={"strings": "results"}, outputs=["query"])
qa_prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="",  # add your OpenAI API key
    default_prompt_template=PromptTemplate("Answer the user query. Query: {query}"),
)
sparse_retriever = BM25Retriever(document_store=document_store, top_k=2)
dense_retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="intfloat/e5-base-v2",
    model_format="sentence_transformers",
    top_k=2,
)
document_store.update_embeddings(dense_retriever)

pipeline = Pipeline()
pipeline.add_node(component=query_prompt_node, name="QueryPromptNode", inputs=["Query"])
pipeline.add_node(component=shaper, name="ListToString", inputs=["QueryPromptNode"])
pipeline.add_node(component=sparse_retriever, name="BM25", inputs=["ListToString"])
pipeline.add_node(component=dense_retriever, name="Embedding", inputs=["ListToString"])
pipeline.add_node(
    component=JoinDocuments(join_mode="concatenate"), name="Join", inputs=["BM25", "Embedding"]
)
pipeline.add_node(component=qa_prompt_node, name="QAPromptNode", inputs=["Join"])

out = pipeline.run(query="What is the captial of Grmny?", debug=True)
print(out["invocation_context"])

# Before the fix:
# {'query': 'What is the captial of Grmny?',  <-- original query
#  'results': ['The capital of Germany is Berlin.'],
#  'prompts': ['Answer the user query. Query: What is the captial of Grmny?'], ...}

# After the fix:
# {'query': 'What is the capital of Germany?',  <-- rewritten query
#  'results': ['The capital of Germany is Berlin.'],
#  'prompts': ['Answer the user query. Query: What is the capital of Germany?'], ...}
```

  • When passing empty inputs (such as query="") to PromptNode, the node would raise an error. This has been fixed.

v1.26.0-rc0

⚡️ Enhancement Notes

  • Add a raise_on_failure flag to the BaseConverter class so that long-running processes can optionally continue instead of aborting on exceptions.

  • Upgrade Transformers to the latest version 4.37.2. This version adds support for the Phi-2 and Qwen2 models and improves support for quantization.

  • Add support for latest OpenAI embedding models text-embedding-3-large and text-embedding-3-small.

  • API_BASE can now be passed as an optional parameter in the getting_started sample. Only the openai provider is supported in this set of changes. PromptNode and PromptModel were enhanced to allow passing this parameter. This enables RAG against a local endpoint (e.g., http://localhost:1234/v1) as long as it is OpenAI-compatible (such as LM Studio).

    Logging in the getting_started sample was made more verbose, to make it easier for people to see what is happening under the covers.

  • Added the new option split_by="page" to the preprocessor so documents can be chunked by page break.

🐛 Bug Fixes

  • Change the dummy vector used internally in the Pinecone Document Store. A recent change to the Pinecone API no longer allows the use of vectors filled with zeros, which is what the previous dummy vector was.
  • The types of metadata values accepted by RouteDocuments were unnecessarily restricted to strings. This caused validation errors (for example, when loading from a YAML file) if a user tried to use a boolean, for example. We add boolean and int as valid types for metadata_values.
  • Fixed a bug that made it impossible to write Documents to Weaviate when some of the fields were empty lists (e.g. split_overlap for preprocessed documents).

v2.2.0-rc2

30 May 17:14
16e2ad7
Pre-release

Release Notes

v2.2.0-rc1

Highlights

The Multiplexer component proved hard to explain and to understand. After reviewing its use cases, the documentation was rewritten and the component was renamed to BranchJoiner to better convey its functionality.

Add the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables to the OpenAI components.

⬆️ Upgrade Notes

  • BranchJoiner has the very same interface as Multiplexer. To upgrade your code, just rename any occurrence of Multiplexer to BranchJoiner and adjust the imports accordingly.

🚀 New Features

  • Add BranchJoiner to eventually replace Multiplexer.
  • AzureOpenAIGenerator and AzureOpenAIChatGenerator can now be configured by passing a timeout for the underlying AzureOpenAI client.

⚡️ Enhancement Notes

  • ChatPromptBuilder now supports changing its template at runtime. This allows you to define a default template and then override it as needed.
  • If an LLM-based evaluator (e.g., Faithfulness or ContextRelevance) is initialised with raise_on_failure=False, and a call to an LLM fails or the LLM outputs invalid JSON, the score of the sample is set to NaN instead of raising an exception. The user is notified with a warning indicating the number of requests that failed.
  • Adds inference mode to the model call of the ExtractiveReader. This prevents PyTorch from calculating gradients at inference time.
  • The DocumentCleaner class now has an optional keep_id attribute that, if set to True, keeps the document IDs unchanged after cleanup.
  • DocumentSplitter now has an optional split_threshold parameter. Use it if you would rather not split inputs that are only slightly longer than the allowed split_length. If a chunk produced while splitting is smaller than split_threshold, it is concatenated with the previous one. This avoids chunks too small to be meaningful.
  • Re-implement InMemoryDocumentStore BM25 search with incremental indexing, avoiding re-creating the entire inverted index for every new query. This change also removes the dependency on haystack_bm25. Please refer to PR #7549 for the full context.
  • Improved MIME type management by directly setting MIME types on ByteStreams, enhancing the overall handling and routing of different file types. This update makes MIME type data more consistently accessible and simplifies the process of working with various document formats.
  • PromptBuilder now supports changing its template at runtime (e.g. for prompt engineering). This allows you to define a default template and then override it as needed.
  • You can now set the timeout and max_retries parameters on OpenAI components by setting the 'OPENAI_TIMEOUT' and 'OPENAI_MAX_RETRIES' environment variables or by passing them at __init__.
  • The DocumentJoiner component's run method now accepts a top_k parameter, allowing users to specify the maximum number of documents to return at query time. This fixes issue #7702.
  • Enforce JSON mode on OpenAI LLM-based evaluators so that they always return valid JSON output. This ensures the output is always in a consistent format, regardless of the input.
  • Make warm_up() usage consistent across the codebase.
  • Create a class hierarchy for pipeline classes, and move the run logic into the child class. Preparation work for introducing multiple run strategies.
  • Make SerperDevWebSearch more robust when the snippet field is not present in the response.
  • Make SparseEmbedding a dataclass; this makes the class easier to use with Pydantic.
  • `HTMLToDocument`: change the HTML conversion backend from boilerpy3 to trafilatura, which is more robust and better maintained.

⚠️ Deprecation Notes

  • Multiplexer is now deprecated.
  • DynamicChatPromptBuilder has been deprecated as ChatPromptBuilder fully covers its functionality. Use ChatPromptBuilder instead.
  • DynamicPromptBuilder has been deprecated as PromptBuilder fully covers its functionality. Use PromptBuilder instead.
  • The following parameters of HTMLToDocument are ignored and will be removed in Haystack 2.4.0: extractor_type and try_others.

🐛 Bug Fixes

  • FaithfulnessEvaluator and ContextRelevanceEvaluator now return 0 instead of NaN when applied to an empty context or empty statements.
  • Fixed the Azure generator components; they were missing the @component decorator.
  • Updates the from_dict method of SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder, NamedEntityExtractor, SentenceTransformersDiversityRanker and LocalWhisperTranscriber to allow None as a valid value for device when deserializing from a YAML file. This allows a deserialized pipeline to automatically determine which device to use via the ComponentDevice.resolve_device logic.
  • Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
  • Fix NamedEntityExtractor crashing in Python 3.12 if constructed using a string backend argument.
  • Fixed the PdfMinerToDocument converter's outputs to be properly wired up to 'documents'.
  • Add to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.
  • Improves/fixes type serialization of PEP 585 types (e.g. list[Document] and their nested versions). This improvement enables better serialization of generics and nested types and improves/fixes matching of list[X] and List[X] types in component connections after serialization.
  • Fixed (de)serialization of NamedEntityExtractor. Includes updated tests verifying these fixes when NamedEntityExtractor is used in pipelines.
  • The include_outputs_from parameter in Pipeline.run correctly returns outputs of components with multiple outputs.
  • Return an empty list of answers when ExtractiveReader receives an empty list of documents instead of raising an exception.

v2.2.0-rc1

30 May 16:51
3814e75
Pre-release
v2.2.0-rc1

v2.1.2

16 May 13:40

Release Notes

v2.1.2

⚡️ Enhancement Notes

  • Enforce JSON mode on OpenAI LLM-based evaluators so that they always return valid JSON output. This ensures the output is always in a consistent format, regardless of the input.

🐛 Bug Fixes

  • FaithfulnessEvaluator and ContextRelevanceEvaluator now return 0 instead of NaN when applied to an empty context or empty statements.
  • Fixed the Azure generator components; they were missing the @component decorator.
  • Updates the from_dict method of SentenceTransformersTextEmbedder, SentenceTransformersDocumentEmbedder, NamedEntityExtractor, SentenceTransformersDiversityRanker and LocalWhisperTranscriber to allow None as a valid value for device when deserializing from a YAML file. This allows a deserialized pipeline to automatically determine which device to use via the ComponentDevice.resolve_device logic.
  • Improves/fixes type serialization of PEP 585 types (e.g. list[Document] and their nested versions). This improvement enables better serialization of generics and nested types and improves/fixes matching of list[X] and List[X] types in component connections after serialization.
  • Fixed (de)serialization of NamedEntityExtractor. Includes updated tests verifying these fixes when NamedEntityExtractor is used in pipelines.
  • The include_outputs_from parameter in Pipeline.run correctly returns outputs of components with multiple outputs.

v2.1.1-rc1

09 May 15:19

Release Notes

v2.1.1-rc1

⚡️ Enhancement Notes

  • Make SparseEmbedding a dataclass; this makes the class easier to use with Pydantic.

🐛 Bug Fixes

  • Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
  • Add to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.

v2.1.1

09 May 16:03

Release Notes

v2.1.1

⚡️ Enhancement Notes

  • Make SparseEmbedding a dataclass; this makes the class easier to use with Pydantic. See the sketch below.
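
A minimal sketch of using the dataclass with Pydantic; RetrievalRecord is a hypothetical model, and the indices/values field names are assumed to match the dataclass as of this release:

```python
from haystack.dataclasses import SparseEmbedding
from pydantic import BaseModel

# Pydantic can validate and serialize stdlib dataclasses directly,
# so SparseEmbedding works as a plain field type.
class RetrievalRecord(BaseModel):
    query: str
    embedding: SparseEmbedding

record = RetrievalRecord(
    query="haystack release notes",
    embedding=SparseEmbedding(indices=[0, 7, 42], values=[0.12, 0.5, 0.33]),
)
print(record.embedding.indices)
```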

🐛 Bug Fixes

  • Fix the broken serialization of HuggingFaceAPITextEmbedder, HuggingFaceAPIDocumentEmbedder, HuggingFaceAPIGenerator, and HuggingFaceAPIChatGenerator.
  • Add to_dict method to DocumentRecallEvaluator to allow proper serialization of the component.