- New highly anticipated feature X added to Python SDK (#X).
- New highly anticipated feature Y added to Java SDK (#Y).
- Support for X source added (Java/Python) (#X).
- X feature added (Java/Python) (#X).
- X behavior was changed (#X).
- X behavior is deprecated and will be removed in X versions (#X).
- Fixed X (Java/Python) (#X).
- Fixed [CVE-YYYY-NNNN](https://www.cve.org/CVERecord?id=CVE-YYYY-NNNN) (Java/Python/Go) (#X).
- Fixed [CVE-2024-47561](https://www.cve.org/CVERecord?id=CVE-2024-47561) (Java) by upgrading Avro version to 1.11.4
- (#X).
- New highly anticipated feature X added to Python SDK (#X).
- New highly anticipated feature Y added to Java SDK (#Y).
- [Python] Introduce Managed Transforms API (#31495)
- Flink 1.19 support added (#32648)
- Support for X source added (Java/Python) (#X).
- [Managed Iceberg] Support creating tables if needed (#32686)
- [Managed Iceberg] Now available in Python SDK (#31495)
- [Managed Iceberg] Add support for TIMESTAMP, TIME, and DATE types (#32688)
- BigQuery CDC writes are now available in Python SDK, only supported when using the Storage Write API at-least-once mode (#32527)
- [Managed Iceberg] Allow updating table partition specs during pipeline runtime (#32879)
- Added BigQueryIO as a Managed IO (#31486)
- Support for writing to Solace message queues (`SolaceIO.Write`) added (Java) (#31905).
- Added support for read with metadata in MqttIO (Java) (#32195)
- X feature added (Java/Python) (#X).
- Added support for processing events which use a global sequence to the "ordered" extension (Java) (#32540)
- Add new meta-transforms FlattenWith and Tee that allow one to introduce branching without breaking the linear/chaining style of pipeline construction.
- X behavior was changed (#X).
- Removed support for Flink 1.15 and 1.16
- Removed support for Python 3.8
- X behavior is deprecated and will be removed in X versions (#X).
- Fixed X (Java/Python) (#X).
- (Java) Fixed tearDown not invoked when DoFn throws on Portable Runners (#18592, #31381).
- (Java) Fixed protobuf error with MapState.remove() in Dataflow Streaming Java Legacy Runner without Streaming Engine (#32892).
- Adding flag to support conditionally disabling auto-commit in JdbcIO ReadFn (#31111)
- (Python) Fixed BigQuery Enrichment bug that can lead to multiple conditions returning duplicate rows, batching returning incorrect results and conditions not scoped by row during batching (#32780).
- Fixed [CVE-YYYY-NNNN](https://www.cve.org/CVERecord?id=CVE-YYYY-NNNN) (Java/Python/Go) (#X).
- (#X).
- Added support for using vLLM in the RunInference transform (Python) (#32528)
- [Managed Iceberg] Added support for streaming writes (#32451)
- [Managed Iceberg] Added auto-sharding for streaming writes (#32612)
- [Managed Iceberg] Added support for writing to dynamic destinations (#32565)
- PubsubIO can validate that the Pub/Sub topic exists before running the Read/Write pipeline (Java) (#32465)
- Dataflow worker can install packages from Google Artifact Registry Python repositories (Python) (#32123).
- Added support for Zstd codec in SerializableAvroCodecFactory (Java) (#32349)
- Added support for using vLLM in the RunInference transform (Python) (#32528)
- Prism release binaries and container bootloaders are now being built with the latest Go 1.23 patch. (#32575)
- Prism
  - Prism now supports Bundle Finalization. (#32425)
- Significantly improved performance of Kafka IO reads that enable commitOffsetsInFinalize by removing the data reshuffle from SDF implementation. (#31682).
- Added support for dynamic writing in MqttIO (Java) (#19376)
- Optimized Spark Runner parDo transform evaluator (Java) (#32537)
- [Managed Iceberg] More efficient manifest file writes/commits (#32666)
- In Python, assert_that now throws if it is not in a pipeline context instead of silently succeeding (#30771)
- In Python and YAML, ReadFromJson now overrides the dtype from None to an explicit False. Most notably, string values like `"123"` are preserved as strings rather than silently coerced (and possibly truncated) to numeric values. To retain the old behavior, pass `dtype=True` (or any other value accepted by `pandas.read_json`).
- Users of the KafkaIO Read transform that enable commitOffsetsInFinalize might encounter pipeline graph compatibility issues when updating the pipeline. To mitigate, set the `updateCompatibilityVersion` option to the SDK version used for the original pipeline, for example `--updateCompatibilityVersion=2.58.1`.
- Python 3.8 is reaching EOL and support is being removed in Beam 2.61.0. The 2.60.0 release will warn users when running on 3.8. (#31192)
- (Java) Fixed custom delimiter issues in TextIO (#32249, #32251).
- (Java, Python, Go) Fixed PeriodicSequence backlog bytes reporting, which was preventing Dataflow Runner autoscaling from functioning properly (#32506).
- (Java) Fix improper decoding of rows with schemas containing nullable fields when encoded with a schema with equal encoding positions but modified field order. (#32388).
- BigQuery Enrichment (Python): The following issues are present when using the BigQuery enrichment transform (#32780):
  - Duplicate Rows: Multiple conditions may be applied incorrectly, leading to the duplication of rows in the output.
  - Incorrect Results with Batched Requests: Conditions may not be correctly scoped to individual rows within the batch, potentially causing inaccurate results.
  - Fixed in 2.61.0.
- Added support for setting a configurable timeout when loading a model and performing inference in the RunInference transform using with_exception_handling (#32137)
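
  A minimal sketch of the pattern described above (the model path is hypothetical, and the exact keyword for the timeout on `with_exception_handling` is an assumption based on this entry, not a verified signature):

  ```python
  # Sketch: bound model loading/inference time via with_exception_handling.
  import numpy
  import apache_beam as beam
  from apache_beam.ml.inference.base import RunInference
  from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

  model_handler = SklearnModelHandlerNumpy(
      model_uri="gs://my-bucket/model.pkl")  # hypothetical path

  with beam.Pipeline() as p:
      predictions = (
          p
          | beam.Create([numpy.array([1.0, 2.0])])
          | RunInference(model_handler).with_exception_handling(timeout=60))
  ```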
- Initial experimental support for using Prism with the Java and Python SDKs
  - Prism is presently targeting local testing usage, or other small scale execution.
  - For Java, use 'PrismRunner' or 'TestPrismRunner' as an argument to the `--runner` flag.
  - For Python, use 'PrismRunner' as an argument to the `--runner` flag (see the sketch below).
  - Go already uses Prism as the default local runner.
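
  A minimal sketch of running a trivial pipeline on Prism from the Python SDK (the pipeline contents are illustrative):

  ```python
  # Sketch: select Prism via the --runner flag from Python.
  import apache_beam as beam
  from apache_beam.options.pipeline_options import PipelineOptions

  with beam.Pipeline(options=PipelineOptions(["--runner=PrismRunner"])) as p:
      _ = (p | beam.Create([1, 2, 3]) | beam.Map(lambda x: x * x) | beam.Map(print))
  ```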
- Improvements to the performance of BigqueryIO when using withPropagateSuccessfulStorageApiWrites(true) method (Java) (#31840).
- [Managed Iceberg] Added support for writing to partitioned tables (#32102)
- Update ClickHouseIO to use the latest version of the ClickHouse JDBC driver (#32228).
- Add ClickHouseIO dedicated User-Agent (#32252).
- BigQuery endpoint can be overridden via PipelineOptions, this enables BigQuery emulators (Java) (#28149).
- Go SDK Minimum Go Version updated to 1.21 (#32092).
- [BigQueryIO] Added support for withFormatRecordOnFailureFunction() for STORAGE_WRITE_API and STORAGE_API_AT_LEAST_ONCE methods (Java) (#31354).
- Updated Go protobuf package to new version (Go) (#21515).
- Added support for setting a configurable timeout when loading a model and performing inference in the RunInference transform using with_exception_handling (#32137)
- Adds OrderedListState support for Java SDK via FnApi.
- Initial support for using Prism from the Python and Java SDKs.
- Fixed incorrect service account impersonation flow for Python pipelines using BigQuery IOs (#32030).
- Auto-disable broken and meaningless `upload_graph` feature when using Dataflow Runner V2 (#32159).
- (Python) Upgraded google-cloud-storage to version 2.18.2 to fix a data corruption issue (#32135).
- (Go) Fix corruption on State API writes. (#32245).
- Prism is under active development and does not yet support all pipelines. See #29650 for progress.
  - In the 2.59.0 release, Prism passes most runner validation tests with the exceptions of pipelines using the following features: OrderedListState, OnWindowExpiry (e.g. GroupIntoBatches), CustomWindows, MergingWindowFns, Trigger and WindowingStrategy associated features, Bundle Finalization, Looping Timers, and some Coder-related issues such as with Python combiner packing, Java Schema transforms, and heterogeneous flatten coders. Processing Time timers do not yet have real time support.
  - If your pipeline is having difficulty with the Python or Java direct runners, but runs well on Prism, please let us know.
- Java file-based IOs that read or write a large number (100k+) of files could experience slowness and/or broken metrics visualization on the Dataflow UI (#32649).
- BigQuery Enrichment (Python): The following issues are present when using the BigQuery enrichment transform (#32780):
  - Duplicate Rows: Multiple conditions may be applied incorrectly, leading to the duplication of rows in the output.
  - Incorrect Results with Batched Requests: Conditions may not be correctly scoped to individual rows within the batch, potentially causing inaccurate results.
  - Fixed in 2.61.0.
- Fixed issue where KafkaIO records read with `ReadFromKafkaViaSDF` are redistributed and may contain duplicates regardless of the configuration. This affects Java pipelines with Dataflow v2 runner and xlang pipelines reading from Kafka (#32196).
- Large Dataflow graphs using runner v2, or pipelines explicitly enabling the `upload_graph` experiment, will fail at construction time (#32159).
- Python pipelines that run with 2.53.0-2.58.0 SDKs and read data from GCS might be affected by a data corruption issue (#32169). The issue will be fixed in 2.59.0 (#32135). To work around this, update the google-cloud-storage package to version 2.18.2 or newer.
- BigQuery Enrichment (Python): The following issues are present when using the BigQuery enrichment transform (#32780):
  - Duplicate Rows: Multiple conditions may be applied incorrectly, leading to the duplication of rows in the output.
  - Incorrect Results with Batched Requests: Conditions may not be correctly scoped to individual rows within the batch, potentially causing inaccurate results.
  - Fixed in 2.61.0.
- Multiple RunInference instances can now share the same model instance by setting the model_identifier parameter (Python) (#31665).
- Added options to control the number of Storage API multiplexing connections (#31721)
- [BigQueryIO] Better handling for batch Storage Write API when it hits AppendRows throughput quota (#31837)
- [IcebergIO] All specified catalog properties are passed through to the connector (#31726)
- Removed a 3rd party LGPL dependency from the Go SDK (#31765).
- Support for MapState and SetState when using Dataflow Runner v1 with Streaming Engine (Java) (#18200)
- [IcebergIO] IcebergCatalogConfig was changed to support specifying catalog properties in a key-store fashion (#31726)
- [SpannerIO] Added validation that query and table cannot be specified at the same time for SpannerIO.read(). Previously withQuery overrides withTable, if set (#24956).
- [BigQueryIO] Fixed a bug in batch Storage Write API that frequently exhausted concurrent connections quota (#31710)
- Fixed a logging issue where Python worker dependency installation logs sometimes were not emitted in a timely manner (#31977)
- Large Dataflow graphs using runner v2, or pipelines explicitly enabling the `upload_graph` experiment, will fail at construction time (#32159).
- Python pipelines that run with 2.53.0-2.58.0 SDKs and read data from GCS might be affected by a data corruption issue (#32169). The issue will be fixed in 2.59.0 (#32135). To work around this, update the google-cloud-storage package to version 2.18.2 or newer.
- [KafkaIO] Records read with `ReadFromKafkaViaSDF` are redistributed and may contain duplicates regardless of the configuration. This affects Java pipelines with Dataflow v2 runner and xlang pipelines reading from Kafka (#32196).
- BigQuery Enrichment (Python): The following issues are present when using the BigQuery enrichment transform (#32780):
  - Duplicate Rows: Multiple conditions may be applied incorrectly, leading to the duplication of rows in the output.
  - Incorrect Results with Batched Requests: Conditions may not be correctly scoped to individual rows within the batch, potentially causing inaccurate results.
  - Fixed in 2.61.0.
- Ensure that BigtableIO closes the reader streams (#31477).
- Added Feast feature store handler for enrichment transform (Python) (#30957).
- BigQuery per-worker metrics are reported by default for Streaming Dataflow Jobs (Java) (#31015)
- Adds `inMemory()` variant of Java List and Map side inputs for more efficient lookups when the entire side input fits into memory.
- Beam YAML now supports the jinja templating syntax. Template variables can be passed with the (json-formatted) `--jinja_variables` flag.
- DataFrame API now supports pandas 2.1.x and adds 12 more string functions for Series. (#31185)
- Added BigQuery handler for enrichment transform (Python) (#31295)
- Disable soft delete policy when creating the default bucket for a project (Java) (#31324).
- Added `DoFn.SetupContextParam` and `DoFn.BundleContextParam` which can be used as a python `DoFn.process`, `Map`, or `FlatMap` parameter to invoke a context manager per DoFn setup or bundle (analogous to using `setup`/`teardown` or `start_bundle`/`finish_bundle` respectively).
- Go SDK Prism Runner
  - Pre-built Prism binaries are now part of the release and are available via the Github release page. (#29697)
  - ProcessingTime is now handled synthetically with TestStream pipelines and Non-TestStream pipelines, for fast test pipeline execution by default. (#30083)
    - Prism does NOT yet support "real time" execution for this release.
- Improve processing for large elements to reduce the chances for exceeding 2GB protobuf limits (Python) (#31607).
- Java's View.asList() side inputs are now optimized for iterating rather than indexing when in the global window. This new implementation still supports all (immutable) List methods as before, but some of the random access methods like get() and size() will be slower. To use the old implementation one can use View.asList().withRandomAccess().
- SchemaTransforms implemented with TypedSchemaTransformProvider now produce a configuration Schema with snake_case naming convention (#31374). This will make the following cases problematic:
  - Running a pre-2.57.0 remote SDK pipeline containing a 2.57.0+ Java SchemaTransform, and vice versa:
  - Running a 2.57.0+ remote SDK pipeline containing a pre-2.57.0 Java SchemaTransform
  - All direct uses of Python's SchemaAwareExternalTransform should be updated to use new snake_case parameter names.
- Upgraded Jackson Databind to 2.15.4 (Java) (#26743). jackson-2.15 has known breaking changes. An important one is it imposed a buffer limit for parser. If your custom PTransform/DoFn are affected, refer to #31580 for mitigation.
- Large Dataflow graphs using runner v2, or pipelines explicitly enabling the `upload_graph` experiment, will fail at construction time (#32159).
- Python pipelines that run with 2.53.0-2.58.0 SDKs and read data from GCS might be affected by a data corruption issue (#32169). The issue will be fixed in 2.59.0 (#32135). To work around this, update the google-cloud-storage package to version 2.18.2 or newer.
- BigQuery Enrichment (Python): The following issues are present when using the BigQuery enrichment transform (#32780):
  - Duplicate Rows: Multiple conditions may be applied incorrectly, leading to the duplication of rows in the output.
  - Incorrect Results with Batched Requests: Conditions may not be correctly scoped to individual rows within the batch, potentially causing inaccurate results.
  - Fixed in 2.61.0.
- Added FlinkRunner for Flink 1.17, removed support for Flink 1.12 and 1.13. Pipelines running on Flink 1.16 and below with a previous Beam version can be upgraded to 1.17 if the pipeline is first updated to Beam 2.56.0 with the same Flink version. After the pipeline runs with Beam 2.56.0, it should be possible to upgrade to the FlinkRunner with Flink 1.17. (#29939)
- New Managed I/O Java API (#30830).
- New Ordered Processing PTransform added for processing order-sensitive stateful data (#30735).
- Upgraded Avro version to 1.11.3, kafka-avro-serializer and kafka-schema-registry-client versions to 7.6.0 (Java) (#30638). The newer Avro package is known to have breaking changes. If you are affected, you can keep pinned to older Avro versions which are also tested with Beam.
- Iceberg read/write support is available through the new Managed I/O Java API (#30830).
- Added ability to control the exact number of models loaded across processes by RunInference. This may be useful for pipelines with tight memory constraints (#31052)
- Profiling of Cythonized code has been disabled by default. This might improve performance for some Python pipelines (#30938).
- Bigtable enrichment handler now accepts a custom function to build a composite row key. (Python) (#30974).
- Default consumer polling timeout for KafkaIO.Read was increased from 1 second to 2 seconds. Use KafkaIO.read().withConsumerPollingTimeout(Duration duration) to configure this timeout value when necessary (#30870).
- Python Dataflow users no longer need to manually specify --streaming for pipelines using unbounded sources such as ReadFromPubSub.
- Fixed locking issue when shutting down inactive bundle processors. Symptoms of this issue include slowness or stuckness in long-running jobs (Python) (#30679).
- Fixed a logging issue that silenced pip output when installing dependencies provided in `--requirements_file` (Python).
- Fixed a pipeline stuckness issue by disallowing versions of grpcio that can cause the stuckness (Python) (#30867).
- The Beam interactive runner does not correctly run on Flink (#31168).
- When using the Flink runner from Python, 1.17 is not supported and 1.12/13 do not work correctly. Support for 1.17 will be added in 2.57.0, and the ability to choose 1.12/13 will be cleaned up and fully removed in 2.57.0 as well (#31168).
- Large Dataflow graphs using runner v2, or pipelines explicitly enabling the `upload_graph` experiment, will fail at construction time (#32159).
- Python pipelines that run with 2.53.0-2.58.0 SDKs and read data from GCS might be affected by a data corruption issue (#32169). The issue will be fixed in 2.59.0 (#32135). To work around this, update the google-cloud-storage package to version 2.18.2 or newer.
- Fixed issue that broke WriteToJson in languages other than Java (X-lang) (#30776).
- The Python SDK will now include automatically generated wrappers for external Java transforms! (#29834)
- Added support for handling bad records to BigQueryIO (#30081).
  - Full Support for Storage Read and Write APIs
  - Partial Support for File Loads (Failures writing to files supported, failures loading files to BQ unsupported)
  - No Support for Extract or Streaming Inserts
- Added support for handling bad records to PubSubIO (#30372).
  - Support is not available for handling schema mismatches, and enabling error handling for writing to Pub/Sub topics with schemas is not recommended.
- The `--enableBundling` pipeline option for BigQueryIO DIRECT_READ is replaced by `--enableStorageReadApiV2`. Both were considered experimental and may be subject to change (Java) (#26354).
- Allow writing clustered and not time partitioned BigQuery tables (Java) (#30094).
- Redis cache support added to RequestResponseIO and Enrichment transform (Python) (#30307)
- Merged sdks/java/fn-execution and runners/core-construction-java into the main SDK. These artifacts were never meant for users, but noting that they no longer exist. These are steps to bring portability into the core SDK alongside all other core functionality.
- Added Vertex AI Feature Store handler for Enrichment transform (Python) (#30388)
- Arrow version was bumped to 15.0.0 from 5.0.0 (#30181).
- Go SDK users who build custom worker containers may run into issues with the move to distroless containers as a base (see Security Fixes).
- The issue stems from distroless containers lacking additional tools, which current custom container processes may rely on.
- See https://beam.apache.org/documentation/runtime/environments/#from-scratch-go for instructions on building and using a custom container.
- Python SDK has changed the default value for the `--max_cache_memory_usage_mb` pipeline option from 100 to 0. This option was first introduced in the 2.52.0 SDK. This change restores the behavior of the 2.51.0 SDK, which does not use the state cache. If your pipeline uses iterable side input views, consider increasing the cache size by setting the option manually. (#30360)
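
  A minimal sketch of restoring the previous cache size explicitly (the value shown is the old 100 MB default; adjust to your pipeline):

  ```python
  # Sketch: re-enable the state/side-input cache that now defaults to 0 MB.
  from apache_beam.options.pipeline_options import PipelineOptions

  options = PipelineOptions(["--max_cache_memory_usage_mb=100"])
  # Pass `options` to beam.Pipeline(options=options) as usual.
  ```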
- Fixed SpannerIO.readChangeStream to support propagating credentials from pipeline options to the getDialect calls for authenticating with Spanner (Java) (#30361).
- Reduced the number of HTTP requests in GCSIO function calls (Python) (#30205)
- Go SDK base container image moved to distroless/base-nossl-debian12, reducing vulnerable container surface to kernel and glibc (#30011).
- In Python pipelines, when shutting down inactive bundle processors, shutdown logic can overaggressively hold the lock, blocking acceptance of new work. Symptoms of this issue include slowness or stuckness in long-running jobs. Fixed in 2.56.0 (#30679).
- WriteToJson broken in languages other than Java (X-lang) (#30776).
- Python pipelines might occasionally become stuck due to a regression in grpcio (#30867). The issue manifests frequently with Bigtable IO connector, but might also affect other GCP connectors. Fixed in 2.56.0.
- Python pipelines that run with 2.53.0-2.58.0 SDKs and read data from GCS might be affected by a data corruption issue (#32169). The issue will be fixed in 2.59.0 (#32135). To work around this, update the google-cloud-storage package to version 2.18.2 or newer.
- Enrichment Transform along with GCP BigTable handler added to Python SDK (#30001).
- Beam Java Batch pipelines run on Google Cloud Dataflow will default to the Portable [Runner V2](https://cloud.google.com/dataflow/docs/runner-v2) starting with this version. (All other languages are already on Runner V2.)
  - This change is still rolling out to the Dataflow service; see the [Runner V2 documentation](https://cloud.google.com/dataflow/docs/runner-v2) for how to enable or disable it intentionally.
- Added support for writing to BigQuery dynamic destinations with Python's Storage Write API (#30045)
- Adding support for Tuples DataType in ClickHouse (Java) (#29715).
- Added support for handling bad records to FileIO, TextIO, AvroIO (#29670).
- Added support for handling bad records to BigtableIO (#29885).
- Enrichment Transform along with GCP BigTable handler added to Python SDK (#30001).
- N/A
- N/A
- Fixed a memory leak affecting some Go SDK since 2.46.0. (#28142)
- N/A
- Some Python pipelines that run with 2.52.0-2.54.0 SDKs and use large materialized side inputs might be affected by a performance regression. To restore the prior behavior on these SDK versions, supply the `--max_cache_memory_usage_mb=0` pipeline option. (#30360)
- Python pipelines that run with 2.53.0-2.54.0 SDKs and perform file operations on GCS might be affected by excess HTTP requests. This could lead to a performance regression or a permission issue. (#28398)
- In Python pipelines, when shutting down inactive bundle processors, shutdown logic can overaggressively hold the lock, blocking acceptance of new work. Symptoms of this issue include slowness or stuckness in long-running jobs. Fixed in 2.56.0 (#30679).
- Python pipelines that run with 2.53.0-2.58.0 SDKs and read data from GCS might be affected by a data corruption issue (#32169). The issue will be fixed in 2.59.0 (#32135). To work around this, update the google-cloud-storage package to version 2.18.2 or newer.
- Python streaming users that use 2.47.0 and newer versions of Beam should update to version 2.53.0, which fixes a known issue: (#27330).
- TextIO now supports skipping multiple header lines (Java) (#17990).
- Python GCSIO is now implemented with GCP GCS Client instead of apitools (#25676)
- Added support for handling bad records to KafkaIO (Java) (#29546)
- Add support for generating text embeddings in MLTransform for Vertex AI and Hugging Face Hub models.(#29564)
- NATS IO connector added (Go) (#29000).
- Adding support for LowCardinality (Java) (#29533).
- The Python SDK now type checks `collections.abc.Collections` types properly. Some type hints that were erroneously allowed by the SDK may now fail. (#29272)
- Running multi-language pipelines locally no longer requires Docker. Instead, the same (generally auto-started) subprocess used to perform the expansion can also be used as the cross-language worker.
- Framework for adding Error Handlers to composite transforms added in Java (#29164).
- Python 3.11 images now include google-cloud-profiler (#29561).
- Euphoria DSL is deprecated and will be removed in a future release (not before 2.56.0) (#29451)
- (Python) Fixed sporadic crashes in streaming pipelines that affected some users of 2.47.0 and newer SDKs (#27330).
- (Python) Fixed a bug that caused MLTransform to drop identical elements in the output PCollection (#29600).
- Upgraded to go 1.21.5 to build, fixing CVE-2023-45285 and CVE-2023-39326
- Potential race condition causing NPE in DataflowExecutionStateSampler in Dataflow Java Streaming pipelines (#29987).
- Some Python pipelines that run with 2.52.0-2.54.0 SDKs and use large materialized side inputs might be affected by a performance regression. To restore the prior behavior on these SDK versions, supply the `--max_cache_memory_usage_mb=0` pipeline option. (#30360)
- Python pipelines that run with 2.53.0-2.54.0 SDKs and perform file operations on GCS might be affected by excess HTTP requests. This could lead to a performance regression or a permission issue. (#28398)
- In Python pipelines, when shutting down inactive bundle processors, shutdown logic can overaggressively hold the lock, blocking acceptance of new work. Symptoms of this issue include slowness or stuckness in long-running jobs. Fixed in 2.56.0 (#30679).
- Python pipelines that run with 2.53.0-2.58.0 SDKs and read data from GCS might be affected by a data corruption issue (#32169). The issue will be fixed in 2.59.0 (#32135). To work around this, update the google-cloud-storage package to version 2.18.2 or newer.
- Previously deprecated Avro-dependent code (Beam Release 2.46.0) has finally been removed from the Java SDK "core" package. Please use `beam-sdks-java-extensions-avro` instead. This allows easily updating the Avro version in user code without potential breaking changes in Beam "core", since the Beam Avro extension already supports the latest Avro versions and should handle this. (#25252)
- Publishing Java 21 SDK container images is now supported as part of the Apache Beam release process. (#28120)
  - Direct Runner and Dataflow Runner support running pipelines on Java 21 (experimental until tests are fully set up). For other runners (Flink, Spark, Samza, etc.) support status depends on the runner projects.
- Add `UseDataStreamForBatch` pipeline option to the Flink runner. When it is set to true, the Flink runner will run batch jobs using the DataStream API. By default the option is set to false, so batch jobs are still executed using the DataSet API.
- `upload_graph` as one of the Experiments options for DataflowRunner is no longer required when the graph is larger than 10MB for Java SDK (PR#28621).
- Introduced a pipeline option `--max_cache_memory_usage_mb` to configure state and side input cache size. The cache has been enabled to a default of 100 MB. Use `--max_cache_memory_usage_mb=X` to provide cache size for the user state API and side inputs. (#28770)
- Beam YAML stable release. Beam pipelines can now be written using YAML and leverage the Beam YAML framework which includes a preliminary set of IO's and turnkey transforms. More information can be found in the YAML root folder and in the README.
- `org.apache.beam.sdk.io.CountingSource.CounterMark` uses a custom `CounterMarkCoder` as the default coder since all Avro-dependent classes finally moved to `extensions/avro`. If it is still required to use `AvroCoder` for `CounterMark`, then, as a workaround, a copy of the "old" `CountingSource` class should be placed into project code and used directly (#25252).
- Renamed `host` to `firestoreHost` in `FirestoreOptions` to avoid a potential conflict of command line arguments (Java) (#29201).
- Transforms which use `SnappyCoder` are update-incompatible with previous versions of the same transform (Java) on some runners. This includes PubSubIO's read (#28655).
- Fixed "Desired bundle size 0 bytes must be greater than 0" in Java SDK's BigtableIO.BigtableSource when you have more cores than bytes to read (Java) #28793.
- The `watch_file_pattern` arg of RunInference had no effect prior to 2.52.0. To use the behavior of the `watch_file_pattern` arg prior to 2.52.0, follow the documentation at https://beam.apache.org/documentation/ml/side-input-updates/ and use the `WatchFilePattern` PTransform as a SideInput. (#28948)
- `MLTransform` doesn't output artifacts such as min, max and quantiles. Instead, `MLTransform` will add a feature to output these artifacts in a human readable format (#29017). For now, to use artifacts such as min and max that were produced by an earlier `MLTransform`, use `read_artifact_location` of `MLTransform`, which reads artifacts that were produced earlier in a different `MLTransform` (#29016).
- Fixed a memory leak, which affected some long-running Python pipelines: #28246.
- Fixed CVE-2023-39325 (Java/Python/Go) (#29118).
- Mitigated CVE-2023-47248 (Python) #29392.
- MLTransform drops the identical elements in the output PCollection. For any duplicate elements, a single element will be emitted downstream. (#29600).
- Some Python pipelines that run with 2.52.0-2.54.0 SDKs and use large materialized side inputs might be affected by a performance regression. To restore the prior behavior on these SDK versions, supply the `--max_cache_memory_usage_mb=0` pipeline option. (Python) (#30360)
- Users who launch Python pipelines in an environment without internet access and use the `--setup_file` pipeline option might experience an increase in pipeline submission time. This has been fixed in 2.56.0 (#31070).
- Transforms which use `SnappyCoder` are update-incompatible with previous versions of the same transform (Java) on some runners. This includes PubSubIO's read (#28655).
- In Python, RunInference now supports loading many models in the same transform using a KeyedModelHandler (#27628).
- In Python, the VertexAIModelHandlerJSON now supports passing in inference_args. These will be passed through to the Vertex endpoint as parameters.
- Added support to run `mypy` on user pipelines (#27906)
- Python SDK worker start-up logs and crash logs are now captured by a buffer and logged at appropriate levels via the Beam logging API. Dataflow Runner users might observe that most `worker-startup` log content is now captured by the `worker` logger. Users who relied on `print()` statements for logging might notice that some logs don't flush before the pipeline succeeds - we strongly advise using the `logging` package instead of `print()` statements for logging. (#28317)
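
  A minimal sketch of the recommended pattern (using `logging` instead of `print()` inside a DoFn so messages flow through the Beam logging API):

  ```python
  # Sketch: log from a DoFn via the logging package rather than print().
  import logging
  import apache_beam as beam

  class LogElements(beam.DoFn):
      def process(self, element):
          logging.info("processing element: %s", element)
          yield element
  ```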
- Removed fastjson library dependency for Beam SQL. Table property is changed to be based on jackson ObjectNode (Java) (#24154).
- Removed TensorFlow from Beam Python container images PR. If you have been negatively affected by this change, please comment on #20605.
- Removed the parameter `t reflect.Type` from `parquetio.Write`. The element type is derived from the input PCollection (Go) (#28490)
- Refactor BeamSqlSeekableTable.setUp adding a parameter joinSubsetType. (#28283)
- Fixed exception chaining issue in GCS connector (Python) (#26769).
- Fixed streaming inserts exception handling, GoogleAPICallErrors are now retried according to retry strategy and routed to failed rows where appropriate rather than causing a pipeline error (Python) (#21080).
- Fixed a bug in Python SDK's cross-language Bigtable sink that mishandled records that don't have an explicit timestamp set: #28632.
- Python containers updated, fixing CVE-2021-30474, CVE-2021-30475, CVE-2021-30473, CVE-2020-36133, CVE-2020-36131, CVE-2020-36130, and CVE-2020-36135
- Used go 1.21.1 to build, fixing CVE-2023-39320
- Long-running Python pipelines might experience a memory leak: #28246.
- Python pipelines using BigQuery Storage Read API might need to pin the `fastavro` dependency to 1.8.3 or earlier on some runners that don't use Beam Docker containers: #28811
- MLTransform drops the identical elements in the output PCollection. For any duplicate elements, a single element will be emitted downstream. (#29600).
- Spark 3.2.2 is used as default version for Spark runner (#23804).
- The Go SDK has a new default local runner, called Prism (#24789).
- All Beam released container images are now multi-arch images that support both x86 and ARM CPU architectures.
- Java KafkaIO now supports picking up topics via topicPattern (#26948)
- Support for read from Cosmos DB Core SQL API (#23604)
- Upgraded to HBase 2.5.5 for HBaseIO. (Java) (#27711)
- Added support for GoogleAdsIO source (Java) (#27681).
- The Go SDK now requires Go 1.20 to build. (#27558)
- The Go SDK has a new default local runner, Prism. (#24789).
  - Prism is a portable runner that executes each transform independently, ensuring coders.
  - At this point it supersedes the Go direct runner in functionality. The Go direct runner is now deprecated.
  - See https://github.com/apache/beam/blob/master/sdks/go/pkg/beam/runners/prism/README.md for the goals and features of Prism.
- Hugging Face Model Handler for RunInference added to Python SDK. (#26632)
- Hugging Face Pipelines support for RunInference added to Python SDK. (#27399)
- Vertex AI Model Handler for RunInference now supports private endpoints (#27696)
- MLTransform transform added with support for common ML pre/postprocessing operations (#26795)
- Upgraded the Kryo extension for the Java SDK to Kryo 5.5.0. This brings in bug fixes, performance improvements, and serialization of Java 14 records. (#27635)
- All Beam released container images are now multi-arch images that support both x86 and ARM CPU architectures. (#27674). The multi-arch container images include:
- All versions of Go, Python, Java and Typescript SDK containers.
- All versions of Flink job server containers.
- Java and Python expansion service containers.
- Transform service controller container.
- Spark3 job server container.
- Added support for batched writes to AWS SQS for improved throughput (Java, AWS 2).(#21429)
- Python SDK: Legacy runner support removed from Dataflow, all pipelines must use runner v2.
- Python SDK: Dataflow Runner will no longer stage the Beam SDK from PyPI in the `--staging_location` at pipeline submission. Custom container images that are not based on Beam's default image must include an Apache Beam installation. (#26996)
- The Go Direct Runner is now Deprecated. It remains available to reduce migration churn.
  - Tests can be set back to the direct runner by overriding TestMain: `func TestMain(m *testing.M) { ptest.MainWithDefault(m, "direct") }`
  - It's recommended to fix issues seen in tests using Prism, as they can also happen on any portable runner.
  - Use the generic register package for your pipeline DoFns to ensure pipelines function on portable runners, like prism.
  - Do not rely on closures or using package globals for DoFn configuration. They don't function on portable runners.
- Fixed DirectRunner bug in Python SDK where GroupByKey gets an empty PCollection and fails when pipeline option `direct_num_workers!=1`. (#27373)
- Fixed BigQuery I/O bug when estimating size on queries that utilize row-level security (#27474)
- Long-running Python pipelines might experience a memory leak: #28246.
- Python pipelines using BigQuery IO or the `orjson` dependency might experience segmentation faults or get stuck: #28318.
- Beam Python containers rely on a version of Debian/aom that has several security vulnerabilities: CVE-2021-30474, CVE-2021-30475, CVE-2021-30473, CVE-2020-36133, CVE-2020-36131, CVE-2020-36130, and CVE-2020-36135
- Python SDK's cross-language Bigtable sink mishandles records that don't have an explicit timestamp set: #28632. To avoid this issue, set explicit timestamps for all records before writing to Bigtable.
- Python SDK worker start-up logs, particularly PIP dependency installations, that are not logged at warning or higher are suppressed. This suppression is reverted in 2.51.0.
- MLTransform drops the identical elements in the output PCollection. For any duplicate elements, a single element will be emitted downstream. (#29600).
- Support for Bigtable Change Streams added in Java `BigtableIO.ReadChangeStream` (#27183)
- Allow prebuilding large images when using `--prebuild_sdk_container_engine=cloud_build`, like images depending on `tensorflow` or `torch` (#27023).
- Disabled `pip` cache when installing packages on the workers. This reduces the size of prebuilt Python container images (#27035).
- Select dedicated avro datum reader and writer (Java) (#18874).
- Timer API for the Go SDK (Go) (#22737).
- Removed Python 3.7 support. (#26447)
- Fixed KinesisIO `NullPointerException` when a progress check is made before the reader is started (IO) (#23868)
- Long-running Python pipelines might experience a memory leak: #28246.
- Python pipelines using the `--impersonate_service_account` option with BigQuery IOs might fail on Dataflow (#32030). This is fixed in the 2.59.0 release.
- "Experimental" annotation cleanup: the annotation and concept have been removed from Beam to avoid the misperception of code as "not ready". Any proposed breaking changes will be subject to case-by-case pro/con decision making (and generally avoided) rather than using the "Experimental" to allow them.
- Added rename for GCS and copy for local filesystem (Go) (#25779).
- Added support for enhanced fan-out in KinesisIO.Read (Java) (#19967).
- This change is not compatible with Flink savepoints created by Beam 2.46.0 applications which had KinesisIO sources.
- Added textio.ReadWithFilename transform (Go) (#25812).
- Added fileio.MatchContinuously transform (Go) (#26186).
- Allow passing service name for google-cloud-profiler (Python) (#26280).
- Dead letter queue support added to RunInference in Python (#24209).
- Support added for defining pre/postprocessing operations on the RunInference transform (#26308)
- Adds a Docker Compose based transform service that can be used to discover and use portable Beam transforms (#26023).
- Passing a tag into MultiProcessShared is now required in the Python SDK (#26168).
- CloudDebuggerOptions is removed (deprecated in Beam v2.47.0) for Dataflow runner as the Google Cloud Debugger service is shutting down. (Java) (#25959).
- AWS 2 client providers (deprecated in Beam v2.38.0) are finally removed (#26681).
- AWS 2 SnsIO.writeAsync (deprecated in Beam v2.37.0 due to risk of data loss) was finally removed (#26710).
- AWS 2 coders (deprecated in Beam v2.43.0 when adding Schema support for AWS Sdk Pojos) are finally removed (#23315).
- Fixed Java bootloader failing with Too Long Args due to long classpaths, with a pathing jar. (Java) (#25582).
- PubsubIO writes will throw SizeLimitExceededException for any message above 100 bytes, when used in batch (bounded) mode. (Java) (#27000).
- Long-running Python pipelines might experience a memory leak: #28246.
- Python SDK's cross-language Bigtable sink mishandles records that don't have an explicit timestamp set: #28632. To avoid this issue, set explicit timestamps for all records before writing to Bigtable.
- Apache Beam adds Python 3.11 support (#23848).
- BigQuery Storage Write API is now available in Python SDK via cross-language (#21961).
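
  A minimal sketch of opting into the Storage Write API from Python (the table name and schema are hypothetical):

  ```python
  # Sketch: WriteToBigQuery using the cross-language Storage Write API method.
  import apache_beam as beam
  from apache_beam.io.gcp.bigquery import WriteToBigQuery

  with beam.Pipeline() as p:
      _ = (
          p
          | beam.Create([{"name": "a", "score": 1}])
          | WriteToBigQuery(
              table="my-project:my_dataset.my_table",  # hypothetical table
              schema="name:STRING,score:INTEGER",
              method=WriteToBigQuery.Method.STORAGE_WRITE_API))
  ```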
- Added HbaseIO support for writing RowMutations (ordered by rowkey) to Hbase (Java) (#25830).
- Added fileio transforms MatchFiles, MatchAll and ReadMatches (Go) (#25779).
- Add integration test for JmsIO + fix issue with multiple connections (Java) (#25887).
- The Flink runner now supports Flink 1.16.x (#25046).
- Schema'd PTransforms can now be directly applied to Beam dataframes just like PCollections. (Note that when doing multiple operations, it may be more efficient to explicitly chain the operations like `df | (Transform1 | Transform2 | ...)` to avoid excessive conversions.)
- The Go SDK adds new transforms periodic.Impulse and periodic.Sequence that extend support for slowly updating side input patterns. (#23106)
- Several Google client libraries in Python SDK dependency chain were updated to latest available major versions. (#24599)
- If a main session fails to load, the pipeline will now fail at worker startup. (#25401).
- Python pipeline options will now ignore unparsed command line flags prefixed with a single dash. (#25943).
- The SmallestPerKey combiner now requires keyword-only arguments for specifying optional parameters, such as `key` and `reverse`. (#25888)
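
  A minimal sketch of the new calling convention (the key function is illustrative):

  ```python
  # Sketch: optional parameters such as key must now be passed by keyword.
  import apache_beam as beam
  from apache_beam.transforms import combiners

  with beam.Pipeline() as p:
      _ = (
          p
          | beam.Create([("a", 3), ("a", 1), ("b", 2)])
          | combiners.Top.SmallestPerKey(1, key=lambda v: v)
          | beam.Map(print))
  ```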
- Cloud Debugger support and its pipeline options are deprecated and will be removed in the next Beam version, in response to the Google Cloud Debugger service turning down. (Java) (#25959).
- BigQuery sink in STORAGE_WRITE_API mode in batch pipelines could result in data consistency issues during the handling of other unrelated transient errors for Beam SDKs 2.35.0 - 2.46.0 (inclusive). For more details see #26521.
- The google-cloud-profiler dependency was accidentally removed from Beam's Python Docker Image #26998. Dataflow Docker images still preinstall this dependency.
- Long-running Python pipelines might experience a memory leak: #28246.
- Java SDK containers migrated to Eclipse Temurin as a base. This change migrates away from the deprecated OpenJDK container. Eclipse Temurin is currently based upon Ubuntu 22.04 while the OpenJDK container was based upon Debian 11.
- RunInference PTransform will accept model paths as SideInputs in Python SDK. (#24042)
- RunInference supports ONNX runtime in Python SDK (#22972)
- Tensorflow Model Handler for RunInference in Python SDK (#25366)
- Java SDK modules migrated to use `:sdks:java:extensions:avro` (#24748)
- Added in JmsIO a retry policy for failed publications (Java) (#24971).
- Support for `LZMA` compression/decompression of text files added to the Python SDK (#25316)
- Added ReadFrom/WriteTo Csv/Json as top-level transforms to the Python SDK.
- Add UDF metrics support for Samza portable mode.
- Option for SparkRunner to avoid the need of SDF output to fit in memory (#23852). This helps e.g. with ParquetIO reads. Turn the feature on by adding experiment `use_bounded_concurrent_output_for_sdf`.
- Add `WatchFilePattern` transform, which can be used as a side input to the RunInference PTransform to watch for model updates using a file pattern. (#24042)
- Add support for loading TorchScript models with `PytorchModelHandler`. The TorchScript model path can be passed to PytorchModelHandler using `torch_script_model_path=<path_to_model>`. (#25321)
- The Go SDK now requires Go 1.19 to build. (#25545)
- The Go SDK now has an initial native Go implementation of a portable Beam Runner called Prism. (#24789)
- For more details and current state see https://github.com/apache/beam/tree/master/sdks/go/pkg/beam/runners/prism.
- The deprecated SparkRunner for Spark 2 (see 2.41.0) was removed (#25263).
- Python's BatchElements performs more aggressive batching in some cases, capping at 10 second rather than 1 second batches by default and excluding fixed cost in this computation to better handle cases where the fixed cost is larger than a single second. To get the old behavior, one can pass `target_batch_duration_secs_including_fixed_cost=1` to BatchElements.
- Dataflow runner enables sibling SDK protocol for Python pipelines using custom containers on Beam 2.46.0 and newer SDKs. If your Python pipeline starts to stall after you switch to 2.46.0 and you use a custom container, please verify that your custom container does not include artifacts from older Beam SDK releases. In particular, check in your `Dockerfile` that the Beam container entrypoint and/or Beam base image version match the Beam SDK version used at job submission.
- Avro related classes are deprecated in module `beam-sdks-java-core` and will eventually be removed. Please migrate to the new module `beam-sdks-java-extensions-avro` instead by importing the classes from the `org.apache.beam.sdk.extensions.avro` package. For the sake of migration simplicity, the relative package path and the whole class hierarchy of Avro related classes in the new module are preserved the same as before. For example, import the `org.apache.beam.sdk.extensions.avro.coders.AvroCoder` class instead of `org.apache.beam.sdk.coders.AvroCoder`. (#24749)
- RunInference Wrapper with Sklearn Model Handler support added in Go SDK (#24497).
- Adding override of allowed TLS algorithms (Java), now maintaining the disabled/legacy algorithms present in 2.43.0 (up to 1.8.0_342, 11.0.16, 17.0.2 for respective Java versions). This is accompanied by an explicit re-enabling of TLSv1 and TLSv1.1 for Java 8 and Java 11.
- Add UDF metrics support for Samza portable mode.
- Portable Java pipelines, Go pipelines, Python streaming pipelines, and portable Python batch pipelines on Dataflow are required to use Runner V2. The `disable_runner_v2`, `disable_runner_v2_until_2023`, and `disable_prime_runner_v2` experiments will raise an error during pipeline construction. You can no longer specify the Dataflow worker jar override. Note that non-portable Java jobs and non-portable Python batch jobs are not impacted. (#24515)
- Beam now requires `pyarrow>=3` and `pandas>=1.4.3` since older versions are not compatible with `numpy==1.24.0`.
- Avoids Cassandra syntax error when user-defined query has no where clause in it (Java) (#24829).
- Fixed JDBC connection failures (Java) during handshake due to deprecated TLSv1(.1) protocol for the JDK. (#24623)
- Fixed an issue where Python BigQuery Batch Load writes may truncate valid data when the disposition is set to WRITE_TRUNCATE and the incoming data is large (Python) (#24623).
- Support for Bigtable sink (Write and WriteBatch) added (Go) (#23324).
- S3 implementation of the Beam filesystem (Go) (#23991).
- Support for SingleStoreDB source and sink added (Java) (#22617).
- Added support for DefaultAzureCredential authentication in Azure Filesystem (Python) (#24210).
- Beam now provides a portable "runner" that can render pipeline graphs with graphviz. See `python -m apache_beam.runners.render --help` for more details.
- Local packages can now be used as dependencies in the requirements.txt file, rather than requiring them to be passed separately via the `--extra_package` option (Python) (#23684).
- Pipeline Resource Hints now supported via `--resource_hints` flag (Go) (#23990).
- Make Python SDK containers reusable on portable runners by installing dependencies to temporary venvs (BEAM-12792, #16658).
- RunInference model handlers now support the specification of a custom inference function in Python (#22572).
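
  A minimal sketch of supplying a custom inference function (the handler, model path, and the exact signature expected for the function are assumptions for illustration; consult the model handler documentation for the precise contract):

  ```python
  # Sketch: override the default predict() call with a custom inference function.
  from apache_beam.ml.inference.base import PredictionResult
  from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

  def predict_proba_fn(model, batch, inference_args, model_id=None):
      # Call predict_proba instead of the default predict.
      probabilities = model.predict_proba(batch)
      return [PredictionResult(x, y) for x, y in zip(batch, probabilities)]

  model_handler = SklearnModelHandlerNumpy(
      model_uri="gs://my-bucket/model.pkl",  # hypothetical path
      inference_fn=predict_proba_fn)
  ```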
- Support for `map_windows` urn added to Go SDK (#24307).
- `ParquetIO.withSplit` was removed since splittable reading has been the default behavior since 2.35.0. The effect of this change is to drop support for non-splittable reading (Java) (#23832).
- `beam-sdks-java-extensions-google-cloud-platform-core` is no longer a dependency of the Java SDK Harness. Some users of a portable runner (such as Dataflow Runner v2) may have an undeclared dependency on this package (for example using GCS with TextIO) and will now need to declare the dependency.
- `beam-sdks-java-core` is no longer a dependency of the Java SDK Harness. Users of a portable runner (such as Dataflow Runner v2) will need to provide this package and its dependencies.
- Slices now use the Beam Iterable Coder. This enables cross language use, but breaks pipeline updates if a Slice type is used as a PCollection element or State API element. (Go) (#24339)
- If you activated a virtual environment in your custom container image, this environment might no longer be activated, since a new environment will be created (see the note about BEAM-12792 above). To work around this, install dependencies into the default (global) python environment. When using poetry you may need to use `poetry config virtualenvs.create false` before installing deps; see an example in #25085. If you were negatively impacted by this change and cannot find a workaround, feel free to chime in on #16658. To disable this behavior, you could upgrade to Beam 2.48.0 and set an environment variable `ENV RUN_PYTHON_SDK_IN_DEFAULT_ENVIRONMENT=1` in your Dockerfile.
- X behavior is deprecated and will be removed in X versions (#X).
- Fixed X (Java/Python) (#X).
- Fixed JmsIO acknowledgment issue (Java) (#20814)
- Fixed Beam SQL CalciteUtils (Java) and Cross-language JdbcIO (Python) did not support JDBC CHAR/VARCHAR, BINARY/VARBINARY logical types (#23747, #23526).
- Ensure iterated and emitted types are used with the generic register package are registered with the type and schema registries.(Go) (#23889)
- Python 3.10 support in Apache Beam (#21458).
- An initial implementation of a runner that allows us to run Beam pipelines on Dask. Try it out and give us feedback! (Python) (#18962).
- Decreased TextSource CPU utilization by 2.3x (Java) (#23193).
- Fixed bug when using SpannerIO with RuntimeValueProvider options (Java) (#22146).
- Fixed issue for unicode rendering on WriteToBigQuery (#22312)
- Remove obsolete variants of BigQuery Read and Write, always using Beam-native variant (#23564 and #23559).
- Bumped google-cloud-spanner dependency version to 3.x for Python SDK (#21198).
- Dataframe wrapper added in Go SDK via Cross-Language (with automatic expansion service). (Go) (#23384).
- Name all Java threads to aid in debugging (#23049).
- An initial implementation of a runner that allows us to run Beam pipelines on Dask. (Python) (#18962).
- Allow configuring GCP OAuth scopes via pipeline options. This unblocks usages of Beam IOs that require additional scopes. For example, this feature makes it possible to access Google Drive backed tables in BigQuery (#23290).
- An example for using Python RunInference from Java (#23290).
- Data can now be read from BigQuery and directly plumbed into a DeferredDataframe in the Dataframe API. Users no longer have to re-specify the schema in this case (#22907).
- CoGroupByKey transform in Python SDK has changed the output typehint. The typehint component representing grouped values changed from List to Iterable, which more accurately reflects the nature of the arbitrarily large output collection. #21556 Beam users may see an error on transforms downstream from CoGroupByKey. Users must change methods expecting a List to expect an Iterable going forward. See document for information and fixes.
- The PortableRunner for Spark assumes Spark 3 as default Spark major version unless configured otherwise using `--spark_version`. Spark 2 support is deprecated and will be removed soon (#23728).
- Fixed Python cross-language JDBC IO Connector cannot read or write rows containing Numeric/Decimal type values (#19817).
- Added support for stateful DoFns to the Go SDK.
- Added support for Batched DoFns to the Python SDK.
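
  A minimal sketch of a Batched DoFn (the NumPy element type is illustrative; Beam infers batch handling from the `process_batch` type hints):

  ```python
  # Sketch: process_batch receives a whole batch (a NumPy array) per call.
  from typing import Iterator

  import numpy as np
  import apache_beam as beam

  class MultiplyByTwo(beam.DoFn):
      def process_batch(self, batch: np.ndarray, *args, **kwargs) -> Iterator[np.ndarray]:
          yield batch * 2

      def infer_output_type(self, input_element_type):
          # The element-wise output type matches the input type here.
          return input_element_type
  ```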
- Added support for Zstd compression to the Python SDK.
- Added support for Google Cloud Profiler to the Go SDK.
- Added support for stateful DoFns to the Go SDK.
- The Go SDK's Row Coder now uses a different single-precision float encoding for float32 types to match Java's behavior (#22629).
- Fixed Python cross-language JDBC IO Connector cannot read or write rows containing Timestamp type values #19817.
- Fixed `AfterProcessingTime` behavior in Python's `DirectRunner` to match Java (#23071)
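
  A minimal sketch of a pipeline exercising `AfterProcessingTime` (values are illustrative):

  ```python
  # Sketch: a processing-time trigger whose DirectRunner semantics now match Java.
  import apache_beam as beam
  from apache_beam.transforms import window
  from apache_beam.transforms.trigger import AccumulationMode, AfterProcessingTime, Repeatedly

  with beam.Pipeline() as p:
      _ = (
          p
          | beam.Create([("k", 1), ("k", 2)])
          | beam.WindowInto(
              window.GlobalWindows(),
              trigger=Repeatedly(AfterProcessingTime(delay=10)),
              accumulation_mode=AccumulationMode.DISCARDING)
          | beam.GroupByKey()
          | beam.Map(print))
  ```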
- Go SDK doesn't yet support Slowly Changing Side Input pattern (#23106)
- Projection Pushdown optimizer is now on by default for streaming, matching the behavior of batch pipelines since 2.38.0. If you encounter a bug with the optimizer, please file an issue and disable the optimizer using pipeline option `--experiments=disable_projection_pushdown`.
- Previously available in the Java SDK, the Python SDK now also supports logging level overrides per module. (#18222)
- Added support for accessing GCP PubSub Message ordering keys (Java) (BEAM-13592)
- Projection Pushdown optimizer may break Dataflow upgrade compatibility for optimized pipelines when it removes unused fields. If you need to upgrade and encounter a compatibility issue, disable the optimizer using pipeline option `--experiments=disable_projection_pushdown`.
- Support for Spark 2.4.x is deprecated and will be dropped with the release of Beam 2.44.0 or soon after (Spark runner) (#22094).
- The modules amazon-web-services and kinesis for AWS Java SDK v1 are deprecated in favor of amazon-web-services2 and will be eventually removed after a few Beam releases (Java) (#21249).
- Fixed a condition where retrying queries would yield an incorrect cursor in the Java SDK Firestore Connector (#22089).
- Fixed plumbing allowed lateness in Go SDK. It was ignoring the user set value earlier and always used to set to 0. (#22474).
- Added RunInference API, a framework agnostic transform for inference. With this release, PyTorch and Scikit-learn are supported by the transform. See also example at apache_beam/examples/inference/pytorch_image_classification.py
- Upgraded to Hive 3.1.3 for HCatalogIO. Users can still provide their own version of Hive. (Java) (Issue-19554).
- Go SDK users can now use generic registration functions to optimize their DoFn execution. (BEAM-14347)
- Go SDK users may now write self-checkpointing Splittable DoFns to read from streaming sources. (BEAM-11104)
- Go SDK textio Reads have been moved to Splittable DoFns exclusively. (BEAM-14489)
- Pipeline drain support added for Go SDK has now been tested. (BEAM-11106)
- Go SDK users can now see heap usage, sideinput cache stats, and active process bundle stats in Worker Status. (BEAM-13829)
- The Go SDK now requires a minimum Go version of 1.18 in order to support generics (BEAM-14347).
- synthetic.SourceConfig field types have changed to int64 from int for better compatibility with Flink's use of Logical types in Schemas (Go) (BEAM-14173)
- Default coder updated to compress sources used with `BoundedSourceAsSDFWrapperFn` and `UnboundedSourceAsSDFWrapper`.
- Fixed Java expansion service to allow specific files to stage (BEAM-14160).
- Fixed Elasticsearch connection when using both ssl and username/password (Java) (BEAM-14000)
- Watermark estimation is now supported in the Go SDK (BEAM-11105).
- Support for impersonation credentials added to dataflow runner in the Java and Python SDK (BEAM-14014).
- Implemented Apache PulsarIO (BEAM-8218).
- JmsIO gains the ability to map any kind of input to any subclass of `javax.jms.Message` (Java) (BEAM-16308).
- JmsIO introduces the ability to write to dynamic topics (Java) (BEAM-16308).
  - A `topicNameMapper` must be set to extract the topic name from the input value.
  - A `valueMapper` must be set to convert the input value to JMS message.
- Reduce number of threads spawned by BigqueryIO StreamingInserts ( BEAM-14283).
- Implemented Apache PulsarIO (BEAM-8218).
- Support for Flink Scala 2.12, because most of the libraries support version 2.12 onwards. (BEAM-14386)
- 'Manage Clusters' JupyterLab extension added for users to configure usage of Dataproc clusters managed by Interactive Beam (Python) (BEAM-14130).
- Pipeline drain support added for Go SDK (BEAM-11106). Note: this feature is not yet fully validated and should be treated as experimental in this release.
- `DataFrame.unstack()`, `DataFrame.pivot()` and `Series.unstack()` implemented for DataFrame API (BEAM-13948, BEAM-13966).
- Support for impersonation credentials added to dataflow runner in the Java and Python SDK (BEAM-14014).
- Implemented Jupyterlab extension for managing Dataproc clusters (BEAM-14130).
- ExternalPythonTransform API added for easily invoking Python transforms from Java (BEAM-14143).
- Added support for Elasticsearch 8.x (BEAM-14003).
- Shard aware Kinesis record aggregation (AWS Sdk v2), (BEAM-14104).
- Upgrade to ZetaSQL 2022.04.1 (BEAM-14348).
- Fixed ReadFromBigQuery cannot be used with the interactive runner (BEAM-14112).
- Unused functions `ShallowCloneParDoPayload()`, `ShallowCloneSideInput()`, and `ShallowCloneFunctionSpec()` have been removed from the Go SDK's pipelinex package (BEAM-13739).
- JmsIO requires an explicit `valueMapper` to be set (BEAM-16308). You can use the `TextMessageMapper` to convert `String` inputs to JMS `TextMessage`s:

  ```java
  JmsIO.<String>write()
      .withConnectionFactory(jmsConnectionFactory)
      .withValueMapper(new TextMessageMapper());
  ```
- Coders in Python are expected to inherit from Coder. (BEAM-14351).
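
  A minimal sketch of a conforming coder (the encoding itself is illustrative):

  ```python
  # Sketch: custom coders should subclass apache_beam.coders.Coder.
  from apache_beam.coders import Coder

  class Utf8Coder(Coder):
      """Encodes and decodes plain strings as UTF-8 bytes."""

      def encode(self, value):
          return value.encode("utf-8")

      def decode(self, encoded):
          return encoded.decode("utf-8")

      def is_deterministic(self):
          return True
  ```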
- New abstract method `metadata()` added to io.filesystem.FileSystem in the Python SDK. (BEAM-14314)
- Flink 1.11 is no longer supported (BEAM-14139).
- Python 3.6 is no longer supported (BEAM-13657).
- Fixed Java Spanner IO NPE when ProjectID not specified in template executions (Java) (BEAM-14405).
- Fixed potential NPE in BigQueryServicesImpl.getErrorInfo (Java) (BEAM-14133).
- Introduce projection pushdown optimizer to the Java SDK (BEAM-12976). The optimizer currently only works on the BigQuery Storage API, but more I/Os will be added in future releases. If you encounter a bug with the optimizer, please file a JIRA and disable the optimizer using pipeline option `--experiments=disable_projection_pushdown`.
- A new IO for Neo4j graph databases was added. (BEAM-1857) It has the ability to update nodes and relationships using UNWIND statements and to read data using cypher statements with parameters.
- `amazon-web-services2` has reached feature parity and is finally recommended over the earlier `amazon-web-services` and `kinesis` modules (Java). These will be deprecated in one of the next releases (BEAM-13174).
  - Long outstanding write support for `Kinesis` was added (BEAM-13175).
  - Configuration was simplified and made consistent across all IOs, including the usage of `AwsOptions` (BEAM-13563, BEAM-13663, BEAM-13587).
  - Additionally, there's a long list of recent improvements and fixes to `S3` Filesystem (BEAM-13245, BEAM-13246, BEAM-13441, BEAM-13445, BEAM-14011), `DynamoDB` IO (BEAM-13209, BEAM-13209), `SQS` IO (BEAM-13631, BEAM-13510) and others.
- Pipeline dependencies supplied through `--requirements_file` will now be staged to the runner using binary distributions (wheels) of the PyPI packages for linux_x86_64 platform (BEAM-4032). To restore the behavior to use source distributions, set pipeline option `--requirements_cache_only_sources`. To skip staging the packages at submission time, set pipeline option `--requirements_cache=skip` (Python). A minimal options sketch follows below.
- The Flink runner now supports Flink 1.14.x (BEAM-13106).
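A minimal Python sketch of passing these staging options (the requirements file path is hypothetical); treat it as illustrative rather than a canonical invocation.

```python
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--requirements_file=requirements.txt",  # hypothetical path
    # Opt out of wheel staging and stage source distributions instead:
    "--requirements_cache_only_sources",
])
```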
- Interactive Beam now supports remotely executing Flink pipelines on Dataproc (Python) (BEAM-14071).
- (Python) Previously `DoFn.infer_output_types` was expected to return `Iterable[element_type]` where `element_type` is the PCollection element type. It is now expected to return `element_type`. Take care if you have overridden `infer_output_type` in a `DoFn` (this is not common). See BEAM-13860; a minimal sketch of the new contract follows below.
- (`amazon-web-services2`) The types of `awsRegion`/`endpoint` in `AwsOptions` changed from `String` to `Region`/`URI` (BEAM-13563).
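A minimal sketch of the new `infer_output_type` contract, assuming a simple DoFn that emits ints; the class name is hypothetical.

```python
import apache_beam as beam


class ParseInts(beam.DoFn):
    def process(self, element):
        yield int(element)

    def infer_output_type(self, input_type):
        # Under the new contract, return the element type itself,
        # not Iterable[element_type].
        return int
```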
- Beam 2.38.0 will be the last minor release to support Flink 1.11.
- (`amazon-web-services2`) Client providers (`withXYZClientProvider()`) as well as IO specific `RetryConfiguration`s are deprecated, instead use `withClientConfiguration()` or `AwsOptions` to configure AWS IOs / clients. Custom implementations of client providers shall be replaced with a respective `ClientBuilderFactory` and configured through `AwsOptions` (BEAM-13563).
- Fix S3 copy for large objects (Java) (BEAM-14011)
- Fix quadratic behavior of pipeline canonicalization (Go) (BEAM-14128)
- This caused unnecessarily long pre-processing times before job submission for large complex pipelines.
- Fix `pyarrow` version parsing (Python) (BEAM-14235)
- Some pipelines that use Java SpannerIO may raise a NPE when the project ID is not specified (BEAM-14405)
- Java 17 support for Dataflow (BEAM-12240).
- Users using Dataflow Runner V2 may see issues with state cache due to inaccurate object sizes (BEAM-13695).
- ZetaSql is currently unsupported (issue).
- Python 3.9 support in Apache Beam (BEAM-12000).
- Go SDK now has wrappers for the following Cross Language Transforms from Java, along with automatic expansion service startup for each.
- JDBCIO (BEAM-13293).
- Debezium (BEAM-13761).
- BeamSQL (BEAM-13683).
- BigQuery (BEAM-13732).
- KafkaIO now also has automatic expansion service startup. (BEAM-13821).
- DataFrame API now supports pandas 1.4.x (BEAM-13605).
- Go SDK DoFns can now observe trigger panes directly (BEAM-13757).
- Added option to specify a caching directory in Interactive Beam (Python) (BEAM-13685).
- Added support for caching batch pipelines to GCS in Interactive Beam (Python) (BEAM-13734).
- On rare occasions, Python Datastore source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- On rare occasions, Python GCS source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- Support for stopReadTime on KafkaIO SDF (Java) (BEAM-13171).
- Added ability to register URI schemes to use the S3 protocol via FileIO using amazon-web-services2 (amazon-web-services already had this ability). (BEAM-12435, BEAM-13245).
- Added support for cloudpickle as a pickling library for Python SDK (BEAM-8123). To use cloudpickle, set pipeline option: --pickler_lib=cloudpickle
- Added option to specify triggering frequency when streaming to BigQuery (Python) (BEAM-12865).
- Added option to enable caching uploaded artifacts across job runs for Python Dataflow jobs (BEAM-13459). To enable, set pipeline option: --enable_artifact_caching; this will be enabled by default in a future release.
- Updated jedis from 3.x to 4.x in Java RedisIO. If you are using RedisIO and using jedis directly, please refer to this page to update it. (BEAM-12092).
- Datatype of timestamp fields in `SqsMessage` for AWS IOs for SDK v2 was changed from `String` to `long`, visibility of all fields was fixed from `package private` to `public` (BEAM-13638).
- Properly check output timestamps on elements output from DoFns, timers, and onWindowExpiration in Java BEAM-12931.
- Fixed a bug with DeferredDataFrame.xs when used with a non-tuple key (BEAM-13421).
- Users may encounter an unexpected java.lang.ArithmeticException when outputting a timestamp for an element further than allowedSkew from an allowed DoFn skew set to a value more than Integer.MAX_VALUE.
- On rare occasions, Python Datastore source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- On rare occasions, Python GCS source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- On rare occasions, Java SpannerIO source may swallow some exceptions. Users are advised to upgrade to Beam 2.37.0 or later (BEAM-14005)
- MultiMap side inputs are now supported by the Go SDK (BEAM-3293).
- Side inputs are supported within Splittable DoFns for Dataflow Runner V1 and Dataflow Runner V2. (BEAM-12522).
- Upgrades Log4j version used in test suites (Apache Beam testing environment only, not for end user consumption) to 2.17.0 (BEAM-13434). Note that Apache Beam versions do not depend on the Log4j 2 dependency (log4j-core) impacted by CVE-2021-44228. However, we urge users to update direct and indirect dependencies (if any) on Log4j 2 to the latest version by updating their build configuration and redeploying impacted pipelines.
- We changed the data type for ranges in `JdbcIO.readWithPartitions` from `int` to `long` (BEAM-13149). This is a relatively minor breaking change, which we're implementing to improve the usability of the transform without increasing cruft. This transform is relatively new, so we may implement other breaking changes in the future to improve its usability.
- Side inputs are supported within Splittable DoFns for Dataflow Runner V1 and Dataflow Runner V2. (BEAM-12522).
- Added custom delimiters to Python TextIO reads (BEAM-12730).
- Added escapechar parameter to Python TextIO reads (BEAM-13189).
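A minimal Python sketch of the new TextIO read parameters; the input path, delimiter, and escape character are hypothetical choices.

```python
import apache_beam as beam

with beam.Pipeline() as p:
    lines = (
        p
        | beam.io.ReadFromText(
            "gs://my-bucket/records.txt",  # hypothetical input path
            delimiter=b"|",                # custom record delimiter
            escapechar=b"\\",              # escape character for the delimiter
        )
        | beam.Map(print))
```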
- Splittable reading is enabled by default while reading data with ParquetIO (BEAM-12070).
- DoFn Execution Time metrics added to Go (BEAM-13001).
- Cross-bundle side input caching is now available in the Go SDK for runners that support the feature by setting the EnableSideInputCache hook (BEAM-11097).
- Upgraded the GCP Libraries BOM version to 24.0.0 and associated dependencies (BEAM-11205). For Google Cloud client library versions set by this BOM, see this table.
- Removed avro-python3 dependency in AvroIO. Fastavro has already been our Avro library of choice on Python 3. Boolean use_fastavro is left for api compatibility, but will have no effect.(BEAM-13016).
- MultiMap side inputs are now supported by the Go SDK (BEAM-3293).
- Remote packages can now be downloaded from locations supported by apache_beam.io.filesystems. The files will be downloaded on Stager and uploaded to staging location. For more information, see BEAM-11275
- A new URN convention was adopted for cross-language transforms and existing URNs were updated. This may break advanced use-cases, for example, if a custom expansion service is used to connect different Beam Java and Python versions. (BEAM-12047).
- The upgrade to Calcite 1.28.0 introduces a breaking change in the SUBSTRING function in SqlTransform, when used with the Calcite dialect (BEAM-13099, CALCITE-4427).
- ListShards (with DescribeStreamSummary) is used instead of DescribeStream to list shards in Kinesis streams (AWS SDK v2). Due to this change, as mentioned in AWS documentation, for fine-grained IAM policies it is required to update them to allow calls to ListShards and DescribeStreamSummary APIs. For more information, see Controlling Access to Amazon Kinesis Data Streams (BEAM-13233).
- Non-splittable reading is deprecated while reading data with ParquetIO (BEAM-12070).
- Properly map main input windows to side input windows by default (Go) (BEAM-11087).
- Fixed data loss when writing to DynamoDB without setting deduplication key names (Java) (BEAM-13009).
- Go SDK Examples now have types and functions registered. (Go) (BEAM-5378)
- Users of beam-sdks-java-io-hcatalog (and beam-sdks-java-extensions-sql-hcatalog) must take care to override the transitive log4j dependency when they add a hive dependency (BEAM-13499).
- On rare occasions, Python Datastore source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- On rare occasions, Python GCS source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- On rare occasions, Java SpannerIO source may swallow some exceptions. Users are advised to upgrade to Beam 2.37.0 or later (BEAM-14005)
- The Beam Java API for Calcite SqlTransform is no longer experimental (BEAM-12680).
- Python's ParDo (Map, FlatMap, etc.) transforms now support a `with_exception_handling` option for easily ignoring bad records and implementing the dead letter pattern.
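A minimal sketch of the dead letter pattern with `with_exception_handling()`; the parsing function and inputs are hypothetical, and the default output tags (`good`/`bad`) are assumed.

```python
import apache_beam as beam


def parse(record):
    return int(record)  # raises ValueError on malformed input


with beam.Pipeline() as p:
    results = (
        p
        | beam.Create(["1", "2", "not a number"])
        | beam.Map(parse).with_exception_handling())
    results.good | "PrintGood" >> beam.Map(print)
    # Each failed record is paired with information about the raised exception.
    results.bad | "PrintBad" >> beam.Map(print)
```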
- `ReadFromBigQuery` and `ReadAllFromBigQuery` now run queries with BATCH priority by default. The `query_priority` parameter is introduced to the same transforms to allow configuring the query priority (Python) (BEAM-12913).
- [EXPERIMENTAL] Support for BigQuery Storage Read API added to `ReadFromBigQuery`. The newly introduced `method` parameter can be set as `DIRECT_READ` to use the Storage Read API. The default is `EXPORT` which invokes a BigQuery export request. (Python) (BEAM-10917). A minimal usage sketch follows after these entries.
- [EXPERIMENTAL] Added `use_native_datetime` parameter to `ReadFromBigQuery` to configure the return type of DATETIME fields when using `ReadFromBigQuery`. This parameter can only be used when `method = DIRECT_READ` (Python) (BEAM-10917).
- [EXPERIMENTAL] Added support for writing to Redis Streams as a sink in RedisIO (BEAM-13159)
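A minimal sketch of reading with the Storage Read API via `method=DIRECT_READ`; the table name is hypothetical.

```python
import apache_beam as beam

with beam.Pipeline() as p:
    rows = (
        p
        | beam.io.ReadFromBigQuery(
            table="my-project:my_dataset.my_table",               # hypothetical table
            method=beam.io.ReadFromBigQuery.Method.DIRECT_READ)   # Storage Read API
        | beam.Map(print))
```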
- Upgraded to Calcite 1.26.0 (BEAM-9379).
- Added a new `dataframe` extra to the Python SDK that tracks `pandas` versions we've verified compatibility with. We now recommend installing Beam with `pip install apache-beam[dataframe]` when you intend to use the DataFrame API (BEAM-12906).
- Added an example of deploying Python Apache Beam job with Spark Cluster
- SQL Rows are no longer flattened (BEAM-5505).
- [Go SDK] beam.TryCrossLanguage's signature now matches beam.CrossLanguage. Like other Try functions it returns an error instead of panicking. (BEAM-9918).
- BEAM-12925 was fixed. It used to silently pass incorrect null data read from JdbcIO. Pipelines affected by this will now start throwing failures instead of silently passing incorrect data.
- Fixed error while writing multiple DeferredFrames to csv (Python) (BEAM-12701).
- Fixed error when importing the DataFrame API with pandas 1.0.x installed (BEAM-12945).
- Fixed top.SmallestPerKey implementation in the Go SDK (BEAM-12946).
- On rare occasions, Python Datastore source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- On rare occasions, Python GCS source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- On rare occasions, Java SpannerIO source may swallow some exceptions. Users are advised to upgrade to Beam 2.37.0 or later (BEAM-14005)
- Go SDK is no longer experimental, and is officially part of the Beam release process.
- Matching Go SDK containers are published on release.
- Batch usage is well supported, and tested on Flink, Spark, and the Python Portable Runner.
- SDK Tests are also run against Google Cloud Dataflow, but this doesn't indicate reciprocal support.
- The SDK supports Splittable DoFns, Cross Language transforms, and most Beam Model basics.
- Go Modules are now used for dependency management.
- This is a breaking change, see Breaking Changes for resolution.
- Easier path to contribute to the Go SDK, no need to set up a GO_PATH.
- Minimum Go version is now Go v1.16
- See the announcement blogpost for full information once published.
- Projection pushdown in SchemaIO (BEAM-12609).
- Upgrade Flink runner to Flink versions 1.13.2, 1.12.5 and 1.11.4 (BEAM-10955).
- Go SDK pipelines require new import paths to use this release due to migration to Go Modules.
  - `go.mod` files will need to change to require `github.com/apache/beam/sdks/v2`.
  - Code depending on beam imports need to include v2 on the module path.
    - Fix by adding 'v2' to the import paths, turning `.../sdks/go/...` to `.../sdks/v2/go/...`
  - No other code change should be required to use v2.33.0 of the Go SDK.
- Since release 2.30.0, "The AvroCoder changes for BEAM-2303 [changed] the reader/writer from the Avro ReflectDatum* classes to the SpecificDatum* classes" (Java). This default behavior change has been reverted in this release. Use the `useReflectApi` setting to control it (BEAM-12628).
- Python GBK will stop supporting unbounded PCollections that have global windowing and a default trigger in Beam 2.34. This can be overridden with `--allow_unsafe_triggers`. (BEAM-9487).
- Python GBK will start requiring safe triggers or the `--allow_unsafe_triggers` flag starting with Beam 2.34. (BEAM-9487).
- Workaround to not delete orphaned files to avoid missing events when using Python WriteToFiles in streaming pipeline (BEAM-12950).
- Spark 2.x users will need to update Spark's Jackson runtime dependencies (`spark.jackson.version`) to at least version 2.9.2, due to Beam updating its dependencies.
- Go SDK jobs may produce "Failed to deduce Step from MonitoringInfo" messages following successful job execution. The messages are benign and don't indicate job failure. These are due to not yet handling PCollection metrics.
- On rare occasions, Python GCS source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- The Beam DataFrame API is no longer experimental! We've spent the time since the 2.26.0 preview announcement implementing the most frequently used pandas operations (BEAM-9547), improving documentation and error messages, adding examples, integrating DataFrames with interactive Beam, and of course finding and fixing bugs. Leaving experimental just means that we now have high confidence in the API and recommend its use for production workloads. We will continue to improve the API, guided by your feedback.
- New experimental Firestore connector in Java SDK, providing sources and sinks to Google Cloud Firestore (BEAM-8376).
- Added ability to use JdbcIO.Write.withResults without statement and preparedStatementSetter. (BEAM-12511)
- Added ability to register URI schemes to use the S3 protocol via FileIO. (BEAM-12435).
- Respect number of shards set in SnowflakeWrite batch mode. (BEAM-12715)
- Java SDK: Update Google Cloud Healthcare IO connectors from using v1beta1 to using the GA version.
- Add support to convert Beam Schema to Avro Schema for JDBC LogicalTypes: `VARCHAR`, `NVARCHAR`, `LONGVARCHAR`, `LONGNVARCHAR`, `DATE`, `TIME` (Java) (BEAM-12385).
- Reading from JDBC source by partitions (Java) (BEAM-12456).
- PubsubIO can now write to a dead-letter topic after a parsing error (Java)(BEAM-12474).
- New append-only option for Elasticsearch sink (Java) BEAM-12601
- DatastoreIO: Write and delete operations now follow automatic gradual ramp-up, in line with best practices (Java/Python) (BEAM-12260, BEAM-12272).
- ListShards (with DescribeStreamSummary) is used instead of DescribeStream to list shards in Kinesis streams. Due to this change, as mentioned in AWS documentation, for fine-grained IAM policies it is required to update them to allow calls to ListShards and DescribeStreamSummary APIs. For more information, see Controlling Access to Amazon Kinesis Data Streams (BEAM-12225).
- Python GBK will stop supporting unbounded PCollections that have global windowing and a default trigger in Beam 2.33. This can be overridden with `--allow_unsafe_triggers`. (BEAM-9487).
- Python GBK will start requiring safe triggers or the `--allow_unsafe_triggers` flag starting with Beam 2.33. (BEAM-9487).
- Fixed race condition in RabbitMqIO causing duplicate acks (Java) (BEAM-6516)
- On rare occasions, Python GCS source may swallow some exceptions. Users are advised to upgrade to Beam 2.38.0 or later (BEAM-14282)
- Fixed bug in ReadFromBigQuery when a RuntimeValueProvider is used as value of table argument (Python) (BEAM-12514).
- `CREATE FUNCTION` DDL statement added to Calcite SQL syntax. `JAR` and `AGGREGATE` are now reserved keywords. (BEAM-12339).
- Flink 1.13 is now supported by the Flink runner (BEAM-12277).
- Python `TriggerFn` has a new `may_lose_data` method to signal potential data loss. Default behavior assumes safe (necessary for backwards compatibility). See Deprecations for potential impact of overriding this. (BEAM-9487).
- Python Row objects are now sensitive to field order. So `Row(x=3, y=4)` is no longer considered equal to `Row(y=4, x=3)` (BEAM-11929).
- Kafka Beam SQL tables now ascribe meaning to the LOCATION field; previously it was ignored if provided.
- `TopCombineFn` disallows `compare` as its argument (Python) (BEAM-7372).
- Drop support for Flink 1.10 (BEAM-12281).
- Python GBK will stop supporting unbounded PCollections that have global windowing and a default trigger in Beam 2.33. This can be overridden with `--allow_unsafe_triggers`. (BEAM-9487).
- Python GBK will start requiring safe triggers or the `--allow_unsafe_triggers` flag starting with Beam 2.33. (BEAM-9487).
- Allow splitting apart document serialization and IO for ElasticsearchIO
- Support Bulk API request size optimization through addition of ElasticsearchIO.Write.withStatefulBatches
- Added capability to declare resource hints in Java and Python SDKs (BEAM-2085).
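A minimal Python sketch of declaring a resource hint on a transform; the hint value and DoFn are hypothetical.

```python
import apache_beam as beam


class Score(beam.DoFn):
    def process(self, element):
        yield element * 2


with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create([1, 2, 3])
        # Ask the runner to schedule this step on workers with at least 8 GB RAM.
        | beam.ParDo(Score()).with_resource_hints(min_ram="8GB"))
```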
- Added Spanner IO Performance tests for read and write. (Python) (BEAM-10029).
- Added support for accessing GCP PubSub Message ordering keys, message IDs and message publish timestamp (Python) (BEAM-7819).
- DataFrame API: Added support for collecting DataFrame objects in interactive Beam (BEAM-11855)
- DataFrame API: Added apache_beam.examples.dataframe module (BEAM-12024)
- Upgraded the GCP Libraries BOM version to 20.0.0 (BEAM-11205). For Google Cloud client library versions set by this BOM, see this table.
- Drop support for Flink 1.8 and 1.9 (BEAM-11948).
- MongoDbIO: Read.withFilter() and Read.withProjection() are removed since they are deprecated since Beam 2.12.0 (BEAM-12217).
- RedisIO.readAll() was removed since it was deprecated since Beam 2.13.0. Please use RedisIO.readKeyPatterns() for the equivalent functionality. (BEAM-12214).
- MqttIO.create() with clientId constructor removed because it was deprecated since Beam 2.13.0 (BEAM-12216).
- Spark Classic and Portable runners officially support Spark 3 (BEAM-7093).
- Official Java 11 support for most runners (Dataflow, Flink, Spark) (BEAM-2530).
- DataFrame API now supports GroupBy.apply (BEAM-11628).
- Added support for S3 filesystem on AWS SDK V2 (Java) (BEAM-7637)
- DataFrame API now supports pandas 1.2.x (BEAM-11531).
- Multiple DataFrame API bugfixes (BEAM-12071, BEAM-11929)
- Deterministic coding enforced for GroupByKey and Stateful DoFns. Previously non-deterministic coding was allowed, resulting in keys not properly being grouped in some cases. (BEAM-11719)
  To restore the old behavior, one can register `FakeDeterministicFastPrimitivesCoder` with `beam.coders.registry.register_fallback_coder(beam.coders.coders.FakeDeterministicFastPrimitivesCoder())` or use the `allow_non_deterministic_key_coders` pipeline option.
- Support for Flink 1.8 and 1.9 will be removed in the next release (2.30.0) (BEAM-11948).
- Many improvements related to Parquet support (BEAM-11460, BEAM-8202, and BEAM-11526)
- Hash Functions in BeamSQL (BEAM-10074)
- Hash functions in ZetaSQL (BEAM-11624)
- Create ApproximateDistinct using HLL Impl (BEAM-10324)
- SpannerIO supports using BigDecimal for Numeric fields (BEAM-11643)
- Add Beam schema support to ParquetIO (BEAM-11526)
- Support ParquetTable Writer (BEAM-8202)
- GCP BigQuery sink (streaming inserts) uses runner determined sharding (BEAM-11408)
- PubSub support types: TIMESTAMP, DATE, TIME, DATETIME (BEAM-11533)
- ParquetIO adds methods `readGenericRecords` and `readFilesGenericRecords`, which can read files with an unknown schema. See PR-13554 and (BEAM-11460)
- Added support for thrift in KafkaTableProvider (BEAM-11482)
- Added support for HadoopFormatIO to skip key/value clone (BEAM-11457)
- Support Conversion to GenericRecords in Convert.to transform (BEAM-11571).
- Support writes for Parquet Tables in Beam SQL (BEAM-8202).
- Support reading Parquet files with unknown schema (BEAM-11460)
- Support user configurable Hadoop Configuration flags for ParquetIO (BEAM-11527)
- Expose commit_offset_in_finalize and timestamp_policy to ReadFromKafka (BEAM-11677)
- S3 options were not provided to the boto3 client while using FlinkRunner and Beam worker pool container (BEAM-11799)
- HDFS not deduplicating identical configuration paths (BEAM-11329)
- Hash Functions in BeamSQL (BEAM-10074)
- Create ApproximateDistinct using HLL Impl (BEAM-10324)
- Add Beam schema support to ParquetIO (BEAM-11526)
- Add a Deque Encoder (BEAM-11538)
- Hash functions in ZetaSQL (BEAM-11624)
- Refactor ParquetTableProvider ()
- Add JVM properties to JavaJobServer (BEAM-8344)
- Single source of truth for supported Flink versions ()
- Use metric for Python BigQuery streaming insert API latency logging (BEAM-11018)
- Use metric for Java BigQuery streaming insert API latency logging (BEAM-11032)
- Upgrade Flink runner to Flink versions 1.12.1 and 1.11.3 (BEAM-11697)
- Upgrade Beam base image to use Tensorflow 2.4.1 (BEAM-11762)
- Create Beam GCP BOM (BEAM-11665)
- The Java artifacts "beam-sdks-java-io-kinesis", "beam-sdks-java-io-google-cloud-platform", and "beam-sdks-java-extensions-sql-zetasql" declare Guava 30.1-jre dependency (it was 25.1-jre in Beam 2.27.0). This new Guava version may introduce dependency conflicts if your project or dependencies rely on removed APIs. If affected, ensure to use an appropriate Guava version via `dependencyManagement` in Maven and `force` in Gradle.
- ReadFromMongoDB can now be used with MongoDB Atlas (Python) (BEAM-11266.)
- ReadFromMongoDB/WriteToMongoDB will mask password in display_data (Python) (BEAM-11444.)
- Support for X source added (Java/Python) (BEAM-X).
- There is a new transform `ReadAllFromBigQuery` that can receive multiple requests to read data from BigQuery at pipeline runtime. See PR 13170, and BEAM-9650.
- Beam modules that depend on Hadoop are now tested for compatibility with Hadoop 3 (BEAM-8569). (Hive/HCatalog pending)
- Publishing Java 11 SDK container images now supported as part of Apache Beam release process. (BEAM-8106)
- Added Cloud Bigtable Provider extension to Beam SQL (BEAM-11173, BEAM-11373)
- Added a schema provider for thrift data (BEAM-11338)
- Added combiner packing pipeline optimization to Dataflow runner. (BEAM-10641)
- Support for the Deque structure by adding a coder (BEAM-11538)
- HBaseIO hbase-shaded-client dependency should be now provided by the users (BEAM-9278).
- `--region` flag in amazon-web-services2 was replaced by `--awsRegion` (BEAM-11331).
- Splittable DoFn is now the default for executing the Read transform for Java based runners (Spark with bounded pipelines) in addition to existing runners from the 2.25.0 release (Direct, Flink, Jet, Samza, Twister2). The expected output of the Read transform is unchanged. Users can opt-out using `--experiments=use_deprecated_read`. The Apache Beam community is looking for feedback for this change as the community is planning to make this change permanent with no opt-out. If you run into an issue requiring the opt-out, please send an e-mail to [email protected] specifically referencing BEAM-10670 in the subject line and why you needed to opt-out. (Java) (BEAM-10670)
- Java BigQuery streaming inserts now have timeouts enabled by default. Pass `--HTTPWriteTimeout=0` to revert to the old behavior. (BEAM-6103)
- Added support for Contextual Text IO (Java), a version of text IO that provides metadata about the records (BEAM-10124). Support for this IO is currently experimental. Specifically, there are no update-compatibility guarantees for streaming jobs with this IO between current and future versions of Apache Beam SDK.
- Added support for avro payload format in Beam SQL Kafka Table (BEAM-10885)
- Added support for json payload format in Beam SQL Kafka Table (BEAM-10893)
- Added support for protobuf payload format in Beam SQL Kafka Table (BEAM-10892)
- Added support for avro payload format in Beam SQL Pubsub Table (BEAM-5504)
- Added option to disable unnecessary copying between operators in Flink Runner (Java) (BEAM-11146)
- Added CombineFn.setup and CombineFn.teardown to Python SDK. These methods let you initialize the CombineFn's state before any of the other methods of the CombineFn is executed and clean that state up later on. If you are using Dataflow, you need to enable Dataflow Runner V2 by passing `--experiments=use_runner_v2` before using this feature. (BEAM-3736) A minimal sketch follows below.
- Added support for NestedValueProvider for the Python SDK (BEAM-10856).
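A minimal sketch of a CombineFn using the new setup/teardown hooks; the resource being initialized is hypothetical.

```python
import apache_beam as beam


class SumWithClient(beam.CombineFn):
    def setup(self):
        # Runs before any other CombineFn method; initialize expensive
        # state here (clients, caches, models, ...).
        self.client = object()  # hypothetical stand-in for a real client

    def create_accumulator(self):
        return 0

    def add_input(self, accumulator, element):
        return accumulator + element

    def merge_accumulators(self, accumulators):
        return sum(accumulators)

    def extract_output(self, accumulator):
        return accumulator

    def teardown(self):
        # Clean up whatever setup() created.
        self.client = None
```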
- BigQuery's DATETIME type now maps to Beam logical type org.apache.beam.sdk.schemas.logicaltypes.SqlTypes.DATETIME
- Pandas 1.x is now required for dataframe operations.
- Non-idempotent combiners built via `CombineFn.from_callable()` or `CombineFn.maybe_from_callable()` can lead to incorrect behavior. (BEAM-11522).
- Splittable DoFn is now the default for executing the Read transform for Java based runners (Direct, Flink, Jet, Samza, Twister2). The expected output of the Read transform is unchanged. Users can opt-out using `--experiments=use_deprecated_read`. The Apache Beam community is looking for feedback for this change as the community is planning to make this change permanent with no opt-out. If you run into an issue requiring the opt-out, please send an e-mail to [email protected] specifically referencing BEAM-10670 in the subject line and why you needed to opt-out. (Java) (BEAM-10670)
- Added cross-language support to Java's KinesisIO, now available in the Python module `apache_beam.io.kinesis` (BEAM-10138, BEAM-10137).
- Update Snowflake JDBC dependency for SnowflakeIO (BEAM-10864)
- Added cross-language support to Java's SnowflakeIO.Write, now available in the Python module `apache_beam.io.snowflake` (BEAM-9898).
- Added delete function to Java's `ElasticsearchIO#Write`. Now, Java's ElasticsearchIO can be used to selectively delete documents using `withIsDeleteFn` function (BEAM-5757).
- Java SDK: Added new IO connector for InfluxDB - InfluxDbIO (BEAM-2546).
- Config options added for Python's S3IO (BEAM-9094)
- Support for repeatable fields in JSON decoder for `ReadFromBigQuery` added. (Python) (BEAM-10524)
- Added an opt-in, performance-driven runtime type checking system for the Python SDK (BEAM-10549). More details will be in an upcoming blog post.
- Added support for Python 3 type annotations on PTransforms using typed PCollections (BEAM-10258). More details will be in an upcoming blog post.
- Improved the Interactive Beam API where recording streaming jobs now start a long running background recording job. Running ib.show() or ib.collect() samples from the recording (BEAM-10603).
- In Interactive Beam, ib.show() and ib.collect() now have "n" and "duration" as parameters. These mean read only up to "n" elements and up to "duration" seconds of data read from the recording (BEAM-10603).
- Initial preview of Dataframes support. See also example at apache_beam/examples/wordcount_dataframe.py
- Fixed support for type hints on `@ptransform_fn` decorators in the Python SDK. (BEAM-4091) This is not enabled by default to preserve backwards compatibility; use the `--type_check_additional=ptransform_fn` flag to enable. It may be enabled by default in future versions of Beam.
- Python 2 and Python 3.5 support dropped (BEAM-10644, BEAM-9372).
- Pandas 1.x allowed. Older versions of Pandas may still be used, but may not be as well tested.
- Python transform ReadFromSnowflake has been moved from `apache_beam.io.external.snowflake` to `apache_beam.io.snowflake`. The previous path will be removed in the future versions.
- Dataflow streaming timers are once again not strictly time ordered when set earlier mid-bundle, as the fix for BEAM-8543 introduced more severe bugs and has been rolled back.
- Default compressor change breaks Dataflow Python streaming job update compatibility. Please use Python SDK version <= 2.23.0 or > 2.25.0 if job update is critical. (BEAM-11113)
- Apache Beam 2.24.0 is the last release with Python 2 and Python 3.5 support.
- New overloads for BigtableIO.Read.withKeyRange() and BigtableIO.Read.withRowFilter() methods that take ValueProvider as a parameter (Java) (BEAM-10283).
- The WriteToBigQuery transform (Python) in Dataflow Batch no longer relies on BigQuerySink by default. It relies on a new, fully-featured transform based on file loads into BigQuery. To revert the behavior to the old implementation, you may use `--experiments=use_legacy_bq_sink`.
- Add cross-language support to Java's JdbcIO, now available in the Python module `apache_beam.io.jdbc` (BEAM-10135, BEAM-10136).
- Add support of AWS SDK v2 for KinesisIO.Read (Java) (BEAM-9702).
- Add streaming support to SnowflakeIO in Java SDK (BEAM-9896)
- Support reading and writing to Google Healthcare DICOM APIs in Python SDK (BEAM-10601)
- Add dispositions for SnowflakeIO.write (BEAM-10343)
- Add cross-language support to SnowflakeIO.Read now available in the Python module `apache_beam.io.external.snowflake` (BEAM-9897).
- Shared library for simplifying management of large shared objects added to Python SDK. An example use case is sharing a large TF model object across threads (BEAM-10417).
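A minimal sketch of the shared-object pattern with `apache_beam.utils.shared`; the model loader and weights are hypothetical.

```python
import apache_beam as beam
from apache_beam.utils import shared


def load_model():
    # Hypothetical stand-in for an expensive-to-load, read-only object.
    return {"bias": 0.5}


class ScoreFn(beam.DoFn):
    def __init__(self, shared_handle):
        self._shared_handle = shared_handle

    def process(self, element):
        # acquire() returns the same instance for all threads on a worker,
        # constructing it only when needed.
        model = self._shared_handle.acquire(load_model)
        yield element + model["bias"]


with beam.Pipeline() as p:
    handle = shared.Shared()
    _ = p | beam.Create([1.0, 2.0]) | beam.ParDo(ScoreFn(handle))
```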
- Dataflow streaming timers are not strictly time ordered when set earlier mid-bundle (BEAM-8543).
- OnTimerContext should not create a new one when processing each element/timer in FnApiDoFnRunner (BEAM-9839)
- Key should be available in @OnTimer methods (Spark Runner) (BEAM-9850)
- WriteToBigQuery transforms now require a GCS location to be provided through either custom_gcs_temp_location in the constructor of WriteToBigQuery or the fallback option --temp_location, or pass method="STREAMING_INSERTS" to WriteToBigQuery (BEAM-6928).
- Python SDK now understands `typing.FrozenSet` type hints, which are not interchangeable with `typing.Set`. You may need to update your pipelines if type checking fails. (BEAM-10197)
- When a timer fires but is reset prior to being executed, a watermark hold may be leaked, causing a stuck pipeline BEAM-10991.
- Default compressor change breaks Dataflow Python streaming job update compatibility. Please use Python SDK version <= 2.23.0 or > 2.25.0 if job update is critical. (BEAM-11113)
- Support for reading from Snowflake added (Java) (BEAM-9722).
- Support for writing to Splunk added (Java) (BEAM-8596).
- Support for assume role added (Java) (BEAM-10335).
- A new transform to read from BigQuery has been added: `apache_beam.io.gcp.bigquery.ReadFromBigQuery`. This transform is experimental. It reads data from BigQuery by exporting data to Avro files, and reading those files. It also supports reading data by exporting to JSON files. This has small differences in behavior for Time and Date-related fields. See Pydoc for more information.
- Update Snowflake JDBC dependency and add application=beam to connection URL (BEAM-10383).
- `RowJson.RowJsonDeserializer`, `JsonToRow`, and `PubsubJsonTableProvider` now accept "implicit nulls" by default when deserializing JSON (Java) (BEAM-10220). Previously nulls could only be represented with explicit null values, as in `{"foo": "bar", "baz": null}`, whereas an implicit null like `{"foo": "bar"}` would raise an exception. Now both JSON strings will yield the same result by default. This behavior can be overridden with `RowJson.RowJsonDeserializer#withNullBehavior`.
- Fixed a bug in `GroupIntoBatches` experimental transform in Python to actually group batches by key. This changes the output type for this transform (BEAM-6696).
- Remove Gearpump runner. (BEAM-9999)
- Remove Apex runner. (BEAM-9999)
- RedisIO.readAll() is deprecated and will be removed in 2 versions, users must use RedisIO.readKeyPatterns() as a replacement (BEAM-9747).
- Fixed X (Java/Python) (BEAM-X).
- Basic Kafka read/write support for DataflowRunner (Python) (BEAM-8019).
- Sources and sinks for Google Healthcare APIs (Java)(BEAM-9468).
- Support for writing to Snowflake added (Java) (BEAM-9894).
- `--workerCacheMB` flag is supported in Dataflow streaming pipeline (BEAM-9964)
- `--direct_num_workers=0` is supported for FnApi runner. It will set the number of threads/subprocesses to number of cores of the machine executing the pipeline (BEAM-9443).
- Python SDK now has experimental support for SqlTransform (BEAM-8603).
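A minimal sketch of the experimental SqlTransform; it assumes a Java expansion service is available at runtime, and the sample rows are hypothetical.

```python
import apache_beam as beam
from apache_beam.transforms.sql import SqlTransform

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create([
            beam.Row(word="hello", length=5),
            beam.Row(word="beam", length=4)])
        # PCOLLECTION refers to the input PCollection of the transform.
        | SqlTransform("SELECT word FROM PCOLLECTION WHERE length >= 5")
        | beam.Map(print))
```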
- Add OnWindowExpiration method to Stateful DoFn (BEAM-1589).
- Added PTransforms for Google Cloud DLP (Data Loss Prevention) services integration (BEAM-9723):
- Inspection of data,
- Deidentification of data,
- Reidentification of data.
- Add a more complete I/O support matrix in the documentation site (BEAM-9916).
- Upgrade Sphinx to 3.0.3 for building PyDoc.
- Added a PTransform for image annotation using Google Cloud AI image processing service (BEAM-9646)
- Dataflow streaming timers are not strictly time ordered when set earlier mid-bundle (BEAM-8543).
- The Python SDK now requires `--job_endpoint` to be set when using `--runner=PortableRunner` (BEAM-9860). Users seeking the old default behavior should set `--runner=FlinkRunner` instead.
- Python: Deprecated module `apache_beam.io.gcp.datastore.v1` has been removed as the client it uses is out of date and does not support Python 3 (BEAM-9529). Please migrate your code to use apache_beam.io.gcp.datastore.v1new. See the updated datastore_wordcount for example usage.
- Python SDK: Added integration tests and updated batch write functionality for Google Cloud Spanner transform (BEAM-8949).
- Python SDK will now use Python 3 type annotations as pipeline type hints. (#10717)
  If you suspect that this feature is causing your pipeline to fail, calling `apache_beam.typehints.disable_type_annotations()` before pipeline creation will disable it completely, and decorating specific functions (such as `process()`) with `@apache_beam.typehints.no_annotations` will disable it for that function.
  More details will be in Ensuring Python Type Safety and an upcoming blog post.
- Java SDK: Introducing the concept of options in Beam Schemas. These options add extra context to fields and schemas. This replaces the current Beam metadata that is present in a FieldType only, options are available in fields and row schemas. Schema options are fully typed and can contain complex rows. Remark: Schema aware is still experimental. (BEAM-9035)
- Java SDK: The protobuf extension is fully schema aware and also includes protobuf option conversion to beam schema options. Remark: Schema aware is still experimental. (BEAM-9044)
- Added ability to write to BigQuery via Avro file loads (Python) (BEAM-8841). By default, file loads will be done using JSON, but it is possible to specify the temp_file_format parameter to perform file exports with AVRO. AVRO-based file loads work by exporting Python types into Avro types, so to switch to Avro-based loads, you will need to change your data types from Json-compatible types (string-type dates and timestamp, long numeric values as strings) into Python native types that are written to Avro (Python's date, datetime types, decimal, etc). For more information see https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-avro#avro_conversions.
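A minimal sketch of switching file loads to Avro with `temp_file_format`; the table, schema, and bucket are hypothetical.

```python
import datetime
import apache_beam as beam

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create([{"name": "a", "created": datetime.date(2020, 1, 1)}])
        | beam.io.WriteToBigQuery(
            "my-project:my_dataset.my_table",            # hypothetical table
            schema="name:STRING,created:DATE",
            method=beam.io.WriteToBigQuery.Method.FILE_LOADS,
            temp_file_format="AVRO",                     # export temp files as Avro
            custom_gcs_temp_location="gs://my-bucket/tmp"))
```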
- Added integration of Java SDK with Google Cloud AI VideoIntelligence service (BEAM-9147)
- Added integration of Java SDK with Google Cloud AI natural language processing API (BEAM-9634)
- `docker-pull-licenses` tag was introduced. Licenses/notices of third party dependencies will be added to the docker images when `docker-pull-licenses` was set. The files are added to `/opt/apache/beam/third_party_licenses/`. By default, no licenses/notices are added to the docker images. (BEAM-9136)
- Dataflow runner now requires the `--region` option to be set, unless a default value is set in the environment (BEAM-9199). See here for more details.
- HBaseIO.ReadAll now requires a PCollection of HBaseIO.Read objects instead of HBaseQuery objects (BEAM-9279).
- ProcessContext.updateWatermark has been removed in favor of using a WatermarkEstimator (BEAM-9430).
- Coder inference for PCollection of Row objects has been disabled (BEAM-9569).
- Go SDK docker images are no longer released until further notice.
- Java SDK: Beam Schema FieldType.getMetadata is now deprecated and is replaced by the Beam Schema Options, it will be removed in version `2.23.0`. (BEAM-9704)
- The `--zone` option in the Dataflow runner is now deprecated. Please use `--worker_zone` instead. (BEAM-9716)
- Java SDK: Adds support for Thrift encoded data via ThriftIO. (BEAM-8561)
- Java SDK: KafkaIO supports schema resolution using Confluent Schema Registry. (BEAM-7310)
- Java SDK: Add Google Cloud Healthcare IO connectors: HL7v2IO and FhirIO (BEAM-9468)
- Python SDK: Support for Google Cloud Spanner. This is an experimental module for reading and writing data from Google Cloud Spanner (BEAM-7246).
- Python SDK: Adds support for standard HDFS URLs (with server name). (#10223).
- New AnnotateVideo & AnnotateVideoWithContext PTransforms that integrate GCP Video Intelligence functionality. (Python) (BEAM-9146)
- New AnnotateImage & AnnotateImageWithContext PTransforms for element-wise & batch image annotation using Google Cloud Vision API. (Python) (BEAM-9247)
- Added a PTransform for inspection and deidentification of text using Google Cloud DLP. (Python) (BEAM-9258)
- New AnnotateText PTransform that integrates Google Cloud Natural Language functionality (Python) (BEAM-9248)
- ReadFromBigQuery now supports value providers for the query string (Python) (BEAM-9305)
- Direct runner for FnApi supports further parallelism (Python) (BEAM-9228)
- Support for @RequiresTimeSortedInput in Flink and Spark (Java) (BEAM-8550)
- ReadFromPubSub(topic=) in Python previously created a subscription under the same project as the topic. Now it will create the subscription under the project specified in pipeline_options. If the project is not specified in pipeline_options, then it will create the subscription under the same project as the topic. (BEAM-3453).
- SpannerAccessor in Java is now package-private to reduce API surface. `SpannerConfig.connectToSpanner` has been moved to `SpannerAccessor.create`. (BEAM-9310).
- ParquetIO hadoop dependency should be now provided by the users (BEAM-8616).
- Docker images will be deployed to apache/beam repositories from 2.20. They used to be deployed to apachebeam repository. (BEAM-9063)
- PCollections now have tags inferred from the result type (e.g. the keys of a dict or index of a tuple). Users may expect the old implementation which gave PCollection output ids a monotonically increasing id. To go back to the old implementation, use the `force_generated_pcollection_output_ids` experiment.
- Fixed numpy operators in ApproximateQuantiles (Python) (BEAM-9579).
- Fixed exception when running in IPython notebook (Python) (BEAM-9277).
- Fixed Flink uberjar job termination bug. (BEAM-9225)
- Fixed SyntaxError in process worker startup (BEAM-9503)
- Key should be available in @OnTimer methods (Java) (BEAM-1819).
- For versions 2.19.0 and older release notes are available on Apache Beam Blog.