
Bump tensorflow from 2.17.0 to 2.18.0 in /engine #83

Open · dependabot[bot] wants to merge 1 commit into master

Conversation

dependabot[bot] (Contributor) commented on behalf of github · Nov 14, 2024

Bumps tensorflow from 2.17.0 to 2.18.0.

Release notes

Sourced from tensorflow's releases.

TensorFlow 2.18.0

Release 2.18.0

TensorFlow

Breaking Changes

  • tf.lite

    • C API:
      • An optional fourth parameter was added to TfLiteOperatorCreate as a step toward a cleaner API for TfLiteOperator. The TfLiteOperatorCreate function was added recently, in TensorFlow Lite version 2.17.0 (released on 7/11/2024), so we do not expect much code to be using it yet. Any code breakages can be resolved by passing nullptr as the new fourth parameter.
  • TensorRT support is disabled in CUDA builds for code health improvement.

  • Hermetic CUDA support is added.

    Hermetic CUDA uses a specific downloadable version of CUDA instead of the user’s locally installed CUDA. Bazel will download CUDA, CUDNN and NCCL distributions, and then use CUDA libraries and tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions.

Known Caveats

Major Features and Improvements

  • TensorFlow now supports and is compiled with NumPy 2.0 by default. Please see the NumPy 2 release notes and the NumPy 2 migration guide.
    • Note that NumPy's type promotion rules have changed (see NEP 50 for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes in results; a short illustration follows this list.
    • TensorFlow will continue to support NumPy 1.26 until 2025, aligning with the community-standard deprecation timeline.
  • tf.lite:
    • The LiteRT repo is live (see announcement), which means that in the coming months there will be changes to the development experience for TFLite. The TF Lite Runtime source will be moved later this year, and sometime after that we will start accepting contributions through that repo.
  • SignatureRunner is now supported for models with no signatures.
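
A minimal sketch of the NEP 50 promotion change referenced above, assuming NumPy 2.x is installed alongside TensorFlow 2.18:

```python
import numpy as np

arr = np.ones(3, dtype=np.float32)
scalar = np.float64(2.0)

# Under NumPy 1.x value-based promotion this result stayed float32; under
# NumPy 2 (NEP 50) the float64 scalar now promotes the result to float64.
result = arr + scalar
print(result.dtype)
```

Code that silently relied on the old value-based behaviour may see dtype or precision changes after upgrading.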

Bug Fixes and Other Changes

  • tf.data

    • Add optional synchronous argument to map, to specify that the map should run synchronously, as opposed to being parallelized when options.experimental_optimization.map_parallelization=True. This saves memory compared to setting num_parallel_calls=1.
    • Add optional use_unbounded_threadpool argument to map, to specify that the map should use an unbounded threadpool instead of the default pool, which is sized by the number of cores on the machine. This can improve throughput for map functions that perform IO or otherwise release the CPU.
    • Add tf.data.experimental.get_model_proto to allow users to peek into the analytical model inside of a dataset iterator. (These three additions are sketched after this list.)
  • tf.lite

    • Dequantize op supports TensorType_INT4.
      • This change includes per-channel dequantization.
    • Add support for stablehlo.composite.
    • EmbeddingLookup op supports per-channel quantization and TensorType_INT4 values.
    • FullyConnected op supports TensorType_INT16 activation and TensorType_Int4 weight per-channel quantization.
  • tf.tensor_scatter_update, tf.tensor_scatter_add, and other reduce types

    • Support bad_indices_policy (sketched after this list).
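
A sketch of the three tf.data additions listed above. The argument and function names come from the notes; the exact call patterns (keyword placement, passing an iterator to get_model_proto) are assumptions rather than verified API usage.

```python
import tensorflow as tf

ds = tf.data.Dataset.range(1_000)

# synchronous=True: run the map inline rather than letting the
# map_parallelization optimization parallelize it; saves memory compared to
# setting num_parallel_calls=1.
ds_sync = ds.map(lambda x: x * 2, synchronous=True)

# use_unbounded_threadpool=True: let IO-heavy map functions use more threads
# than the default core-count-sized pool.
ds_io = ds.map(lambda x: x + 1,
               num_parallel_calls=tf.data.AUTOTUNE,
               use_unbounded_threadpool=True)

# Peek at the analytical model behind an iterator (call pattern assumed).
iterator = iter(ds_io)
model_proto = tf.data.experimental.get_model_proto(iterator)
print(type(model_proto))
```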

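For the bad_indices_policy item, a hedged sketch; the public symbols are spelled tf.tensor_scatter_nd_update / tf.tensor_scatter_nd_add, and the "IGNORE" policy string used below is an assumption based only on the argument name given in the notes.

```python
import tensorflow as tf

tensor = tf.zeros([4], dtype=tf.float32)
indices = [[1], [10]]   # second index is out of range for a length-4 tensor
updates = [1.0, 2.0]

# Assumed usage: bad_indices_policy="IGNORE" skips out-of-range indices
# instead of raising an error (the default behaviour).
result = tf.tensor_scatter_nd_update(
    tensor, indices, updates, bad_indices_policy="IGNORE")
print(result.numpy())
```
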
Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Akhil Goel, akhilgoe, Alexander Pivovarov, Amir Samani, Andrew Goodbody, Andrey Portnoy, Anthony Platanios, bernardoArcari, Brett Taylor, buptzyb, Chao, Christian Clauss, Cocoa, Daniil Kutz, Darya Parygina, dependabot[bot], Dimitris Vardoulakis, Dragan Mladjenovic, Elfie Guo, eukub, Faijul Amin, flyingcat, Frédéric Bastien, ganyu.08, Georg Stefan Schmid, Grigory Reznikov, Harsha H S, Harshit Monish, Heiner, Ilia Sergachev, Jan, Jane Liu, Jaroslav Sevcik, Kaixi Hou, Kanvi Khanna, Kristof Maar, Kristóf Maár, LakshmiKalaKadali, Lbertho-Gpsw, lingzhi98, MarcoFalke, Masahiro Hiramori, Mmakevic-Amd, mraunak, Nobuo Tsukamoto, Notheisz57, Olli Lupton, Pearu Peterson, pemeliya, Peyara Nando, Philipp Hack, Phuong Nguyen, Pol Dellaiera, Rahul Batra, Ruturaj Vaidya, sachinmuradi, Sergey Kozub, Shanbin Ke, Sheng Yang, shengyu, Shraiysh, Shu Wang, Surya, sushreebarsa, Swatheesh-Mcw, syzygial, Tai Ly, terryysun, tilakrayal, Tj Xu, Trevor Morris, Tzung-Han Juang, wenchenvincent, wondertx, Xuefei Jiang, Ye Huang, Yimei Sun, Yunlong Liu, Zahid Iqbal, Zhan Lu, Zoranjovanovic-Ns, Zuri Obozuwa

... (truncated)

Changelog

Sourced from tensorflow's changelog.

Release 2.18.0

TensorFlow

Breaking Changes

  • tf.lite

    • Interpreter:
      • tf.lite.Interpreter gives a deprecation warning and a redirection notice to its new location at ai_edge_litert.interpreter. See the migration guide for details (and the sketch after this list).
    • C API:
      • An optional fourth parameter was added to TfLiteOperatorCreate as a step toward a cleaner API for TfLiteOperator. The TfLiteOperatorCreate function was added recently, in TensorFlow Lite version 2.17.0 (released on 7/11/2024), so we do not expect much code to be using it yet. Any code breakages can be resolved by passing nullptr as the new fourth parameter.
  • TensorRT support is disabled in CUDA builds for code health improvement.

  • TensorFlow now supports and is compiled with NumPy 2.0 by default. Please see the NumPy 2 release notes and the NumPy 2 migration guide.

    • Note that NumPy's type promotion rules have changed (see NEP 50 for details). This may change the precision at which computations happen, leading either to type errors or to numerical changes in results.
    • TensorFlow will continue to support NumPy 1.26 until 2025, aligning with the community-standard deprecation timeline.
  • Hermetic CUDA support is added.

    Hermetic CUDA uses a specific downloadable version of CUDA instead of the user’s locally installed CUDA. Bazel will download CUDA, CUDNN and NCCL distributions, and then use CUDA libraries and tools as dependencies in various Bazel targets. This enables more reproducible builds for Google ML projects and supported CUDA versions.

  • Remove the EnumNamesXNNPackFlags function in tensorflow/lite/acceleration/configuration/configuration_generated.h.

    This change is a bug fix in the automatically generated code, produced by the new flatbuffers generator. The flatbuffers library is updated to 24.3.25 in tensorflow/tensorflow@c17d64d; the new version includes google/flatbuffers#7813, which fixed an underlying flatbuffer code generator bug.
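
A minimal sketch of the tf.lite.Interpreter migration flagged above, assuming the ai_edge_litert package is installed and exposes an Interpreter with the same constructor; "model.tflite" is a hypothetical path.

```python
import tensorflow as tf

# Before: still works in TF 2.18, but now emits a deprecation warning.
interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical path

# After: import from the new location named in the deprecation notice.
from ai_edge_litert.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details())
```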

Known Caveats

Major Features and Improvements

  • tf.lite:
    • The LiteRT repo is live (see announcement), which means that in the coming months there will be changes to the development experience for TFLite. The TF Lite Runtime source will be moved later this year, and sometime after that we will start accepting contributions through that repo.
    • SignatureRunner is now supported for models with no signatures.

Bug Fixes and Other Changes

  • tf.data

    • Add optional synchronous argument to map, to specify that the map should run synchronously, as opposed to being parallelized when options.experimental_optimization.map_parallelization=True. This saves memory compared to setting num_parallel_calls=1.
    • Add optional use_unbounded_threadpool argument to map, to specify that the map should use an unbounded threadpool instead of the default pool, which is sized by the number of cores on the machine. This can improve throughput for map functions that perform IO or otherwise release the CPU.
    • Add tf.data.experimental.get_model_proto to allow users to peek into the analytical model inside of a dataset iterator.
  • tf.lite

    • Dequantize op supports TensorType_INT4.
      • This change includes per-channel dequantization.
    • Add support for stablehlo.composite.
    • EmbeddingLookup op supports per-channel quantization and TensorType_INT4 values.
    • FullyConnected op supports TensorType_INT16 activation and TensorType_Int4 weight per-channel quantization.
    • Enable per-tensor quantization support in dynamic range quantization of the TRANSPOSE_CONV layer. Fixes a TFLite converter bug.

... (truncated)

Commits
  • 6550e4b Merge pull request #78464 from tensorflow/rtg0795-patch-1
  • 7e0c244 Merge pull request #78463 from tensorflow-jenkins/version-numbers-2.18.0-21101
  • 35624d2 Update RELEASE.md to move TFLite SignatureRunner to the right section
  • 8d2c5e1 Update version numbers to 2.18.0
  • d5f4a3f Merge pull request #77589 from tensorflow-jenkins/version-numbers-2.18.0rc2-1...
  • 7cbcbf3 Update version numbers to 2.18.0-rc2
  • 84c9398 Merge pull request #77576 from tensorflow/r2.18-be4f646ec43
  • 8fca5e7 PR #17430: [ROCm] Use unique_ptr for TupleHandle in pjrt_se_client
  • 2c3c798 Merge pull request #77025 from tensorflow-jenkins/version-numbers-2.18.0rc1-2...
  • 10693a4 Update version numbers to 2.18.0-rc1
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.17.0 to 2.18.0.
- [Release notes](https://github.com/tensorflow/tensorflow/releases)
- [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md)
- [Commits](tensorflow/tensorflow@v2.17.0...v2.18.0)

---
updated-dependencies:
- dependency-name: tensorflow
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] added the dependencies (Pull requests that update a dependency file) and python (Pull requests that update Python code) labels on Nov 14, 2024
Labels: dependencies, ENGINE, python