The fixed version is available on PyPI, and we recommend that you update your code to use it. Please report any issues and let us know whether the fix works for you. The issue has been fixed in TensorFlow 2.11.
The TensorFlow team is happy to announce the release of TensorFlow 2.11. This update includes many improvements for both research and production use cases. You can update to the latest version of TensorFlow via the Python pip tool or by downloading from https://www.tensorflow.org/. The easiest way to upgrade is to run ``pip install --upgrade tensorflow``. If you encounter any problems, please report them on the TensorFlow issue tracker; doing so makes it easier for us to quickly triage and fix issues reported by users.
TensorFlow 2.11 Highlights
- A new tutorial has been added to the documentation to help beginners learn TensorFlow
- Improved inference performance on small and medium-sized graphs
- Improvements to the Probability API, including a shared probability function
- Performance improvements for a range of model types in HPC environments
- Support for Neon and NCS model types in TensorBoard
Issues fixed in this release:
TensorFlow 2.11.0: CVE-2022-41880 - a fixed version is available on PyPI; we recommend updating your code to use it
Highlights of TensorFlow 2.11
TensorFlow 2.11 includes many improvements across the board, but some highlights include:
* Various bug fixes
* Support for image recognition in Keras (API change)
* Support for generative adversarial networks (GANs) with Keras and TensorFlow Lite (API changes)
* Support for t-SNE on CPU and GPU using the new t-SNE backend
* Support for natural language processing using a new AutoEncoder model
* Improvements to the tf.keras library, including better support for regression models and custom metric types
* New operators and a new optimization algorithm in the Optimizer class
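The custom-metric support mentioned above follows the usual Keras pattern of an object with `update_state`, `result`, and `reset_state` methods. As a minimal, framework-free sketch of that pattern (the `RunningMean` class here is an illustrative stand-in, not the TensorFlow API):

```python
class RunningMean:
    """Minimal metric following the Keras-style update/result/reset pattern."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update_state(self, values):
        # Accumulate a batch of values into the running totals.
        self.total += sum(values)
        self.count += len(values)

    def result(self):
        # Return the metric value computed over all batches seen so far.
        return self.total / self.count if self.count else 0.0

    def reset_state(self):
        # Clear the accumulators, e.g. at the start of a new epoch.
        self.total = 0.0
        self.count = 0


metric = RunningMean()
metric.update_state([1.0, 2.0, 3.0])
metric.update_state([4.0])
print(metric.result())  # 2.5
```

The same shape is what a real Keras metric subclass implements, with tensors in place of Python lists.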
What's new in TensorFlow 2.11?
TensorFlow 2.11 includes improvements to the following areas:
- Improvements to training, including new support for gradient clipping when tuning training parameters.
- A new TensorFlow Estimator that makes it easier for researchers and data scientists to develop neural networks using Keras and TensorFlow.
- Improvements for running large computations in parallel on multiple CPUs or GPUs on Windows and Linux.
- A new tf.keras._estimator object that extends the functionality of tf.estimator.
- Improvements for Keras, including a new tutorial and updates to the Keras functionality library.
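The gradient clipping mentioned in the training improvements above rescales gradients whose norm exceeds a threshold, which helps keep optimization stable. A minimal plain-Python sketch of clipping by global norm (a conceptual illustration, not the TensorFlow API):

```python
import math


def clip_by_global_norm(grads, clip_norm):
    """Rescale a list of gradients so their global L2 norm is at most clip_norm."""
    global_norm = math.sqrt(sum(g * g for g in grads))
    if global_norm <= clip_norm:
        # Already within the threshold; leave the gradients unchanged.
        return grads
    scale = clip_norm / global_norm
    return [g * scale for g in grads]


print(clip_by_global_norm([3.0, 4.0], 1.0))  # ≈ [0.6, 0.8]
```

Because all gradients are scaled by the same factor, the update direction is preserved; only its magnitude is capped.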
Many bugs were fixed in this release, so we recommend updating your code to pick up the fixes. Please report any remaining bugs and let us know whether the fixes work for you. For more information on what's been fixed in TensorFlow 2.11, please see our changelog at https://www.tensorflow.org/versions/2111/changelog/.
What is new in TensorFlow 2.11?
The release of TensorFlow 2.11 includes many improvements for both research and production use cases. Some highlights are:
* Support for a new version of the Python Imaging Library (PIL)
* Improvements in the speed and accuracy of tensor operations, with specific support for multiple dimensions
* More powerful support for recurrent neural networks (RNNs)
* Improved performance for large data sets on a single machine
* New model selection APIs that make it easier to choose from existing models or experiment with new ones
* Faster training on a GPU using the new `-m batch_size` option
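Batched training, as referenced by the batch-size option above, splits the data into fixed-size chunks that are processed together on the accelerator. A quick plain-Python sketch (`batch` is an illustrative helper, not part of TensorFlow):

```python
def batch(data, batch_size):
    """Split a sequence into consecutive batches of at most batch_size items."""
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]


# Seven examples with a batch size of three: the last batch is smaller.
print(batch(list(range(7)), 3))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Larger batches amortize per-step overhead on a GPU, which is why exposing the batch size as a tunable option can speed up training.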
What's New in TensorFlow 2.11?
The following are the major changes in version 2.11.
- TensorFlow now has a more flexible dataflow API that supports different use cases. If you're using the DataLoader API, this is a required upgrade to ensure compatibility with future releases of TensorFlow.
- Users can now configure multi-GPU training via tf_train_dataflow, which sets up distributed training for multiple GPUs without any changes to your code. This feature is intended for high throughput applications that require thousands of parameters to be updated in parallel, such as machine translation, speech recognition, and generative modeling.
- Learning rate schedules are now supported by all optimizers (e.g., Adam).
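A learning rate schedule is just a function mapping the training step to a learning rate. For example, exponential decay follows lr = lr0 * rate^(step / decay_steps); a plain-Python sketch of that formula (a conceptual illustration, not the TensorFlow schedule API):

```python
def exponential_decay(initial_lr, decay_rate, decay_steps, step):
    """Learning rate after `step` steps under exponential decay."""
    return initial_lr * decay_rate ** (step / decay_steps)


# The rate starts at initial_lr and halves every decay_steps steps
# when decay_rate is 0.5.
print(exponential_decay(0.1, 0.5, 100, 0))    # 0.1
print(exponential_decay(0.1, 0.5, 100, 100))  # 0.05
```

An optimizer that supports schedules simply calls such a function each step instead of using a fixed learning rate.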