Introducing Wake Vision: A High-Quality, Large-Scale Dataset for TinyML Computer Vision Applications
The TensorFlow Blog
1M ago
Posted by Colby Banbury, Emil Njor, Andrea Mattia Garavagno, and Vijay Janapa Reddi – Harvard University

TinyML is an exciting frontier in machine learning, enabling models to run on extremely low-power devices such as microcontrollers and edge devices. However, the field's growth has been stifled by a lack of large, high-quality, tailored datasets. That's where Wake Vision comes in: a new dataset designed to accelerate research and development in TinyML.

Why TinyML Needs Better Data

The development of TinyML requires compact and efficient models, often only a few hundred kilobytes in size…
What's new in TensorFlow 2.15
1M ago
Posted by the TensorFlow team

TensorFlow 2.15 has been released! Highlights of this release (and 2.14) include a much simpler installation method for NVIDIA CUDA libraries on Linux, oneDNN CPU performance optimizations for Windows x64 and x86, full availability of tf.function types, an upgrade to Clang 17.0.1, and much more! For the full release notes, please check here.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please check here.

TensorFlow Core

NVIDIA CUDA libraries for Linux

The tensorflow pi…
MLSysBook.AI: Principles and Practices of Machine Learning Systems Engineering
2M ago
Posted by Jason Jabbour, Kai Kleinbard, and Vijay Janapa Reddi (Harvard University)

Everyone wants to do the modeling work, but no one wants to do the engineering. If ML developers are like astronauts exploring new frontiers, ML systems engineers are the rocket scientists designing and building the engines that take them there.

Introduction

"Everyone wants to do modeling, but no one wants to do the engineering" highlights a stark reality in the machine learning (ML) world: the allure of building sophisticated models often overshadows the critical task of engineering them into robust, scalable…
What's new in TensorFlow 2.18
2M ago
Posted by the TensorFlow team

TensorFlow 2.18 has been released! Highlights of this release (and 2.17) include NumPy 2.0, the LiteRT repository, a CUDA update, Hermetic CUDA, and more. For the full release notes, please click here.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

NumPy 2.0

The TensorFlow 2.18 release includes support for NumPy 2.0. While the majority of TensorFlow APIs will function seamlessly with NumPy 2.0, this may break some edge ca…
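The edge cases the release notes warn about usually involve code that depends on NumPy 1.x behavior. A minimal sketch of a version gate one might use while migrating (plain NumPy; the helper name is hypothetical, and the `np.array(..., copy=False)` tightening is one documented NumPy 2.0 change):

```python
import numpy as np

def numpy_major_version() -> int:
    """Major version of the installed NumPy, e.g. 1 or 2."""
    return int(np.__version__.split(".")[0])

x = [1.0, 2.0, 3.0]

# NumPy 2.0 tightened np.array(..., copy=False): it now raises if a copy
# cannot be avoided, where 1.x silently copied. np.asarray sidesteps the
# pitfall on both major versions, which is the kind of edge case to audit
# when upgrading alongside TensorFlow 2.18.
arr = np.asarray(x, dtype=np.float64)
```

Gating on `numpy_major_version()` lets a codebase keep one code path per known behavioral difference rather than pinning NumPy below 2.0.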
What's new in TensorFlow 2.17
6M ago
Posted by the TensorFlow team

TensorFlow 2.17 has been released! Highlights of this release (and 2.16) include a CUDA update, upcoming NumPy 2.0 support, and more. For the full release notes, please click here.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

CUDA Update

TensorFlow binary distributions now ship with dedicated CUDA kernels for GPUs with a compute capability of 8.9. This improves performance on popular Ada-generation GPUs like the NVIDIA RTX 40**, L…
Faster Dynamically Quantized Inference with XNNPack
10M ago
Posted by Alan Kelly, Software Engineer

We are excited to announce that XNNPack's Fully Connected and Convolution 2D operators now support dynamic range quantization. XNNPack is TensorFlow Lite's CPU backend, and CPUs deliver the widest reach for ML inference, remaining the default target for TensorFlow Lite. Consequently, improving CPU inference performance is a top priority. By adding support for dynamic range quantization to the Fully Connected and Convolution operators, we quadrupled inference performance in TensorFlow Lite's XNNPack backend compared to the single-precision baseline. This m…
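To make the technique concrete: in dynamic range quantization, weights are stored as int8 with a floating-point scale and dequantized (or computed against) at runtime, while activations stay in float. A schematic NumPy sketch of the weight-side arithmetic (illustration only, not XNNPack's actual kernels; function names are hypothetical):

```python
import numpy as np

def quantize_dynamic(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization of float32 weights:
    the weight-storage half of dynamic range quantization."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights at inference time."""
    return q.astype(np.float32) * scale

w = np.array([[0.5, -1.0], [0.25, 0.75]], dtype=np.float32)
q, scale = quantize_dynamic(w)
w_hat = dequantize(q, scale)

# Reconstruction error is bounded by half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

The 4x storage saving (int8 vs. float32) is what lets integer kernels do most of the work while the model still accepts float inputs.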
What's new in TensorFlow 2.16
11M ago
Posted by the TensorFlow team

TensorFlow 2.16 has been released! Highlights of this release (and 2.15) include Clang as the default compiler for building TensorFlow CPU wheels on Windows, Keras 3 as the default Keras version, support for Python 3.12, and much more! For the full release notes, please click here.

Note: Release updates on the new multi-backend Keras will be published on keras.io, starting with Keras 3.0. For more information, please see https://keras.io/keras_3/.

TensorFlow Core

Clang 17

Clang is now the preferred compiler for building TensorFlow CPU wheels on the Windows platform, starting with th…
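Because 2.16 makes Keras 3 the default `tf.keras`, projects that still depend on Keras 2 behavior can opt out via the documented `TF_USE_LEGACY_KERAS` environment variable (together with the `tf-keras` package). A minimal sketch; the TensorFlow import is shown as a comment since the variable must be set before it runs:

```python
import os

# Opt out of Keras 3: must be set *before* TensorFlow is imported.
# Requires the tf-keras package to be installed (pip install tf-keras).
os.environ["TF_USE_LEGACY_KERAS"] = "1"

# import tensorflow as tf   # tf.keras would now resolve to Keras 2
#                           # (assumes tf-keras is installed)
```

Setting the variable in the shell that launches the process works equally well; the in-code form is just convenient for notebooks.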
Graph neural networks in TensorFlow
1y ago
Posted by Dustin Zelle, Software Engineer, Research, and Arno Eigenwillig, Software Engineer, CoreML

This article is also shared on the Google Research Blog.

Objects and their relationships are ubiquitous in the world around us, and relationships can be as important to understanding an object as its own attributes viewed in isolation: consider transportation networks, production networks, knowledge graphs, or social networks. Discrete mathematics and computer science have a long history of formalizing such networks as graphs, consisting of nodes arbitrarily connected by edges…
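The core idea a graph neural network builds on can be shown in a few lines: each node updates its state by mixing in an aggregate of its neighbors' states. A minimal plain-NumPy sketch of one such message-passing step (illustration only; the TF-GNN library represents graphs with its own GraphTensor type and uses learned transformations rather than this fixed averaging):

```python
import numpy as np

# A 3-node path graph: node 0 - node 1 - node 2.
adjacency = np.array([
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
], dtype=np.float32)
features = np.array([[1.0], [2.0], [3.0]], dtype=np.float32)

# One message-passing round: new state = average of own state and the
# mean of the neighbors' states.
degree = adjacency.sum(axis=1, keepdims=True)
neighbor_mean = (adjacency @ features) / degree
updated = 0.5 * features + 0.5 * neighbor_mean
# → nodes 0 and 2 move toward 2.0; node 1 (already at the mean) stays put.
```

Stacking several such rounds, each with trainable weights in place of the fixed 0.5 mixing, is the essence of the architectures the post describes.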
TensorFlow 2.15 update: hot-fix for Linux installation issue
1y ago
Posted by the TensorFlow team

We are releasing a hot-fix for an issue affecting the TensorFlow installation process. The TensorFlow 2.15.0 Python package was released such that it requested tensorrt-related packages that cannot be found unless the user installs them beforehand or provides additional installation flags. This dependency affected anyone installing TensorFlow 2.15 alongside NVIDIA CUDA dependencies via pip install tensorflow[and-cuda]. Depending on the installation method, TensorFlow 2.14 would be installed instead of 2.15, or users could receive an installation err…
Half-precision Inference Doubles On-Device Inference Performance
1y ago
Posted by Marat Dukhan and Frank Barchard, Software Engineers

CPUs deliver the widest reach for ML inference and remain the default target for TensorFlow Lite. Consequently, improving CPU inference performance is a top priority, and we are excited to announce that we doubled floating-point inference performance in TensorFlow Lite's XNNPack backend by enabling half-precision inference on ARM CPUs. This means that more AI-powered features may be deployed to older and lower-tier devices. Traditionally, TensorFlow Lite supported two kinds of numerical computations in machine learning models: a…
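The trade-off behind half-precision inference is easy to see with NumPy's float16 type: half the memory (and, on ARM CPUs with native fp16 arithmetic, roughly double the throughput) in exchange for about three decimal digits of precision. A small sketch (NumPy on the host, not TensorFlow Lite's actual ARM kernels):

```python
import numpy as np

weights32 = np.array([0.1, -0.25, 1.5, 3.14159], dtype=np.float32)
weights16 = weights32.astype(np.float16)   # fp32 -> fp16 conversion

# Half the storage: 2 bytes per value instead of 4.
assert weights16.nbytes == weights32.nbytes // 2

# fp16 has a 10-bit mantissa, so relative error stays around 1e-3.
max_err = np.max(np.abs(weights32 - weights16.astype(np.float32)))
assert max_err < 1e-2
```

Whether that precision loss is acceptable is model-dependent, which is why half-precision inference is an opt-in rather than a silent default.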