Releases: keras-team/keras
Keras 3.3.2
This is a simple fix release that re-surfaces legacy Keras 2 APIs that aren't part of the Keras package proper but are still featured in `tf.keras`. No other content has changed.
Keras 3.3.1
This is a simple fix release that moves the legacy `_tf_keras` API directory to the root of the Keras pip package. This preserves import paths like `from tensorflow.keras import layers` without requiring any changes to the TensorFlow API files.
No other content has changed.
Keras 3.3.0
What's Changed
- Introduce float8 training.
- Add LoRA to `ConvND` layers.
- Add `keras.ops.ctc_decode` for JAX and TensorFlow.
- Add `keras.ops.vectorize` and `keras.ops.select`.
- Add `keras.ops.image.rgb_to_grayscale`.
- Add `keras.losses.Tversky` loss.
- Add full `bincount` and `digitize` sparse support.
- Models and layers now return owned metrics recursively.
- Add pickling support for Keras models (see the sketch after this list). Note that pickling is not recommended; prefer the Keras saving APIs.
- Bug fixes and performance improvements.
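For the new pickling support, here is a minimal sketch; the tiny two-layer model, its shapes, and the random input are made up purely for illustration, and the Keras saving APIs remain the recommended path:

```python
import pickle

import numpy as np
import keras

# Purely illustrative model; any Keras model should round-trip the same way.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(2, activation="softmax"),
])

# Serialize and restore the model with the standard-library pickle module.
restored = pickle.loads(pickle.dumps(model))

x = np.random.rand(1, 4).astype("float32")
np.testing.assert_allclose(model.predict(x), restored.predict(x), rtol=1e-5)
```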
In addition, the codebase structure has evolved:
- All source files are now in `keras/src/`.
- All API files are now in `keras/api/`.
- The codebase structure stays unchanged when building the Keras pip package. This means you can `pip install` Keras directly from the GitHub sources.
New Contributors
- @kapoor1992 made their first contribution in #19484
- @IMvision12 made their first contribution in #19393
- @alanwilter made their first contribution in #19438
- @chococigar made their first contribution in #19323
- @LukeWood made their first contribution in #19555
- @AlexanderLavelle made their first contribution in #19575
Full Changelog: v3.2.1...v3.3.0
Keras 3.2.1
Keras 3.2.0
What's Changed
- Introduce a QLoRA-like technique for LoRA fine-tuning of `Dense` and `EinsumDense` layers (and thereby any LLM) in int8 precision.
- Extend `keras.ops.custom_gradient` support to PyTorch.
- Add `keras.layers.JaxLayer` and `keras.layers.FlaxLayer` to wrap JAX/Flax modules as Keras layers.
- Allow `save_model` and `load_model` to accept a file-like object.
- Add quantization support to the `Embedding` layer.
- Make it possible to update metrics inside a custom `compute_loss` method with all backends.
- Make it possible to access `self.losses` inside a custom `compute_loss` method with the JAX backend.
- Add `keras.losses.Dice` loss (see the sketch after this list).
- Add `keras.ops.correlate`.
- Make it possible to use cuDNN LSTM and GRU with a mask with the TensorFlow backend.
- Better JAX support in `model.export()`: add support for aliases, finer control over `jax2tf` options, and dynamic batch shapes.
- Bug fixes and performance improvements.
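As a small illustration of the new `keras.losses.Dice` loss, here is a minimal sketch; the random segmentation masks and their shapes are invented for the example:

```python
import numpy as np
import keras

# Hypothetical binary segmentation targets and predictions, shape (batch, h, w, 1).
y_true = np.random.randint(0, 2, size=(2, 8, 8, 1)).astype("float32")
y_pred = np.random.rand(2, 8, 8, 1).astype("float32")

# Dice can be used standalone or passed to model.compile(loss=keras.losses.Dice()).
dice = keras.losses.Dice()
print(float(dice(y_true, y_pred)))
```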
New Contributors
- @abhaskumarsinha made their first contribution in #19302
- @qaqland made their first contribution in #19378
- @tvogel made their first contribution in #19310
- @lpizzinidev made their first contribution in #19409
- @Murhaf made their first contribution in #19444
Full Changelog: v3.1.1...v3.2.0
Keras 3.1.1
This is a minor bugfix release over 3.1.0.
What's Changed
- Unwrap variable values in all stateless calls by @hertschuh in #19287
- Fix `draw_seed` causing device discrepancy issue during `torch`'s symbolic execution by @KhawajaAbaid in #19289
- Fix `TestCase.run_layer_test` for multi-output layers by @shkarupa-alex in #19293
- Sine docstring by @grasskin in #19295
- Fix `keras.ops.softmax` for the TensorFlow backend by @tirthasheshpatel in #19300
- Fix mixed precision check in `TestCase.run_layer_test`: compare with output_spec dtype instead of hardcoded float16 by @shkarupa-alex in #19297
- `ArrayDataAdapter` no longer converts to NumPy and supports sparse tens… by @hertschuh in #19298
- Add token to codecov by @haifeng-jin in #19312
- Add TensorFlow support for variable `scatter_update` in optimizers by @hertschuh in #19313
- Replace `dm-tree` with `optree` by @james77777778 in #19306
- Downgrade codecov to v3 by @haifeng-jin in #19319
- Allow tensors in `tf.Dataset`s to have different dimensions by @hertschuh in #19318
- Update codecov setting by @haifeng-jin in #19320
- Set dtype policy for uint8 by @sampathweb in #19327
- Use Value dim shape for `Attention.compute_output_shape` by @sampathweb in #19284
New Contributors
- @tirthasheshpatel made their first contribution in #19300
Full Changelog: v3.1.0...v3.1.1
Keras 3.1.0
New features
- Add support for int8 inference. Just call `model.quantize("int8")` to do an in-place conversion of a bfloat16 or float32 model to an int8 model (see the sketch after this list). Note that only `Dense` and `EinsumDense` layers will be converted (this covers LLMs and all Transformers in general). We may add more supported layers over time.
- Add `keras.config.set_backend(backend)` utility to reload a different backend.
- Add `keras.layers.MelSpectrogram` layer for turning raw audio data into a Mel spectrogram representation.
- Add `keras.ops.custom_gradient` decorator (only for JAX and TensorFlow).
- Add `keras.ops.image.crop_images`.
- Add `pad_to_aspect_ratio` argument to `image_dataset_from_directory`.
- Add `keras.random.binomial` and `keras.random.beta` functions.
- Enable `keras.ops.einsum` to run with int8 x int8 inputs and int32 output.
- Add `verbose` argument in all dataset-creation utilities.
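A minimal sketch of the int8 inference flow described above, using a made-up toy model in place of a real Transformer/LLM:

```python
import numpy as np
import keras

# Toy float32 model standing in for a real model; only the Dense layers get quantized.
model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(4),
])

# In-place conversion of the weights to int8 for inference.
model.quantize("int8")

x = np.random.rand(2, 16).astype("float32")
print(model.predict(x).shape)  # (2, 4)
```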
Notable fixes
- Fix Functional model slicing
- Fix for TF XLA compilation error for `SpectralNormalization`
- Refactor `axis` logic across all backends and add support for multiple axes in `expand_dims` and `squeeze` (see the sketch after this list)
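A small sketch of the multi-axis support in `expand_dims` and `squeeze`, assuming tuple-of-ints axes with NumPy-style semantics:

```python
import numpy as np
import keras

x = np.zeros((3, 4))

# Insert new axes at positions 0 and -1 of the output in a single call.
y = keras.ops.expand_dims(x, axis=(0, -1))
print(y.shape)  # (1, 3, 4, 1)

# Remove both size-1 axes at once.
z = keras.ops.squeeze(y, axis=(0, -1))
print(z.shape)  # (3, 4)
```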
New Contributors
- @mykolaskrynnyk made their first contribution in #19190
- @chicham made their first contribution in #19201
- @joycebrum made their first contribution in #19214
- @EtiNL made their first contribution in #19228
Full Changelog: v3.0.5...v3.1.0
Keras 3.0.5
This release brings many bug fixes and performance improvements, new linear algebra ops, and sparse tensor support for the JAX backend.
Highlights
- Add support for sparse tensors with the JAX backend.
- Add support for saving/loading in bfloat16.
- Add linear algebra ops in `keras.ops.linalg` (see the sketch after this list).
- Support nested structures in the `while_loop` op.
- Add `erfinv` op.
- Add `normalize` op.
- Add support for `IterableDataset` to `TorchDataLoaderAdapter`.
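A minimal sketch of the new linear algebra namespace, assuming `keras.ops.linalg.solve` is among the ops it exposes; the 2x2 system is invented for the example:

```python
import numpy as np
import keras

# Hypothetical 2x2 linear system A @ x = b.
a = np.array([[3.0, 1.0], [1.0, 2.0]], dtype="float32")
b = np.array([[9.0], [8.0]], dtype="float32")

# Assumption: solve() is part of the new keras.ops.linalg namespace.
x = keras.ops.linalg.solve(a, b)
print(np.asarray(x))  # approximately [[2.], [3.]]
```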
New Contributors
- @frazane made their first contribution in #19107
- @SamanehSaadat made their first contribution in #19111
- @sitamgithub-MSIT made their first contribution in #19142
- @timotheeMM made their first contribution in #19169
Full Changelog: v3.0.4...v3.0.5
Keras 3.0.4
This is a minor release with improvements to the LoRA API required by the next release of KerasNLP.
Full Changelog: v3.0.3...v3.0.4
Keras 3.0.3 release
This is a minor Keras release.
What's Changed
- Add built-in LoRA (low-rank adaptation) API to all relevant layers (`Dense`, `EinsumDense`, `Embedding`); see the sketch after this list.
- Add `SwapEMAWeights` callback to make it easier to evaluate model metrics using EMA weights during training.
- All `DataAdapters` now create a native iterator for each backend, improving performance.
- Add built-in prefetching for JAX, improving performance.
- The `bfloat16` dtype is now allowed in the global `set_dtype` configuration utility.
- Bug fixes and performance improvements.
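A minimal sketch of the built-in LoRA API on a single layer; the layer width and rank below are arbitrary:

```python
import keras

# Arbitrary layer; it must be built before LoRA can be enabled.
layer = keras.layers.Dense(64)
layer.build((None, 128))

# Freezes the original kernel and adds trainable low-rank update factors.
layer.enable_lora(rank=4)

# The trainable weights are now the bias plus the two LoRA factors.
print([tuple(w.shape) for w in layer.trainable_weights])
```

The same call works on `EinsumDense` and `Embedding` layers, and on layers nested inside a model.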
New Contributors
- @kiraksi made their first contribution in #18977
- @dugujiujian1999 made their first contribution in #19010
- @neo-alex made their first contribution in #18997
- @anas-rz made their first contribution in #19057
Full Changelog: v3.0.2...v3.0.3