
Version 0.22.0

@dhruvesh09 released this 13 May 22:01

Major Features and Improvements

  • Added support for jackknife-based confidence intervals.
  • Added EvalResult.get_metrics(), which extracts slice metrics in dictionary
    format from EvalResults (see the sketch after this list).
  • Added the TFMD Schema as an available argument to computations callbacks.
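
As a rough illustration of the new EvalResult.get_metrics() accessor, the
sketch below loads a previously written evaluation and pulls its slice metrics
out in dictionary form; the output path is a placeholder, and the exact
arguments and return structure of get_metrics() may differ from this sketch.

    import tensorflow_model_analysis as tfma

    # Load an evaluation previously written by a TFMA pipeline
    # ('/path/to/eval_output' is a placeholder, not a real path).
    eval_result = tfma.load_eval_result('/path/to/eval_output')

    # New in 0.22: slice metrics extracted as plain dictionaries
    # instead of the raw EvalResult structures.
    slice_metrics = eval_result.get_metrics()
    print(slice_metrics)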

Bug fixes and other changes

  • Version is now available under tfma.version.VERSION or tfma.__version__
    (see the example after this list).
  • Added auto slicing utilities for significance testing.
  • Fixed an error when a metric and a loss with the same class name are used.
  • Added two new ratios (false discovery rate and false omission rate) to
    Fairness Indicators.
  • MetricValues can now contain both a debug message and a value (rather than
    one or the other).
  • Fixed an issue with displaying ConfusionMatrixPlot in Colab.
  • CalibrationPlot now infers left and right values from the schema, when
    available. This makes the calibration plot useful for regression users.
  • Fixed an issue where metrics were not computed properly when mixed with
    specs containing micro-aggregation computations.
  • Removed batched keys; the same keys are now used for batched and unbatched
    extracts.
  • Added support for visualizing Fairness Indicators in the Fairness
    Indicators TensorBoard Plugin by providing a remote evaluation path in the
    query parameter:
    <tensorboard_url>#fairness_indicators&p.fairness_indicators.evaluation_output_path=<evaluation_path>.
  • Fixed invalid metrics calculations for serving models using the
    classification API with binary outputs.
  • Moved the config writing code to extend tfma.writer.Writer and made it a
    member of default_writers.
  • Updated tfma.ExtractEvaluateAndWriteResults to accept Extracts as input in
    addition to serialized bytes and Arrow RecordBatches (see the sketch after
    this list).
  • Depends on apache-beam[gcp]>=2.20,<3.
  • Depends on pyarrow>=0.16,<1.
  • Depends on tensorflow>=1.15,!=2.0.*,<3.
  • Depends on tensorflow-metadata>=0.22,<0.23.
  • Depends on tfx-bsl>=0.22,<0.23.
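
As noted above, the package version can now be read programmatically; a
minimal check, assuming the package is installed as tensorflow_model_analysis:

    import tensorflow_model_analysis as tfma

    # Both attributes expose the same version string, e.g. '0.22.0'.
    print(tfma.version.VERSION)
    print(tfma.__version__)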
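
For the updated tfma.ExtractEvaluateAndWriteResults, the sketch below shows
the common case of feeding it serialized tf.Examples from an Apache Beam
pipeline; the paths, label key, and slicing spec are illustrative placeholders
rather than anything defined by this release.

    import apache_beam as beam
    import tensorflow_model_analysis as tfma

    # Placeholder config: a single model keyed by 'label' and an overall slice.
    eval_config = tfma.EvalConfig(
        model_specs=[tfma.ModelSpec(label_key='label')],
        slicing_specs=[tfma.SlicingSpec()])

    eval_shared_model = tfma.default_eval_shared_model(
        eval_saved_model_path='/path/to/saved_model', eval_config=eval_config)

    with beam.Pipeline() as pipeline:
      _ = (pipeline
           # Serialized tf.Example bytes; Extracts and Arrow RecordBatches are
           # now accepted as input as well.
           | 'ReadExamples' >> beam.io.ReadFromTFRecord('/path/to/examples')
           | 'ExtractEvaluateAndWrite' >> tfma.ExtractEvaluateAndWriteResults(
               eval_shared_model=eval_shared_model,
               eval_config=eval_config,
               output_path='/path/to/eval_output'))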

Breaking changes

  • Removed desired_batch_size as an option. Large batch failures can be
    handled by serially processing the failed batch, which also acts as a
    deterrent against scaling batch sizes up further. Batch size can be
    controlled via Beam batch size tuning.

Deprecations

  • Deprecated Python 2 support.