
Evidently and custom metrics

Evidently AI is a library that helps analyze machine learning models during testing and monitor them in production.

The tool generates interactive visual reports and JSON profiles from pandas DataFrames or CSV files. There are currently six reports available:

  • Data Drift – detects changes in feature distributions
  • Numerical Target Drift – detects changes in a numerical target and feature behavior
  • Categorical Target Drift – detects changes in a categorical target and feature behavior
  • Regression Model Performance – analyzes the performance and errors of a regression model
  • Classification Model Performance – analyzes the performance and errors of a classification model; works for both binary and multi-class models
  • Probabilistic Classification Model Performance – analyzes the performance of a probabilistic classification model, the quality of model calibration, and model errors; works for both binary and multi-class models
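
For example, the Data Drift report can be generated and exported like this (a minimal sketch using the DataDriftPreset covered below; ref_df and cur_df are assumed pandas DataFrames with the same schema):

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Build and run the Data Drift report
data_drift_report = Report(metrics=[DataDriftPreset()])
data_drift_report.run(reference_data=ref_df, current_data=cur_df)

data_drift_report.save_html("data_drift.html")  # interactive visual report
drift_profile = data_drift_report.json()        # JSON profile of the same results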

Metrics

A metric is a component that evaluates a specific aspect of data or model quality.

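As an illustration, a minimal sketch combining two built-in metrics in one report (ColumnDriftMetric and DatasetMissingValuesMetric; the "age" column and the ref_df/cur_df DataFrames are assumptions):

from evidently.report import Report
from evidently.metrics import ColumnDriftMetric, DatasetMissingValuesMetric

# Each metric evaluates one specific aspect of the data
report = Report(metrics=[
    ColumnDriftMetric(column_name="age"),  # drift in a single column
    DatasetMissingValuesMetric(),          # missing values in the whole dataset
])
report.run(reference_data=ref_df, current_data=cur_df)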

Metric Preset

A metric preset is a pre-built combination of metrics for a specific use case (for example, DataDriftPreset, RegressionPreset, etc.).

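A sketch of a report built from presets (again assuming ref_df and cur_df; presets and individual metrics can be mixed in the same metrics list):

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset, TargetDriftPreset

# Each preset expands into a whole bundle of related metrics
report = Report(metrics=[
    DataDriftPreset(),
    TargetDriftPreset(),
])
report.run(reference_data=ref_df, current_data=cur_df)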

How does it work?

Generate a report on the reference and current datasets:

  • Reference dataset – the base dataset for comparison. This could be a training set or earlier production data.
  • Current dataset – the second dataset, compared against the base one. It may contain the latest production data.
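
A common way to form the two datasets is to split production data by time; a minimal sketch (the file name, column name, and split date are hypothetical):

import pandas as pd

# Hypothetical production log with a timestamp column
df = pd.read_csv("production_data.csv", parse_dates=["timestamp"])

reference = df[df["timestamp"] < "2023-06-01"]   # earlier data as the base
current = df[df["timestamp"] >= "2023-06-01"]    # the latest data to compare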

Implementation of custom metrics

There are times at work when you need to monitor metrics that are not available in Evidently. For example, in telecom the lift metric is very popular: business users love it and understand it well. You can read more about the lift metric here.
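
To make the metric concrete, here is a rough sketch of a lift table (my own illustration, not Evidently's implementation): sort observations by predicted probability, split them into equal-sized bins, and divide the response rate in each bin by the overall response rate.

import pandas as pd

def lift_table(y_true, y_prob, n_bins=10):
    # Sort by predicted probability, highest scores first
    df = pd.DataFrame({"target": y_true, "prob": y_prob})
    df = df.sort_values("prob", ascending=False).reset_index(drop=True)
    # Assign rows to n_bins equal-sized bins; the top scores land in bin 1
    df["bin"] = df.index * n_bins // len(df) + 1
    overall_rate = df["target"].mean()  # base positive rate on the whole sample
    table = df.groupby("bin")["target"].agg(count="size", responses="sum")
    table["response_rate"] = table["responses"] / table["count"]
    table["lift"] = table["response_rate"] / overall_rate  # lift of 2 = twice the base rate
    return table

For a model with predict_proba, you would call it as lift_table(y_test, model.predict_proba(X_test)[:, 1]).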

To add a new metric, you need to do two things:

  • Implement the metric
  • Add a Plotly visualization – optional

The official documentation has an example of implementing a custom metric: https://docs.evidentlyai.com/user-guide/customization/add-custom-metric-or-test
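
A condensed sketch of that pattern (based on the 0.4.x API; the metric name, result field, and threshold logic are my own placeholders, and exact base-class signatures vary between Evidently versions, so check the linked page for the current API):

from typing import List

from evidently.base_metric import InputData, Metric, MetricResult
from evidently.model.widget import BaseWidgetInfo
from evidently.renderers.base_renderer import MetricRenderer, default_renderer
from evidently.renderers.html_widgets import header_text

class ShareAboveZeroResult(MetricResult):
    share: float  # computed value carried from calculate() to the renderer

class ShareAboveZero(Metric[ShareAboveZeroResult]):
    column_name: str

    def __init__(self, column_name: str):
        self.column_name = column_name
        super().__init__()

    def calculate(self, data: InputData) -> ShareAboveZeroResult:
        # Placeholder logic: share of values above zero in the current data
        share = float((data.current_data[self.column_name] > 0).mean())
        return ShareAboveZeroResult(share=share)

@default_renderer(wrap_type=ShareAboveZero)
class ShareAboveZeroRenderer(MetricRenderer):
    def render_html(self, obj: ShareAboveZero) -> List[BaseWidgetInfo]:
        result = obj.get_result()
        return [header_text(label=f"Share above zero: {result.share:.2f}")]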

But we will take a more complicated route and implement the metric directly in Evidently, so that we can then submit a pull request.

Where to add:

  • /src/evidently/calculations – add the metric calculation, depending on the task (classification, regression, etc.)
  • /src/evidently/metrics – add the code of the metric itself, depending on the task
  • /src/evidently/renderers/html_widgets.py – metric visualization
  • /src/evidently/metrics/__init__.py – initialize the metrics
  • /src/evidently/metric_results.py – add result types for the visualization

The metric code can be viewed in the already accepted pull request.

Metric call

# Probabilistic binary classification
from evidently.report import Report
from evidently.metrics import ClassificationLiftCurve, ClassificationLiftTable

classification_report = Report(metrics=[
    ClassificationLiftCurve(),
    ClassificationLiftTable(),
])

classification_report.run(reference_data=bcancer_ref, current_data=bcancer_cur)
classification_report  # displays the report inline in a notebook
[Figures: lift metric table and lift metric curve rendered by the report]



If you liked the article, subscribe to my Telegram channel https://t.me/renat_alimbekov, or you can support me by becoming a Patron!

