H2O Eval Studio

H2O.ai’s h2o_sonar is a Python package for the introspection of machine learning models that enables various facets of Responsible AI for both predictive and generative models. It provides methods for model explanation (to understand, trust, and ensure fairness through bias detection and remediation), model debugging (for accuracy, privacy, and security), model assessment, and documentation.

Predictive AI

This package enables a new, holistic, low-risk, human-interpretable, fair, and trustworthy approach to machine learning. H2O-3, scikit-learn, and Driverless AI models are first-class citizens of the package, but it is designed to accommodate other types of Python models as well. Specifically, the product:

  • Explains many types of models.

  • Assesses the observational fairness of many types of models.

  • Debugs many types of models.

  • Integrates with Enterprise h2oGPT for model insights and problem mitigation plans.

The functionality is available to engineers and data scientists through a Python API and a command-line interface (CLI).

Explainers overview

Supported environments & Python versions:

  • The Driverless AI MOJO runtime (the daimojo library) is supported on Linux only.

  • H2O Model Validation-based explainers are not available on Python 3.11, because the Driverless AI client is not available for that runtime.

Generative AI

h2oGPTe, h2oGPT, H2O LLMOps/MLOps, OpenAI, Microsoft Azure OpenAI, Amazon Bedrock, and Ollama RAG/LLM hosts are first-class citizens of H2O Eval Studio.

Explainers overview

Documentation

Python API

Third-Party Notices

Indices and Tables

Disclaimer

h2o_sonar is in active development and in the alpha stage: its API may be unstable, and some core features may be incomplete or missing.