h2o_sonar package

Subpackages

Submodules

h2o_sonar.config module

class h2o_sonar.config.ConfigItemType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: Enum

Configuration item types.

CONNECTION = 1
LICENSE = 2
class h2o_sonar.config.ConfigKeys

Bases: object

KEY_DESCRIPTION = 'description'
KEY_ENCRYPTED = 'encrypted'
KEY_NAME = 'name'
class h2o_sonar.config.ConnectionConfig(connection_type: str, name: str, description: str, server_url: str = '', server_id: str = '', auth_server_url: str = '', environment_url: str = '', client_id: str = '', realm_name: str = '', token: str = '', token_use_type: str = '', username: str = '', password: str = '', extra_params: Dict = None, key: str = '')

Bases: ConfigKeys

A general-purpose connection configuration that can be used to connect to various H2O.ai and 3rd party products and services.

Overview of the fields which are important for the H2O.ai authentication methods:

Username/password authentication method (Keycloak) requires:

  • authentication server URL: auth_server_url

  • client ID: client_id

  • client secret: token + token_use_type==SECRET

  • username and password (when getting the access token)

Refresh token authentication method (Keycloak) requires:

  • authentication server URL: auth_server_url

  • client ID: client_id

  • refresh token: token + token_use_type==REFRESH_TOKEN

  • realm: realm_name

H2O token provider authentication method requires:

  • environment URL: environment_url

  • refresh token: token + token_use_type==REFRESH_TOKEN
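As an illustration, the refresh token method above can be expressed as ConnectionConfig keyword arguments; the URL, realm, client ID and token values below are hypothetical placeholders:

```python
# Field values for the refresh token (Keycloak) authentication method.
# All values are hypothetical placeholders.
refresh_token_auth = {
    "auth_server_url": "http://localhost:8080/auth",  # Keycloak server URL
    "client_id": "my_client_id",
    "token": "<my-refresh-token>",      # the refresh token itself
    "token_use_type": "REFRESH_TOKEN",  # how the token field is used
    "realm_name": "my_realm",
}

# These keys map onto ConnectionConfig keyword arguments, e.g.:
# conn = h2o_sonar.config.ConnectionConfig(
#     connection_type="DRIVERLESS_AI", name="My DAI",
#     description="", **refresh_token_auth)
```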

ENCRYPTED_FIELDS = ['token', 'password']
KEY_AUTH_SERVER_URL = 'auth_server_url'
KEY_CLIENT_ID = 'client_id'
KEY_ENV_URL = 'environment_url'
KEY_EXTRA_PARAMS = 'extra_params'
KEY_KEY = 'key'
KEY_PASSWORD = 'password'
KEY_REALM_NAME = 'realm_name'
KEY_SERVER_ID = 'server_id'
KEY_SERVER_URL = 'server_url'
KEY_TOKEN = 'token'
KEY_TOKEN_USE_TYPE = 'token_use_type'
KEY_TYPE = 'connection_type'
KEY_USERNAME = 'username'
static from_dict(config_dict: Dict, decrypt: bool = True, encryption_key: str = '') ConnectionConfig

Create a connection configuration from a dictionary.

JSON example with unencrypted fields:

{
  "key": "096ca3c2-4715-11ee-9e2f-10828613f8ad",
  "connection_type": "ML_API",
  "name": "My connection name",
  "description": "My connection description.",
  "server_url": "http://localhost:8080",
  "server_id": "my-model-validation-dai",
  "auth_server_url": "http://localhost:8080/auth",
  "environment_url": "https://cloud-qa.h2o.ai/",
  "realm_name": "my_realm",
  "client_id": "my_client_id",
  "token": "",
  "token_use_type": "",
  "username": "sonaruser",
  "password": "s3cr3tpa33word"
}

JSON example with encrypted fields:

{
  "key": "096ca3c2-4715-11ee-9e2f-10828613f8ad",
  "connection_type": "ML_API",
  "name": "My connection name",
  "description": "My connection description.",
  "server_url": "http://localhost:8080",
  "server_id": "my-model-validation-dai",
  "auth_server_url": "http://localhost:8080/auth",
  "environment_url": "https://cloud-qa.h2o.ai/",
  "realm_name": "my_realm",
  "client_id": "my_client_id",
  "token": {
    "encrypted": "gAAAAABkTfy18iis3ya8nitGi…UwGZ0z5XGBvZxu8URMxE14aJJk="
  },
  "token_use_type": "REFRESH_TOKEN",
  "username": "sonaruser",
  "password": {
    "encrypted": "py18iis3ya8nitG="
  }
}
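A sketch of loading such a dictionary via from_dict(); the import is deferred so the snippet illustrates the call shape without requiring the library to be installed:

```python
# A subset of the documented JSON example, as a plain Python dict.
config_dict = {
    "key": "096ca3c2-4715-11ee-9e2f-10828613f8ad",
    "connection_type": "ML_API",
    "name": "My connection name",
    "description": "My connection description.",
    "server_url": "http://localhost:8080",
    "username": "sonaruser",
    "password": "s3cr3tpa33word",
}

def load_connection(config_dict):
    # Deferred import: this sketch only shows the call shape.
    from h2o_sonar.config import ConnectionConfig
    conn = ConnectionConfig.from_dict(config_dict)
    # to_dict(encrypt=True) would encrypt the fields listed in
    # ENCRYPTED_FIELDS ('token', 'password') on the way out.
    return conn
```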

to_dict(encrypt: bool = True, encryption_key: str = '') Dict
class h2o_sonar.config.ConnectionConfigType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: Enum

Predefined connection types.

AMAZON_BEDROCK = 13
AZURE_OPENAI_CHAT = 11
DRIVERLESS_AI = 1
DRIVERLESS_AI_AIEM = 3
DRIVERLESS_AI_STEAM = 2
H2O_3 = 4
H2O_GPT = 5
H2O_GPT_E = 6
H2O_LLM_OPS = 7
HF_SPACES = 12
OLLAMA = 8
OPENAI_CHAT = 10
OPENAI_RAG = 9
class h2o_sonar.config.EvaluationJudgeConfig(name: str, description: str, judge_type: str, connection: ConnectionConfig, llm_model_name: str = '', collection_id: str = '', key: str = '')

Bases: object

Evaluation judge configuration.

KEY_COLLECTION_ID = 'collection_id'
KEY_CONNECTION = 'connection'
KEY_DESCRIPTION = 'description'
KEY_JUDGE_TYPE = 'judge_type'
KEY_KEY = 'key'
KEY_LLM_MODEL_NAME = 'llm_model_name'
KEY_NAME = 'name'
static from_dict(config_dict: Dict) EvaluationJudgeConfig
to_dict() Dict
class h2o_sonar.config.EvaluationJudgeType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: Enum

azure_openai_llm = 7
custom = 9
h2ogpt = 1
h2ogpte = 2
h2ogpte_llm = 3
h2ollmops = 4
ollama = 8
openai_llm = 6
openai_rag = 5
class h2o_sonar.config.H2o3Config

Bases: object

H2O-3 configuration keys used by H2O Eval Studio.

DEFAULT_AUTO_CLEANUP = True
DEFAULT_AUTO_START = True
DEFAULT_AUTO_STOP = False
DEFAULT_HOST = 'localhost'
DEFAULT_MAX_MEM_SIZE = '4G'
DEFAULT_MIN_MEM_SIZE = '2G'
DEFAULT_PORT = 12349
KEYS = ['h2o_min_mem_size', 'h2o_max_mem_size', 'h2o_host', 'h2o_port', 'h2o_auto_start', 'h2o_auto_cleanup', 'h2o_auto_stop']
KEY_AUTO_CLEANUP = 'h2o_auto_cleanup'
KEY_AUTO_START = 'h2o_auto_start'
KEY_AUTO_STOP = 'h2o_auto_stop'
KEY_HOST = 'h2o_host'
KEY_MAX_MEM_SIZE = 'h2o_max_mem_size'
KEY_MIN_MEM_SIZE = 'h2o_min_mem_size'
KEY_PORT = 'h2o_port'
class h2o_sonar.config.H2oSonarConfig(connections: List[ConnectionConfig] | None = None, licenses: List[LicenseConfig] | None = None, evaluation_judges: List[EvaluationJudgeConfig] | None = None, ignore_env: bool | None = False)

Bases: object

H2O Eval Studio configuration with global configuration items which impact H2O Eval Studio behavior, methods and explainers.

Configuration priority from lowest to highest:

  • default value

  • configuration file

  • environment variable

  • command line argument

  • configuration item

CFG_CREATE_HTML_REPRESENTATIONS = <h2o_sonar.lib.api.commons.Param object>
CFG_CUSTOM_EXPLAINERS = <h2o_sonar.lib.api.commons.Param object>
CFG_DEVICE = <h2o_sonar.lib.api.commons.Param object>
CFG_DO_SAMPLE = <h2o_sonar.lib.api.commons.Param object>
CFG_ENABLE_DATASET_DOWNLOADING = <h2o_sonar.lib.api.commons.Param object>
CFG_FORCE_EVAL_JUDGE = <h2o_sonar.lib.api.commons.Param object>
CFG_H2O_AUTO_CLEANUP = <h2o_sonar.lib.api.commons.Param object>
CFG_H2O_AUTO_START = <h2o_sonar.lib.api.commons.Param object>
CFG_H2O_AUTO_STOP = <h2o_sonar.lib.api.commons.Param object>
CFG_H2O_HOST = <h2o_sonar.lib.api.commons.Param object>
CFG_H2O_MAX_MEM_SIZE = <h2o_sonar.lib.api.commons.Param object>
CFG_H2O_MIN_MEM_SIZE = <h2o_sonar.lib.api.commons.Param object>
CFG_H2O_PORT = <h2o_sonar.lib.api.commons.Param object>
CFG_HTTP_SSL_CERT_VERIFY = <h2o_sonar.lib.api.commons.Param object>
CFG_LOOK_AND_FEEL = <h2o_sonar.lib.api.commons.Param object>
CFG_MODEL_CACHE_DIR = <h2o_sonar.lib.api.commons.Param object>
CFG_MP_START_METHOD = <h2o_sonar.lib.api.commons.Param object>
CFG_NUM_QUANTILES = <h2o_sonar.lib.api.commons.Param object>
CFG_PER_EXPLAINER_LOGGER = <h2o_sonar.lib.api.commons.Param object>
CFG_SAMPLE_SIZE = <h2o_sonar.lib.api.commons.Param object>
ENV_VAR_CFG_PREFIX = 'H2O_SONAR_CFG_'
KEY_CONNECTIONS = 'connections'
KEY_CREATE_HTML_REPRESENTATIONS = 'create_html_representations'
KEY_CUSTOM_EXPLAINERS = 'custom_explainers'
KEY_DEVICE = 'device'
KEY_EVALUATION_JUDGES = 'evaluation_judges'
KEY_FORCE_EVAL_JUDGE = 'force_eval_judge'
KEY_HTTP_SSL_CERT_VERIFY = 'http_ssl_cert_verify'
KEY_LICENSES = 'licenses'
KEY_LOOK_AND_FEEL = 'look_and_feel'
KEY_MODEL_CACHE_DIR = 'model_cache_dir'
KEY_PER_EXPLAINER_LOGGER = 'per_explainer_logger'
MP_START_METHOD_FORK = 'fork'
MP_START_METHOD_FORKSERVER = 'forkserver'
MP_START_METHOD_SPAWN = 'spawn'
VALUE_AUTO = ''
VALUE_CPU = 'cpu'
VALUE_CUDA = 'cuda'
VALUE_GPU = 'gpu'
VALUE_MPS = 'mps'
VALUE_NPU = 'npu'
VALUE_STR_FALSE = 'false'
VALUE_STR_TRUE = 'true'
add_connection(connection_config: ConnectionConfig)
add_evaluation_judge(evaluation_judge_config: EvaluationJudgeConfig)
add_license(license_config: LicenseConfig)
copy()
describe_config_item(config_item_name: str) Param | None
describe_config_items() Dict[str, Param]
env_and_override()

Get configuration from environment variables following the naming convention and override this instance's configuration.
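The env_and_override() convention can be sketched as follows; the H2O_SONAR_CFG_ prefix is documented above, while the upper-casing of the configuration key name is an assumption for illustration:

```python
import os

# Documented prefix for configuration environment variables.
PREFIX = "H2O_SONAR_CFG_"

def env_var_name(config_key: str) -> str:
    # Assumption: variable name = prefix + upper-cased config key.
    return PREFIX + config_key.upper()

# Setting the variable before env_and_override() runs lets it override
# values from the configuration file (see the priority list above).
os.environ[env_var_name("per_explainer_logger")] = "true"
```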

from_toml()
get_connection(connection_key: str, connection_type: str = '') ConnectionConfig | None
get_evaluation_judge(judge_key: str = '') EvaluationJudgeConfig | None
get_license(license_key: str) LicenseConfig | None
property http_ssl_cert_verify
static load(config_path: str, encryption_key: str = '') Dict | None

Load a JSON/TOML file with the configuration items specified in it.

Parameters:
config_path : str

Path to the configuration file.

encryption_key : str

Optional encryption key to decrypt/encrypt sensitive data. If not specified, then the shell environment variable H2O_SONAR_ENCRYPTION_KEY will be used.

Returns:
Dict

Dictionary with the configuration if the file can be found and parsed, None otherwise.

load_and_override(config_path: str, encryption_key: str = '')

Load a JSON/TOML file and override this instance's configuration items with those specified in the configuration file.

Parameters:
config_path : str

Path to the configuration file.

encryption_key : str

Optional encryption key to decrypt encrypted fields in the configuration file. If not specified, then the shell environment variable H2O_SONAR_ENCRYPTION_KEY will be used.

save(config_path: str, config_data: Dict | None = None, encrypt: bool = True, encryption_key: str = '')

Save the configuration to a JSON file.

Parameters:
config_path : str

Path to the configuration file.

config_data : Dict

Optional dictionary with configuration data.

encrypt : bool

Whether to encrypt sensitive data.

encryption_key : str

Optional encryption key to encrypt sensitive data. If not specified, then the shell environment variable H2O_SONAR_ENCRYPTION_KEY will be used.

to_dict(encrypt: bool = True, encryption_key: str = '') Dict
to_toml()
class h2o_sonar.config.LicenseConfig(product: str, name: str, description: str, license: str = '', license_file: str = '', key: str = '')

Bases: ConfigKeys

A product license configuration.

ENCRYPTED_FIELDS = ['license']
KEY_KEY = 'key'
KEY_LICENSE = 'license'
KEY_LICENSE_FILE = 'license_file'
KEY_PRODUCT = 'product'
static from_dict(config_dict: Dict, decrypt: bool = True, encryption_key: str = '') LicenseConfig
to_dict(encrypt: bool = True, encryption_key: str = '') Dict
class h2o_sonar.config.ProductLicenseConfig(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: Enum

Predefined license types.

DRIVERLESS_AI = 1
class h2o_sonar.config.TokenUseType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: Enum

Predefined token use types.

ACCESS_TOKEN = 2
API_KEY = 1
REFRESH_TOKEN = 3
SECRET = 4

h2o_sonar.errors module

exception h2o_sonar.errors.DatasetTooBigError(message, suggestion=None)

Bases: MliError

The dataset is too big to be processed by H2O Eval Studio.

exception h2o_sonar.errors.ExplainerCompatibilityError(message, suggestion=None)

Bases: MliError

Explainer not compatible error.

exception h2o_sonar.errors.InvalidArgumentError(message, suggestion=None)

Bases: MliError

Invalid (CLI) argument error.

exception h2o_sonar.errors.InvalidArgumentValueError(message, suggestion=None)

Bases: MliError

Invalid (CLI) argument value error.

exception h2o_sonar.errors.InvalidDataError(message, suggestion=None)

Bases: MliError

Invalid data error.

exception h2o_sonar.errors.MliError(message, suggestion=None)

Bases: Exception

MLI error.

exception h2o_sonar.errors.MliJsonDeserializationError(message, suggestion=None)

Bases: MliError

MLI JSON deserialization error.

exception h2o_sonar.errors.MliJsonSerializationError(message, suggestion=None)

Bases: MliError

MLI JSON serialization error.

exception h2o_sonar.errors.MliNotFoundError(message, suggestion=None)

Bases: MliError

Entity not found error.

exception h2o_sonar.errors.MliPredictMethodError(message, suggestion=None)

Bases: MliError

Predict method failure.

exception h2o_sonar.errors.MliTypeError(message, suggestion=None)

Bases: MliError

Wrong type error.

exception h2o_sonar.errors.MliUnsupportedDataFormatError(message, suggestion=None)

Bases: MliUnsupportedError

Unsupported data type error.

exception h2o_sonar.errors.MliUnsupportedError(message, suggestion=None)

Bases: MliError

exception h2o_sonar.errors.MliUnsupportedOperationError(message, suggestion=None)

Bases: MliUnsupportedError

Unsupported operation.

exception h2o_sonar.errors.RenderingError(message, suggestion=None)

Bases: MliError

Chart or image rendering error.

exception h2o_sonar.errors.UnknownExplainerError(explainer_id: str)

Bases: MliError

Error raised when an explainer is not known to the explainer container.
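All exceptions above share the (message, suggestion=None) constructor and derive from MliError, so a single except clause can handle any of them. A minimal stand-in mirroring that hierarchy (not the library code itself):

```python
# Stand-in classes mirroring the documented hierarchy and signature.
class MliError(Exception):
    def __init__(self, message, suggestion=None):
        super().__init__(message)
        self.message = message
        self.suggestion = suggestion  # optional remediation hint

class DatasetTooBigError(MliError):
    """The dataset is too big to be processed."""

def process(rows: int):
    # Hypothetical guard illustrating when such an error might be raised.
    if rows > 1_000_000:
        raise DatasetTooBigError(
            "Dataset is too big.", suggestion="Sample the dataset first.")

try:
    process(rows=2_000_000)
except MliError as e:  # catches any error in the hierarchy
    print(f"{e.message} Hint: {e.suggestion}")
```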

h2o_sonar.h2o_sonar_cli module

h2o_sonar.h2o_sonar_cli.main() int

Command line interface (CLI) of the H2O Eval Studio Python library for Responsible AI. See the argparse configuration for more details.

h2o_sonar.interpret module

h2o_sonar.interpret.add_config_item(config_type: str, config_value: Dict | str, h2o_sonar_config_path: str | Path | None = None, encryption_key: str = '')

Add configuration item.

Parameters:
config_type : str

Configuration item type - see config.ConfigItemType for options.

config_value : Union[Dict, str]

Configuration item value represented either as a dictionary or as a string with the JSON serialization of the configuration item. It is expected that the config item is NOT encrypted.

h2o_sonar_config_path : Optional[Union[str, pathlib.Path]]

Path to the H2O Eval Studio configuration file. If None, then the in-memory (current) configuration singleton is modified.

encryption_key : str

Encryption key to be used for encrypting sensitive data written to the configuration. If not provided, the shell environment variable H2O_SONAR_ENCRYPTION_KEY with the encryption key must be set.
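A sketch of adding a connection item; the server URL and token are hypothetical placeholders, and the string form of config_type is an assumption (see config.ConfigItemType for the defined types):

```python
# A connection item as a plain dictionary (NOT encrypted, as expected).
connection_item = {
    "connection_type": "H2O_GPT_E",
    "name": "My h2oGPTe",
    "description": "Connection used as an evaluation judge backend.",
    "server_url": "https://h2ogpte.example.com",  # hypothetical
    "token": "<api-key>",                         # hypothetical
    "token_use_type": "API_KEY",
}

def register_connection(item, config_path=None):
    # Deferred import: this sketch only shows the call shape.
    from h2o_sonar import interpret
    # config_path=None modifies the in-memory configuration singleton;
    # a path persists the item (encrypted) to the configuration file.
    interpret.add_config_item(
        config_type="CONNECTION",
        config_value=item,
        h2o_sonar_config_path=config_path,
    )
```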

h2o_sonar.interpret.describe_explainer(explainer: Type[Explainer] | str) Dict

Get explainer description.

Parameters:
explainer : Union[Type[e8s.Explainer], str]

Explainer to describe.

Returns:
Dict

Dictionary with the explainer name and parameters.

h2o_sonar.interpret.gc()

Free system resources:

  • shuts down process pool(s)

  • runs garbage collector

  • clears temporary files

h2o_sonar.interpret.get_config(h2o_sonar_config_path: str | Path | None = None, encryption_key: str = '') Dict

Get configuration item.

Parameters:
h2o_sonar_config_path : Optional[Union[str, pathlib.Path]]

Path to the H2O Eval Studio configuration file. If None, then the in-memory (current) configuration singleton will be returned.

encryption_key : Optional[str]

Encryption key to be used for decryption of the sensitive data in the configuration. If not set, then the encryption key from the shell environment variable H2O_SONAR_ENCRYPTION_KEY will be used. If no encryption key is provided, then sensitive data will be returned encrypted.

h2o_sonar.interpret.list_explainers(experiment_types: List[str] | None = None, explanation_scopes: List[str] | None = None, model_meta: ExplainableModelMeta | None = None, keywords: List[str] | None = None, explainer_filter: List[FilterEntry] | None = None, container: str | ExplainerContainer | None = None, args_as_json_location: str | Path | None = None, extra_params: Dict | None = None) List[ExplainerDescriptor]

List explainers by supported experiment types, scopes, keywords and other criteria. Filter parameters are combined with a logical AND.

Parameters:
experiment_types : Optional[List[str]]

Filter explainers by supported experiment types - regression, binomial or multinomial.

explanation_scopes : Optional[List[str]]

Filter explainers by supported explanation scopes - local or global.

model_meta : Optional[models.ExplainableModelMeta]

Filter explainers by model metadata.

keywords : Optional[List[str]]

Filter explainers by keywords.

explainer_filter : Optional[List[commons.FilterEntry]]

Filter explainers by a generic filter (forward-compatible filtering). See FilterEntry for supported search types.

container : Optional[Union[str, explainer_container.ExplainerContainer]]

Optional explainer container name or instance to be used to run the interpretation.

args_as_json_location : Optional[Union[str, pathlib.Path]]

Load all positional arguments and keyword arguments from a JSON file. This is useful when input is generated, persisted, repeated and used from the CLI (which doesn't support all the options). IMPORTANT: if this argument is specified, then all other function parameters are ignored.

extra_params : Optional[Dict]

Extra parameters.

Returns:
List[explainers.ExplainerDescriptor]

Explainers compliant with provided filters.
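The AND semantics mean each added filter narrows the result; a sketch of the call shape (the experiment type and scope values come from the parameter descriptions above, while the keyword is a hypothetical placeholder):

```python
# Filters are combined with logical AND: an explainer must support the
# experiment type AND the scope AND carry the keyword to be listed.
filters = dict(
    experiment_types=["regression"],
    explanation_scopes=["global"],
    keywords=["surrogate"],  # hypothetical keyword
)

def find_explainers(**filters):
    # Deferred import: this sketch only shows the call shape.
    from h2o_sonar import interpret
    return interpret.list_explainers(**filters)
```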

h2o_sonar.interpret.list_interpretations(results_location: str | Path | Dict, persistence_type: PersistenceType = PersistenceType.file_system, container: str | ExplainerContainer | None = None, log_level: int = 30, extra_params: Dict | None = None) List[str]

List interpretations in the given results location.

Parameters:
results_location : Union[str, pathlib.Path, Dict]

Location used e.g. by run_interpretation() to store interpretation results - filesystem (path as string or Path), memory (dictionary) or DB. If None, then results are loaded from the current directory.

persistence_type : persist.PersistenceType

Optional choice of the persistence type: file-system (default), in-memory or database. This option does not override the persistence type if a container instance is provided.

container : Optional[Union[str, explainer_container.ExplainerContainer]]

Optional explainer container name (str) or container instance to be used to run the interpretation.

log_level : int

Optional container and explainers log level.

extra_params : Optional[Dict]

Extra parameters.

Returns:
List[str]

Interpretation keys in the given results location.

h2o_sonar.interpret.register_explainer(explainer_class: Type[Explainer], explainer_id: str = '', container: str | ExplainerContainer | None = None, extra_params: Dict | None = None) str

Register explainer.

Parameters:
explainer_class : Type[explainers.Explainer]

Explainer class.

explainer_id : str

Optional custom explainer ID to be used for explainer identification.

container : Optional[Union[str, explainer_container.ExplainerContainer]]

Optional explainer container name or instance to be used to run the interpretation.

extra_params : Optional[Dict]

Extra parameters.

Returns:
str

Explainer ID.

h2o_sonar.interpret.resolve_container(container: str | ExplainerContainer | None = None, results_location: str | Any = '', persistence_api: PersistenceApi | None = None, persistence_type: PersistenceType | None = PersistenceType.file_system, do_setup: bool = True, logger: SonarLogger | None = None, log_level: int | None = None) ExplainerContainer

Get explainer container instance to configure, register explainers and tune it.

Parameters:
container : Optional[Union[str, explainer_container.ExplainerContainer]]

Optional explainer container name (str) or container instance to be used to run the interpretation.

results_location : Optional[Union[str, pathlib.Path, Dict, Any]]

Where to store interpretation results - filesystem (path as string or Path), memory (dictionary) or DB. If None, then results are stored in the current directory.

persistence_api : Optional[persist.PersistenceApi]

Instance of the persistence API allowing the creation of various persistence types (like file-system or DB).

persistence_type : persist.PersistenceType

Optional choice of the persistence type: file-system (default), in-memory or database. This option does not override the persistence type if a container instance is provided.

do_setup : bool

Ensure the explainer container is set up.

logger : Optional[loggers.SonarLogger]

Optional custom logger.

log_level : int

Optional container and explainers log level.

Returns:
explainer_container.ExplainerContainer

Explainer container instance.

h2o_sonar.interpret.run_interpretation(dataset: str | Path | ExplainableDataset | ResourceHandle | Frame | DataFrame | ExplainableDatasetHandle | hmli.H2OFrame, model: str | Path | ExplainableModel | ResourceHandle | ExplainableModelHandle | Any | None = None, models: List[str | Path | ExplainableModel | ResourceHandle | ExplainableModelHandle | Any] | None = None, target_col: str = '', explainers: List[str | ExplainerToRun] | None = None, explainer_keywords: List[str] | None = None, validset: str | Path | ExplainableDataset | ResourceHandle | Frame | DataFrame | ExplainableDatasetHandle | hmli.H2OFrame | None = None, testset: str | Path | ExplainableDataset | ResourceHandle | Frame | DataFrame | ExplainableDatasetHandle | hmli.H2OFrame | None = None, use_raw_features: bool = True, used_features: List | None = None, weight_col: str = '', prediction_col: str = '', drop_cols: List | None = None, sample_num_rows: int | None = 0, sampler: DatasetSampler | None = None, container: str | ExplainerContainer | None = None, results_location: str | Path | Dict | Any | None = None, results_formats: List[str] | None = None, persistence_type: PersistenceType = PersistenceType.file_system, run_asynchronously: bool = False, run_explainers_in_parallel: bool = False, progress_callback: AbstractProgressCallbackContext | None = None, logger: SonarLogger | None = None, log_level: int = 30, args_as_json_location: str | Path | None = None, upload_to: str | ConnectionConfig = None, key: str = '', extra_params: Dict | None = None) Interpretation

Run interpretation.

Parameters:
dataset : Union[str, Path, ExplainableDataset, datatable.Frame, hmli.H2OFrame, Any]

Dataset source: explainable dataset instance, datatable frame, string (expected to be a path to a CSV, .jay or any other file type supported by datatable) or dictionary (used to construct a frame).

model : Union[str, Path, ExplainableModel, Any]

Path to a model (str, Path), explainable model (ExplainableModel) or an instance of a 3rd party model (like scikit-learn) to interpret.

models : List[Union[str, Path, ExplainableModel, Any]]

Paths to models (str, Path), explainable models (ExplainableModel) or instances of 3rd party models (like scikit-learn) to interpret.

target_col : str

Target column name - must be a valid dataset column name.

explainers : Optional[List[Union[str, commons.ExplainerToRun]]]

Explainer IDs to run within the interpretation, or ExplainerToRun instances with explainer parameters. If None or an empty list, all compatible default explainers are run.

explainer_keywords : Optional[List[str]]

Run compatible explainers that have the given keywords (combined with AND). This setting is used only if the explainers parameter is an empty list (or None).

validset : Optional[Union[str, Path, ExplainableDataset, hmli.H2OFrame, Any]]

Optional path to the validation dataset (str, Path) or a datatable Frame instance.

testset : Optional[Union[str, Path, ExplainableDataset, hmli.H2OFrame, Any]]

Optional path to the test dataset (str, Path) or a datatable Frame instance.

use_raw_features : bool

True to use original features (default), False to force the use of transformed features in surrogate models.

used_features : Optional[List]

Optional parameter specifying the features (dataset columns) used by the model. This parameter is used when an instance of the model (not ExplainableModel) is provided by the user - therefore the ExplainableModel's metadata are not available.

weight_col : str

Name of the weight column to be used by explainers.

prediction_col : str

Name of the predictions column - in case of a 3rd party model (standalone MLI).

drop_cols : Optional[List]

List of the columns to drop from the interpretation, i.e. column names which should not be explained.

sample_num_rows : Optional[int]

If None, then automatically sample based on the dataset and RAM size. If > 0, then sample the dataset to sample_num_rows rows. If == 0, then do NOT sample.

sampler : Optional[DatasetSampler]

Sampling method (implementation) to be used - see the h2o_sonar.utils.sampling module (documentation) for available sampling methods. Use a sampler instance to use a specific sampling method.

container : Optional[Union[str, explainer_container.ExplainerContainer]]

Optional explainer container name (str) or container instance to be used to run the interpretation.

results_location : Optional[Union[str, pathlib.Path, Dict, Any]]

Where to store interpretation results - filesystem (path as string or Path), memory (dictionary) or DB. If None, then results are stored in the current directory.

results_formats : Optional[List[str]]

Optional list of the result formats (MIME types) to be generated. If None, then HTML and JSON reports are created. Supported formats: MIME_PDF, MIME_HTML and MIME_JSON.

persistence_type : persist.PersistenceType

Optional choice of the persistence type: file-system (default), in-memory or database. This option does not override the persistence type if a container instance is provided.

run_asynchronously : bool

True to run the interpretation asynchronously - the interpretation is run synchronously by default (False).

run_explainers_in_parallel : bool

True to run explainers in parallel - explainers are run sequentially by default.

progress_callback : Optional[progress.AbstractProgressCallbackContext]

Optional progress callback context which is stacked atop the default logging callback in the h2o_sonar.lib.api.interpretations::Interpretation constructor.

logger : Optional[loggers.SonarLogger]

Optional custom logger which implements the loggers.SonarLogger interface. If a logger is provided, then log_level is ignored.

log_level : int

Optional container and explainers log level. If a logger is provided, then log_level is ignored.

args_as_json_location : Optional[Union[str, pathlib.Path]]

Load all positional arguments and keyword arguments from a JSON file. This is useful when input is generated, persisted, repeated and used from the CLI (which doesn't support all the options). IMPORTANT: if this argument is specified, then all other function parameters are ignored.

upload_to : Union[str, config.ConnectionConfig]

Upload the interpretation report to h2oGPT Enterprise in order to chat with the report.

key : str

Custom interpretation key which must be a valid UUID4 string.

extra_params : Optional[Dict]

Extra parameters.

Returns:
interpretations.Interpretation

Interpretation instance with the explainer results (references).
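A minimal call sketch; the dataset path, model path and target column are hypothetical placeholders, and the import is deferred so the snippet shows only the call shape:

```python
# Minimal arguments: dataset, model and target column. With explainers
# unset, all compatible default explainers would run.
kwargs = dict(
    dataset="train.csv",           # any file type supported by datatable
    model="model.pickle",          # or an ExplainableModel / sklearn model
    target_col="label",
    results_location="./results",  # file-system persistence (default)
    log_level=30,                  # WARNING
)

def explain(**kwargs):
    # Deferred import: this sketch only shows the call shape.
    from h2o_sonar import interpret
    # Returns an Interpretation with references to explainer results.
    return interpret.run_interpretation(**kwargs)
```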

h2o_sonar.interpret.unregister_explainer(explainer_id: str, container: str | ExplainerContainer | None = None, extra_params: Dict | None = None) str

Unregister explainer.

Parameters:
explainer_id : str

ID of the custom explainer to be unregistered.

container : Optional[Union[str, explainer_container.ExplainerContainer]]

Optional explainer container name or instance to be used to run the interpretation.

extra_params : Optional[Dict]

Extra parameters.

Returns:
str

ID of the unregistered explainer, or "" if no explainer was unregistered.

h2o_sonar.interpret.upload_interpretation(interpretation_result: Interpretation | InterpretationResult | str | Path, connection: str | ConnectionConfig, collection_name: str = '', extra_params: Dict | None = None) Tuple[str, str]

Upload the interpretation to an LLM system like h2oGPTE in order to chat with the interpretation report.

Returns:
Tuple[str, str]

h2oGPT Enterprise collection ID and URL.

h2o_sonar.loggers module

class h2o_sonar.loggers.SonarFileLogger(logger_name: str, log_file: str, log_level=30)

Bases: SonarLogger

File logger saves log messages to log files.

FORMATTER = <logging.Formatter object>
data(data, *args, **kwargs)
debug(msg, *args, **kwargs)
error(msg, *args, **kwargs)
info(msg, *args, **kwargs)
warning(msg, *args, **kwargs)
class h2o_sonar.loggers.SonarLogger

Bases: ABC

Abstract logger base class to be extended by runtime specific logging implementations.

FILE_NAME_H2O_SONAR_LOG = 'h2o-sonar.log'
data(data, *args, **kwargs)
debug(msg, *args, **kwargs)
error(msg, *args, **kwargs)
info(msg, *args, **kwargs)
warning(msg, *args, **kwargs)
class h2o_sonar.loggers.SonarPrintLogger

Bases: SonarLogger

Print logger that prints log messages using the print() function.

data(data, *args, **kwargs)
debug(msg, *args, **kwargs)
error(msg, *args, **kwargs)
info(msg, *args, **kwargs)
warning(msg, *args, **kwargs)
h2o_sonar.loggers.debug(msg, *args, **kwargs)
h2o_sonar.loggers.error(msg, *args, **kwargs)
h2o_sonar.loggers.fatal(msg, *args, **kwargs)
h2o_sonar.loggers.get_level()
h2o_sonar.loggers.info(msg, *args, **kwargs)
h2o_sonar.loggers.log(level, msg, *args, **kwargs)
h2o_sonar.loggers.setLevel(level=0)
h2o_sonar.loggers.warn(msg, *args, **kwargs)
h2o_sonar.loggers.warning(msg, *args, **kwargs)

h2o_sonar.version module

Module contents