h2o_sonar.explainers package

Submodules

h2o_sonar.explainers.adversarial_similarity_explainer module

class h2o_sonar.explainers.adversarial_similarity_explainer.AdversarialSimilarityExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Adversarial similarity explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/adversarial-similarity/adversarial-similarity

CLASS_ONE_AND_ONLY = 'global'
DEFAULT_DROP_COLS = []
DEFAULT_SHAPLEY_VALUES = False
PARAM_DROP_COLS = 'drop_cols'
PARAM_SHAPLEY_VALUES = 'shapley_values'
PARAM_WORKER = 'worker_connection_key'
PLOT_TITLE = 'Similar-to-Secondary Probabilities Histogram'
class Result(persistence: ExplainerPersistence, explainer_id: str = '', h2o_sonar_config=None)

Bases: ExplainerResult

data() Frame
plot(*, file_path: str = '')
summary(**kwargs) dict
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **e_params)

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

get_result() Result
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
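The idea behind the adversarial-similarity test can be sketched in plain Python: label rows of the primary dataset 0 and rows of the secondary dataset 1, train an "adversarial" classifier to tell them apart, and inspect the distribution of the predicted similar-to-secondary probabilities (cf. PLOT_TITLE). The sketch below uses a trivial distance-to-mean classifier and invented toy data; it illustrates the concept only — the explainer itself delegates the real computation to the H2O Model Validation test:

```python
from statistics import mean

# Hypothetical single-feature samples from a "primary" and a "secondary" dataset.
primary = [0.1, 0.2, 0.15, 0.25, 0.3]
secondary = [0.9, 1.0, 0.95, 1.1, 1.05]
rows = [(x, 0) for x in primary] + [(x, 1) for x in secondary]

# Trivial "adversarial" classifier: score by proximity to each class mean.
m_primary, m_secondary = mean(primary), mean(secondary)

def prob_secondary(x: float) -> float:
    """P(row comes from the secondary dataset), via inverse-distance weighting."""
    d0, d1 = abs(x - m_primary), abs(x - m_secondary)
    return d0 / (d0 + d1)

probs = [prob_secondary(x) for x, _ in rows]
# Well-separated datasets push probabilities toward 0 and 1, i.e. a bimodal
# "Similar-to-Secondary Probabilities Histogram"; similar datasets cluster near 0.5.
accuracy = mean(1.0 if (p > 0.5) == bool(label) else 0.0
                for (_, label), p in zip(rows, probs))
print(accuracy)  # 1.0 -- the classifier separates the two datasets perfectly
```

High accuracy (a bimodal histogram) signals that the two datasets are easy to distinguish, i.e. not similar.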

h2o_sonar.explainers.backtesting_explainer module

class h2o_sonar.explainers.backtesting_explainer.BacktestingExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Backtesting explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/backtesting/backtesting

DEFAULT_CUSTOM_DATES = None
DEFAULT_FORECAST_PERIOD_UNIT = None
DEFAULT_NUMBER_OF_FORECAST_PERIODS = None
DEFAULT_NUMBER_OF_SPLITS = 2
DEFAULT_NUMBER_OF_TRAINING_PERIODS = None
DEFAULT_SPLIT_TYPE = 'auto'
DEFAULT_TIME_COLUMN = ''
DEFAULT_TRAINING_PERIOD_UNIT = None
OPT_SPLIT_TYPE_AUTO = 'auto'
OPT_SPLIT_TYPE_CUSTOM = 'custom'
PARAM_CUSTOM_DATES = 'custom_dates'
PARAM_FORECAST_PERIOD_UNIT = 'forecast_period_unit'
PARAM_NUMBER_OF_FORECAST_PERIODS = 'number_of_forecast_period'
PARAM_NUMBER_OF_SPLITS = 'number_of_splits'
PARAM_NUMBER_OF_TRAINING_PERIODS = 'number_of_training_period'
PARAM_PLOT_TYPE = 'plot_type'
PARAM_SPLIT_TYPE = 'split_type'
PARAM_TIME_COLUMN = 'time_col'
PARAM_TRAINING_PERIOD_UNIT = 'training_period_unit'
PARAM_WORKER = 'worker_connection_key'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X, y=None, explanations_types: list = None, **e_params)

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

get_result() Data3dResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
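Backtesting repeatedly scores the model on historical windows: number_of_splits (DEFAULT_NUMBER_OF_SPLITS = 2) forecast windows are rolled back from the end of the time series, training on the data before each window. A minimal sketch of such rolling splits (function name and layout hypothetical, not the test's actual implementation):

```python
def backtesting_splits(periods, number_of_splits=2, forecast_periods=1):
    """Roll the forecast window back from the end of the series; split i
    trains on everything before its forecast window."""
    splits = []
    for i in range(number_of_splits):
        test_end = len(periods) - i * forecast_periods
        test_start = test_end - forecast_periods
        splits.append((periods[:test_start], periods[test_start:test_end]))
    return splits

months = ["2023-01", "2023-02", "2023-03", "2023-04", "2023-05", "2023-06"]
for train, test in backtesting_splits(months):
    print(train, "->", test)
# split 0 forecasts '2023-06', split 1 forecasts '2023-05'
```

With split_type='custom', the forecast windows would instead be taken from the user-supplied custom_dates.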

h2o_sonar.explainers.calibration_score_explainer module

class h2o_sonar.explainers.calibration_score_explainer.CalibrationScoreExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Calibration score explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/calibration-score/calibration-score

COL_PROB_PRED = 'prob_pred'
COL_PROB_TRUE = 'prob_true'
COL_SCORE = 'Score'
COL_TARGET = 'Target'
DEFAULT_BIN_STRATEGY = 'uniform'
DEFAULT_NUMBER_OF_BINS = 10
KEY_BRIER_SCORE = 'brier_score'
KEY_CALIBRATION_CURVE = 'calibration_curve'
KEY_CLASSES_LABELS = 'classes_labels'
KEY_CLASSES_LEGENDS = 'classes_legends'
KEY_DATA = 'data'
KEY_PLOTS_PATHS = 'plots_paths'
OPT_BIN_STRATEGY_QUANTILE = 'quantile'
OPT_BIN_STRATEGY_UNIFORM = 'uniform'
PARAM_BIN_STRATEGY = 'bin_strategy'
PARAM_NUMBER_OF_BINS = 'number_of_bins'
PARAM_WORKER = 'worker_connection_key'
RESULT_FILE_JSON = 'mv_result.json'
class Result(persistence: ExplainerPersistence, explainer_id: str = '', h2o_sonar_config=None)

Bases: ExplainerResult

data() dict
plot(*, clazz: str = '', file_path: str = '')
summary(**kwargs) dict
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **e_params)

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

get_result() Result
normalize_to_gom(mv_results) dict
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
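The result quantities can be reproduced in a few lines: prob_true and prob_pred (COL_PROB_TRUE, COL_PROB_PRED) form the calibration curve over probability bins, and the Brier score (KEY_BRIER_SCORE) is the mean squared error of the predicted probabilities. A self-contained sketch for bin_strategy='uniform' with invented toy data — an illustration of the quantities, not the explainer's code:

```python
def calibration_curve(y_true, y_prob, number_of_bins=10):
    """Uniform-width bins (bin_strategy='uniform'): per non-empty bin, the
    observed positive rate (prob_true) vs the mean predicted probability
    (prob_pred)."""
    prob_true, prob_pred = [], []
    for b in range(number_of_bins):
        left, right = b / number_of_bins, (b + 1) / number_of_bins
        in_bin = [(t, p) for t, p in zip(y_true, y_prob)
                  if left <= p < right or (b == number_of_bins - 1 and p == 1.0)]
        if in_bin:
            prob_true.append(sum(t for t, _ in in_bin) / len(in_bin))
            prob_pred.append(sum(p for _, p in in_bin) / len(in_bin))
    return prob_true, prob_pred

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probability and outcome."""
    return sum((p - t) ** 2 for t, p in zip(y_true, y_prob)) / len(y_true)

y_true = [0, 0, 1, 1]
y_prob = [0.1, 0.4, 0.35, 0.8]
prob_true, prob_pred = calibration_curve(y_true, y_prob, number_of_bins=2)
print(prob_true, prob_pred)
print(brier_score(y_true, y_prob))  # 0.158125
```

A perfectly calibrated model yields prob_true ≈ prob_pred in every bin; bin_strategy='quantile' would instead choose bin edges so each bin holds an equal share of predictions.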

h2o_sonar.explainers.dataset_and_model_insights_explainer module

class h2o_sonar.explainers.dataset_and_model_insights_explainer.DatasetAndModelInsightsExplainer

Bases: Explainer

Dataset and model insights explainer.

explain(X: Frame, y=None, explanations_types=None, **kwargs) list

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

explain_problems() list[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s): calculate them, or retrieve the problems identified and persisted by the explain() method.

Returns:
list[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() TemplateResult
setup(model, persistence, **e_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.dia_explainer module

class h2o_sonar.explainers.dia_explainer.DiaArgs(parameters: list[ExplainerParam] = None)

Bases: ExplainerArgs

resolve_params(explainer_params: dict | None = None, erase: list[str] | None = None)

Resolve the explainer’s self.parameters (arguments) into self.args.

Parameters:
explainer_params: dict | None

Explainer parameters as dictionary.

class h2o_sonar.explainers.dia_explainer.DiaExplainer

Bases: Explainer

Disparate Impact Analysis (DIA) explainer for explainable models.

PARAM_CUT_OFF = 'cut_off'
PARAM_FAST_APPROX = 'fast_approx'
PARAM_FEATURES = 'dia_cols'
PARAM_FEATURE_NAME = 'feature_name'
PARAM_FEATURE_SUMMARIES = 'feature_summaries'
PARAM_MAXIMIZE_METRIC = 'maximize_metric'
PARAM_MAX_CARD = 'max_cardinality'
PARAM_MIN_CARD = 'min_cardinality'
PARAM_NAME = 'name'
PARAM_NUM_CARD = 'num_card'
PARAM_SAMPLE_SIZE = 'sample_size'
PARAM_USE_HOLDOUT_PREDS = 'use_holdout_preds'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **kwargs)

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

static get_entry_constants()
get_max_metric()
get_result() DiaResult
static is_enabled() bool

Return True if the explainer is enabled; otherwise return False, which causes the explainer to be completely ignored (unlisted, not loaded, not executed).

setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
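Disparate impact analysis compares model outcome rates across levels of a protected feature (PARAM_FEATURES) after a cut-off (PARAM_CUT_OFF) turns scores into decisions. One common summary is the adverse-impact ratio against a reference group, flagged under the four-fifths rule. An illustrative sketch with invented data — the metric choice here is illustrative, not necessarily the explainer's exact computation:

```python
def adverse_impact_ratios(outcomes_by_group: dict, reference: str) -> dict:
    """Ratio of each group's favorable-outcome rate to the reference group's
    rate; the four-fifths rule flags ratios below 0.8."""
    rate = {g: sum(v) / len(v) for g, v in outcomes_by_group.items()}
    return {g: rate[g] / rate[reference] for g in outcomes_by_group}

# Hypothetical binary decisions (1 = favorable) after applying a cut-off
# to model scores, grouped by a protected feature's levels.
outcomes = {"group_a": [1, 1, 1, 0], "group_b": [1, 0, 0, 0]}
ratios = adverse_impact_ratios(outcomes, reference="group_a")
print(ratios)  # group_b's ratio is ~0.33 -> flagged under the four-fifths rule
```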

h2o_sonar.explainers.drift_explainer module

class h2o_sonar.explainers.drift_explainer.DriftDetectionExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Drift detection explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/drift-detection/drift-detection

DEFAULT_DRIFT_THRESHOLD = 0.1
DEFAULT_DROP_COLS = []
PARAM_DRIFT_THRESHOLD = 'drift_threshold'
PARAM_DROP_COLS = 'drop_cols'
PARAM_WORKER = 'worker_connection_key'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **e_params)

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
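DEFAULT_DRIFT_THRESHOLD = 0.1 suggests a per-feature drift score compared against a threshold. The Population Stability Index (PSI) is one common such score; the sketch below is illustrative of the idea — the explainer's actual metric is not documented here, and it delegates the computation to the H2O Model Validation test:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples over shared
    uniform bins; a common per-feature drift score (larger = more drift)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # degenerate case: all values identical

    def frac(sample, b):
        left, right = lo + b * width, lo + (b + 1) * width
        n = sum(1 for x in sample
                if left <= x < right or (b == bins - 1 and x >= right))
        return max(n / len(sample), 1e-6)  # clip to avoid log(0)

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))

reference = [i / 100 for i in range(100)]      # training-time distribution
shifted = [0.5 + i / 200 for i in range(100)]  # scoring-time distribution
print(psi(reference, reference))     # 0.0 -> no drift
print(psi(reference, shifted) > 0.1) # True -> exceeds a 0.1-style threshold
```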

h2o_sonar.explainers.dt_surrogate_explainer module

class h2o_sonar.explainers.dt_surrogate_explainer.DecisionTreeConstants

Bases: object

CAT_ENCODING_DICT = {'AUTO': 'auto', 'Enum Limited': 'enumlimited', 'Label Encoder': 'labelencoder', 'One Hot Encoding': 'onehotexplicit', 'Sort by Response': 'sortbyresponse'}
CAT_ENCODING_LIST = ['AUTO', 'One Hot Encoding', 'Enum Limited', 'Sort by Response', 'Label Encoder']
COLUMN_DAI_PREDICT = 'model_pred'
COLUMN_DT_PATH = 'T1'
COLUMN_MODEL_PRED = 'model_pred'
COLUMN_ORIG_PRED = 'orig_pred'
DEFAULT_NFOLDS = 0
DEFAULT_TREE_DEPTH = 3
DIR_DT_SURROGATE = 'dt_surrogate_rules'
ENC_AUTO = 'AUTO'
ENC_ENUM_LTD = 'Enum Limited'
ENC_LE = 'Label Encoder'
ENC_ONE_HOT = 'One Hot Encoding'
ENC_SORT = 'Sort by Response'
FILE_DEFAULT_DETAILS = 'dtModel.json'
FILE_DEFAULT_TREE = 'dtSurrogate.json'
FILE_DRF_VAR_IMP = 'varImp.json'
FILE_METRICS_DT = 'dtModel.json'
FILE_METRICS_DT_MULTI_PREFIX = 'dtModel_'
FILE_WORK_DT = 'dtSurrogate.json'
FILE_WORK_DT_MULTI_PREFIX = 'dtSurrogate_'
H2O_ENCODING_NAMES = ['auto', 'onehotexplicit', 'enumlimited', 'sortbyresponse', 'labelencoder']
H2O_ENC_AUTO = 'auto'
H2O_ENC_ENUM_LTD = 'enumlimited'
H2O_ENC_LE = 'labelencoder'
H2O_ENC_ONE_HOT = 'onehotexplicit'
H2O_ENC_SORT = 'sortbyresponse'
KEY_LABELS_MAP = 'labels_2_pd_map'
SEED = 12345
class h2o_sonar.explainers.dt_surrogate_explainer.DecisionTreeSurrogateExplainer

Bases: Explainer, DecisionTreeConstants

Surrogate decision tree explainer.

PARAM_CAT_ENCODING = 'categorical_encoding'
PARAM_DEBUG_RESIDUALS = 'debug_residuals'
PARAM_DEBUG_RESIDUALS_CLASS = 'debug_residuals_class'
PARAM_DT_DEPTH = 'dt_tree_depth'
PARAM_NFOLDS = 'nfolds'
PARAM_QBIN_COLS = 'qbin_cols'
PARAM_QBIN_COUNT = 'qbin_count'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **explainer_params) list

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

explain_local(X: Frame, y: Frame = None, **extra_params) str

DT surrogate local explanation requires SYNCHRONOUS execution.

Parameters:
X : datatable.Frame

Dataset frame.

y : datatable.Frame, optional

Labels.

extra_params : dict

Extra parameters including:

  • persistence: Persistence object initialized for explainer/MLI run

  • explanation_type: Explanation type ~ explainer ID

  • row: Local explanation to be provided for given row

  • explanation_filter: Required filter entries (class)

Returns:
str

JSON representation of the local explanation.

Notes

JSON DT representation:

{
    data: [
        {
          key: str,
          name: str,
          parent: str,
          edge_in: str,
          edge_weight: num,
          leaf_path: bool,
        }+
    ]
}
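A hypothetical instance of the schema above, together with a small helper that follows the leaf_path flags from the root node down to the leaf (feature names, thresholds, and weights invented):

```python
import json

# Hypothetical local explanation matching the schema above.
dt_json = json.dumps({"data": [
    {"key": "0", "name": "LIMIT_BAL < 150000", "parent": "", "edge_in": "",
     "edge_weight": 1.0, "leaf_path": False},
    {"key": "1", "name": "AGE < 40", "parent": "0", "edge_in": "yes",
     "edge_weight": 0.6, "leaf_path": True},
    {"key": "2", "name": "0.82", "parent": "1", "edge_in": "no",
     "edge_weight": 0.25, "leaf_path": True},
]})

def leaf_path(tree: dict) -> list:
    """Follow the leaf_path=True nodes from the root node (key '0') down
    to the leaf the explained row falls into."""
    by_parent = {n["parent"]: n for n in tree["data"] if n["leaf_path"]}
    path, key = [], "0"
    while key in by_parent:
        node = by_parent[key]
        path.append(node["name"])
        key = node["key"]
    return path

print(leaf_path(json.loads(dt_json)))  # ['AGE < 40', '0.82']
```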
get_result() DtResult
static is_enabled() bool

Return True if the explainer is enabled; otherwise return False, which causes the explainer to be completely ignored (unlisted, not loaded, not executed).

setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.fi_kernel_shap_explainer module

class h2o_sonar.explainers.fi_kernel_shap_explainer.KernelShapFeatureImportanceExplainer

Bases: Explainer, AbstractFeatureImportanceExplainer

Kernel SHAP based original feature importance explainer.

OPT_BIN_1_CLASS = True
PARAM_FAST_APPROX = 'fast_approx'
PARAM_L1 = 'L1'
PARAM_LEAKAGE_WARN_THRESHOLD = 'leakage_warning_threshold'
PARAM_MAXRUNTIME = 'max runtime'
PARAM_NSAMPLE = 'nsample'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame = None, explanations_types: list = None, **kwargs) list

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

explain_problems() list[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s): calculate them, or retrieve the problems identified and persisted by the explain() method.

Returns:
list[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
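Kernel SHAP estimates Shapley values by fitting a weighted linear model over sampled feature coalitions (cf. PARAM_NSAMPLE and PARAM_L1 for sampling and L1 regularization). For a small number of features, the quantity being approximated can be computed exactly by enumeration. A self-contained sketch of that exact target (toy model and values hypothetical):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating feature coalitions; features
    outside a coalition are set to their baseline value. Kernel SHAP
    approximates this weighted average by sampling coalitions instead."""
    m = len(x)

    def value(subset):
        return f([x[i] if i in subset else baseline[i] for i in range(m)])

    phi = []
    for i in range(m):
        others = [j for j in range(m) if j != i]
        contrib = 0.0
        for size in range(m):
            for s in combinations(others, size):
                weight = factorial(size) * factorial(m - size - 1) / factorial(m)
                contrib += weight * (value(set(s) | {i}) - value(set(s)))
        phi.append(contrib)
    return phi

model = lambda z: 2 * z[0] + 3 * z[1]  # toy linear model
phi = shapley_values(model, x=[1.0, 2.0], baseline=[0.0, 0.0])
print(phi)  # [2.0, 6.0] -- for a linear model, phi_i = coef_i * (x_i - baseline_i)
```

By the efficiency property, the values sum to the difference between the prediction at x and at the baseline.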

h2o_sonar.explainers.fi_naive_shapley_explainer module

class h2o_sonar.explainers.fi_naive_shapley_explainer.NaiveShapleyMojoFeatureImportanceExplainer

Bases: Explainer, AbstractFeatureImportanceExplainer

Naive Shapley (MOJO) feature importance explainer.

DEFAULT_FAST_APPROX = False
OPT_BIN_1_CLASS = True
PARAM_FAST_APPROX = 'fast_approx_contribs'
PARAM_LEAKAGE_WARN_THRESHOLD = 'leakage_warning_threshold'
PARAM_SAMPLE_SIZE = 'sample_size'
PREFIX_CONTRIB = 'contrib_'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame = None, explanations_types: list = None, **kwargs) list

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

explain_local(X: Frame, y: Frame = None, **extra_params) list

Execute the explainer to calculate on-demand local explanations. This method is expected to be overridden if the explainer doesn’t pre-compute local explanations. The default implementation returns the local instance explanations computed by the explain() method.

X :

Data frame.

y :

Labels.

Returns:
list[Explanation]:

Explanations.

explain_problems() list[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s): calculate them, or retrieve the problems identified and persisted by the explain() method.

Returns:
list[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, explain_original_features: bool = True, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.friedman_h_statistic_explainer module

class h2o_sonar.explainers.friedman_h_statistic_explainer.FriedmanHStatisticExplainer

Bases: Explainer

Friedman’s H-statistic explainer.

The explainer provides an implementation of Friedman’s H-statistic for feature interactions. The statistic is 0 when there is no interaction at all, and 1 when all the variance of the joint partial dependence is due to the interaction, i.e., none of it is explained by the sum of the individual partial dependence functions.
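The statistic can be illustrated numerically: for an additive function, the centered joint partial dependence equals the sum of the centered one-way partial dependences, so H² is 0; a multiplicative term makes it positive. A self-contained sketch using a full grid as the "dataset" (a simplification for illustration, not the explainer's implementation):

```python
from itertools import product
from statistics import mean

def h_statistic(f, grid_j, grid_k):
    """Two-feature Friedman H-squared: the share of the variance of the
    joint partial dependence NOT explained by the sum of the one-way
    partial dependences. The 'dataset' here is simply the full grid."""
    pd_jk = {(a, b): f(a, b) for a, b in product(grid_j, grid_k)}
    pd_j = {a: mean(f(a, b) for b in grid_k) for a in grid_j}
    pd_k = {b: mean(f(a, b) for a in grid_j) for b in grid_k}
    # Center every partial-dependence function before comparing.
    c_jk = mean(pd_jk.values())
    c_j, c_k = mean(pd_j.values()), mean(pd_k.values())
    num = sum((pd_jk[a, b] - c_jk - (pd_j[a] - c_j) - (pd_k[b] - c_k)) ** 2
              for a, b in pd_jk)
    den = sum((v - c_jk) ** 2 for v in pd_jk.values())
    return num / den

grid = [0.0, 1.0, 2.0]
print(h_statistic(lambda a, b: a + b, grid, grid))  # 0.0: purely additive
print(h_statistic(lambda a, b: a * b, grid, grid))  # 0.25: interaction present
```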

DEFAULT_FEATURES_NUMBER = 4
DEFAULT_GRID_RESOLUTION = 3
DEFAULT_SAMPLE_SIZE = 25000
FILE_RAW_RESULT = 'h_statistic.json'
PARAM_FEATURES = 'features'
PARAM_FEATURES_NUMBER = 'features_number'
PARAM_GRID_RESOLUTION = 'grid_resolution'
PARAM_SAMPLE_SIZE = 'sample_size'
class Result(persistence: ExplainerPersistence, explainer_id: str = '', h2o_sonar_config=None)

Bases: ExplainerResult

COL_FEATURE = 'feature'
COL_INTERACTIONS = 'interactions'
data(*, clazz: str | None = None) Frame
plot(*, clazz=None, file_path: str = '')
summary(**kwargs) dict
UPDATE_PARAM_NUMCAT_OVERRIDE = 'numcat_override'
check_compatibility(params: CommonInterpretationParams | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain the given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X, y=None, explanations_types: list = None, **e_params) list

Calculate and persist global explanations, local explanations, or both for the given dataset. Child classes override this method (this class provides the implementation); it is responsible for the calculation, construction, and persistence of explanations.

X : datatable.Frame

Dataset frame.

y :

Labels.

explanations_types: list[Type[Explanation]]

Optional list of explanation types to build. All supported types are built if an empty list or None is provided; get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

get_result() FeatureImportanceResult
static normalize_to_md_list(h_statistic_result: dict) str
static normalize_to_sorted_list(h_statistic_result: dict, esc_char: str = "'") list
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_api : datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api : Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger : loggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.morris_sa_explainer module

class h2o_sonar.explainers.morris_sa_explainer.MorrisSensitivityAnalysisExplainer

Bases: Explainer

InterpretML: Morris sensitivity analysis explainer.

PARAM_LEAKAGE_WARN_THRESHOLD = 'leakage_warning_threshold'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X, explainable_x: ExplainableDataset | None = None, y=None, **kwargs) list

Invoke this method to calculate and persist global and/or local explanation(s) for the given dataset. This implementation overrides the parent class method and is responsible for calculating, building, and persisting the explanations.

X: datatable.Frame

Dataset frame.

y: datatable.Frame | None

Labels.

explanations_types: list[Type[Explanation]]

Optional explanation types to be built. All will be built if an empty list or None is provided. Get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.
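Morris sensitivity analysis, the method behind this explainer (via InterpretML), ranks features by the mean absolute "elementary effect" of perturbing each one. The following is a self-contained conceptual sketch, not h2o_sonar or InterpretML code; function and parameter names are made up for illustration.

```python
# Conceptual Morris elementary-effects sketch (illustrative only).
import random

def morris_mu_star(score, X, delta=0.1, n_trajectories=50, seed=0):
    """Mean absolute elementary effect per feature (mu*)."""
    rng = random.Random(seed)
    n_features = len(X[0])
    effects = [[] for _ in range(n_features)]
    for _ in range(n_trajectories):
        x = list(rng.choice(X))          # random base point
        base = score(x)
        for i in range(n_features):
            x_pert = list(x)
            x_pert[i] += delta           # perturb one feature at a time
            effects[i].append(abs(score(x_pert) - base) / delta)
    return [sum(e) / len(e) for e in effects]

# Toy linear model: feature 0 matters 3x more than feature 1.
mu = morris_mu_star(lambda x: 3 * x[0] + x[1], [[0.0, 0.0], [1.0, 1.0]])
```

For a linear model the elementary effects recover the coefficients exactly, so `mu` ranks feature 0 above feature 1.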

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainer parameters specified on the explainer run.

explainer_params_as_str: str | None

Explainer-specific parameters in string representation.

dataset_api: datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: loggers.SonarLogger | None

Logger.

explainer_params:

Other explainer RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.pd_2_features_explainer module

class h2o_sonar.explainers.pd_2_features_explainer.PdFor2FeaturesArgs(parameters: list[ExplainerParam] = None)

Bases: ExplainerArgs

resolve_params(explainer_params: dict | None = None, erase: list[str] | None = None) dict

Resolve the explainer’s self.parameters (arguments) into self.args.

Parameters:
explainer_params: dict | None

Explainer parameters as dictionary.

class h2o_sonar.explainers.pd_2_features_explainer.PdFor2FeaturesExplainer

Bases: Explainer

PD for two features explainer.

FILE_Y_HAT = 'mli_dataset_y_hat.jay'
GRID_RESOLUTION = 10
MAX_FEATURES = 3
OPT_ICE_1_FRAME_ENABLED = True
PARAM_FEATURES = 'features'
PARAM_GRID_RESOLUTION = 'grid_resolution'
PARAM_MAX_FEATURES = 'max_features'
PARAM_OOR_GRID_RESOLUTION = 'oor_grid_resolution'
PARAM_PLOT_TYPE = 'plot_type'
PARAM_QTILE_BINS = 'quantile-bins'
PARAM_QTILE_GRID_RESOLUTION = 'quantile-bin-grid-resolution'
PARAM_SAMPLE_SIZE = 'sample_size'
PROGRESS_MAX = 0.9
PROGRESS_MIN = 0.1
SAMPLE_SIZE = 25000
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **e_params)

Invoke this method to calculate and persist global and/or local explanation(s) for the given dataset. This implementation overrides the parent class method and is responsible for calculating, building, and persisting the explanations.

X: datatable.Frame

Dataset frame.

y: datatable.Frame | None

Labels.

explanations_types: list[Type[Explanation]]

Optional explanation types to be built. All will be built if an empty list or None is provided. Get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

get_result() Data3dResult
normalize(pd: PD)
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainer parameters specified on the explainer run.

explainer_params_as_str: str | None

Explainer-specific parameters in string representation.

dataset_api: datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: loggers.SonarLogger | None

Logger.

explainer_params:

Other explainer RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.pd_ice_explainer module

class h2o_sonar.explainers.pd_ice_explainer.PdIceArgs(parameters: list[ExplainerParam] = None)

Bases: ExplainerArgs

resolve_params(explainer_params: dict | None = None, erase: list[str] | None = None) dict

Resolve the explainer’s self.parameters (arguments) into self.args.

Parameters:
explainer_params: dict | None

Explainer parameters as dictionary.

class h2o_sonar.explainers.pd_ice_explainer.PdIceExplainer

Bases: Explainer

PD/ICE explainer.

FILE_ICE_JSON = 'h2o_sonar-ice-dai-model.json'
FILE_PD_JSON = 'h2o_sonar-pd-dai-model.json'
FILE_Y_HAT = 'mli_dataset_y_hat.jay'
GRID_RESOLUTION = 20
KEY_BINS = 'bins'
KEY_LABELS_MAP = 'labels_2_pd_map'
MAX_FEATURES = 10
NUMCAT_NUM_CHART = True
NUMCAT_THRESHOLD = 11
OPT_ICE_1_FRAME_ENABLED = True
PARAM_CENTER = 'center'
PARAM_DEBUG_RESIDUALS = 'debug_residuals'
PARAM_FEATURES = 'features'
PARAM_GRID_RESOLUTION = 'grid_resolution'
PARAM_HISTOGRAMS = 'histograms'
PARAM_MAX_FEATURES = 'max_features'
PARAM_NUMCAT_NUM_CHART = 'numcat_num_chart'
PARAM_NUMCAT_THRESHOLD = 'numcat_threshold'
PARAM_OOR_GRID_RESOLUTION = 'oor_grid_resolution'
PARAM_QTILE_BINS = 'quantile-bins'
PARAM_QTILE_GRID_RESOLUTION = 'quantile-bin-grid-resolution'
PARAM_SAMPLE_SIZE = 'sample_size'
PARAM_SORT_BINS = 'sort_bins'
PROGRESS_MAX = 0.9
PROGRESS_MIN = 0.1
SAMPLE_SIZE = 25000
UPDATE_PARAM_NUMCAT_OVERRIDE = 'numcat_override'
UPDATE_SCOPE_NUMCAT = 'numcat'
UPDATE_TYPE_ADD_FEATURE = 'add_feature'
UPDATE_TYPE_ADD_NUMCAT = 'add_num_cat'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **e_params)

Invoke this method to calculate and persist global and/or local explanation(s) for the given dataset. This implementation overrides the parent class method and is responsible for calculating, building, and persisting the explanations.

X: datatable.Frame

Dataset frame.

y: datatable.Frame | None

Labels.

explanations_types: list[Type[Explanation]]

Optional explanation types to be built. All will be built if an empty list or None is provided. Get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

explain_global(X, y=None, **e_params) list

Update/re-calculate PD explanation.

Parameters:
X: datatable.Frame

Dataset (whole) as datatable frame (handle).

y: datatable.Frame

Optional predictions as datatable frame (handle).

Returns:
list[OnDemandExplanation]:

Update on-demand explanations.

explain_local(X, y=None, **e_params) list

Execute load-from-cache or calculate on-demand ICE explanations.

Parameters:
X: datatable.Frame

Dataset (whole) as datatable frame (handle).

y: datatable.Frame

Optional predictions as datatable frame (handle).

Returns:
list[OnDemandExplanation]:

On-demand explanations with single-row ICE as JSON representation.
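The relationship between the two explanation types above can be sketched in a few lines: an ICE curve fixes one row and sweeps a single feature over a grid of values, while the PD curve is the average of the ICE curves over all rows. This is a conceptual sketch, not the h2o_sonar implementation; the function names are illustrative.

```python
# Conceptual PD/ICE sketch (illustrative only).
def ice_curve(predict, row, feature_idx, grid):
    """Predictions for one row with feature_idx swept over grid."""
    curve = []
    for value in grid:
        x = list(row)
        x[feature_idx] = value
        curve.append(predict(x))
    return curve

def partial_dependence(predict, X, feature_idx, grid):
    """Average the ICE curves of all rows at each grid point."""
    curves = [ice_curve(predict, row, feature_idx, grid) for row in X]
    return [sum(col) / len(col) for col in zip(*curves)]

predict = lambda x: x[0] + 2 * x[1]
X = [[0.0, 0.0], [0.0, 1.0]]
pd_curve = partial_dependence(predict, X, 0, [0.0, 1.0])
```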

get_result() PdResult
normalize_data(explanations, features_meta, is_sampling, pd=None, dataset: Frame | None = None, explainer_data_dir_path: str = '')
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainer parameters specified on the explainer run.

explainer_params_as_str: str | None

Explainer-specific parameters in string representation.

dataset_api: datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: loggers.SonarLogger | None

Logger.

explainer_params:

Other explainer RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.residual_dt_surrogate_explainer module

class h2o_sonar.explainers.residual_dt_surrogate_explainer.ResidualDecisionTreeSurrogateExplainer

Bases: Explainer

Residual Decision tree surrogate explainer.

PARAM_CAT_ENCODING = 'categorical_encoding'
PARAM_DEBUG_RESIDUALS = 'debug_residuals'
PARAM_DEBUG_RESIDUALS_CLASS = 'debug_residuals_class'
PARAM_DT_DEPTH = 'dt_tree_depth'
PARAM_NFOLDS = 'nfolds'
PARAM_QBIN_COLS = 'qbin_cols'
PARAM_QBIN_COUNT = 'qbin_count'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **explainer_params) list

Invoke this method to calculate and persist global and/or local explanation(s) for the given dataset. This implementation overrides the parent class method and is responsible for calculating, building, and persisting the explanations.

X: datatable.Frame

Dataset frame.

y: datatable.Frame | None

Labels.

explanations_types: list[Type[Explanation]]

Optional explanation types to be built. All will be built if an empty list or None is provided. Get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

explain_problems() list[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s): either calculate them, or retrieve the problems persisted by the explain() method.

Returns:
list[ProblemAndAction]:

Interpreted/evaluated model(s) problems.
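The residual-surrogate idea is to fit an interpretable tree on the primary model's residuals so that regions of poor fit become visible. A minimal sketch of the inputs such a surrogate works with: the residual target, plus equal-frequency quantile binning of a numeric column (cf. the qbin_cols/qbin_count parameters). Illustrative only; the actual explainer fits the surrogate tree itself.

```python
# Illustrative sketch of residual-surrogate inputs (not h2o_sonar code).
def residuals(y_true, y_pred):
    """Residual target for the surrogate: actual minus predicted."""
    return [t - p for t, p in zip(y_true, y_pred)]

def quantile_bin(values, n_bins):
    """Assign each value to one of n_bins equal-frequency bins."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = min(rank * n_bins // len(values), n_bins - 1)
    return bins

r = residuals([1.0, 2.0, 3.0, 4.0], [1.0, 2.5, 2.5, 4.0])
b = quantile_bin([10, 40, 20, 30], 2)
```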

get_result() DtResult
static is_enabled() bool

Return True if the explainer is enabled, else False, in which case the explainer is completely ignored (unlisted, not loaded, not executed).

setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainer parameters specified on the explainer run.

explainer_params_as_str: str | None

Explainer-specific parameters in string representation.

dataset_api: datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: loggers.SonarLogger | None

Logger.

explainer_params:

Other explainer RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.residual_pd_ice_explainer module

class h2o_sonar.explainers.residual_pd_ice_explainer.ResidualPdIceExplainer

Bases: Explainer

Residual PD/ICE explainer.

PARAM_CENTER = 'center'
PARAM_DEBUG_RESIDUALS = 'debug_residuals'
PARAM_FEATURES = 'features'
PARAM_GRID_RESOLUTION = 'grid_resolution'
PARAM_HISTOGRAMS = 'histograms'
PARAM_MAX_FEATURES = 'max_features'
PARAM_NUMCAT_NUM_CHART = 'numcat_num_chart'
PARAM_NUMCAT_THRESHOLD = 'numcat_threshold'
PARAM_OOR_GRID_RESOLUTION = 'oor_grid_resolution'
PARAM_QTILE_BINS = 'quantile-bins'
PARAM_QTILE_GRID_RESOLUTION = 'quantile-bin-grid-resolution'
PARAM_SAMPLE_SIZE = 'sample_size'
PARAM_SORT_BINS = 'sort_bins'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **explainer_params) list

Invoke this method to calculate and persist global and/or local explanation(s) for the given dataset. This implementation overrides the parent class method and is responsible for calculating, building, and persisting the explanations.

X: datatable.Frame

Dataset frame.

y: datatable.Frame | None

Labels.

explanations_types: list[Type[Explanation]]

Optional explanation types to be built. All will be built if an empty list or None is provided. Get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

explain_problems() list[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s): either calculate them, or retrieve the problems persisted by the explain() method.

Returns:
list[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() PdResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainer parameters specified on the explainer run.

explainer_params_as_str: str | None

Explainer-specific parameters in string representation.

dataset_api: datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: loggers.SonarLogger | None

Logger.

explainer_params:

Other explainer RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.segment_performance_explainer module

class h2o_sonar.explainers.segment_performance_explainer.SegmentPerformanceExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Segment performance explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/segment-performance/segment-performance

DEFAULT_DROP_COLS = []
DEFAULT_NUMBER_OF_BINS = 5
DEFAULT_PRECISION = 5
PARAM_DROP_COLS = 'drop_cols'
PARAM_NUMBER_OF_BINS = 'number_of_bins'
PARAM_PRECISION = 'precision'
PARAM_WORKER = 'worker_connection_key'
RESULT_FILE_CSV = 'segment-performance.csv'
class Result(persistence: ExplainerPersistence, explainer_id: str = '', h2o_sonar_config=None)

Bases: ExplainerResult

data() Frame
plot(*, feature_1: str = '', feature_2: str = '', file_path: str = '')
summary(**kwargs) dict
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **e_params)

Invoke this method to calculate and persist global and/or local explanation(s) for the given dataset. This implementation overrides the parent class method and is responsible for calculating, building, and persisting the explanations.

X: datatable.Frame

Dataset frame.

y: datatable.Frame | None

Labels.

explanations_types: list[Type[Explanation]]

Optional explanation types to be built. All will be built if an empty list or None is provided. Get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.
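Conceptually, segment performance bins two features into a grid of segments (cf. number_of_bins) and computes a per-segment metric, exposing pockets of the data where the model underperforms. A self-contained sketch using accuracy as the metric; the function name and input shape are illustrative, not the h2o_sonar implementation.

```python
# Illustrative per-segment accuracy sketch (not h2o_sonar code).
from collections import defaultdict

def segment_accuracy(f1_bins, f2_bins, y_true, y_pred):
    """Accuracy per (feature_1 bin, feature_2 bin) segment."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for b1, b2, t, p in zip(f1_bins, f2_bins, y_true, y_pred):
        totals[(b1, b2)] += 1
        hits[(b1, b2)] += int(t == p)
    return {seg: hits[seg] / totals[seg] for seg in totals}

acc = segment_accuracy([0, 0, 1], [0, 0, 1], [1, 0, 1], [1, 1, 1])
```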

get_result() Result
static normalize_scatter_plot(x: list, y: list, z: list, colors: list, x_axis_label: str, y_axis_label: str, plot_file_path: str, color_map: str = 'Wistia', figsize=(12, 10), dpi=120) str
normalize_to_gom(pd_result_df: DataFrame)
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainer parameters specified on the explainer run.

explainer_params_as_str: str | None

Explainer-specific parameters in string representation.

dataset_api: datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: loggers.SonarLogger | None

Logger.

explainer_params:

Other explainer RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.size_dependency_explainer module

class h2o_sonar.explainers.size_dependency_explainer.SizeDependencyExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Size dependency explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/size-dependency/size-dependency

DEFAULT_NUMBER_OF_SPLITS = 2
DEFAULT_TIME_COLUMN = ''
DEFAULT_WORKER_CLEANUP = True
PARAM_NUMBER_OF_SPLITS = 'number_of_splits'
PARAM_PLOT_TYPE = 'plot_type'
PARAM_TIME_COLUMN = 'time_col'
PARAM_WORKER = 'worker_connection_key'
PARAM_WORKER_CLEANUP = 'worker_cleanup'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X, y=None, explanations_types: list = None, **e_params)

Invoke this method to calculate and persist global and/or local explanation(s) for the given dataset. This implementation overrides the parent class method and is responsible for calculating, building, and persisting the explanations.

X: datatable.Frame

Dataset frame.

y: datatable.Frame | None

Labels.

explanations_types: list[Type[Explanation]]

Optional explanation types to be built. All will be built if an empty list or None is provided. Get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.
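The size-dependency idea is to train on increasingly large subsets of the data (cf. number_of_splits) and track a metric per subset size, revealing whether more data would still improve the model. A sketch of how the cumulative split sizes might be derived; the function name is illustrative, not the h2o_sonar implementation.

```python
# Illustrative cumulative split-size sketch (not h2o_sonar code).
def size_splits(n_rows, number_of_splits):
    """Row counts for each cumulative training subset."""
    return [n_rows * (i + 1) // number_of_splits
            for i in range(number_of_splits)]

splits = size_splits(100, 4)
```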

get_result() Data3dResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainer parameters specified on the explainer run.

explainer_params_as_str: str | None

Explainer-specific parameters in string representation.

dataset_api: datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: loggers.SonarLogger | None

Logger.

explainer_params:

Other explainer RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.summary_shap_explainer module

class h2o_sonar.explainers.summary_shap_explainer.SummaryShapleyArgs(parameters: list[ExplainerParam] = None)

Bases: ExplainerArgs

resolve_params(explainer_params: dict | None = None, erase: list[str] | None = None) dict

Resolve the explainer’s self.parameters (arguments) into self.args.

Parameters:
explainer_params: dict | None

Explainer parameters as dictionary.

class h2o_sonar.explainers.summary_shap_explainer.SummaryShapleyExplainer

Bases: Explainer

Shapley summary plot for original features.

COLOR_GRAY = '#cccccc'
DEFAULT_MAX_FEATURES = 50
DEFAULT_SAMPLE_SIZE = 20000
DEFAULT_X_RESOLUTION = 500
FILE_RAW_CONTRIBS_IDX = 'raw_shapley_contribs_index.json'
MARKDOWN_TEMPLATE: str = '# Summary Shapley Feature Importance Report\nThis summary Shapley feature importance explainer report.\n\n{}\n\n    '
OPT_DRILL_CAT_VALUES = False
OPT_DRILL_SKIP_CAT = True
OPT_MODEL_FI = False
PARAM_DRILLDOWN_CHARTS = 'enable_drilldown_charts'
PARAM_FAST_APPROX = 'fast_approx_contribs'
PARAM_MAX_FEATURES = 'max_features'
PARAM_SAMPLE_SIZE = 'sample_size'
PARAM_X_RESOLUTION = 'x_shapley_resolution'
PREFIX_CONTRIB = 'contrib_'
PROGRESS_MAX = 0.9
PROGRESS_MIN = 0.1
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X, y: Frame | None = None, explanations_types: list = None, **kwargs)

Create global (pre-computed/cached) explanations.

get_gom_from_shapley_contrib_frames(shapley_contribs_dict: dict[str, Frame], file_mapping_dict: dict[str, str]) tuple[list[str], float]
get_gom_from_shapley_frames(shapley_means_dict: dict[str, Frame], file_mapping_dict: dict[str, str]) list[str]

This method is not used by the Summary Shapley feature importance explainer; its purpose is to serve as a library method for other explainers.

get_result() SummaryShapResult
normalize_shapleys_to_dt_gom(dataset, index_dict: dict, clazz: str, class_offset: int, bin_resolution: int = 500, progress_start: float = 0.45, progress_end: float = 0.96) tuple[Frame, dict]

Get chart data for (sampled) dataset.

Parameters:
dataset: datatable.Frame

Training dataset.

index_dict: dict

Representation index file with per-class Shapley contribs file names.

clazz: str

Class for which to calculate explanations.

class_offset: int

Class “number” alias.

bin_resolution: int

Bin resolution.

progress_start: float

Method progress % range begin.

progress_end: float

Method progress % range end.

Method progress % range end.

Returns:
Tuple[datatable.Frame, dict]:

GoM chart frame and map of feature names to chart (details scatter plot) file-system paths.
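The core aggregation behind a Shapley summary plot is simple: rank features by their mean absolute contribution across rows. A self-contained sketch under that assumption; the function name and input shape are illustrative, not the h2o_sonar implementation.

```python
# Illustrative mean-|contribution| aggregation sketch (not h2o_sonar code).
def mean_abs_contrib(contribs):
    """Per-feature mean absolute Shapley contribution.

    contribs maps feature name -> list of per-row contributions.
    """
    return {
        feature: sum(abs(v) for v in values) / len(values)
        for feature, values in contribs.items()
    }

importance = mean_abs_contrib({"age": [0.2, -0.4], "zip": [0.1, 0.1]})
```

Features with larger mean absolute contributions dominate the summary chart; the sign spread of the raw contributions is what the beeswarm-style detail plots visualize.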

setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainer parameters specified on the explainer run.

explainer_params_as_str: str | None

Explainer-specific parameters in string representation.

dataset_api: datasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: loggers.SonarLogger | None

Logger.

explainer_params:

Other explainer RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.transformed_fi_shapley_explainer module

class h2o_sonar.explainers.transformed_fi_shapley_explainer.ShapleyMojoTransformedFeatureImportanceExplainer

Bases: Explainer, AbstractFeatureImportanceExplainer

DEFAULT_FAST_APPROX = True
OPT_BIN_1_CLASS = True
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Parameter-based check verifying that the explainer will be able to explain a given model. If this compatibility check returns False or raises an error, the explainer will not be run by the engine. This check may, but does not have to, be performed by the execution engine.

explain(X: Frame, y: Frame = None, explanations_types: list = None, **kwargs) list

Invoke this method to calculate and persist global and/or local explanation(s) for the given dataset. This implementation overrides the parent class method and is responsible for calculating, building, and persisting the explanations.

X: datatable.Frame

Dataset frame.

y: datatable.Frame | None

Labels.

explanations_types: list[Type[Explanation]]

Optional explanation types to be built. All will be built if an empty list or None is provided. Get all supported types using has_explanation_types().

Returns:
list[Explanation]:

Explanations descriptors.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: str | None

Explainer specific parameters in string representation.

dataset_apidatasets.DatasetApi | None

Dataset API to create custom explainable datasets needed by this explainer.

model_apiOptional[m4s.ModelApi]

Model API to create custom explainable models needed by this explainer.

loggerloggers.SonarLogger | None

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

Module contents