h2o_sonar.explainers package

Submodules

h2o_sonar.explainers.adversarial_similarity_explainer module

class h2o_sonar.explainers.adversarial_similarity_explainer.AdversarialSimilarityExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Adversarial similarity explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/adversarial-similarity/adversarial-similarity

CLASS_ONE_AND_ONLY = 'global'
DEFAULT_DROP_COLS = []
DEFAULT_SHAPLEY_VALUES = False
PARAM_DROP_COLS = 'drop_cols'
PARAM_SHAPLEY_VALUES = 'shapley_values'
PARAM_WORKER = 'worker_connection_key'
PLOT_TITLE = 'Similar-to-Secondary Probabilities Histogram'
class Result(persistence: ExplainerPersistence, explainer_id: str = '', h2o_sonar_config=None)

Bases: ExplainerResult

data() Frame
plot(*, file_path: str = '')
summary(**kwargs) Dict
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **e_params)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() Result
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
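
A minimal usage sketch of the lifecycle documented above (setup(), explain(), get_result()); the explainable_model, persistence, common_params and dataset_frame objects are placeholders assumed to be constructed elsewhere, and the dropped column name is illustrative:

    from h2o_sonar.explainers.adversarial_similarity_explainer import (
        AdversarialSimilarityExplainer,
    )

    explainer = AdversarialSimilarityExplainer()
    explainer.setup(
        model=explainable_model,   # placeholder: models.ExplainableModel (or None)
        persistence=persistence,   # placeholder: ExplainerPersistence
        params=common_params,      # placeholder: CommonInterpretationParams
        **{
            AdversarialSimilarityExplainer.PARAM_DROP_COLS: ["customer_id"],
            AdversarialSimilarityExplainer.PARAM_SHAPLEY_VALUES: False,
        },
    )
    explainer.explain(X=dataset_frame)   # datatable.Frame with the dataset
    result = explainer.get_result()
    frame = result.data()                # similarity probabilities as a Frame
    result.plot(file_path="adversarial_similarity.png")
    print(result.summary())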

h2o_sonar.explainers.backtesting_explainer module

class h2o_sonar.explainers.backtesting_explainer.BacktestingExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Backtesting explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/backtesting/backtesting

DEFAULT_CUSTOM_DATES = None
DEFAULT_FORECAST_PERIOD_UNIT = None
DEFAULT_NUMBER_OF_FORECAST_PERIODS = None
DEFAULT_NUMBER_OF_SPLITS = 2
DEFAULT_NUMBER_OF_TRAINING_PERIODS = None
DEFAULT_SPLIT_TYPE = 'auto'
DEFAULT_TIME_COLUMN = ''
DEFAULT_TRAINING_PERIOD_UNIT = None
OPT_SPLIT_TYPE_AUTO = 'auto'
OPT_SPLIT_TYPE_CUSTOM = 'custom'
PARAM_CUSTOM_DATES = 'custom_dates'
PARAM_FORECAST_PERIOD_UNIT = 'forecast_period_unit'
PARAM_NUMBER_OF_FORECAST_PERIODS = 'number_of_forecast_period'
PARAM_NUMBER_OF_SPLITS = 'number_of_splits'
PARAM_NUMBER_OF_TRAINING_PERIODS = 'number_of_training_period'
PARAM_PLOT_TYPE = 'plot_type'
PARAM_SPLIT_TYPE = 'split_type'
PARAM_TIME_COLUMN = 'time_col'
PARAM_TRAINING_PERIOD_UNIT = 'training_period_unit'
PARAM_WORKER = 'worker_connection_key'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X, y=None, explanations_types: List = None, **e_params)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() Data3dResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
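
The backtesting-specific options are passed as explainer parameters keyed by the PARAM_* constants above. A sketch of such a parameter dictionary (the time column name and worker connection key are illustrative values, not defaults):

    from h2o_sonar.explainers.backtesting_explainer import BacktestingExplainer

    backtesting_params = {
        BacktestingExplainer.PARAM_SPLIT_TYPE: BacktestingExplainer.OPT_SPLIT_TYPE_AUTO,
        BacktestingExplainer.PARAM_NUMBER_OF_SPLITS: 3,
        BacktestingExplainer.PARAM_TIME_COLUMN: "date",              # illustrative column name
        BacktestingExplainer.PARAM_WORKER: "my-worker-connection",   # illustrative connection key
    }

The dictionary would then be unpacked into setup(..., **backtesting_params) before explain() is invoked.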

h2o_sonar.explainers.calibration_score_explainer module

class h2o_sonar.explainers.calibration_score_explainer.CalibrationScoreExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Calibration score explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/calibration-score/calibration-score

COL_PROB_PRED = 'prob_pred'
COL_PROB_TRUE = 'prob_true'
COL_SCORE = 'Score'
COL_TARGET = 'Target'
DEFAULT_BIN_STRATEGY = 'uniform'
DEFAULT_NUMBER_OF_BINS = 10
KEY_BRIER_SCORE = 'brier_score'
KEY_CALIBRATION_CURVE = 'calibration_curve'
KEY_CLASSES_LABELS = 'classes_labels'
KEY_CLASSES_LEGENDS = 'classes_legends'
KEY_DATA = 'data'
KEY_PLOTS_PATHS = 'plots_paths'
OPT_BIN_STRATEGY_QUANTILE = 'uniform'
OPT_BIN_STRATEGY_UNIFORM = 'quantile'
PARAM_BIN_STRATEGY = 'bin_strategy'
PARAM_NUMBER_OF_BINS = 'number_of_bins'
PARAM_WORKER = 'worker_connection_key'
RESULT_FILE_JSON = 'mv_result.json'
class Result(persistence: ExplainerPersistence, explainer_id: str = '', h2o_sonar_config=None)

Bases: ExplainerResult

data() Dict
plot(*, clazz: str = '', file_path: str = '')
summary(**kwargs) Dict
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **e_params)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() Result
normalize_to_gom(mv_results) Dict
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
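
The Result.data() dictionary is keyed by the KEY_* constants above. A hedged sketch of reading the Brier score and calibration curve and rendering the per-class plot; the explainer variable stands for a finished CalibrationScoreExplainer run and the class label is illustrative:

    from h2o_sonar.explainers.calibration_score_explainer import CalibrationScoreExplainer

    result = explainer.get_result()   # explainer: a finished CalibrationScoreExplainer
    data = result.data()
    print(data[CalibrationScoreExplainer.KEY_BRIER_SCORE])
    curve = data[CalibrationScoreExplainer.KEY_CALIBRATION_CURVE]
    result.plot(clazz="1", file_path="calibration_curve.png")   # illustrative class label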

h2o_sonar.explainers.dataset_and_model_insights_explainer module

class h2o_sonar.explainers.dataset_and_model_insights_explainer.DatasetAndModelInsightsExplainer

Bases: Explainer

Dataset and model insights explainer.

explain(X: Frame, y=None, explanations_types=None, **kwargs) List

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

explain_problems() List[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s), either by calculating them or by loading the problems persisted by the explain() method.

Returns:
List[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() TemplateResult
setup(model, persistence, **e_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.dia_explainer module

class h2o_sonar.explainers.dia_explainer.DiaArgs(parameters: List[ExplainerParam] = None)

Bases: ExplainerArgs

resolve_params(explainer_params: dict | None = None, erase: List[str] | None = None)

Resolve the explainer’s self.parameters (arguments) into self.args.

Parameters:
explainer_params: Optional[dict]

Explainer parameters as dictionary.

class h2o_sonar.explainers.dia_explainer.DiaExplainer

Bases: Explainer

Disparate Impact Analysis (DIA) explainer for explainable models.

PARAM_CUT_OFF = 'cut_off'
PARAM_FAST_APPROX = 'fast_approx'
PARAM_FEATURES = 'dia_cols'
PARAM_FEATURE_NAME = 'feature_name'
PARAM_FEATURE_SUMMARIES = 'feature_summaries'
PARAM_MAXIMIZE_METRIC = 'maximize_metric'
PARAM_MAX_CARD = 'max_cardinality'
PARAM_MIN_CARD = 'min_cardinality'
PARAM_NAME = 'name'
PARAM_NUM_CARD = 'num_card'
PARAM_SAMPLE_SIZE = 'sample_size'
PARAM_USE_HOLDOUT_PREDS = 'use_holdout_preds'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: list = None, **kwargs)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

static get_entry_constants()
get_max_metric()
get_result() DiaResult
static is_enabled() bool

Return True if the explainer is enabled, else False, in which case the explainer is completely ignored (unlisted, not loaded, not executed).

setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
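
A sketch of DIA-specific parameters using the documented PARAM_* keys; the protected column names, cut-off and sample size are illustrative values, not defaults:

    from h2o_sonar.explainers.dia_explainer import DiaExplainer

    dia_params = {
        DiaExplainer.PARAM_FEATURES: ["Gender", "MaritalStatus"],   # illustrative DIA columns
        DiaExplainer.PARAM_CUT_OFF: 0.5,                            # illustrative cut-off
        DiaExplainer.PARAM_SAMPLE_SIZE: 100_000,                    # illustrative sample size
        DiaExplainer.PARAM_FAST_APPROX: True,
    }

These would be passed as **dia_params to setup() before explain() is invoked; get_result() then returns the DiaResult.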

h2o_sonar.explainers.drift_explainer module

class h2o_sonar.explainers.drift_explainer.DriftDetectionExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Drift detection explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/drift-detection/drift-detection

DEFAULT_DRIFT_THRESHOLD = 0.1
DEFAULT_DROP_COLS = []
PARAM_DRIFT_THRESHOLD = 'drift_threshold'
PARAM_DROP_COLS = 'drop_cols'
PARAM_WORKER = 'worker_connection_key'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **e_params)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
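
A sketch of drift-detection parameters using the documented keys (the threshold and dropped column are illustrative), and of reading the result after a finished run; it assumes FeatureImportanceResult exposes the same data() accessor as the Result classes shown elsewhere in this package:

    from h2o_sonar.explainers.drift_explainer import DriftDetectionExplainer

    drift_params = {
        DriftDetectionExplainer.PARAM_DRIFT_THRESHOLD: 0.15,   # illustrative; default is 0.1
        DriftDetectionExplainer.PARAM_DROP_COLS: ["row_id"],   # illustrative column
    }

    # after setup(..., **drift_params) and explain(X=frame):
    result = explainer.get_result()
    print(result.data())   # per-feature drift scores (feature-importance style result)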

h2o_sonar.explainers.dt_surrogate_explainer module

class h2o_sonar.explainers.dt_surrogate_explainer.DecisionTreeConstants

Bases: object

CAT_ENCODING_DICT = {'AUTO': 'auto', 'Enum Limited': 'enumlimited', 'Label Encoder': 'labelencoder', 'One Hot Encoding': 'onehotexplicit', 'Sort by Response': 'sortbyresponse'}
CAT_ENCODING_LIST = ['AUTO', 'One Hot Encoding', 'Enum Limited', 'Sort by Response', 'Label Encoder']
COLUMN_DAI_PREDICT = 'model_pred'
COLUMN_DT_PATH = 'T1'
COLUMN_MODEL_PRED = 'model_pred'
COLUMN_ORIG_PRED = 'orig_pred'
DEFAULT_NFOLDS = 0
DEFAULT_TREE_DEPTH = 3
DIR_DT_SURROGATE = 'dt_surrogate_rules'
ENC_AUTO = 'AUTO'
ENC_ENUM_LTD = 'Enum Limited'
ENC_LE = 'Label Encoder'
ENC_ONE_HOT = 'One Hot Encoding'
ENC_SORT = 'Sort by Response'
FILE_DEFAULT_DETAILS = 'dtModel.json'
FILE_DEFAULT_TREE = 'dtSurrogate.json'
FILE_DRF_VAR_IMP = 'varImp.json'
FILE_METRICS_DT = 'dtModel.json'
FILE_METRICS_DT_MULTI_PREFIX = 'dtModel_'
FILE_WORK_DT = 'dtSurrogate.json'
FILE_WORK_DT_MULTI_PREFIX = 'dtSurrogate_'
H2O_ENCODING_NAMES = ['auto', 'onehotexplicit', 'enumlimited', 'sortbyresponse', 'labelencoder']
H2O_ENC_AUTO = 'auto'
H2O_ENC_ENUM_LTD = 'enumlimited'
H2O_ENC_LE = 'labelencoder'
H2O_ENC_ONE_HOT = 'onehotexplicit'
H2O_ENC_SORT = 'sortbyresponse'
KEY_LABELS_MAP = 'labels_2_pd_map'
SEED = 12345
class h2o_sonar.explainers.dt_surrogate_explainer.DecisionTreeSurrogateExplainer

Bases: Explainer, DecisionTreeConstants

Surrogate decision tree explainer.

PARAM_CAT_ENCODING = 'categorical_encoding'
PARAM_DEBUG_RESIDUALS = 'debug_residuals'
PARAM_DEBUG_RESIDUALS_CLASS = 'debug_residuals_class'
PARAM_DT_DEPTH = 'dt_tree_depth'
PARAM_NFOLDS = 'nfolds'
PARAM_QBIN_COLS = 'qbin_cols'
PARAM_QBIN_COUNT = 'qbin_count'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **explainer_params) List

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

explain_local(X: Frame, y: Frame = None, **extra_params) str

DT surrogate local explanation requires SYNCHRONOUS execution.

persistence:

Persistence object initialized for the explainer/MLI run.

explanation_type: str

Explanation type ~ explainer ID.

row: int

Row for which the local explanation is to be provided.

explanation_filter: List[FilterEntry]

Required filter entries: class.

Returns:
str

JSON representation of the local explanation.

JSON DT representation:

{
    "data": [
        {
            "key": str, "name": str, "parent": str, "edge_in": str,
            "edge_weight": num, "leaf_path": bool
        },
        ...
    ]
}
get_result() DtResult
static is_enabled() bool

Return True if the explainer is enabled, else False, in which case the explainer is completely ignored (unlisted, not loaded, not executed).

setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
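
A sketch of configuring the surrogate tree through the documented PARAM_* keys and of parsing the local-explanation JSON described under explain_local() above. The explainer variable stands for a finished run, and the call is simplified: a real call would also supply the row and explanation_filter arguments described in the docstring:

    import json

    from h2o_sonar.explainers.dt_surrogate_explainer import (
        DecisionTreeConstants,
        DecisionTreeSurrogateExplainer,
    )

    dt_params = {
        DecisionTreeSurrogateExplainer.PARAM_DT_DEPTH: DecisionTreeConstants.DEFAULT_TREE_DEPTH,
        DecisionTreeSurrogateExplainer.PARAM_NFOLDS: DecisionTreeConstants.DEFAULT_NFOLDS,
        DecisionTreeSurrogateExplainer.PARAM_CAT_ENCODING: DecisionTreeConstants.ENC_AUTO,
    }

    # explain_local() returns the JSON representation documented above:
    tree_json = explainer.explain_local(X=dataset_frame)   # explainer: finished instance
    nodes = json.loads(tree_json)["data"]
    leaf_path = [n["name"] for n in nodes if n.get("leaf_path")]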

h2o_sonar.explainers.fi_kernel_shap_explainer module

class h2o_sonar.explainers.fi_kernel_shap_explainer.KernelShapFeatureImportanceExplainer

Bases: Explainer, AbstractFeatureImportanceExplainer

Kernel SHAP based original feature importance explainer.

OPT_BIN_1_CLASS = True
PARAM_FAST_APPROX = 'fast_approx'
PARAM_L1 = 'L1'
PARAM_LEAKAGE_WARN_THRESHOLD = 'leakage_warning_threshold'
PARAM_MAXRUNTIME = 'max runtime'
PARAM_NSAMPLE = 'nsample'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame = None, explanations_types: List = None, **kwargs) List

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

explain_problems() List[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s), either by calculating them or by loading the problems persisted by the explain() method.

Returns:
List[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
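
A sketch of Kernel SHAP parameters keyed by the documented constants; the numeric values and the L1 setting are illustrative, not defaults:

    from h2o_sonar.explainers.fi_kernel_shap_explainer import (
        KernelShapFeatureImportanceExplainer,
    )

    kernel_shap_params = {
        KernelShapFeatureImportanceExplainer.PARAM_NSAMPLE: 1000,                 # illustrative
        KernelShapFeatureImportanceExplainer.PARAM_L1: "auto",                    # illustrative
        KernelShapFeatureImportanceExplainer.PARAM_FAST_APPROX: True,
        KernelShapFeatureImportanceExplainer.PARAM_LEAKAGE_WARN_THRESHOLD: 0.95,  # illustrative
    }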

h2o_sonar.explainers.fi_naive_shapley_explainer module

class h2o_sonar.explainers.fi_naive_shapley_explainer.NaiveShapleyMojoFeatureImportanceExplainer

Bases: Explainer, AbstractFeatureImportanceExplainer

DEFAULT_FAST_APPROX = False
OPT_BIN_1_CLASS = True
PARAM_FAST_APPROX = 'fast_approx_contribs'
PARAM_LEAKAGE_WARN_THRESHOLD = 'leakage_warning_threshold'
PARAM_SAMPLE_SIZE = 'sample_size'
PREFIX_CONTRIB = 'contrib_'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame = None, explanations_types: List = None, **kwargs) List

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

explain_local(X: Frame, y: Frame = None, **extra_params) list

Calculate on-demand local explanations. This method is expected to be overridden if the explainer doesn’t pre-compute local explanations; the default implementation simply returns the local instance explanations computed by the explain() method.

X: Union[datatable.Frame, Any]

Data frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

Returns:
List[Explanation]:

Explanations.

explain_problems() List[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s), either by calculating them or by loading the problems persisted by the explain() method.

Returns:
List[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, explain_original_features: bool = True, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
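
A sketch of Naive Shapley parameters (the sample size is illustrative) and of requesting per-row contributions through the documented explain_local(); the explainer and dataset_frame variables are placeholders for a finished run and a datatable.Frame:

    from h2o_sonar.explainers.fi_naive_shapley_explainer import (
        NaiveShapleyMojoFeatureImportanceExplainer,
    )

    shapley_params = {
        NaiveShapleyMojoFeatureImportanceExplainer.PARAM_SAMPLE_SIZE: 50_000,   # illustrative
        NaiveShapleyMojoFeatureImportanceExplainer.PARAM_FAST_APPROX: True,
    }

    # per-row (local) Naive Shapley contributions for the first dataset row:
    local_explanations = explainer.explain_local(X=dataset_frame[:1, :])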

h2o_sonar.explainers.friedman_h_statistic_explainer module

h2o_sonar.explainers.morris_sa_explainer module

class h2o_sonar.explainers.morris_sa_explainer.MorrisSensitivityAnalysisExplainer

Bases: Explainer

InterpretML: Morris sensitivity analysis explainer.

PARAM_LEAKAGE_WARN_THRESHOLD = 'leakage_warning_threshold'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X, explainable_x: ExplainableDataset | None = None, y=None, **kwargs) List

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.pd_2_features_explainer module

class h2o_sonar.explainers.pd_2_features_explainer.PdFor2FeaturesArgs(parameters: List[ExplainerParam] = None)

Bases: ExplainerArgs

resolve_params(explainer_params: dict | None = None, erase: List[str] | None = None) dict

Resolve the explainer’s self.parameters (arguments) into self.args.

Parameters:
explainer_params: Optional[dict]

Explainer parameters as dictionary.

class h2o_sonar.explainers.pd_2_features_explainer.PdFor2FeaturesExplainer

Bases: Explainer

PD for two features explainer.

FILE_Y_HAT = 'mli_dataset_y_hat.jay'
GRID_RESOLUTION = 10
MAX_FEATURES = 3
OPT_ICE_1_FRAME_ENABLED = True
PARAM_FEATURES = 'features'
PARAM_GRID_RESOLUTION = 'grid_resolution'
PARAM_MAX_FEATURES = 'max_features'
PARAM_OOR_GRID_RESOLUTION = 'oor_grid_resolution'
PARAM_PLOT_TYPE = 'plot_type'
PARAM_QTILE_BINS = 'quantile-bins'
PARAM_QTILE_GRID_RESOLUTION = 'quantile-bin-grid-resolution'
PARAM_SAMPLE_SIZE = 'sample_size'
PROGRESS_MAX = 0.9
PROGRESS_MIN = 0.1
SAMPLE_SIZE = 25000
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **e_params)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() Data3dResult
normalize(pd: PD)
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
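
A sketch of parameters for the two-feature partial dependence computation, keyed by the documented constants; the feature pair is an illustrative choice:

    from h2o_sonar.explainers.pd_2_features_explainer import PdFor2FeaturesExplainer

    pd2_params = {
        PdFor2FeaturesExplainer.PARAM_FEATURES: ["AGE", "LIMIT_BAL"],   # illustrative feature pair
        PdFor2FeaturesExplainer.PARAM_GRID_RESOLUTION: PdFor2FeaturesExplainer.GRID_RESOLUTION,
        PdFor2FeaturesExplainer.PARAM_SAMPLE_SIZE: PdFor2FeaturesExplainer.SAMPLE_SIZE,
    }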

h2o_sonar.explainers.pd_ice_explainer module

class h2o_sonar.explainers.pd_ice_explainer.PdIceArgs(parameters: List[ExplainerParam] = None)

Bases: ExplainerArgs

resolve_params(explainer_params: dict | None = None, erase: List[str] | None = None) dict

Resolve the explainer’s self.parameters (arguments) into self.args.

Parameters:
explainer_params: Optional[dict]

Explainer parameters as dictionary.

class h2o_sonar.explainers.pd_ice_explainer.PdIceExplainer

Bases: Explainer

PD/ICE explainer.

FILE_ICE_JSON = 'h2o_sonar-ice-dai-model.json'
FILE_PD_JSON = 'h2o_sonar-pd-dai-model.json'
FILE_Y_HAT = 'mli_dataset_y_hat.jay'
GRID_RESOLUTION = 20
KEY_BINS = 'bins'
KEY_LABELS_MAP = 'labels_2_pd_map'
MAX_FEATURES = 10
NUMCAT_NUM_CHART = True
NUMCAT_THRESHOLD = 11
OPT_ICE_1_FRAME_ENABLED = True
PARAM_CENTER = 'center'
PARAM_DEBUG_RESIDUALS = 'debug_residuals'
PARAM_FEATURES = 'features'
PARAM_GRID_RESOLUTION = 'grid_resolution'
PARAM_HISTOGRAMS = 'histograms'
PARAM_MAX_FEATURES = 'max_features'
PARAM_NUMCAT_NUM_CHART = 'numcat_num_chart'
PARAM_NUMCAT_THRESHOLD = 'numcat_threshold'
PARAM_OOR_GRID_RESOLUTION = 'oor_grid_resolution'
PARAM_QTILE_BINS = 'quantile-bins'
PARAM_QTILE_GRID_RESOLUTION = 'quantile-bin-grid-resolution'
PARAM_SAMPLE_SIZE = 'sample_size'
PARAM_SORT_BINS = 'sort_bins'
PROGRESS_MAX = 0.9
PROGRESS_MIN = 0.1
SAMPLE_SIZE = 25000
UPDATE_PARAM_NUMCAT_OVERRIDE = 'numcat_override'
UPDATE_SCOPE_NUMCAT = 'numcat'
UPDATE_TYPE_ADD_FEATURE = 'add_feature'
UPDATE_TYPE_ADD_NUMCAT = 'add_num_cat'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **e_params)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

explain_global(X, y=None, **e_params) list

Update/re-calculate PD explanation.

Parameters:
X: datatable.Frame

Dataset (whole) as datatable frame (handle).

y: datatable.Frame

Optional predictions as datatable frame (handle).

Returns:
List[OnDemandExplanation]:

Update on-demand explanations.

explain_local(X, y=None, **e_params) list

Execute load-from-cache or calculate on-demand ICE explanations.

Parameters:
X: datatable.Frame

Dataset (whole) as datatable frame (handle).

y: datatable.Frame

Optional predictions as datatable frame (handle).

Returns:
List[OnDemandExplanation]:

On-demand explanations with single row ICE as JSon representation.

get_result() PdResult
normalize_data(explanations, features_meta, is_sampling, pd=None, dataset: Frame | None = None, explainer_data_dir_path: str = '')
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
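
A sketch of PD/ICE parameters keyed by the documented constants (the feature names are illustrative), followed by reading the result and requesting a single-row ICE explanation through explain_local(); the explainer and dataset_frame variables are placeholders for a finished run and a datatable.Frame:

    from h2o_sonar.explainers.pd_ice_explainer import PdIceExplainer

    pd_ice_params = {
        PdIceExplainer.PARAM_FEATURES: ["AGE", "EDUCATION"],   # illustrative features
        PdIceExplainer.PARAM_GRID_RESOLUTION: PdIceExplainer.GRID_RESOLUTION,
        PdIceExplainer.PARAM_CENTER: True,
        PdIceExplainer.PARAM_HISTOGRAMS: True,
        PdIceExplainer.PARAM_MAX_FEATURES: PdIceExplainer.MAX_FEATURES,
    }

    # after setup(..., **pd_ice_params) and explain(X=dataset_frame):
    result = explainer.get_result()                                  # PdResult
    ice_for_row = explainer.explain_local(X=dataset_frame[:1, :])    # on-demand single-row ICE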

h2o_sonar.explainers.residual_dt_surrogate_explainer module

class h2o_sonar.explainers.residual_dt_surrogate_explainer.ResidualDecisionTreeSurrogateExplainer

Bases: Explainer

Residual Decision tree surrogate explainer.

PARAM_CAT_ENCODING = 'categorical_encoding'
PARAM_DEBUG_RESIDUALS = 'debug_residuals'
PARAM_DEBUG_RESIDUALS_CLASS = 'debug_residuals_class'
PARAM_DT_DEPTH = 'dt_tree_depth'
PARAM_NFOLDS = 'nfolds'
PARAM_QBIN_COLS = 'qbin_cols'
PARAM_QBIN_COUNT = 'qbin_count'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **explainer_params) List

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

explain_problems() List[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s), either by calculating them or by loading the problems persisted by the explain() method.

Returns:
List[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() DtResult
static is_enabled() bool

Return True if the explainer is enabled, else False, in which case the explainer is completely ignored (unlisted, not loaded, not executed).

setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.residual_pd_ice_explainer module

class h2o_sonar.explainers.residual_pd_ice_explainer.ResidualPdIceExplainer

Bases: Explainer

Residual PD/ICE explainer.

PARAM_CENTER = 'center'
PARAM_DEBUG_RESIDUALS = 'debug_residuals'
PARAM_FEATURES = 'features'
PARAM_GRID_RESOLUTION = 'grid_resolution'
PARAM_HISTOGRAMS = 'histograms'
PARAM_MAX_FEATURES = 'max_features'
PARAM_NUMCAT_NUM_CHART = 'numcat_num_chart'
PARAM_NUMCAT_THRESHOLD = 'numcat_threshold'
PARAM_OOR_GRID_RESOLUTION = 'oor_grid_resolution'
PARAM_QTILE_BINS = 'quantile-bins'
PARAM_QTILE_GRID_RESOLUTION = 'quantile-bin-grid-resolution'
PARAM_SAMPLE_SIZE = 'sample_size'
PARAM_SORT_BINS = 'sort_bins'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **explainer_params) List

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

explain_problems() List[ProblemAndAction]

Determine problems of the interpreted/evaluated model(s), either by calculating them or by loading the problems persisted by the explain() method.

Returns:
List[ProblemAndAction]:

Interpreted/evaluated model(s) problems.

get_result() PdResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

h2o_sonar.explainers.segment_performance_explainer module

class h2o_sonar.explainers.segment_performance_explainer.SegmentPerformanceExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Segment performance explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/segment-performance/segment-performance

DEFAULT_DROP_COLS = []
DEFAULT_NUMBER_OF_BINS = 5
DEFAULT_PRECISION = 5
PARAM_DROP_COLS = 'drop_cols'
PARAM_NUMBER_OF_BINS = 'number_of_bins'
PARAM_PRECISION = 'precision'
PARAM_WORKER = 'worker_connection_key'
RESULT_FILE_CSV = 'segment-performance.csv'
class Result(persistence: ExplainerPersistence, explainer_id: str = '', h2o_sonar_config=None)

Bases: ExplainerResult

data() Frame
plot(*, feature_1: str = '', feature_2: str = '', file_path: str = '')
summary(**kwargs) Dict
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame | None = None, explanations_types: List = None, **e_params)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() Result
static normalize_scatter_plot(x: List, y: List, z: List, colors: List, x_axis_label: str, y_axis_label: str, plot_file_path: str, color_map: str = 'Wistia', figsize=(12, 10), dpi=120) str
normalize_to_gom(pd_result_df: DataFrame)
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
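
A sketch of segment-performance parameters (the dropped column is illustrative) and of the documented Result accessors; the explainer variable stands for a finished run and the feature names passed to plot() are illustrative:

    from h2o_sonar.explainers.segment_performance_explainer import SegmentPerformanceExplainer

    segment_params = {
        SegmentPerformanceExplainer.PARAM_NUMBER_OF_BINS: SegmentPerformanceExplainer.DEFAULT_NUMBER_OF_BINS,
        SegmentPerformanceExplainer.PARAM_DROP_COLS: ["row_id"],   # illustrative column
    }

    # after the run, per-segment metrics are a Frame and can be plotted per feature pair:
    result = explainer.get_result()
    segments = result.data()
    result.plot(feature_1="AGE", feature_2="LIMIT_BAL",            # illustrative feature names
                file_path="segment_performance.png")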

h2o_sonar.explainers.size_dependency_explainer module

class h2o_sonar.explainers.size_dependency_explainer.SizeDependencyExplainer

Bases: Explainer, ExplainerToMvTestAdapter

Size dependency explainer.

@see https://docs.h2o.ai/wave-apps/h2o-model-validation/guide/tests/supported-validation-tests/size-dependency/size-dependency

DEFAULT_NUMBER_OF_SPLITS = 2
DEFAULT_TIME_COLUMN = ''
DEFAULT_WORKER_CLEANUP = True
PARAM_NUMBER_OF_SPLITS = 'number_of_splits'
PARAM_PLOT_TYPE = 'plot_type'
PARAM_TIME_COLUMN = 'time_col'
PARAM_WORKER = 'worker_connection_key'
PARAM_WORKER_CLEANUP = 'worker_cleanup'
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X, y=None, explanations_types: List = None, **e_params)

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() Data3dResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.
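
A sketch of size-dependency parameters keyed by the documented constants; the number of splits and time column name are illustrative values, not defaults:

    from h2o_sonar.explainers.size_dependency_explainer import SizeDependencyExplainer

    size_dependency_params = {
        SizeDependencyExplainer.PARAM_NUMBER_OF_SPLITS: 4,     # illustrative; default is 2
        SizeDependencyExplainer.PARAM_TIME_COLUMN: "date",     # illustrative column name
        SizeDependencyExplainer.PARAM_WORKER_CLEANUP: SizeDependencyExplainer.DEFAULT_WORKER_CLEANUP,
    }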

h2o_sonar.explainers.summary_shap_explainer module

h2o_sonar.explainers.transformed_fi_shapley_explainer module

class h2o_sonar.explainers.transformed_fi_shapley_explainer.ShapleyMojoTransformedFeatureImportanceExplainer

Bases: Explainer, AbstractFeatureImportanceExplainer

DEFAULT_FAST_APPROX = True
OPT_BIN_1_CLASS = True
check_compatibility(params: CommonInterpretationParams | None = None, model: ExplainableModel | None = None, **explainer_params) bool

Explainer’s compatibility check (based on the given parameters) verifying that the explainer will be able to explain the given model. If this check returns False or raises an error, the explainer will not be run by the engine. The execution engine may, but does not have to, perform this check.

explain(X: Frame, y: Frame = None, explanations_types: List = None, **kwargs) List

Invoke this method to calculate and persist global and/or local explanations for the given dataset. Child classes override the base Explainer implementation (as this class does); the method is responsible for the calculation, building, and persistence of the explanations.

X: datatable.Frame

Dataset frame.

y: Optional[Union[datatable.Frame, Any]]

Labels.

explanations_types: List[Type[Explanation]]

Optional explanations to be built. All will be built if empty list or None provided. Get all supported types using has_explanation_types().

Returns:
List[Explanation]:

Explanations descriptors.

get_result() FeatureImportanceResult
setup(model: ExplainableModel | None, persistence: ExplainerPersistence, key: str = '', params: CommonInterpretationParams | None = None, **explainer_params)

Set all the parameters needed to execute fit() and explain().

Parameters:
model: Optional[Union[models.ExplainableModel, models.ExplainableModelHandle]]

Explainable model with (fit and) score methods (or None if 3rd party).

models

(Explainable) models.

persistence: ExplainerPersistence

Persistence API allowing (controlled) saving and loading of explanations.

key: str

Optional (given) explainer run key (generated otherwise).

params: CommonInterpretationParams

Common explainers parameters specified on explainer run.

explainer_params_as_str: Optional[str]

Explainer specific parameters in string representation.

dataset_api: Optional[datasets.DatasetApi]

Dataset API to create custom explainable datasets needed by this explainer.

model_api: Optional[models.ModelApi]

Model API to create custom explainable models needed by this explainer.

logger: Optional[loggers.SonarLogger]

Logger.

explainer_params:

Other explainers RUNTIME parameters, options, and configuration.

Module contents