Interpretation Parameters
The following parameters can be specified when starting a new interpretation using the Python API:
dataset : Union[str, Path, ExplainableDataset, datatable.Frame, Any]
    Dataset source: an explainable dataset instance, a datatable frame, a string (path to a CSV, .jay, or any other file type supported by datatable), a handle to a remote dataset hosted by Driverless AI, or a dictionary (used to construct the frame).
model : Union[str, Path, ExplainableModel, Any]
    Path to the model (str, Path), an explainable model (ExplainableModel), a handle to a remote model hosted by Driverless AI, or an instance of a 3rd-party model (like scikit-learn) to interpret.
target_col : str
    Target column name; must be a valid dataset column name.
explainers : Optional[List[Union[str, commons.ExplainerToRun]]]
    Explainer IDs to run within the interpretation, or ExplainerToRun instances with explainer parameters. If None or an empty list is given, all compatible explainers are run.
explainer_keywords : Optional[List[str]]
    Run the compatible explainers that have all of the given keywords (logical AND). This setting is used only if the explainers parameter is an empty list (or None).
validset : Optional[Union[str, Path, ExplainableDataset, datatable.Frame, Any]]
    Optional validation dataset; the sources may be the same as for dataset.
testset : Optional[Union[str, Path, ExplainableDataset, datatable.Frame, Any]]
    Optional test dataset; the sources may be the same as for dataset.
use_raw_features : bool
    True to use original features, False to use transformed features.
used_features : Optional[List]
    Optional parameter specifying the features (dataset columns) used by the model. This parameter is used when an instance of the model (not an ExplainableModel) is provided by the user, and therefore the ExplainableModel metadata are not available.
weight_col : str
    Name of the weight column to be used by explainers.
prediction_col : str
    Name of the predictions column, used in the case of a 3rd-party model (standalone MLI).
drop_cols : Optional[List]
    List of columns to drop from the interpretation, i.e., column names that should not be explained.
sample_num_rows : Optional[int]
    Sample the dataset down to the given number of rows. By default, the dataset is sampled based on the available RAM size (or to 25,000 rows). Use 0 to disable sampling, or an integer greater than 0 to sample to the specified number of rows.
sampler : Optional[DatasetSampler]
    Sampling method (implementation) to be used; see the h2o_sonar.utils.sampling module (documentation) for available sampling methods. Pass a sampler instance to use a specific sampling method.
container : Optional[Union[str, explainer_container.ExplainerContainer]]
    Optional explainer container name (str) or container instance to be used to run the interpretation.
results_location : Optional[Union[str, pathlib.Path, Dict, Any]]
    Where to store the interpretation results: filesystem (path as string or Path), memory (dictionary), or a database. If None, the results are stored in the current directory.
persistence_type : persist.PersistenceType
    Optional choice of the persistence type: file-system (default), in-memory, or database. This option does not override the persistence type when a container is provided.
args_as_json_location : Optional[Union[str, pathlib.Path]]
    Load all positional and keyword arguments from a JSON file. This is useful when the input is generated, persisted, repeated, or used from the CLI (which does not support all the options). IMPORTANT: if this argument is specified, all other function parameters are ignored.
upload_to : Union[str, config.ConnectionConfig]
    Upload the interpretation report to H2O GPT Enterprise in order to talk to the report.
log_level : int
    Optional log level for the container and the explainers.
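As a minimal sketch of how these parameters fit together, the snippet below assembles the keyword arguments for a new interpretation and persists them as the JSON file consumed via args_as_json_location. All paths, column names, and keyword values are placeholders, and the commented-out call assumes an entry point named h2o_sonar.interpret.run_interpretation; verify the function name and accepted keywords against your installed h2o_sonar version. Only the standard library is used so the sketch runs without h2o_sonar installed.

```python
import json
from pathlib import Path

# Keyword arguments for a new interpretation, mirroring the parameter
# list above.  All concrete values here are illustrative placeholders.
interpretation_args = {
    "dataset": "data/train.csv",       # any path datatable can read
    "target_col": "label",             # must be a valid dataset column
    "model": "models/pipeline.mojo",   # path, ExplainableModel, or 3rd-party model
    "explainers": [],                  # empty list/None -> run all compatible explainers
    "explainer_keywords": ["global"],  # applies only because `explainers` is empty
    "use_raw_features": True,          # explain original, not transformed, features
    "drop_cols": ["id"],               # columns that should not be explained
    "sample_num_rows": 0,              # 0 disables sampling
    "results_location": "results/",    # filesystem persistence (the default type)
}

# Persist the arguments so the interpretation can be replayed later via
# args_as_json_location (which makes all other parameters ignored).
args_file = Path("interpretation_args.json")
args_file.write_text(json.dumps(interpretation_args, indent=2))

# With h2o_sonar installed, the interpretation could then be started with
# something like (entry-point name assumed -- check the h2o_sonar docs):
#
#   from h2o_sonar import interpret
#   interpretation = interpret.run_interpretation(**interpretation_args)
#   # or, replaying the persisted arguments:
#   interpretation = interpret.run_interpretation(
#       args_as_json_location=args_file,
#   )
```

Persisting the arguments as JSON first is convenient because the same file works both from the Python API and from the CLI, which does not expose every option directly.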
See also: