Resources
This section provides a curated collection of external resources for deepening your understanding of machine learning interpretability, explainability, and responsible AI practices. The materials include meta-lists, books, academic articles, and open-source repositories that complement H2O Sonar's capabilities and provide broader context for predictive AI interpretability.
Whether you want to understand the theoretical foundations of model explainability, implement best practices for responsible machine learning, or explore advanced techniques for debugging and fairness testing, these resources offer insights from leading researchers and practitioners.
Meta-Lists
“Awesome” Machine Learning Interpretability - A curated list of machine learning interpretability resources, including tools, papers, and frameworks.
Books
Responsible Machine Learning: Actionable Strategies for Mitigating Risks & Driving Adoption - A practical guide to implementing responsible ML practices in production environments.
An Introduction to Machine Learning Interpretability, 2nd Edition - An introduction to common interpretability techniques and their applications.
Machine Learning Interpretability with H2O Driverless AI - A focused guide on using interpretability features in H2O Driverless AI.
Fairness and Machine Learning - An in-depth exploration of fairness considerations in machine learning systems.
Interpretable Machine Learning - An online book covering a wide range of interpretability methods for machine learning models.
Articles
A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing - A systematic approach to implementing responsible ML workflows.
Proposed Guidelines for the Responsible Use of Explainable Machine Learning - Evidence-based guidelines for applying explainable ML in practice.
Testing Machine Learning Explanation Techniques - Practical approaches to validating and testing explanation methods.
Debugging the Black-Box COMPAS Risk Assessment Instrument to Diagnose and Remediate Bias - A case study on diagnosing and addressing bias in high-stakes ML systems.
Practical Techniques for Interpreting Machine Learning Models: Introductory Open Source Examples Using Python, H2O, and XGBoost - Hands-on tutorial materials with code examples.
On the Art and Science of Explainable Machine Learning - A survey of explainable ML techniques and their theoretical foundations.
Proposals for Model Vulnerability and Security - Security considerations for machine learning models in production.
Real-World Strategies for Model Debugging - Practical strategies for identifying and resolving model issues.
Warning Signs: Security and Privacy in an Age of Machine Learning - A report examining security and privacy risks in ML systems.
Why You Should Care About Debugging Machine Learning Models - The business case and technical rationale for model debugging.
Repositories
H2O.ai MLI Resources - A collection of tutorials, notebooks, and examples for machine learning interpretability.