We will go through the why, how, and what of model explainability to build consistent, robust, and trustworthy models. We explore why complex models fail to deliver meaningful insights, cause-effect relationships, and interconnected effects within data, and how explainers can empower decision makers with more than just predictions. We evaluate SHAP, an intuitive game-theory-based algorithm, with a working implementation in Python. We also pinpoint where input from domain experts is necessary, using two practical industry applications to facilitate further exploration.
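
As a preview of the implementation covered later, here is a minimal sketch of how SHAP values might be computed and visualised in Python. It assumes the open-source `shap` library, scikit-learn's built-in diabetes dataset, and a random-forest regressor; these are illustrative choices, not the exact setup used in the article.

```python
# Minimal SHAP sketch (illustrative assumptions: shap library, scikit-learn,
# the built-in diabetes dataset, and a random-forest regressor).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Load a small tabular dataset with named features.
X, y = load_diabetes(return_X_y=True, as_frame=True)

# Fit a "complex" model whose predictions we want to explain.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive predictions, and in which direction.
shap.summary_plot(shap_values, X)
```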