This repository contains a collection of self-contained tutorials on model-agnostic and model-specific XAI methods. Each tutorial is a Jupyter Notebook with a short video lecture and practical exercises. The material has already been used in two courses: the Zero to Hero Summer Academy (fully online) and ml4hearth (hybrid setting). It can be adjusted to the available time frame and schedule, and is self-explanatory enough to be worked through offline.
The learning objectives are:
- understand the importance of interpretability
- discover the existing model-agnostic and model-specific XAI methods
- learn how to interpret the outputs and graphs of those methods with hands-on exercises
- learn to choose which method is suitable for a specific task
The following XAI methods are covered:
- Permutation Feature Importance
- SHapley Additive exPlanations (SHAP)
- Local Interpretable Model-Agnostic Explanations (LIME)
- Forest-Guided Clustering
- Grad-CAM
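To give a flavor of the hands-on exercises, here is a minimal sketch of the first method in the list, Permutation Feature Importance, using scikit-learn. The dataset and model are illustrative choices, not necessarily the ones used in the tutorial notebooks.

```python
# Minimal Permutation Feature Importance sketch (illustrative example only;
# the tutorial notebooks may use different data and models).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature column in turn and measure the drop in accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Interpreting the printed importance scores (and the corresponding plots) is exactly the kind of exercise the tutorials walk through.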
You can either create a local environment and install all the necessary packages (using the requirements.txt file) or execute the notebooks in the browser by clicking the 'Open in Colab' button. The second option requires no local installation, but you need access to a Google account.
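For the local option, the setup might look like the following, assuming Python 3 is available and the commands are run from the repository root (the environment name `xai-env` is just an example, not prescribed by the repository):

```shell
# Sketch of a local setup; the env name "xai-env" is an arbitrary example.
python3 -m venv xai-env
. xai-env/bin/activate
pip install -r requirements.txt   # install the tutorial dependencies
jupyter notebook                  # then open any tutorial notebook
```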
Comments and input are very welcome! If you have a suggestion or think something should be changed, please open an issue or submit a pull request.