We’ll go into detail about LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), analysing a practical example of applying the two explainers to a classification problem.
More about the meetup on the community website: https://iasi.ai/meetups/decrypting-a-machine-learning-model/
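To give a flavour of what that walkthrough looks like, here is a minimal sketch of applying both explainers to a classification model. It is not the meetup's actual notebook: it assumes scikit-learn's breast cancer dataset and a random forest classifier purely for illustration.

```python
# Hedged sketch: example dataset and model are assumptions, not the meetup's code.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a simple classifier to explain
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# LIME: fit a local surrogate model around one prediction
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(lime_exp.as_list())  # top local feature contributions for this instance

# SHAP: Shapley-value attributions for the same tree-based model
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
# Older SHAP versions return one array per class, newer ones a single array
print(shap_values[0] if isinstance(shap_values, list) else shap_values[0, :, :])
```

LIME explains one prediction at a time by perturbing the input locally, while SHAP assigns each feature an additive contribution grounded in Shapley values; comparing the two outputs on the same instance is the kind of exercise covered in the session.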