List of DL papers on Interpretability, Learning from Limited Data, and the Information Bottleneck
[1] Interpretable and Pedagogical Examples
[2] Interpretable Explanations of Black Boxes by Meaningful Perturbation
[3] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
[4] Grad-CAM++: Generalized Gradient-based Visual Explanations for Deep Convolutional Networks
[5] Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
[6] Compressing Neural Networks using the Variational Information Bottleneck
[7] One Big Net For Everything
[8] Minimal gated unit for recurrent neural networks
[9] Gated Feedback Recurrent Neural Networks
[10] MinimalRNN: Toward More Interpretable and Trainable Recurrent Neural Networks