
4. Model Interpretations and Embeddings


At this point, we would like to see what the model "sees" and also demonstrate cluster separation of the embedded patches, both to further validate the model and to identify systematic bias in the study design. A few readings are recommended in the manuscript to illustrate these concepts, but for your reference, SHAP https://github.com/slundberg/shap and UMAP https://github.com/lmcinnes/umap are both great places to get started; the readings on the SHAP method are particularly worthwhile. In the future, we may also include a Grad-CAM implementation: https://arxiv.org/pdf/1710.11063.pdf

Please note that for now, these tasks can only be completed after classification runs.

Embedding via UMAP

The embeddings PKL file is an output from the --prediction phase (--extract_embedding) of the train_model module, which you can explore when Making Predictions.

pathflowai-visualize plot_embeddings -i embeddings.pkl -o predictions/embeddings.html -nn 15 -a 1 --remove_background_annotation "0" -ma 0.005 &

Here, I'm using plotly to produce an interactive 3-D HTML plot, where each point is embedded with UMAP using 15 nearest neighbors (see the documentation) and colored by the original coverage of the -a 1 annotation (this references a column in the patch information dataframe, which is also stored in the embedding file for reference). I can remove datapoints that add a lot of noise and hide the learned signal by trimming patches with a high prevalence of background (--remove_background_annotation "0" -ma 0.005); in the parenchyma vs. portal case, this focuses the plot on that distinction.
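
If you want to explore the embeddings outside of the CLI, here is a minimal sketch of the same idea using the umap-learn and plotly libraries. The structure assumed for embeddings.pkl (a dataframe holding embedding features plus annotation-coverage columns "0" and "1") and the semantics of the background cutoff are assumptions, not PathFlowAI's exact internals, so inspect your own file first.

```python
# Minimal sketch: load saved patch embeddings, trim background-heavy
# patches, project with UMAP, and write an interactive 3-D plotly figure.
import pickle

import umap
import plotly.express as px

with open("embeddings.pkl", "rb") as f:
    emb_df = pickle.load(f)  # assumed: dataframe of embedding features + coverage columns

# Keep patches with little background coverage, mirroring (assumed semantics)
# --remove_background_annotation "0" -ma 0.005
keep = emb_df["0"] <= 0.005
features = (emb_df.loc[keep]
            .select_dtypes(include="number")
            .drop(columns=["0", "1"], errors="ignore"))

proj = umap.UMAP(n_components=3, n_neighbors=15).fit_transform(features.values)
fig = px.scatter_3d(x=proj[:, 0], y=proj[:, 1], z=proj[:, 2],
                    color=emb_df.loc[keep, "1"],  # color by coverage of annotation "1" (-a 1)
                    opacity=0.7)
fig.write_html("predictions/embeddings_manual.html")
```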

pathflowai-visualize plot_image_umap_embeddings -b A01 --remove_background_annotation "0" -ma 0.01 -i inputs/ -mpl -e embeddings.pkl -o predictions/embeddings.png

This command does the same thing as above; the only difference is that with -i inputs/ specified (-mpl is needed to get it to work, as the other method is partially defunct), the original image patches are plotted as the embedding points. This overlays actual images of the patches onto a blank canvas so you can see that patches with differing morphology separate.
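
The patch-overlay idea itself is straightforward to sketch with matplotlib. In this hedged example the patch thumbnails and their 2-D UMAP coordinates are assumed to already be in memory; the real command reads the patches from -i inputs/ and the coordinates from the embeddings file.

```python
# Minimal sketch: place thumbnail images of patches at their 2-D UMAP
# coordinates, similar in spirit to the -mpl path of plot_image_umap_embeddings.
# `patches` (N x H x W x 3 uint8 arrays) and `coords` (N x 2 projection)
# are assumed inputs, not PathFlowAI's actual data structures.
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox

def plot_patch_embeddings(patches, coords, out_png="predictions/embeddings.png", zoom=0.15):
    fig, ax = plt.subplots(figsize=(12, 12))
    ax.scatter(coords[:, 0], coords[:, 1], s=0)   # invisible points set the axis limits
    for img, (x, y) in zip(patches, coords):
        ab = AnnotationBbox(OffsetImage(img, zoom=zoom), (x, y), frameon=False)
        ax.add_artist(ab)
    ax.set_xlabel("UMAP 1")
    ax.set_ylabel("UMAP 2")
    fig.savefig(out_png, dpi=300)
```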

Model Interpretation with SHAP

CUDA_VISIBLE_DEVICES=0 pathflowai-visualize shapley_plot -bs 32 -m output_model.pkl -p sigmoid -mth gradient -l 0.8 -ns 400

SHAP will overlay a red/blue heatmap/gradient on each image patch to depict the regions most responsible for that classification ("see" what the model "sees"): higher-magnitude red/blue indicates greater importance, red means that pixel/region in the patch increased the probability of predicting that class, and blue means the opposite. The options:

- -mth denotes the method used to train the SHAP model (deeplift or gradient).
- -l smooths out the red and blue labels.
- -p controls whether to also label each image patch with the probability of making that prediction, to see how increased probability corresponds to changes in the feature attributions.
- -ns controls how much to train the SHAP model; a higher value makes the feature attributions more accurate.
- -bs (batch size) sets how many test samples to interpret and how many training patches to train the SHAP model on.
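
For reference, this is roughly how the gradient method is applied with the shap library to a PyTorch classifier. The model path, patch shapes, and placeholder tensors below are assumptions for illustration, not PathFlowAI's exact internals.

```python
# Minimal sketch of the SHAP gradient method on a PyTorch classifier,
# in the spirit of `shapley_plot -mth gradient -ns 400`.
import numpy as np
import torch
import shap

# Assumed: the trained classifier saved by PathFlowAI can be loaded with torch.load.
model = torch.load("output_model.pkl", map_location="cpu")
model.eval()

# Placeholder tensors standing in for real training/test patches (N, C, H, W);
# the 224x224 patch size is an assumption.
background = torch.randn(32, 3, 224, 224)
test_images = torch.randn(8, 3, 224, 224)

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images, nsamples=400)  # mirrors -ns 400

# shap.image_plot expects channel-last numpy arrays
shap_numpy = [np.transpose(v, (0, 2, 3, 1)) for v in shap_values]
test_numpy = np.transpose(test_images.numpy(), (0, 2, 3, 1))
shap.image_plot(shap_numpy, test_numpy)
```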

Thank you for reading this short guide on operating the PathFlowAI workflow. This guide will be expanded, and as always, your contributions to the development of this workflow are more than welcome in the form of Issues and Pull Requests. Never hesitate to ask if you need any help setting up and running the workflow. Enjoy!
