Fix typos in README (#1444)
Summary:
Pull Request resolved: #1444

Correct a couple of typos.

Reviewed By: cyrjano

Differential Revision: D66016146

fbshipit-source-id: 335eda3d0cb0c5756a069dc0ce3bd92f0d16d112
craymichael authored and facebook-github-bot committed Nov 15, 2024
1 parent 49b1ed4 commit e13cf8e
Showing 1 changed file with 2 additions and 2 deletions.
README.md: 4 changes (2 additions & 2 deletions)
@@ -30,7 +30,7 @@ libraries such as torchvision, torchtext, and others.

#### About Captum

-With the increase in model complexity and the resulting lack of transparency, model interpretability methods have become increasingly important. Model understanding is both an active area of research as well as an area of focus for practical applications across industries using machine learning. Captum provides state-of-the-art algorithms such as Integrated Gradients, Testing with Concept Activaton Vectors (TCAV), TracIn influence functions, just to name a few, that provide researchers and developers with an easy way to understand which features, training examples or concepts contribute to a models' predictions and in general what and how the model learns. In addition to that, Captum also provides adversarial attacks and minimal input perturbation capabilities that can be used both for generating counterfactual explanations and adversarial perturbations.
+With the increase in model complexity and the resulting lack of transparency, model interpretability methods have become increasingly important. Model understanding is both an active area of research as well as an area of focus for practical applications across industries using machine learning. Captum provides state-of-the-art algorithms such as Integrated Gradients, Testing with Concept Activation Vectors (TCAV), TracIn influence functions, just to name a few, that provide researchers and developers with an easy way to understand which features, training examples or concepts contribute to a models' predictions and in general what and how the model learns. In addition to that, Captum also provides adversarial attacks and minimal input perturbation capabilities that can be used both for generating counterfactual explanations and adversarial perturbations.

<!--For model developers, Captum can be used to improve and troubleshoot models by facilitating the identification of different features that contribute to a model’s output in order to design better models and troubleshoot unexpected model outputs. -->

@@ -461,7 +461,7 @@ You can watch the recorded talk [here](https://www.youtube.com/watch?v=ayhBHZYje

**ICLR 2021 workshop on Responsible AI**:
- [Paper](https://arxiv.org/abs/2009.07896) on the Captum Library
-- [Paper](https://arxiv.org/abs/2106.07475) on Invesitgating Sanity Checks for Saliency Maps
+- [Paper](https://arxiv.org/abs/2106.07475) on Investigating Sanity Checks for Saliency Maps


Summer school on medical imaging at University of Lyon. A class on model explainability (link to the video)
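The "About Captum" paragraph in the diff above names attribution algorithms such as Integrated Gradients. For orientation, here is a minimal sketch of that style of usage with Captum's `IntegratedGradients`; the toy model, input shape, and target index are illustrative assumptions, not part of this commit:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Illustrative toy classifier; any PyTorch forward function works here.
model = nn.Sequential(nn.Linear(3, 3), nn.ReLU(), nn.Linear(3, 2))
model.eval()

inputs = torch.rand(1, 3, requires_grad=True)

# Integrated Gradients attributes the prediction for `target` to each
# input feature by integrating gradients along a path from a baseline
# (all-zeros by default) to the input.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, target=1, return_convergence_delta=True
)
print(attributions)  # per-feature attribution scores
print(delta)         # approximation error of the path integral
```

The convergence delta estimates how well the integral was approximated; if it is large, raising `n_steps` in `attribute` tightens the approximation.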
