diff --git a/README.md b/README.md
index c451584..b171542 100644
--- a/README.md
+++ b/README.md
@@ -226,37 +226,39 @@ You can find all available model IDs in the tables below (note that the full lea
| **23** | **Carmon2019Unlabeled** | *[Unlabeled Data Improves Adversarial Robustness](https://arxiv.org/abs/1905.13736)* | 89.69% | 59.53% | WideResNet-28-10 | NeurIPS 2019 |
| **24** | **Gowal2021Improving_R18_ddpm_100m** | *[Improving Robustness using Generated Data](https://arxiv.org/abs/2110.09468)* | 87.35% | 58.50% | PreActResNet-18 | NeurIPS 2021 |
| **25** | **Addepalli2021Towards_WRN34** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 85.32% | 58.04% | WideResNet-34-10 | OpenReview, Jun 2021 |
-| **26** | **Chen2021LTD_WRN34_20** | *[LTD: Low Temperature Distillation for Robust Adversarial Training](https://arxiv.org/abs/2111.02331)* | 86.03% | 57.71% | WideResNet-34-20 | arXiv, Nov 2021 |
-| **27** | **Rade2021Helper_R18_extra** | *[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)* | 89.02% | 57.67% | PreActResNet-18 | OpenReview, Jun 2021 |
-| **28** | **Jia2022LAS-AT_70_16** | *[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)* | 85.66% | 57.61% | WideResNet-70-16 | arXiv, Mar 2022 |
-| **29** | **Sehwag2020Hydra** | *[HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509)* | 88.98% | 57.14% | WideResNet-28-10 | NeurIPS 2020 |
-| **30** | **Gowal2020Uncovering_70_16** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 85.29% | 57.14% | WideResNet-70-16 | arXiv, Oct 2020 |
-| **31** | **Rade2021Helper_R18_ddpm** | *[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)* | 86.86% | 57.09% | PreActResNet-18 | OpenReview, Jun 2021 |
-| **32** | **Chen2021LTD_WRN34_10** | *[LTD: Low Temperature Distillation for Robust Adversarial Training](https://arxiv.org/abs/2111.02331)* | 85.21% | 56.94% | WideResNet-34-10 | arXiv, Nov 2021 |
-| **33** | **Gowal2020Uncovering_34_20** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 85.64% | 56.82% | WideResNet-34-20 | arXiv, Oct 2020 |
-| **34** | **Rebuffi2021Fixing_R18_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 83.53% | 56.66% | PreActResNet-18 | arXiv, Mar 2021 |
-| **35** | **Wang2020Improving** | *[Improving Adversarial Robustness Requires Revisiting Misclassified Examples](https://openreview.net/forum?id=rklOg6EFwS)* | 87.50% | 56.29% | WideResNet-28-10 | ICLR 2020 |
-| **36** | **Jia2022LAS-AT_34_10** | *[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)* | 84.98% | 56.26% | WideResNet-34-10 | arXiv, Mar 2022 |
-| **37** | **Wu2020Adversarial** | *[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)* | 85.36% | 56.17% | WideResNet-34-10 | NeurIPS 2020 |
-| **38** | **Sehwag2021Proxy_R18** | *[Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?](https://arxiv.org/abs/2104.09425)* | 84.59% | 55.54% | ResNet-18 | ICLR 2022 |
-| **39** | **Hendrycks2019Using** | *[Using Pre-Training Can Improve Model Robustness and Uncertainty](https://arxiv.org/abs/1901.09960)* | 87.11% | 54.92% | WideResNet-28-10 | ICML 2019 |
-| **40** | **Pang2020Boosting** | *[Boosting Adversarial Training with Hypersphere Embedding](https://arxiv.org/abs/2002.08619)* | 85.14% | 53.74% | WideResNet-34-20 | NeurIPS 2020 |
-| **41** | **Cui2020Learnable_34_20** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 88.70% | 53.57% | WideResNet-34-20 | ICCV 2021 |
-| **42** | **Zhang2020Attacks** | *[Attacks Which Do Not Kill Training Make Adversarial Learning Stronger](https://arxiv.org/abs/2002.11242)* | 84.52% | 53.51% | WideResNet-34-10 | ICML 2020 |
-| **43** | **Rice2020Overfitting** | *[Overfitting in adversarially robust deep learning](https://arxiv.org/abs/2002.11569)* | 85.34% | 53.42% | WideResNet-34-20 | ICML 2020 |
-| **44** | **Huang2020Self** | *[Self-Adaptive Training: beyond Empirical Risk Minimization](https://arxiv.org/abs/2002.10319)* | 83.48% | 53.34% | WideResNet-34-10 | NeurIPS 2020 |
-| **45** | **Zhang2019Theoretically** | *[Theoretically Principled Trade-off between Robustness and Accuracy](https://arxiv.org/abs/1901.08573)* | 84.92% | 53.08% | WideResNet-34-10 | ICML 2019 |
-| **46** | **Cui2020Learnable_34_10** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 88.22% | 52.86% | WideResNet-34-10 | ICCV 2021 |
-| **47** | **Chen2020Adversarial** | *[Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning](https://arxiv.org/abs/2003.12862)* | 86.04% | 51.56% | ResNet-50 (3x ensemble) | CVPR 2020 |
-| **48** | **Chen2020Efficient** | *[Efficient Robust Training via Backward Smoothing](https://arxiv.org/abs/2010.01278)* | 85.32% | 51.12% | WideResNet-34-10 | arXiv, Oct 2020 |
-| **49** | **Addepalli2021Towards_RN18** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 80.24% | 51.06% | ResNet-18 | OpenReview, Jun 2021 |
-| **50** | **Sitawarin2020Improving** | *[Improving Adversarial Robustness Through Progressive Hardening](https://arxiv.org/abs/2003.09347)* | 86.84% | 50.72% | WideResNet-34-10 | arXiv, Mar 2020 |
-| **51** | **Engstrom2019Robustness** | *[Robustness library](https://github.com/MadryLab/robustness)* | 87.03% | 49.25% | ResNet-50 | GitHub, Oct 2019 |
-| **52** | **Zhang2019You** | *[You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle](https://arxiv.org/abs/1905.00877)* | 87.20% | 44.83% | WideResNet-34-10 | NeurIPS 2019 |
-| **53** | **Andriushchenko2020Understanding** | *[Understanding and Improving Fast Adversarial Training](https://arxiv.org/abs/2007.02617)* | 79.84% | 43.93% | PreActResNet-18 | NeurIPS 2020 |
-| **54** | **Wong2020Fast** | *[Fast is better than free: Revisiting adversarial training](https://arxiv.org/abs/2001.03994)* | 83.34% | 43.21% | PreActResNet-18 | ICLR 2020 |
-| **55** | **Ding2020MMA** | *[MMA Training: Direct Input Space Margin Maximization through Adversarial Training](https://openreview.net/forum?id=HkeryxBtPB)* | 84.36% | 41.44% | WideResNet-28-4 | ICLR 2020 |
-| **56** | **Standard** | *[Standardly trained model](https://github.com/RobustBench/robustbench/)* | 94.78% | 0.00% | WideResNet-28-10 | N/A |
+| **26** | **Addepalli2022Efficient_WRN_34_10** | *[Efficient and Effective Augmentation Strategy for Adversarial Training](https://artofrobust.github.io/short_paper/31.pdf)* | 88.71% | 57.81% | WideResNet-34-10 | CVPRW 2022 |
+| **27** | **Chen2021LTD_WRN34_20** | *[LTD: Low Temperature Distillation for Robust Adversarial Training](https://arxiv.org/abs/2111.02331)* | 86.03% | 57.71% | WideResNet-34-20 | arXiv, Nov 2021 |
+| **28** | **Rade2021Helper_R18_extra** | *[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)* | 89.02% | 57.67% | PreActResNet-18 | OpenReview, Jun 2021 |
+| **29** | **Jia2022LAS-AT_70_16** | *[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)* | 85.66% | 57.61% | WideResNet-70-16 | arXiv, Mar 2022 |
+| **30** | **Sehwag2020Hydra** | *[HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509)* | 88.98% | 57.14% | WideResNet-28-10 | NeurIPS 2020 |
+| **31** | **Gowal2020Uncovering_70_16** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 85.29% | 57.14% | WideResNet-70-16 | arXiv, Oct 2020 |
+| **32** | **Rade2021Helper_R18_ddpm** | *[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)* | 86.86% | 57.09% | PreActResNet-18 | OpenReview, Jun 2021 |
+| **33** | **Chen2021LTD_WRN34_10** | *[LTD: Low Temperature Distillation for Robust Adversarial Training](https://arxiv.org/abs/2111.02331)* | 85.21% | 56.94% | WideResNet-34-10 | arXiv, Nov 2021 |
+| **34** | **Gowal2020Uncovering_34_20** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 85.64% | 56.82% | WideResNet-34-20 | arXiv, Oct 2020 |
+| **35** | **Rebuffi2021Fixing_R18_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 83.53% | 56.66% | PreActResNet-18 | arXiv, Mar 2021 |
+| **36** | **Wang2020Improving** | *[Improving Adversarial Robustness Requires Revisiting Misclassified Examples](https://openreview.net/forum?id=rklOg6EFwS)* | 87.50% | 56.29% | WideResNet-28-10 | ICLR 2020 |
+| **37** | **Jia2022LAS-AT_34_10** | *[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)* | 84.98% | 56.26% | WideResNet-34-10 | arXiv, Mar 2022 |
+| **38** | **Wu2020Adversarial** | *[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)* | 85.36% | 56.17% | WideResNet-34-10 | NeurIPS 2020 |
+| **39** | **Sehwag2021Proxy_R18** | *[Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?](https://arxiv.org/abs/2104.09425)* | 84.59% | 55.54% | ResNet-18 | ICLR 2022 |
+| **40** | **Hendrycks2019Using** | *[Using Pre-Training Can Improve Model Robustness and Uncertainty](https://arxiv.org/abs/1901.09960)* | 87.11% | 54.92% | WideResNet-28-10 | ICML 2019 |
+| **41** | **Pang2020Boosting** | *[Boosting Adversarial Training with Hypersphere Embedding](https://arxiv.org/abs/2002.08619)* | 85.14% | 53.74% | WideResNet-34-20 | NeurIPS 2020 |
+| **42** | **Cui2020Learnable_34_20** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 88.70% | 53.57% | WideResNet-34-20 | ICCV 2021 |
+| **43** | **Zhang2020Attacks** | *[Attacks Which Do Not Kill Training Make Adversarial Learning Stronger](https://arxiv.org/abs/2002.11242)* | 84.52% | 53.51% | WideResNet-34-10 | ICML 2020 |
+| **44** | **Rice2020Overfitting** | *[Overfitting in adversarially robust deep learning](https://arxiv.org/abs/2002.11569)* | 85.34% | 53.42% | WideResNet-34-20 | ICML 2020 |
+| **45** | **Huang2020Self** | *[Self-Adaptive Training: beyond Empirical Risk Minimization](https://arxiv.org/abs/2002.10319)* | 83.48% | 53.34% | WideResNet-34-10 | NeurIPS 2020 |
+| **46** | **Zhang2019Theoretically** | *[Theoretically Principled Trade-off between Robustness and Accuracy](https://arxiv.org/abs/1901.08573)* | 84.92% | 53.08% | WideResNet-34-10 | ICML 2019 |
+| **47** | **Cui2020Learnable_34_10** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 88.22% | 52.86% | WideResNet-34-10 | ICCV 2021 |
+| **48** | **Addepalli2022Efficient_RN18** | *[Efficient and Effective Augmentation Strategy for Adversarial Training](https://artofrobust.github.io/short_paper/31.pdf)* | 85.71% | 52.48% | ResNet-18 | CVPRW 2022 |
+| **49** | **Chen2020Adversarial** | *[Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning](https://arxiv.org/abs/2003.12862)* | 86.04% | 51.56% | ResNet-50 (3x ensemble) | CVPR 2020 |
+| **50** | **Chen2020Efficient** | *[Efficient Robust Training via Backward Smoothing](https://arxiv.org/abs/2010.01278)* | 85.32% | 51.12% | WideResNet-34-10 | arXiv, Oct 2020 |
+| **51** | **Addepalli2021Towards_RN18** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 80.24% | 51.06% | ResNet-18 | OpenReview, Jun 2021 |
+| **52** | **Sitawarin2020Improving** | *[Improving Adversarial Robustness Through Progressive Hardening](https://arxiv.org/abs/2003.09347)* | 86.84% | 50.72% | WideResNet-34-10 | arXiv, Mar 2020 |
+| **53** | **Engstrom2019Robustness** | *[Robustness library](https://github.com/MadryLab/robustness)* | 87.03% | 49.25% | ResNet-50 | GitHub, Oct 2019 |
+| **54** | **Zhang2019You** | *[You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle](https://arxiv.org/abs/1905.00877)* | 87.20% | 44.83% | WideResNet-34-10 | NeurIPS 2019 |
+| **55** | **Andriushchenko2020Understanding** | *[Understanding and Improving Fast Adversarial Training](https://arxiv.org/abs/2007.02617)* | 79.84% | 43.93% | PreActResNet-18 | NeurIPS 2020 |
+| **56** | **Wong2020Fast** | *[Fast is better than free: Revisiting adversarial training](https://arxiv.org/abs/2001.03994)* | 83.34% | 43.21% | PreActResNet-18 | ICLR 2020 |
+| **57** | **Ding2020MMA** | *[MMA Training: Direct Input Space Margin Maximization through Adversarial Training](https://openreview.net/forum?id=HkeryxBtPB)* | 84.36% | 41.44% | WideResNet-28-4 | ICLR 2020 |
+| **58** | **Standard** | *[Standardly trained model](https://github.com/RobustBench/robustbench/)* | 94.78% | 0.00% | WideResNet-28-10 | N/A |
@@ -304,8 +306,9 @@ You can find all available model IDs in the tables below (note that the full lea
| **12** | **Kireev2021Effectiveness_Gauss50percent** | *[On the effectiveness of adversarial training against common corruptions](https://arxiv.org/abs/2103.02325)* | 93.24% | 85.04% | PreActResNet-18 | arXiv, Mar 2021 |
| **13** | **Kireev2021Effectiveness_RLAT** | *[On the effectiveness of adversarial training against common corruptions](https://arxiv.org/abs/2103.02325)* | 93.10% | 84.10% | PreActResNet-18 | arXiv, Mar 2021 |
| **14** | **Rebuffi2021Fixing_70_16_cutmix_extra_Linf** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 92.23% | 82.82% | WideResNet-70-16 | arXiv, Mar 2021 |
-| **15** | **Addepalli2021Towards_WRN34** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 85.32% | 76.78% | WideResNet-34-10 | arXiv, Apr 2021 |
-| **16** | **Standard** | *[Standardly trained model](https://github.com/RobustBench/robustbench/)* | 94.78% | 73.46% | WideResNet-28-10 | N/A |
+| **15** | **Addepalli2022Efficient_WRN_34_10** | *[Efficient and Effective Augmentation Strategy for Adversarial Training](https://artofrobust.github.io/short_paper/31.pdf)* | 88.71% | 80.12% | WideResNet-34-10 | CVPRW 2022 |
+| **16** | **Addepalli2021Towards_WRN34** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 85.32% | 76.78% | WideResNet-34-10 | arXiv, Apr 2021 |
+| **17** | **Standard** | *[Standardly trained model](https://github.com/RobustBench/robustbench/)* | 94.78% | 73.46% | WideResNet-28-10 | N/A |
@@ -320,23 +323,25 @@ You can find all available model IDs in the tables below (note that the full lea
| **3** | **Pang2022Robustness_WRN70_16** | *[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 65.56% | 33.05% | WideResNet-70-16 | arXiv, Feb 2022 |
| **4** | **Rebuffi2021Fixing_28_10_cutmix_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 62.41% | 32.06% | WideResNet-28-10 | arXiv, Mar 2021 |
| **5** | **Jia2022LAS-AT_34_20** | *[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)* | 67.31% | 31.91% | WideResNet-34-20 | arXiv, Mar 2022 |
-| **6** | **Sehwag2021Proxy** | *[Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?](https://arxiv.org/abs/2104.09425)* | 65.93% | 31.15% | WideResNet-34-10 | ICLR 2022 |
-| **7** | **Pang2022Robustness_WRN28_10** | *[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 63.66% | 31.08% | WideResNet-28-10 | arXiv, Feb 2022 |
-| **8** | **Jia2022LAS-AT_34_10** | *[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)* | 64.89% | 30.77% | WideResNet-34-10 | arXiv, Mar 2022 |
-| **9** | **Chen2021LTD_WRN34_10** | *[LTD: Low Temperature Distillation for Robust Adversarial Training](https://arxiv.org/abs/2111.02331)* | 64.07% | 30.59% | WideResNet-34-10 | arXiv, Nov 2021 |
-| **10** | **Addepalli2021Towards_WRN34** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 65.73% | 30.35% | WideResNet-34-10 | OpenReview, Jun 2021 |
-| **11** | **Cui2020Learnable_34_20_LBGAT6** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 62.55% | 30.20% | WideResNet-34-20 | ICCV 2021 |
-| **12** | **Gowal2020Uncovering** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 60.86% | 30.03% | WideResNet-70-16 | arXiv, Oct 2020 |
-| **13** | **Cui2020Learnable_34_10_LBGAT6** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 60.64% | 29.33% | WideResNet-34-10 | ICCV 2021 |
-| **14** | **Rade2021Helper_R18_ddpm** | *[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)* | 61.50% | 28.88% | PreActResNet-18 | OpenReview, Jun 2021 |
-| **15** | **Wu2020Adversarial** | *[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)* | 60.38% | 28.86% | WideResNet-34-10 | NeurIPS 2020 |
-| **16** | **Rebuffi2021Fixing_R18_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 56.87% | 28.50% | PreActResNet-18 | arXiv, Mar 2021 |
-| **17** | **Hendrycks2019Using** | *[Using Pre-Training Can Improve Model Robustness and Uncertainty](https://arxiv.org/abs/1901.09960)* | 59.23% | 28.42% | WideResNet-28-10 | ICML 2019 |
-| **18** | **Cui2020Learnable_34_10_LBGAT0** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 70.25% | 27.16% | WideResNet-34-10 | ICCV 2021 |
-| **19** | **Addepalli2021Towards_PARN18** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 62.02% | 27.14% | PreActResNet-18 | OpenReview, Jun 2021 |
-| **20** | **Chen2020Efficient** | *[Efficient Robust Training via Backward Smoothing](https://arxiv.org/abs/2010.01278)* | 62.15% | 26.94% | WideResNet-34-10 | arXiv, Oct 2020 |
-| **21** | **Sitawarin2020Improving** | *[Improving Adversarial Robustness Through Progressive Hardening](https://arxiv.org/abs/2003.09347)* | 62.82% | 24.57% | WideResNet-34-10 | arXiv, Mar 2020 |
-| **22** | **Rice2020Overfitting** | *[Overfitting in adversarially robust deep learning](https://arxiv.org/abs/2002.11569)* | 53.83% | 18.95% | PreActResNet-18 | ICML 2020 |
+| **6** | **Addepalli2022Efficient_WRN_34_10** | *[Efficient and Effective Augmentation Strategy for Adversarial Training](https://artofrobust.github.io/short_paper/31.pdf)* | 68.75% | 31.85% | WideResNet-34-10 | CVPRW 2022 |
+| **7** | **Sehwag2021Proxy** | *[Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?](https://arxiv.org/abs/2104.09425)* | 65.93% | 31.15% | WideResNet-34-10 | ICLR 2022 |
+| **8** | **Pang2022Robustness_WRN28_10** | *[Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 63.66% | 31.08% | WideResNet-28-10 | ICML 2022 |
+| **9** | **Jia2022LAS-AT_34_10** | *[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)* | 64.89% | 30.77% | WideResNet-34-10 | arXiv, Mar 2022 |
+| **10** | **Chen2021LTD_WRN34_10** | *[LTD: Low Temperature Distillation for Robust Adversarial Training](https://arxiv.org/abs/2111.02331)* | 64.07% | 30.59% | WideResNet-34-10 | arXiv, Nov 2021 |
+| **11** | **Addepalli2021Towards_WRN34** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 65.73% | 30.35% | WideResNet-34-10 | OpenReview, Jun 2021 |
+| **12** | **Cui2020Learnable_34_20_LBGAT6** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 62.55% | 30.20% | WideResNet-34-20 | ICCV 2021 |
+| **13** | **Gowal2020Uncovering** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 60.86% | 30.03% | WideResNet-70-16 | arXiv, Oct 2020 |
+| **14** | **Cui2020Learnable_34_10_LBGAT6** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 60.64% | 29.33% | WideResNet-34-10 | ICCV 2021 |
+| **15** | **Rade2021Helper_R18_ddpm** | *[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)* | 61.50% | 28.88% | PreActResNet-18 | OpenReview, Jun 2021 |
+| **16** | **Wu2020Adversarial** | *[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)* | 60.38% | 28.86% | WideResNet-34-10 | NeurIPS 2020 |
+| **17** | **Rebuffi2021Fixing_R18_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 56.87% | 28.50% | PreActResNet-18 | arXiv, Mar 2021 |
+| **18** | **Hendrycks2019Using** | *[Using Pre-Training Can Improve Model Robustness and Uncertainty](https://arxiv.org/abs/1901.09960)* | 59.23% | 28.42% | WideResNet-28-10 | ICML 2019 |
+| **19** | **Addepalli2022Efficient_RN18** | *[Efficient and Effective Augmentation Strategy for Adversarial Training](https://artofrobust.github.io/short_paper/31.pdf)* | 65.45% | 27.67% | ResNet-18 | CVPRW 2022 |
+| **20** | **Cui2020Learnable_34_10_LBGAT0** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 70.25% | 27.16% | WideResNet-34-10 | ICCV 2021 |
+| **21** | **Addepalli2021Towards_PARN18** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 62.02% | 27.14% | PreActResNet-18 | OpenReview, Jun 2021 |
+| **22** | **Chen2020Efficient** | *[Efficient Robust Training via Backward Smoothing](https://arxiv.org/abs/2010.01278)* | 62.15% | 26.94% | WideResNet-34-10 | arXiv, Oct 2020 |
+| **23** | **Sitawarin2020Improving** | *[Improving Adversarial Robustness Through Progressive Hardening](https://arxiv.org/abs/2003.09347)* | 62.82% | 24.57% | WideResNet-34-10 | arXiv, Mar 2020 |
+| **24** | **Rice2020Overfitting** | *[Overfitting in adversarially robust deep learning](https://arxiv.org/abs/2002.11569)* | 53.83% | 18.95% | PreActResNet-18 | ICML 2020 |
#### Corruptions
@@ -349,10 +354,11 @@ You can find all available model IDs in the tables below (note that the full lea
| **5** | **Diffenderfer2021Winning_Binary** | *[A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness](https://arxiv.org/abs/2106.09129)* | 77.69% | 65.26% | WideResNet-18-2 | NeurIPS 2021 |
| **6** | **Hendrycks2020AugMix_ResNeXt** | *[AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty](https://arxiv.org/abs/1912.02781)* | 78.90% | 65.14% | ResNeXt29_32x4d | ICLR 2020 |
| **7** | **Hendrycks2020AugMix_WRN** | *[AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty](https://arxiv.org/abs/1912.02781)* | 76.28% | 64.11% | WideResNet-40-2 | ICLR 2020 |
-| **8** | **Gowal2020Uncovering_extra_Linf** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 69.15% | 56.00% | WideResNet-70-16 | arXiv, Oct 2020 |
-| **9** | **Addepalli2021Towards_WRN34** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 65.73% | 54.88% | WideResNet-34-10 | OpenReview, Jun 2021 |
-| **10** | **Addepalli2021Towards_PARN18** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 62.02% | 51.77% | PreActResNet-18 | OpenReview, Jun 2021 |
-| **11** | **Gowal2020Uncovering_Linf** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 60.86% | 49.46% | WideResNet-70-16 | arXiv, Oct 2020 |
+| **8** | **Addepalli2022Efficient_WRN_34_10** | *[Efficient and Effective Augmentation Strategy for Adversarial Training](https://artofrobust.github.io/short_paper/31.pdf)* | 68.75% | 56.95% | WideResNet-34-10 | CVPRW 2022 |
+| **9** | **Gowal2020Uncovering_extra_Linf** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 69.15% | 56.00% | WideResNet-70-16 | arXiv, Oct 2020 |
+| **10** | **Addepalli2021Towards_WRN34** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 65.73% | 54.88% | WideResNet-34-10 | OpenReview, Jun 2021 |
+| **11** | **Addepalli2021Towards_PARN18** | *[Towards Achieving Adversarial Robustness Beyond Perceptual Limits](https://openreview.net/forum?id=SHB_znlW5G7)* | 62.02% | 51.77% | PreActResNet-18 | OpenReview, Jun 2021 |
+| **12** | **Gowal2020Uncovering_Linf** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 60.86% | 49.46% | WideResNet-70-16 | arXiv, Oct 2020 |
diff --git a/model_info/cifar10/corruptions/Addepalli2022Efficient_WRN_34_10.json b/model_info/cifar10/corruptions/Addepalli2022Efficient_WRN_34_10.json
index ee9a19c..174d2e7 100644
--- a/model_info/cifar10/corruptions/Addepalli2022Efficient_WRN_34_10.json
+++ b/model_info/cifar10/corruptions/Addepalli2022Efficient_WRN_34_10.json
@@ -10,5 +10,5 @@
"eps": null,
"clean_acc": "88.71",
"reported": "80.12",
- "corruptions_acc": "80.12",
+ "corruptions_acc": "80.12"
}
\ No newline at end of file
diff --git a/model_info/cifar100/Linf/Addepalli2022Efficient_WRN_34_10.json b/model_info/cifar100/Linf/Addepalli2022Efficient_WRN_34_10.json
index 3e3a1a8..85984d3 100644
--- a/model_info/cifar100/Linf/Addepalli2022Efficient_WRN_34_10.json
+++ b/model_info/cifar100/Linf/Addepalli2022Efficient_WRN_34_10.json
@@ -11,5 +11,5 @@
"clean_acc": "68.75",
"reported": "31.85",
"autoattack_acc": "31.85",
- "unreliable": false,
+ "unreliable": false
}
\ No newline at end of file
diff --git a/model_info/cifar100/corruptions/Addepalli2022Efficient_WRN_34_10.json b/model_info/cifar100/corruptions/Addepalli2022Efficient_WRN_34_10.json
index 8f7a9d8..641522f 100644
--- a/model_info/cifar100/corruptions/Addepalli2022Efficient_WRN_34_10.json
+++ b/model_info/cifar100/corruptions/Addepalli2022Efficient_WRN_34_10.json
@@ -10,5 +10,5 @@
"eps": null,
"clean_acc": "68.75",
"reported": "56.95",
- "corruptions_acc": "56.95",
+ "corruptions_acc": "56.95"
}
\ No newline at end of file
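
The three model_info hunks above all make the same fix: dropping a trailing comma after the last key-value pair, which strict JSON (RFC 8259) forbids. A minimal sketch of why this matters, using Python's standard `json` module (the literals below are abbreviated stand-ins for the real model_info files, not their full contents):

```python
import json

# Pre-fix shape: trailing comma after the last member is invalid JSON,
# so strict parsers reject the whole file.
broken = '{"clean_acc": "88.71", "corruptions_acc": "80.12",}'
try:
    json.loads(broken)
    parsed_broken = True
except json.JSONDecodeError:
    parsed_broken = False

# Post-fix shape: identical content, no trailing comma, parses cleanly.
fixed = '{"clean_acc": "88.71", "corruptions_acc": "80.12"}'
info = json.loads(fixed)

print(parsed_broken)            # False
print(info["corruptions_acc"])  # 80.12
```

Some tooling (e.g. JSON5-aware editors) tolerates trailing commas, which is how they slip in unnoticed until a strict loader reads the file.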