Commit

update venues in readme
fra31 committed Jun 7, 2022
1 parent 64686d4 commit 17b6515
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -208,14 +208,14 @@ You can find all available model IDs in the tables below (note that the full lea
| <sub>**5**</sub> | <sub><sup>**Rebuffi2021Fixing_70_16_cutmix_ddpm**</sup></sub> | <sub>*[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)*</sub> | <sub>88.54%</sub> | <sub>64.20%</sub> | <sub>WideResNet-70-16</sub> | <sub>arXiv, Mar 2021</sub> |
| <sub>**6**</sub> | <sub><sup>**Kang2021Stable**</sup></sub> | <sub>*[Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks](https://arxiv.org/abs/2110.12976)*</sub> | <sub>93.73%</sub> | <sub>64.20%</sub> | <sub>WideResNet-70-16, Neural ODE block</sub> | <sub>NeurIPS 2021</sub> |
| <sub>**7**</sub> | <sub><sup>**Gowal2021Improving_28_10_ddpm_100m**</sup></sub> | <sub>*[Improving Robustness using Generated Data](https://arxiv.org/abs/2110.09468)*</sub> | <sub>87.50%</sub> | <sub>63.38%</sub> | <sub>WideResNet-28-10</sub> | <sub>NeurIPS 2021</sub> |
- | <sub>**8**</sub> | <sub><sup>**Pang2022Robustness_WRN70_16**</sup></sub> | <sub>*[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)*</sub> | <sub>89.01%</sub> | <sub>63.35%</sub> | <sub>WideResNet-70-16</sub> | <sub>arXiv, Feb 2022</sub> |
+ | <sub>**8**</sub> | <sub><sup>**Pang2022Robustness_WRN70_16**</sup></sub> | <sub>*[Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)*</sub> | <sub>89.01%</sub> | <sub>63.35%</sub> | <sub>WideResNet-70-16</sub> | <sub>ICML 2022</sub> |
| <sub>**9**</sub> | <sub><sup>**Rade2021Helper_extra**</sup></sub> | <sub>*[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)*</sub> | <sub>91.47%</sub> | <sub>62.83%</sub> | <sub>WideResNet-34-10</sub> | <sub>OpenReview, Jun 2021</sub> |
| <sub>**10**</sub> | <sub><sup>**Sehwag2021Proxy_ResNest152**</sup></sub> | <sub>*[Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?](https://arxiv.org/abs/2104.09425)*</sub> | <sub>87.30%</sub> | <sub>62.79%</sub> | <sub>ResNest152</sub> | <sub>ICLR 2022</sub> |
| <sub>**11**</sub> | <sub><sup>**Gowal2020Uncovering_28_10_extra**</sup></sub> | <sub>*[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)*</sub> | <sub>89.48%</sub> | <sub>62.76%</sub> | <sub>WideResNet-28-10</sub> | <sub>arXiv, Oct 2020</sub> |
| <sub>**12**</sub> | <sub><sup>**Huang2021Exploring_ema**</sup></sub> | <sub>*[Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks](https://arxiv.org/abs/2110.03825)*</sub> | <sub>91.23%</sub> | <sub>62.54%</sub> | <sub>WideResNet-34-R</sub> | <sub>NeurIPS 2021</sub> |
| <sub>**13**</sub> | <sub><sup>**Huang2021Exploring**</sup></sub> | <sub>*[Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks](https://arxiv.org/abs/2110.03825)*</sub> | <sub>90.56%</sub> | <sub>61.56%</sub> | <sub>WideResNet-34-R</sub> | <sub>NeurIPS 2021</sub> |
| <sub>**14**</sub> | <sub><sup>**Dai2021Parameterizing**</sup></sub> | <sub>*[Parameterizing Activation Functions for Adversarial Robustness](https://arxiv.org/abs/2110.05626)*</sub> | <sub>87.02%</sub> | <sub>61.55%</sub> | <sub>WideResNet-28-10-PSSiLU</sub> | <sub>arXiv, Oct 2021</sub> |
- | <sub>**15**</sub> | <sub><sup>**Pang2022Robustness_WRN28_10**</sup></sub> | <sub>*[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)*</sub> | <sub>88.61%</sub> | <sub>61.04%</sub> | <sub>WideResNet-28-10</sub> | <sub>arXiv, Feb 2022</sub> |
+ | <sub>**15**</sub> | <sub><sup>**Pang2022Robustness_WRN28_10**</sup></sub> | <sub>*[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)*</sub> | <sub>88.61%</sub> | <sub>61.04%</sub> | <sub>WideResNet-28-10</sub> | <sub>ICML 2022</sub> |
| <sub>**16**</sub> | <sub><sup>**Rade2021Helper_ddpm**</sup></sub> | <sub>*[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)*</sub> | <sub>88.16%</sub> | <sub>60.97%</sub> | <sub>WideResNet-28-10</sub> | <sub>OpenReview, Jun 2021</sub> |
| <sub>**17**</sub> | <sub><sup>**Rebuffi2021Fixing_28_10_cutmix_ddpm**</sup></sub> | <sub>*[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)*</sub> | <sub>87.33%</sub> | <sub>60.73%</sub> | <sub>WideResNet-28-10</sub> | <sub>arXiv, Mar 2021</sub> |
| <sub>**18**</sub> | <sub><sup>**Sridhar2021Robust_34_15**</sup></sub> | <sub>*[Improving Neural Network Robustness via Persistency of Excitation](https://arxiv.org/abs/2106.02078)*</sub> | <sub>86.53%</sub> | <sub>60.41%</sub> | <sub>WideResNet-34-15</sub> | <sub>ACC 2022</sub> |
@@ -320,7 +320,7 @@ You can find all available model IDs in the tables below (note that the full lea
|:---:|---|---|:---:|:---:|:---:|:---:|
| <sub>**1**</sub> | <sub><sup>**Gowal2020Uncovering_extra**</sup></sub> | <sub>*[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)*</sub> | <sub>69.15%</sub> | <sub>36.88%</sub> | <sub>WideResNet-70-16</sub> | <sub>arXiv, Oct 2020</sub> |
| <sub>**2**</sub> | <sub><sup>**Rebuffi2021Fixing_70_16_cutmix_ddpm**</sup></sub> | <sub>*[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)*</sub> | <sub>63.56%</sub> | <sub>34.64%</sub> | <sub>WideResNet-70-16</sub> | <sub>arXiv, Mar 2021</sub> |
- | <sub>**3**</sub> | <sub><sup>**Pang2022Robustness_WRN70_16**</sup></sub> | <sub>*[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)*</sub> | <sub>65.56%</sub> | <sub>33.05%</sub> | <sub>WideResNet-70-16</sub> | <sub>arXiv, Feb 2022</sub> |
+ | <sub>**3**</sub> | <sub><sup>**Pang2022Robustness_WRN70_16**</sup></sub> | <sub>*[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)*</sub> | <sub>65.56%</sub> | <sub>33.05%</sub> | <sub>WideResNet-70-16</sub> | <sub>ICML 2022</sub> |
| <sub>**4**</sub> | <sub><sup>**Rebuffi2021Fixing_28_10_cutmix_ddpm**</sup></sub> | <sub>*[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)*</sub> | <sub>62.41%</sub> | <sub>32.06%</sub> | <sub>WideResNet-28-10</sub> | <sub>arXiv, Mar 2021</sub> |
| <sub>**5**</sub> | <sub><sup>**Jia2022LAS-AT_34_20**</sup></sub> | <sub>*[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)*</sub> | <sub>67.31%</sub> | <sub>31.91%</sub> | <sub>WideResNet-34-20</sub> | <sub>arXiv, Mar 2022</sub> |
| <sub>**6**</sub> | <sub><sup>**Addepalli2022Efficient_WRN_34_10**</sup></sub> | <sub>*[Efficient and Effective Augmentation Strategy for Adversarial Training](https://artofrobust.github.io/short_paper/31.pdf)*</sub> | <sub>68.75%</sub> | <sub>31.85%</sub> | <sub>WideResNet-34-10</sub> | <sub>CVPRW 2022</sub> |
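Each leaderboard row above is a fixed seven-column markdown table line whose cells are wrapped in `<sub>`/`<sup>` tags. As a quick sketch of how the model ID and accuracy numbers can be recovered from such a row (assuming this fixed layout holds throughout the README; `parse_row` is a hypothetical helper, not part of RobustBench):

```python
import re

# One leaderboard row, copied verbatim from the table above.
row = ("| <sub>**8**</sub> | <sub><sup>**Pang2022Robustness_WRN70_16**</sup></sub> | "
       "<sub>*[Robustness and Accuracy Could Be Reconcilable by (Proper) Definition]"
       "(https://arxiv.org/pdf/2202.10103.pdf)*</sub> | <sub>89.01%</sub> | "
       "<sub>63.35%</sub> | <sub>WideResNet-70-16</sub> | <sub>ICML 2022</sub> |")

def parse_row(row: str):
    """Strip HTML wrappers and markdown emphasis; return (rank, model_id, clean_acc, robust_acc)."""
    # Drop <sub>/<sup> tags and '*' emphasis markers, then split on the column separators.
    cells = [re.sub(r"<[^>]+>|\*", "", c).strip() for c in row.strip("|").split("|")]
    rank, model_id, _paper, clean, robust = cells[:5]
    return int(rank), model_id, float(clean.rstrip("%")), float(robust.rstrip("%"))

print(parse_row(row))  # (8, 'Pang2022Robustness_WRN70_16', 89.01, 63.35)
```

The extracted model ID is exactly the string that identifies the model in the zoo, so a parser like this is enough to map a table row back to a loadable checkpoint.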
