From 17b6515712f0fbcef834cd112e165bcef6e1a645 Mon Sep 17 00:00:00 2001
From: fra31
Date: Tue, 7 Jun 2022 16:34:14 +0000
Subject: [PATCH] update venues in readme

---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index b171542..1358faf 100644
--- a/README.md
+++ b/README.md
@@ -208,14 +208,14 @@ You can find all available model IDs in the tables below (note that the full lea
 | **5** | **Rebuffi2021Fixing_70_16_cutmix_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 88.54% | 64.20% | WideResNet-70-16 | arXiv, Mar 2021 |
 | **6** | **Kang2021Stable** | *[Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks](https://arxiv.org/abs/2110.12976)* | 93.73% | 64.20% | WideResNet-70-16, Neural ODE block | NeurIPS 2021 |
 | **7** | **Gowal2021Improving_28_10_ddpm_100m** | *[Improving Robustness using Generated Data](https://arxiv.org/abs/2110.09468)* | 87.50% | 63.38% | WideResNet-28-10 | NeurIPS 2021 |
-| **8** | **Pang2022Robustness_WRN70_16** | *[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 89.01% | 63.35% | WideResNet-70-16 | arXiv, Feb 2022 |
+| **8** | **Pang2022Robustness_WRN70_16** | *[Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 89.01% | 63.35% | WideResNet-70-16 | ICML 2022 |
 | **9** | **Rade2021Helper_extra** | *[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)* | 91.47% | 62.83% | WideResNet-34-10 | OpenReview, Jun 2021 |
 | **10** | **Sehwag2021Proxy_ResNest152** | *[Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?](https://arxiv.org/abs/2104.09425)* | 87.30% | 62.79% | ResNest152 | ICLR 2022 |
 | **11** | **Gowal2020Uncovering_28_10_extra** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 89.48% | 62.76% | WideResNet-28-10 | arXiv, Oct 2020 |
 | **12** | **Huang2021Exploring_ema** | *[Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks](https://arxiv.org/abs/2110.03825)* | 91.23% | 62.54% | WideResNet-34-R | NeurIPS 2021 |
 | **13** | **Huang2021Exploring** | *[Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks](https://arxiv.org/abs/2110.03825)* | 90.56% | 61.56% | WideResNet-34-R | NeurIPS 2021 |
 | **14** | **Dai2021Parameterizing** | *[Parameterizing Activation Functions for Adversarial Robustness](https://arxiv.org/abs/2110.05626)* | 87.02% | 61.55% | WideResNet-28-10-PSSiLU | arXiv, Oct 2021 |
-| **15** | **Pang2022Robustness_WRN28_10** | *[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 88.61% | 61.04% | WideResNet-28-10 | arXiv, Feb 2022 |
+| **15** | **Pang2022Robustness_WRN28_10** | *[Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 88.61% | 61.04% | WideResNet-28-10 | ICML 2022 |
 | **16** | **Rade2021Helper_ddpm** | *[Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off](https://openreview.net/forum?id=BuD2LmNaU3a)* | 88.16% | 60.97% | WideResNet-28-10 | OpenReview, Jun 2021 |
 | **17** | **Rebuffi2021Fixing_28_10_cutmix_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 87.33% | 60.73% | WideResNet-28-10 | arXiv, Mar 2021 |
 | **18** | **Sridhar2021Robust_34_15** | *[Improving Neural Network Robustness via Persistency of Excitation](https://arxiv.org/abs/2106.02078)* | 86.53% | 60.41% | WideResNet-34-15 | ACC 2022 |
@@ -320,7 +320,7 @@ You can find all available model IDs in the tables below (note that the full lea
 |:---:|---|---|:---:|:---:|:---:|:---:|
 | **1** | **Gowal2020Uncovering_extra** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 69.15% | 36.88% | WideResNet-70-16 | arXiv, Oct 2020 |
 | **2** | **Rebuffi2021Fixing_70_16_cutmix_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 63.56% | 34.64% | WideResNet-70-16 | arXiv, Mar 2021 |
-| **3** | **Pang2022Robustness_WRN70_16** | *[ Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 65.56% | 33.05% | WideResNet-70-16 | arXiv, Feb 2022 |
+| **3** | **Pang2022Robustness_WRN70_16** | *[Robustness and Accuracy Could Be Reconcilable by (Proper) Definition](https://arxiv.org/pdf/2202.10103.pdf)* | 65.56% | 33.05% | WideResNet-70-16 | ICML 2022 |
 | **4** | **Rebuffi2021Fixing_28_10_cutmix_ddpm** | *[Fixing Data Augmentation to Improve Adversarial Robustness](https://arxiv.org/abs/2103.01946)* | 62.41% | 32.06% | WideResNet-28-10 | arXiv, Mar 2021 |
 | **5** | **Jia2022LAS-AT_34_20** | *[LAS-AT: Adversarial Training with Learnable Attack Strategy](https://arxiv.org/abs/2203.06616)* | 67.31% | 31.91% | WideResNet-34-20 | arXiv, Mar 2022 |
 | **6** | **Addepalli2022Efficient_WRN_34_10** | *[Efficient and Effective Augmentation Strategy for Adversarial Training](https://artofrobust.github.io/short_paper/31.pdf)* | 68.75% | 31.85% | WideResNet-34-10 | CVPRW 2022 |