
Is the number of perturbed pixels greater than the specified value k in Sparse-RS algorithm #7

Open
BaiDingHub opened this issue Nov 29, 2022 · 6 comments

@BaiDingHub

Hi, I am still having issues replicating your results by following the code.

When reproducing the code in our framework, we find that the number of perturbed pixels can be greater than the specified sparsity value eps.

Following the implementation of this code, we find a further perturbation of the sample x_best when randomly searching for a new candidate. At line 315, the algorithm perturbs pixels of x_best outside the range [0, eps], since the parameter eps_it is always at least 1. Therefore, the number of perturbed pixels grows beyond eps over the iterations. But according to the paper, the number of perturbed pixels should not exceed eps.

There may be something wrong with my understanding of your code, and I hope for your kind help.

sparse-rs/rs_attacks.py

Lines 304 to 323 in 21d8759

# build new candidate
x_new = x_best_curr.clone()
eps_it = max(int(self.p_selection(it) * eps), 1)
ind_p = torch.randperm(eps)[:eps_it]
ind_np = torch.randperm(n_pixels - eps)[:eps_it]
for img in range(x_new.shape[0]):
    p_set = b_curr[img, ind_p]
    np_set = be_curr[img, ind_np]
    x_new[img, :, p_set // w, p_set % w] = x_curr[img, :, p_set // w, p_set % w].clone()
    if eps_it > 1:
        x_new[img, :, np_set // w, np_set % w] = self.random_choice([c, eps_it]).clamp(0., 1.)
    else:
        # if update is 1x1 make sure the sampled color is different from the current one
        old_clr = x_new[img, :, np_set // w, np_set % w].clone()
        assert old_clr.shape == (c, 1), print(old_clr)
        new_clr = old_clr.clone()
        while (new_clr == old_clr).all().item():
            new_clr = self.random_choice([c, 1]).clone().clamp(0., 1.)
        x_new[img, :, np_set // w, np_set % w] = new_clr.clone()

@fra31
Owner

fra31 commented Nov 29, 2022

Hi,

I'm not sure I understand what you mean: are you re-implementing the algorithm or just using the available code? As you mentioned, at L315 new pixels are perturbed, but at L313 the same number of perturbed pixels has been reset to the original values (i.e. they are now unperturbed), so that the perturbation size is preserved.

Hope this helps!

@BaiDingHub
Author

Thanks for your answer.

I mean that L313-L315 only ensure that the number of pixels perturbed in the current iteration does not exceed eps. However, since the pixels perturbed outside the range [0, eps] at L315 will accumulate with the update x_best = x_new, the total number of perturbed pixels will exceed eps.

For example, the sample x_new to be perturbed is a copy of the best sample x_best_curr at the i-th iteration (L305). Then we perturb the pixels of x_new outside the range [0, eps]. Suppose the resulting sample x_new outperforms the current best sample, so we update x_best_curr = x_new.
At the (i+1)-th iteration, we set x_new to the updated x_best_curr. When we continue to perturb x_new, the number of pixels perturbed outside the range [0, eps] will be greater than the number of reset pixels. So the number of perturbed pixels in the resulting adversarial example would exceed eps.

Should the code at L305 be changed to x_new = x_curr.clone() like the code at L429 in Patch-RS algorithm?

@fra31
Owner

fra31 commented Nov 29, 2022

We have that x_curr is a copy of the original images, i.e. without perturbations. L313 and L315 make sure that x_new has at most eps perturbed pixels: first, eps_it pixels which are initially perturbed are reset to their clean values; then, a new set of eps_it pixels is randomly perturbed. Since x_best collects the best images in the x_new batch, it also has images with eps perturbed pixels, and the procedure is repeated in the next iteration.

For patches the perturbations are built differently: the perturbations (the patches and locations) are stored as independent tensors, i.e. not applied on the images. Then, the perturbations are applied at each iteration on a copy of the original images here. Therefore, unlike for L0, x_new is a copy of the original images without perturbations.
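The scheme described above can be sketched roughly as follows. This is a minimal illustrative snippet, not the repository code: the tensor shapes, names, and patch size are my own assumptions, only the structure (perturbation stored separately, re-applied to a clean copy each iteration) reflects the description.

```python
import torch

b, c, h, w, s = 4, 3, 32, 32, 8            # batch, channels, image size, patch size (assumed)
imgs = torch.rand(b, c, h, w)              # clean originals, never modified in place
patch = torch.rand(b, c, s, s)             # current patch per image, stored independently
loc = torch.randint(0, h - s + 1, (b, 2))  # top-left corner of the patch per image

def apply_patch(imgs, patch, loc):
    # start from a clean copy every time, so perturbations never accumulate
    x_new = imgs.clone()
    for i in range(imgs.shape[0]):
        r, col = loc[i].tolist()
        x_new[i, :, r:r + s, col:col + s] = patch[i]
    return x_new

x_new = apply_patch(imgs, patch, loc)
```

Because `x_new` is rebuilt from `imgs` at every call, updating `patch` or `loc` between iterations can never enlarge the perturbed region beyond one patch per image.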

@BaiDingHub
Author

But the range of [0, eps] is limited.

For example, if we set eps=150, we reset 10 pixels to their original values and perturb 10 pixels outside the range of [0, eps] at each iteration. We assume that all perturbations outside the range of [0, eps] do not overlap. So at the 16th iteration we have perturbed 160 pixels, but we can reset at most 150 pixels to their original values within the range of [0, eps]. In subsequent iterations, the number of perturbed pixels grows larger and larger, to more than 150.

@fra31
Owner

fra31 commented Nov 29, 2022

The range [0, eps] just means that at most eps pixels among all the pixels in the image (e.g. around 50k for ImageNet) are perturbed; it doesn't indicate the indices of the pixels which can be perturbed (if that's what you meant).

In your example, at each iteration we start with a set A of 150 perturbed pixels (they can be any of the 50k pixels in the image), randomly sample 10 elements of A and reset them to the original value (now only 140 pixels are perturbed), randomly sample 10 pixels which were not in A and perturb them (again 150 pixels perturbed). This preserves the number of perturbed pixels.
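The bookkeeping described above can be checked with a small standalone simulation (illustrative values only, not the repository code): each iteration removes eps_it indices from the perturbed set A before adding eps_it indices that were not in A, so |A| returns to exactly eps every time.

```python
import random

n_pixels = 1000  # total pixels in a hypothetical image
eps = 150        # sparsity budget
eps_it = 10      # pixels swapped per iteration

# set A of currently perturbed pixel indices
perturbed = set(random.sample(range(n_pixels), eps))

for _ in range(100):
    # reset eps_it currently perturbed pixels to their clean values
    perturbed -= set(random.sample(sorted(perturbed), eps_it))  # |A| = eps - eps_it
    # perturb eps_it pixels chosen among the currently clean ones
    clean = sorted(set(range(n_pixels)) - perturbed)
    perturbed |= set(random.sample(clean, eps_it))              # |A| = eps again
    assert len(perturbed) == eps  # the budget is never exceeded
```

The invariant holds regardless of which pixels are swapped, which is why no accumulation beyond eps can occur.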

@BaiDingHub
Author

Thank you for your answer, which completely solves my question.
