
[TF2] Carlini Wagner L2 #1156

Merged (17 commits) on Mar 3, 2021
Conversation

@Joool (Contributor) commented Apr 4, 2020

I ported the Carlini-Wagner attack to TensorFlow 2. I discovered that this has already been attempted in #1083. However, I needed the implementation for a current project, so I decided to complete it anyway and open a pull request so other people can use it in the meantime.

Differences from the TF1 version:

  • The variables storing intermediate values during the optimization/binary search are now updated in a vectorized fashion.
  • Since the TF2 port does not provide a logger, I haven't included any logging yet; I could add print-based logging.
  • I dropped the targeted flag; if the y argument is set, we assume it is a targeted attack.
  • The user can provide the target labels like so: [target_label] * len(x) (see the sketch below this list). I found that more intuitive, since TF2 loss functions tend to expect labels in that form these days.
  • No tests yet, for the same reason as the logging; I did, however, test it thoroughly with a modified version of the MNIST TF2 script.
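
For illustration, a minimal sketch of how I intend the untargeted and targeted calls to look (the import path follows this PR's module layout; `model` and `x` stand for a trained tf.keras model and a batch of inputs, and the `y` keyword is the one described above):

```python
import tensorflow as tf
from cleverhans.future.tf2.attacks.carlini_wagner_l2 import carlini_wagner_l2

# Untargeted: without y, the attack moves each input away from the model's
# current prediction.
x_adv = carlini_wagner_l2(model, x)

# Targeted: setting y switches to a targeted attack (there is no separate
# `targeted` flag). Here every example is pushed towards class 3.
target_label = 3
y_target = tf.constant([target_label] * len(x))
x_adv_targeted = carlini_wagner_l2(model, x, y=y_target)
```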

Note:
This pull request includes the bugfix in #1151.

@googlebot

All (the pull request submitter and all commit authors) CLAs are signed, but one or more commits were authored or co-authored by someone other than the pull request submitter.

We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that by leaving a comment that contains only @googlebot I consent. in this pull request.

Note to project maintainer: There may be cases where the author cannot leave a comment, or the comment is not properly detected as consent. In those cases, you can manually confirm consent of the commit author(s), and set the cla label to yes (if enabled on your project).


@googlebot

CLAs look good, thanks!


@v-i-s-h commented Aug 23, 2020

Hi @Joool

I am trying to use this as

x_cwl2 = carlini_wagner_l2(model, x)

where x is my image and model is the trained model. I am getting the following error:

    ValueError: tf.function-decorated function tried to create variables on non-first call.
Detailed error:

```
ValueError                                Traceback (most recent call last)
in ()
      3 for (idx, (x, y)) in enumerate(test_dataset):
      4
----> 5 x_cwl2 = carlini_wagner_l2(model, x)
      6 y_pred = model(x_cwl2)
      7 test_cwl2_acc(y, y_pred)

11 frames

/content/cleverhans/cleverhans/future/tf2/attacks/carlini_wagner_l2.py in carlini_wagner_l2(model_fn, x, **kwargs)
     12   For more details on the attack and the parameters see the corresponding class.
     13   """
---> 14   return CarliniWagnerL2(model_fn, **kwargs).attack(x)
     15 
     16 

/content/cleverhans/cleverhans/future/tf2/attacks/carlini_wagner_l2.py in attack(self, x)
     99     for i in range(0, len(x), self.batch_size):
    100       adv_ex[i:i +
--> 101              self.batch_size] = self._attack(x[i:i+self.batch_size]).numpy()
    102 
    103     return adv_ex

/content/cleverhans/cleverhans/future/tf2/attacks/carlini_wagner_l2.py in _attack(self, x)
    169       for iteration in range(self.max_iterations):
    170         x_new, loss, preds, l2_dist = self.attack_step(
--> 171             x, y, modifier, const)
    172 
    173         # check if we made progress, abort otherwise

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    778       else:
    779         compiler = "nonXla"
--> 780         result = self._call(*args, **kwds)
    781 
    782       new_tracing_count = self._get_tracing_count()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    805       # In this case we have created variables on the first call, so we run the
    806       # defunned version which is guaranteed to never create variables.
--> 807       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
    808     elif self._stateful_fn is not None:
    809       # Release the lock early so that multiple threads can perform the call

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
   2826     """Calls a graph function specialized to the inputs."""
   2827     with self._lock:
-> 2828       graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
   2829     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   2830 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3211 
   3212       self._function_cache.missed.add(call_context_key)
-> 3213       graph_function = self._create_graph_function(args, kwargs)
   3214       self._function_cache.primary[cache_key] = graph_function
   3215       return graph_function, args, kwargs

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3073             arg_names=arg_names,
   3074             override_flat_arg_shapes=override_flat_arg_shapes,
-> 3075             capture_by_value=self._capture_by_value),
   3076         self._function_attributes,
   3077         function_spec=self.function_spec,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    984         _, original_func = tf_decorator.unwrap(python_func)
    985 
--> 986       func_outputs = python_func(*func_args, **func_kwargs)
    987 
    988       # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    598         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    599         # the function a weak reference to itself to avoid a reference cycle.
--> 600         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    601     weak_wrapped_fn = weakref.ref(wrapped_fn)
    602 
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in bound_method_wrapper(*args, **kwargs)
   3733     # However, the replacer is still responsible for attaching self properly.
   3734     # TODO(mdan): Is it possible to do it here instead?
-> 3735     return wrapped_fn(*args, **kwargs)
   3736   weak_bound_method_wrapper = weakref.ref(bound_method_wrapper)
   3737 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    971           except Exception as e:  # pylint:disable=broad-except
    972             if hasattr(e, "ag_error_metadata"):
--> 973               raise e.ag_error_metadata.to_exception(e)
    974             else:
    975               raise

ValueError: in user code:

    /content/cleverhans/cleverhans/future/tf2/attacks/carlini_wagner_l2.py:316 attack_step  *
        self.optimizer.apply_gradients([(grads, modifier)])
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:519 apply_gradients  **
        self._create_all_weights(var_list)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:704 _create_all_weights
        self._create_slots(var_list)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/adam.py:127 _create_slots
        self.add_slot(var, 'm')
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:764 add_slot
        initial_value=initial_value)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:262 __call__
        return cls._variable_v2_call(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:256 _variable_v2_call
        shape=shape)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:67 getter
        return captured_getter(captured_previous, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:702 invalid_creator_scope
        "tf.function-decorated function tried to create "

    ValueError: tf.function-decorated function tried to create variables on non-first call.
```

@Joool (Contributor, Author) commented Sep 21, 2020

Hi @vipinpillai

Sorry for the late response. I have tested the code against the MNIST tutorial with TensorFlow 2.3, on both GPU and CPU:

x_cw2 = carlini_wagner_l2(model, x)

I might need some more details to diagnose your error.

@v-i-s-h commented Sep 27, 2020

@Joool Hi, I found the problem was related to batch size. My x had more samples than the default batch_size (128), so it raised an error. For now, I can use batch_size=<samples_in_x> as a workaround, but it would be useful if the attack itself could run in batches.
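
Concretely, a minimal sketch of the workaround (assuming `x` holds all the test samples at once and that `batch_size` is forwarded to the attack object, as in this PR):

```python
# Work around the error by making the attack's internal batch size equal to the
# number of samples, so the input is processed in a single internal batch.
x_cwl2 = carlini_wagner_l2(model, x, batch_size=len(x))
```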

@v-i-s-h commented Oct 17, 2020

Hi @Joool, the problem comes up again when I create a CarliniWagnerL2 object and then call attack() on it multiple times. Below is a sample of the code I use to run a benchmark on multiple models:

import tensorflow as tf
from cleverhans.future.tf2.attacks.carlini_wagner_l2 import CarliniWagnerL2

attack_log = []
test_cwl2_acc = tf.metrics.SparseCategoricalAccuracy()
tf.compat.v1.reset_default_graph()
model = ...  # load model here
this_epoch = {}
for max_iter in iter_range:
    test_cwl2_acc.reset_states()
    cwl2 = CarliniWagnerL2(model, clip_min=-1.0, clip_max=+1.0,
                           batch_size=BATCH_SIZE, abort_early=False,
                           max_iterations=max_iter)
    for (x, y) in test_dataset:
        x_cwl2 = cwl2.attack(x)  # fails once attack() is called more than once
        y_pred = model(x_cwl2)
        test_cwl2_acc(y, y_pred)

The error log is given below:

/content/cleverhans/cleverhans/future/tf2/attacks/carlini_wagner_l2.py in attack(self, x)
     99     for i in range(0, len(x), self.batch_size):
    100       adv_ex[i:i +
--> 101              self.batch_size] = self._attack(x[i:i+self.batch_size]).numpy()
    102 
    103     return adv_ex

/content/cleverhans/cleverhans/future/tf2/attacks/carlini_wagner_l2.py in _attack(self, x)
    169       for iteration in range(self.max_iterations):
    170         x_new, loss, preds, l2_dist = self.attack_step(
--> 171             x, y, modifier, const)
    172 
    173         # check if we made progress, abort otherwise

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    778       else:
    779         compiler = "nonXla"
--> 780         result = self._call(*args, **kwds)
    781 
    782       new_tracing_count = self._get_tracing_count()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    805       # In this case we have created variables on the first call, so we run the
    806       # defunned version which is guaranteed to never create variables.
--> 807       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
    808     elif self._stateful_fn is not None:
    809       # Release the lock early so that multiple threads can perform the call

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
   2826     """Calls a graph function specialized to the inputs."""
   2827     with self._lock:
-> 2828       graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
   2829     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   2830 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3211 
   3212       self._function_cache.missed.add(call_context_key)
-> 3213       graph_function = self._create_graph_function(args, kwargs)
   3214       self._function_cache.primary[cache_key] = graph_function
   3215       return graph_function, args, kwargs

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3073             arg_names=arg_names,
   3074             override_flat_arg_shapes=override_flat_arg_shapes,
-> 3075             capture_by_value=self._capture_by_value),
   3076         self._function_attributes,
   3077         function_spec=self.function_spec,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    984         _, original_func = tf_decorator.unwrap(python_func)
    985 
--> 986       func_outputs = python_func(*func_args, **func_kwargs)
    987 
    988       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    598         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    599         # the function a weak reference to itself to avoid a reference cycle.
--> 600         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    601     weak_wrapped_fn = weakref.ref(wrapped_fn)
    602 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in bound_method_wrapper(*args, **kwargs)
   3733     # However, the replacer is still responsible for attaching self properly.
   3734     # TODO(mdan): Is it possible to do it here instead?
-> 3735     return wrapped_fn(*args, **kwargs)
   3736   weak_bound_method_wrapper = weakref.ref(bound_method_wrapper)
   3737 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    971           except Exception as e:  # pylint:disable=broad-except
    972             if hasattr(e, "ag_error_metadata"):
--> 973               raise e.ag_error_metadata.to_exception(e)
    974             else:
    975               raise

ValueError: in user code:

    /content/cleverhans/cleverhans/future/tf2/attacks/carlini_wagner_l2.py:316 attack_step  *
        self.optimizer.apply_gradients([(grads, modifier)])
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:519 apply_gradients  **
        self._create_all_weights(var_list)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:704 _create_all_weights
        self._create_slots(var_list)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/adam.py:127 _create_slots
        self.add_slot(var, 'm')
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:764 add_slot
        initial_value=initial_value)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:262 __call__
        return cls._variable_v2_call(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:256 _variable_v2_call
        shape=shape)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/variables.py:67 getter
        return captured_getter(captured_previous, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py:702 invalid_creator_scope
        "tf.function-decorated function tried to create "

    ValueError: tf.function-decorated function tried to create variables on non-first call.
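
In other words, the traceback points at `attack_step` (a `tf.function`) calling `self.optimizer.apply_gradients(...)`: Keras' Adam creates its slot variables lazily the first time `apply_gradients` sees a variable, and inside a `tf.function` that is only allowed on the very first call. A rough sketch of that pattern (hypothetical class and names, not the PR's actual code):

```python
import tensorflow as tf

class CWSketch:
    """Hypothetical sketch of the failing pattern; not the PR's actual code."""

    def __init__(self):
        self.optimizer = tf.keras.optimizers.Adam()

    @tf.function
    def attack_step(self, x, modifier):
        with tf.GradientTape() as tape:
            loss = tf.reduce_sum((x + modifier) ** 2)  # stand-in for the CW loss
        grads = tape.gradient(loss, [modifier])
        # Adam creates its slot variables (m, v) for `modifier` the first time
        # apply_gradients sees it. Inside a tf.function that is only allowed on
        # the very first call; when attack() later runs with a fresh `modifier`
        # variable, slot creation happens again and TensorFlow raises the
        # "tried to create variables on non-first call" ValueError above.
        self.optimizer.apply_gradients(zip(grads, [modifier]))
        return loss
```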

@hfeniser

I also encounter the same problem as @v-i-s-h.

@Joool (Contributor, Author) commented Nov 25, 2020

Hi @v-i-s-h and @hasanferit, sorry for the long delay. I had several deadlines and had to set up a proper testing environment first.

The batch mode should now work properly.

@hfeniser

@Joool Thanks!
Do you know why this pull request is blocked from being merged?

@Joool (Contributor, Author) commented Nov 27, 2020

It has to be approved by one of the maintainers.

@jonasguan assigned jonasguan, alkaet and dhalf and unassigned jonasguan on Jan 20, 2021
@Joool (Contributor, Author) commented Feb 9, 2021

Hi, it seems there is more movement on this project again. I have resolved the conflicts with the current master branch.

While implementing this, I also set up a testing environment (essentially a port of the PyTorch tests).
My branch is similar to #1084; however, I also have tests for the Carlini-Wagner method.
I would also be willing to extend the tests to the other attacks available for TF2.

Since #1190 just got opened, should I open a pull request now or wait until the 4.0.0 release?

@dhalf (Collaborator) commented Feb 24, 2021

Thank you @Joool for porting this feature to TF2!
Apart from some minor remarks (see the comments in the files), we're looking forward to merging this soon.

@dhalf dhalf merged commit e7c0032 into cleverhans-lab:master Mar 3, 2021