Noting down the list of tasks to be completed for the tutorial. The implementation of the attack is in #65.
Paper Link
For simplicity, focusing on one model and one dataset for now: VGG16 and CIFAR-10 (less noisy than the Kaggle CIFAR-10).
The following can be used to get useful stats for the tutorial (note: the original snippet mixed `self.model_predict` and `obj.model_predict`; it should consistently use the attack object):

```python
# `obj` is the attack object; `image` and `original_label` come from the dataset,
# and `attack_result` is the output of the optimization.
attack_image = obj.perturbation_image(attack_result.x, image)
prior_probs = obj.model_predict(image)
predicted_probs = obj.model_predict(attack_image)
predicted_class = np.argmax(predicted_probs)
actual_class = original_label
success = predicted_class != actual_class
cdiff = prior_probs[actual_class] - predicted_probs[actual_class]
```
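As a rough sketch of how these per-image stats could feed the tutorial's summary numbers (the helper name and the idea of collecting `success`/`cdiff` into lists are my assumptions, not part of #65):

```python
import numpy as np

def aggregate_attack_stats(successes, confidence_diffs):
    """Aggregate per-image attack results into summary metrics.

    successes: list of bools, True when the predicted class changed.
    confidence_diffs: list of floats, prior minus post-attack
        confidence on the true class (the `cdiff` values above).
    """
    successes = np.asarray(successes)
    confidence_diffs = np.asarray(confidence_diffs)
    return {
        "success_rate": float(successes.mean()),
        "mean_confidence_drop": float(confidence_diffs.mean()),
        # Confidence drop measured only over successful attacks.
        "mean_drop_on_success": (
            float(confidence_diffs[successes].mean())
            if successes.any() else 0.0
        ),
    }

stats = aggregate_attack_stats([True, False, True], [0.6, 0.1, 0.8])
```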
Hey, I would like to work on this
Great @Shreyas-Bhat, you can take it up. Comment on #78 and #79 too so we can assign them to you.