Low accuracy: Even without poisoning, the accuracy is only about 10% #15
Comments
Try running it with "python training.py --params utils/params.yaml" after changing the "is_poison" option to True in the .yaml file; the test accuracy on the main task reaches 90% on my machine. However, I am having another problem: the accuracy of the backdoor task can't reach 100%, it always stays around 70%. Do you have any idea how to fix this?
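As a quick sanity check (not part of the original thread), one way to confirm which flag the run will actually pick up is to load the config before launching training. This is a minimal sketch, assuming PyYAML is installed and using the file path and key name mentioned above:

```python
# Quick check that utils/params.yaml contains the flag you expect
# before running: python training.py --params utils/params.yaml
import yaml

with open("utils/params.yaml") as f:
    params = yaml.safe_load(f)

# The thread above toggles this flag; True enables the poisoned run.
print("is_poison:", params.get("is_poison"))
```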
Have you solved this problem? I had a similar problem with the cifar10 dataset not converging under resnet10 without poisoning.
My accuracy is also only about 10%. What could be the problem?
Have you solved the problem? I found something very strange: with is_poison set to false in params.yaml, the global model accuracy stays at 10%, while its loss keeps increasing and never converges. This is a very strange phenomenon.
Roughly how many epochs does it take to reach 90% accuracy?
Hi. I can hardly remember all the details, since I tried to reproduce the repo about two years ago. Maybe you can refer to my repo https://github.com/ybdai7/Chameleon-durable-backdoor, which is the official implementation of an ICML paper and is organized based on this repo.
Thank you so much for your help.
Mine took roughly 1800 rounds.
According to the source code, a single attacker is designated and the participants are sampled randomly; this part of the code should correspond to the repeated-attack setting in the paper, so a result of around 70% is expected.
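For readers following along, the selection logic described in the comment above can be sketched roughly as below. This is an illustration with hypothetical names (sample_participants, adversary_id, etc.), not the repo's actual code: on attack rounds one fixed attacker takes a slot, while the remaining slots are filled by randomly sampled benign clients.

```python
import random

def sample_participants(all_ids, adversary_id, models_per_round, attack_round):
    """Illustrative only: choose the clients that train in this round."""
    benign = [cid for cid in all_ids if cid != adversary_id]
    if attack_round:
        # On attack rounds the designated attacker always occupies one slot.
        return [adversary_id] + random.sample(benign, models_per_round - 1)
    # Otherwise every slot goes to a randomly sampled benign client.
    return random.sample(benign, models_per_round)
```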
Hello, do you remember how the backdoor accuracy eventually reached 100%?
The average_shrink_models function in helper.py has a bug. Specifically, data.add_(update_per_layer) is problematic: when data is an integer tensor, the model cannot converge. You should change it to data.float().add_(update_per_layer).
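To make the dtype issue concrete, here is a minimal sketch of a dtype-safe aggregation step. It is a paraphrase under assumptions, not the repo's exact average_shrink_models code: integer buffers such as BatchNorm's num_batches_tracked cannot cleanly absorb a float update in place, so one option is to cast the update to the parameter's dtype before the in-place add.

```python
import torch

def apply_global_update(target_model, weight_accumulator, eta, n_participants):
    """Illustrative aggregation step: add the scaled accumulated update
    to each entry of the global model's state dict."""
    for name, data in target_model.state_dict().items():
        update_per_layer = weight_accumulator[name] * (eta / n_participants)
        if update_per_layer.dtype != data.dtype:
            # e.g. num_batches_tracked is int64; cast so data.add_ does not fail.
            # This is an alternative to the data.float().add_ workaround above.
            update_per_layer = update_per_layer.to(data.dtype)
        data.add_(update_per_layer)
```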
Even with "is_poison"=false, the accuracy is only about 10%. When "is_poison"=true and batch_size=264, I get the following results:
When there are adversaries, the backdoor accuracy is about 100%, and the main-task accuracy keeps increasing as the epochs go on. But when there are no adversaries, the accuracy always stays around 10%.