
Using external CMAES library #103

Open · Chrism7 wants to merge 14 commits into master from cma_hansen
Conversation

Chrism7 (Collaborator) commented Nov 23, 2017:

We implemented a learner, an experiment, and a console program for a reliability-based CMA-ES attack on Arbiter PUFs. To benefit from a well-optimized implementation of CMA-ES, we rely on an external library for the algorithm itself.
Up to now, only particular Arbiter PUFs can be learned.
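
For illustration, the external library is driven through an ask/tell loop. The sketch below shows that loop only; the fitness function is a placeholder, not the attack's actual reliability-based objective, and the parameter values (dimension 64, population size 20, 500 iterations) are just the example values discussed later in this thread.

import cma
import numpy as np

def placeholder_fitness(weights):
    # Stand-in objective; the PR's learner instead scores how well the reliabilities
    # predicted by the candidate weights correlate with the measured ones.
    return float(np.sum(np.asarray(weights) ** 2))

es = cma.CMAEvolutionStrategy(np.zeros(64), 1.0, {'popsize': 20, 'maxiter': 500})
while not es.stop():
    candidates = es.ask()                                   # sample a population of candidate solutions
    es.tell(candidates, [placeholder_fitness(c) for c in candidates])
best_weights = es.result[0]                                 # best solution found (cma 2.x result tuple)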

@contextlib.contextmanager
def avoid_printing():
    save_stdout = sys.stdout
    sys.stdout = open('trash', 'w')

nils-wisiol (Owner):
open('/dev/null', 'w')
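
For illustration, the suggested change could look roughly as follows; the try/finally block that restores stdout and the use of os.devnull are additions for the sake of a complete example, not part of the diff above.

import contextlib
import os
import sys

@contextlib.contextmanager
def avoid_printing():
    save_stdout = sys.stdout
    sys.stdout = open(os.devnull, 'w')    # discard output instead of writing a 'trash' file
    try:
        yield
    finally:
        sys.stdout.close()
        sys.stdout = save_stdout          # always restore the original stdout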

nils-wisiol (Owner) left a comment:
A couple minor annotations in the code, but overall very nice!!

Also, what's up with test/test.py?

@@ -0,0 +1,123 @@
"""This module provides an experiment class which learns an instance
of LTFArray with reliability based CMAES learner.

nils-wisiol (Owner):

include reference to paper

seed_model, pop_size, step_size_limit, iteration_limit,
):
"""Initialize an Experiment using the Reliability based CMAES Learner for modeling LTF Arrays
:param log_name: Log name, Prefix of the path or name of the experiment log file

nils-wisiol (Owner):

I believe a path prefix is not good here

):
"""Initialize an Experiment using the Reliability based CMAES Learner for modeling LTF Arrays
:param log_name: Log name, Prefix of the path or name of the experiment log file
:param seed_instance: PRNG seed used to create LTF array instances

nils-wisiol (Owner):

we only init once, i.e. one instance (singular)

:param n: Length, the number stages within the LTF array
:param noisiness: Noisiness, the relative scale of noise of instance compared to the scale of weights
:param seed_challenges: PRNG seed used to sample challenges
:param num: Challenge number, the number of binary inputs for the LTF array

nils-wisiol (Owner):

what's this?

:param noisiness: Noisiness, the relative scale of noise of instance compared to the scale of weights
:param seed_challenges: PRNG seed used to sample challenges
:param num: Challenge number, the number of binary inputs for the LTF array
:param reps: Repetitions, the frequency of evaluations of every challenge (part of training_set)

nils-wisiol (Owner):

use repetitions instead of frequency, as the latter suggests that this is an ongoing process (which it is not)

reliabilities = np.zeros(np.shape(delay_diffs))
indices_of_reliable = np.abs(delay_diffs[:]) > epsilon
reliabilities[indices_of_reliable] = 1
correlation = this.calc_corr(reliabilities, measured_rels)

nils-wisiol (Owner):

why not use __class__ here?
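
For context, the hunk above scores a candidate by comparing the reliabilities it predicts (delay differences clearly away from zero) with the measured reliabilities. A self-contained sketch of that idea, using np.corrcoef in place of the class's own calc_corr helper (names and sign convention are illustrative, not the PR's exact code):

import numpy as np

def reliability_fitness(delay_diffs, measured_rels, epsilon):
    # A challenge is predicted to be reliable if the model's delay difference
    # lies clearly away from zero (|delta| > epsilon).
    predicted_rels = (np.abs(delay_diffs) > epsilon).astype(float)
    # Correlate predicted with measured reliabilities; CMA-ES minimizes, so negate.
    correlation = np.corrcoef(predicted_rels, measured_rels)[0, 1]
    return -correlation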

return False

@staticmethod
def common_responses(responses):

nils-wisiol (Owner):

rename to majority
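
Assuming the responses are ±1-valued with shape (reps, N), the majority vote the reviewer has in mind can be written in one line; the tie-break towards +1 below is an assumption made for the sketch:

import numpy as np

def majority(responses):
    # responses: array of shape (reps, N) with entries in {-1, +1};
    # return the per-challenge majority vote, breaking ties towards +1.
    return np.where(np.sum(responses, axis=0) >= 0, 1, -1)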

pypuf/tools.py Outdated
self.challenges = array(list(sample_inputs(instance.n, N)))
self.responses = instance.eval(self.challenges)
self.challenges = array(list(sample_inputs(instance.n, N, random_instance=random_instance)))
if reps is None:

nils-wisiol (Owner):

move that to the function definition

pypuf/tools.py Outdated
reps = 1
self.responses = zeros((reps, N))
for i in range(reps):
    self.challenges, cs = itertools.tee(self.challenges)

nils-wisiol (Owner):

why are we assigning a value to self.challenges here? I think it is unused and later overwritten. (Use _ instead?)
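
Putting the two suggestions for this file together (a default reps=1 in the signature, and no reassignment of self.challenges), a sketch of the constructor could look as follows. sample_inputs and instance.eval are the existing pypuf helpers used in the hunks above; the class name and everything else are illustrative, not the final code.

from numpy import array, zeros
from numpy.random import RandomState
from pypuf.tools import sample_inputs

class TrainingSetSketch:
    """Illustration only: N challenges sampled once, each evaluated reps times."""

    def __init__(self, instance, N, random_instance=None, reps=1):  # default for reps in the signature
        random_instance = random_instance if random_instance is not None else RandomState()
        # Materializing the challenges into an array makes itertools.tee (and a throwaway _) unnecessary.
        self.challenges = array(list(sample_inputs(instance.n, N, random_instance=random_instance)))
        self.responses = zeros((reps, N))
        for i in range(reps):
            self.responses[i, :] = instance.eval(self.challenges)   # repeated (noisy) evaluations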

requirements.txt Outdated
@@ -4,3 +4,4 @@ polymath
pylint
pep8
pypuf_helper
cma

nils-wisiol (Owner) commented Dec 6, 2017:

include an exact version here, e.g. 2.3.* (this will pick up bug fixes but no major changes)

nils-wisiol (Owner) commented Jan 17, 2018:

Currently the unit tests produce errors; also, the tests were running for 12+ hours.

Chrism7 force-pushed the cma_hansen branch 2 times, most recently from 65ee569 to 5551579 on January 24, 2018, 12:03.

Chrism7 (Collaborator, Author) commented Jan 24, 2018:

@nils-wisiol, please review my last commit in this pull request.
By the way: do you have any idea how to solve the 3 remaining linter errors?

nils-wisiol (Owner) commented Jan 25, 2019:

I updated this branch to match the updated master branch. It is now, as far as I can tell, running without errors. However, I could not figure out parameters to learn a (relatively easy) instance with n=64 and k=2. For example,

python3 sim_rel_cmaes_attack.py 64 2 1 10000 30 20 20 500 TEST 

i.e., n=64, k=2, noisiness 1, 10,000 CRPs, 30 samples per CRP, and a population size of 20, will stop after the maximum iteration count (500) with an accuracy of around 56%.

seed for instance:      0x157aab5e
seed for challenges:    0xeb576e0a
seed for model:         0x2f8cfd54

Variations of these parameters yield similar results. Could you include a couple of examples that give high-accuracy results?
