This repository has been archived by the owner on Apr 18, 2018. It is now read-only.

Jarvis Reproduce #15

Open
rlnsanz opened this issue Feb 20, 2018 · 4 comments
rlnsanz commented Feb 20, 2018

Reproduce an experiment or trial by materializing it in a user-specified directory. Executing a Python script lets the data scientist re-run an experiment, or a trial within the experiment.

rlnsanz commented Feb 20, 2018

  • Experiment artifacts, version information, and the data context exist in a hidden location. The user can “materialize” these contents to facilitate exploration, but changes to the materialized contents do not update any artifacts.
  • Trials are replaced with constants, so trials don’t look like the general experiment.
    Iteration semantics:
    • Pick-one aggregates (min/max).
    • Sweep vs. for-each:
      • Alpha can have one value.
      • For each value of alpha, I may want to export a trial that iterates over every value of beta. This needs to be expressed somehow.
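The "sweep vs. for-each" nesting above can be sketched as plain Python. This is only an illustration of the semantics under discussion; the parameter names and the dict-based trial representation are assumptions, not Jarvis's actual API.

```python
# Illustrative sketch of nested iteration semantics: sweeping over alpha
# while, for each alpha value, iterating over every value of beta.
# Names and structures here are hypothetical, not the Jarvis interface.

def export_trials(alphas, betas):
    """For each value of alpha, emit one trial per value of beta."""
    trials = []
    for alpha in alphas:        # outer sweep over alpha
        for beta in betas:      # nested for-each over beta
            trials.append({"alpha": alpha, "beta": beta})
    return trials

trials = export_trials([0.1, 0.5], [1, 2, 3])
# 2 alpha values x 3 beta values -> 6 trials
assert len(trials) == 6
```

The open design question is how a user expresses this nesting declaratively, rather than writing the loops by hand.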

@malharpatel
Collaborator

[Four whiteboard photos of the design notes attached.]

@dcrankshaw

Glancing through those notes on the board, I noticed it says that if we re-run an experiment it does not get versioned. What's the reasoning behind that? I think we should be very careful about deciding not to version things.

rlnsanz commented Feb 26, 2018

What we meant is that if you reproduce an experiment it does not get versioned. Re-running an experiment always gets versioned.

The argument was that if you check out some past experiment-run to a target directory, move to that directory, and call python reproduce.py -- if the run-reproduction is a "true" one -- then the results would already have been versioned and don't need to be versioned again.

That said, it's unlikely that the experiment-run reproduction will be a "true" one: sources of randomness could alter some result, and we may have good reasons to track how many times an experiment has been reproduced.
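One way to decide whether a reproduction is "true" in the sense above is to compare the reproduced output byte-for-byte against the versioned original. This is a minimal sketch of that idea; the file-based layout and hash comparison are assumptions for illustration, not how Jarvis actually decides.

```python
import hashlib
from pathlib import Path

# Hypothetical check for a "true" reproduction: the re-run output is
# bit-identical to the already-versioned result, so re-versioning could
# be skipped. Any nondeterminism (randomness, timestamps) breaks this.

def file_digest(path):
    """SHA-256 digest of a result file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def is_true_reproduction(original_result, reproduced_result):
    """True iff the reproduced result matches the versioned original."""
    return file_digest(original_result) == file_digest(reproduced_result)
```

Under this framing, a non-matching digest is exactly the case where re-versioning (or at least recording the reproduction attempt) seems warranted.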

Still... what it means to version something well when you re-run an experiment vs. when you reproduce a past one is something we wanted to call to the group's attention, and something we could discuss in the meeting.
