As an experimenter, I don't want to deal with setting up an experiment-specific metric and carefully wiring it up to my mooclet experiment. I want to be able to set just the mooclet policy parameters, and that will be the only difference between setting up a mooclet experiment vs. a native UpGrade experiment.
As an experimenter creating my mooclet experiment, when I view the Metrics page, I will see a simple metric and query (percent SUCCESS) set up for me automatically, and I will see the graph in the Data tab.
Frontend:
It would probably be good to put a message on the Metrics tab explaining that a special metric and query will be created and attached automatically after the experiment is created.
Backend:
On experiment create, if it is a Mooclet experiment, we will create a simple metric and attach one query to the experiment automatically so that the user doesn't have to.
On experiment delete, we should remove this metric.
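The create/delete behavior above could be hooked in roughly as sketched below. The class and method names here are hypothetical stand-ins, not actual UpGrade services, and the in-memory map stands in for the metrics table:

```typescript
interface Experiment {
  name: string;
  type: 'mooclet' | 'native';
}

// Hypothetical manager illustrating the auto-provisioning lifecycle.
class RewardMetricManager {
  // Stand-in for the metrics table in the database.
  private metrics = new Map<string, { key: string; type: string }>();

  // On experiment create: auto-create the reward metric, but only
  // for Mooclet experiments; native experiments are left untouched.
  onExperimentCreate(experiment: Experiment): string | null {
    if (experiment.type !== 'mooclet') return null;
    const rewardMetricKey = experiment.name.toLocaleUpperCase() + '_REWARD';
    this.metrics.set(rewardMetricKey, { key: rewardMetricKey, type: 'simple' });
    return rewardMetricKey;
  }

  // On experiment delete: remove the auto-created metric.
  onExperimentDelete(rewardMetricKey: string): void {
    this.metrics.delete(rewardMetricKey);
  }

  hasMetric(rewardMetricKey: string): boolean {
    return this.metrics.has(rewardMetricKey);
  }
}
```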
Implementation details:
Create a `rewardMetricKey` with this format: `experimentName.toLocaleUpperCase() + '_REWARD'`.
Note: this key needs to be unique because it will be used as a lookup, but in practice this is guaranteed, because Mooclet experiments cannot be created with a duplicate name (the Mooclet server will return a 400).
Save the `rewardMetricKey` to the `MoocletExperimentRef`, which will mean adding it as a property column on the model. (This will be used later for looking up the experiment during the `/log` call, in a separate story.)
Create a simple metric with this exact format and save to db.
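The key derivation and the auto-attached query might look like the sketch below. The query object's shape is an assumption based on the "percent SUCCESS" description above, not UpGrade's actual query schema:

```typescript
// Derive the reward metric key from the experiment name, per the format above.
function buildRewardMetricKey(experimentName: string): string {
  return experimentName.toLocaleUpperCase() + '_REWARD';
}

// Hypothetical shape for the auto-attached "percent SUCCESS" query;
// the field names here are illustrative only.
function buildRewardQuery(rewardMetricKey: string) {
  return {
    name: 'Percent Success',
    metricKey: rewardMetricKey,
    operationType: 'percentage',
    compareValue: 'SUCCESS',
  };
}
```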
danoswaltCL changed the title from "mooclet backend: metric logs sync to rewards" to "mooclet backend: create magic reward metric and query when creating a mooclet experiment" on Jan 31, 2025.
See [demo-branch](https://github.com/CarnegieLearningWeb/UpGrade/tree/demo-branch) for how I cobbled this together for the EASI meeting demo, and https://carnegielearning.slack.com/archives/C08A2LSPM5J/p1738243122240099 for the demo and diagrams.