
mooclet backend: create magic reward metric and query when creating a mooclet experiment. #1934

Open
danoswaltCL opened this issue Sep 12, 2024 · 2 comments


danoswaltCL commented Sep 12, 2024

As an experimenter, I don't want to deal with setting up an experiment-specific metric and carefully wiring it up to my mooclet experiment. I want to be able to set just the mooclet policy parameters, and that will be the only difference between setting up a mooclet experiment vs. a native UpGrade experiment.

See demo-branch for how I cobbled this together for EASI meeting demo (https://github.com/CarnegieLearningWeb/UpGrade/tree/demo-branch).

https://carnegielearning.slack.com/archives/C08A2LSPM5J/p1738243122240099 for demo and diagrams.

Requirements:

  • As an experimenter, when I create my mooclet experiment and then view the Metrics page, I will see a simple metric and query (percent SUCCESS) set up for me automatically, and I will see the graph in the Data tab.

Frontend:

  • It would probably be good to put a message on the Metrics tab explaining that we will create this special metric and attach the query after the experiment is created.

Backend:

  • On experiment create, if it is a Mooclet experiment, we will create a simple metric and attach one query to the experiment automatically so that the user doesn't have to.

  • On experiment delete, we should remove this metric.

Implementation details:

  1. Create a `rewardMetricKey` with this format: `experimentName.toLocaleUpperCase() + '_REWARD'`

Note: This key needs to be unique because it will be used as a lookup. In practice this is guaranteed, because Mooclet experiments cannot be created with a duplicate name (the Mooclet server will respond with a 400).

  2. Save the `rewardMetricKey` to the `MoocletExperimentRef`, which will mean adding it as a property column on the model. (This will be used later to look up the experiment during the `/log` call, in a separate story.)

  3. Create a simple metric with this exact format and save it to the db:

```
{
    metric: <rewardMetricKey>,
    datatype: IMetricMetaData.CATEGORICAL,
    allowedValues: [ "SUCCESS", "FAILURE" ]
}
```

  4. Create a query on this metric and attach it to the experiment (before saving the experiment and its queries):

```
{
    id: uuid(),
    name: "Success Rate",
    query: {
        operationType: "percentage",
        compareFn: "=",
        compareValue: "SUCCESS"
    },
    metric: {
        key: rewardMetricKey
    },
    repeatedMeasure: REPEATED_MEASURE.mostRecent
}
```
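The implementation steps above can be sketched as small builder functions. This is a hypothetical sketch, not the actual UpGrade service code: the enum string values and the wiring into the create flow are assumptions, while the key format, metric shape, and query shape mirror the issue text.

```typescript
import { randomUUID } from "crypto";

// Assumed stand-ins for UpGrade's real enums; actual values may differ.
enum IMetricMetaData {
  CATEGORICAL = "categorical",
}
enum REPEATED_MEASURE {
  mostRecent = "MOST RECENT",
}

interface RewardMetric {
  metric: string;
  datatype: IMetricMetaData;
  allowedValues: string[];
}

interface RewardQuery {
  id: string;
  name: string;
  query: { operationType: string; compareFn: string; compareValue: string };
  metric: { key: string };
  repeatedMeasure: REPEATED_MEASURE;
}

// Step 1: key is unique in effect because the Mooclet server 400s on
// duplicate experiment names.
function buildRewardMetricKey(experimentName: string): string {
  return experimentName.toLocaleUpperCase() + "_REWARD";
}

// Step 3: the simple categorical metric saved to the db.
function buildRewardMetric(rewardMetricKey: string): RewardMetric {
  return {
    metric: rewardMetricKey,
    datatype: IMetricMetaData.CATEGORICAL,
    allowedValues: ["SUCCESS", "FAILURE"],
  };
}

// Step 4: the "percent SUCCESS" query attached before the experiment
// and its queries are saved.
function buildSuccessRateQuery(rewardMetricKey: string): RewardQuery {
  return {
    id: randomUUID(),
    name: "Success Rate",
    query: {
      operationType: "percentage",
      compareFn: "=",
      compareValue: "SUCCESS",
    },
    metric: { key: rewardMetricKey },
    repeatedMeasure: REPEATED_MEASURE.mostRecent,
  };
}
```

Keeping these as pure builders makes the create-time hook trivial (call them when the experiment is a Mooclet experiment) and gives the delete path a single key to look up when removing the metric.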
@danoswaltCL danoswaltCL added this to the Program Increment PI13 milestone Sep 12, 2024
@danoswaltCL danoswaltCL changed the title mooclet backend: metric logs sync to rewards mooclet backend: create magic reward metric and query when creating a mooclet experiment. Jan 31, 2025

zackcl commented Feb 5, 2025

Here is the example design of the read-only Mooclet Reward Metric section:

[Image attachment: design mockup]

@danoswaltCL danoswaltCL moved this to Refined ToDo in UpGrade Project Feb 6, 2025
@danoswaltCL danoswaltCL self-assigned this Feb 6, 2025
@danoswaltCL danoswaltCL moved this from Refined ToDo to In Progress in UpGrade Project Feb 7, 2025
@danoswaltCL
@zackcl looks good, I created a separate story for the stepper part so it can be worked in parallel #2203
