Add additional step to pipeline to generate a metrics report #241
base: main
Conversation
Force-pushed from cad432f to 8a6be72
@tumido updated the PR to use PVCs instead of interim output artifacts.
Force-pushed from 8a6be72 to efe0ed0
Signed-off-by: Michael Clifford <[email protected]>
Force-pushed from efe0ed0 to 70fe60c
Force-pushed from 1f4bd14 to 06ea8aa
Force-pushed from 06ea8aa to 522f053
Force-pushed from 522f053 to f829641
Force-pushed from 2508289 to 898b6f7
I have one last suggestion, otherwise LGTM. 👍 🙌
@@ -463,6 +484,21 @@ def pipeline(
     model_pvc_delete_task = DeletePVC(pvc_name=model_pvc_task.output)
     model_pvc_delete_task.after(final_eval_task)

+    generate_metrics_report_task = generate_metrics_report_op()
+    generate_metrics_report_task.after(output_mt_bench_task, final_eval_task)
I don't think this needs to run after output_mt_bench_task. That task only uploads artifacts. In fact, output_mt_bench_task and generate_metrics_report_task can run in parallel, since they only read the same data.
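The point generalizes: when two tasks only read the same upstream output, neither needs an ordering edge on the other, and a scheduler is free to run them concurrently. A minimal stdlib sketch of that idea (the task names and score values here are hypothetical, not the pipeline's real components):

```python
from concurrent.futures import ThreadPoolExecutor

# Shared, read-only data produced by an upstream task (hypothetical values).
eval_results = {"mt_bench_score": 7.5}

def output_mt_bench(results):
    # Only reads the shared data (e.g. to upload it as an artifact).
    return f"uploaded {len(results)} result(s)"

def generate_metrics_report(results):
    # Also only reads the shared data, so no ordering edge is needed.
    return dict(results)

with ThreadPoolExecutor() as pool:
    # Neither future waits on the other; both may run concurrently.
    upload = pool.submit(output_mt_bench, eval_results)
    report = pool.submit(generate_metrics_report, eval_results)
    print(upload.result())
    print(report.result())
```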
This PR adds a new step, generate_metrics_report_op, to the end of the pipeline. The purpose of this step is to create a number of KFP Metrics artifacts that help users easily compare performance between different runs using the Compare runs feature of the Data Science Pipelines UI.
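The exact artifact API depends on the KFP version (in the v2 SDK a component would log through a `dsl.Metrics` output), but the payload the UI compares is just named scalar metrics. A minimal stdlib sketch of building such a payload in the KFP v1 `mlpipeline-metrics` JSON shape; the metric name and score are hypothetical:

```python
import json

def build_metrics_report(scores):
    """Build a KFP v1-style metrics payload from a name -> value mapping."""
    return {
        "metrics": [
            {"name": name, "numberValue": value, "format": "RAW"}
            for name, value in scores.items()
        ]
    }

# Hypothetical score; a real step would read it from the evaluation output.
report = build_metrics_report({"mt-bench-overall-score": 7.5})
print(json.dumps(report))
```

Because every run emits the same metric names, the UI can line the values up side by side when runs are selected for comparison.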