[Feature] Add system metrics collected during evaluation to eval_output #280
Comments
@acere Thanks for bringing it up. Just to clarify,
Thank you!
From my perspective, I'd like fmeval to help more with profiling model latency and cost - so the most interesting metrics to store would be:
When making model selection decisions, it's important to weigh model quality against the cost to run and the response latency. Although these factors are workload-sensitive, fmeval is at least running a dataset of representative examples through the model at speed. So while it's no substitute for a dedicated performance test, it could give a very useful initial indication of the trade-offs between output quality and speed/cost.
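A minimal sketch of how per-invocation latency could be captured in the meantime: a user-side wrapper around any runner object that exposes a `predict(prompt)` method (as fmeval's `ModelRunner` interface does). `LatencyRecordingRunner` is a hypothetical helper, not part of fmeval:

```python
import time
from statistics import mean, median


class LatencyRecordingRunner:
    """Hypothetical wrapper: delegates to any runner exposing predict(prompt),
    e.g. an fmeval ModelRunner, and records wall-clock latency per invocation."""

    def __init__(self, wrapped_runner):
        self._wrapped = wrapped_runner
        self.latencies_ms = []  # one entry per model call, in milliseconds

    def predict(self, prompt):
        start = time.perf_counter()
        result = self._wrapped.predict(prompt)  # model output is unchanged
        self.latencies_ms.append((time.perf_counter() - start) * 1000.0)
        return result

    def summary(self):
        """Aggregate statistics that could be attached to the eval output."""
        if not self.latencies_ms:
            return {}
        values = sorted(self.latencies_ms)
        p90 = values[min(len(values) - 1, int(0.9 * len(values)))]
        return {
            "invocations": len(values),
            "latency_ms_mean": mean(values),
            "latency_ms_p50": median(values),
            "latency_ms_p90": p90,
        }
```

Because the wrapper leaves the model output untouched, it can be passed wherever the original runner would be used, and the latency summary read back after the evaluation completes.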
Thanks for your feedback! We will add it to our roadmap and prioritize it.
Exactly as @athewsey indicates. |
It would be useful to collect system metrics, e.g. latency, during the evaluation and to provide a summary in the evaluation output.
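For the "summary in the evaluation output" part, one illustrative shape for the extra block is sketched below; `SystemMetricsSummary` and its fields are hypothetical, not an existing fmeval type:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SystemMetricsSummary:
    """Illustrative shape of a per-run system-metrics block in eval_output."""
    invocations: int = 0
    latency_ms_mean: float = 0.0
    latency_ms_p50: float = 0.0
    latency_ms_p90: float = 0.0
    input_tokens_total: Optional[int] = None   # useful for cost estimation
    output_tokens_total: Optional[int] = None
    estimated_cost_usd: Optional[float] = None  # needs a per-token price


def build_summary(runner: "LatencyRecordingRunner") -> SystemMetricsSummary:
    """Build the summary from the latency-recording wrapper sketched above."""
    stats = runner.summary()
    return SystemMetricsSummary(
        invocations=stats.get("invocations", 0),
        latency_ms_mean=stats.get("latency_ms_mean", 0.0),
        latency_ms_p50=stats.get("latency_ms_p50", 0.0),
        latency_ms_p90=stats.get("latency_ms_p90", 0.0),
    )
```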