Replies: 4 comments 1 reply
-
Hi @leepand, thank you for testing it! You can use the streaming API to monitor metrics in real time: https://mltraq.com/advanced/datastream/ . Let me know if it works for your use case, happy to delve deeper into it.
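For readers unfamiliar with the pattern, the general idea behind streaming metrics can be illustrated with a plain-Python sketch. Note that `MetricStream` below is a stand-in class, not MLtraq's actual API; see the linked datastream docs for the real interface:

```python
import queue


class MetricStream:
    """Stand-in for a streaming metrics client (NOT MLtraq's API)."""

    def __init__(self):
        self.q = queue.Queue()

    def log(self, name, value):
        # Producer side: the model service pushes metrics as they happen.
        self.q.put((name, value))

    def drain(self):
        # Consumer side: a monitor reads accumulated metrics in near real time.
        items = []
        while not self.q.empty():
            items.append(self.q.get())
        return items


stream = MetricStream()
for step in range(3):
    stream.log("latency_ms", 10 + step)

print(stream.drain())  # [('latency_ms', 10), ('latency_ms', 11), ('latency_ms', 12)]
```

The key point is that logging is a cheap, non-blocking append, while persistence/consumption happens elsewhere, which is what keeps the serving path fast.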
-
Thank you for your prompt response. However, the streaming API may not align with our use case. Our project aims to provide services in the following manner. Here is a simple example:

```python
from flask import Flask, request

app = Flask(__name__)

# prediction function
def ValuePredictor(to_predict_list):
    ...

@app.route('/result', methods=['POST'])
def result():
    ...
```
-
Currently, we achieve real-time persistence of our key-value (KV) streaming data using this method (https://github.com/grantjenks/python-diskcache), which meets our performance requirements. It would be perfect if we could also utilize MLtraq to handle the corresponding table data.
-
Hah! MLtraq targets experimentation; it's not designed for serving models or tracking around them. It seems you're only missing its serialization capabilities. You can keep using diskcache for serving and rely on MLtraq for the serialization.
The supported data types are documented at https://mltraq.com/advanced/storage
-
This is a great project, but how can we effectively track model metrics in real-time projects? I have noticed that calling `experiment.persist(if_exists="replace")` can be time-consuming, which may cause delays in real-time model services.
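One way to quantify that delay is to time the persistence call directly. In the sketch below, `persist_stub` is a stand-in (it just sleeps); swap in the real `experiment.persist(if_exists="replace")` call to measure your actual overhead:

```python
import time


def persist_stub():
    # Stand-in for experiment.persist(if_exists="replace");
    # replace with the real call to measure your own backend.
    time.sleep(0.05)


t0 = time.perf_counter()
persist_stub()
elapsed = time.perf_counter() - t0
print(f"persist took {elapsed * 1000:.1f} ms")
```

If the measured cost is too high for the serving path, a common pattern is to move persistence off the request thread (e.g. a background worker), so the service only pays the cost of an in-memory append.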