[SUPPORT] Should we introduce partition-level metrics ? #12197
Comments
Only downside is users partitioning things too granularly, leading to a bombardment of downstream metrics systems. I see how it's useful though.
Is it feasible to extend the compaction metrics a little bit, maybe just represent the latency metrics at another level: aggregated by partition?
Yes, we also need to consider the case of too many partitions. I think we can provide this ability and let the user decide whether to turn it on.
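The opt-in idea above, plus the earlier concern about too-granular partitioning flooding downstream metrics systems, could be sketched roughly like this. This is a minimal illustration, not Hudi's actual metrics API; the class, the `enabled` flag, the `max_partitions` cap, and the `__overflow__` bucket are all hypothetical names:

```python
# Hypothetical sketch: opt-in per-partition compaction latency metrics
# with a cardinality cap, so overly granular partitioning cannot
# bombard the downstream metrics system.
from collections import defaultdict


class PartitionMetrics:
    def __init__(self, enabled=False, max_partitions=100):
        self.enabled = enabled                # off by default; user opts in
        self.max_partitions = max_partitions  # cardinality guard
        self.latencies = defaultdict(list)

    def record_compaction(self, partition, latency_ms):
        if not self.enabled:
            return
        # Collapse partitions beyond the cap into a single overflow bucket.
        if partition not in self.latencies and len(self.latencies) >= self.max_partitions:
            partition = "__overflow__"
        self.latencies[partition].append(latency_ms)

    def p99(self, partition):
        samples = sorted(self.latencies.get(partition, []))
        if not samples:
            return None
        idx = min(len(samples) - 1, int(0.99 * len(samples)))
        return samples[idx]


m = PartitionMetrics(enabled=True, max_partitions=2)
for latency in (10, 20, 30, 500):
    m.record_compaction("p_date=2024-01-01", latency)
m.record_compaction("p_date=2024-01-02", 42)
m.record_compaction("p_date=2024-01-03", 7)   # third partition exceeds the cap
print(m.p99("p_date=2024-01-01"))  # 500
print(m.p99("__overflow__"))       # 7
```

With `enabled=False` (the default), `record_compaction` is a no-op, matching the suggestion that the feature stay off unless the user turns it on.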
I plan to provide a
Can we provide partition-level metrics? In many scenarios where partitions are used, such as p_date and p_product, which separate data by time or type, the data differs considerably across partitions. Can we provide partition-dimension metrics, for example the p99 latency of the compaction operation for a specified partition? This would help a lot when doing performance optimization.
Tips before filing an issue
Have you gone through our FAQs?
Join the mailing list to engage in conversations and get faster support at [email protected].
If you have triaged this as a bug, then file an issue directly.
Describe the problem you faced
A clear and concise description of the problem.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A clear and concise description of what you expected to happen.
Environment Description
Hudi version :
Spark version :
Hive version :
Hadoop version :
Storage (HDFS/S3/GCS..) :
Running on Docker? (yes/no) :
Additional context
Add any other context about the problem here.
Stacktrace
Add the stacktrace of the error.