This will aim to consolidate the different types of cube materialization into a single job that chooses the most efficient way to materialize the cube for downstream use. It will be a new materialization job, to avoid conflicts with existing jobs. The job will take advantage of the ability, added in #1242, to build queries that compute pre-aggregated measures for a set of metrics and dimensions.
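For context, a pre-aggregated measures query of the kind #1242 enables might look roughly like the sketch below. This is written as a Python snippet, and the node, column, and measure names (`default.orders_fact`, `order_amount_sum`, etc.) are hypothetical, not actual DJ output:

```python
# Hypothetical example of a single measures query for one subset of
# metrics and dimensions; all names below are illustrative only.
measures_query_sql = """
SELECT
    order_date,                              -- shared temporal partition dimension
    customer_region,                         -- grouping dimension
    SUM(order_amount) AS order_amount_sum,   -- pre-aggregated measure
    COUNT(order_id)   AS order_count         -- pre-aggregated measure
FROM default.orders_fact
GROUP BY order_date, customer_region
"""
```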
The core DJ API will need to build the following pieces of materialization job metadata (see the sketch after this list):
- A list of measures queries: Each query computes the pre-aggregated measures for a specific subset of metrics and dimensions. For each measures query, we will additionally keep track of:
  - The node it was generated for
  - A list of measures and dimensions that it provides
  - Spark configuration (it is unclear how we would configure this at the moment)
- A combiner query: This query merges the results of the above measures queries into a single dataset.
- Druid ingestion spec: Druid-specific configuration for the combined dataset
- The temporal partition: This will need to be a shared dimension for all metrics in the cube.
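A minimal sketch of how this metadata could be modeled, assuming Python dataclasses with field names invented here for illustration (the actual DJ schema may differ):

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MeasuresQuery:
    """One pre-aggregated measures query for a subset of metrics/dimensions."""
    node: str                         # the node the query was generated for
    measures: List[str]               # measures this query provides
    dimensions: List[str]             # dimensions this query provides
    query: str                        # the generated SQL
    spark_conf: Dict[str, str] = field(default_factory=dict)  # mechanism still TBD


@dataclass
class CubeMaterializationConfig:
    """Metadata the core DJ API would assemble for the combined cube job."""
    measures_queries: List[MeasuresQuery]
    combiner_query: str               # merges the measures query outputs into one dataset
    druid_ingestion_spec: Dict        # Druid-specific config for the combined dataset
    temporal_partition: str           # dimension shared by all metrics in the cube
```

A job runner could then execute each `MeasuresQuery.query`, run `combiner_query` over the results, and hand the combined dataset plus `druid_ingestion_spec` to the ingestion step, with `temporal_partition` driving the partitioning of both stages.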