[VL] TPCDS Performance drop after new operator "VeloxAppendBatches" #6694
Comments
Thank you for reporting. IIUC, the operator itself doesn't seem to be what slows down your query: the slowest tasks took only 438ms and 166ms. Would you like to share more of the DAG comparisons? In particular, could you check the shuffle write time as well?
So Q64 elapsed time went from 22841 to 50712. The append operator's input averages 4.15 rows per batch and its output 3070 rows per batch, which benefited performance in our tests. The operator's overhead is a sequential memcpy, which definitely can't directly cause the 2x increase in elapsed time; there must be some side effect causing this. @zhli1142015 Do you still have the tool I shared? Can you get the chart of each stage in traceview? Let's see which stage caused the issue and reproduce that stage natively.
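For readers unfamiliar with the operator: it coalesces many tiny input batches (here averaging 4.15 rows each) into larger ones before the shuffle. A minimal Scala sketch of the idea, not Gluten's actual implementation (the real operator runs natively in Velox on columnar vectors, and the row threshold below is made up):

```scala
import org.apache.spark.sql.Row

// Illustrative only: append small row batches until a target size is reached,
// then emit one large batch. VeloxAppendBatches does this natively via memcpy;
// this sketch just mirrors the control flow.
def coalesceBatches(batches: Iterator[Seq[Row]], minRows: Int = 4096): Iterator[Seq[Row]] =
  new Iterator[Seq[Row]] {
    override def hasNext: Boolean = batches.hasNext
    override def next(): Seq[Row] = {
      val buffer = scala.collection.mutable.ArrayBuffer.empty[Row]
      while (batches.hasNext && buffer.size < minRows) {
        buffer ++= batches.next() // sequential copy, analogous to the memcpy overhead
      }
      buffer.toSeq
    }
  }
```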
Thanks @FelixYBW for explaining. I am trying to come up with a minimal query to showcase the impact. If I cannot, I will post the detailed analysis of Q64 and Q24.
@zhztheplayer @marin-ma Why did the patch cause the plan to change? It looks like a bug. Do we use batch counts instead of row counts when creating the plan?
Hi! Could you share the profiling tool in this image? Thanks!
Was there a large difference in shuffle write size?
@Surbhi-Vijay Could you share the detailed metrics?
Below "ColumnarExchange" is for join (store_sales join customer) which converted to BHJ from SHJ in q24b when veloxAppendBatches is enabled. Left Side => With CoalesceBatch enabled |
@Surbhi-Vijay The "data size" metric changed from 58.7M to 5.2M. This could cause a plan change, since the join relies on this value to decide whether to use BHJ. However, a 10x reduction in "data size" seems unreasonable to me. Could you also share the Spark configurations? I've compared TPCDS q24b with VeloxResizeBatches enabled and disabled, but I don't see any stage producing a different data size.
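For context, Spark broadcasts a join side when its estimated size is at or below spark.sql.autoBroadcastJoinThreshold (10 MB by default), so a drop from 58.7M to 5.2M crosses that threshold and flips the SHJ to a BHJ. A simplified sketch of the decision (not Gluten's or Spark's exact code):

```scala
// Simplified broadcast decision: a side qualifies when its estimated size is
// non-negative and at or below the threshold (Spark default: 10 MB).
val autoBroadcastJoinThreshold: Long = 10L * 1024 * 1024

def canBroadcast(estimatedSizeInBytes: Long): Boolean =
  estimatedSizeInBytes >= 0 && estimatedSizeInBytes <= autoBroadcastJoinThreshold

// With the approximate data sizes reported above:
println(canBroadcast(58700000L)) // ~58.7M -> false, stays SHJ
println(canBroadcast(5200000L))  // ~5.2M  -> true,  flips to BHJ
```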
@Surbhi-Vijay Any update? It doesn't make sense that the merge-batch operator impacts the shuffle data size.
@FelixYBW @marin-ma I see this behavior of reduced data size wherever this operator is present. All other metrics (apart from data size) remain almost exactly the same, and the shuffle stage also shows nearly identical metrics. At this point, I suspect there is a bug in populating the data size when this feature is enabled.
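One way to narrow this down is to dump the data-size metric from the executed plan for both configurations and diff the output. A sketch, assuming a Gluten-enabled `spark` session, the q24b query text in a `q24bSql` variable, and that the metric key is "dataSize" as in vanilla Spark exchange nodes:

```scala
// Hypothetical helper: run the query and print each plan node's "dataSize"
// metric so the enabled/disabled runs can be compared side by side.
def dumpDataSizeMetrics(q24bSql: String): Unit = {
  val df = spark.sql(q24bSql)
  df.collect() // force execution so metrics are populated
  df.queryExecution.executedPlan.foreach { node =>
    node.metrics.get("dataSize").foreach { m =>
      println(s"${node.nodeName}: ${m.value} bytes")
    }
  }
}
```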
Do we have a solution for this? Does #6670 solve this issue? |
I think so. Would you like to help give it a try? If it works, we can close this issue.
Backend
VL (Velox)
Bug description
We have observed a performance drop in TPCDS runs after patch #6009.
Top regressing queries:

| Query | New runtime | Previous runtime |
|---|---|---|
| query64 | 50712 | 22841 |
| query24a | 44883 | 27452 |
| query24b | 45003 | 28742 |
When we disable the feature using "spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle": "false", we see the same runtimes as in previous runs. We are using an Azure cluster and reading data from a remote storage account. The regression shows up in VeloxAppendBatches, which in some instances takes a lot of time. Below are the plan snippets from query64.
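For anyone reproducing the A/B comparison, a minimal sketch of toggling the flag (assumes a Gluten-enabled Spark build; only the coalesce flag comes from this report, the app name is illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: build a session with the coalesce-before-shuffle feature disabled,
// to compare against a default run where it is enabled.
val spark = SparkSession.builder()
  .appName("tpcds-q64-coalesce-ab-check")
  .config("spark.gluten.sql.columnar.backend.velox.coalesceBatchesBeforeShuffle", "false")
  .getOrCreate()
```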
Spark version
Spark-3.4.x
Spark configurations
No response
System information
No response
Relevant logs
No response