We need to limit the number of squashed commands when running in pipeline mode #4128
@romange @adiholden and I have discussed simply limiting the number of squashed commands, and in my opinion it doesn't give us any significant benefit. I have suggested breaking the HOP once we have accumulated enough output, sending that data, and then starting a new HOP. What do you think about it? @adiholden told me that I can do it as follows:
@BorysTheDev why does limiting the squashed commands not provide benefits?
@romange because it really depends on the command responses: we can have 30 commands squashed with a 3 GB response, or 1000 commands squashed with a 50 KB response.
So the global limit will be beneficial in some cases, and in some extreme cases it won't be very efficient. I agree. But since limiting squashing post factum, when you have already performed the operations, is more complicated than limiting the number of commands in advance, I prefer that we do the simple approach first and quickly, and then improve it if needed. We have a datastore in prod that needs fixing fast.
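For illustration only, a minimal sketch of the "limit in advance" approach: cap how many commands get squashed into one batch and dispatch as soon as the cap is hit. The names here (`PipelineSquasher`, `Dispatch`, `kMaxSquashedPerBatch`) are hypothetical, not Dragonfly's actual code.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Cap on how many commands are squashed into a single batch; 32 matches the
// per-shard limit mentioned below, but the value is illustrative.
constexpr std::size_t kMaxSquashedPerBatch = 32;

class PipelineSquasher {
 public:
  // Queue a command; dispatch the batch as soon as the cap is reached.
  void Add(std::string cmd) {
    batch_.push_back(std::move(cmd));
    if (batch_.size() >= kMaxSquashedPerBatch) Flush();
  }

  // Dispatch whatever is queued, even if the cap was not reached.
  void Flush() {
    if (batch_.empty()) return;
    Dispatch(batch_);
    batch_.clear();
  }

 private:
  // Placeholder for running the squashed batch on the shard(s).
  void Dispatch(const std::vector<std::string>& /*cmds*/) {}

  std::vector<std::string> batch_;
};
```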
The current limitation is 32 squashed commands per shard, so the only enhancement we can do is breaking the HOP when we have accumulated enough output. I've removed the urgent label because this doesn't look like a high-priority task now.
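For illustration, a rough sketch of the HOP-breaking idea: execute the squashed commands until the buffered replies cross a byte budget, flush them to the client, and continue in a new hop. All names (`RunSquashedInHops`, `Execute`, `SendToClient`, `kReplyBudgetBytes`) are hypothetical and stand in for Dragonfly's real execution and reply-writing layers.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Stubs standing in for the real execution and reply-writing layers.
std::string Execute(const std::string& cmd) { return "+reply for " + cmd; }
void SendToClient(const std::vector<std::string>& /*replies*/) { /* write to socket */ }

// Per-hop budget for buffered reply bytes; 1 MiB is an arbitrary example value.
constexpr std::size_t kReplyBudgetBytes = 1 << 20;

// Run the squashed commands in hops: whenever the buffered replies exceed the
// budget, flush them and start a new hop. This bounds memory whether we squash
// 30 commands producing 3 GB of output or 1000 commands producing 50 KB.
void RunSquashedInHops(const std::vector<std::string>& cmds) {
  std::vector<std::string> pending;
  std::size_t buffered = 0;

  for (const auto& cmd : cmds) {
    std::string reply = Execute(cmd);
    buffered += reply.size();
    pending.push_back(std::move(reply));

    if (buffered >= kReplyBudgetBytes) {
      SendToClient(pending);  // break the hop: ship what we have so far
      pending.clear();
      buffered = 0;           // start a new hop for the remaining commands
    }
  }
  if (!pending.empty()) SendToClient(pending);  // flush the tail of the last hop
}
```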
#4002 depends on the number of squashed commands because we store temporary results of the squashed commands until we execute them all.