Hey,

I have read through the documentation, and it is not clear to me what exactly `batchingLimit` applies to. I am looking for a way to limit the number of entries sent down to a server in a single request. Our problem is that sometimes thousands of ids are batched into one request, which overwhelms the subgraphs. If the ids were distributed across several calls via a limit, we could also load-balance them. Setting `batchingLimit: 1`, however, does not seem to do this. My suspicion is that this option controls how many distinct queries are pushed down at once and has nothing to do with the number of objects I am loading from the subgraphs.
Is there a feature capable of limiting the number of items fetched in one request? If not, what other ways are there to achieve this?
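To make the behavior we are after concrete, here is a minimal sketch using the plain `dataloader` package, whose `maxBatchSize` option has exactly these semantics (the `Product` type and fetcher are illustrative placeholders, not our real code):

```ts
import DataLoader from 'dataloader';

interface Product {
  id: string;
  name: string;
}

// Placeholder for the actual subgraph request; per the DataLoader
// contract it must return one result per key, in the same order.
async function fetchProductsByIds(ids: readonly string[]): Promise<Product[]> {
  return ids.map(id => ({ id, name: `product-${id}` }));
}

const productLoader = new DataLoader<string, Product>(
  ids => fetchProductsByIds(ids),
  {
    // Never put more than 100 keys into a single batch: 5,000 queued ids
    // become 50 separate calls that a load balancer can spread out.
    maxBatchSize: 100,
  },
);
```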
We are using programmatic additional resolvers with `argsFromKeys`.
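For reference, this is roughly what such a resolver looks like; the subgraph name (`Products`), field names, and types are hypothetical stand-ins for our real schema:

```ts
// additional-resolvers.ts, referenced from the Mesh config
import { Resolvers } from './.mesh';

export const resolvers: Resolvers = {
  Order: {
    product: {
      selectionSet: /* GraphQL */ `
        {
          productId
        }
      `,
      resolve: (root, _args, context, info) =>
        // Batched delegation: Mesh collects the keys from many Order
        // objects and issues one productsByIds query for all of them.
        // It is this batch whose size we would like to cap.
        context.Products.Query.productsByIds({
          root,
          key: root.productId,
          argsFromKeys: ids => ({ ids }),
          context,
          info,
        }),
    },
  },
};
```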
Best regards and thank you for your response.