Use case
There are cases in which you need to control the maximum number of concurrent in-flight events being sent to a target.
For example, 1000 messages land on an SQS queue in one go. The SQS source consumes these messages as fast as it can. The goal is to deliver them to a Service, but the service can only consume at most 10 messages in parallel.
Solution
One idea is to implement concurrency control on Triggers, such that I can specify the maximum number of unacknowledged in-flight requests at any given time. Once the limit is reached, the Trigger waits for an event to be either acknowledged by the target or dropped before delivering a new one. This would apply to both the MemoryBroker and the RedisBroker, with different fault-tolerance guarantees for each.
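Roughly, the per-Trigger dispatcher would gate deliveries behind a semaphore sized by maxConcurrency, only sending the next event once a slot is freed by an ack or a drop. A rough Go sketch of the idea (hypothetical, not actual broker code):

```go
// Hypothetical sketch (not actual broker code): gate per-Trigger deliveries
// behind a semaphore sized by maxConcurrency. The next event is only sent
// once a slot is freed by an ack or a drop.
package main

import (
	"context"
	"fmt"
	"time"
)

type dispatcher struct {
	slots chan struct{} // capacity == maxConcurrency
}

func newDispatcher(maxConcurrency int) *dispatcher {
	return &dispatcher{slots: make(chan struct{}, maxConcurrency)}
}

// dispatch blocks until a slot is free, then delivers the event asynchronously.
func (d *dispatcher) dispatch(ctx context.Context, event string) error {
	select {
	case d.slots <- struct{}{}: // acquire a slot
	case <-ctx.Done():
		return ctx.Err()
	}
	go func() {
		defer func() { <-d.slots }() // release the slot on ack or drop
		deliver(event)               // stand-in for the HTTP POST to the target
	}()
	return nil
}

func deliver(event string) {
	time.Sleep(100 * time.Millisecond) // simulate target processing time
	fmt.Println("delivered:", event)
}

func main() {
	d := newDispatcher(10) // at most 10 unacknowledged events in flight
	for i := 0; i < 1000; i++ {
		_ = d.dispatch(context.Background(), fmt.Sprintf("event-%d", i))
	}
	time.Sleep(time.Second) // let the last in-flight deliveries finish
}
```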
The benefit of this solution is that, since it lives in the broker, it can be used regardless of which source connector you're using. However, it necessarily requires a TriggerMesh broker.
An example Trigger configuration could look something like this, with the new maxConcurrency parameter tentatively added to the delivery section alongside retries and the dead-letter sink:
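Something along these lines, where the surrounding field names are only illustrative and maxConcurrency is the proposed addition:

```yaml
apiVersion: eventing.triggermesh.io/v1alpha1
kind: Trigger
metadata:
  name: sqs-to-service
spec:
  broker:
    group: eventing.triggermesh.io
    kind: RedisBroker
    name: my-broker
  target:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
  delivery:
    retry: 3
    deadLetterSink:
      uri: http://dead-letter-collector.default.svc.cluster.local
    # Proposed: cap of 10 unacknowledged in-flight events for this Trigger.
    maxConcurrency: 10
```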
An alternative would be to implement this on the source connectors themselves (or in both places), but that is more costly, as it requires updating every source component to reach a consistent feature set.
I also found some discussion where people were leveraging a Kafka Broker with an SQS Source so that they could play around with Kafka parallelism configs (e.g. partitioning) to mimic concurrency settings, but that seems much more complex.
Is there any timeframe for when we could expect to see this feature implemented and available?