We already have general at-least-once processing, which is a good foundation.
Batching sends at the Akka level should be given a thought first, in case it constrains this one.
Exactly-once would be nice, but:
Going the route of deduplication with in-memory sets is general, but takes a big hit on throughput and memory.
Going the route of Kafka transactions performs well, but is not general, i.e.:
a) it will only work with Kafka as the log storage (not a problem, as long as the implementation doesn't prescribe Kafka when exactly-once is not required)
b) Kafka transactions work only within the same cluster, which is acceptable
Sketches of both routes follow below.
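To make the trade-off concrete, here is a minimal sketch of the deduplication route; `MessageId` and the processing callback are hypothetical stand-ins for whatever key and handler we'd actually use:

```scala
import scala.collection.mutable

final case class MessageId(partition: Int, offset: Long)

final class Deduplicator(process: MessageId => Unit) {
  // The hit on memory and throughput comes from here: the set grows with every
  // id ever seen, and every delivery pays a lookup before processing.
  private val seen = mutable.Set.empty[MessageId]

  def handle(id: MessageId): Unit =
    if (seen.add(id)) process(id) // add returns false when the id was already seen
}
```

And a minimal sketch of the Kafka-transactions route, using the plain kafka-clients producer API; topic names, the transactional id and the consumer group are made up. Committing the input offsets through the producer's own transaction is exactly what causes (a) and (b):

```scala
import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.OffsetAndMetadata
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringSerializer

object TransactionalWriteSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
    props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "journal-writer-1")
    val producer = new KafkaProducer(props, new StringSerializer, new StringSerializer)

    producer.initTransactions()
    producer.beginTransaction()
    producer.send(new ProducerRecord("journal", "persistence-id", "event-payload"))
    // Committing the consumed offset inside the same transaction is what makes the
    // write exactly-once, and also why it cannot span Kafka clusters.
    producer.sendOffsetsToTransaction(
      Map(new TopicPartition("commands", 0) -> new OffsetAndMetadata(43L)).asJava,
      "journal-consumers")
    producer.commitTransaction()
    producer.close()
  }
}
```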
Is there a third way?
In some other design work related to commits, two points were considered (a sketch follows below):
a) being able to do flushAll() on our produce operations
b) having a global context of all operations that were executed, i.e. their futures
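A rough sketch of how those two points could fit together, assuming a shared registry of produce futures (all names here are hypothetical):

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong
import scala.concurrent.{ExecutionContext, Future}
import scala.jdk.CollectionConverters._

final class ProduceContext(implicit ec: ExecutionContext) {
  private val nextId  = new AtomicLong(0L)
  private val pending = new ConcurrentHashMap[Long, Future[Any]]()

  // b) global context: every executed produce operation registers its future here
  //    and removes itself once it completes.
  def register[A](op: Future[A]): Future[A] = {
    val id = nextId.getAndIncrement()
    pending.put(id, op)
    op.onComplete(_ => pending.remove(id))
    op
  }

  // a) flushAll(): completes once everything that was in flight at the time of
  //    the call has finished.
  def flushAll(): Future[Unit] = {
    val inFlight = pending.values().asScala.toList
    Future.sequence(inFlight).map(_ => ())
  }
}
```

Whether such a flushAll() would live on the journal itself or on a producer wrapper is the kind of decision the Akka-level batching question above would constrain.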