This repository has been archived by the owner on Dec 7, 2018. It is now read-only.

Question > How Kafka partitions are managed by Axon #86

Open
ghilainm opened this issue Aug 16, 2018 · 1 comment

ghilainm commented Aug 16, 2018

Currently there seems to be no documentation on how Kafka partitions are managed by Axon.

Some example questions:

  • What is the strategy for assigning an event to a Kafka partition within a topic (producer side)? How can this be overridden, and is it advisable to do so? (A plain-Kafka illustration of the default behaviour follows this list.)
  • Where does Axon store the offset for a given partition? Does it use Kafka to store the offsets, or does it store them internally?
  • How are Kafka partitions related to Axon segments?
  • Is it possible to have a dynamic number of segments? I would like the number of segments to be automatically aligned with the number of partitions.
  • How is this related to the SequencingPolicy? Is the result of the SequencingPolicy used to assign a message to a partition?
  • How are segments assigned to nodes? The documentation says that nodes compete for segments. Are segments automatically rebalanced when a new node joins the group?
  • How can the number of segments be increased after it has been defined? (By creating a new tracking event processor with a different initialSegmentCount? See the configuration sketch at the end of this comment.)
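
As a concrete reference for the first bullet, this is what the default behaviour looks like in plain Apache Kafka when records are published with a key. How (or whether) Axon derives that key from the event, e.g. from the aggregate identifier, is part of what needs documenting, so the key used below is an assumption:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PartitionKeySketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Assumed key: the event's aggregate identifier (hypothetical value).
            String aggregateId = "order-42";
            String payload = "{\"event\":\"OrderShipped\"}";

            // No explicit partition is given, so Kafka's default partitioner hashes the
            // key (murmur2) modulo the partition count: all records with the same key
            // land on the same partition and therefore stay ordered relative to each other.
            producer.send(new ProducerRecord<>("axon-events", aggregateId, payload));
        }
    }
}
```

Overriding this on the producer side would mean either setting the partition explicitly on the ProducerRecord or configuring a custom partitioner.class on the producer; whether Axon exposes a hook for either is part of the question.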

Could you please provide some insight on this? It would be nice to have a paragraph in the documentation explaining the general integration of Axon with Kafka.
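
For reference, the Axon-side configuration hooks these questions revolve around look roughly like this. The class and package names are assumed from Axon 3.x, and whether wiring them this way actually aligns segments with Kafka partitions is exactly what is unclear:

```java
// Assumed Axon 3.x packages; treat the imports as a sketch, not a confirmed API surface.
import org.axonframework.config.TrackingEventProcessorConfiguration;
import org.axonframework.eventhandling.async.SequentialPerAggregatePolicy;

public class SegmentConfigSketch {

    /**
     * Builds a tracking-processor configuration whose initial segment count matches an
     * (assumed) Kafka partition count, with one processing thread per segment on this node.
     * Registering it against the processor that reads from the Kafka message source is left
     * out, because how that registration interacts with partitions is the open question.
     */
    public static TrackingEventProcessorConfiguration forKafkaPartitions(int partitionCount) {
        return TrackingEventProcessorConfiguration
                .forParallelProcessing(partitionCount)
                .andInitialSegmentsCount(partitionCount);
    }

    /**
     * Axon's default ordering guarantee: events of the same aggregate are handled
     * sequentially. Whether the result of this policy is (or can be) used to choose the
     * Kafka record key or partition is one of the bullets above.
     */
    public static SequentialPerAggregatePolicy defaultSequencingPolicy() {
        return new SequentialPerAggregatePolicy();
    }
}
```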

@altroy75

If I'm not mistaken, having a Kafka topic with several partitions combined with more than one Axon segment can potentially lead to skipped, unprocessed events.
Consider a topic with two partitions and an Axon processor configured with two segments, and assume two JVMs processing events concurrently. In that scenario the Kafka consumer in each JVM is assigned one of the two partitions, and the event processor in each JVM then tries to claim a segment. It can now happen that an event processor's segment does not match some or all of the events in its Kafka consumer's partition (those events sit in the other partition), in which case those event messages are skipped.
Is the above scenario valid, or am I missing something?
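
To make the concern concrete, here is a self-contained sketch. Kafka decides which consumer owns which partition by hashing the record key (the murmur2 arithmetic of the default partitioner), while the tracking processor decides which events a segment handles from a hash of the sequence identifier; the segment-side hashing below is a stand-in (String#hashCode modulo segment count), not Axon's actual implementation. Because the two hashes are computed independently, the events a claimed segment matches are in general spread over both partitions, while each JVM's consumer only reads one of them:

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class PartitionSegmentMismatchSketch {

    // Same arithmetic as Kafka's default partitioner for keyed records.
    static int kafkaPartition(String recordKey, int partitionCount) {
        return Utils.toPositive(Utils.murmur2(recordKey.getBytes(StandardCharsets.UTF_8))) % partitionCount;
    }

    // Stand-in for Axon's segment matching on the sequence identifier's hash (an assumption).
    static int axonSegment(String sequenceIdentifier, int segmentCount) {
        return Math.floorMod(sequenceIdentifier.hashCode(), segmentCount);
    }

    public static void main(String[] args) {
        for (String aggregateId : new String[]{"order-1", "order-2", "order-3", "order-4"}) {
            System.out.printf("%s -> Kafka partition %d, Axon segment %d%n",
                    aggregateId, kafkaPartition(aggregateId, 2), axonSegment(aggregateId, 2));
        }
        // Whenever some of a segment's events fall into the partition owned by the other JVM,
        // the JVM holding that segment is never offered them; that is the skipping described above.
    }
}
```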

Thanks,
Alex
