Routing Table maintenance #10
Hello Aleksandar,

1/ From the paper, my understanding of it is that messages have to be sent in order to be known on the network.

2/ Yes, when a response is received with a random <request id> to an <old node id>, the

3/ My understanding of the paper is that peers are lazily purged. When a lookup is performed, peers are contacted, and peers that fail to respond in a timely manner are purged at that moment. This is not implemented ATM, but it isn't a big deal. I believe the sweet spot would be
Hi David, thanks for answering. Please see my replies below.
Yes, since the IDs sent are not the real peer ID, and only IDs are checked for uniqueness (not IP:PORT entries), the ID uniqueness check finds no duplicates during the discovery process (then again, what sense would it make to send multiple real peer IDs anyway?). So bootstrap ends up with a bunch of peers with different, non-existent IDs but the same IP:PORT. I could be missing something important, but I can't make sense of the discovery process logic.
The only thing I see is a check for the non-existent message ID. Duplicate non-existent peer IDs remain in the bootstrap routing table and keep propagating back to the peer that faked them (and to other peers): every save/load adds duplicate peers to peers' routing tables.
OK, makes sense.
Speaking generally, I think I understand the purpose of this, but before we go there, I'd like to first clarify and properly understand the neighbor discovery on boot.
1/ There are two kinds of IDs involved:
Each time a peer p contacts a node n (request & response), n stores p's peer ID within its routing table. The purpose of

2/ Yes, this is something I should implement. That's clearly a bug, because it makes the library kind of memleak.

On a side note, I was wondering if rewriting the library to play with C++20 coroutines would be fun and could simplify the code. Would it be a problem for you to require a C++20 compiler?
OK, I understand this; I was mixing up peer and key IDs.
Implementing this is simple. Implementing it efficiently, not so much without
Coroutines would definitely make sense. However, at this point I'm working on an entirely divergent port of your library, completely decoupled from Boost and Asio, using POCO and gtest instead. In case you want to join that effort, let me know, and we can explore the coroutines possibility there (but it really depends on the folks requesting this port and how they feel about coroutines and the compiler support thereof).
I have some questions related to the routing table:

1/ There are a lot of messages exchanged on initial peer connection to the bootstrap node, and multiple entries for the peer are inserted into the routing table. Is this necessary, or is one entry per peer enough?

2/ Since every peer generates a new ID for itself on every run, when a peer with a different ID but the same IP:PORT as an existing routing table entry (i.e. a restarted peer) shows up, shouldn't the previous entry for that IP:PORT be purged? Otherwise, the table will grow without bound.

3/ Stale peers should obviously be purged at some point. Since this is UDP, the only time we know a peer exists is when it broadcasts a new entry. Is a peer required/expected to ping all its peers at a certain interval to prevent being purged from their routing tables?
Thanks!