adjust block resolved log levels #4175
base: unstable
Conversation
scripts/launch_local_testnet.sh (Outdated)

```diff
-LOG_LEVEL="DEBUG; TRACE:networking"
+LOG_LEVEL="TRACE"
```
`TRACE` can be very spammy inside libp2p; should double-check that log sizes are roughly similar to before.
Either this change occurs or `logtrace` can't trace sync committee messages, because it will only see one side of the send/receive. But it's also not something one has to immediately have working: `logtrace` is currently set to run only on aggregate attestations, so it's fine to sacrifice this.
Before and after this PR, that's enough of a change that I'd want to address it separately, if someone does want to use `logtrace` to track sync committee messages in the local testnet. It'll be an obvious, non-subtle, easily reproducible issue, so it can be addressed later if people think it's worth solving.
c509035 reverts this change.
Hmm, or add `TRACE` specifically for `beacval`?
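A hedged sketch of what that could look like, reusing the per-topic filter syntax already present in the diff above (assuming `beacval` is a valid log topic name):

```shell
# Keep DEBUG globally, but enable TRACE only for the beacval topic,
# avoiding the very spammy libp2p/networking TRACE output.
LOG_LEVEL="DEBUG; TRACE:beacval"
echo "$LOG_LEVEL"
```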
```diff
-debug "Block resolved",
+info "Block resolved",
```
This one is very useful for debugging, thanks!
Hm, on sync this will be quite noisy (i.e. millions of lines). Should it be limited to gossip blocks?
I find the inconsistency created here a bit hard to justify. What is the evidence that we have users reluctant to use the DEBUG log level specifically due to the volume of sync committee messages?
Sure, it's worth getting data from a large network, to make sure it captures scaling correctly. But the volume of … We do need, I would argue, something that's more useful for debugging than …
Because that specific inconsistency isn't worthwhile, a0ae823 removes it.
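One way to resolve the noise concern raised above would be to pick the log level based on how the block arrived. Nimbus is written in Nim; the following is an illustrative Python sketch only, with a hypothetical helper name:

```python
import logging

def block_resolved_event(root: str, slot: int, via_gossip: bool) -> tuple:
    """Hypothetical sketch: choose a level for a 'Block resolved' event.

    Gossip blocks arrive at most roughly once per slot, so logging them
    at info is cheap; blocks processed during sync can number in the
    millions, so those are demoted to debug to keep log volume down.
    """
    level = logging.INFO if via_gossip else logging.DEBUG
    return logging.getLevelName(level), f"Block resolved root={root} slot={slot}"
```

Usage: `block_resolved_event("c509035", 42, via_gossip=True)` yields an `"INFO"` record, while the same call with `via_gossip=False` yields `"DEBUG"`.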
Force-pushed from 77dc30c to b5e4f1f.
Been thinking a bit about this message, and here's a bit of a wishlist I would want from "positive" logging, I think:
I think this could be replaced by a single "Slot updated" message, which happens at the first of:
Apart from this, we would log "Block resolved" from gossip and introduce a new "Syncing" log that replaces "Slot start" but only prints when we're actually syncing. With this logging, I expect a synced node to display 2 messages:
An unsynced node would similarly display 2 messages / slot:
Potentially, we would highlight oddities with additional logging:
Force-pushed from 57ed7b2 to df3b76a.
Force-pushed from c936c29 to 6a4f83c.
More thoughts on this and how to manage log volume and make the message useful during syncing as well: Every slot, we wait for the expected block of that slot to arrive, or slot+4s to happen - these are the general conditions for attesting anyway. We then break down logging the following way:
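The breakdown described above could be sketched roughly as follows. Again, this is an illustrative Python sketch, not Nimbus code; the function and field names are hypothetical:

```python
def choose_slot_log(expected_block, synced_blocks):
    """Hypothetical sketch of the per-slot strategy: after waiting for the
    slot's expected block (or slot+4s), log either the block's details, a
    one-line syncing summary, or an empty-slot note."""
    if expected_block is not None:
        # The slot's expected block arrived in time: log its details.
        return f"Block resolved slot={expected_block['slot']} root={expected_block['root']}"
    if synced_blocks:
        # Bulk sync processing: one summary line instead of a line per block.
        return (f"Syncing blocks={len(synced_blocks)} "
                f"slots={synced_blocks[0]['slot']}..{synced_blocks[-1]['slot']}")
    # Neither gossip nor sync produced a block before the cutoff.
    return "Slot empty"
```

The point of the sketch is that the per-block detail path and the bulk-summary path are mutually exclusive within a slot, which is what keeps log volume bounded during sync.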
This is inspired by nethermind which logs something like the following when the client is synced:
On the other hand, when it's bulk-processing blocks, it logs this instead:
The strategy of logging details on the expected block if it arrives, and a summary of any syncing blocks, gives us the best of both worlds. It also covers the case where someone sends "rogue" blocks that drop in too late and become potential but not really viable heads.
This would make managing the volume of logs that go via Logstash into our ElasticSearch cluster much easier.
#4162