We reuse the HTTP/2 Connection created by hyperium/h2.
However, during stress testing, a memory leak occurred, causing the system to crash and reboot. So I began troubleshooting.
1. Start Troubleshooting with the memory growth chart
First, using Jemalloc and Jeprof, I generated memory growth charts at intervals: jeprof_memory_leak.pdf
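For context, the profiles come from jemalloc's heap profiler. Below is a minimal sketch of the setup, assuming the tikv-jemallocator crate with a profiling-enabled jemalloc build (the exact env var may be prefixed, e.g. _RJEM_MALLOC_CONF, depending on build flags; this is not our exact service code):

    use tikv_jemallocator::Jemalloc;

    // Route all allocations through jemalloc so its profiler sees them.
    #[global_allocator]
    static GLOBAL: Jemalloc = Jemalloc;

    fn main() {
        // Run the service under stress with periodic heap dumps, e.g.:
        //   MALLOC_CONF=prof:true,lg_prof_interval:30 ./server
        // then render the growth chart:
        //   jeprof --pdf ./server jeprof.*.heap > jeprof_memory_leak.pdf
    }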
The largest memory block was finish_grow, which by itself accounted for 88.8% of the total allocation. The finish_grow function is part of Vec reallocating new memory when it grows. This suggests the leak occurred while writing to Vecs: memory was allocated for these Vecs but never released in a timely manner.
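As a quick illustration of why finish_grow dominates: every time a push exceeds a Vec's capacity, the Vec's internal RawVec reallocates through finish_grow and roughly doubles its buffer, so an unbounded, long-lived Vec keeps hitting this path. A self-contained demonstration:

    fn main() {
        let mut v: Vec<u64> = Vec::new();
        let mut last_cap = v.capacity();
        for i in 0..1_000_000u64 {
            v.push(i);
            if v.capacity() != last_cap {
                // Each capacity change here is a reallocation
                // through the finish_grow path.
                println!("len {:>7}: capacity {} -> {}", v.len(), last_cap, v.capacity());
                last_cap = v.capacity();
            }
        }
    }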
Walking up the call chain from finish_grow, I found tracing_core -> event -> Event -> dispatch, which is related to the subscriber. In our business code, the subscriber configured for tracing_core is opentelemetry.
In opentelemetry_api, I found the call chain finish_grow <- reserve_for_push <- push in the on_event function, which pushes events onto the events array in a span. tracing_opentelemetry/layer.rs
fn on_event(&self, event: &Event<'_>, ctx: Context<'_, S>) {
    if let Some(span) = ctx.lookup_current() {
        ...
        // Each tracing event is appended to the span builder's
        // events Vec, which is only drained when the span closes
        // and is exported.
        if let Some(ref mut events) = builder.events {
            events.push(otel_event);
        } else {
            builder.events = Some(vec![otel_event]);
        }
        ...
    }
}
2. Start Troubleshooting with Jaeger on the other side
On the other hand, starting from Jaeger, I observed a span as long as the stress test duration. This span is a trace_debug_span that was created when establishing a Connection. h2/src/proto/connection.rs
I've observed that there are many events under the trace_debug_span, as if an event is added for every incoming request (see the sketch below). h2/src/codec/framed_read.rs h2/src/codec/framed_write.rs
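For illustration, the accumulation pattern looks roughly like this: one span created with the connection and entered on every poll, so every per-request event attaches to the same long-lived span. This is a hypothetical mirror of the pattern, not h2's actual code; the names Connection, new, and poll_once are assumptions:

    use tracing::{debug_span, Span};

    struct Connection {
        // The span lives exactly as long as the connection object.
        span: Span,
    }

    impl Connection {
        fn new() -> Self {
            Connection { span: debug_span!("Connection") }
        }

        fn poll_once(&self) {
            // Every request/frame enters the same long-lived span,
            // so each event recorded here is attached to it.
            let _enter = self.span.enter();
            tracing::debug!("processing a frame");
        }
    }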
To disable the trace_debug_span in the h2 library, you can change the RUST_LOG logging level from debug to info. After doing this and running the stress test again, the memory no longer leaks.
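A per-crate filter directive can also keep debug logs elsewhere while capping h2 at info, so the span is never created at debug level. A minimal sketch, assuming a tracing-subscriber fmt setup with the env-filter feature (not our actual opentelemetry pipeline):

    use tracing_subscriber::{fmt, EnvFilter};

    fn init_tracing() {
        // Equivalent to RUST_LOG="debug,h2=info": debug everywhere,
        // but h2's debug-level spans and events are disabled.
        let filter = EnvFilter::new("debug,h2=info");
        fmt().with_env_filter(filter).init();
    }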
In summary
I used the h2 library to create a persistent Connection for reuse, which results in a trace_debug_span whose lifetime matches the connection's. Under a stress test with continuous requests, events are constantly appended to the events array inside the trace_debug_span. As long as the persistent Connection stays open, the trace_debug_span never ends, so it is never reported and destroyed. The memory held by the trace_debug_span therefore keeps growing, producing the leak.
Although we normally don't enable RUST_LOG: "debug" in production, sometimes we need to turn it on to troubleshoot business issues from the debug logs, and this memory leak makes that troublesome.
Therefore, would it be possible to make the trace_debug_span created with each connection optional, so users can disable it?
I don't think this is h2's fault; it's rather the subscriber that is growing the events forever. Any subscriber implementation will eventually need to cap its own growth.
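For illustration, the kind of cap described here could look like the sketch below inside a layer's event handling; MAX_EVENTS_PER_SPAN and record_event are hypothetical names, not part of tracing-opentelemetry's actual API:

    // Hypothetical cap on per-span event growth in a subscriber.
    const MAX_EVENTS_PER_SPAN: usize = 1024;

    fn record_event(events: &mut Vec<String>, otel_event: String) {
        if events.len() < MAX_EVENTS_PER_SPAN {
            events.push(otel_event);
        } else {
            // Drop (or aggregate) further events so a long-lived
            // span cannot grow its buffer without bound.
        }
    }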