fix(inst-fetch,inst-xhr) Ignore network events with zero-timing #5332

Open

wants to merge 1 commit into main
Conversation

@chancancode (Contributor) commented Jan 11, 2025

Background:

  1. For historical reasons, the perf/resource timing spec uses 0 as a special value to denote when some timing information is either not applicable (e.g. no redirect occurred) or not available (when producing an opaque resource timing object for CORS resources).

  2. However, in some limited cases, 0 can also be a legitimate value for these timing events. Note that this is rare in real life – these fields are high-res performance timestamps relative to the performance time-origin, which is typically the navigation event for the initial page load.

    For a resource timing to have a 0 timestamp, the request would have to be initiated simultaneously with the page load; it is unclear whether this can actually happen in practice. Reportedly (Allow zero/negative performance timings #1769), at one point this was observed in some browsers during automated tests, where things happen very fast and the browser artificially suppresses the timing resolution. It was unclear whether the report was about the navigation timing entry or resource timing entries.

    It is also unclear whether these utilities are intended for anything other than the internal fetch/XHR instrumentation, but they are a public API, so if someone uses these functions on the initial page navigation event, it is possible for them to see legitimate 0-value inputs.

  3. When creating span events, we do not use the timeOrigin-relative perf timestamps as-is. Rather, we convert them back to UNIX epoch timestamps. When this conversion is applied to those 0 timestamps, it creates nonsensical/misleading events that are quite difficult for downstream consumers to distinguish (see the sketch below).

It would be nice if the W3C specs had left the N/A values as undefined, but that's not the world we live in, so we have to work with what we've got.
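
To make the conversion problem concrete, here is a minimal sketch (the entry values and the helper are hypothetical, not the actual instrumentation code) of how a 0-valued timing field turns into a plausible-looking but meaningless epoch timestamp:

```ts
// Hypothetical illustration of converting timeOrigin-relative timings
// into absolute UNIX epoch milliseconds, as span events ultimately do.
function toEpochMillis(relativeTiming: number): number {
  return performance.timeOrigin + relativeTiming;
}

// An opaque (cross-origin) resource timing entry reports 0 for the
// detailed phases, meaning "not available", not an actual timing.
const entry = {
  startTime: 1200.5, // request started ~1.2s after timeOrigin
  secureConnectionStart: 0, // spec's "not available" sentinel
};

const startEpoch = toEpochMillis(entry.startTime); // a sensible absolute timestamp
const tlsEpoch = toEpochMillis(entry.secureConnectionStart);
// tlsEpoch === performance.timeOrigin, i.e. an "event" that looks real but
// merely restates when the page loaded
```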

History:

  1. Initially, the code ignored 0-value timestamps.
  2. Allow zero/negative performance timings #1769 identified cases of valid 0-value timestamps and removed the check.
  3. This caused the other category of bugs, where we created nonsensical events (feat(sdk-trace-web): ignore unavailable span network events #2457, Wrong time reported for secure connection start #3848, Invalid timings sent by instrumentations (xhr,fetch,resource) #4478).
  4. #3897 added a special case for secureConnectionStart by way of tapping into auxiliary metadata.
  5. That approach cannot be generalized for the other cases, so Fix invalid timings in span events #4486 added some rather convoluted checks.
  6. As part of refactoring the tests to use service workers, a new bug [inst-xhr/fetch] Network events incorrectly dropped #5314 was identified.

Presumably, the patch in #4486 was written that way (as opposed to just checking for 0) to avoid breaking #1769, but I suspect it ends up breaking some of those use cases anyway.

Options:

  1. Keep the patch from Fix invalid timings in span events #4486 largely as-is but change the reference time from fetchStart to startTime.
  2. Revert to the very original intent of the code and drop everything with 0-values.
  3. This commit attempts a compromise position: when startTime === 0 (which probably means we were called with the initial navigation event), 0-values are plausibly valid, so we include them by default; in any other case (startTime > 0), legitimate 0-values should not be possible, so we drop them by default. A sketch of this logic follows this list.
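
As a rough sketch of the compromise in option (3) (the names and shapes here are illustrative, not the actual code in this commit):

```ts
// Hypothetical sketch of option (3): decide whether a 0-valued timing
// field should still be turned into a span event.
function shouldKeepZeroTiming(entry: PerformanceResourceTiming): boolean {
  // startTime === 0 should only happen for the initial navigation entry,
  // where a legitimate 0-valued phase timing is at least plausible.
  return entry.startTime === 0;
}

function maybeAddTimingEvent(
  addEvent: (name: string, epochMillis: number) => void,
  entry: PerformanceResourceTiming,
  name: string,
  value: number
): void {
  if (value === 0 && !shouldKeepZeroTiming(entry)) {
    // With startTime > 0, a 0 here means "not applicable / not available";
    // drop it rather than emit a misleading epoch-converted event.
    return;
  }
  addEvent(name, performance.timeOrigin + value);
}
```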

Part of the issue here is that it's unclear how these utilities are being used, since they are public APIs. In core, these functions are only called by the fetch/xhr instrumentation with resource timing entries, where legitimate 0-values seem extremely unlikely.

In my opinion, the rarity of legitimate 0-value timings in the real world (recall that it doesn't just mean something happened very fast, but that something happened very fast at the same instant the page loaded) should yield to the conflicting interest here (avoiding nonsensical events that are difficult to process downstream), especially when the only observed cases appear to come from automated testing.

Personally, I would be in favor of the stronger position (2), but I attempted to strike a balance here to keep things useful in the other cases.

In #2457, @johnbley argued:

I think that for this piece of tech, our telemetry should report whatever the browser says with minimal processing, and let downstream/backend/easier-to-modify components deal with normalization/truncation/processing if desired (e.g., throwing away absurd times like "that page took 6 years or -5 hours to load", or declaring that "no redirects happened so the redirect timing is not 0 but non-existent"). Does that not work in your world for some reason?

I could see this perspective if the "minimal processing" meant that we send the 0-values to the backends as 0s. It's still unfortunate, but at least it would be easy to write a query/filter to process them away.

However, given that we actively normalize the 0-values into the absolute epoch timestamp of performance.timeOrigin, I think this becomes actively hostile to backend/downstream processing, as it turns a relatively clear signal (value=0) into something that would require stateful machinery to reverse engineer (inferring the timeOrigin from a trace) or imperfect heuristics (ignoring things that seem to take way too long).

It also costs byte size/bandwidth to transmit in the vast majority of cases, where it is actively unhelpful.

Instead, I'd argue it does far less harm to drop the 0-values. When a 0-value is expected but not sent, it may skew the distribution of some aggregate metrics slightly (e.g. making "average time for SSL handshake" appear larger), but probably nothing would break. If we are expecting backends to do work to normalize data anyway, then IMO it would also be far easier for them to notice the missing items and insert them back than the other way around.

Fixes #3199
Fixes #5314

See also #1769, #2457, #3848, #3879, #4478, #4486

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration

  • Unit tests

Checklist:

  • Followed the style guidelines of this project
  • Unit tests have been added / updated
  • Documentation has been updated

@chancancode requested a review from a team as a code owner on January 11, 2025 00:23
@@ -124,7 +124,7 @@ function createResource(resource = {}): PerformanceResourceTiming {
 function createMainResource(resource = {}): PerformanceResourceTiming {
   const mainResource: any = createResource(resource);
   Object.keys(mainResource).forEach((key: string) => {
-    if (typeof mainResource[key] === 'number') {
+    if (typeof mainResource[key] === 'number' && mainResource[key] !== 0) {
       mainResource[key] = mainResource[key] + 30;
@chancancode (Contributor, Author) commented Jan 11, 2025:

I believe this is intended to "tweak" the timing but ends up affecting the body size, hence the changes to some of the expectations below (0/30/60). It's not great, but #5130 and #5282 are actively working on unwinding these.

@@ -166,7 +175,7 @@ function createResource(resource = {}): PerformanceResourceTiming {
 function createMainResource(resource = {}): PerformanceResourceTiming {
   const mainResource: any = createResource(resource);
   Object.keys(mainResource).forEach((key: string) => {
-    if (typeof mainResource[key] === 'number') {
+    if (typeof mainResource[key] === 'number' && mainResource[key] !== 0) {
       mainResource[key] = mainResource[key] + 30;
@chancancode (Contributor, Author) commented:

I believe this is intended to "tweak" the timing but ends up affecting the body size, hence the changes to some of the expectations below. It's not great, but #5130 and #5282 are actively working on unwinding these.


codecov bot commented Jan 11, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 94.56%. Comparing base (6864c2f) to head (95cdabc).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #5332   +/-   ##
=======================================
  Coverage   94.56%   94.56%           
=======================================
  Files         322      322           
  Lines        8132     8132           
  Branches     1715     1715           
=======================================
  Hits         7690     7690           
  Misses        442      442           
