
Define observability requirements for stable components #11772

Merged
12 commits merged into open-telemetry:main on Dec 16, 2024

Conversation

@jade-guiton-dd (Contributor) commented on Nov 28, 2024

Description

This PR defines observability requirements for components at the "Stable" stability level. The goal is to ensure that Collector pipelines are properly observable, to help with debugging configuration issues.

Approach

  • The requirements are deliberately kept general, so that they can be adapted to each specific component and do not over-burden component authors.
  • After discussion with @mx-psi, this list of requirements explicitly includes things that may end up being emitted automatically as part of the Pipeline Instrumentation RFC (RFC - Pipeline Component Telemetry #11406), with a note at the beginning explaining that not everything may need to be implemented manually.

Feel free to share if you don't think this is the right approach for these requirements.

Link to tracking issue

Resolves #11581

Important note regarding the Pipeline Instrumentation RFC

I included this paragraph in the part about error count metrics:

The goal is to be able to easily pinpoint the source of data loss in the Collector pipeline, so this should either:

  • only include errors internal to the component, or;
  • allow distinguishing said errors from ones originating in an external service, or propagated from downstream Collector components.

The [Pipeline Instrumentation RFC](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/rfcs/component-universal-telemetry.md) (hereafter abbreviated "PI"), once implemented, should allow monitoring component errors via the `outcome` attribute, which is either `success` or `failure`, depending on whether the `Consumer` API call returned an error.

Note that this does not work for receivers, or allow differentiating between different types of errors; for that reason, I believe additional component-specific error metrics will often still be required, but it would be nice to cover as many cases as possible automatically.

However, at the moment, errors are (usually) propagated upstream through the chain of `Consume` calls, so in case of error the `failure` state will end up applied to all components upstream of the actual source of the error. This means the PI metrics do not fit the first bullet point.

Moreover, I would argue that even post-processing the PI metrics does not reliably allow distinguishing the ultimate source of errors (the second bullet point). One simple idea is to compute `consumed.items{outcome:failure} - produced.items{outcome:failure}` to get the number of errors originating in a component. But this only works if output items map one-to-one to input items: if a processor or connector outputs fewer items than it consumes (because it aggregates them, or translates to a different signal type), this formula will return false positives. If these false positives are mixed with real errors from the component and/or from downstream, the situation becomes impossible to analyze by just looking at the metrics.
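To make the false-positive issue concrete (the numbers here are purely illustrative): consider a batching processor that consumes 100 items, batches them into 10 outgoing items, and is hit by a single downstream failure covering all of them. The PI metrics would report `consumed.items{outcome:failure} = 100` but `produced.items{outcome:failure} = 10`, so the formula above yields 90 supposedly "internal" errors even though the processor itself dropped nothing.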

For these reasons, I believe we should do one of four things:

  1. Change the way we use the `Consumer` API to no longer propagate errors, making the PI metric outcomes more precise.
    We could catch errors in whatever wrapper we already use to emit the PI metrics, log them for posterity, and simply not propagate them.
    Note that some components already more or less do this, such as the `batchprocessor`, but this option may in principle break components which rely on downstream errors (for retry purposes, for example).
  2. Keep propagating errors, but modify or extend the RFC to require distinguishing between internal and propagated errors (maybe add a third `outcome` value, or add another attribute).
    This could be implemented by somehow propagating additional state from one `Consume` call to another, allowing us to establish the first appearance of a given error value in the pipeline (see the sketch after this list).
  3. Loosen this requirement so that the PI metrics suffice in their current state.
  4. Leave everything as-is and make component authors implement their own somewhat redundant error count metrics.
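As a rough illustration of the "propagating additional state" idea in option 2 above, here is a minimal, hypothetical Go sketch of an instrumentation wrapper that marks errors on their first appearance, so that upstream wrappers can tell a locally-originating failure from a propagated one. The names (`consumeFunc`, `propagatedError`, `instrument`) are invented for this sketch and are not Collector APIs; a real implementation would hook into whatever wrapper emits the PI metrics.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"log"
)

// consumeFunc stands in for a component's Consume call. (Hypothetical.)
type consumeFunc func(ctx context.Context, items int) error

// propagatedError marks an error that has already been attributed to a
// downstream component by its instrumentation wrapper.
type propagatedError struct{ err error }

func (e *propagatedError) Error() string { return e.err.Error() }
func (e *propagatedError) Unwrap() error { return e.err }

// instrument wraps a component's consume function and records an outcome:
// "success" if the call returned nil, "failure" if the error first appeared
// in this component (or the service it talks to), and "propagated" if the
// error was already marked further downstream.
func instrument(name string, next consumeFunc) consumeFunc {
	return func(ctx context.Context, items int) error {
		err := next(ctx, items)
		switch {
		case err == nil:
			log.Printf("%s: outcome=success items=%d", name, items)
			return nil
		case errors.As(err, new(*propagatedError)):
			log.Printf("%s: outcome=propagated items=%d err=%v", name, items, err)
			return err // already marked; pass through unchanged
		default:
			log.Printf("%s: outcome=failure items=%d err=%v", name, items, err)
			return &propagatedError{err: err} // mark before propagating upstream
		}
	}
}

func main() {
	// A downstream "exporter" that fails; the "processor" above it only propagates.
	exporter := instrument("exporter", func(ctx context.Context, items int) error {
		return fmt.Errorf("backend rejected %d items", items)
	})
	processor := instrument("processor", func(ctx context.Context, items int) error {
		return exporter(ctx, items)
	})
	_ = processor(context.Background(), 100)
	// Expected logs: exporter reports outcome=failure, processor reports outcome=propagated.
}
```

With this kind of marking, the automatic instrumentation could record a distinct outcome (or an extra attribute) for propagated errors, without requiring each component author to maintain a redundant error metric.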

@jade-guiton-dd added the discussion-needed (Community discussion needed), Skip Changelog (PRs that do not require a CHANGELOG.md entry), and Skip Contrib Tests labels on Nov 28, 2024
codecov bot commented Nov 28, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 91.59%. Comparing base (cef6ce5) to head (1b9b3a9).
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main   #11772   +/-   ##
=======================================
  Coverage   91.59%   91.59%           
=======================================
  Files         449      449           
  Lines       23761    23761           
=======================================
  Hits        21763    21763           
  Misses       1623     1623           
  Partials      375      375           


@jade-guiton-dd marked this pull request as ready for review on November 28, 2024 at 16:39
@jade-guiton-dd requested a review from a team as a code owner on November 28, 2024 at 16:39
@mx-psi requested a review from djaglowski on November 29, 2024 at 10:20
@mx-psi (Member) left a comment:

Thanks! I left a few comments. I think we also want to clarify that this is not an exhaustive list: components may want to add other telemetry if it makes sense.

Review comments were left on docs/component-stability.md (most threads are now marked resolved). One of the threads, attached to the following excerpt, is reproduced below:
For other components, this would typically be the number of items forwarded to the next
component through the `Consumer` API.

3. How much data is dropped because of errors.
A contributor commented:

Per @djaglowski's RFC, I think this would just be an attribute called `outcome` on the output metric? I can see the value in having a separate metric for errors, but I want to be sure we don't create a divergence from the RFC. Separately, should we link Dan's RFC here to also specify the previously agreed-upon naming conventions?

@jade-guiton-dd (Contributor, Author) commented on Dec 11, 2024:

We probably want to require the use of the RFC's conventions for component-identifying attributes, and I will definitely include using an `outcome` attribute on the input metric, instead of a separate error metric, as a recommended implementation for processors.

However, if we want to incentivize contributing external components, I don't think we want to require strict adherence to all of the RFC's choices, so divergences are somewhat inevitable. Relatedly, have you read the "Important note" about the RFC in the PR description? I'm interested in hearing what you think.

A member commented:

> Per @djaglowski's RFC, I think this would just be an attribute called `outcome` on the output metric?

That would be the most natural way to go about this. I feel like this document should not be too prescriptive as to how to accomplish the requirements listed, but making a recommendation like this would make sense to me to ensure consistency across components.
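For illustration, a component-side version of that recommendation could look roughly like the sketch below, using the OpenTelemetry Go metrics API. The metric name and the bare `outcome` attribute key are placeholders for this sketch, not the PI RFC's actual conventions:

```go
package telemetry

import (
	"context"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// ItemCounter records consumed items with a success/failure outcome attribute
// instead of keeping a separate error-count metric. Names are illustrative.
type ItemCounter struct {
	consumed metric.Int64Counter
}

func NewItemCounter(meter metric.Meter) (*ItemCounter, error) {
	c, err := meter.Int64Counter(
		"processor.consumed.items", // placeholder name, not the RFC's
		metric.WithDescription("Number of items consumed, by outcome."),
		metric.WithUnit("{item}"),
	)
	if err != nil {
		return nil, err
	}
	return &ItemCounter{consumed: c}, nil
}

// Record adds n items with outcome "success" or "failure" depending on err.
func (ic *ItemCounter) Record(ctx context.Context, n int64, err error) {
	outcome := "success"
	if err != nil {
		outcome = "failure"
	}
	ic.consumed.Add(ctx, n, metric.WithAttributes(attribute.String("outcome", outcome)))
}
```

Using an attribute on the existing item counter, rather than a separate error metric, keeps the success and failure counts directly comparable, which is the consistency argument made above.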

A contributor commented:

@jade-guiton-dd I hadn't seen that yet, thanks for bringing my attention to it; I think I had only seen the initial description.

I would strongly advise against option 1, as error back-propagation is key if you are running a Collector in gateway mode and want to propagate backpressure to an agent. I think options 2 or 3 are sufficient; option 4 feels not prescriptive enough, IMO.

A contributor commented:

Given that this is the recommendation for a component, it makes sense to have the component author use a custom error metric and decide whether to include or exclude downstream errors in it. (This is what you have written, and I agree with it 😄)

@jade-guiton-dd (Contributor, Author) commented on Dec 12, 2024:

> ...decide whether to include or exclude downstream errors in it. (This is what you have written, and I agree with it 😄)

To be clear, the current requirements allow including downstream errors in a custom error metric, but only if there is a way to distinguish them from internal errors.

A contributor commented:

Yep, this makes sense to me. Thank you! 🙇

@jade-guiton-dd (Contributor, Author) commented:

Just to be sure, @djaglowski: do you support option 2 as detailed in the PR description, i.e. amending the Pipeline Instrumentation RFC to require the implementation to distinguish errors coming directly from the next pipeline component from errors propagated from components further downstream, in order to fit the last paragraph of point 3?

A member commented:

I think it makes sense in principle, as long as there is a clear mechanism for communicating this information, so that instrumentation that is automatically wrapped around components can unambiguously know the correct outcome.

@mx-psi enabled auto-merge on December 16, 2024 at 09:03
@mx-psi added this pull request to the merge queue on Dec 16, 2024
Merged via the queue into open-telemetry:main with commit 8ac40a0 on Dec 16, 2024
37 checks passed
@github-actions bot added this to the next release milestone on Dec 16, 2024
HongChenTW pushed a commit to HongChenTW/opentelemetry-collector that referenced this pull request on Dec 19, 2024