
[pkg/stanza/fileconsumer] Emit logs in batches #35455

Open
andrzej-stencel opened this issue Sep 27, 2024 · 5 comments · May be fixed by #36663

andrzej-stencel commented Sep 27, 2024

Component(s)

pkg/stanza/fileconsumer

Is your feature request related to a problem? Please describe.

This issue is created as a result of discussion in #31074.

The Stanza adapter's LogEmitter has a 100-log buffer that is a source of data loss during non-graceful collector shutdown. One solution is to remove this buffer, but this would cause a severe performance impact (see #35454). In the case of the File Log receiver, this performance impact could be alleviated by implementing batching earlier in the Stanza pipeline, in the File consumer.

Describe the solution you'd like

From #31074 (comment):

Consider whether it's feasible to modify the File consumer to send logs in batches. It seems natural (to me) for the File consumer to emit all logs read from one file in one pass as a single batch. The File consumer being able to emit batches of entries as opposed to single entries could alleviate the performance impact of removing batching from LogEmitter (possibly even improving the situation, as the "natural" batches could be larger than the current artificial 100-log limit). This would require modifying all Stanza operators to be able to operate on batches of Stanza entries as opposed to a single entry at a time. It seems a simple change to add a ProcessBatch method beside the existing Process method, with a for loop calling Process inside the ProcessBatch function.

and further in #31074 (comment):

In my opinion this could be a really nice improvement regardless of any other changes. The data coming out of the receiver will be more organized and intuitive. In terms of implementation, I think we can add a []string buffer to the Reader struct, and then emit the entire buffer, either when it reaches a max size (maybe configurable at a later time) or whenever we hit EOF. We can maintain a tentative offset which is saved only when this buffer is actually emitted.

Describe alternatives you've considered

  • Remove buffering in LogEmitter and live with the performance impact it brings.
  • Instead of removing the buffering, have it persisted so that the data is not lost during non-graceful shutdown.
  • Change the logic of marking logs as sent, so that logs are only marked as sent when they are actually successfully sent out to the next consumer in the collector pipeline.

Additional context

See #31074 (comment) and the following comments.

@andrzej-stencel andrzej-stencel added enhancement New feature or request needs triage New item requiring triage labels Sep 27, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.


Removing needs triage label as this was discussed with the code owner.

@andrzej-stencel andrzej-stencel self-assigned this Oct 9, 2024
andrzej-stencel added a commit to andrzej-stencel/opentelemetry-collector-contrib that referenced this issue Oct 18, 2024
This refactors the `Reader::ReadToEnd` method
by separating reading the file's header from reading the file's contents.
This results in very similar code in `readHeader` and `readContents` methods,
which was previously deduplicated at the cost of slightly higher complexity.
The bug could be fixed without separating header reading from contents reading,
but I hope this separation will make it easier to implement content batching
in the Reader (open-telemetry#35455).
Content batching was my original motivation for these code changes.
I only discovered the problem with record counting when reading the code.

The pull request #35870 is a first step: it refactors the code of the Reader to make the introduction of batching easier.

djaglowski pushed a commit that referenced this issue Oct 28, 2024
#### Description

Fixes #35869 by refactoring the `Reader::ReadToEnd` method.

This refactors the `Reader::ReadToEnd` method by separating reading the
file's header from reading the file's contents.
This results in very similar code in `readHeader` and `readContents`
methods, which was previously deduplicated at the cost of slightly
higher complexity.
The bug could be fixed without separating header reading from contents
reading, but I hope this separation will make it easier to implement
content batching in the Reader
(#35455).
Content batching was my original motivation for these code changes.
I only discovered the problem with record counting when reading the
code.

#### Link to tracking issue

Fixes
#35869

#### Testing

In the first commit I have added tests that document the erroneous
behavior. In the second commit I have fixed the bug and corrected the
tests.

#### Documentation

Added changelog entry.
jpbarto pushed a commit to jpbarto/opentelemetry-collector-contrib that referenced this issue Oct 29, 2024
andrzej-stencel commented Nov 11, 2024

sbylica-splunk pushed a commit to sbylica-splunk/opentelemetry-collector-contrib that referenced this issue Dec 17, 2024