
Compactor set new retention period doesn't work #4336

Open
petkovp opened this issue Nov 17, 2024 · 4 comments

Comments

@petkovp

petkovp commented Nov 17, 2024

I installed Grafana Tempo with Helm and then set the compactor's retention period to 360h (15 days). All settings were applied correctly (ConfigMap, Deployment): the tempo.conf file contains the new 360h value, and the pod loaded the retention period.

Traces are stored in an S3 bucket, but the compactor deletes traces older than 48 hours.

Could you please help with this issue? I am about to adopt this product and need a retention period longer than 48 hours.

compactor:
  config:
    compaction:
      block_retention: 360h
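
For reference, a sketch of how such a change is applied through Helm and verified in the rendered ConfigMap (the grafana/tempo-distributed chart, the tempo release name, and the tempo-config ConfigMap name are assumptions; substitute your own):

# apply the values file containing the new block_retention
helm upgrade tempo grafana/tempo-distributed -f values.yaml

# confirm the new value made it into the rendered ConfigMap
kubectl get configmap tempo-config -o yaml | grep block_retention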

@joe-elliott
Member

Traces are stored in an S3 bucket, but the compactor deletes traces older than 48 hours.

I would check the following:

  • confirm it is the compactor deleting the block by checking for a log message that shows the block being deleted:

[screenshot: example compactor log line for a deleted block]

  • confirm the retention value is set as expected by curling the /status/config endpoint on any compactors that are deleting blocks (see the sketch after this list)

  • confirm that you have no overrides for the tenant whose blocks are being deleted; /status/overrides also needs to be curled on the compactors

  • are you using S3, or an S3 "compliant" backend?

  • confirm that the system time of the compactor pods that are deleting blocks is correct.
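
A sketch of the two curl checks (the component label and port 3100 are assumptions based on tempo-distributed chart defaults; upstream Tempo's default http_listen_port is 3200):

# find the compactor pods (label is an assumption from the tempo-distributed chart)
kubectl get pods -l app.kubernetes.io/component=compactor

# port-forward to one of the compactors, then curl its status endpoints
kubectl port-forward pod/<compactor-pod> 3100:3100 &
curl -s http://localhost:3100/status/config | grep -A 2 block_retention
curl -s http://localhost:3100/status/overrides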

@petkovp
Author

petkovp commented Nov 18, 2024

Hello again,

I am replying to the questions from the previous comment.

  1. Screenshot of the compactor's log:

[screenshot: compactor log]

  2. Screenshot of the config file's compactor retention section:

[screenshot: config file, compactor retention section]

  3. There aren't any overrides records or settings.

  4. S3 bucket - traces:

[screenshot: S3 bucket with traces]

Everything looks fine.

@joe-elliott
Member

2. Screenshot of the config file's compactor retention section

Not the config file. The output of the /status/config endpoint. We want to confirm the config file is being applied correctly. Please make sure you curl the endpoint of one of the compactors that's incorrectly removing blocks.

3. There aren't any overrides records or settings

Please curl the endpoint on a compactor to confirm.

confirm that the system time of the compactor pods that are deleting blocks is correct.

Did you check this? It would also be interesting to know whether all compactors, or only some of them, are incorrectly removing blocks.
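
A sketch for both checks (the component label is an assumption from the tempo-distributed chart, and the grep pattern is deliberately broad since the exact deletion message can vary by version):

# compare each compactor pod's clock against your local UTC time
# (assumes a shell and the date binary exist in the image)
for p in $(kubectl get pods -l app.kubernetes.io/component=compactor -o name); do
  echo "$p: $(kubectl exec "$p" -- date -u)"
done
date -u

# see which compactor pods have been logging block deletions
kubectl logs -l app.kubernetes.io/component=compactor --prefix --since=24h | grep -i delet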

@petkovp
Author

petkovp commented Nov 25, 2024

I confirm that the endpoint returns the correct value:

compaction:
    v2_in_buffer_bytes: 5242880
    v2_out_buffer_bytes: 20971520
    v2_prefetch_traces_count: 1000
    compaction_window: 1h0m0s
    max_compaction_objects: 6000000
    max_block_bytes: 107374182400
    block_retention: 360h0m0s
    compacted_block_retention: 1h0m0s
    retention_concurrency: 10
    max_time_per_tenant: 5m0s
    compaction_cycle: 30s
override_ring_key: compactor
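
As a further check on my side, I will list the oldest objects in the bucket to see whether anything survives past 48 hours (a sketch; the bucket name is a placeholder, and the single-tenant/ prefix assumes multitenancy is disabled):

# oldest objects first; if nothing predates ~48h, blocks really are being deleted early
aws s3 ls s3://<your-tempo-bucket>/single-tenant/ --recursive | sort | head -20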
