[exporter/prometheusremotewrite] Fix data race in batch series state if called concurrently #36524

Closed

ArthurSens wants to merge 2 commits into open-telemetry:main from ArthurSens:prwexporter-batchSeries-concurrencybug
New file: changelog entry (+26 lines)
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: bug_fix

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: prometheusremotewriteexporter

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: "Fix data race in batch series state."

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [36524]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext: ""

# If your change doesn't affect end users or the exported elements of any package,
# you should instead start your pull request title with [chore] or use the "Skip Changelog" label.

# Optional: The change log or logs in which this entry should be included.
# e.g. '[user]' or '[user, api]'
# Include 'user' if the change is relevant to end users.
# Include 'api' if there is a change to a library API.
# Default: '[user]'
change_logs: [user]
exporter/prometheusremotewriteexporter/exporter_concurrency_test.go (+138 lines)
// Copyright The OpenTelemetry Authors
// SPDX-License-Identifier: Apache-2.0

package prometheusremotewriteexporter

import (
	"context"
	"io"
	"net/http"
	"net/http/httptest"
	"strconv"
	"sync"
	"testing"
	"time"

	"github.com/gogo/protobuf/proto"
	"github.com/golang/snappy"
	"github.com/prometheus/prometheus/prompb"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"go.opentelemetry.io/collector/component/componenttest"
	"go.opentelemetry.io/collector/config/confighttp"
	"go.opentelemetry.io/collector/config/configretry"
	"go.opentelemetry.io/collector/config/configtelemetry"
	"go.opentelemetry.io/collector/exporter/exportertest"
	"go.opentelemetry.io/collector/pdata/pmetric"

	"github.com/open-telemetry/opentelemetry-collector-contrib/internal/coreinternal/testdata"
)

// Test that everything works when more than one goroutine calls PushMetrics.
// Today we only use 1 worker per exporter, but the intention of this test is to future-proof in case that changes.
func Test_PushMetricsConcurrent(t *testing.T) {
	n := 1000
	ms := make([]pmetric.Metrics, n)
	testIDKey := "test_id"
	for i := 0; i < n; i++ {
		m := testdata.GenerateMetricsOneMetric()
		dps := m.ResourceMetrics().At(0).ScopeMetrics().At(0).Metrics().At(0).Sum().DataPoints()
		for j := 0; j < dps.Len(); j++ {
			dp := dps.At(j)
			dp.Attributes().PutInt(testIDKey, int64(i))
		}
		ms[i] = m
	}
	received := make(map[int]prompb.TimeSeries)
	var mu sync.Mutex

	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			t.Fatal(err)
		}
		assert.NotNil(t, body)
		// Receive the HTTP request, then unzip, unmarshal, and extract the TimeSeries.
		assert.Equal(t, "0.1.0", r.Header.Get("X-Prometheus-Remote-Write-Version"))
		assert.Equal(t, "snappy", r.Header.Get("Content-Encoding"))
		var unzipped []byte

		dest, err := snappy.Decode(unzipped, body)
		assert.NoError(t, err)

		wr := &prompb.WriteRequest{}
		ok := proto.Unmarshal(dest, wr)
		assert.NoError(t, ok)
		assert.Len(t, wr.Timeseries, 2)
		ts := wr.Timeseries[0]
		foundLabel := false
		for _, label := range ts.Labels {
			if label.Name == testIDKey {
				id, err := strconv.Atoi(label.Value)
				assert.NoError(t, err)
				mu.Lock()
				_, ok := received[id]
				assert.False(t, ok) // fail if we already saw it
				received[id] = ts
				mu.Unlock()
				foundLabel = true
				break
			}
		}
		assert.True(t, foundLabel)
		w.WriteHeader(http.StatusOK)
	}))

	defer server.Close()

	// Adjusted retry settings for faster testing
	retrySettings := configretry.BackOffConfig{
		Enabled:         true,
		InitialInterval: 100 * time.Millisecond, // Shorter initial interval
		MaxInterval:     1 * time.Second,        // Shorter max interval
		MaxElapsedTime:  2 * time.Second,        // Shorter max elapsed time
	}
	clientConfig := confighttp.NewDefaultClientConfig()
	clientConfig.Endpoint = server.URL
	clientConfig.ReadBufferSize = 0
	clientConfig.WriteBufferSize = 512 * 1024
	cfg := &Config{
		Namespace:         "",
		ClientConfig:      clientConfig,
		MaxBatchSizeBytes: 3000000,
		RemoteWriteQueue:  RemoteWriteQueue{NumConsumers: 1},
		TargetInfo: &TargetInfo{
			Enabled: true,
		},
		CreatedMetric: &CreatedMetric{
			Enabled: false,
		},
		BackOffConfig: retrySettings,
	}

	assert.NotNil(t, cfg)
	set := exportertest.NewNopSettings()
	set.MetricsLevel = configtelemetry.LevelBasic

	prwe, nErr := newPRWExporter(cfg, set)

	require.NoError(t, nErr)
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	require.NoError(t, prwe.Start(ctx, componenttest.NewNopHost()))
	defer func() {
		require.NoError(t, prwe.Shutdown(ctx))
	}()

	var wg sync.WaitGroup
	wg.Add(n)
	for _, m := range ms {
		go func() {
			err := prwe.PushMetrics(ctx, m)
			assert.NoError(t, err)
			wg.Done()
		}()
	}
	wg.Wait()
	assert.Len(t, received, n)
}
Does this actually fix the problem? If the single *batchTimeSeriesState value is used by multiple goroutines calling prwe.PushMetrics() concurrently, moving to atomic integer access will certainly avoid the data races that would be detected by the runtime, but it wouldn't necessarily make the changes to that state valid. What happens if there are multiple batches processed concurrently that have significantly different sizes?
Good point; they would still share the same state, and their results would conflict. I think I need to go back to the drawing board and think a bit more about how to solve this.
I'm rereading the code, and my understanding is that concurrent requests with very distinct batch sizes would constantly fight over the size of the subsequent request.
Can we even do something useful with the batchStateSize if we allow multiple workers? It sounds like this optimization only works for a single-worker scenario 🤔
That's how I understand it, as well. I also think it likely that each of the three sizes tracked by this state would become decorrelated, though I'm not sure that's any more problematic.
I'm not sure this optimization is safe with multiple workers. Would it make more sense to use a sync.Pool of backing stores that can eventually grow to the needed size and get periodically reaped, to avoid one-off large batches causing leaks? Similar to what is done in #35184?
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Using a sync.Pool sounds worth exploring! We could also remove the state altogether and see how bad the benchmarks look.
I'm trying things out and running benchmarks; I'll open new PRs once I have something to show :)