Instrumentation produces false measurements for NodeJS app using NAPI #96
#2724 added CodSpeed benchmarks for NodeJS `oxc-parser`. Unfortunately, it turns out CodSpeed's results are wildly inaccurate. It's unclear why, but I have raised an issue with CodSpeed (CodSpeedHQ/action#96). In the meantime, it seems best to remove the benchmarks, as they're not useful at present.
Hey @overlookmotel,

The slowdown of the whole code execution in the action is to be expected, since the whole command is run under instrumentation.

Regarding the problem of the relative performance gains/regressions, this might be linked with file system accesses and their handling in our measurement. I will take a look at the execution profiles to try and sort that out. I will keep you posted here.
Thanks loads for coming back. I did expect there'd be some overhead, but an 8x slowdown was more than I expected. I guess it doesn't matter much, as long as it affects everything equally, so comparisons can still be made. The relative performance measure is more problematic. The benchmark where I saw weird results does not involve any file system access. It's basically comparing 2 methods of transferring data from Rust to JS.
So the main differences between the two are:
Add NodeJS parser to benchmarks. A previous attempt (#2724) did not work due to CodSpeed producing very inaccurate results (CodSpeedHQ/action#96). This version runs the actual benchmarks without CodSpeed's instrumentation. Then another faux-benchmark runs within CodSpeed's instrumented action and just performs meaningless calculations in a loop for as long as is required to take the same amount of time as the original uninstrumented benchmarks took. It's unfortunate that we therefore don't get flame graphs on CodSpeed, but this seems to be the best we can do for now.
Not sure how CodSpeed is implemented internally. I'm replaying benchmarks using the real benchmark data (in ms):

```ts
import nodeFs from 'node:fs'
import nodePath from 'node:path'
import nodeAssert from 'node:assert'
// `PROJECT_ROOT`, `suitesForCI` and `bench` come from the project's benchmark setup.

async function sleep(ms: number) {
  await new Promise((resolve) => globalThis.setTimeout(resolve, ms))
}

function main() {
  const realBenchData = JSON.parse(
    nodeFs.readFileSync(
      nodePath.join(PROJECT_ROOT, 'dist/ci-bench-data.json'),
      'utf8',
    ),
  )
  console.log('realBenchData:')
  console.table(realBenchData)
  for (const suite of suitesForCI) {
    const realData = realBenchData[suite.title]
    const realDataSourceMap = realBenchData[`${suite.title}-sourcemap`]
    nodeAssert(realData != null)
    nodeAssert(realDataSourceMap != null)
    bench(suite.title, async () => {
      await sleep(realData.mean)
    })
    bench(`${suite.title}-sourcemap`, async () => {
      await sleep(realDataSourceMap.mean)
    })
  }
}

main()
```

Not sure if it helps. This is a reproduction branch.
@hyf0 I've replied on rolldown/rolldown#706 (which I think is what prompted your comment above).
I'm trying to benchmark a NodeJS app which calls into a NodeJS native module (built from Rust via napi-rs). CodSpeed's instrumentation appears to slow down the app very considerably, making it hard to accurately measure performance.
To get a fair comparison, I'm running the same benchmark on Github Actions with and without CodSpeed.
Without CodSpeed:
With CodSpeed:
Github Action results: https://github.com/oxc-project/oxc/actions/runs/8308255059
Benchmark task: .github/workflows/benchmark.yml
Benchmark code: napi/parser/parse.bench.mjs
NB: The 2nd set of results are not from within the `withCodspeed()`-wrapped `Bench` object, because CodSpeed's wrapper disables output of the results table. It appears that CodSpeed's instrumentation slows down everything in the process, not just the code which runs within the `withCodspeed()` wrapper.

These benchmarks use tinybench. I have also tried Vitest and benchmark.js, but they produce similar results.
Worse, CodSpeed's instrumentation appears to be preventing accurate assessment of the relative speed-up/down of changes. When run locally (Macbook Pro M1) or in CI without CodSpeed, this benchmark shows "napi raw" to be about 3x faster than "napi". But under CodSpeed, the results show only a marginal improvement, or in some cases show what's in reality a 3x speed-up as a slight slow-down.
I suspect the overhead CodSpeed introduces (and the variance within it) is masking the actual performance gain.
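To illustrate numerically with made-up figures how constant overhead alone can mask a ratio: if instrumentation adds roughly the same fixed cost to every measurement, a genuine 3x difference collapses toward 1x.

```javascript
// Hypothetical uninstrumented times: "napi raw" is genuinely 3x faster than "napi".
const napiMs = 3.0
const napiRawMs = 1.0
console.log((napiMs / napiRawMs).toFixed(1)) // real ratio: 3.0

// Add a hypothetical constant instrumentation overhead of 20ms to each measurement.
const overheadMs = 20
const measuredRatio = (napiMs + overheadMs) / (napiRawMs + overheadMs)
console.log(measuredRatio.toFixed(2)) // measured ratio: 1.10 — the 3x gain is masked
```

If the overhead also varies between runs, the measured ratio can even dip below 1, which would show a real speed-up as a slight slow-down.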
https://github.com/oxc-project/oxc/actions/runs/8308560738
I don't know if this problem is related to the use of native addons within the NodeJS code under test, or a more general problem.
I would be more than happy to help in any way I can to diagnose/fix this. CodSpeed is absolutely brilliant for Rust code, and would very much like to expand our usage to NodeJS code too. Please just let me know what (if anything) I can do.