Tracing misses sometimes the click event #1493
It would be hard to spot this just by looking at the code; most probably it would require some debug output here and there to narrow it down.

```js
if (clicks.length !== 1) {
  console.log("exactly one click event is expected", fileName, events);
  throw "exactly one click event is expected";
}
```

We can suspect 2 causes:
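For illustration, the check above could be factored into a small pure helper over the trace's event list. This is a hypothetical sketch: the event shape (`name: "EventDispatch"` with `args.data.type === "click"`) is modeled on the Chrome trace JSON format and may not match the driver's actual representation.

```javascript
// Hypothetical sketch: extract click dispatches from a Chrome-style trace.
// Assumes events shaped like { name, args: { data: { type } }, ts }.
function extractClickEvents(events) {
  return events.filter(
    (e) =>
      e.name === "EventDispatch" &&
      e.args &&
      e.args.data &&
      e.args.data.type === "click"
  );
}

// Validate that the trace for one benchmark iteration contains exactly
// one click; throw (as in the snippet above) when it does not.
function assertExactlyOneClick(fileName, events) {
  const clicks = extractClickEvents(events);
  if (clicks.length !== 1) {
    console.log("exactly one click event is expected", fileName, events);
    throw new Error("exactly one click event is expected");
  }
  return clicks[0];
}
```

Factoring the check out this way would at least let the error message carry the filtered list rather than the raw trace, which might help narrow down whether the click is missing entirely or merely mis-shaped.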
Don't you see in the console at least the framework and scenario where this is triggered, or is it a random event?
Such a trace might then look like that:

This feels so odd, since the browser is in a defined state after initBenchmark. The test driver has verified that the warmup was successful and the DOM elements are present. Then tracing starts before runBenchmark, whose actions are also validated (i.e. it's neither case 1 nor 2). The tracing stops after successful verification that the DOM state was modified according to the benchmark. Everything is in its place, but the trace contains almost nothing (no click event, no layout, no paint, ...).

As a workaround I built a retry for that particular error. @syduki I'll close your PR since I played with that retry on the master branch.
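The retry workaround mentioned above might look roughly like the following. This is a simplified, synchronous sketch under stated assumptions: `runOnce` stands in for one benchmark iteration and is assumed to throw when the captured trace is invalid; the actual driver code is asynchronous and lives in the benchmark runner.

```javascript
// Hypothetical sketch of a retry wrapper for flaky trace captures.
// runOnce is assumed to execute one iteration and throw on an invalid
// trace (e.g. "exactly one click event is expected").
function runWithRetry(runOnce, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return runOnce(attempt);
    } catch (err) {
      lastError = err;
      console.log(`attempt ${attempt} failed, retrying:`, String(err));
    }
  }
  // All attempts failed: surface the last error to the caller.
  throw lastError;
}
```

A retry like this papers over the symptom rather than the cause, but if the empty traces really are a rare CDP-level glitch, re-running the iteration is cheap compared to discarding a whole benchmark run.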
Those 1 & 2 were too optimistic; from your capture it looks like a worse case, more likely tied to CDP itself. Let's see if it improves with the webdriver.
Currently it happens rarely that the error "failed: exactly one click event is expected" is reported. In this case the event trace contains no click event and is thus not valid. It's pretty unclear under what circumstances this happens. Currently I get about 3 such errors for a complete run.
In #1428 we're considering opening a new tab for each benchmark iteration, and this makes the situation worse. With puppeteer I get about 30 errors, thus I switched to the puppeteer test driver, but it also fails too often. Currently, running each iteration in a new tab is not practicable due to this issue.
I'm running out of ideas what to do about this. This is the current code:
So there are already a few (successful) operations and a sleep between the start of the trace and the click event in runBenchmark...
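To make concrete why a missing click invalidates the whole trace: the benchmark duration is presumably derived from the trace itself, measured from the click dispatch to the end of the last paint. The sketch below illustrates that idea under assumptions: the event shape and the microsecond `ts`/`dur` fields follow the Chrome trace file format, and the real computation in the driver may differ.

```javascript
// Hypothetical sketch: derive the benchmark duration from trace events.
// Without the click event there is no start timestamp, so the trace
// cannot be evaluated at all.
function computeDuration(events) {
  const click = events.find(
    (e) =>
      e.name === "EventDispatch" &&
      e.args &&
      e.args.data &&
      e.args.data.type === "click"
  );
  const paints = events.filter((e) => e.name === "Paint");
  if (!click || paints.length === 0) {
    throw new Error("trace is invalid: click or paint events missing");
  }
  const lastPaint = paints[paints.length - 1];
  // ts and dur are in microseconds in Chrome trace files (assumption);
  // convert the click-to-last-paint span to milliseconds.
  return (lastPaint.ts + (lastPaint.dur || 0) - click.ts) / 1000;
}
```

Seen this way, an empty trace isn't just a failed assertion: there is simply no data to compute a result from, which is why retrying the iteration is the only recovery short of fixing the capture itself.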