feat(web): test skipped prediction round handling #12169
Conversation
```diff
@@ -118,6 +118,67 @@ describe("PredictionContext", () => {
     assert.equal(suggestions.find((obj) => obj.transform.deleteLeft != 0).displayAs, 'apps');
   });

+  it('ignores outdated predictions', async function () {
```
Compare and contrast with the test defined immediately before this one.
```js
// Mocking: corresponds to the second set of mocked predictions - round 2 of
// 'apple', 'apply', 'apples'.
const skippedPromise = langProcessor.predict(baseTranscription, kbdProcessor.layerId);
```
In retrospect, I didn't end up using this value. Not sure what the best way to test with it would be, so maybe I should just...
```diff
-const skippedPromise = langProcessor.predict(baseTranscription, kbdProcessor.layerId);
+langProcessor.predict(baseTranscription, kbdProcessor.layerId);
```
and disregard its returned value completely... which is what happens in production, anyway.
This spot does have a notable contrast with the (previous) test that it's based upon, so I'll leave it in for the moment in case the distinction helps with the code review.
```js
// This does re-use the apply-revert oriented mocking.
// Should skip the (second) "apple", "apply", "apps" round, as it became outdated
// by its following request before its response could be received.
assert.deepEqual(suggestions.map((obj) => obj.displayAs), ['“apple”', 'applied']);
```
Why do we get a different set of suggestions than in the previous test? Ok, the difference is the additional predict call, but why/how would that get triggered in production?
> but why/how would that get triggered in production?
- Suppose I quickly double-tap a key on the OSK. Prediction may be on the order of tens of milliseconds, but it still takes time.
- Or that I am using multiple fingers to type and type two keys near-simultaneously - especially if using a hardware keyboard (which we do support on Android)
- Or that I am using an older Android device where we previously had rapid-typing bugs due to a lower-end CPU on the device - the predictive-text processing would likely also be slow in such a scenario, with both competing for CPU cycles.
- Of particular note - the JS task queue and microtask queue tend to be FIFO in my experience, and the implications of this are significant here. (Just as they were for resolving the rapid-typing bugs from 17.0-beta.)
  - All pending keys to be processed would each trigger their own `predict` call when processed, each one likely cancelling the previous one.
    - Even if the worker does get processing time in parallel, the worker's response would only fulfill the prediction `Promise` once the suggestions are actually ready - a time much later than the triggering key.
    - Thus, the response's `Promise` fulfillment would be queued after fulfillment of the `Promise`s for the pending keys - which could even include keys queued since the original `predict()` call!
    - This, in turn, would cause `PredictionContext` to skip the predictions - which is a good thing here. We don't want to rapidly rotate through numerous invalid suggestions while we're already lagging - updating the DOM does take up CPU time, after all. A smoother suggestion-bar flow would also likely be preferable to users (over one rapidly changing while not meaningfully interactive).
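The "skip outdated rounds" behavior described above can be sketched with a round counter: each new `predict` call bumps the counter, and a response whose captured round number no longer matches is discarded. This is a minimal illustrative sketch with hypothetical names (`PredictionTracker`, `computeSuggestions`), not the actual `PredictionContext` implementation:

```javascript
// Minimal sketch of skipping outdated prediction rounds.
// Hypothetical names; not the real PredictionContext code.
class PredictionTracker {
  constructor() {
    this.currentRound = 0;
  }

  // Each keystroke starts a new round; any earlier in-flight round becomes stale.
  async predict(computeSuggestions) {
    const round = ++this.currentRound;
    const suggestions = await computeSuggestions();
    // By the time this fulfills, a newer round may have started - skip if so.
    if (round !== this.currentRound) {
      return null; // stale: discard, leaving the suggestion bar untouched
    }
    return suggestions;
  }
}

// Two near-simultaneous keystrokes: the second call starts before the first
// round's response is handled, so the first round is skipped.
async function demo() {
  const tracker = new PredictionTracker();
  const slow = tracker.predict(async () => ['apple', 'apply', 'apps']);
  const fast = tracker.predict(async () => ['“apple”', 'applied']);
  return Promise.all([slow, fast]);
}

demo().then(([first, second]) => {
  console.log(first);  // null - outdated round was skipped
  console.log(second); // ['“apple”', 'applied']
});
```

Because the synchronous part of the second `predict` call runs before any `await` continuation of the first, the first round's check always sees a newer `currentRound` and bails out - mirroring the FIFO microtask ordering described above.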
That list isn't necessarily exhaustive, but highlights the most common cases I'd expect to trigger this.
I got a bit curious about the typical delay between keystrokes. It's naturally subject to variability, but one paper states that, for PIN input on touchscreen devices, the average delay between keystrokes is roughly 270 ms... though with a nearly equal standard deviation of 267 ms. Note the paper's "Figure 3", which graphs their data for this.
Based on the graph data, it's certainly not rare for the gap between keystrokes to be too short for a prediction-search to use all of its allotted time. When that happens, we'd trigger a new `predict` request before the first completes, triggering such a "skip".
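To make the overlap concrete, here's a small illustrative check. All numbers besides the cited ~270 ms mean / ~267 ms standard deviation are assumptions for illustration; the 50 ms search budget is hypothetical, not a Keyman constant:

```javascript
// Illustrative only: counts how often an inter-keystroke delay falls below a
// hypothetical prediction-search budget - the case where a new predict() call
// fires before the previous round's response arrives, forcing a skip.
const predictionBudgetMs = 50; // assumed search allotment, not a Keyman constant
// Sample delays (ms); real data clusters near the ~270 ms mean with ~267 ms SD.
const keystrokeDelaysMs = [30, 120, 15, 400, 60, 250];
const skippedRounds = keystrokeDelaysMs.filter((d) => d < predictionBudgetMs);
console.log(`${skippedRounds.length} of ${keystrokeDelaysMs.length} keystrokes would outdate the in-flight round`);
```

With a standard deviation nearly equal to the mean, a meaningful fraction of real delays land in that sub-budget range, so skipped rounds are an expected steady-state occurrence rather than an edge case.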
Changes in this pull request will be available for download in Keyman version 18.0.89-alpha
Fixes: #11624
@keymanapp-test-bot skip