Entirety of speculation rules should be filterable, not just the paths to exclude #1156
I had thought that only one
It would be great to have the ability to set a limit when using wildcards. For example, eagerly prerender /product/* up to 8. In this case, given the limit of 10, we would still have 2 "on hover" slots available.
For context, my original use case was validation in unit tests, but runtime validation is interesting too. To make sure that's not too slow, I'd transform the JSON schema into a PHP file and then use that for the validation. Then there's no need to do any JSON parsing.
This sounds like a feature request for https://github.com/WICG/nav-speculation/issues.
These are already separate limits, so you can prerender up to 10 eagerly, and 2 moderately.
Check https://altsetup.com. I have: and
The 10 eager prerenders happen, but the 2 moderate ones (on hover) don't. Why?
I believe because Chrome has a limit of 10 prerenders.
According to @tunetheweb, the limits are 10 eagerly + 2 moderately, and I can confirm it works like that on another site I'm testing it on. The issue is on https://altsetup.com and some other sites with a similar stack. I can provide admin access to check it.
I see Chrome is reporting this error with the prerenders:
I see there is only one URL which is "Not triggered", and that is the Contact page: However, that

Did you try excluding the Contact page? For example:

```html
<script type="speculationrules">
{
  "prerender": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": { "href_matches": "/contact/" } },
        { "not": { "selector_matches": ".do-not-prerender" } },
        { "not": { "selector_matches": "[rel=nofollow]" } }
      ]
    },
    "eagerness": "eager"
  }]
}
</script>
```
Actually all URLs having
I just replaced the script with yours, and now |
This is basically an extension of this bug: https://issues.chromium.org/issues/335277576. Once a speculation fails, it is not retried.

But to be honest, a high eagerness is best used with a targeted list of URLs where you KNOW there is a high probability the link will be used. The lower eagerness settings (moderate or conservative) are for URLs where this cannot be predicted. The scattergun approach of trying to eagerly fetch ALL URLs, with the low-eagerness backup for the same URLs, is not recommended. Speculating 12 URLs per page load is a LOT of potential wastage, which will affect both your hosting costs (every visitor is now effectively 12 visitors) and, more importantly, your visitors' bandwidth and CPU resources. This is precisely why we have this 10 URL limit. And to be honest I think that's a little high and it may be lowered if we see a lot of this scattergun approach.

I would suggest a more targeted approach for eagerly fetched URLs, or just using fewer eagerly fetched URLs.
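That advice could be sketched as a ruleset like the following. The URLs here are illustrative placeholders, not taken from the thread: a small list-based rule eagerly prerenders a few pages known to have a high probability of being visited, while a document-based rule covers everything else at moderate eagerness.

```json
{
  "prerender": [
    {
      "source": "list",
      "urls": ["/pricing/", "/signup/"],
      "eagerness": "eager"
    },
    {
      "source": "document",
      "where": { "href_matches": "/*" },
      "eagerness": "moderate"
    }
  ]
}
```

This stays well under the eager limit while still covering the long tail of links on hover.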
What's the status here? Are we planning on adding a filter, at least in WordPress/wordpress-develop#7860? It would really be a best practice to give developers full control.
cc @felixarntz |
@swissspidy @westonruter I agree a filter for the entirety of rules would be useful. However, the shape of that data is quite complex due to the deep nesting, and dealing with this in raw array shape is quite error-prone. If we want to expose a filter for the entire thing, I think we should introduce some helpers to make the filtering a bit more intuitive. For example:
Alternatively, we could have two separate filters, one for each mode.
So in simplistic code this could be something like this:

```php
$speculation_rules = array(
	$mode => array(
		new WP_Speculation_Rule( /* ... */ ), // The one and only speculation rule that WP Core would add by default, with all the exclusions from the existing filter.
	),
);

$speculation_rules = apply_filters( 'wp_speculation_rules', $speculation_rules, $mode, $eagerness );
```

Obviously all of the above uses the

WDYT?
Just for illustrative purposes, my alternative idea from above with the 2 filters would look something like this:

```php
$speculation_rules = array_filter(
	array(
		$mode       => apply_filters( "wp_speculation_rules_{$mode}", array(
			new WP_Speculation_Rule( /* ... */ ), // The one and only speculation rule that WP Core would add by default, with all the exclusions from the existing filter.
		), $mode, $eagerness ),
		$other_mode => apply_filters( "wp_speculation_rules_{$other_mode}", array(), $mode, $eagerness ),
	)
);
```
Hmm, that seems a bit overly complex at first glance. While I didn't originally propose this filter, I would assume it could be useful for performance plugins that want complete control over the configuration. Sure, a raw array is error-prone there, but that shouldn't be core's concern. And if we're concerned about error-proneness, then schema validation would be an interesting path to explore.
Overall I'd say that, rather than going down this route, I would prefer not to add such a filter in 6.8. Let's ship the feature as-is and then iterate in 6.9 if there is a need. Otherwise it looks like a solution searching for a problem.
Why not? Shouldn't core's APIs be intuitive to use?
I'm on board with that. In any case, I'm a strong proponent of adding filters based on demand, rather than making everything filterable just because we can without knowing whether it's useful. Every filter is an API that makes changing things more difficult, so I think we should add them when we know they're useful.
I wonder if this could be implemented more easily with a more complex rule and the use of classes that the site owner could use to decorate their links. For example, the current prerender rule is basically this:

```json
{
  "prerender": [
    {
      "source": "document",
      "where": {
        "and": [
          {
            "href_matches": "\/*"
          },
          {
            "not": {
              "href_matches": [
                "\/wp-login.php",
                "\/wp-admin\/*",
                "\/*\\?*(^|&)_wpnonce=*",
                "\/wp-content\/uploads\/*",
                "\/wp-content\/*",
                "\/wp-content\/plugins\/*",
                "\/wp-content\/themes\/dynamik-gen\/*",
                "\/wp-content\/themes\/genesis\/*"
              ]
            }
          },
          {
            "not": {
              "selector_matches": "a[rel~=\"nofollow\"]"
            }
          },
          {
            "not": {
              "selector_matches": ".no-prerender"
            }
          }
        ]
      },
      "eagerness": "moderate"
    }
  ]
}
```

Something like adding an additional document rule:

```json
{
  "prerender": [
    {
      "where": { "selector_matches": ".prerender-eagerly-block a" },
      "eagerness": "eager"
    },
    {
      "source": "document",
      "where": {
        "and": [
          {
            "href_matches": "\/*"
          },
          ... rest of the old rule as per above
```

The

This could potentially be much more powerful and maintainable than static lists of URLs that differ by page, or that will get out of date.
@tunetheweb That's an interesting idea for sure. I definitely like relying on classes to decide how to prefetch or prerender a resource. We already have the

From the perspective of WordPress Core's philosophy of "decisions, not options", one could argue that relying on classes specifically on the block (rather than on the actual

Let's think about how this could work. The current

How about this:
Question: How would the main rule generated by WordPress need to cater for these? This is partially a question of how the spec works, and based on that, how we can avoid conflicts. For example, if your WordPress site is configured to
Basically, a matching rule will cause a speculation. And the API has deduplication logic, so if you eagerly prerender a link, and then hovering it also causes a prerender, it won't prerender again (unless the previous prerender has been cancelled or failed).

Additionally, it makes full use of the HTTP Cache. So if a link has been prerendered before, and then that prerender is cancelled, and then it's re-prerendered, it will use the HTTP Cache, so with the right cache-control

The main things to be aware of are the Chrome limits (10 prerenders for eager, 2 for moderate and conservative), and writing the rule (or rules, since multiple rules per page are supported) to ensure, for example, that no-prerender classes are excluded in every rule (so no rule can match these). To give some examples:
We'd need to test eagerly prerendering and moderately prefetching (especially without caching). In theory it shouldn't prefetch (as it's already got an active prerender speculation), but I haven't tried that to confirm. But, as I said in example 3, I have done the opposite, and that's a common pattern I see.
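The opposite pattern mentioned above (eagerly prefetch everything, then upgrade to a prerender on hover) could be sketched roughly like this; the exclusions from the earlier rules are omitted here for brevity:

```json
{
  "prefetch": [
    {
      "source": "document",
      "where": { "href_matches": "/*" },
      "eagerness": "eager"
    }
  ],
  "prerender": [
    {
      "source": "document",
      "where": { "href_matches": "/*" },
      "eagerness": "moderate"
    }
  ]
}
```

The deduplication and HTTP Cache behaviour described above mean the hover-triggered prerender can reuse the response fetched by the eager prefetch rather than downloading the page again.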
Thanks @tunetheweb for the detailed explanation, the examples help a lot. One of my previous points I'm still not clear about, though: I think we'll need to carefully think about what should happen when multiple rules apply. Consider this example:

How do we interpret that

If the first, then the site would eagerly prefetch the URL and later potentially upgrade it to a prerender. This is like the example you mentioned. But if the latter, then the site would eagerly prefetch the URL, but do nothing else. In order to technically achieve that, the general rule would need to have the

My question here is: we could make both approaches work, but which one makes the most sense from a DX perspective?
I would consider this an opt-in to both eagerly prefetch and then upgrade to prerender on hover. I don't see those as conflicting instructions; as I say, that's a fairly common pattern. The only conflicts I see are if you say to prerender but also not to prerender (or similarly, to prefetch but also not to prefetch). In which case the "not" should take precedence. This is easily achieved with a rule that always includes a
Makes sense. So in other words, going back to my example: if someone wants a link to be prefetched eagerly, but opted out from prerendering, they would need to use both classes
Correct. If the default behaviour is to prerender and they want to do what you want, then to me they are asking for two extra things from the default:
Hence two classes. I don't think we need to create one class for every combination just to avoid the need for two classes, as they'll just get confusing.
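For illustration, the underlying ruleset for that combination might look roughly like this, assuming the `.eager-prefetch` and `.no-prerender` class names used elsewhere in this thread; a link carrying both classes matches the eager prefetch rule while being excluded from the prerender rule:

```json
{
  "prefetch": [
    {
      "source": "document",
      "where": { "selector_matches": ".eager-prefetch, .eager-prefetch a" },
      "eagerness": "eager"
    }
  ],
  "prerender": [
    {
      "source": "document",
      "where": {
        "and": [
          { "href_matches": "/*" },
          { "not": { "selector_matches": ".no-prerender" } }
        ]
      },
      "eagerness": "moderate"
    }
  ]
}
```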
Thanks! Regarding the implementation of this, I think we should implement support for these classes as a follow-up enhancement via a separate WordPress Trac ticket, once the initial feature has been merged. That keeps PRs manageable, and since this works well as a standalone enhancement, it makes sense. It could still be merged in the same WordPress Core release.
SGTM. We could also experiment with it in the Plugin in the meantime if we think that would be useful and/or the core implementation is still some way off. |
@tunetheweb I agree with starting this as an experiment in the plugin, let's do that.

If we want to do that, we will need to be able to modify the overall speculation rules once speculative loading is in WordPress Core, so that brings me back to the original purpose of this issue. For the reasons previously mentioned in #1156 (comment), I don't think a simple filter for the entire complex array is the right approach. Thinking about this further, I think a more explicit API would help reduce that complexity and also guide developers to do the right thing. Instead of a filter, we could fire an action that receives a

This action could be used something like this (taking one of the examples discussed above):

```php
add_action(
	'wp_set_speculation_rules',
	function ( WP_Speculation_Rules $speculation_rules ) {
		$speculation_rules->add_rule(
			// The `$mode` parameter would be either 'prefetch' or 'prerender'.
			'prefetch',
			// The `$rule` parameter would be an array representing a single rule.
			array(
				'source'    => 'document',
				'where'     => array(
					'selector_matches' => '.eager-prefetch, .eager-prefetch a',
				),
				'eagerness' => 'eager',
			)
		);
	}
);
```

The
WDYT about this idea @swissspidy @westonruter?
Feature Description
Currently the Speculative Loading plugin provides a `plsr_speculation_rules_href_exclude_paths` filter to exclude URLs (and URL patterns) from being speculatively loaded. However, two situations came up recently where this was not sufficient.

`rel=nofollow` to the links. However, there is no way to exclude links via such an attribute without manually modifying the default rules, which was done in "Exclude rel=nofollow links from prefetch/prerender" (#1142). (This was to avoid having to add a WooCommerce-specific URLPattern to exclude URLs with an `add-to-cart` query param, since excluding links with `rel=nofollow` may be generally advisable: "Should we exclude rel=nofollow by default?" WICG/nav-speculation#309.)

To account for these two use cases, I suggest that the entire set of speculation rules (speculation ruleset?) be filterable, doing something like this:
Also, @swissspidy suggested in #1144 (comment) that a JSON Schema could be written which could validate whatever is being returned by the filter. If not valid, it could trigger a `_doing_it_wrong()`.