[RFC] Implementation of June 2023 incremental delivery format with @defer
#1074
base: benjie/incremental-common
Conversation
Just checking — does this algorithm handle the test case in graphql/graphql-js#3997 correctly? Can inclusion of a field in a nested deferred fragment — where that field is present in a parent result and so will never be delivered with the child — muck with how the delivery groups are created?
It shouldn't cause an issue because it's based on field collection, so both of the [...] (Note this may not actually be the case in the current algorithm because it may have bugs, but this is the intent.)

```graphql
query HeroNameQuery {
  ... @defer {
    hero {
      id
    }
  }
  ... @defer {
    hero {
      name
      shouldBeWithNameDespiteAdditionalDefer: name
      ... @defer {
        shouldBeWithNameDespiteAdditionalDefer: name
      }
    }
  }
}
```

The first group does nothing, but notes that [...]. Next it creates two new groups for the defers, and a "shared" group. The shared group executes the [...]. When grouping the subfields on the second of these groups it's noted that [...]
In my spec and TS implementation, we handle this by having each DeferUsage save its parent DeferUsage if it exists, and then performing some filtering downstream. I have the sense that your current algorithm does not correctly handle this case — but I am hoping that it does, because if it does, it manages to do so without that tracking, which I would want to emulate if possible.
This RFC introduces an alternative solution to incremental delivery, implementing the June 2023 response format.

This solution aims to minimize changes to the existing execution algorithm; when reviewing, you should compare against `benjie/incremental-common` (#1039) to make the diff easier to understand. I've raised this PR against that branch to make this clearer.
The RFC aims to avoid mutations and side effects across algorithms, so as to fit
with the existing patterns in the GraphQL spec. It also aims to leverage the
features we already have in the spec to minimize the introduction of new
concepts.
WORK IN PROGRESS: there are likely mistakes all over this currently, and a lot will need to be done to maintain consistency of the prose and algorithms.
This RFC works by adjusting the execution algorithms in a few small ways:
Previously, GraphQL could be thought of as having just a single delivery group (called the "root delivery group" in this RFC) - everything was delivered at once. With "incremental delivery", we're delivering the data in multiple phases, or groups. A "delivery group" keeps track of which fields belong to which `@defer`, such that we can complete one delivery group before moving on to its children.
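As a non-normative sketch, a delivery group can be pictured as a small record linking each `@defer` to its parent; the names below are illustrative assumptions, not terms from the RFC's algorithms:

```typescript
// Illustrative delivery-group tree; names are assumptions, not spec terms.
interface DeliveryGroup {
  label?: string;          // the @defer label, if any
  parent?: DeliveryGroup;  // undefined for the root delivery group
}

// The root delivery group: everything delivered in the initial response.
const rootGroup: DeliveryGroup = {};

// Each @defer encountered during field collection creates a child group.
function childGroup(parent: DeliveryGroup, label?: string): DeliveryGroup {
  return { label, parent };
}

const deferredGroup = childGroup(rootGroup, "slowFields");
```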
`CollectFields()` now returns a map of "field digests" rather than just fields. `CollectFields()` used to generate a map between response key and field selection (`Record<string, FieldNode>`), but now it creates a map between response key and a "field digest", an object which contains both the field selection and the delivery group to which it belongs (`Record<string, { field: FieldNode, deliveryGroup: DeliveryGroup }>`). As such, `CollectFields()` is now passed the current path and delivery group as arguments.
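A minimal TypeScript sketch of the new return shape (the node and group types below are stand-ins, not the spec's actual definitions):

```typescript
// Stand-ins for the real AST node and delivery-group types.
type FieldNode = { name: string };
type DeliveryGroup = { label?: string };

// Old shape: Record<string, FieldNode>.
// New shape: each response key maps to a "field digest".
type FieldDigest = { field: FieldNode; deliveryGroup: DeliveryGroup };
type CollectedFields = Record<string, FieldDigest>;

const rootGroup: DeliveryGroup = {};
const collected: CollectedFields = {
  hero: { field: { name: "hero" }, deliveryGroup: rootGroup },
};
```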
`ExecuteRootSelectionSet()` may return an "incremental event stream". If there's no `@defer` then `ExecuteRootSelectionSet()` will return `data`/`errors` as before. However, if there are active `@defer`s then it will instead return an event stream which will consist of multiple incremental delivery payloads.
`ExecuteGroupedFieldSet()` runs against a set of "current delivery groups". If multiple sibling delivery groups overlap, the algorithm will first run the fields common to all the overlapping delivery groups, and only when these are complete will it execute the remaining fields in each delivery group (in parallel). This might happen over multiple layers. This is tracked via a set of "current delivery groups", and only fields which exist in all of these current delivery groups will be executed by `ExecuteGroupedFieldSet()`.

`ExecuteGroupedFieldSet()` returns the currently executed data, as before, plus details of incremental fields yet to be delivered.
When there exist fields not executed in `ExecuteGroupedFieldSet()` (because they aren't in every one of the "current delivery groups"), we store "incremental details" of the current grouped field set (by its path) for later execution. The incremental details consist of:

- `objectType` - the type of the concrete object the field exists on (i.e. the object type passed to `ExecuteGroupedFieldSet()`)
- `objectValue` - the value of this object (as would be passed as the first argument to the resolver for the field)
- `groupedFieldSet` - similar to the result of `CollectFields()`, but only containing the response keys that have not yet been executed
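As a rough TypeScript shape for the stored record (stand-in types; keying the storage by path string is an assumption for illustration):

```typescript
type ObjectType = { name: string };             // stand-in for a concrete type
type GroupedFieldSet = Record<string, unknown>; // unexecuted response keys

interface IncrementalDetails {
  objectType: ObjectType; // type the fields exist on
  objectValue: unknown;   // would-be first resolver argument
  groupedFieldSet: GroupedFieldSet;
}

// Stored by the path of the grouped field set, for later execution.
const pendingByPath = new Map<string, IncrementalDetails>();
pendingByPath.set("hero", {
  objectType: { name: "Hero" },
  objectValue: { id: "1" },
  groupedFieldSet: { name: {} },
});
```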
`CompleteValue()` continues execution in the "current delivery groups". We must pass the path and current delivery groups so that we can execute the current delivery groups recursively. `CompleteValue()` returns the field data, as before, plus details of incremental subfields yet to be delivered. As with `ExecuteGroupedFieldSet()`, `CompleteValue()` must pass down details of any incremental subfields that need to be executed later.
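A sketch of the extended signature; this shows leaf-only behaviour for illustration, whereas the real algorithm would recurse into object and list values:

```typescript
type DeliveryGroup = { label?: string };
type Path = (string | number)[];
type IncrementalDetails = { path: Path };

interface CompletedValue {
  data: unknown;
  // Subfields belonging to other delivery groups, to execute later.
  incremental: IncrementalDetails[];
}

function completeValue(
  value: unknown,
  path: Path,
  currentGroups: Set<DeliveryGroup>,
): CompletedValue {
  // Leaf sketch: a scalar has no deferred subfields. The real algorithm
  // would recurse here, threading `path` and `currentGroups` down.
  return { data: value, incremental: [] };
}
```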
At a `@defer` boundary, a new DeliveryGroup is created, and field collection then happens within this new delivery group. This can happen multiple times at the same level, for example when a selection set contains multiple sibling `@defer` fragments.
If no `@defer` exists then no new delivery groups are created, and thus the request executes as it would have done previously. However, if there is at least one active `@defer` then the client will be sent the initial response along with a list of `pending` delivery groups. We will then commence executing the delivery groups, delivering them as they are ready.
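A hedged example of what the client might see, loosely following the June 2023 format (field values, ids, and labels here are illustrative, not normative):

```typescript
// Initial response: data plus a list of pending delivery groups.
const initialResponse = {
  data: { hero: { id: "1" } },
  pending: [{ id: "0", path: ["hero"], label: "slowFields" }],
  hasNext: true,
};

// A subsequent payload delivering and completing that group.
const subsequentPayload = {
  incremental: [{ id: "0", data: { name: "Luke" } }],
  completed: [{ id: "0" }],
  hasNext: false,
};
```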
Note: when an error occurs in a non-null field, the incremental details gathered
in that selection set will be blown up alongside the sibling fields - we use the
existing error handling mechanisms for this.
This PR is nowhere near complete. I've spent 2 days on this latest iteration
(coming up with the new stream and partition approach as the major breakthrough)
but I've had to stop and I'm not sure if I've left gaps. Further, I need to
integrate Rob's hard work in #742 into it.
To make life a bit easier on myself, I've written some TypeScript-style declarations of the various algorithms used in execute, according to this RFC. These may not be correct and are definitely non-normative, but might be useful to ease understanding.