RFC: incremental delivery with deduplication + concurrent execution #1034

Closed: wants to merge 65 commits (changes shown from 1 commit).
fca1c5d
Introduce @defer and @stream.
robrichard Aug 18, 2022
43e9997
fix typos
robrichard Feb 17, 2021
cb5a3f4
clear up that it is legal to support either defer or stream individually
robrichard Feb 17, 2021
0eb4426
Add sumary of arguments to Type System
robrichard Feb 17, 2021
43bfe01
Update Section 3 -- Type System.md
robrichard May 15, 2021
acb5bf0
clarification on defer/stream requirement
robrichard Nov 19, 2021
abea59b
clarify negative values of initialCount
robrichard Nov 20, 2021
139d69f
allow extensions only subsequent payloads
robrichard Nov 25, 2021
de5004b
fix typo
robrichard Nov 26, 2021
9e89f42
Raise a field error if initialCount is less than zero
robrichard Aug 18, 2022
f894ba3
data is not necessarily an object in subsequent payloads
robrichard Dec 6, 2021
08053d7
add Defer And Stream Directives Are Used On Valid Root Field rule
robrichard Dec 6, 2021
e19246b
wait for parent async record to ensure correct order of payloads
robrichard Aug 18, 2022
2ecd0af
Simplify execution, payloads should begin execution immediately
robrichard Dec 20, 2021
337bb87
Clarify error handling
robrichard Dec 20, 2021
2982dec
add isCompletedIterator to AsyncPayloadRecord to track completed iter…
robrichard Dec 30, 2021
32fb73b
fix typo
robrichard Jan 21, 2022
1ff999e
deferDirective and visitedFragments
robrichard Feb 2, 2022
270b409
stream if argument, indexPath -> itemPath
robrichard Feb 7, 2022
75f2258
Clarify stream only applies to outermost list of multi-dimensional ar…
robrichard Feb 7, 2022
d8c28d1
add validation “Defer And Stream Directive Labels Are Unique”
robrichard Mar 7, 2022
eb3a4e3
Clarification on labels
robrichard Mar 8, 2022
f2b50bf
fix wrong quotes
robrichard Mar 23, 2022
92f02f3
remove label/path requirement
robrichard Mar 23, 2022
049bce8
add missing line
robrichard Jun 9, 2022
9a07500
fix ExecuteRequest
robrichard Jun 9, 2022
7c5e1da
fix response
robrichard Jun 9, 2022
19cb9c3
Align deferred fragment field collection with reference implementation
robrichard Aug 3, 2022
c747f61
spec updates to reflect latest discussions
robrichard Aug 18, 2022
6f3c715
Note about mutation execution order
robrichard Aug 18, 2022
7c9ea0a
minor change for uniqueness
robrichard Aug 18, 2022
d84939e
fix typos
robrichard Aug 18, 2022
1ad7e9c
if: Boolean! = true
robrichard Aug 23, 2022
4b6554e
address pr feedback
robrichard Aug 23, 2022
9103fdb
clarify null behavior of if
robrichard Aug 24, 2022
3944d05
Add error boundary behavior
robrichard Sep 8, 2022
90b31ae
defer/stream response => payload
robrichard Sep 8, 2022
f1c0ec2
event stream => response stream
robrichard Sep 8, 2022
3830406
link to path section
robrichard Sep 8, 2022
f950efb
use case no dash
robrichard Sep 8, 2022
ad5b2e2
remove "or null"
robrichard Sep 8, 2022
c1f3f65
add detailed incremental example
robrichard Sep 8, 2022
2e41749
update label validation rule
robrichard Sep 8, 2022
abb14a0
clarify hasNext on incremental example
robrichard Sep 8, 2022
4ea2a34
clarify canceling of subsequent payloads
robrichard Sep 8, 2022
1565491
Add examples for non-null cases
robrichard Sep 8, 2022
a938f44
typo
robrichard Sep 9, 2022
a301f21
improve non-null example
robrichard Sep 9, 2022
38bfbb9
Add FilterSubsequentPayloads algorithm
robrichard Sep 9, 2022
8d07dee
link to note on should
robrichard Oct 12, 2022
008818d
update on hasNext
robrichard Nov 1, 2022
4adb05a
small fixes (#3)
yaacovCR Nov 7, 2022
ddd0fd7
remove ResolveFIeldGenerator (#4)
yaacovCR Nov 16, 2022
b54c9fe
fix typos (#6)
yaacovCR Nov 18, 2022
02d4676
Add error handling for stream iterators (#5)
yaacovCR Nov 21, 2022
3e74250
Raise a field error if defer/stream encountered during subscription e…
robrichard Nov 22, 2022
cb3ab46
Add validation rule for defer/stream on subscriptions
robrichard Nov 22, 2022
24cf072
clarify label is not required
robrichard Nov 23, 2022
d74430c
fix parentRecord argument in ExecuteStreamField (#7)
yaacovCR Nov 29, 2022
79da712
fix typo
robrichard Dec 5, 2022
8df13da
replace server with service
robrichard Jan 15, 2023
94363c9
CollectFields does not require path or asyncRecord (#11)
yaacovCR Jan 16, 2023
fe9d871
incremental delivery with deduplication, concurrent delivery, and ear…
yaacovCR May 21, 2023
831b10c
scattered fixes, streamlining
yaacovCR Sep 26, 2023
813ea2c
use identifiers instead of records when possible
yaacovCR Sep 28, 2023
scattered fixes, streamlining
yaacovCR committed Sep 28, 2023
commit 831b10ce57e05b732c159df4652089b0367e28ea
150 changes: 68 additions & 82 deletions spec/Section 6 -- Execution.md
@@ -426,11 +426,6 @@ A Stream Record is a structure that always contains:
- {id}: an implementation-specific value uniquely identifying this record,
created if not provided.

Within the Execution context, records of this type also include:

- {streamFieldGroup}: A Field Group record for completing stream items.
- {iterator}: The underlying iterator.

Within the Incremental Publisher context, records of this type also include:

- {label}: value derived from the corresponding `@stream` directive.
@@ -453,12 +448,6 @@ A Deferred Grouped Field Set Record is a structure that always contains:
- {id}: an implementation-specific value uniquely identifying this record,
created if not provided.

Within the Execution context, records of this type also include:

- {groupedFieldSet}: a Grouped Field Set to execute.
- {shouldInitiateDefer}: a boolean value indicating whether implementation
specific deferral of execution should be initiated.

Within the Incremental Publisher context, records of this type also include:

- {path}: a list of field names and indices from root to the location of this
@@ -479,7 +468,7 @@ a unit of Incremental Data as well as an Incremental Result.

#### New Deferred Fragment Event

Required event details for New Deferred Fragment Events include:
Required event details include:

- {id}: string value identifying this Deferred Fragment.
- {label}: value derived from the corresponding `@defer` directive.
@@ -490,7 +479,7 @@ Required event details for New Deferred Fragment Events include:

#### New Deferred Grouped Field Set Event

Required event details for New Deferred Grouped Field Set Event include:
Required event details include:

- {id}: string value identifying this Deferred Grouped Field Set.
- {path}: a list of field names and indices from root to the location of this
Expand All @@ -500,7 +489,7 @@ Required event details for New Deferred Grouped Field Set Event include:

#### Completed Deferred Grouped Field Set Event

Required event details for Completed Deferred Grouped Field Set Events include:
Required event details include:

- {id}: string value identifying this Deferred Grouped Field Set.
- {data}: ordered map representing the completed data for this Deferred Grouped
@@ -509,15 +498,15 @@ Required event details for Completed Deferred Grouped Field Set Events include:

#### Errored Deferred Grouped Field Set Event

Required event details for Errored Deferred Grouped Field Set Event include:
Required event details include:

- {id}: string value identifying this Deferred Grouped Field Set.
- {errors}: The _field error_ causing the entire Deferred Grouped Field Set to
error.

#### New Stream Event

Required event details for New Stream Events include:
Required event details include:

- {id}: string value identifying this Stream.
- {label}: value derived from the corresponding `@stream` directive.
@@ -528,7 +517,7 @@ Required event details for New Stream Events include:

#### New Stream Items Event

Required event details for New Stream Items Event include:
Required event details include:

- {id}: string value identifying these Stream Items.
- {streamId}: string value identifying the Stream
@@ -537,34 +526,34 @@ Required event details for New Stream Items Event include:

#### Completed Stream Items Event

Required event details for Completed Stream Items Event include:
Required event details include:

- {id}: string value identifying these Stream Items.
- {items}: the list of items.
- {errors}: the list of _field error_ for these items.

#### Completed Empty Stream Items Event

Required event details for Completed Empty Stream Items Events include:
Required event details include:

- {id}: string value identifying these Stream Items.

#### Errored Stream Items Event

Required event details for Errored Stream Items Events include:
Required event details include:

- {id}: string value identifying these Stream Items.
- {errors}: the _field error_ causing these items to error.

#### Completed Initial Result Event

Required event details for Completed Initial Result Events include:
Required event details include:

- {id}: string value identifying this Initial Result.

#### Field Error Event

Required event details for Field Error Events include:
Required event details include:

- {id}: string value identifying the Initial Result, Deferred Grouped Field Set
or Stream Items from which the _field error_ originates.
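The event shapes above can be sketched as a TypeScript discriminated union. This is an illustration only: the spec prescribes the required detail fields, not concrete types, and the `type` discriminant and all type names here are assumptions. Only events whose complete field lists appear in this section are included.

```typescript
// Illustrative shapes only; `FieldError` stands in for a GraphQL field error.
interface FieldError {
  message: string;
}

// Events whose complete required details are listed above.
type IncrementalEvent =
  | { type: 'ERRORED_DEFERRED_GROUPED_FIELD_SET'; id: string; errors: FieldError[] }
  | { type: 'COMPLETED_STREAM_ITEMS'; id: string; items: unknown[]; errors: FieldError[] }
  | { type: 'COMPLETED_EMPTY_STREAM_ITEMS'; id: string }
  | { type: 'ERRORED_STREAM_ITEMS'; id: string; errors: FieldError[] }
  | { type: 'COMPLETED_INITIAL_RESULT'; id: string };

// Every event carries an {id} linking it back to the record it describes.
function describeEvent(event: IncrementalEvent): string {
  switch (event.type) {
    case 'COMPLETED_STREAM_ITEMS':
      return `${event.type}(${event.id}): ${event.items.length} item(s)`;
    case 'ERRORED_DEFERRED_GROUPED_FIELD_SET':
    case 'ERRORED_STREAM_ITEMS':
      return `${event.type}(${event.id}): ${event.errors.length} error(s)`;
    default:
      return `${event.type}(${event.id})`;
  }
}
```

A discriminated union makes the publisher's event loop exhaustive: adding a new event kind forces every `switch` over `IncrementalEvent` to handle it.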
@@ -706,7 +695,7 @@ CreateIncrementalPublisher():
{earlyReturn}.
- Set the entry for {id} on {streamMap} to {stream}.

- Define the sub-procedure {HandleNewStreamItemsEvent(id, streamIds, parentIds)}
- Define the sub-procedure {HandleNewStreamItemsEvent(id, streamId, parentIds)}
as follows:

- Let {stream} be the entry in {streamMap} for {streamId}.
@@ -1031,31 +1020,31 @@ serial):

- Let {fieldsByTarget}, {targetsByKey}, and {newDeferUsages} be the result of
calling {AnalyzeSelectionSet(objectType, selectionSet, variableValues)}.
- Let {groupedFieldSet}, {newGroupedFieldSetDetails} be the result of calling
- Let {groupedFieldSet} and {groupDetailsMap} be the result of calling
{BuildGroupedFieldSets(fieldsByTarget, targetsByKey)}.
- Let {incrementalPublisher} be the result of {CreateIncrementalPublisher()}.
- Let {newDeferMap} be the result of {AddNewDeferFragments(incrementalPublisher,
newDeferUsages, incrementalDataRecord)}.
- Let {newDeferredGroupedFieldSets} be the result of
{AddNewDeferredGroupedFieldSets(incrementalPublisher,
newGroupedFieldSetDetails, newDeferMap)}.
- Let {initialResultRecord} be a new Initial Result Record.
- Let {newDeferMap} be the result of {AddNewDeferFragments(incrementalPublisher,
newDeferUsages, initialResultRecord)}.
- Let {detailsList} be the result of
{AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap,
newDeferMap)}.
- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet,
queryType, initialValue, variableValues, incrementalPublisher,
initialResultRecord)} _serially_ if {serial} is {true}, _normally_ (allowing
parallelization) otherwise.
- In parallel, call {ExecuteDeferredGroupedFieldSets(queryType, initialValues,
variableValues, incrementalPublisher, newDeferredGroupedFieldSets,
newDeferMap)}.
variableValues, incrementalPublisher, detailsList, newDeferMap)}.
- Let {id} be the corresponding entry on {initialResultRecord}.
- Let {errors} be the list of all _field error_ raised while executing the
{groupedFieldSet}.
- Initialize {initialResult} to an empty unordered map.
- If {errors} is not empty:
- Set the corresponding entry on {initialResult} to {errors}.
- Set {data} on {initialResult} to {data}.
- Let {eventQueue} and {pending} be the corresponding entries on
{incrementalPublisher}.
- Enqueue a Completed Initial Result Event on {eventQueue} with {id}.
- Let {pending} be the corresponding entry on {incrementalPublisher}.
- Wait for {pending} to be set.
- If {pending} is empty, return {initialResult}.
- Let {hasNext} be {true}.
@@ -1077,8 +1066,7 @@ incrementalDataRecord, deferMap, path):
- Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
- For each {deferUsage} in {newDeferUsages}:
- Let {label} be the corresponding entry on {deferUsage}.
- Let {parent} be (GetParentTarget(deferUsage, deferMap,
incrementalDataRecord)).
- Let {parent} be (GetParent(deferUsage, deferMap, incrementalDataRecord)).
- Let {parentId} be the entry for {id} on {parent}.
- Let {deferredFragment} be a new Deferred Fragment Record.
- Let {id} be the corresponding entry on {deferredFragment}.
@@ -1087,37 +1075,39 @@ incrementalDataRecord, deferMap, path):
- Set the entry for {deferUsage} in {newDeferMap} to {deferredFragment}.
- Return {newDeferMap}.

GetParentTarget(deferUsage, deferMap, incrementalDataRecord):
GetParent(deferUsage, deferMap, incrementalDataRecord):

- Let {ancestors} be the corresponding entry on {deferUsage}.
- Let {parentDeferUsage} be the first member of {ancestors}.
- If {parentDeferUsage} is not defined, return {incrementalDataRecord}.
- Let {parentRecord} be the corresponding entry in {deferMap} for
{parentDeferUsage}.
- Return {parentRecord}.
- Let {parent} be the corresponding entry in {deferMap} for {parentDeferUsage}.
- Return {parent}.
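The renamed GetParent() algorithm can be sketched in TypeScript. All shapes below are assumptions for illustration (the spec does not prescribe concrete types): the parent of a new deferred fragment is either the fragment record for its closest enclosing `@defer`, or the incremental data record currently being executed.

```typescript
// Hypothetical record shapes, loosely following the spec's structures.
interface DeferUsage {
  label?: string;
  ancestors: DeferUsage[]; // first member is the direct parent, if any
}
interface DeferredFragmentRecord { id: string }
interface IncrementalDataRecord { id: string }
type DeferMap = Map<DeferUsage, DeferredFragmentRecord>;

// Minimal sketch of GetParent(): fall back to the current incremental data
// record when the defer usage has no enclosing @defer.
function getParent(
  deferUsage: DeferUsage,
  deferMap: DeferMap,
  incrementalDataRecord: IncrementalDataRecord,
): DeferredFragmentRecord | IncrementalDataRecord {
  const parentDeferUsage = deferUsage.ancestors[0];
  if (parentDeferUsage === undefined) {
    return incrementalDataRecord;
  }
  // The enclosing defer usage was already mapped to its fragment record
  // when AddNewDeferFragments processed it.
  return deferMap.get(parentDeferUsage)!;
}
```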

AddNewDeferredGroupedFieldSets(incrementalPublisher, newGroupedFieldSetDetails,
deferMap, path):
AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap, deferMap,
path):

- Initialize {newDeferredGroupedFieldSets} to an empty list.
- For each {deferUsageSet} and {groupedFieldSetDetails} in
{newGroupedFieldSetDetails}:
- Initialize {detailsList} to an empty list.
- For each {deferUsageSet} and {details} in {groupDetailsMap}:
- Let {groupedFieldSet} and {shouldInitiateDefer} be the corresponding entries
on {groupedFieldSetDetails}.
- Let {deferredGroupedFieldSet} be a new Deferred Grouped Field Set Record
created from {groupedFieldSet} and {shouldInitiateDefer}.
on {details}.
- Let {deferredGroupedFieldSetRecord} be a new Deferred Grouped Field Set
Record.
- Initialize {recordDetails} to an empty unordered map.
- Set the corresponding entries on {recordDetails} to
{deferredGroupedFieldSetRecord}, {groupedFieldSet}, and
{shouldInitiateDefer}.
- Let {deferredFragments} be the result of
{GetDeferredFragments(deferUsageSet, newDeferMap)}.
- Let {fragmentIds} be an empty list.
- For each {deferredFragment} in {deferredFragments}:
- Let {id} be the corresponding entry on {deferredFragment}.
- Append {id} to {fragmentIds}.
- Let {id} be the corresponding entry on {deferredGroupedFieldSet}.
- Let {id} be the corresponding entry on {deferredGroupedFieldSetRecord}.
- Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
- Enqueue a New Deferred Grouped Field Set Event on {eventQueue} with details
{id}, {path}, and {fragmentIds}.
- Append {deferredGroupedFieldSet} to {newDeferredGroupedFieldSets}.
- Return {newDeferredGroupedFieldSets}.
- Append {recordDetails} to {detailsList}.
- Return {detailsList}.
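The revised AddNewDeferredGroupedFieldSets() can be sketched as follows, under assumed shapes that are not the reference implementation's actual API: for each planned deferred grouped field set, create a record, announce it to the publisher together with the ids of its enclosing deferred fragments, and collect the details needed for later execution.

```typescript
// Assumed, simplified shapes for illustration.
type GroupedFieldSet = Map<string, unknown>;
interface DeferUsage { label?: string }
type DeferUsageSet = Set<DeferUsage>;
interface DeferredFragmentRecord { id: string }
interface GroupDetails {
  groupedFieldSet: GroupedFieldSet;
  shouldInitiateDefer: boolean;
}
interface RecordDetails {
  record: { id: string };
  groupedFieldSet: GroupedFieldSet;
  shouldInitiateDefer: boolean;
}
interface NewDeferredGroupedFieldSetEvent {
  type: 'NEW_DEFERRED_GROUPED_FIELD_SET';
  id: string;
  path: ReadonlyArray<string | number>;
  fragmentIds: string[];
}

let nextRecordId = 0; // stand-in for implementation-specific unique ids

function addNewDeferredGroupedFieldSets(
  eventQueue: NewDeferredGroupedFieldSetEvent[],
  groupDetailsMap: Map<DeferUsageSet, GroupDetails>,
  deferMap: Map<DeferUsage, DeferredFragmentRecord>,
  path: ReadonlyArray<string | number> = [],
): RecordDetails[] {
  const detailsList: RecordDetails[] = [];
  for (const [deferUsageSet, details] of groupDetailsMap) {
    const record = { id: `dgfs-${nextRecordId++}` };
    // Corresponds to GetDeferredFragments(): the fragments enclosing this
    // grouped field set, looked up via the defer map.
    const fragmentIds: string[] = [];
    for (const deferUsage of deferUsageSet) {
      fragmentIds.push(deferMap.get(deferUsage)!.id);
    }
    eventQueue.push({
      type: 'NEW_DEFERRED_GROUPED_FIELD_SET',
      id: record.id,
      path,
      fragmentIds,
    });
    detailsList.push({
      record,
      groupedFieldSet: details.groupedFieldSet,
      shouldInitiateDefer: details.shouldInitiateDefer,
    });
  }
  return detailsList;
}
```

Note how the returned `detailsList` carries the execution-side state ({groupedFieldSet}, {shouldInitiateDefer}) while only ids cross into the publisher's event queue, matching the diff's shift toward identifiers over records.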

GetDeferredFragments(deferUsageSet, deferMap):

@@ -1366,8 +1356,8 @@ A Field Details record is a structure containing:
- {target}: the Defer Usage record corresponding to the deferred fragment
enclosing this field or the value {undefined} if the field was not deferred.

Additional deferred grouped field sets are returned as Grouped Field Set Details
records which are structures containing:
Information about additional deferred grouped field sets are returned as a list
of Grouped Field Set Details structures containing:

- {groupedFieldSet}: the grouped field set itself.
- {shouldInitiateDefer}: a boolean value indicating whether the executor should
@@ -1444,7 +1434,7 @@ parentTarget, newTarget):
- Append {target} to {newDeferUsages}.
- Otherwise:
- Let {target} be {newTarget}.
- Let {fragmentTargetByKeys}, {fragmentFieldsByTarget},
- Let {fragmentTargetsByKey}, {fragmentFieldsByTarget},
{fragmentNewDeferUsages} be the result of calling
{AnalyzeSelectionSet(objectType, fragmentSelectionSet, variableValues,
visitedFragments, parentTarget, target)}.
@@ -1485,7 +1475,7 @@ parentTarget, newTarget):
- Append {target} to {newDeferUsages}.
- Otherwise:
- Let {target} be {newTarget}.
- Let {fragmentTargetByKeys}, {fragmentFieldsByTarget},
- Let {fragmentTargetsByKey}, {fragmentFieldsByTarget},
{fragmentNewDeferUsages} be the result of calling
{AnalyzeSelectionSet(objectType, fragmentSelectionSet, variableValues,
visitedFragments, parentTarget, target)}.
@@ -1550,7 +1540,7 @@ BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets)
- Let {fieldDetails} be a new Field Details record created from {node}
and {target}.
- Append {fieldDetails} to the {fields} entry on {fieldGroup}.
- Initialize {newGroupedFieldSetDetails} to an empty unordered map.
- Initialize {groupDetailsMap} to an empty unordered map.
- For each {maskingTargets} and {targetSetDetails} in {targetSetDetailsMap}:
- Initialize {newGroupedFieldSet} to an empty ordered map.
- Let {keys} be the corresponding entry on {targetSetDetails}.
@@ -1573,11 +1563,11 @@ BuildGroupedFieldSets(fieldsByTarget, targetsByKey, parentTargets)
and {target}.
- Append {fieldDetails} to the {fields} entry on {fieldGroup}.
- Let {shouldInitiateDefer} be the corresponding entry on {targetSetDetails}.
- Let {details} be a new Grouped Field Set Details record created from
{newGroupedFieldSet} and {shouldInitiateDefer}.
- Set the entry for {maskingTargets} in {newGroupedFieldSetDetails} to
{details}.
- Return {groupedFieldSet} and {newGroupedFieldSetDetails}.
- Initialize {details} to an empty unordered map.
- Set the entry for {groupedFieldSet} in {details} to {newGroupedFieldSet}.
- Set the corresponding entry in {details} to {shouldInitiateDefer}.
- Set the entry for {maskingTargets} in {groupDetailsMap} to {details}.
- Return {groupedFieldSet} and {groupDetailsMap}.

Note: entries are always added to Grouped Field Set records in the order in
which they appear for the first target. Field order for deferred grouped field
Expand Down Expand Up @@ -1641,20 +1631,19 @@ IsSameSet(setA, setB):
## Executing Deferred Grouped Field Sets

ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
incrementalPublisher, path, newDeferredGroupedFieldSets, deferMap)
incrementalPublisher, path, detailsList, deferMap)

- If {path} is not provided, initialize it to an empty list.
- For each {deferredGroupedFieldSet} of {newDeferredGroupedFieldSets}:
- Let {shouldInitiateDefer} and {groupedFieldSet} be the corresponding entries
on {deferredGroupedFieldSet}.
- For each {recordDetails} in {detailsList}, allowing for parallelization:
- Let {deferredGroupedFieldSetRecord}, {groupedFieldSet}, and
{shouldInitiateDefer} be the corresponding entries on {recordDetails}.
- If {shouldInitiateDefer} is {true}:
- Initiate implementation specific deferral of further execution, resuming
execution as defined.
- Let {data} be the result of calling {ExecuteGroupedFieldSet(groupedFieldSet,
objectType, objectValue, variableValues, path, deferMap,
incrementalPublisher, deferredGroupedFieldSet)}.
- Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
- Let {id} be the corresponding entry on {deferredGroupedFieldSet}.
- Let {id} be the corresponding entry on {deferredGroupedFieldSetRecord}.
- If _field error_ were raised, causing a {null} to be propagated to {data}:
- Let {incrementalErrors} be the list of such field errors.
- Enqueue an Errored Deferred Grouped Field Set event with details {id} and
@@ -1787,16 +1776,16 @@ yielded items satisfies `initialCount` specified on the `@stream` directive.

#### Execute Stream Field

ExecuteStreamField(stream, index, innerType, variableValues,
incrementalPublisher, parentIncrementalDataRecord):
ExecuteStreamField(stream, path, iterator, fieldGroup, index, innerType,
variableValues, incrementalPublisher, parentIncrementalDataRecord):

- Let {path} and {iterator} be the corresponding entries on {stream}.
- Let {incrementalErrors} be an empty list of _field error_ for the entire
stream, including all _field error_ bubbling up to {path}.
- Let {currentIndex} be {index}.
- Let {currentParent} be {parentIncrementalDataRecord}.
- Let {errored} be {false}.
- Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
- Let {streamFieldGroup} be the result of {GetStreamFieldGroup(fieldGroup)}.
- Repeat the following steps:
- Let {itemPath} be {path} with {currentIndex} appended.
- Let {streamItems} be a new Stream Items Record.
@@ -1828,7 +1817,6 @@ incrementalPublisher, parentIncrementalDataRecord):
{id}.
- Return.
- Let {item} be the item retrieved from {iterator}.
- Let {streamFieldGroup} be the corresponding entry on {stream}.
- Let {newDeferMap} be an empty unordered map.
- Let {data} be the result of calling {CompleteValue(innerType,
streamedFieldGroup, item, variableValues, itemPath, newDeferMap,
@@ -1880,20 +1868,19 @@ incrementalPublisher, incrementalDataRecord):
- Let {iterator} be an iterator for {result}.
- Let {items} be an empty list.
- Let {index} be zero.
- Let {eventQueue} be the corresponding entry on {incrementalPublisher}.
- While {result} is not closed:
- If {streamDirective} is defined and {index} is greater than or equal to
{initialCount}:
- Let {streamFieldGroup} be the result of
{GetStreamFieldGroup(fieldGroup)}.
- Let {stream} be a new Stream Record created from {streamFieldGroup}, and
{iterator}.
- Let {stream} be a new Stream Record.
- Let {id} be the corresponding entry on {stream}.
- Let {earlyReturn} be the implementation-specific value denoting how to
notify {iterator} that no additional items will be requested.
- Enqueue a New Stream Event on {eventQueue} with details {id}, {label},
{path}, and {earlyReturn}.
- Call {ExecuteStreamField(stream, index, innerType, variableValues,
incrementalPublisher, incrementalDataRecord)}.
- Call {ExecuteStreamField(stream, path, iterator, fieldGroup, index,
innerType, variableValues, incrementalPublisher,
incrementalDataRecord)}.
- Return {items}.
- Otherwise:
- Wait for the next item from {result} via the {iterator}.
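The branch above can be sketched as a split over a single live iterator, simplified here to a synchronous iterator (the spec's version awaits items from an async one): items are collected until `initialCount` is reached, then the same iterator is handed off at the current index, mirroring the call to {ExecuteStreamField}. Names are illustrative.

```typescript
// Collect up to `initialCount` items, then hand the live iterator off to
// streamed execution; the stream resumes at exactly the current index.
function splitStream<T>(
  iterator: Iterator<T>,
  initialCount: number,
  startStream: (iterator: Iterator<T>, index: number) => void,
): T[] {
  const items: T[] = [];
  let index = 0;
  while (true) {
    if (index >= initialCount) {
      // Corresponds to creating the Stream Record and calling
      // ExecuteStreamField: remaining items are delivered incrementally.
      startStream(iterator, index);
      return items;
    }
    const next = iterator.next();
    if (next.done) {
      return items; // iterator exhausted before initialCount; no stream
    }
    items.push(next.value);
    index += 1;
  }
}
```

With `initialCount` of 0 the hand-off happens immediately and the initial list is empty, which is consistent with the directive's allowance of any non-negative `initialCount`.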
@@ -1913,21 +1900,20 @@ incrementalPublisher, incrementalDataRecord):
- Let {objectType} be {fieldType}.
- Otherwise if {fieldType} is an Interface or Union type.
- Let {objectType} be {ResolveAbstractType(fieldType, result)}.
- Let {groupedFieldSet}, {newGroupedFieldSetDetails}, and {deferUsages} be the
result of {ProcessSubSelectionSets(objectType, fieldGroup, variableValues)}.
- Let {groupedFieldSet}, {groupDetailsMap}, and {deferUsages} be the result of
{ProcessSubSelectionSets(objectType, fieldGroup, variableValues)}.
- Let {newDeferMap} be the result of
{AddNewDeferFragments(incrementalPublisher, newDeferUsages,
incrementalDataRecord, deferMap, path)}.
- Let {newDeferredGroupedFieldSets} be the result of
{AddNewDeferredGroupedFieldSets(incrementalPublisher,
newGroupedFieldSetDetails, newDeferMap, path)}.
- Let {detailsList} be the result of
{AddNewDeferredGroupedFieldSets(incrementalPublisher, groupDetailsMap,
newDeferMap, path)}.
- Let {completed} be the result of evaluating
{ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues,
path, newDeferMap, incrementalPublisher, incrementalDataRecord)} _normally_
(allowing for parallelization).
- In parallel, call {ExecuteDeferredGroupedFieldSets(objectType, result,
variableValues, incrementalPublisher, newDeferredGroupedFieldSets,
newDeferredFragments, newDeferMap)}.
variableValues, incrementalPublisher, detailsList, newDeferMap)}.
- Return {completed}.

ProcessSubSelectionSets(objectType, fieldGroup, variableValues):