Placeholder issue for feedback on `@defer` & `@stream` proposal
For each release of GraphQL.js, we are now publishing an accompanying release containing an experimental implementation of the `@defer` and `@stream` directives. We are hoping to get community feedback on these releases before the proposal is accepted into the GraphQL specification.
You can use this experimental release of GraphQL.js by adding the following to your project's `package.json` file:

```json
"graphql": "experimental-stream-defer"
```
A similar release of express-graphql will be coming soon.
I am creating this issue as a placeholder for anyone who has tried out these experimental packages to leave feedback. Please let us know if everything is working as expected or if you run into any issues. Any suggestions for improvements are also welcome.
References:
- Demo repo
- Conference talk about defer/stream
- defer/stream RFC
- Spec edits
- graphql-js branch
- express-graphql PR
- graphql-over-http RFC
- fetch-multipart-graphql (client library)
- meros (client library)
Feedback received so far

Call return on iterable when connection closes
- Raised by @danielrearden
- graphql-js
  - Fixed: https://github.com/graphql/graphql-js/pull/2843
  - Released: TBD
- express-graphql
  - Fixed: https://github.com/graphql/express-graphql/pull/583/commits/ce8429e5c15172b394e65d5a27491611b5fb354e
  - Released: express-graphql@0.12.0-experimental-stream-defer.1
content-length in payload should not be required, parsers should use boundary instead
- Raised by @maraisr
- Affects: graphql-over-http spec, fetch-multipart-graphql, express-graphql
- Fix: WIP
Issue Analytics
- Created 3 years ago
- Reactions: 33
- Comments: 40 (30 by maintainers)

@jensneuse thanks for the feedback! Here are our thoughts on these topics.
content-negotiation
In this case, could the clients make use of the `if` argument on `@defer` and `@stream`? Clients that do not support incremental delivery can pass a `false` variable. I think this would solve the issue without relying on out-of-band information being passed.
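For example (schema and variable names hypothetical), a client that cannot handle incremental delivery could send:

```graphql
query Profile($canDefer: Boolean!) {
  user {
    id
    ...ProfileDetails @defer(if: $canDefer, label: "details")
  }
}

fragment ProfileDetails on User {
  bio
}
```

with `{ "canDefer": false }` in the variables, so the server delivers everything in the initial response.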
Response format
hasNext
I think there is benefit in `hasNext` being part of the specified payload instead of deriving it from the underlying transport mechanism. These requests may be used over other transports where inspecting boundaries is not as easy, and we wouldn't want to tie the GraphQL spec too closely to one transport. Since many GraphQL clients are transport agnostic, I think it's convenient that there is a standard way to determine whether a request is still in flight, while keeping the transport code fully encapsulated.
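For instance (payload shapes sketched from the proposal; transport details omitted), an initial response could carry:

```json
{ "data": { "user": { "id": "1" } }, "hasNext": true }
```

followed by a final incremental payload that clears the flag:

```json
{ "path": ["user"], "data": { "name": "Luke" }, "hasNext": false }
```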
JSON Patch
As @AlicanC said, the only JSON Patch operation that's needed is `add`, but for `@defer` it is really an object spread. You would need a JSON Patch `add` operation for each field in the deferred fragment. For example this payload:
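Sketching one (field names hypothetical), an incremental payload for a fragment deferred under a `user` field:

```json
{
  "label": "details",
  "path": ["user"],
  "data": {
    "name": "Luke",
    "homeworld": "Tatooine"
  },
  "hasNext": true
}
```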
would be equivalent to this JSON patch:
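For a deferred fragment delivering hypothetical fields `name` and `homeworld` under `user`, that is one `add` per field:

```json
[
  { "op": "add", "path": "/user/name", "value": "Luke" },
  { "op": "add", "path": "/user/homeworld", "value": "Tatooine" }
]
```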
Plus we need the additional fields for label, extensions, and errors.
While json-patch libraries make it easy to apply patches to the response, many GraphQL clients like Relay and Apollo use a normalized store for data. So incoming patch results would not actually be applied to the previous payload, but rather normalized and inserted into the store.
The GraphQL spec already has a definition for referencing response paths, currently used for errors. We are reusing this definition for incremental delivery, so I’d be hesitant to also introduce JSON Pointers into the spec.
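To illustrate the difference between the two notations (helper name hypothetical), a GraphQL response path, as already used by `errors`, maps to an RFC 6901 JSON Pointer like so:

```javascript
// Convert a GraphQL response path (array of string keys and list
// indices) to an RFC 6901 JSON Pointer string, applying the pointer
// escaping rules ("~" -> "~0", "/" -> "~1").
function pathToPointer(path) {
  return path
    .map(String)
    .map((seg) => "/" + seg.replace(/~/g, "~0").replace(/\//g, "~1"))
    .join("");
}

// pathToPointer(["user", "friends", 0]) → "/user/friends/0"
```

Adopting JSON Pointer would add this second, string-based path syntax alongside the array form the spec already defines.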
Client Performance
Agreed that a flood of payloads can easily overwhelm clients. I think this could be handled client-side with debouncing when a large response is expected.
For example, Relay supports returning an array of payloads from the network layer: it will process all of the payloads into the store and trigger only one re-render of the React components.
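A minimal sketch of that batching idea (function and store names hypothetical, not Relay's actual API): apply a whole batch of incremental payloads to a store, then notify subscribers once.

```javascript
// Apply a batch of incremental payloads ({ path, data }) to a plain
// object store, then call notify() exactly once so the UI re-renders
// a single time regardless of how many payloads arrived.
function commitBatch(store, payloads, notify) {
  for (const { path = [], data } of payloads) {
    let target = store;
    for (const key of path) {
      if (target[key] == null) target[key] = {};
      target = target[key];
    }
    Object.assign(target, data); // "add" is really an object spread
  }
  notify(); // one re-render per batch, not per payload
  return store;
}
```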
Codegen-related question/proposal about giving clients some information about which fragments are or are not included in the initial response.
The RFC makes `@defer` optional to support on the server side. If I understand correctly, given the following query:
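The query in question was along these lines (reconstructed from the class names mentioned below; exact shape hypothetical):

```graphql
query {
  human(id: "1") {
    id
    ...HumanDetails @defer
  }
}

fragment HumanDetails on Human {
  name
}
```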
The server can either respond with:
or
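Roughly (shapes hypothetical), either an initial response that defers the fragment:

```json
{ "data": { "human": { "id": "1" } }, "hasNext": true }
```

or one that inlines it because the server does not support `@defer`:

```json
{ "data": { "human": { "id": "1", "name": "Luke" } } }
```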
Both are valid responses.
In that case, it's not possible to know in advance the shape of the models to use on the client side. The parser will have to try to map the response to either `class HumanWithDetails(id, name)` or `class Human(id)`. This example is relatively simple, but it can get more complicated if multiple type conditions and/or object types come into play. The parser may have to parse each fragment and, for each fragment, potentially recurse into any nested objects; if that fails, rewind and try again for the next fragment. This makes generating such code more difficult and also certainly impacts runtime parsing performance.
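A sketch of the fallback parsing this describes, using the two class shapes from the example (real codegen output would be far more involved):

```javascript
// Try the "details included" shape first; if the deferred field is
// absent, fall back to the smaller shape. With many fragments this
// trial-and-error multiplies, which is the concern raised above.
function parseHuman(json) {
  if (typeof json.name === "string") {
    return { kind: "HumanWithDetails", id: json.id, name: json.name };
  }
  return { kind: "Human", id: json.id }; // fragment was deferred
}
```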
Would it be possible to inform the client about what to expect in the initial response? That could be done with an extra `"deferred"` property in the response JSON; the property would also indicate when a fragment is already included.
If a server doesn't support `@defer`, it will not send `deferred`, and the client knows that all fragments will be included in the initial response. This way it is transparent to existing servers.