
Placeholder issue for feedback on `@defer` & `@stream` proposal

See original GitHub issue

For each release of GraphQL.js, we are now publishing an accompanying release containing an experimental implementation of the @defer and @stream directives. We are hoping to get community feedback on these releases before the proposal is accepted into the GraphQL specification.

You can use this experimental release of GraphQL.js by adding the following to your project’s package.json file.

"graphql": "experimental-stream-defer"

A similar release of express-graphql will be coming soon.

I am creating this issue as a placeholder for anyone who has tried out these experimental packages to leave feedback. Please let us know if everything is working as expected or if you run into any issues. Any suggestions for improvements are also welcome.

References:

Feedback received so far:

  • Call return on iterable when connection closes
  • content-length in payload should not be required; parsers should use boundary instead

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 33
  • Comments: 40 (30 by maintainers)

Top GitHub Comments

robrichard commented, Dec 11, 2020 (7 reactions)

@jensneuse thanks for the feedback! Here are our thoughts on these topics:

Content negotiation

GraphQL Operations might be re-used by different clients or stored. Therefore it’s not always the case that the client who initially wrote the Operation will also send it.

In this case, could clients make use of the `if` argument on `@defer` and `@stream`? Clients that do not support incremental delivery can pass `false` for the variable.

query SharedQuery($shouldStream: Boolean) {
  myListField @stream(if: $shouldStream, initialCount: 0) {
    name
  }
}

I think this would solve this issue without relying on out-of-band information being passed?
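For example, a client that does not support incremental delivery could send the operation above with a variables object that disables streaming (a hypothetical request body; the server then delivers the full list in a single response):

{
  "query": "query SharedQuery($shouldStream: Boolean) { myListField @stream(if: $shouldStream, initialCount: 0) { name } }",
  "variables": { "shouldStream": false }
}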

Response format

hasNext

I think there is benefit in hasNext being part of the specified payload instead of deriving it from the underlying transport mechanism. These requests may be used over other transports where inspecting boundaries is not as easy, and we wouldn’t want to tie the GraphQL spec too closely to one transport.

Since many GraphQL clients are transport-agnostic, I think it’s convenient that there is a standard way to determine whether a request is still in flight, while keeping the transport code fully encapsulated.
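As a sketch of what that encapsulation buys, a client can drive any transport through a single loop that only looks at hasNext (the types below are illustrative, not from the spec):

interface IncrementalPayload {
  data?: Record<string, unknown> | null;
  path?: Array<string | number>;
  hasNext: boolean;
}

async function consumePayloads(
  payloads: AsyncIterable<IncrementalPayload>,
  onPayload: (payload: IncrementalPayload) => void,
): Promise<void> {
  for await (const payload of payloads) {
    onPayload(payload);
    // The payload itself says whether the request is still in flight;
    // no inspection of multipart boundaries or other transport details.
    if (!payload.hasNext) return;
  }
}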

JSON Patch

As @AlicanC said, the only JSON Patch operation that’s needed is `add`, but for `@defer` it is really an object spread: you would need one JSON Patch `add` operation for each field in the deferred fragment.

For example, this payload:

{
  "data": {
    "firstName": "Rob",
    "lastName": "Richard",
    "username": "robrichard"
  },
  "path": ["user"],
  "hasNext": false
}

would be equivalent to this JSON patch:

[
  { "op": "add", "path": "/user/firstName", "value": "Rob" },
  { "op": "add", "path": "/user/lastName", "value": "Richard" },
  { "op": "add", "path": "/user/username", "value": "robrichard" }
]

Plus we need the additional fields for `label`, `extensions`, and `errors`.

While JSON Patch libraries make it easy to apply patches to the response, many GraphQL clients, like Relay and Apollo, use a normalized store for data, so incoming patch results would not actually be applied to the previous payload, but rather normalized and inserted into the store.

The GraphQL spec already has a definition for referencing response paths, currently used for errors. We are reusing this definition for incremental delivery, so I’d be hesitant to also introduce JSON Pointers into the spec.
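To illustrate the difference, here is a rough sketch (not from any client library) of applying a defer payload as a single spread at the spec’s response path, rather than one JSON Patch `add` per field:

function mergeDeferPayload(
  result: Record<string, any>,
  path: Array<string | number>,
  data: Record<string, unknown>,
): void {
  // Walk the response path, e.g. ["user"], to find the target object.
  let target: any = result;
  for (const key of path) {
    target = target[key];
  }
  // One object spread covers every field in the deferred fragment.
  Object.assign(target, data);
}

// e.g. mergeDeferPayload(previousResult, ["user"],
//   { firstName: "Rob", lastName: "Richard", username: "robrichard" });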

Client Performance

Agreed that a flood of payloads can easily overwhelm clients. I think this could be handled client-side with debouncing when a large response is expected.

For example, Relay supports returning an array of payloads from the network layer. It will process all of the payloads into the store and trigger only one re-render of the React components.
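A minimal sketch of that debouncing idea (a hypothetical helper, not Relay’s actual API): buffer payloads as they arrive and flush them to the store in one batch, so the UI re-renders once per flush rather than once per payload.

function createPayloadBatcher<T>(
  flush: (batch: T[]) => void,
  waitMs = 16,
): (payload: T) => void {
  let batch: T[] = [];
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (payload) => {
    batch.push(payload);
    // Reset the timer on every payload; flush once the stream goes quiet.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => {
      const toFlush = batch;
      batch = [];
      flush(toFlush);
    }, waitMs);
  };
}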

martinbonnin commented, May 24, 2021 (5 reactions)

A codegen-related question/proposal about giving clients some information about which fragments are or are not included in the initial response.

The RFC makes supporting `@defer` optional on the server side:

Server can ignore @defer/@stream. This approach allows the GraphQL server to treat @defer and @stream as hints. The server can ignore these directives and include the deferred data in previous responses. This requires clients to be written with the expectation that deferred data could arrive in either its own incrementally delivered response or part of a previously delivered response.

If I understand correctly, given the following query:

{
    hero {
        __typename
        id
        ...humanDetails @defer
    }
}
fragment humanDetails on Human {
    name
}

The server can either respond with:

{
  "hero": {
    "__typename": "Human",
    "id": "1001"
  }
}

or

{
  "hero": {
    "__typename": "Human",
    "id": "1001",
    "name": "Luke"
  }
}

Both are valid responses.

In that case, it’s not possible to know in advance the shape of the models to use on the client side. The parser will have to try to map the response to either `class HumanWithDetails(id, name)` or `class Human(id)`.

This example is relatively simple, but it can get more complicated when multiple type conditions and/or object types come into play. The parser may have to attempt each fragment and, for each fragment, potentially recurse into any nested objects; if that fails, rewind and try again with the next fragment. This makes generating such code more difficult and almost certainly impacts runtime parsing performance.
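To make the cost concrete, here is roughly what that trial-and-error parsing looks like for the two candidate shapes above (illustrative code, not real codegen output):

type Human = { id: string };
type HumanWithDetails = { id: string; name: string };

function parseHero(json: Record<string, unknown>): Human | HumanWithDetails {
  if (typeof json.id !== "string") {
    throw new Error("Response matches neither Human nor HumanWithDetails");
  }
  // Try the "fragment included" shape first...
  if (typeof json.name === "string") {
    return { id: json.id, name: json.name };
  }
  // ...otherwise rewind and fall back to the deferred shape.
  return { id: json.id };
}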

Would it be possible to inform the client about what to expect in the initial response? That could be done with an extra "deferred" property in the response JSON:

// payload 1: humanDetails deferred
// "deferred" must be sent before "data"
{
    "deferred": ["humanDetails"],
    "data": { "__typename": "Human", "id": "1001" },
    "hasNext": true
}

And if the fragment is already included:

// payload 1: humanDetails included
{
    "data": { "__typename": "Human", "id": "1001", "name": "Luke" },
    "hasNext": false
}

If a server doesn’t support `@defer`, it will not send `deferred`, and the client knows that all fragments will be included in the initial response. This way the proposal stays transparent to existing servers.
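For illustration, with the proposed "deferred" property a generated parser could pick the right model before reading "data", with no fallback pass (hypothetical code against the proposal above):

interface InitialPayload {
  deferred?: string[]; // fragment names the server chose to defer
  data: { hero: { __typename: string; id: string; name?: string } };
  hasNext: boolean;
}

function parseHero(payload: InitialPayload) {
  const { hero } = payload.data;
  // "deferred" arrives before "data", so the shape is known up front:
  if (payload.deferred?.includes("humanDetails")) {
    return { id: hero.id }; // Human(id); name arrives in a later payload
  }
  return { id: hero.id, name: hero.name }; // HumanWithDetails(id, name)
}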
