Unification of Overlays and Traits
I’m opening this issue simply as a place to collect some ideas about how the concepts of Overlays and Traits might be brought together.
In both proposals, I think the key notion is a “fragment”, which I would describe as a “sparse” sub-object of an OpenAPI definition. In the Overlay proposal, a fragment is the value of an “Update Object” and has a type of `any`.
I think fragments – which I would like to call “mixins” – can have a more well-defined structure than just `any`. If we use the `discriminator` approach already present in OpenAPI for “mixins”, we can require (and validate) conformance to a particular structure. In particular, we can require a mixin to be a “sparse” form of any well-defined OpenAPI object, e.g. Operation, Response, Parameters, or even the whole OpenAPI definition.
Mixins could be defined as just another flavor of “component”. So:

```yaml
components:
  mixins:
    pageable:
      type: operation   # so what follows should validate as a "sparse"* OpenAPI Operation object
      # < pageable parameters and response props in #613 >
```

Note *: “sparse” here means all props are optional.
Mixins could then be included effectively anywhere in the API doc by reference:

```yaml
$mixin: "/components/mixins/pageable"
```
By virtue of the mixin type, it could be validated as allowed or not allowed at the point it is referenced.
Now Overlays can become simply a mechanism for inserting mixins and mixin references into an API document. The JMESPath mechanism of overlays still provides the ability to apply a single update to multiple target objects using wildcards, but that update would now be expressed as simply adding a “mixin” to each of the target objects.
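For instance (a rough sketch; the overlay field names here are illustrative rather than the exact syntax of the Overlay proposal), an overlay that adds paging to every GET operation might reduce to:

```yaml
overlay: 1.0.0
info:
  title: Apply paging to all GET operations
  version: 1.0.0
updates:
  - target: paths.*.get    # JMESPath-style wildcard over the target operations
    update:
      $mixin:
        - pageable         # the whole update is just the mixin reference
```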
These are just strawman ideas and I do not claim to have thought them through in any detail, but I hope they can serve as useful seeds for discussion.
Examples
Mixins are a recasting of “Traits” as described in #613. Here’s how I imagine mixins could be used to apply a “pageable” trait to operations.
The “pageable” mixin would be defined in the `components` / `mixins` section of the API doc:
```yaml
components:
  mixins:
    pageable:
      type: operation
      content:
        parameters:
          - name: pageSize
            in: query
            type: number
            required: false
            default: 10
          - name: pageNumber
            in: query
            type: number
            required: false
            default: 1
        responses:
          200:
            schema:
              type: object
              properties:
                pagination:
                  $ref: "#/definitions/PaginationFragment"
```
and an operation would “apply” the “pageable” mixin with a `$mixin` property, as follows:

```yaml
paths:
  /foo:
    get:
      description: search for foo resources
      $mixin:
        - pageable
      parameters:
        - name: q
          in: query
          type: string
          required: true
      responses:
        200:
          schema:
            type: object
            properties:
              FooItems:
                type: array
                items:
                  $ref: '#/definitions/FooItem'
```
The application of the mixin to the operation would yield an operation like:

```yaml
paths:
  /foo:
    get:
      description: search for foo resources
      parameters:
        - name: q
          in: query
          type: string
          required: true
        - name: pageSize
          in: query
          type: number
          required: false
          default: 10
        - name: pageNumber
          in: query
          type: number
          required: false
          default: 1
      responses:
        200:
          schema:
            type: object
            properties:
              FooItems:
                type: array
                items:
                  $ref: '#/definitions/FooItem'
              pagination:
                $ref: "#/definitions/PaginationFragment"
```
Top GitHub Comments
Thanks for the lucid explanation @handrews. I think preserving the integrity of JSON Schema’s processing model and composition semantics should be an important design goal for us.
I really cannot say much more without looking more carefully at use cases.
But I do think part of our problem is that we’re (still) trying to use JSON Schema as a type definition language. A prototypical use case for mix-ins goes something like, “I want to add these properties to the object schema of the request body.” But you’re not really adding properties, you’re adding constraints, which has a whole different set of implications. And the nature of the “adding” operation needs much more careful thought than we’re accustomed to giving it.
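As a sketch of that distinction (the schema name here is hypothetical, and this is plain composition rather than any proposed mix-in syntax): “adding” paging properties to a body really means the instance must now satisfy an additional subschema alongside the original, unmodified one:

```yaml
# "Adding" paging properties to a body schema via composition: the original
# schema is untouched; the instance must now also satisfy a second subschema.
allOf:
  - $ref: "#/components/schemas/SearchResultBody"   # hypothetical existing body schema
  - type: object
    properties:
      pageSize:
        type: integer
      pageNumber:
        type: integer
```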
@tedepstein OK, after a lot of thought, I’ve distilled this down to a relatively concise explanation.
Part of the problem with `$merge` is potentially unexpected behavior as large systems grow and change. If I have a schema that looks like:
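(A minimal sketch of such a schema, assumed here for illustration; the essential feature, per the discussion below, is that Foo constrains any property it does not list under `properties` to be a string.)

```yaml
# Sketch of a "Foo" schema: any property not listed under "properties"
# (such as specialProp3) must be a string.
Foo:
  type: object
  properties:
    id:
      type: string
  additionalProperties:
    type: string
```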
I publish this schema as the schema that officially validates Foos.
You decide that you have a FooBar which is pretty close to being a Foo but has one more special property in it. So you `$merge` or `$mixin` or whatever:
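(A sketch of what that might look like, assuming a hypothetical `$merge` keyword that splices the referenced schema’s keywords together with new ones; the exact syntax is illustrative.)

```yaml
# Hypothetical $merge form; not part of any published spec.
FooBar:
  $merge:
    source:
      $ref: "#/definitions/Foo"
    with:
      properties:
        specialProp3:
          type: boolean     # listed here, so Foo's additionalProperties no longer applies to it
      required:
        - specialProp3
```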
Because of how `properties` and `additionalProperties` interact, this has the effect that, if an instance has a property named `"specialProp3"`, then to validate as a Foo, it would have to be a string, but to validate as a FooBar, it would have to be a boolean.

With all of the current and planned features of JSON Schema, this is intentionally not possible. If you build on a Foo, then your derived schema MUST satisfy all of the constraints specified by the Foo schema.
But in this example, FooBar is derived from (in the sense of depending on / building on) Foo, and yet (due to the `required`) a valid FooBar is in fact never a valid Foo. You can, in fact, use this sort of keyword to slice things up and produce new schemas that have no clear relationship to the constituent schema.

JSON Schema is a constraint system. A fundamental rule of such a constraint system is that you cannot lift constraints. You can add more, and that is how things are re-used. But you cannot lift them.
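For contrast, a sketch of building on Foo with standard composition, where all of Foo’s constraints still apply:

```yaml
# Under allOf, every constraint of Foo still applies to a FooBar instance.
# Foo says any property it does not list (including specialProp3) must be a
# string, while this schema requires specialProp3 to be a boolean, so no
# instance with specialProp3 can validate: the string constraint can only be
# added to, never overridden.
FooBar:
  allOf:
    - $ref: "#/definitions/Foo"
  properties:
    specialProp3:
      type: boolean
  required:
    - specialProp3
```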
`unevaluatedProperties` lets you do some complex things, but it is still adding constraints.
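As a sketch of the kind of thing it enables (assuming a hypothetical `Base` schema that declares some properties but no `additionalProperties`): a derived schema can be kept “closed” across the composition boundary, which is still only adding a constraint:

```yaml
# unevaluatedProperties (draft 2019-09 and later) can see across allOf:
# properties evaluated by Base or by the local "properties" are allowed;
# any property neither of them accounts for fails validation.
Extended:
  allOf:
    - $ref: "#/definitions/Base"
  properties:
    extraProp:
      type: string
  unevaluatedProperties: false
```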
Once all relative URIs in the schema are resolved (interactions between `$id` and `$ref`), each schema object’s constraints can be evaluated independent of any parent or sibling schemas. First you evaluate all subschemas, and then you evaluate the local keywords.

If you force some sort of `$merge` behavior into JSON Schema in the context of OpenAPI, then it is no longer a proper constraint system. While the independent evaluation of objects still technically exists in the form of the lazily evaluated merge results, schema authors cannot see those objects easily. In terms of what you can see, you can no longer trust that your schema object is evaluated independently.

The author of the Foo schema may not have any idea that there is a FooBar that splices their Foo schema. But now, instead of the Foo schema being a properly encapsulated description of valid Foos, it is just a source of keywords that can be rearranged arbitrarily. There is no encapsulation anymore.
I have spent pretty much the entire current draft cycle focused on keeping people from breaking JSON Schema’s fundamental constraint and encapsulation design.
All of the work on modular extensibility, keyword classification, and `unevaluatedProperties` has been towards that goal. `unevaluatedProperties` is obvious, but the rest of it I have done in order to enable users (specifically OAS) to build things like code generation vocabularies out of annotations, and therefore not need `$merge` splicing features that ruin the constraint system in order to get the desired results. That required supporting `unevaluatedProperties` and similarly dynamic keywords without breaking the fundamental approach.

It has been a lot of work, and not just by me. But if OpenAPI decides to allow schema mixins… well, you’re probably one of the biggest users of JSON Schema. People who are looking for shortcuts instead of building sustainable systems will demand the mixin feature be added to JSON Schema proper instead of learning all of the things that we did to build a better system.
I realize that not everyone cares about JSON Schema having a consistent, extensible, and elegant underlying model. Although I assert that having such an underlying model would make JSON Schema more successful in the long run as use cases grow and change. I certainly don’t expect OpenAPI to consider this property of JSON Schema a goal.
But I hope this makes it clear why I’m not happy with this direction and how it is likely to impact JSON Schema if chosen.