Overview
NOTE
This specification contains the low-level details for connector authors, and is intended as a complete reference.
Users looking to build their own connectors might want to also look at some additional resources:
- Hasura Connector Hub contains a list of currently available connectors
- Let's Build a Connector is a step-by-step guide to creating a connector using TypeScript
Hasura data connectors allow you to extend the functionality of the Hasura server by providing web services which can resolve new sources of data. By following this specification, those sources of data can be added to your Hasura graph, and the usual Hasura features such as relationships and permissions will be supported for your data source.
This specification is designed to be as general as possible, supporting many different types of data source, while still being targeted enough to provide useful features with high performance guarantees. It is important to note that data connectors are designed for tabular data which supports efficient filtering and sorting. If you are able to model your data source given these constraints, then it will be a good fit for a data connector, but if not, you might like to consider a GraphQL remote source integration with Hasura instead.
API Specification
Version |
---|
0.2.0 |
A data connector encapsulates a data source by implementing the protocol in this specification.
A data connector must implement several web service endpoints:
- A capabilities endpoint, which describes which features the data source is capable of implementing.
- A schema endpoint, which describes the resources provided by the data source, and the shape of the data they contain.
- A query endpoint, which reads data from one of the relations described by the schema endpoint.
- A query/explain endpoint, which explains a query plan, without actually executing it.
- A mutation endpoint, which modifies the data in one of the relations described by the schema endpoint.
- A mutation/explain endpoint, which explains a mutation plan, without actually executing it.
- A metrics endpoint, which exposes runtime metrics about the data connector.
- A health endpoint, which indicates service health and readiness.
Changelog
0.2.0
Breaking Changes
- `ComparisonTarget::RootCollectionColumn` was removed and replaced by named scopes (RFC)
- `path` was removed from `ComparisonTarget::Column` (RFC)
- `AggregateFunctionDefinition` was changed to an `enum`, to support standardized aggregate functions (RFC)
- `ComparisonValue::Column` no longer uses `ComparisonTarget` to pick the column. Instead, the necessary column and pathing details are inlined onto the enum variant.
- Declarations of foreign keys have moved from `CollectionInfo` to `ObjectType`. This enables object types nested within a collection's object type to declare foreign keys.
- The target column in column mappings can now reference an object-nested field. The target column is now a field path (`Vec<FieldName>`) instead of just a field (`FieldName`). Column mappings occur in:
  - `Relationship::column_mapping`
  - `ForeignKeyConstraint::column_mapping`
- Scalar type representations are now required, and the previously deprecated `number` and `integer` representations have been removed.
- If the capability `query.aggregates` is enabled, it is now expected that the new schema property `capabilities.query.aggregates` is also returned.
Specification
Grouping
A new section was added to the specification which allows callers to group rows and aggregate within groups, generalizing SQL's `GROUP BY` functionality.
Extraction Functions
Extraction functions were added to the schema response to facilitate grouping by components of complex dimensions.
Named scopes
Root column references were generalized to named scopes. Scopes are introduced by `EXISTS` expressions, and named scopes allow references to columns outside of the current scope; that is, outside the `EXISTS` expression. Unlike root column references, named scopes allow the caller to refer to columns in any collection in scope, and not just the root collection.
Nested collections
- `NestedField::Collection` was added to support querying nested collections.
- Exists predicates can now search nested collections.
Filtering involving nested scalar arrays
Nested scalar arrays can now be compared against in filter expressions.
- Exists predicates can now search nested scalar collections
- Expressions now have nested array comparison operators that can be used to test if a scalar array is empty or if it contains an element
Filter by aggregates
`ComparisonTarget` was extended to allow filtering by aggregates.
Nested relationships
Nested relationships are relationships where the columns being joined upon exist on nested objects within a collection's object type. While NDC 0.1.x supports selecting fields across a relationship that starts from within a nested object, it does not support nested relationships in other contexts, such as filtering and ordering. To resolve this, the following additions have been made:
- `ExistsInCollection::Related` has gained a `field_path` field that enables descent through nested fields before applying the relationship. This enables support for filtering across a nested relationship.
- `PathElement` has also gained a `field_path` field that enables descent through nested fields before applying the relationship. `PathElement` is used in multiple places, which unlocks nested relationships in these places:
  - `ComparisonValue::Column` - part of filter predicates; where the right hand side of a comparison operation references a column
  - `ComparisonTarget::Aggregate` - part of filter predicates; where the left hand side of a comparison operation references an aggregate
  - `OrderByTarget::Column` - when you want to order by a column across an object relationship
  - `OrderByTarget::Aggregate` - when you want to order by an aggregate that happens across a nested object relationship
  - `Dimension::Column` - when selecting a column to group by that occurs across a nested object relationship
Column mappings used in relationships were also modified to allow the target column to be referenced via a field path, to allow targeting of object-nested columns across a relationship. Foreign keys are also now defined on the object type rather than the collection, which allows the declaration of foreign keys on object types that are used in nested fields inside a collection.
Nested relationships are now gated behind the `relationships.nested` capabilities, and so connectors that do not declare these capabilities can expect not to have to deal with nested relationships.
Wider field arguments support
Object type fields can declare arguments that must be submitted when the field is evaluated. However, support for using these fields is not universal; there are some features which do not allow the use of fields with arguments, for example in nested field paths, or in relationship column mappings.
Now, support for field arguments has been added to:
- `ComparisonTarget::Column`
- `ComparisonValue::Column`
- `OrderByTarget::Column`
- `Aggregate::ColumnCount`
- `Aggregate::SingleColumn`
However, field arguments are still considered an unstable feature and their use is not recommended outside of very specialized, advanced use cases.
More standard comparison operators, standard aggregate functions
Standard comparison operators have been added for `>`, `>=`, `<`, and `<=`, and for the string comparisons `contains`, `icontains`, `starts_with`, `istarts_with`, `ends_with`, and `iends_with`. Connectors that have already defined these operators as custom operators should migrate them to standard operators.
In addition, aggregate functions now have a set of standard functions that can be implemented: `sum`, `average`, `min`, and `max`. Connectors that have already defined these functions as custom aggregate functions should migrate them to standard aggregate functions.
`X-Hasura-NDC-Version` header
Clients can now indicate the intended protocol version in an HTTP header alongside any request.
Scalar type representations
Scalar type representations are now required; previously they were optional, where a missing representation was assumed to mean JSON. In addition, the deprecated `number` and `integer` representations have been removed; a more precise representation (such as `float64` or `int32`) should be chosen instead.
Capability-specific schema information
Certain capabilities may require specific data to be returned in the schema to support them. This data is now returned in the capabilities property on the schema response.
Specifically, there is a new schema property, `capabilities.query.aggregates.count_scalar_type`, that defines the result type of all count aggregate functions. This must be returned if the capability `query.aggregates` is enabled.
0.1.6
Specification
`EXISTS` expressions can now query nested collections
0.1.5
Rust Libraries
- Add newtypes for string types
- Remove duplication by setting values in the workspace file
- Export the specification version from `ndc-models`
0.1.4
Specification
- Aggregates over nested fields
ndc-test
- Replay test folders in alphabetical order
Fixes
- Add `impl Default` for `NestedFieldCapabilities`
0.1.3
Specification
- Support field-level arguments
- Support filtering and ordering by values of nested fields
- Added a `biginteger` type representation
ndc-test
- Validate all response types
- Release pipeline for ndc-test CLI
Rust Libraries
- Upgrade Rust to v1.78.0, and the Rust dependencies to their latest versions
- Add back features for native-tls vs rustls
0.1.2
Specification
- More type representations were added, and some were deprecated.
Rust Libraries
- Upgrade to Rust v1.77
- The `ndc-client` library was removed. Clients are advised to use the new `ndc-models` library for type definitions, and to use an HTTP client library of their choice directly.
0.1.1
Specification
- Equality operators were more precisely specified
- Scalar types can now specify representations
ndc-test
- Aggregate tests are gated behind the aggregates capability
- Automatic tests are now generated for exists predicates
- Automatic tests are now generated for `single_column` aggregates
Rust Libraries
- `rustls` is supported instead of `native-tls` using a Cargo feature
- Upgrade `opentelemetry` to v0.22.0
- `colored` dependency removed in favor of `colorful`
0.1.0
Terminology
Tables are now known as collections.
Collection Names
Collection names are now single strings instead of arrays of strings. The array structure was previously used to represent qualification by a schema or database name, but the structure was not used anywhere on the client side, and had no semantic meaning. GDC now abstracts over these concepts, and expects relations to be named by strings.
No Configuration
The configuration header convention was removed. Connectors are now expected to manage their own configuration, and a connector URL fully represents that connector with its pre-specified configuration.
No Database Concepts in GDC
GDC no longer sends any metadata to indicate database-specific concepts. For example, a Collection used to indicate whether it was a table or a view. Such metadata would be passed back in the query IR, to help the connector disambiguate which database object to query. When we proposed adding functions, we would have had to add a new type to disambiguate nullary functions from collections, etc. Instead, we now expect connectors to understand their own schema, and understand the query IR that they receive, as long as it is compatible with their GDC schema.
Column types are no longer sent in the query and mutation requests.
Tables, views and functions are unified under a single concept called "collections". GDC does not care how queries and mutations on relations are implemented.
Collection Arguments
Collection arguments were added to relations in order to support use cases like table-valued functions and certain REST endpoints. Relationships can determine collection arguments.
Functions
Collections which return a single column and a single row are also called "functions", and identified separately in the schema response.
Field Arguments
Field arguments were added to fields in order to support use cases like computed fields.
Operators
The equality operator is now expected on every scalar type implicitly.
Note: it was already implicitly supported by any connector advertising the `variables` capability, which imposes column equality constraints in each row set fetched in a forall query.
The equality operator will have semantics assigned for the purposes of testing.
Scalars can define additional operators, whose semantics are opaque.
Procedures
Procedures were added to the list of available mutation operation types.
Schema
- Scalar types were moved to the schema endpoint
- The `object_types` field was added to the schema endpoint
Raw Queries
The raw query endpoint was removed, since it cannot be given any useful semantics across all implementations.
Datasets
The datasets endpoints were removed from the specification, because there was no way to use them usefully without prior knowledge of their implementation.
Basics
Data connectors are implemented as HTTP services. To refer to a running data connector, it suffices to specify its base URL. All required endpoints are specified relative to this base URL.
All endpoints should accept JSON (in the case of POST request bodies) and return JSON using the `application/json` content type. The particular format of each JSON document will be specified for each endpoint.
Versioning
This specification is versioned using semantic versioning, and a data connector declares the semantic version of the specification that it implements via its capabilities endpoint.
Non-breaking changes to the specification may be achieved via the addition of new capabilities, which a connector will be assumed not to implement if the corresponding field is not present in its capabilities endpoint.
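Because an unadvertised capability must be treated as unimplemented, a client can decide whether a feature is usable with a simple presence check on the capabilities object. The following sketch is illustrative (the helper name is not part of the specification):

```python
def supports(capabilities: dict, path: str) -> bool:
    """Return True if the dotted capability path is present in the
    capabilities object; an absent field means the connector is assumed
    not to implement that capability."""
    node = capabilities
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return True

# Example: a connector advertising aggregates but not query explain.
caps = {"query": {"aggregates": {"filter_by": {}}}, "mutation": {}}
```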
Requirements
The client may send a semantic version string in the `X-Hasura-NDC-Version` HTTP header to any of the HTTP endpoints described by this specification. This header communicates the version of this specification that the client intends to use. Typically this should be the minimum non-breaking version of the specification that is supported by the client, so that the widest range of connectors can be used. For example, if a client supports sending v0.1.6 requests, then it is technically sending requests that are compatible with v0.1.0 connectors, because non-breaking additions are gated behind capabilities and would be disabled for older connectors. In this case, the client should send `0.1.0` as its version in the header.
If the client sends this header, the connector should check compatibility with the requested version, and return an appropriate HTTP error code (e.g. `400 Bad Request`) if it is not capable of providing an implementation. Compatibility is defined as the semver range `^{requested-version}`. For example, if the client sends `0.2.0`, then the compatible semver range is `^0.2.0`. If the connector implemented spec version `0.1.6`, this would be incompatible, but if it implemented spec version `0.2.1`, this would be compatible.
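This compatibility rule can be sketched as follows; the function name is illustrative, and the caret semantics follow semver (for major version 0, a minor version bump is treated as breaking):

```python
def is_compatible(requested: str, implemented: str) -> bool:
    """Return True if `implemented` satisfies the semver range ^{requested}."""
    req = tuple(int(p) for p in requested.split("."))
    impl = tuple(int(p) for p in implemented.split("."))
    if req[0] == 0:
        # ^0.Y.Z means >=0.Y.Z and <0.(Y+1).0
        return impl[0] == 0 and impl[1] == req[1] and impl >= req
    # ^X.Y.Z means >=X.Y.Z and <(X+1).0.0
    return impl[0] == req[0] and impl >= req
```

For a requested version of `0.2.0`, this accepts an implemented version of `0.2.1` but rejects `0.1.6`, matching the example above.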
Note: the `/capabilities` endpoint also indicates the implemented specification version for any connector, but it may not be practical for a client to check the capabilities endpoint before issuing a new request, so this header provides a way to check compatibility in the course of a normal request.
Error Handling
Status Codes
Data connectors should use standard HTTP error codes to signal error conditions back to the Hasura server. In particular, the following error codes should be used in the indicated scenarios:
Response Code | Meaning | Used when |
---|---|---|
200 | OK | The request was handled successfully according to this specification . |
400 | Bad Request | The request did not match the data connector's expectation based on this specification. |
403 | Forbidden | The request could not be handled because a permission check failed - for example, a mutation might fail because a check constraint was not met. |
409 | Conflict | The request could not be handled because it would create a conflicting state for the data source - for example, a mutation might fail because a foreign key constraint was not met. |
422 | Unprocessable Content | The request could not be handled because, while the request was well-formed, it was not semantically correct. For example, a value for a custom scalar type was provided, but with an incorrect type. |
500 | Internal Server Error | The request could not be handled because of an error on the server. |
501 | Not Supported | The request could not be handled because it relies on an unsupported capability. Note: this ought to indicate an error on the caller side, since the caller should not generate requests which are incompatible with the indicated capabilities. |
502 | Bad Gateway | The request could not be handled because an upstream service was unavailable or returned an unexpected response, e.g., a connection to a database server failed |
Response Body
Data connectors should return an `ErrorResponse` as JSON in the response body, in the case of an error.
Service Health
Data connectors must provide a health endpoint which can be used to indicate service health and readiness to any client applications.
Request
GET /health
Response
If the data connector is available and ready to accept requests, then the health endpoint should return status code `200 OK`.
Otherwise, it should ideally return a status code `503 Service Unavailable`, or some other appropriate HTTP error code.
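A minimal sketch of this behaviour, using Python's standard library (the handler and readiness check are hypothetical, not part of the specification):

```python
from http.server import BaseHTTPRequestHandler

def health_status(ready: bool) -> int:
    """Map service readiness to the status codes described above."""
    return 200 if ready else 503

class HealthHandler(BaseHTTPRequestHandler):
    # Illustrative: a real connector would share this server with
    # its /query, /mutation, /schema, and /capabilities endpoints.
    def do_GET(self):
        if self.path == "/health":
            self.send_response(health_status(True))
        else:
            self.send_response(404)
        self.end_headers()
```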
Metrics
Data connectors should provide a metrics endpoint which reports relevant metrics in a textual format. Data connectors can report any metrics which are deemed relevant, or none at all, with the exception of any reserved keys.
Request
GET /metrics
Response
The metrics endpoint should return a content type of `text/plain`, and return any metrics in the Prometheus textual format.
Reserved keys
Metric names prefixed with `hasura_` are reserved for future use, and should not be included in the response.
Example
# HELP query_total The number of /query requests served
# TYPE query_total counter
query_total 10000 1685405427000
# HELP mutation_total The number of /mutation requests served
# TYPE mutation_total counter
mutation_total 5000 1685405427000
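A connector might assemble this output with a small helper like the following sketch (the function name is illustrative; it emits the `# HELP`/`# TYPE` lines shown above, omits the optional timestamp, and rejects reserved `hasura_` names):

```python
def render_counter(name: str, help_text: str, value: int) -> str:
    """Render one counter metric in the Prometheus textual format."""
    if name.startswith("hasura_"):
        raise ValueError(f"metric name {name!r} uses the reserved hasura_ prefix")
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} counter\n"
        f"{name} {value}\n"
    )
```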
Telemetry
Hasura uses OpenTelemetry to coordinate the collection of traces and metrics with data connectors.
Trace Collection
Trace collection is out of the scope of this specification currently. This may change in a future revision.
Trace Propagation
Hasura uses the W3C TraceContext specification to implement trace propagation. Data connectors should propagate tracing headers in this format to any downstream services.
Capabilities
The capabilities endpoint provides metadata about the features which the data connector (and data source) support.
Request
GET /capabilities
Response
Example
{
"version": "0.2.0",
"capabilities": {
"query": {
"aggregates": {
"filter_by": {},
"group_by": {
"filter": {},
"order": {},
"paginate": {}
}
},
"variables": {},
"nested_fields": {
"filter_by": {
"nested_arrays": {
"contains": {},
"is_empty": {}
}
},
"order_by": {},
"aggregates": {},
"nested_collections": {}
},
"exists": {
"named_scopes": {},
"unrelated": {},
"nested_collections": {},
"nested_scalar_collections": {}
}
},
"mutation": {},
"relationships": {
"relation_comparisons": {},
"order_by_aggregate": {},
"nested": {
"array": {},
"filtering": {},
"ordering": {}
}
}
}
}
Response Fields
Name | Description |
---|---|
version | A semantic version number of this specification which the data connector claims to implement |
capabilities | The capabilities that this connector supports, see below |
Capabilities Fields
These fields are set underneath the `capabilities` property on the `CapabilitiesResponse` object:
Name | Description |
---|---|
mutation.explain | Whether the data connector is capable of describing mutation plans |
mutation.transactional | Whether the data connector is capable of executing multiple mutations in a transaction |
query.aggregates | Whether the data connector supports aggregate queries. The schema capabilities.query.aggregates should also be returned. |
query.aggregates.filter_by | Whether the data connector supports filtering by aggregated values |
query.aggregates.group_by | Whether the data connector supports grouping operations |
query.aggregates.group_by.filter | Whether the data connector supports filtering on groups |
query.aggregates.group_by.order | Whether the data connector supports ordering on groups |
query.aggregates.group_by.paginate | Whether the data connector supports pagination on groups |
query.exists.named_scopes | Whether the data connector supports named scopes in exists expressions |
query.exists.nested_collections | Whether the data connector supports exists expressions against nested collections |
query.exists.nested_scalar_collections | Whether the data connector supports exists expressions against nested scalar collections |
query.exists.unrelated | Whether the data connector supports exists expressions against unrelated collections |
query.explain | Whether the data connector is capable of describing query plans |
query.nested_fields.aggregates | Whether the data connector is capable of aggregating fields in nested objects |
query.nested_fields.filter_by | Whether the data connector is capable of filtering by nested fields |
query.nested_fields.filter_by.nested_arrays | Whether the data connector is capable of filtering over nested arrays using array_comparison expressions |
query.nested_fields.filter_by.nested_arrays.contains | Whether the data connector is capable of filtering over nested arrays using the contains operator |
query.nested_fields.filter_by.nested_arrays.is_empty | Whether the data connector is capable of filtering over nested arrays using the is empty operator |
query.nested_fields.nested_collections | Whether the data connector supports nested collection field queries |
query.nested_fields.order_by | Whether the data connector is capable of ordering by nested fields |
query.variables | Whether the data connector supports queries with variables |
relationships | Whether the data connector supports relationships |
relationships.nested | Whether the data connector supports relationships that can start from or end with columns in nested objects |
relationships.nested.array | Whether the data connector supports relationships that can start from columns inside nested objects inside nested arrays |
relationships.nested.filtering | Whether the data connector supports using relationships that can start from columns inside nested objects while filtering |
relationships.nested.ordering | Whether the data connector supports using relationships that can start from columns inside nested objects while ordering |
relationships.order_by_aggregate | Whether order by clauses can include aggregates |
relationships.relation_comparisons | Whether comparisons between two columns can include a value column that is across a relationship |
See also
- Type `Capabilities`
- Type `CapabilitiesResponse`
- Type `QueryCapabilities`
- Type `NestedFieldCapabilities`
- Type `MutationCapabilities`
- Type `RelationshipCapabilities`
Types
Several definitions in this specification make mention of types. Types are used to categorize the sorts of data returned and accepted by a data connector.
Scalar and named object types are defined in the schema response , and referred to by name at the point of use.
Array types, nullable types and predicate types are constructed at the point of use.
Named Types
To refer to a named (scalar or object) type, use the type `named`, and provide the name:
{
"type": "named",
"name": "String"
}
Array Types
To refer to an array type, use the type `array`, and refer to the type of the elements of the array in the `element_type` field:
{
"type": "array",
"element_type": {
"type": "named",
"name": "String"
}
}
Nullable Types
To refer to a nullable type, use the type `nullable`, and refer to the type of the underlying (non-null) inhabitants in the `underlying_type` field:
{
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "String"
}
}
Nullable and array types can be nested. For example, to refer to a nullable array of nullable strings:
{
"type": "nullable",
"underlying_type": {
"type": "array",
"element_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "String"
}
}
}
}
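The constructions above compose mechanically, so a connector or client can build them with small helpers like the following sketch (the function names are illustrative; the JSON shapes are from this specification):

```python
def named(name: str) -> dict:
    """A reference to a named scalar or object type."""
    return {"type": "named", "name": name}

def array(element_type: dict) -> dict:
    """An array of the given element type."""
    return {"type": "array", "element_type": element_type}

def nullable(underlying_type: dict) -> dict:
    """A nullable wrapper around the given type."""
    return {"type": "nullable", "underlying_type": underlying_type}

# The nullable array of nullable strings from the example above:
nullable_string_array = nullable(array(nullable(named("String"))))
```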
Predicate Types
A predicate type can be used to represent valid predicates (of type `Expression`) for an object type. A value of a predicate type is represented, in inputs and return values, as a JSON value which parses as an `Expression`. Valid expressions are those which refer to the columns of the object type.
To refer to a predicate type, use the type `predicate`, and provide the name of the object type:
{
"type": "predicate",
"object_type_name": "article"
}
Note: predicate types are intended primarily for use in arguments to functions and procedures, but they can be used anywhere a `Type` is expected, including in output types.
See also
- Type `Type`
- Scalar types
- Object types
Schema
The schema endpoint defines any types used by the data connector, and describes the collections and their columns, functions, and any procedures.
The schema endpoint is used to specify the behavior of a data connector, so that it can be tested, verified, and used by tools such as code generators. It is primarily provided by data connector implementors as a development and specification tool, and it is not expected to be used at "runtime", in the same sense that the `/query` and `/mutation` endpoints would be.
Request
GET /schema
Response
See `SchemaResponse`
Example
{
"scalar_types": {
"Date": {
"representation": {
"type": "date"
},
"aggregate_functions": {},
"comparison_operators": {
"eq": {
"type": "equal"
},
"in": {
"type": "in"
}
},
"extraction_functions": {
"day": {
"type": "day",
"result_type": "Int"
},
"month": {
"type": "month",
"result_type": "Int"
},
"year": {
"type": "year",
"result_type": "Int"
}
}
},
"Float": {
"representation": {
"type": "float64"
},
"aggregate_functions": {
"avg": {
"type": "average",
"result_type": "Float"
},
"max": {
"type": "max"
},
"min": {
"type": "min"
},
"sum": {
"type": "sum",
"result_type": "Float"
}
},
"comparison_operators": {
"eq": {
"type": "equal"
},
"gt": {
"type": "greater_than"
},
"gte": {
"type": "greater_than_or_equal"
},
"in": {
"type": "in"
},
"lt": {
"type": "less_than"
},
"lte": {
"type": "less_than_or_equal"
}
},
"extraction_functions": {}
},
"Int": {
"representation": {
"type": "int32"
},
"aggregate_functions": {
"avg": {
"type": "average",
"result_type": "Float"
},
"max": {
"type": "max"
},
"min": {
"type": "min"
},
"sum": {
"type": "sum",
"result_type": "Int64"
}
},
"comparison_operators": {
"eq": {
"type": "equal"
},
"gt": {
"type": "greater_than"
},
"gte": {
"type": "greater_than_or_equal"
},
"in": {
"type": "in"
},
"lt": {
"type": "less_than"
},
"lte": {
"type": "less_than_or_equal"
}
},
"extraction_functions": {}
},
"Int64": {
"representation": {
"type": "int64"
},
"aggregate_functions": {
"avg": {
"type": "average",
"result_type": "Float"
},
"max": {
"type": "max"
},
"min": {
"type": "min"
},
"sum": {
"type": "sum",
"result_type": "Int64"
}
},
"comparison_operators": {
"eq": {
"type": "equal"
},
"gt": {
"type": "greater_than"
},
"gte": {
"type": "greater_than_or_equal"
},
"in": {
"type": "in"
},
"lt": {
"type": "less_than"
},
"lte": {
"type": "less_than_or_equal"
}
},
"extraction_functions": {}
},
"String": {
"representation": {
"type": "string"
},
"aggregate_functions": {
"max": {
"type": "max"
},
"min": {
"type": "min"
}
},
"comparison_operators": {
"contains": {
"type": "contains"
},
"ends_with": {
"type": "ends_with"
},
"eq": {
"type": "equal"
},
"gt": {
"type": "greater_than"
},
"gte": {
"type": "greater_than_or_equal"
},
"icontains": {
"type": "contains_insensitive"
},
"iends_with": {
"type": "ends_with_insensitive"
},
"in": {
"type": "in"
},
"istarts_with": {
"type": "starts_with_insensitive"
},
"like": {
"type": "custom",
"argument_type": {
"type": "named",
"name": "String"
}
},
"lt": {
"type": "less_than"
},
"lte": {
"type": "less_than_or_equal"
},
"starts_with": {
"type": "starts_with"
}
},
"extraction_functions": {}
}
},
"object_types": {
"article": {
"description": "An article",
"fields": {
"author_id": {
"description": "The article's author ID",
"type": {
"type": "named",
"name": "Int"
}
},
"id": {
"description": "The article's primary key",
"type": {
"type": "named",
"name": "Int"
}
},
"published_date": {
"description": "The article's date of publication",
"type": {
"type": "named",
"name": "Date"
}
},
"title": {
"description": "The article's title",
"type": {
"type": "named",
"name": "String"
}
}
},
"foreign_keys": {
"Article_AuthorID": {
"column_mapping": {
"author_id": [
"id"
]
},
"foreign_collection": "authors"
}
}
},
"author": {
"description": "An author",
"fields": {
"first_name": {
"description": "The author's first name",
"type": {
"type": "named",
"name": "String"
}
},
"id": {
"description": "The author's primary key",
"type": {
"type": "named",
"name": "Int"
}
},
"last_name": {
"description": "The author's last name",
"type": {
"type": "named",
"name": "String"
}
}
},
"foreign_keys": {}
},
"city": {
"description": "A city",
"fields": {
"name": {
"description": "The institution's name",
"type": {
"type": "named",
"name": "String"
}
}
},
"foreign_keys": {}
},
"country": {
"description": "A country",
"fields": {
"area_km2": {
"description": "The country's area size in square kilometers",
"type": {
"type": "named",
"name": "Int"
}
},
"cities": {
"description": "The cities in the country",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "city"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
},
"id": {
"description": "The country's primary key",
"type": {
"type": "named",
"name": "Int"
}
},
"name": {
"description": "The country's name",
"type": {
"type": "named",
"name": "String"
}
}
},
"foreign_keys": {}
},
"institution": {
"description": "An institution",
"fields": {
"departments": {
"description": "The institution's departments",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "String"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
},
"id": {
"description": "The institution's primary key",
"type": {
"type": "named",
"name": "Int"
}
},
"location": {
"description": "The institution's location",
"type": {
"type": "named",
"name": "location"
}
},
"name": {
"description": "The institution's name",
"type": {
"type": "named",
"name": "String"
}
},
"staff": {
"description": "The institution's staff",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "staff_member"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
}
},
"foreign_keys": {}
},
"location": {
"description": "A location",
"fields": {
"campuses": {
"description": "The location's campuses",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "String"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
},
"city": {
"description": "The location's city",
"type": {
"type": "named",
"name": "String"
}
},
"country": {
"description": "The location's country",
"type": {
"type": "named",
"name": "String"
}
},
"country_id": {
"description": "The location's country ID",
"type": {
"type": "named",
"name": "Int"
}
}
},
"foreign_keys": {
"Location_CountryID": {
"column_mapping": {
"country_id": [
"id"
]
},
"foreign_collection": "countries"
}
}
},
"staff_member": {
"description": "A staff member",
"fields": {
"born_country_id": {
"description": "The ID of the country the staff member was born in",
"type": {
"type": "named",
"name": "Int"
}
},
"first_name": {
"description": "The staff member's first name",
"type": {
"type": "named",
"name": "String"
}
},
"last_name": {
"description": "The staff member's last name",
"type": {
"type": "named",
"name": "String"
}
},
"specialities": {
"description": "The staff member's specialities",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "String"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
}
},
"foreign_keys": {
"Staff_BornCountryID": {
"column_mapping": {
"born_country_id": [
"id"
]
},
"foreign_collection": "countries"
}
}
}
},
"collections": [
{
"name": "articles",
"description": "A collection of articles",
"arguments": {},
"type": "article",
"uniqueness_constraints": {
"ArticleByID": {
"unique_columns": [
"id"
]
}
}
},
{
"name": "authors",
"description": "A collection of authors",
"arguments": {},
"type": "author",
"uniqueness_constraints": {
"AuthorByID": {
"unique_columns": [
"id"
]
}
}
},
{
"name": "institutions",
"description": "A collection of institutions",
"arguments": {},
"type": "institution",
"uniqueness_constraints": {
"InstitutionByID": {
"unique_columns": [
"id"
]
}
}
},
{
"name": "countries",
"description": "A collection of countries",
"arguments": {},
"type": "country",
"uniqueness_constraints": {
"CountryByID": {
"unique_columns": [
"id"
]
}
}
},
{
"name": "articles_by_author",
"description": "Articles parameterized by author",
"arguments": {
"author_id": {
"type": {
"type": "named",
"name": "Int"
}
}
},
"type": "article",
"uniqueness_constraints": {}
}
],
"functions": [
{
"name": "latest_article_id",
"description": "Get the ID of the most recent article",
"arguments": {},
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
},
{
"name": "latest_article",
"description": "Get the most recent article",
"arguments": {},
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "article"
}
}
}
],
"procedures": [
{
"name": "upsert_article",
"description": "Insert or update an article",
"arguments": {
"article": {
"description": "The article to insert or update",
"type": {
"type": "named",
"name": "article"
}
}
},
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "article"
}
}
},
{
"name": "delete_articles",
"description": "Delete articles which match a predicate",
"arguments": {
"where": {
"description": "The predicate",
"type": {
"type": "predicate",
"object_type_name": "article"
}
}
},
"result_type": {
"type": "array",
"element_type": {
"type": "named",
"name": "article"
}
}
}
],
"capabilities": {
"query": {
"aggregates": {
"count_scalar_type": "Int"
}
}
}
}
Response Fields
Name | Description |
---|---|
scalar_types | Scalar types |
object_types | Object types |
collections | Collections |
functions | Functions |
procedures | Procedures |
capabilities | Capability-specific information |
Scalar Types
The schema should describe any irreducible scalar types. Scalar types can be used as the types of columns, or in general as the types of object fields.
Scalar types define several types of operations, which extend the capabilities of the query and mutation APIs: comparison operators and aggregate functions.
Type Representations
A scalar type definition must include a type representation. The representation indicates to potential callers what values can be expected in responses, and what values are considered acceptable in requests.
Supported Representations
type | Description | JSON representation |
---|---|---|
boolean | Boolean | Boolean |
string | String | String |
int8 | An 8-bit signed integer with a minimum value of -2^7 and a maximum value of 2^7 - 1 | Number |
int16 | A 16-bit signed integer with a minimum value of -2^15 and a maximum value of 2^15 - 1 | Number |
int32 | A 32-bit signed integer with a minimum value of -2^31 and a maximum value of 2^31 - 1 | Number |
int64 | A 64-bit signed integer with a minimum value of -2^63 and a maximum value of 2^63 - 1 | String |
float32 | An IEEE-754 single-precision floating-point number | Number |
float64 | An IEEE-754 double-precision floating-point number | Number |
biginteger | Arbitrary-precision integer string | String |
bigdecimal | Arbitrary-precision decimal string | String |
uuid | UUID string (8-4-4-4-12 format) | String |
date | ISO 8601 date | String |
timestamp | ISO 8601 timestamp | String |
timestamptz | ISO 8601 timestamp-with-timezone | String |
geography | GeoJSON, per RFC 7946 | JSON |
geometry | GeoJSON Geometry object, per RFC 7946 | JSON |
bytes | Base64-encoded bytes | String |
json | Arbitrary JSON | JSON |
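To make the less obvious encodings in the table concrete, here is a small sketch (TypeScript, illustration only; the helper names are invented, not part of the specification) showing why int64 values travel as strings and bytes as base64:

```typescript
// Hypothetical encoding helpers illustrating the JSON representations above.
// int64 and biginteger values are encoded as strings, because a JSON number
// cannot represent every 64-bit integer exactly; bytes are base64-encoded.

function encodeInt64(value: bigint): string {
  return value.toString();
}

function encodeBytes(value: Uint8Array): string {
  return Buffer.from(value).toString("base64");
}

// The maximum int64 value round-trips exactly as a string:
const maxInt64 = 2n ** 63n - 1n;
console.log(encodeInt64(maxInt64)); // "9223372036854775807"
console.log(encodeBytes(new Uint8Array([104, 105]))); // "aGk="
```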
Enum Representations
A scalar type with a representation of type enum
accepts one of a set of string values, specified by the one_of
argument.
For example, this representation indicates that the only three valid values are the strings "foo"
, "bar"
and "baz"
:
{
"type": "enum",
"one_of": ["foo", "bar", "baz"]
}
Comparison Operators
Comparison operators extend the query AST with the ability to express new binary comparison expressions in the predicate.
For example, a data connector might augment a String
scalar type with a LIKE
operator which tests for a fuzzy match based on a regular expression.
A comparison operator is either a standard operator, or a custom operator.
To define a comparison operator, add a ComparisonOperatorDefinition
to the comparison_operators
field of the schema response.
For example:
{
"scalar_types": {
"String": {
"aggregate_functions": {},
"comparison_operators": {
"like": {
"type": "custom",
"argument_type": {
"type": "named",
"name": "String"
}
}
}
}
},
...
}
Standard Comparison Operators
Equal
An operator defined using type equal
tests if a column value is equal to a scalar value, another column value, or a variable.
Note: syntactic equality
Specifically, a predicate expression which uses an operator of type equal
should implement syntactic equality:
- An expression which tests for equality of a column with a scalar value or variable should return that scalar value exactly (equal as JSON values) for all rows in each corresponding row set, whenever the same column is selected.
- An expression which tests for equality of a column with another column should return the same values in both columns (equal as JSON values) for all rows in each corresponding row set, whenever both of those columns are selected.
This type of equality is quite strict, and it might not be possible to implement such an operator for all scalar types. For example, a case-insensitive string type's natural case-insensitive equality operator would not meet the criteria above. In such cases, the scalar type should not provide an equal operator.
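Syntactic equality in the sense above amounts to structural equality of JSON values. A minimal sketch (TypeScript, illustration only, not part of the specification):

```typescript
// A sketch of syntactic equality: two values are equal exactly when their
// JSON representations are structurally identical.
type Json = null | boolean | number | string | Json[] | { [key: string]: Json };

function syntacticEqual(a: Json, b: Json): boolean {
  if (Array.isArray(a) && Array.isArray(b)) {
    return a.length === b.length && a.every((x, i) => syntacticEqual(x, b[i]));
  }
  if (a !== null && b !== null && typeof a === "object" && typeof b === "object"
      && !Array.isArray(a) && !Array.isArray(b)) {
    const ka = Object.keys(a).sort();
    const kb = Object.keys(b).sort();
    return syntacticEqual(ka, kb) && ka.every((k) => syntacticEqual(a[k], b[k]));
  }
  return a === b;
}

// A case-insensitive string comparison would NOT satisfy syntactic equality:
console.log(syntacticEqual("Apple", "apple")); // false
```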
In
An operator defined using type in
tests if a column value is a member of an array of values. The array is specified either as a scalar, a variable, or as the value of another column.
It should accept an array type as its argument, whose element type is the scalar type for which it is defined. It should be equivalent to a disjunction of individual equality tests on the elements of the provided array, where the equality test is an equivalence relation in the same sense as above.
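Under these rules, an in operator is mechanically derivable from the equality test. A sketch (TypeScript, illustration only):

```typescript
// `in` is equivalent to a disjunction of individual equality tests over the
// elements of the provided array.
function inOperator<T>(
  columnValue: T,
  values: T[],
  equal: (a: T, b: T) => boolean,
): boolean {
  return values.some((v) => equal(columnValue, v));
}

console.log(inOperator(2, [1, 2, 3], (a, b) => a === b)); // true
console.log(inOperator(5, [], (a, b) => a === b)); // false (empty disjunction)
```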
less_than
, greater_than
, less_than_or_equal
, greater_than_or_equal
An operator defined using type less_than
tests if a column value is less than a specified value. Similarly for the other comparisons here.
If a connector defines more than one of these standard operators, then they should be compatible:
- When using less_than, a row should be included in the generated row set if and only if it would not be returned in the corresponding greater_than_or_equal comparison, and vice versa. More succinctly, it is expected that x < y holds exactly when x >= y does not hold.
- It is expected that x < y holds exactly when y > x holds.
- It is expected that x <= y holds exactly when y >= x holds.
The less_than_or_equal
and greater_than_or_equal
operators are expected to be reflexive. That is, they should return a superset of those rows returned by the corresponding equal
(syntactic equality) operator.
Each of these four operators is expected to be transitive. That is, for example x < y
and y < z
together imply x < z
, and similarly for the other operators.
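For a numeric scalar type these compatibility laws hold automatically; the sketch below (TypeScript, illustration only) spells them out as runnable properties over a small sample:

```typescript
// Compatibility laws relating the four standard ordering operators,
// checked over a small sample of values.
const lt = (x: number, y: number) => x < y;
const lte = (x: number, y: number) => x <= y;
const gt = (x: number, y: number) => x > y;
const gte = (x: number, y: number) => x >= y;

const sample = [-1, 0, 1, 2];
for (const x of sample) {
  for (const y of sample) {
    // x < y holds exactly when x >= y does not hold
    if (lt(x, y) === gte(x, y)) throw new Error("duality violated");
    // x < y holds exactly when y > x holds
    if (lt(x, y) !== gt(y, x)) throw new Error("converse violated");
    // x <= y holds exactly when y >= x holds
    if (lte(x, y) !== gte(y, x)) throw new Error("converse violated");
  }
}
console.log("all ordering laws hold");
```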
contains
, icontains
, starts_with
, istarts_with
, ends_with
, iends_with
These operators must only apply to scalar types whose type representation is string
.
An operator defined using type contains
tests if a string-valued column on the left contains a string value on the right. icontains
is the case-insensitive variant.
An operator defined using type starts_with
tests if a string-valued column on the left starts with a string value on the right. istarts_with
is the case-insensitive variant.
An operator defined using type ends_with
tests if a string-valued column on the left ends with a string value on the right. iends_with
is the case-insensitive variant.
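These six operator types correspond to simple string predicates. A sketch of the expected semantics (TypeScript, illustration only; real connectors would typically push these down to the data source, and simple lowercase folding is a simplification of locale-aware case-insensitivity):

```typescript
// Expected semantics of the standard string comparison operators.
// The i-prefixed variants compare case-insensitively.
const fold = (s: string) => s.toLowerCase();

const stringOps = {
  contains: (col: string, val: string) => col.includes(val),
  icontains: (col: string, val: string) => fold(col).includes(fold(val)),
  starts_with: (col: string, val: string) => col.startsWith(val),
  istarts_with: (col: string, val: string) => fold(col).startsWith(fold(val)),
  ends_with: (col: string, val: string) => col.endsWith(val),
  iends_with: (col: string, val: string) => fold(col).endsWith(fold(val)),
};

console.log(stringOps.contains("Functional Programming", "Prog")); // true
console.log(stringOps.icontains("Functional Programming", "prog")); // true
console.log(stringOps.starts_with("Functional", "func")); // false (case-sensitive)
```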
Custom Comparison Operators
Data connectors can also define custom comparison operators using type custom
. A custom operator is defined by its argument type, and its semantics is undefined.
Aggregate Functions
Aggregate functions extend the query AST with the ability to express new aggregates within the aggregates
portion of a query. They also allow sorting the query results via the order_by
query field.
Note: data connectors are required to implement the count and count-distinct aggregations for columns of all scalar types, and those aggregations are distinguished in the query AST. There is no need to define these aggregates as aggregate functions.
For example, a data connector might augment a Float
scalar type with a SUM
function which aggregates a sum of a collection of floating-point numbers.
Just like for comparison operators, an aggregate function is either a standard function, or a custom function.
To define an aggregate function, add an AggregateFunctionDefinition
to the aggregate_functions
field of the schema response.
For example:
{
"scalar_types": {
"Float": {
"aggregate_functions": {
"sum": {
"type": "sum",
"result_type": "Float"
},
"stddev": {
"type": "custom",
"result_type": {
"type": "named",
"name": "Float"
}
}
},
"comparison_operators": {}
}
},
...
}
Standard Aggregate Functions
sum
An aggregate function defined using type sum
should return the numerical sum of its provided values.
The result type should be provided explicitly, in the result_type
field, and should be a scalar type with a type representation of either int64
or float64
, depending on whether the scalar type defining this function has an integer representation or floating point representation.
A sum
function should ignore the order of its input values, and should be invariant of partitioning, that is: sum(x, sum(y, z))
= sum(x, y, z)
for any partitioning x, y, z
of the input values. It should return 0
for an empty set of input values.
average
An aggregate function defined using type average
should return the average of its provided values.
The result type should be provided explicitly, in the result_type
field, and should be a scalar type with a type representation of float64
.
An average
function should ignore the order of its input values. It should return null
for an empty set of input values.
min
, max
An aggregate function defined using type min
or max
should return the minimal/maximal value from its provided values, according to some ordering.
Its implicit result type, i.e. the type of the aggregated values, is the same as the scalar type on which the function is defined, but with nulls allowed if not allowed already.
A min
/max
function should return null for an empty set of input values.
If the set of input values is a singleton, then the function should return the single value.
A min
/max
function should ignore the order of its input values, and should be invariant of partitioning, that is: min(x, min(y, z))
= min(x, y, z)
for any partitioning x, y, z
of the input values.
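The empty-set, ordering, and partition-invariance requirements above can be expressed as runnable properties. A sketch (TypeScript, illustration only):

```typescript
// Properties required of the standard aggregate functions.
const sum = (xs: number[]) => xs.reduce((a, b) => a + b, 0);
const average = (xs: number[]) => (xs.length === 0 ? null : sum(xs) / xs.length);
const min = (xs: number[]) => (xs.length === 0 ? null : Math.min(...xs));

// sum returns 0 for an empty set, and is invariant of partitioning:
console.log(sum([])); // 0
console.log(sum([1, 2]) + sum([3]) === sum([1, 2, 3])); // true

// average and min return null for an empty set:
console.log(average([])); // null
console.log(min([])); // null

// min of a singleton set is the single value:
console.log(min([42])); // 42
```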
Custom Aggregate Functions
A custom aggregate function has type custom
and is defined by its result type - that is, the type of the aggregated data. The result type can be any type, not just a scalar type.
Extraction Functions
Extraction functions extend the query AST with the ability to extract components from a value with a scalar type. Extraction functions can be used to group by components of a scalar type.
For example, a Date
scalar type might expose extraction functions which extract the individual year, month and day components as integers.
Just like for comparison operators and aggregate functions, an extraction function is either a standard function, or a custom function.
To define an extraction function, add an ExtractionFunctionDefinition
to the extraction_functions
field of the schema response.
For example:
{
"scalar_types": {
"Date": {
"extraction_functions": {
"year": {
"type": "year",
"result_type": "Int"
}
},
"aggregate_functions": {},
"comparison_operators": {}
}
},
...
}
Standard Extraction Functions
The following standard extraction functions are supported:
day
day_of_week
day_of_year
hour
microsecond
minute
month
nanosecond
quarter
second
week
year
For each of these, the return type should be a scalar type whose representation is one of int8
, int16
, int32
, or int64
.
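For a Date scalar type represented as an ISO 8601 string, the year, month and day extraction functions might behave as follows (TypeScript sketch; the helper names are invented for illustration):

```typescript
// Hypothetical extraction functions for an ISO 8601 date represented as a
// "YYYY-MM-DD" string; each extracts one component as an integer.
function extractYear(date: string): number {
  return Number(date.slice(0, 4));
}
function extractMonth(date: string): number {
  return Number(date.slice(5, 7));
}
function extractDay(date: string): number {
  return Number(date.slice(8, 10));
}

console.log(extractYear("2024-03-15")); // 2024
console.log(extractMonth("2024-03-15")); // 3
console.log(extractDay("2024-03-15")); // 15
```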
Custom Extraction Functions
A custom extraction function has type custom
and is defined by its result type - that is, the type of the extracted data. The result type can be any type, not just a scalar type.
See also
- Type
ExtractionFunctionDefinition
Object Types
The schema should define any named object types which will be used as the types of collection row sets, or procedure inputs or outputs.
An object type consists of a name and a collection of named fields. Each field is defined by its type, and any arguments.
Note: field arguments are only used in a query context. Objects with field arguments cannot be used as input types, and fields with arguments cannot be used to define column mappings, or in nested field references.
Object types can also define "foreign keys", which are an indicator that a relationship exists between columns on this object type and a row in a collection
.
To define an object type, add an ObjectType
to the object_types
field of the schema response.
Example
{
"object_types": {
"coords": {
"description": "Latitude and longitude",
"fields": {
"latitude": {
"description": "Latitude in degrees north of the equator",
"arguments": {},
"type": {
"type": "named",
"name": "Float"
}
},
"longitude": {
"description": "Longitude in degrees east of the Greenwich meridian",
"arguments": {},
"type": {
"type": "named",
"name": "Float"
}
}
},
"foreign_keys": {}
},
...
},
...
}
Extended Example
Object types can refer to other object types in the types of their fields, and make use of other type structure such as array types and nullable types.
In the context of array types, it can be useful to use arguments on fields to allow the caller to customize the response.
For example, here we define a type widget
, and a second type which contains a widgets
field, parameterized by a limit
argument:
{
"object_types": {
"widget": {
"description": "Description of a widget",
"fields": {
"id": {
"description": "Primary key",
"arguments": {},
"type": {
"type": "named",
"name": "ID"
}
},
"name": {
"description": "Name of this widget",
"arguments": {},
"type": {
"type": "named",
"name": "String"
}
}
},
"foreign_keys": {}
},
"inventory": {
"description": "The items in stock",
"fields": {
"widgets": {
"description": "Those widgets currently in stock",
"arguments": {
"limit": {
"description": "The maximum number of widgets to fetch",
"type": {
"type": "named",
"name": "Int"
}
}
},
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "widget"
}
}
}
},
"foreign_keys": {}
}
},
...
}
Foreign Keys Example
Foreign keys can be defined on an object type to hint that a relationship can be established between this object type and a collection. The column mapping maps fields from the object type to field paths on the foreign collection. The field path is an array of field names; an array of one field name simply indicates a field on the object type of the collection. More than one element in the array indicates a path through nested object types, following the field names in order.
{
"object_types": {
"article": {
"description": "An article",
"fields": {
"author_id": {
"description": "The article's author ID",
"type": {
"type": "named",
"name": "Int"
}
},
"id": {
"description": "The article's primary key",
"type": {
"type": "named",
"name": "Int"
}
},
"title": {
"description": "The article's title",
"type": {
"type": "named",
"name": "String"
}
}
},
"foreign_keys": {
"Article_AuthorID": {
"column_mapping": {
"author_id": ["id"]
},
"foreign_collection": "authors"
}
}
}
}
}
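To make the field-path semantics concrete, the sketch below (TypeScript, hypothetical helper, illustration only) resolves a field path against a row of the foreign collection: a one-element path reads a top-level field, while a longer path walks through nested objects in order:

```typescript
// Resolve a column-mapping field path against a row object.
// ["id"] reads row.id; ["location", "country_id"] reads row.location.country_id.
function resolveFieldPath(row: Record<string, unknown>, path: string[]): unknown {
  let value: unknown = row;
  for (const field of path) {
    value = (value as Record<string, unknown>)[field];
  }
  return value;
}

// A hypothetical row of the foreign collection:
const authorRow = { id: 7, location: { country_id: 3 } };
console.log(resolveFieldPath(authorRow, ["id"])); // 7
console.log(resolveFieldPath(authorRow, ["location", "country_id"])); // 3
```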
See also
- Type
ObjectType
- Type
ObjectField
Collections
The schema should define the metadata for any collections which can be queried using the query endpoint, or mutated using the mutation endpoint.
Each collection is defined by its name, any collection arguments, the object type of its rows, and some additional metadata related to permissions and constraints.
To describe a collection, add a CollectionInfo
structure to the collections
field of the schema response.
Requirements
- The
type
field should name an object type which is defined in the schema response.
Example
{
"collections": [
{
"name": "articles",
"description": "A collection of articles",
"arguments": {},
"type": "article",
"deletable": false,
"uniqueness_constraints": {
"ArticleByID": {
"unique_columns": [
"id"
]
}
}
},
{
"name": "authors",
"description": "A collection of authors",
"arguments": {},
"type": "author",
"deletable": false,
"uniqueness_constraints": {
"AuthorByID": {
"unique_columns": [
"id"
]
}
}
}
],
...
}
See also
- Type
CollectionInfo
Functions
Functions are a special case of collections, which are identified separately in the schema for convenience.
A function is a collection which returns a single row and a single column, named __value
. Like collections, functions can have arguments. Unlike collections, functions cannot be used by the mutations endpoint, do not describe constraints, and only provide a type for the __value
column, not the name of an object type.
Note: even though a function acts like a collection returning a row type with a single column, there is no need to define and name such a type in the object_types
section of the schema response.
To describe a function, add a FunctionInfo
structure to the functions
field of the schema response.
Example
{
"functions": [
{
"name": "latest_article_id",
"description": "Get the ID of the most recent article",
"arguments": {},
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
],
...
}
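Although the schema does not name the implicit row type, a query against latest_article_id would come back as an ordinary row set with the single __value column. A hypothetical response, shown as a TypeScript object literal (the article ID 42 is invented for illustration):

```typescript
// The shape of a query response for a function: one RowSet containing
// one row with a single column, named "__value".
const functionResponse = [
  {
    rows: [{ __value: 42 }],
  },
];

console.log(functionResponse[0].rows[0].__value); // 42
console.log(Object.keys(functionResponse[0].rows[0])); // [ "__value" ]
```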
See also
- Type
FunctionInfo
Procedures
The schema should define metadata for each procedure which the data connector implements.
Each procedure is defined by its name, any argument types, and a result type.
To describe a procedure, add a ProcedureInfo
structure to the procedures
field of the schema response.
Example
{
"procedures": [
{
"name": "upsert_article",
"description": "Insert or update an article",
"arguments": {
"article": {
"description": "The article to insert or update",
"type": {
"type": "named",
"name": "article"
}
}
},
"result_type": {
"type": "named",
"name": "article"
}
}
],
...
}
See also
- Type
ProcedureInfo
Capabilities
The schema response should also provide any capability-specific data, based on the set of enabled capabilities.
Requirements
- If the query.aggregates capability is enabled, then the schema response should include the capabilities.query.aggregates object, which has type AggregateCapabilitiesSchemaInfo.
  - This object should indicate the scalar type used as the count aggregate result type, in order to implement aggregates.
Example
{
...
"capabilities": {
"query": {
"aggregates": {
"count_scalar_type": "Int"
}
}
}
}
Queries
The query endpoint accepts a query request, containing expressions to be evaluated in the context of the data source, and returns a response consisting of relevant rows of data.
The structure and requirements for specific fields listed below will be covered in subsequent chapters.
Request
POST /query
Request
See QueryRequest
Request Fields
Name | Description |
---|---|
collection | The name of a collection to query |
query | The query syntax tree |
arguments | Values to be provided to any top-level collection arguments |
collection_relationships | Any relationships between collections involved in the query request |
variables | One set of named variables for each row set to fetch. Each variable set should be substituted in turn, and a fresh set of rows returned. |
Response
See QueryResponse
Requirements
- If the request specifies variables, then the response must contain one RowSet for each collection of variables provided. If not, the data connector should respond as if variables were set to a single empty collection of variables: [{}].
- If the request specifies fields, then the response must contain rows according to the schema advertised for the requested collection.
- If the request specifies aggregates, then the response must contain aggregates, with one response key per requested aggregate, using the same keys. See aggregates.
- If the request specifies arguments, then the implementation must validate the provided arguments against the types specified by the collection's schema. See arguments.
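The first requirement can be sketched as follows (TypeScript, with an invented runOne helper standing in for single-variable-set execution): when variables is absent, the connector behaves as if a single empty variable set had been supplied, so the response always contains one RowSet per variable set:

```typescript
// One RowSet is produced per variable set; an absent `variables` field
// behaves like a single empty set of variables, [{}].
type RowSet = { rows?: Record<string, unknown>[] };
type Variables = Record<string, unknown>;

function executeQuery(
  runOne: (vars: Variables) => RowSet,
  variables?: Variables[],
): RowSet[] {
  return (variables ?? [{}]).map(runOne);
}

// A stand-in for real query execution against one variable set:
const runOne = (vars: Variables) => ({ rows: [{ echoed: vars["x"] ?? null }] });

console.log(executeQuery(runOne).length); // 1 (one implicit empty variable set)
console.log(executeQuery(runOne, [{ x: 1 }, { x: 2 }]).length); // 2
```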
Field Selection
A Query
can specify which fields to fetch. The available fields are either
- the columns on the selected collection (i.e. those advertised in the corresponding
CollectionInfo
structure in the schema response), or - fields from related collections
The requested fields are specified as a collection of Field
structures in the fields
property on the Query
.
Field Arguments
Arguments can be supplied to fields via the arguments
key. These match the format described in the arguments documentation.
The schema response will specify which fields take arguments via its respective arguments
key.
If a field has any arguments defined, then the arguments
field must be provided wherever that field is referenced. All arguments are required, including nullable arguments.
Nested Fields
Queries can specify nested field selections for columns which have structured types (that is, not simply a scalar type or a nullable scalar type).
In order to specify nested field selections, use the fields
property of the Field
structure, which is a NestedField
structure.
If fields
is omitted, the entire structure of the column's data should be returned.
If fields
is provided, its value should be compatible with the type of the column:
Nested objects
For an object-typed column (whether nullable or not), the fields
property should contain a NestedField
with type object
.
The fields
property of the NestedField
specifies a Field
structure for each requested nested field from the objects.
Nested arrays
For an array-typed column (whether nullable or not), the fields
property may contain a NestedField
with type array
.
The fields
property of the NestedField
should contain another NestedField
structure, compatible with the type of the elements of the array. The selection function denoted by this nested NestedField
structure should be applied to each element of each array.
Nested collections
For a column whose type is an array of objects (whether nullable or not), the fields
property may contain a NestedField
with type collection
.
A connector should handle such fields by treating the nested array of objects as a collection. Such a field will include a nested Query
, and the connector should execute that query in the context of this nested collection.
A response for a field with a fields
property of type collection
should be a RowSet
which is computed from the nested collection by executing the specified query.
Note: support for nested collection queries is indicated by the query.nested_fields.nested_collections
capability.
Nested fields and relationships
Within the scope of a nested object, that object should be used as the "current row" wherever that concept is appropriate:
- In a
Field::Column
field, the column name points to a field of the nested object, - In a
Field::Relationship
field:
  - A column mapping refers to fields from the nested object,
  - A relationship argument which selects a column refers to fields of the nested object.
Note that only connectors that enable the relationships.nested
capability will receive queries where relationships start from a nested object. Additionally, only connectors that enable the relationships.nested.array
capability will receive queries where relationships start from nested objects inside nested arrays.
Examples
Simple column selection
Here is an example of a query which selects some columns from the articles
collection of the reference data connector:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
},
"collection_relationships": {}
}
Example with Nested Object Types
Here is an example of a query which selects some columns from a nested object inside the rows of the institutions
collection of the reference data connector:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"location": {
"type": "column",
"column": "location",
"fields": {
"type": "object",
"fields": {
"city": {
"type": "column",
"column": "city"
},
"campuses": {
"type": "column",
"column": "campuses",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
}
},
"location_all": {
"type": "column",
"column": "location"
}
}
},
"collection_relationships": {}
}
Notice that the location
column is fetched twice: once to illustrate the use of the fields
property, to fetch a subset of data, and again in the location_all
field, which omits the fields
property and fetches the entire structure.
Example with Nested Array Types
Here is an example of a query which selects some columns from a nested array inside the rows of the institutions
collection of the reference data connector:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"staff": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
},
"fields": {
"type": "array",
"fields": {
"type": "object",
"fields": {
"last_name": {
"type": "column",
"column": "last_name"
},
"fields_of_study": {
"type": "column",
"column": "specialities",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
}
}
},
"departments": {
"type": "column",
"column": "departments",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
},
"collection_relationships": {}
}
Notice that the staff
column is fetched using a fields
property of type array
. For each staff member in each institution row, we apply the selection function denoted by its fields
property (of type object
). Specifically, the last_name
and specialities
properties are selected for each staff member.
Example with a Nested Collection
Here is an example of a query which computes aggregates over a nested collection inside the staff
field of each row of the institutions
collection:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"staff_aggregates": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
},
"field_path": [],
"fields": {
"type": "collection",
"query": {
"aggregates": {
"count": {
"type": "star_count"
}
}
}
}
},
"staff": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
},
"fields": {
"type": "array",
"fields": {
"type": "object",
"fields": {
"last_name": {
"type": "column",
"column": "last_name"
},
"first_name": {
"type": "column",
"column": "first_name"
}
}
}
}
}
}
},
"collection_relationships": {}
}
Note the staff_aggregates
field in particular, which has fields
with type collection
.
Example with Nested Types and Relationships
This query selects institution
data, and fetches author
data if the first and last name fields match for any nested staff
objects:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"name": {
"type": "column",
"column": "name"
},
"staff": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
},
"fields": {
"type": "array",
"fields": {
"type": "object",
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"author": {
"type": "relationship",
"arguments": {},
"query": {
"aggregates": null,
"fields": {
"id": {
"type": "column",
"column": "id"
},
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
}
}
},
"relationship": "author_by_first_and_last"
}
}
}
}
}
}
},
"collection_relationships": {
"author_by_first_and_last": {
"arguments": {},
"column_mapping": {
"first_name": ["first_name"],
"last_name": ["last_name"]
},
"relationship_type": "object",
"target_collection": "authors"
}
}
}
Note that the first_name
and last_name
properties in the column mapping are evaluated in the context of the nested staff
object, and not in the context of the original institution
row.
Example with Field Arguments
Here is an example of a query which selects some columns from a nested array inside the rows of the institutions
collection of the reference data connector and uses the limit
field argument to limit the number of items returned:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"staff": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": 1
}
},
"fields": {
"type": "array",
"fields": {
"type": "object",
"fields": {
"last_name": {
"type": "column",
"column": "last_name"
},
"fields_of_study": {
"type": "column",
"column": "specialities",
"arguments": {
"limit": {
"type": "literal",
"value": 2
}
}
}
}
}
}
},
"departments": {
"type": "column",
"column": "departments",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
},
"collection_relationships": {}
}
Requirements
- If the QueryRequest contains a Query which specifies fields, then each RowSet in the response should contain the rows property, and each row should contain all of the requested fields.
See also
- Type
Query
- Type
RowFieldValue
- Type
RowSet
Filtering
A Query
can specify a predicate expression which should be used to filter rows considered during field selection for returning rows. The predicate expression also filters the rows that are aggregated across and grouped over (ie. it filters the input rows to the aggregation/grouping operation).
A predicate expression can be one of
- An application of a comparison operator to a column and a value, or
- An
EXISTS
expression, or - A conjunction of other expressions, or
- A disjunction of other expressions, or
- A negation of another expression
The predicate expression is specified in the predicate
field of the Query
object.
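As an illustration of these expression forms, the sketch below (TypeScript, deliberately simplified: EXISTS is omitted, column targets are plain names, comparison values are scalars, and only an eq operator is supported) evaluates a predicate against a single row:

```typescript
// A simplified predicate evaluator covering comparison, conjunction,
// disjunction and negation expression forms.
type Row = Record<string, unknown>;
type Expression =
  | { type: "unary_comparison_operator"; column: string; operator: "is_null" }
  | { type: "binary_comparison_operator"; column: string; operator: "eq"; value: unknown }
  | { type: "and"; expressions: Expression[] }
  | { type: "or"; expressions: Expression[] }
  | { type: "not"; expression: Expression };

function evaluate(row: Row, expr: Expression): boolean {
  switch (expr.type) {
    case "unary_comparison_operator":
      return row[expr.column] === null;
    case "binary_comparison_operator":
      return row[expr.column] === expr.value;
    case "and":
      return expr.expressions.every((e) => evaluate(row, e));
    case "or":
      return expr.expressions.some((e) => evaluate(row, e));
    case "not":
      return !evaluate(row, expr.expression);
  }
}

const row = { id: 1, title: null };
console.log(evaluate(row, {
  type: "and",
  expressions: [
    { type: "binary_comparison_operator", column: "id", operator: "eq", value: 1 },
    { type: "unary_comparison_operator", column: "title", operator: "is_null" },
  ],
})); // true
```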
Comparison Operators
Unary Operators
Unary comparison operators are denoted by expressions with a type
field of unary_comparison_operator
.
The only supported unary operator currently is is_null
, which returns true
when a column value is null
:
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"type": "column",
"name": "title"
}
}
Binary Operators
Binary comparison operators are denoted by expressions with a type
field of binary_comparison_operator
.
The set of available operators depends on the type of the column involved in the expression. The operator
property should specify the name of one of the binary operators from the field's scalar type definition.
The type ComparisonValue
describes the valid inhabitants of the value
field. The value
field should be an expression which evaluates to a value whose type is compatible with the definition of the comparison operator.
Equality Operators
This example makes use of an eq
operator, which is defined using the equal
semantics, to test a single column for equality with a scalar value:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "id"
},
"operator": "eq",
"value": {
"type": "scalar",
"value": 1
}
}
},
"collection_relationships": {}
}
Set Membership Operators
This example uses an in
operator, which is defined using the in
semantics, to test a single column for membership in a set of values:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "author_id"
},
"operator": "in",
"value": {
"type": "scalar",
"value": [1, 2]
}
}
},
"collection_relationships": {}
}
Custom Operators
This example uses a custom like
operator:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title"
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
},
"collection_relationships": {}
}
Nested Array Comparison Operators
If the connector declares support for the query.nested_fields.filter_by.nested_arrays
capability, it can receive expressions of type array_comparison
. These expressions allow scalar array-specific comparisons against columns that contain an array of scalar values.
There are two supported comparison operators that connectors can declare support for:
- contains: Whether or not the array contains the specified scalar value. This must be supported for all types that can be contained in an array and that implement an 'eq' comparison operator. Capability: query.nested_fields.filter_by.nested_arrays.contains
- is_empty: Whether or not the array is empty. This must be supported no matter what type is contained in the array. Capability: query.nested_fields.filter_by.nested_arrays.is_empty
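A hedged sketch of how a connector might evaluate these two comparisons over an in-memory array (hypothetical Python; plain equality stands in for the element type's `eq` semantics):

```python
def eval_array_comparison(comparison, array_value):
    """Evaluate an array_comparison against a column holding an array
    of scalar values (sketch; assumes a scalar ComparisonValue)."""
    t = comparison["type"]
    if t == "contains":
        # membership test stands in for the element type's `eq` operator
        return comparison["value"]["value"] in array_value
    if t == "is_empty":
        return len(array_value) == 0
    raise ValueError(f"unsupported array comparison: {t}")
```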
This example finds institutions
where the nested location.campuses
array contains the Lindholmen
value:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"name": {
"type": "column",
"column": "name"
},
"location": {
"type": "column",
"column": "location",
"fields": {
"type": "object",
"fields": {
"campuses": {
"type": "column",
"column": "campuses",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
}
}
},
"predicate": {
"type": "array_comparison",
"column": {
"type": "column",
"name": "location",
"field_path": ["campuses"]
},
"comparison": {
"type": "contains",
"value": {
"type": "scalar",
"value": "Lindholmen"
}
}
}
},
"collection_relationships": {}
}
This example finds countries
which have an empty cities
array:
{
"collection": "countries",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"name": {
"type": "column",
"column": "name"
},
"cities": {
"type": "column",
"column": "cities",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
},
"predicate": {
"type": "array_comparison",
"column": {
"type": "column",
"name": "cities",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
},
"comparison": {
"type": "is_empty"
}
}
},
"collection_relationships": {}
}
Columns in Operators
Comparison operators compare values. The value on the left hand side of any operator is described by a ComparisonTarget
, and the various cases will be explained next.
Referencing a column from the same collection
If the ComparisonTarget
has type column
, then the name
property refers to a column in the current collection. The arguments
property allows clients to submit argument values for columns that require arguments.
Referencing nested fields within columns
If the field_path property is empty or not present, then the target is the value of the named column.
If field_path is non-empty, then it refers to a path to a nested field within the named column.
Note: a ComparisonTarget may only have a non-empty field_path if the connector supports the query.nested_fields.filter_by capability.
Computing an aggregate
If the ComparisonTarget
has type aggregate
, then the target is an aggregate computed over a related collection. The relationship is described by the (non-empty) path
field, and the aggregate to compute is specified in the aggregate
field.
For example, this query finds authors who have written exactly 2 articles:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "aggregate",
"aggregate": {
"type": "star_count"
},
"path": [
{
"arguments": {},
"relationship": "author_articles"
}
]
},
"operator": "eq",
"value": {
"type": "scalar",
"value": 2
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": ["author_id"]
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Note: type aggregate
will only be sent if the query.aggregates.filter_by
capability is turned on. If that capability is turned on, then the schema response should also contain the capabilities.query.aggregates
object. That object should indicate the scalar type used for the result type of count aggregates (star_count
and column_count
), so that clients can know what comparison operators are valid.
Values in Binary Operators
Binary (including array-valued) operators compare columns to values, but there are several types of valid values:
- Scalar values, as seen in the examples above, compare the column to a specific value,
- Variable values compare the column to the current value of a variable,
- Column values compare the column to another column. The column may be on the same row, or it may be on a related row. Comparing against columns on related rows requires the connector to indicate support via the
relationships.relation_comparisons
capability.
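A minimal sketch of resolving a ComparisonValue to a concrete value, assuming same-row column references only (no path traversal to related rows, no named scopes); this is hypothetical Python, not part of the specification:

```python
def resolve_comparison_value(value, row, variables):
    """Resolve the right-hand side of a binary comparison.

    Sketch: `column` values are read from the same row; related-row
    comparisons (via `path`) and named scopes are omitted."""
    t = value["type"]
    if t == "scalar":
        return value["value"]
    if t == "variable":
        return variables[value["name"]]
    if t == "column":
        return row[value["name"]]
    raise ValueError(f"unsupported comparison value type: {t}")
```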
Referencing a column from a collection in scope
When an expression appears inside one or more exists expressions, there are multiple collections in scope.
If the query.exists.named_scopes
capability is enabled then these scopes can be named explicitly when referencing a column in an outer scope. The scope
field of the ComparisonValue
type can be used to specify the scope of a column reference.
Scopes are named by integers in the following manner:
- The scope named 0 refers to the current collection,
- The scope named 1 refers to the collection under consideration outside the immediately-enclosing exists expression,
- Scopes 2, 3, and so on, refer to the collections considered during the evaluation of expressions outside subsequently enclosing exists expressions.
Therefore, the largest valid scope is the maximum nesting depth of exists expressions, up to the nearest enclosing Query
object.
Put another way, we can consider a stack of scopes which grows as we descend into each nested exists expression. Each stack frame contains the collection currently under consideration. The named scopes are then the top-down indices of elements of this stack.
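The stack picture above can be sketched directly: if a connector keeps a list of in-scope rows that grows as evaluation descends into each exists expression, then scope N simply indexes N frames up from the top (hypothetical Python helper, not part of the specification):

```python
def resolve_scoped_column(name, scope, scope_stack):
    """Resolve a column reference carrying an explicit scope.

    `scope_stack` grows as evaluation descends into nested exists
    expressions; the last element is the current collection's row
    (scope 0), and scope N reads the row N frames further out."""
    return scope_stack[-1 - scope][name]
```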
For example, we can express an equality between an author_id
column and the id
column of the enclosing author
object (in scope 1
):
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "unrelated",
"arguments": {},
"collection": "articles"
},
"predicate": {
"type": "and",
"expressions": [
{
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "author_id"
},
"operator": "eq",
"value": {
"type": "column",
"path": [],
"name": "id",
"scope": 1
}
},
{
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title"
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
]
}
}
},
"collection_relationships": {}
}
EXISTS expressions
An EXISTS
expression tests whether a row exists in some possibly-related collection, and is denoted by an expression with a type
field of exists
.
EXISTS
expressions can query related or unrelated collections.
Related Collections
Related collections are related to the original collection by a relationship in the collection_relationships
field of the top-level QueryRequest
.
For example, this query fetches authors who have written articles whose titles contain the string "Functional"
:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "related",
"arguments": {},
"relationship": "author_articles"
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title"
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": ["author_id"]
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Nested relationships
If the related collection is related from a field inside a nested object, then field_path can be used to descend to the nested object before the relationship is navigated.
Only connectors that enable the relationships.nested.filtering
capability will receive these sorts of queries.
In this example, the relationship joins from the nested location.country_id
across to the id
column on the countries
collection.
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"name": {
"type": "column",
"column": "name"
},
"location": {
"type": "column",
"column": "location",
"fields": {
"type": "object",
"fields": {
"country_id": {
"type": "column",
"column": "country_id"
},
"country": {
"type": "relationship",
"relationship": "location_country",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"name": {
"type": "column",
"column": "name"
},
"area_km2": {
"type": "column",
"column": "area_km2"
}
}
}
}
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "related",
"field_path": ["location"],
"relationship": "location_country",
"arguments": {}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "area_km2"
},
"operator": "gt",
"value": {
"type": "scalar",
"value": 300000
}
}
}
},
"collection_relationships": {
"location_country": {
"arguments": {},
"column_mapping": {
"country_id": ["id"]
},
"relationship_type": "object",
"target_collection": "countries"
}
}
}
Unrelated Collections
If the query.exists.unrelated
capability is enabled, then exists expressions can reference unrelated collections.
Unrelated exists expressions can be useful when using collections with arguments. For example, this query uses the unrelated author_articles
collection, providing its arguments via the source row's columns:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "unrelated",
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"collection": "articles_by_author"
},
"predicate": {
"type": "and",
"expressions": [
{
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title"
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
]
}
}
},
"collection_relationships": {}
}
It can also be useful to reference a column in another scope when using unrelated exists expressions.
Nested Collections
If the query.exists.nested_collections
capability is enabled, then exists expressions can reference nested collections.
For example, this query finds institutions
which employ at least one staff member whose last name contains the letter s
:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"name": {
"type": "column",
"column": "name"
},
"staff": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "nested_collection",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
},
"column_name": "staff"
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "last_name"
},
"operator": "like",
"value": {
"type": "scalar",
"value": "s"
}
}
}
},
"collection_relationships": {}
}
References to columns in another scope may be useful when using these sorts of expressions, in order to refer to columns from the outer (unnested) row.
Nested Scalar Collections
If the query.exists.nested_scalar_collections
capability is enabled, then exists expressions can reference columns that contain nested arrays of scalar values. In this case, each element of the nested array is lifted into a virtual row with the element value in a field called __value. This allows the predicate applied to the exists expression to reference the __value column and compare it against the scalar element.
For example, a nested array such as [1,2,3] would be converted into the virtual rows [{"__value": 1}, {"__value": 2}, {"__value": 3}].
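The lifting step is mechanical; a one-line sketch in Python:

```python
def lift_scalar_array(elements):
    """Lift each element of a nested scalar array into a virtual row
    keyed by __value, as required for nested_scalar_collection exists."""
    return [{"__value": e} for e in elements]
```

The exists predicate then runs over these virtual rows exactly as it would over an ordinary nested collection.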
For example, this query finds institutions
that have at least one campus whose name contains the letter d
(campuses are a string array nested inside location):
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"name": {
"type": "column",
"column": "name"
},
"location": {
"type": "column",
"column": "location",
"fields": {
"type": "object",
"fields": {
"campuses": {
"type": "column",
"column": "campuses",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "nested_scalar_collection",
"column_name": "location",
"field_path": ["campuses"],
"arguments": {}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "__value"
},
"operator": "like",
"value": {
"type": "scalar",
"value": "d"
}
}
}
},
"collection_relationships": {}
}
Conjunction of expressions
To express the conjunction of multiple expressions, specify a type
field of and
, and provide the expressions in the expressions
field.
For example, to test if the first_name
column is null and the last_name
column is also null:
{
"type": "and",
"expressions": [
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "first_name"
}
},
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "last_name"
}
}
]
}
Disjunction of expressions
To express the disjunction of multiple expressions, specify a type
field of or
, and provide the expressions in the expressions
field.
For example, to test if the first_name column is null or the last_name column is null:
{
"type": "or",
"expressions": [
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "first_name"
}
},
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "last_name"
}
}
]
}
Negation
To express the negation of an expression, specify a type field of not, and provide that expression in the expression field.
For example, to test if the first_name
column is not null:
{
"type": "not",
"expression": {
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "first_name"
}
}
}
See also
- Type Expression
Sorting
A Query
can specify how rows should be sorted in the response.
The requested ordering can be found in the order_by
field of the Query
object.
Computing the Ordering
To compute the ordering from the order_by field, data connectors should implement the following ordering between rows:
- Consider each element of the order_by.elements array in turn.
- For each OrderByElement:
  - If element.target.type is column, then to compare two rows, compare the value in the selected column. See type column below.
  - If element.target.type is aggregate, compare two rows by comparing aggregates over a related collection. See type aggregate below.
Type column
The property element.target.name
refers to a column name.
If the connector supports the query.nested_fields.order_by capability, then the target may also reference nested fields within a column using the field_path property. If the column has arguments, the arguments property is used to provide values for those arguments.
If element.order_direction is asc, then the row with the smaller column value comes first.
If element.order_direction is desc, then the row with the smaller column value comes second.
If the column values are incomparable, continue to the next OrderByElement
.
The data connector should document, for each scalar type, a comparison function to use for any two values of that scalar type.
For example, a data connector might choose to use the obvious ordering for a scalar integer-valued type, but to use the database-given ordering for a string-valued type, based on a certain choice of collation.
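As an illustrative, non-normative sketch, the algorithm above amounts to a lexicographic comparator over the order_by elements. This hypothetical Python version handles only type column targets without paths, and treats equal values as a fall-through to the next element:

```python
from functools import cmp_to_key

def compare_rows(row_a, row_b, elements):
    """Compare two rows per the order_by algorithm, one OrderByElement
    at a time; equal values fall through to the next element."""
    for element in elements:
        name = element["target"]["name"]  # sketch: type "column", no path
        a, b = row_a[name], row_b[name]
        if a == b:
            continue
        result = -1 if a < b else 1
        return result if element["order_direction"] == "asc" else -result
    return 0

def sort_rows(rows, order_by):
    """Sort rows using the comparator above."""
    return sorted(rows, key=cmp_to_key(
        lambda a, b: compare_rows(a, b, order_by["elements"])))
```

A real connector would substitute each scalar type's documented comparison function where this sketch uses Python's built-in ordering.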
For example, the following query
requests that a collection of articles be ordered by title
descending:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"order_by": {
"elements": [
{
"target": {
"type": "column",
"name": "title",
"path": []
},
"order_direction": "desc"
}
]
}
},
"collection_relationships": {}
}
The selected column can be chosen from a related collection by specifying the path
property. path
consists of a list of named relationships.
For example, this query sorts articles by their author's last names, and then by their first names, by traversing the relationship from articles to authors:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
},
"author": {
"type": "relationship",
"arguments": {},
"relationship": "article_author",
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
}
}
}
}
},
"order_by": {
"elements": [
{
"target": {
"type": "column",
"name": "last_name",
"path": [
{
"arguments": {},
"relationship": "article_author",
"predicate": {
"type": "and",
"expressions": []
}
}
]
},
"order_direction": "asc"
},
{
"target": {
"type": "column",
"name": "first_name",
"path": [
{
"arguments": {},
"relationship": "article_author",
"predicate": {
"type": "and",
"expressions": []
}
}
]
},
"order_direction": "asc"
}
]
}
},
"collection_relationships": {
"article_author": {
"arguments": {},
"column_mapping": {
"author_id": ["id"]
},
"relationship_type": "object",
"source_collection_or_type": "article",
"target_collection": "authors"
}
}
}
Nested relationships
If the connector enables the relationships.nested.ordering
capability, it may receive path
relationships where the relationship starts from inside a nested object. The path to descend through the nested objects before navigating the relationship is specified by the field_path
property.
For example, this query sorts institutions
by their location's country's area. The relationship starts from within the location
nested object and joins its country_id
column to the countries
collection's id
column.
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"name": {
"type": "column",
"column": "name"
},
"location": {
"type": "column",
"column": "location",
"fields": {
"type": "object",
"fields": {
"country_id": {
"type": "column",
"column": "country_id"
},
"country": {
"type": "relationship",
"arguments": {},
"relationship": "location_country",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"name": {
"type": "column",
"column": "name"
},
"area_km2": {
"type": "column",
"column": "area_km2"
}
}
}
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "column",
"path": [
{
"field_path": ["location"],
"relationship": "location_country",
"arguments": {},
"predicate": null
}
],
"name": "area_km2",
"field_path": []
}
}
]
}
},
"collection_relationships": {
"location_country": {
"arguments": {},
"column_mapping": {
"country_id": ["id"]
},
"relationship_type": "object",
"target_collection": "countries"
}
}
}
Type aggregate
An ordering of type aggregate
orders rows by aggregating rows in some related collection, and comparing aggregations for each of the two rows. The relationship path is specified by the path
property. Connectors must enable the relationships.order_by_aggregate
capability to receive this ordering type.
If the respective aggregates are incomparable, the ordering should continue to the next OrderByElement
.
If the connector enables the relationships.nested.ordering
capability, it may receive path
relationships where the relationship starts from inside a nested object. The path to descend through the nested objects before navigating the relationship is specified by the field_path
property.
Examples
For example, this query sorts article authors by their total article count:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles_aggregate": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"aggregates": {
"count": {
"type": "star_count"
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "aggregate",
"aggregate": {
"type": "star_count"
},
"path": [
{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": ["author_id"]
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
This query sorts article authors by their maximum article ID:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles_aggregate": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"aggregates": {
"max_id": {
"type": "single_column",
"column": "id",
"function": "max"
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "asc",
"target": {
"type": "aggregate",
"aggregate": {
"type": "single_column",
"column": "id",
"function": "max"
},
"path": [
{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": ["author_id"]
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
This query sorts institutions first by those institutions that are in countries that have the most institutions in them, then by the institutions' name. This example navigates the nested relationship that begins in the location
nested object and joins back onto the institutions
collection, targeting the nested location.country_id
property.
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"name": {
"type": "column",
"column": "name"
},
"location": {
"type": "column",
"column": "location",
"fields": {
"type": "object",
"fields": {
"country_id": {
"type": "column",
"column": "country_id"
},
"country": {
"type": "relationship",
"arguments": {},
"relationship": "location_institution_location_country",
"query": {
"fields": {
"name": {
"type": "column",
"column": "name"
},
"location": {
"type": "column",
"column": "location"
}
}
}
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "aggregate",
"path": [
{
"field_path": ["location"],
"relationship": "location_institution_location_country",
"arguments": {},
"predicate": null
}
],
"aggregate": {
"type": "star_count"
}
}
},
{
"order_direction": "desc",
"target": {
"type": "column",
"name": "name",
"path": []
}
}
]
}
},
"collection_relationships": {
"location_institution_location_country": {
"arguments": {},
"column_mapping": {
"country_id": ["location", "country_id"]
},
"relationship_type": "array",
"target_collection": "institutions"
}
}
}
Requirements
- Rows in the response should be ordered according to the algorithm described above.
- The order_by field should not affect the set of rows which are returned, only their order.
- If the order_by field is not provided, then rows should be returned in an unspecified but deterministic order. For example, an implementation might choose to return rows in the order of their primary key or creation timestamp by default.
See also
- Type OrderBy
- Type OrderByElement
- Type OrderByTarget
Pagination
The limit and offset parameters on the Query object control pagination:
- limit specifies the maximum number of rows that are considered during field selection and before aggregates and grouping are applied.
- offset specifies the index of the first row to consider during field selection and before aggregates and grouping are applied.
limit and offset are applied after the predicate filter and sorting from the Query are applied, but before aggregates and grouping are applied. Both limit and offset affect the rows returned by field selection.
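The resulting pipeline position can be sketched as: filter, then sort, then apply offset and limit, and only then run field selection, aggregates, and grouping over the remaining window. A hypothetical Python helper for the windowing step:

```python
def paginate(rows, limit, offset):
    """Apply offset then limit to an already filtered and sorted row set.

    The returned window is what field selection, aggregates, and
    grouping subsequently operate on."""
    start = offset or 0
    end = start + limit if limit is not None else None
    return rows[start:end]
```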
Requirements
- If limit is specified, the response should contain at most that many rows, and aggregates and grouping should be applied to at most that many rows.
See also
- Type Query
Aggregates
In addition to fetching multiple rows of raw data from a collection, the query API supports fetching aggregated data. If a connector wants to support aggregates, it needs to enable the query.aggregates
capability. It also needs to return the capabilities.query.aggregates
object from the schema to indicate which scalar type is used to return the result of count aggregates.
Aggregates are requested in the aggregates
field of the Query
object.
There are three types of aggregate:
- single_column aggregates apply an aggregation function (as defined by the column's scalar type in the schema response) to a column,
- column_count aggregates count the number of rows with non-null values in the specified columns. If the distinct flag is set, then the count should only count unique non-null values of those columns,
- star_count aggregates count all matched rows.
If the connector supports capability query.nested_fields.aggregates
then single_column
and column_count
aggregates may also reference nested fields within a column using the field_path
property.
If the column referenced in single_column
and column_count
aggregates has arguments defined for it in the schema, then the arguments
property is used to provide values for those arguments.
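A non-normative sketch of the three aggregate types over in-memory rows (hypothetical Python; only `sum` is implemented for single_column, and field_path and arguments are ignored):

```python
def compute_aggregate(agg, rows):
    """Compute a single Aggregate over the rows matched by the Query."""
    t = agg["type"]
    if t == "star_count":
        return len(rows)  # count every matched row
    if t == "column_count":
        # count rows whose named columns are all non-null
        values = [tuple(r[c] for c in agg["columns"]) for r in rows]
        values = [v for v in values if all(x is not None for x in v)]
        return len(set(values)) if agg.get("distinct") else len(values)
    if t == "single_column":
        # sketch: `sum` stands in for the scalar type's aggregation functions
        assert agg["function"] == "sum"
        return sum(r[agg["column"]] for r in rows)
    raise ValueError(f"unsupported aggregate type: {t}")
```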
Example
The following query object requests the aggregated sum of all order totals, along with the count of all orders, and the count of all orders which have associated invoices (via the nullable invoice_id
column):
{
"collection": "orders",
"collection_relationships": {},
"query": {
"aggregates": {
"orders_total": {
"type": "single_column",
"function": "sum",
"column": "total"
},
"invoiced_orders_count": {
"type": "column_count",
"columns": ["invoice_id"]
},
"orders_count": {
"type": "star_count"
}
}
}
}
In this case, the query has no predicate, so all three aggregates would be computed over all rows.
Requirements
- Each aggregate should be computed over all rows that match the Query.
- Each requested aggregate must be returned in the aggregates property on the QueryResponse object, using the same key as used to request it.
See also
- Type Aggregate
Grouping
If a connector supports aggregates, it may also support grouping data and then aggregating data in those groups. This ability is tracked by the query.aggregates.group_by
capability.
Grouping is requested in the query API alongside fields and aggregates, in the groups
field of the Query
object.
A grouping operation specifies one or more dimensions along which to partition the row set. Each dimension selects a column from which to draw values (see Dimension::Column
). For each group, every row should have equal values in each of those dimension columns.
If the dimension's column's schema defines arguments, then the arguments
property is used to provide values for those arguments.
In addition, a grouping operation specifies aggregates which should be computed and returned for each group separately.
Dimensions
Dimension columns can be:
- A column
- An object-nested column
- A column across an object relationship
- A column across an object-nested object relationship
A key property is that nested arrays or nested relationships cannot be traversed from the rows being grouped over when selecting a dimension column. Only nested objects or object relationships can be traversed.
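Partitioning by dimension values can be sketched as follows (hypothetical Python; dimensions are reduced to plain column names, ignoring nested field paths and relationships):

```python
def group_rows(rows, dimensions):
    """Partition rows into groups keyed by their dimension value tuples.

    Every row within a group has equal values in each dimension column."""
    groups = {}
    for row in rows:
        key = tuple(row[d["column_name"]] for d in dimensions)
        groups.setdefault(key, []).append(row)
    return groups
```

Each group's rows would then feed the per-group aggregates requested by the grouping operation.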
Extraction Functions and Complex Dimensions
We can also group by components of scalar types using extraction functions.
In order to apply an extraction function to the value of a dimension, the Dimension
should specify an extraction
property, which is the name of the extraction function to apply.
For example, this query groups articles by the year component of their published date:
{
"collection": "articles",
"arguments": {},
"query": {
"groups": {
"aggregates": {
"count": {
"type": "star_count"
}
},
"dimensions": [
{
"type": "column",
"column_name": "published_date",
"path": [],
"extraction": "year"
}
]
}
},
"collection_relationships": {}
}
Filtering
Grouping operations have two types of filtering:
- The initial row set can be filtered before the grouping operation, using the predicate field of the Query object as usual, and
- The groups themselves can be filtered after the grouping operation, using the predicate field of the Grouping object. This is controlled by the query.aggregates.group_by.filter capability.
Unlike regular predicates on rows, group predicates are not allowed to compare columns, but must instead compare values of aggregates over the group. For example, we can filter groups by comparing a count of rows in the group, but not by comparing values in individual rows.
Ordering
As with filtering, group operations support two types of ordering:
- The initial row set can be ordered before the grouping operation, using the order_by field of the Query object as usual, and
- The groups themselves can be ordered after the grouping operation, using the order_by field of the Grouping object. This is controlled by the query.aggregates.group_by.order capability.
Group sort orders are restricted to comparing aggregate values, similar to filtering. For example, we can order groups by a count, but not by the value of individual rows. However, we can also choose to sort by the selected grouping dimensions.
Pagination
Pagination can also be applied both before and after grouping:
- The initial row set can be paginated before the grouping operation, using the `limit` and `offset` fields of the `Query` object as usual, and
- The groups themselves can be paginated after the grouping operation, using the `limit` and `offset` fields of the `Grouping` object. This is controlled by the `query.aggregates.group_by.paginate` capability.
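The two-stage pipeline described above (operate on rows first, then on the groups themselves) can be sketched as a minimal in-memory interpretation. The function name `eval_grouping`, the flat integer-valued rows, and the restriction to a single `star_count`-style aggregate are simplifying assumptions for illustration, not part of the specification:

```rust
use std::collections::BTreeMap;

type Row = BTreeMap<String, i64>;

/// Minimal sketch: partition rows on one "column" dimension, compute a
/// row count per group (star_count), then filter, order and paginate
/// the *groups* themselves.
fn eval_grouping(
    rows: &[Row],    // the row set, already filtered/paginated as rows
    dimension: &str, // a single column dimension
    min_count: usize, // group predicate: star_count >= min_count
    limit: usize,    // group pagination
) -> Vec<(i64, usize)> {
    // Partition rows by the dimension value.
    let mut groups: BTreeMap<i64, usize> = BTreeMap::new();
    for row in rows {
        *groups.entry(row[dimension]).or_insert(0) += 1;
    }
    // The group predicate compares aggregates, never individual rows.
    let mut result: Vec<(i64, usize)> = groups
        .into_iter()
        .filter(|&(_, count)| count >= min_count)
        .collect();
    // Order the groups by the aggregate value, descending.
    result.sort_by(|a, b| b.1.cmp(&a.1));
    // Paginate the groups themselves.
    result.truncate(limit);
    result
}
```

Note how the group predicate and group ordering only ever see aggregate values, never individual row values, matching the restrictions described above.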
Examples
This example partitions the `articles` collection by `author_id`, and then returns the row count for each group. That is, it computes the number of articles written by each author:
{
"collection": "articles",
"arguments": {},
"query": {
"groups": {
"aggregates": {
"article_count": {
"type": "star_count"
}
},
"dimensions": [
{
"type": "column",
"column_name": "author_id",
"path": []
}
]
}
},
"collection_relationships": {}
}
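For illustration, assuming a hypothetical data set with three articles by author 1 and one article by author 2, the response would contain a single row set whose `groups` field holds one `Group` per distinct `author_id`, with dimension values in the order they were requested and aggregates keyed by their requested names:

```json
[
  {
    "groups": [
      {
        "dimensions": [1],
        "aggregates": { "article_count": 3 }
      },
      {
        "dimensions": [2],
        "aggregates": { "article_count": 1 }
      }
    ]
  }
]
```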
Filtering examples
This example applies a predicate to the rows before grouping:
{
"collection": "articles",
"arguments": {},
"query": {
"groups": {
"aggregates": {
"min_id": {
"type": "single_column",
"column": "id",
"function": "min"
},
"max_id": {
"type": "single_column",
"column": "id",
"function": "max"
}
},
"dimensions": [
{
"type": "column",
"column_name": "author_id",
"path": []
}
]
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "author_id",
"path": []
},
"operator": "eq",
"value": {
"type": "scalar",
"value": 1
}
}
},
"collection_relationships": {}
}
This example applies a predicate to the groups themselves, after grouping. It computes some aggregates for author groups which have exactly two articles:
{
"collection": "articles",
"arguments": {},
"query": {
"groups": {
"aggregates": {
"min_id": {
"type": "single_column",
"column": "id",
"function": "min"
},
"max_id": {
"type": "single_column",
"column": "id",
"function": "max"
}
},
"dimensions": [
{
"type": "column",
"column_name": "author_id",
"path": []
}
],
"predicate": {
"type": "binary_comparison_operator",
"target": {
"type": "aggregate",
"aggregate": {
"type": "star_count"
}
},
"operator": "eq",
"value": {
"type": "scalar",
"value": 2
}
}
}
},
"collection_relationships": {}
}
Ordering and pagination
This example computes the article count for the author with the most articles, by ordering the groups by article count, and then using pagination to select the first group:
{
"collection": "articles",
"arguments": {},
"query": {
"groups": {
"aggregates": {
"article_count": {
"type": "star_count"
}
},
"dimensions": [
{
"type": "column",
"column_name": "author_id",
"path": []
}
],
"limit": 1,
"offset": 0,
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "aggregate",
"aggregate": {
"type": "star_count"
},
"path": []
}
}
]
}
}
},
"collection_relationships": {}
}
This example sorts the groups by the values of their dimensions. It groups articles by their `author_id`, and then sorts the groups by that `author_id` dimension, descending:
{
"collection": "articles",
"arguments": {},
"query": {
"groups": {
"aggregates": {
"article_count": {
"type": "star_count"
}
},
"dimensions": [
{
"type": "column",
"column_name": "author_id",
"path": []
}
],
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "dimension",
"index": 0,
"path": []
}
}
]
}
}
},
"collection_relationships": {}
}
Requirements
- If the `Query` object specifies the `groups` field, then each corresponding `RowSet` object must contain a non-null `groups` field.
- Each returned `Group` object must contain values for each requested dimension, in the order in which they were requested.
- Each returned `Group` object must contain values for each requested aggregate, using the same key as used to request it.
  - Aggregates should be computed over the rows in each group in turn.
See also
Arguments
Collection arguments parameterize an entire collection, and must be provided in queries wherever the collection is referenced, either directly, or via relationships.
Field arguments parameterize a single field, and must be provided wherever that field is referenced.
Collection Arguments
Collection arguments should be provided in the QueryRequest
anywhere a collection is referenced. The set of provided arguments should be compatible with the list of arguments required by the corresponding collection in the schema response.
Specifying arguments to the top-level collection
Collection arguments should be provided as key-value pairs in the arguments
property of the top-level QueryRequest
object:
{
"collection": "articles_by_author",
"arguments": {
"author_id": {
"type": "literal",
"value": 1
}
},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
},
"collection_relationships": {}
}
Relationships
Relationships can specify values for arguments on their target collection:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"column_mapping": {},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles_by_author"
}
}
}
Any arguments which are not defined by the relationship itself should be specified where the relationship is used. For example, here the author_id
argument can be moved from the relationship definition to the field which uses it:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles_by_author"
}
}
}
Collection arguments in predicates
Arguments must be specified in predicates whenever a reference to a secondary collection is required.
For example, in an EXISTS
expression, if the target collection has arguments:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "related",
"relationship": "author_articles",
"arguments": {}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title"
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"column_mapping": {},
"relationship_type": "array",
"target_collection": "articles_by_author"
}
}
}
Collection arguments in order_by
Arguments must be specified when an OrderByElement
references a related collection.
For example, when ordering by an aggregate of rows in a related collection, and that collection has arguments:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
}
},
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "aggregate",
"aggregate": {
"type": "star_count"
},
"path": [
{
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles_by_author"
}
}
}
Field Arguments
CAUTION
Field arguments are considered somewhat unstable, and are not well supported across all aspects of the specification. Using field arguments is not recommended, except for very specialized, advanced use cases.
Field arguments can be provided to any field requested (in addition to those described for top-level collections). These are specified in the schema response and their use is described in field selection. Their specification and usage matches that of collection arguments above.
Relationships
Queries can request data from other collections via relationships. A relationship identifies rows in one collection (the "source collection") with possibly-many related rows in a second collection (the "target collection") in two ways:
- Columns in the two collections can be related via column mappings, and
- Collection arguments to the target collection can be computed via the row of the source collection.
Connectors that support relationships should indicate so by enabling the relationships
capability.
Defining Relationships
Relationships are defined (and given names) in the top-level QueryRequest
object, and then referred to by name everywhere they are used. To define a relationship, add a Relationship
object to the collection_relationships
property of the QueryRequest
object.
Column Mappings
A column mapping is a set of pairs of columns - each consisting of one column from the source object type and one column from the target collection - which must be pairwise equal in order for a pair of rows to be considered equal.
What the source object type is depends on where the relationship is used. Often, a relationship will simply relate columns from one source collection's object type to a target collection's object type. However, at various locations such as field selection, filtering, ordering, grouping, queries can descend into nested objects and arrays before navigating the relationship. In these cases, the source column will be on the nested object type. Only connectors that enable the relationships.nested
capability will encounter relationships that involve nested objects. Additionally, only connectors that enable the relationships.nested.array
capability will encounter relationships that start from inside nested objects in nested arrays.
The column from the target collection may be an object-nested column, so it is specified using a field path to the column. An array of one field name specifies a column on the target collection's object type. Two field names specify, firstly, the column on the target collection that contains a nested object, and secondly, the column on the nested object type.
However, unless a connector enables the `relationships.nested` capability, it can expect to receive only field paths with a single entry in column mappings (i.e. non-nested columns).
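The pairwise-equality semantics of a column mapping can be sketched as follows. This assumes, for illustration only, flat string-valued rows and single-entry field paths (i.e. a connector without the `relationships.nested` capability); `rows_related` is a hypothetical helper, not part of the specification:

```rust
use std::collections::BTreeMap;

type Row = BTreeMap<String, String>;

/// A column mapping relates a source row to a target row: for every
/// (source column, target column) pair, the two values must be equal.
/// Field paths are assumed to have a single entry here (no nesting).
fn rows_related(source: &Row, target: &Row, mapping: &[(&str, &str)]) -> bool {
    mapping
        .iter()
        .all(|(src_col, tgt_col)| source.get(*src_col) == target.get(*tgt_col))
}
```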
For example, we can fetch each author
with its list of related articles
by establishing a column mapping between the author's primary key and the article's author_id
column:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": ["author_id"]
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Collection Arguments
See collection arguments for examples.
Advanced relationship use cases
Relationships are not used only for fetching data - they are used in practically all features of data connectors, as we will see below.
Relationships in predicates
EXISTS
expressions in predicates can query related collections. Here we find all authors who have written any article with "Functional"
in the title:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "related",
"arguments": {},
"relationship": "author_articles"
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title"
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": ["author_id"]
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Relationships in order_by
Sorting can be defined in terms of row counts and aggregates over related collections.
For example, here we order authors by the number of articles they have written:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles_aggregate": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"aggregates": {
"count": {
"type": "star_count"
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "aggregate",
"aggregate": {
"type": "star_count"
},
"path": [
{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": ["author_id"]
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
We can also order by custom aggregate functions applied to related collections. For example, here we order authors by their most recent (maximum) article ID:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles_aggregate": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"aggregates": {
"max_id": {
"type": "single_column",
"column": "id",
"function": "max"
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "asc",
"target": {
"type": "aggregate",
"aggregate": {
"type": "single_column",
"column": "id",
"function": "max"
},
"path": [
{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": ["author_id"]
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Variables
A QueryRequest
can optionally specify one or more sets of variables which can be referenced throughout the Query
object.
Query variables will only be provided if the query.variables
capability is advertised in the capabilities response.
The intent is that the data connector should attempt to perform multiple versions of the query in parallel - one instance of the query for each set of variables. For each set of variables, each variable value should be substituted wherever it is referenced in the query - for example in a ComparisonValue
.
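That fan-out can be sketched as follows, where `run_query` stands in for the connector's single-query evaluator. The types here are illustrative stand-ins for the real `QueryRequest` and `RowSet` structures, not the actual models:

```rust
use std::collections::BTreeMap;

type Variables = BTreeMap<String, i64>;
type RowSet = Vec<i64>; // stand-in for the real RowSet structure

/// Execute the query once per set of variables, producing one RowSet
/// per set, in the same order as the variable sets were provided.
fn execute(
    variable_sets: &[Variables],
    run_query: impl Fn(&Variables) -> RowSet,
) -> Vec<RowSet> {
    variable_sets.iter().map(run_query).collect()
}
```

Whether the per-set queries actually run in parallel is an optimization left to the connector; a sequential loop like this is a valid implementation.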
Example
In the following query, we fetch two rowsets of article data. In each rowset, the rows are filtered on the `id` column, and the prescribed article ID is determined by a variable. The choice of article ID varies between rowsets.
The result contains one rowset containing the article with ID 1, and a second containing the article with ID 2.
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "id"
},
"operator": "eq",
"value": {
"type": "variable",
"name": "$article_id"
}
}
},
"collection_relationships": {},
"variables": [
{
"$article_id": 1
},
{
"$article_id": 2
}
]
}
Requirements
- If `variables` are provided in the `QueryRequest`, then the `QueryResponse` should contain one `RowSet` for each set of variables, in the same order.
- If `variables` are not provided, the data connector should return a single `RowSet`.
Functions
A function is invoked in a query request in exactly the same way as any other collection - recall that a function is simply a collection which returns a single row, and a single column, named __value
.
Because a function returns a single row, many query capabilities are limited in their usefulness:
- It would not make sense to specify `limit` or `offset`,
- Sorting has no effect,
- Filtering can only remove the whole result row, based on some condition expressed in terms of the result.
However, some query features are still useful in the context of functions:
- The caller can request a subset of the full result, by using nested field queries,
- A function can be the source or target of a relationship,
- Function arguments are specified in the same way as collection arguments, and can also be specified using variables.
Examples
A function returning a scalar value
This example uses the latest_article_id
function, which returns a scalar type:
{
"arguments": {},
"query": {
"fields": {
"__value": {
"type": "column",
"column": "__value"
}
}
},
"collection_relationships": {}
}
The response JSON includes the requested data in the special __value
field:
[
{
"rows": [
{
"__value": 3
}
]
}
]
A function returning an object type
This example uses the latest_article
function instead, which returns the full article
object. To query the object structure, it uses a nested field request:
{
"arguments": {},
"query": {
"fields": {
"__value": {
"type": "column",
"column": "__value",
"fields": {
"type": "object",
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
}
},
"collection_relationships": {}
}
Again, the response is sent in the __value
field:
[
{
"rows": [
{
"__value": {
"id": 3,
"title": "The Design And Implementation Of Programming Languages"
}
}
]
}
]
Mutations
The mutation endpoint accepts a mutation request, containing a collection of mutation operations to be performed transactionally in the context of the data source, and returns a response containing a result for each operation.
The structure and requirements for specific fields listed below will be covered in subsequent chapters.
Request
POST /mutation
Request
See MutationRequest
Request Fields
| Name | Description |
|---|---|
| operations | A list of mutation operations to perform |
| collection_relationships | Any relationships between collections involved in the mutation request |
Mutation Operations
Each operation is described by a MutationOperation
structure, which can be one of several types. However, currently procedures are the only supported operation type.
Multiple Operations
If the mutation.transactional
capability is enabled, then the caller may provide multiple operations in a single request.
Otherwise, the caller must provide exactly one operation.
The intent is that multiple operations ought to be performed together in a single transaction.
That is, they should all succeed, or all fail together. If any operation fails, then a single ErrorResponse
should capture
the failure, and none of the operations should effect any changes to the data source.
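For an in-memory connector, one simple way to satisfy this all-or-nothing requirement is to apply every operation to a scratch copy of the state, and commit the copy only if all operations succeed. This is a sketch with hypothetical `State` and `Op` types, not the reference implementation's actual code:

```rust
/// Apply all operations to a scratch copy of the state; commit the copy
/// only if every operation succeeds, so a failure leaves `state` untouched.
fn run_transaction<State: Clone, Op>(
    state: &mut State,
    operations: &[Op],
    apply: impl Fn(&mut State, &Op) -> Result<(), String>,
) -> Result<(), String> {
    let mut scratch = state.clone();
    for op in operations {
        apply(&mut scratch, op)?; // first failure aborts; nothing is committed
    }
    *state = scratch;
    Ok(())
}
```

A connector backed by a real database would instead delegate to the database's own transaction mechanism.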
Response
See MutationResponse
Requirements
- The `operation_results` field of the `MutationResponse` should contain one `MutationOperationResults` structure for each requested operation in the `MutationRequest`.
Procedures
A procedure which is described in the schema can be invoked using a MutationOperation
.
The operation should specify the procedure name, any arguments, and a list of Field
s to be returned.
Note: just as for functions, fields to return can include relationships or nested fields. However, unlike functions, procedures do not need to wrap their result in a __value
field, so top-level fields can be extracted without use of nested field queries.
Requirements
- The `MutationResponse` structure will contain a `MutationOperationResults` structure for the procedure response. This structure should have type `procedure` and contain a `result` field with a result of the type indicated in the schema response.
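For example, a successful call to the reference connector's `upsert_article` procedure might produce a response shaped like this (the returned article fields are hypothetical data, chosen for illustration):

```json
{
  "operation_results": [
    {
      "type": "procedure",
      "result": {
        "id": 1,
        "title": "Functional Programming"
      }
    }
  ]
}
```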
Explain
There are two endpoints related to explain:
- The `/query/explain` endpoint, which accepts a query request.
- The `/mutation/explain` endpoint, which accepts a mutation request.
Both endpoints return a representation of the execution plan without actually executing the query or mutation.
Connectors that wish to support these endpoints should indicate this in their capabilities; specifically with the query.explain
capability and the mutation.explain
capability.
Request
POST /query/explain
See QueryRequest
Request
POST /mutation/explain
See MutationRequest
Response
See ExplainResponse
Tutorial
In this tutorial, we will walk through the reference implementation of the specification, which will illustrate how to implement data connectors from scratch.
The reference implementation is written in Rust, but it should be possible to follow along using any language of your choice, as long as you can implement a basic web server and implement serializers and deserializers for the data formats involved.
It is recommended that you follow along chapter-by-chapter, as each will build on the last.
Setup
To compile and run the reference implementation, you will need to install a Rust toolchain, and then run:
git clone git@github.com:hasura/ndc-spec.git
cd ndc-spec/ndc-reference
cargo build
cargo run
Alternatively, you can run the reference implementation entirely inside a Docker container:
git clone git@github.com:hasura/ndc-spec.git
cd ndc-spec
docker build -t reference_connector .
docker run -it reference_connector
Either way, you should have a working data connector running on http://localhost:8100/, which you can test as follows:
curl http://localhost:8100/schema
Testing
Testing tools are provided in the specification repository to aid in the development of connectors.
ndc-test
The ndc-test
executable performs basic validation of the data returned by the capabilities and schema endpoints, and performs some basic queries.
To test a connector, provide its endpoint to ndc-test
on the command line:
ndc-test --endpoint <ENDPOINT>
For example, running the reference connector and passing its URL to ndc-test
, we will see that it issues test queries against the articles
and authors
collections:
ndc-test test --endpoint http://localhost:8100
Capabilities
├ Fetching /capabilities ... OK
├ Validating capabilities ... OK
Schema
├ Fetching /schema ... OK
├ Validating schema ...
│ ├ object_types ... OK
│ ├ Collections ...
│ │ ├ articles ...
│ │ │ ├ Arguments ... OK
│ │ │ ├ Collection type ... OK
│ │ ├ authors ...
│ │ │ ├ Arguments ... OK
│ │ │ ├ Collection type ... OK
│ │ ├ articles_by_author ...
│ │ │ ├ Arguments ... OK
│ │ │ ├ Collection type ... OK
│ ├ Functions ...
│ │ ├ latest_article_id ...
│ │ │ ├ Result type ... OK
│ │ │ ├ Arguments ... OK
│ │ ├ Procedures ...
│ │ │ ├ upsert_article ...
│ │ │ │ ├ Result type ... OK
│ │ │ │ ├ Arguments ... OK
Query
├ articles ...
│ ├ Simple queries ...
│ │ ├ Select top N ... OK
│ │ ├ Predicates ... OK
│ ├ Aggregate queries ...
│ │ ├ star_count ... OK
├ authors ...
│ ├ Simple queries ...
│ │ ├ Select top N ... OK
│ │ ├ Predicates ... OK
│ ├ Aggregate queries ...
│ │ ├ star_count ... OK
├ articles_by_author ...
However, ndc-test
cannot validate the entire schema. For example, it will not issue queries against the articles_by_author
collection, because it does not have any way to synthesize inputs for its required collection argument.
Getting Started
The reference implementation will serve queries and mutations based on in-memory data read from newline-delimited JSON files.
First, we will define some types to represent the data in the newline-delimited JSON files. Rows of JSON data will be stored in memory as ordered maps:
type Row = BTreeMap<models::FieldName, serde_json::Value>;
Our application state will consist of collections of various types of rows:
#[derive(Debug, Clone)]
pub struct AppState {
pub articles: BTreeMap<i32, Row>,
pub authors: BTreeMap<i32, Row>,
pub institutions: BTreeMap<i32, Row>,
pub countries: BTreeMap<i32, Row>,
pub metrics: Metrics,
}
In our main
function, the data connector reads the initial data from the newline-delimited JSON files, and creates the AppState
:
fn init_app_state() -> AppState {
// Read the JSON data files
let articles = read_json_lines(ARTICLES_JSON).unwrap();
let authors = read_json_lines(AUTHORS_JSON).unwrap();
let institutions = read_json_lines(INSTITUTIONS_JSON).unwrap();
let countries = read_json_lines(COUNTRIES_JSON).unwrap();
let metrics = Metrics::new().unwrap();
AppState {
articles,
authors,
institutions,
countries,
metrics,
}
}
Finally, we start a web server with the endpoints which are required by this specification:
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn Error>> {
let app_state = Arc::new(Mutex::new(init_app_state()));
let app = Router::new()
.route("/health", get(get_health))
.route("/metrics", get(get_metrics))
.route("/capabilities", get(get_capabilities))
.route("/schema", get(get_schema))
.route("/query", post(post_query))
.route("/query/explain", post(post_query_explain))
.route("/mutation", post(post_mutation))
.route("/mutation/explain", post(post_mutation_explain))
.layer(axum::middleware::from_fn(check_version_header))
.layer(axum::middleware::from_fn_with_state(
Arc::clone(&app_state),
metrics_middleware,
))
.with_state(app_state);
// Start the server on `localhost:<PORT>`.
// This says it's binding to an IPv6 address, but will actually listen to
// any IPv4 or IPv6 address.
let host = net::IpAddr::V6(net::Ipv6Addr::UNSPECIFIED);
let port = env::var("PORT")
.map(|s| s.parse())
.unwrap_or(Ok(DEFAULT_PORT))?;
let addr = net::SocketAddr::new(host, port);
let listener = tokio::net::TcpListener::bind(addr).await?;
println!("Serving on {}", listener.local_addr()?);
axum::serve(listener, app)
.with_graceful_shutdown(shutdown_handler())
.await?;
Ok(())
}
Note: the application state is stored in an Arc<Mutex<_>>
, so that we can perform locking reads and writes in multiple threads.
In the next chapters, we will look at the implementation of each of these endpoints in turn.
Capabilities
The capabilities endpoint should return data describing which features the data connector can implement, along with the version of this specification that the data connector claims to implement.
The reference implementation returns a static CapabilitiesResponse
:
async fn get_capabilities() -> Json<models::CapabilitiesResponse> {
Json(models::CapabilitiesResponse {
version: models::VERSION.into(),
capabilities: models::Capabilities {
query: models::QueryCapabilities {
aggregates: Some(models::AggregateCapabilities {
filter_by: Some(models::LeafCapability {}),
group_by: Some(models::GroupByCapabilities {
filter: Some(models::LeafCapability {}),
order: Some(models::LeafCapability {}),
paginate: Some(models::LeafCapability {}),
}),
}),
variables: Some(models::LeafCapability {}),
exists: models::ExistsCapabilities {
named_scopes: Some(models::LeafCapability {}),
unrelated: Some(models::LeafCapability {}),
nested_collections: Some(models::LeafCapability {}),
nested_scalar_collections: Some(models::LeafCapability {}),
},
explain: None,
nested_fields: models::NestedFieldCapabilities {
filter_by: Some(models::NestedFieldFilterByCapabilities {
nested_arrays: Some(models::NestedArrayFilterByCapabilities {
contains: Some(models::LeafCapability {}),
is_empty: Some(models::LeafCapability {}),
}),
}),
order_by: Some(models::LeafCapability {}),
aggregates: Some(models::LeafCapability {}),
nested_collections: Some(models::LeafCapability {}),
},
},
mutation: models::MutationCapabilities {
transactional: None,
explain: None,
},
relationships: Some(models::RelationshipCapabilities {
order_by_aggregate: Some(models::LeafCapability {}),
relation_comparisons: Some(models::LeafCapability {}),
nested: Some(models::NestedRelationshipCapabilities {
array: Some(models::LeafCapability {}),
filtering: Some(models::LeafCapability {}),
ordering: Some(models::LeafCapability {}),
}),
}),
},
})
}
Note: the reference implementation supports all capabilities with the exception of query.explain
and mutation.explain
. This is because all queries are run in memory by naively interpreting the query request - there is no better description of the query plan than the raw query request itself!
Schema
The schema endpoint should return data describing the data connector's scalar and object types, along with any collections, functions and procedures which are exposed.
async fn get_schema() -> Json<models::SchemaResponse> {
// ...
Json(models::SchemaResponse {
scalar_types,
object_types,
collections,
functions,
procedures,
capabilities,
})
}
Scalar Types
We define five scalar types: `String`, `Int`, `Int64`, `Float`, and `Date`.
`String` supports a custom `like` comparison operator, the numeric types support the standard aggregate functions `min`, `max`, `sum`, and `avg`, and `Date` supports extraction functions such as `year`.
let scalar_types = BTreeMap::from_iter([
(
"String".into(),
models::ScalarType {
representation: models::TypeRepresentation::String,
aggregate_functions: BTreeMap::from_iter([
("max".into(), models::AggregateFunctionDefinition::Max),
("min".into(), models::AggregateFunctionDefinition::Min),
]),
comparison_operators: BTreeMap::from_iter([
("eq".into(), models::ComparisonOperatorDefinition::Equal),
(
"gt".into(),
models::ComparisonOperatorDefinition::GreaterThan,
),
(
"gte".into(),
models::ComparisonOperatorDefinition::GreaterThanOrEqual,
),
("lt".into(), models::ComparisonOperatorDefinition::LessThan),
(
"lte".into(),
models::ComparisonOperatorDefinition::LessThanOrEqual,
),
(
"contains".into(),
models::ComparisonOperatorDefinition::Contains,
),
(
"icontains".into(),
models::ComparisonOperatorDefinition::ContainsInsensitive,
),
(
"starts_with".into(),
models::ComparisonOperatorDefinition::StartsWith,
),
(
"istarts_with".into(),
models::ComparisonOperatorDefinition::StartsWithInsensitive,
),
(
"ends_with".into(),
models::ComparisonOperatorDefinition::EndsWith,
),
(
"iends_with".into(),
models::ComparisonOperatorDefinition::EndsWithInsensitive,
),
("in".into(), models::ComparisonOperatorDefinition::In),
(
"like".into(),
models::ComparisonOperatorDefinition::Custom {
argument_type: models::Type::Named {
name: "String".into(),
},
},
),
]),
extraction_functions: BTreeMap::new(),
},
),
(
"Int".into(),
models::ScalarType {
representation: models::TypeRepresentation::Int32,
aggregate_functions: BTreeMap::from_iter([
("max".into(), models::AggregateFunctionDefinition::Max),
("min".into(), models::AggregateFunctionDefinition::Min),
(
"sum".into(),
models::AggregateFunctionDefinition::Sum {
result_type: models::ScalarTypeName::from("Int64"),
},
),
(
"avg".into(),
models::AggregateFunctionDefinition::Average {
result_type: models::ScalarTypeName::from("Float"),
},
),
]),
comparison_operators: BTreeMap::from_iter([
("eq".into(), models::ComparisonOperatorDefinition::Equal),
("in".into(), models::ComparisonOperatorDefinition::In),
(
"gt".into(),
models::ComparisonOperatorDefinition::GreaterThan,
),
(
"gte".into(),
models::ComparisonOperatorDefinition::GreaterThanOrEqual,
),
("lt".into(), models::ComparisonOperatorDefinition::LessThan),
(
"lte".into(),
models::ComparisonOperatorDefinition::LessThanOrEqual,
),
]),
extraction_functions: BTreeMap::new(),
},
),
(
"Int64".into(),
models::ScalarType {
representation: models::TypeRepresentation::Int64,
aggregate_functions: BTreeMap::from_iter([
("max".into(), models::AggregateFunctionDefinition::Max),
("min".into(), models::AggregateFunctionDefinition::Min),
(
"sum".into(),
models::AggregateFunctionDefinition::Sum {
result_type: models::ScalarTypeName::from("Int64"),
},
),
(
"avg".into(),
models::AggregateFunctionDefinition::Average {
result_type: models::ScalarTypeName::from("Float"),
},
),
]),
comparison_operators: BTreeMap::from_iter([
("eq".into(), models::ComparisonOperatorDefinition::Equal),
("in".into(), models::ComparisonOperatorDefinition::In),
(
"gt".into(),
models::ComparisonOperatorDefinition::GreaterThan,
),
(
"gte".into(),
models::ComparisonOperatorDefinition::GreaterThanOrEqual,
),
("lt".into(), models::ComparisonOperatorDefinition::LessThan),
(
"lte".into(),
models::ComparisonOperatorDefinition::LessThanOrEqual,
),
]),
extraction_functions: BTreeMap::new(),
},
),
(
"Float".into(),
models::ScalarType {
representation: models::TypeRepresentation::Float64,
aggregate_functions: BTreeMap::from_iter([
("max".into(), models::AggregateFunctionDefinition::Max),
("min".into(), models::AggregateFunctionDefinition::Min),
(
"sum".into(),
models::AggregateFunctionDefinition::Sum {
result_type: models::ScalarTypeName::from("Float"),
},
),
(
"avg".into(),
models::AggregateFunctionDefinition::Average {
result_type: models::ScalarTypeName::from("Float"),
},
),
]),
comparison_operators: BTreeMap::from_iter([
("eq".into(), models::ComparisonOperatorDefinition::Equal),
("in".into(), models::ComparisonOperatorDefinition::In),
(
"gt".into(),
models::ComparisonOperatorDefinition::GreaterThan,
),
(
"gte".into(),
models::ComparisonOperatorDefinition::GreaterThanOrEqual,
),
("lt".into(), models::ComparisonOperatorDefinition::LessThan),
(
"lte".into(),
models::ComparisonOperatorDefinition::LessThanOrEqual,
),
]),
extraction_functions: BTreeMap::new(),
},
),
(
"Date".into(),
models::ScalarType {
representation: models::TypeRepresentation::Date,
aggregate_functions: BTreeMap::new(),
comparison_operators: BTreeMap::from_iter([
("eq".into(), models::ComparisonOperatorDefinition::Equal),
("in".into(), models::ComparisonOperatorDefinition::In),
]),
extraction_functions: BTreeMap::from_iter([
(
"year".into(),
models::ExtractionFunctionDefinition::Year {
result_type: models::ScalarTypeName::from("Int"),
},
),
(
"month".into(),
models::ExtractionFunctionDefinition::Month {
result_type: models::ScalarTypeName::from("Int"),
},
),
(
"day".into(),
models::ExtractionFunctionDefinition::Day {
result_type: models::ScalarTypeName::from("Int"),
},
),
]),
},
),
]);
Object Types
For each collection, we define an object type for its rows. In addition, we define object types for any nested types which we use:
let object_types = BTreeMap::from_iter([
("article".into(), article_type),
("author".into(), author_type),
("institution".into(), institution_type),
("location".into(), location_type),
("staff_member".into(), staff_member_type),
("country".into(), country_type),
("city".into(), city_type),
]);
Author
let author_type = models::ObjectType {
description: Some("An author".into()),
fields: BTreeMap::from_iter([
(
"id".into(),
models::ObjectField {
description: Some("The author's primary key".into()),
r#type: models::Type::Named { name: "Int".into() },
arguments: BTreeMap::new(),
},
),
(
"first_name".into(),
models::ObjectField {
description: Some("The author's first name".into()),
r#type: models::Type::Named {
name: "String".into(),
},
arguments: BTreeMap::new(),
},
),
(
"last_name".into(),
models::ObjectField {
description: Some("The author's last name".into()),
r#type: models::Type::Named {
name: "String".into(),
},
arguments: BTreeMap::new(),
},
),
]),
foreign_keys: BTreeMap::new(),
};
Article
let article_type = models::ObjectType {
description: Some("An article".into()),
fields: BTreeMap::from_iter([
(
"id".into(),
models::ObjectField {
description: Some("The article's primary key".into()),
r#type: models::Type::Named { name: "Int".into() },
arguments: BTreeMap::new(),
},
),
(
"title".into(),
models::ObjectField {
description: Some("The article's title".into()),
r#type: models::Type::Named {
name: "String".into(),
},
arguments: BTreeMap::new(),
},
),
(
"author_id".into(),
models::ObjectField {
description: Some("The article's author ID".into()),
r#type: models::Type::Named { name: "Int".into() },
arguments: BTreeMap::new(),
},
),
(
"published_date".into(),
models::ObjectField {
description: Some("The article's date of publication".into()),
r#type: models::Type::Named {
name: "Date".into(),
},
arguments: BTreeMap::new(),
},
),
]),
foreign_keys: BTreeMap::from_iter([(
"Article_AuthorID".into(),
models::ForeignKeyConstraint {
foreign_collection: "authors".into(),
column_mapping: BTreeMap::from_iter([("author_id".into(), vec!["id".into()])]),
},
)]),
};
Institution
Note: the fields with array types have field-level arguments (array_arguments
) in order to support nested array operations.
let institution_type = models::ObjectType {
description: Some("An institution".into()),
fields: BTreeMap::from_iter([
(
"id".into(),
models::ObjectField {
description: Some("The institution's primary key".into()),
r#type: models::Type::Named { name: "Int".into() },
arguments: BTreeMap::new(),
},
),
(
"name".into(),
models::ObjectField {
description: Some("The institution's name".into()),
r#type: models::Type::Named {
name: "String".into(),
},
arguments: BTreeMap::new(),
},
),
(
"location".into(),
models::ObjectField {
description: Some("The institution's location".into()),
r#type: models::Type::Named {
name: "location".into(),
},
arguments: BTreeMap::new(),
},
),
(
"staff".into(),
models::ObjectField {
description: Some("The institution's staff".into()),
r#type: models::Type::Array {
element_type: Box::new(models::Type::Named {
name: "staff_member".into(),
}),
},
arguments: array_arguments.clone(),
},
),
(
"departments".into(),
models::ObjectField {
description: Some("The institution's departments".into()),
r#type: models::Type::Array {
element_type: Box::new(models::Type::Named {
name: "String".into(),
}),
},
arguments: array_arguments.clone(),
},
),
]),
foreign_keys: BTreeMap::new(),
};
Collections
We define each collection's schema using the type information defined above:
let collections = vec![
articles_collection,
authors_collection,
institutions_collection,
countries_collection,
articles_by_author_collection,
];
Author
let authors_collection = models::CollectionInfo {
name: "authors".into(),
description: Some("A collection of authors".into()),
collection_type: "author".into(),
arguments: BTreeMap::new(),
uniqueness_constraints: BTreeMap::from_iter([(
"AuthorByID".into(),
models::UniquenessConstraint {
unique_columns: vec!["id".into()],
},
)]),
};
Article
let articles_collection = models::CollectionInfo {
name: "articles".into(),
description: Some("A collection of articles".into()),
collection_type: "article".into(),
arguments: BTreeMap::new(),
uniqueness_constraints: BTreeMap::from_iter([(
"ArticleByID".into(),
models::UniquenessConstraint {
unique_columns: vec!["id".into()],
},
)]),
};
articles_by_author
We define one additional collection, articles_by_author
, which is provided as an example of a collection with an argument:
let articles_by_author_collection = models::CollectionInfo {
name: "articles_by_author".into(),
description: Some("Articles parameterized by author".into()),
collection_type: "article".into(),
arguments: BTreeMap::from_iter([(
"author_id".into(),
models::ArgumentInfo {
argument_type: models::Type::Named { name: "Int".into() },
description: None,
},
)]),
uniqueness_constraints: BTreeMap::new(),
};
Institution
let institutions_collection = models::CollectionInfo {
name: "institutions".into(),
description: Some("A collection of institutions".into()),
collection_type: "institution".into(),
arguments: BTreeMap::new(),
uniqueness_constraints: BTreeMap::from_iter([(
"InstitutionByID".into(),
models::UniquenessConstraint {
unique_columns: vec!["id".into()],
},
)]),
};
Functions
The schema defines a list of functions, each including its input and output types.
Get Latest Article
As an example, we define a latest_article_id
function, which returns a single integer representing the maximum article ID.
let latest_article_id_function = models::FunctionInfo {
name: "latest_article_id".into(),
description: Some("Get the ID of the most recent article".into()),
result_type: models::Type::Nullable {
underlying_type: Box::new(models::Type::Named { name: "Int".into() }),
},
arguments: BTreeMap::new(),
};
A second example returns the full corresponding article, to illustrate functions returning structured types:
let latest_article_function = models::FunctionInfo {
name: "latest_article".into(),
description: Some("Get the most recent article".into()),
result_type: models::Type::Nullable {
underlying_type: Box::new(models::Type::Named {
name: "article".into(),
}),
},
arguments: BTreeMap::new(),
};
Procedures
The schema defines a list of procedures, each including its input and output types.
Upsert Article
As an example, we define an upsert procedure for the article collection defined above. The procedure accepts an input argument of type article
, and returns a nullable article
, representing the state of the article before the update, if it were already present.
let upsert_article = models::ProcedureInfo {
name: "upsert_article".into(),
description: Some("Insert or update an article".into()),
arguments: BTreeMap::from_iter([(
"article".into(),
models::ArgumentInfo {
description: Some("The article to insert or update".into()),
argument_type: models::Type::Named {
name: "article".into(),
},
},
)]),
result_type: models::Type::Nullable {
underlying_type: Box::new(models::Type::Named {
name: "article".into(),
}),
},
};
Capability-specific information
The schema response includes required capability-specific information:
let capabilities = Some(models::CapabilitySchemaInfo {
query: Some(models::QueryCapabilitiesSchemaInfo {
aggregates: Some(ndc_models::AggregateCapabilitiesSchemaInfo {
count_scalar_type: "Int".into(),
}),
}),
});
Queries
The reference implementation of the /query
endpoint may seem complicated, because there is a lot of functionality packed into a single endpoint. However, we will break the implementation down into small sections, each of which should be easily understood.
We start by looking at the type signature of the post_query
function, which is the top-level function implementing the query endpoint:
pub async fn post_query(
State(state): State<Arc<Mutex<AppState>>>,
Json(request): Json<models::QueryRequest>,
) -> Result<Json<models::QueryResponse>> {
This function accepts a QueryRequest
and must produce a QueryResponse
.
In the next section, we will start to break down this problem step-by-step.
Query Variables
The first step in post_query
is to reduce the problem from a query with multiple sets of query variables to only a single set.
The post_query
function iterates over all variable sets, and for each one, produces a RowSet
of rows corresponding to that set of variables. Each RowSet
is then added to the final QueryResponse
:
pub async fn post_query(
State(state): State<Arc<Mutex<AppState>>>,
Json(request): Json<models::QueryRequest>,
) -> Result<Json<models::QueryResponse>> {
let state = state.lock().await;
let variable_sets = request.variables.unwrap_or(vec![BTreeMap::new()]);
let mut row_sets = vec![];
for variables in &variable_sets {
let row_set = execute_query_with_variables(
&request.collection,
&request.arguments,
&request.collection_relationships,
&request.query,
variables,
&state,
)?;
row_sets.push(row_set);
}
Ok(Json(models::QueryResponse(row_sets)))
}
In order to compute the RowSet
for a given set of variables, the function delegates to a function named execute_query_with_variables
:
fn execute_query_with_variables(
collection: &models::CollectionName,
arguments: &BTreeMap<models::ArgumentName, models::Argument>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
query: &models::Query,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
) -> Result<models::RowSet> {
In the next section, we will break down the implementation of this function.
Evaluating Arguments
Now that we have reduced the problem to a single set of query variables, we must evaluate any collection arguments, and in turn, evaluate the collection of rows that we will be working with.
From there, we will be able to apply predicates, sort and paginate rows. But one step at a time!
The first step is to evaluate each argument, which the execute_query_with_variables
function does by delegating to the eval_argument
function:
fn execute_query_with_variables(
collection: &models::CollectionName,
arguments: &BTreeMap<models::ArgumentName, models::Argument>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
query: &models::Query,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
) -> Result<models::RowSet> {
let mut argument_values = BTreeMap::new();
for (argument_name, argument_value) in arguments {
if argument_values
.insert(
argument_name.clone(),
eval_argument(variables, argument_value)?,
)
.is_some()
{
return Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "duplicate argument names".into(),
details: serde_json::Value::Null,
}),
));
}
}
let collection = get_collection_by_name(collection, &argument_values, state)?;
execute_query(
collection_relationships,
variables,
state,
query,
Root::Reset,
collection,
)
}
Once this is complete, and we have a collection of evaluated argument_values
, we can delegate to the get_collection_by_name
function. This function performs the work of computing the full collection, by pattern matching on the name of the collection:
fn get_collection_by_name(
collection_name: &models::CollectionName,
arguments: &BTreeMap<models::ArgumentName, serde_json::Value>,
state: &AppState,
) -> Result<Vec<Row>> {
match collection_name.as_str() {
"articles" => Ok(state.articles.values().cloned().collect()),
"authors" => Ok(state.authors.values().cloned().collect()),
"institutions" => Ok(state.institutions.values().cloned().collect()),
"countries" => Ok(state.countries.values().cloned().collect()),
"articles_by_author" => {
let author_id = arguments.get("author_id").ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "missing argument author_id".into(),
details: serde_json::Value::Null,
}),
))?;
let author_id_int: i32 = author_id
.as_i64()
.ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "author_id must be an integer".into(),
details: serde_json::Value::Null,
}),
))?
.try_into()
.map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "author_id out of range".into(),
details: serde_json::Value::Null,
}),
)
})?;
let mut articles_by_author = vec![];
for article in state.articles.values() {
let article_author_id = article.get("author_id").ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "author_id not found".into(),
details: serde_json::Value::Null,
}),
))?;
let article_author_id_int: i32 = article_author_id
.as_i64()
.ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "author_id must be an integer".into(),
details: serde_json::Value::Null,
}),
))?
.try_into()
.map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "author_id out of range".into(),
details: serde_json::Value::Null,
}),
)
})?;
if article_author_id_int == author_id_int {
articles_by_author.push(article.clone());
}
}
Ok(articles_by_author)
}
"latest_article_id" => {
let latest_id = state.articles.keys().max();
let latest_id_value = serde_json::to_value(latest_id).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "unable to encode value".into(),
details: serde_json::Value::Null,
}),
)
})?;
Ok(vec![BTreeMap::from_iter([(
"__value".into(),
latest_id_value,
)])])
}
"latest_article" => {
let latest = state
.articles
.iter()
.max_by_key(|(&id, _)| id)
.map(|(_, article)| article);
let latest_value = serde_json::to_value(latest).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "unable to encode value".into(),
details: serde_json::Value::Null,
}),
)
})?;
Ok(vec![BTreeMap::from_iter([(
"__value".into(),
latest_value,
)])])
}
_ => Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid collection name".into(),
details: serde_json::Value::Null,
}),
)),
}
}
Note 1: the articles_by_author
collection is the only example here which has to apply any arguments. It is provided as an example of a collection which accepts an author_id
argument, and it must validate that the argument is present, and that it is an integer.
Note 2: the latest_article_id
collection is provided as an example of a function. It is a collection like all the others, but must follow the rules for functions: it must consist of a single row, with a single column named __value
.
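The single-row, single-column rule for functions can be illustrated with plain std types. The `Row` alias below is a simplified stand-in (the reference implementation maps field names to JSON values, not strings), and `function_result` is a hypothetical helper, not part of the reference implementation:

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the reference implementation's Row type,
// which actually maps field names to JSON values.
type Row = BTreeMap<String, String>;

// A function's result set must be a single row with a single column
// named "__value" holding the function's return value.
fn function_result(value: String) -> Vec<Row> {
    vec![BTreeMap::from_iter([("__value".to_string(), value)])]
}

fn main() {
    let rows = function_result("42".to_string());
    assert_eq!(rows.len(), 1);
    assert_eq!(rows[0].get("__value"), Some(&"42".to_string()));
}
```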
Once we have computed the full collection, we can move onto evaluating the query in the context of that collection, using the execute_query
function:
fn execute_query(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
query: &models::Query,
root: Root,
collection: Vec<Row>,
) -> Result<models::RowSet> {
In the next section, we will break down the implementation of execute_query
.
Executing Queries
In this section, we will break down the implementation of the execute_query
function:
fn execute_query(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
query: &models::Query,
root: Root,
collection: Vec<Row>,
) -> Result<models::RowSet> {
At this point, we have already computed the full collection, which is passed via the collection
argument. Now, we need to evaluate the Query
in the context of this collection.
The Query
describes the predicate which should be applied to all rows, the sort order, pagination options, along with any aggregates to compute and fields to return.
The first step is to sort the collection.
Note: we could also start by filtering, and then sort the filtered rows. Which is more efficient depends on the data and the query, and choosing between these approaches would be the job of a query planner in a real database engine. However, this is out of scope here, so we make an arbitrary choice, and sort the data first.
Sorting
The first step is to sort the rows in the full collection:
let sorted = sort(
collection_relationships,
variables,
state,
collection,
&query.order_by,
)?;
The Query
object defines the sort order in terms of a list of OrderByElement
s. See the sorting specification for details on how this ought to be interpreted.
The sort
function
The sort
function implements a simple insertion sort, computing the ordering for each pair of rows, and inserting each row at the correct place:
fn sort(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
collection: Vec<Row>,
order_by: &Option<models::OrderBy>,
) -> Result<Vec<Row>> {
match order_by {
None => Ok(collection),
Some(order_by) => {
let mut copy = vec![];
for item_to_insert in collection {
let mut index = 0;
for other in &copy {
if let Ordering::Greater = eval_order_by(
collection_relationships,
variables,
state,
order_by,
other,
&item_to_insert,
)? {
break;
}
index += 1;
}
copy.insert(index, item_to_insert);
}
Ok(copy)
}
}
}
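The same insertion-sort shape can be exercised in isolation with plain std types. Here the fallible comparator stands in for eval_order_by; the function name and signature are illustrative only:

```rust
use std::cmp::Ordering;

// Insert each item at the first position where the comparator reports
// that the existing element is Greater. Equal elements do not trigger
// the break, so equal items keep their relative order (a stable sort).
fn insertion_sort_by<T, E>(
    items: Vec<T>,
    cmp: impl Fn(&T, &T) -> Result<Ordering, E>,
) -> Result<Vec<T>, E> {
    let mut copy: Vec<T> = vec![];
    for item_to_insert in items {
        let mut index = 0;
        for other in &copy {
            if let Ordering::Greater = cmp(other, &item_to_insert)? {
                break;
            }
            index += 1;
        }
        copy.insert(index, item_to_insert);
    }
    Ok(copy)
}

fn main() {
    let sorted =
        insertion_sort_by(vec![3, 1, 2], |a, b| Ok::<_, ()>(a.cmp(b))).unwrap();
    assert_eq!(sorted, vec![1, 2, 3]);
}
```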
sort
delegates to the eval_order_by
function to compute the ordering between two rows:
Evaluating the Ordering
To compare two rows, the eval_order_by
function computes each OrderByElement
in turn, and compares the rows in order, or in reverse order, depending on whether the ordering is ascending or descending.
The function returns the first Ordering
which makes the two rows distinct (if any):
fn eval_order_by(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
order_by: &models::OrderBy,
t1: &Row,
t2: &Row,
) -> Result<Ordering> {
let mut result = Ordering::Equal;
for element in &order_by.elements {
let v1 = eval_order_by_element(collection_relationships, variables, state, element, t1)?;
let v2 = eval_order_by_element(collection_relationships, variables, state, element, t2)?;
let x = match element.order_direction {
models::OrderDirection::Asc => compare(v1, v2)?,
models::OrderDirection::Desc => compare(v2, v1)?,
};
result = result.then(x);
}
Ok(result)
}
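The Ordering::then chaining used above can be seen standalone: the first element ordering that distinguishes the rows wins, and a descending direction simply swaps the operands. This std-only sketch uses local stand-in types (tuples for rows, a local Direction enum), not the ndc_models definitions:

```rust
use std::cmp::Ordering;

#[derive(Clone, Copy)]
enum Direction {
    Asc,
    Desc,
}

// Compare two rows (represented as slices of integers) element by
// element; the first non-Equal ordering wins, via Ordering::then.
fn order_by(elements: &[(usize, Direction)], t1: &[i32], t2: &[i32]) -> Ordering {
    let mut result = Ordering::Equal;
    for &(idx, dir) in elements {
        let x = match dir {
            Direction::Asc => t1[idx].cmp(&t2[idx]),
            Direction::Desc => t2[idx].cmp(&t1[idx]),
        };
        result = result.then(x);
    }
    result
}

fn main() {
    // Sort by column 0 ascending, then column 1 descending.
    let elements = [(0, Direction::Asc), (1, Direction::Desc)];
    assert_eq!(order_by(&elements, &[1, 5], &[1, 9]), Ordering::Greater);
    assert_eq!(order_by(&elements, &[0, 5], &[1, 9]), Ordering::Less);
}
```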
The ordering for a single OrderByElement
is computed by the eval_order_by_element
function.
We won't cover every branch of this function in detail here, but it works by pattern matching on the type of ordering being used.
Ordering by a column
As an example, consider the eval_order_by_column
function, which evaluates ordering by a column. It computes the target table, possibly by traversing relationships using eval_path
(we will cover this function later when we cover relationships), and validates that we computed a single row before selecting the value of the chosen column.
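A minimal sketch of that validation step, using local stand-in types rather than the ndc_models definitions (the relationship traversal is elided, and the names and signatures here are illustrative, not the reference implementation's):

```rust
use std::collections::BTreeMap;

// Stand-in for the reference implementation's Row type.
type Row = BTreeMap<String, i32>;

// Sketch: after any relationship traversal (elided here), ordering by a
// column requires exactly one target row, then selects the column value.
fn eval_order_by_column(rows: Vec<Row>, name: &str) -> Result<i32, String> {
    match rows.as_slice() {
        [row] => row
            .get(name)
            .copied()
            .ok_or_else(|| format!("invalid column name: {name}")),
        _ => Err("expected exactly one row".to_string()),
    }
}

fn main() {
    let row = BTreeMap::from_iter([("id".to_string(), 1)]);
    assert_eq!(eval_order_by_column(vec![row], "id"), Ok(1));
    assert!(eval_order_by_column(vec![], "id").is_err());
}
```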
Now that we have sorted the full collection, we can apply the predicate to filter down the collection of rows. We will cover this in the next section.
Filtering
The next step is to filter the rows based on the provided predicate expression:
let filtered: Vec<Row> = (match &query.predicate {
None => Ok(sorted),
Some(expr) => {
let mut filtered: Vec<Row> = vec![];
for item in sorted {
let scopes: Vec<&Row> = match root {
Root::PushCurrentRow(scopes) => {
let mut scopes = scopes.to_vec();
scopes.push(&item);
scopes
}
Root::Reset => vec![&item],
};
if eval_expression(
collection_relationships,
variables,
state,
expr,
&scopes,
&item,
)? {
filtered.push(item);
}
}
Ok(filtered)
}
})?;
As we can see, the function delegates to the eval_expression
function in order to evaluate the predicate on each row.
Evaluating expressions
The eval_expression
function evaluates a predicate by pattern matching on the type of the expression expr
, and returns a boolean value indicating whether the current row matches the predicate:
fn eval_expression(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
expr: &models::Expression,
scopes: &[&Row],
item: &Row,
) -> Result<bool> {
Logical expressions
The first category of expression types are the logical expressions - and (conjunction), or (disjunction) and not (negation) - whose evaluators are straightforward:
- To evaluate a conjunction/disjunction of subexpressions, we evaluate all of the subexpressions to booleans, and find the conjunction/disjunction of those boolean values respectively.
- To evaluate the negation of a subexpression, we evaluate the subexpression to a boolean value, and negate the boolean.
match expr {
models::Expression::And { expressions } => {
for expr in expressions {
if !eval_expression(
collection_relationships,
variables,
state,
expr,
scopes,
item,
)? {
return Ok(false);
}
}
Ok(true)
}
models::Expression::Or { expressions } => {
for expr in expressions {
if eval_expression(
collection_relationships,
variables,
state,
expr,
scopes,
item,
)? {
return Ok(true);
}
}
Ok(false)
}
models::Expression::Not { expression } => {
let b = eval_expression(
collection_relationships,
variables,
state,
expression,
scopes,
item,
)?;
Ok(!b)
}
Unary Operators
The next category of expressions are the unary operators. The only unary operator is the IsNull
operator, which is evaluated by evaluating the operator's comparison target, and then comparing the result to null
:
models::Expression::UnaryComparisonOperator { column, operator } => match operator {
models::UnaryComparisonOperator::IsNull => {
let vals = eval_comparison_target(
collection_relationships,
variables,
state,
column,
item,
)?;
Ok(vals.is_null())
}
},
To evaluate the comparison target, we delegate to the eval_comparison_target
function, which pattern matches:
- A column is evaluated using the
eval_column_field_path
function. - An aggregate is evaluated using
eval_path
(which we will talk more about when we get to relationships) andeval_aggregate
(which we will talk about when we get to aggregates).
fn eval_comparison_target(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
target: &models::ComparisonTarget,
item: &Row,
) -> Result<serde_json::Value> {
match target {
models::ComparisonTarget::Column {
name,
arguments,
field_path,
} => eval_column_field_path(variables, item, name, field_path.as_deref(), arguments),
models::ComparisonTarget::Aggregate { aggregate, path } => {
let rows: Vec<Row> = eval_path(
collection_relationships,
variables,
state,
path,
&[item.clone()],
)?;
eval_aggregate(variables, aggregate, &rows)
}
}
}
Binary Operators
The next category of expressions are the binary operators. Binary operators can be standard or custom.
Binary operators are evaluated by evaluating their comparison target and comparison value, and comparing them using a specific comparison operator:
models::Expression::BinaryComparisonOperator {
column,
operator,
value,
} => {
let left_val =
eval_comparison_target(collection_relationships, variables, state, column, item)?;
let right_vals = eval_comparison_value(
collection_relationships,
variables,
value,
state,
scopes,
item,
)?;
eval_comparison_operator(operator, &left_val, &right_vals)
}
The standard binary comparison operators are:
- The equality operator,
equal
, - The set membership operator,
in
, - Comparison operators
less_than
,less_than_or_equal
,greater_than
, andgreater_than_or_equal
, - String comparisons
contains
,icontains
,starts_with
,istarts_with
,ends_with
,iends_with
andlike
.
equal
is evaluated by evaluating its comparison target and comparison value, and comparing them for equality:
"eq" => {
for right_val in right_vals {
if left_val == right_val {
return Ok(true);
}
}
Ok(false)
}
The ordering comparisons (less_than
, less_than_or_equal
, greater_than
, and greater_than_or_equal
) depend on their type, so first we need to determine the type of the comparison target and dispatch on it to eval_partial_ord_comparison
to perform the actual comparisons:
"gt" | "lt" | "gte" | "lte" => {
if let Some(column_int) = left_val.as_i64() {
eval_partial_ord_comparison(operator, &column_int, right_vals, |right_val| {
right_val.as_i64().ok_or_else(|| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "value is not an integer".into(),
details: serde_json::Value::Null,
}),
)
})
})
} else if let Some(column_float) = left_val.as_f64() {
eval_partial_ord_comparison(operator, &column_float, right_vals, |right_val| {
right_val.as_f64().ok_or_else(|| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "value is not a float".into(),
details: serde_json::Value::Null,
}),
)
})
})
} else if let Some(column_string) = left_val.as_str() {
eval_partial_ord_comparison(operator, &column_string, right_vals, |right_val| {
right_val.as_str().ok_or_else(|| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "value is not a string".into(),
details: serde_json::Value::Null,
}),
)
})
})
} else {
Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: format!(
"column does not support comparison operator {operator}"
),
details: serde_json::Value::Null,
}),
))
}
}
fn eval_partial_ord_comparison<'a, T, FConvert>(
operator: &ndc_models::ComparisonOperatorName,
left_value: &T,
right_values: &'a [serde_json::Value],
convert: FConvert,
) -> Result<bool>
where
T: PartialOrd,
FConvert: Fn(&'a serde_json::Value) -> Result<T>,
{
for right_val in right_values {
let right_val = convert(right_val)?;
let op = operator.as_str();
if op == "gt" && *left_value > right_val
|| op == "lt" && *left_value < right_val
|| op == "gte" && *left_value >= right_val
|| op == "lte" && *left_value <= right_val
{
return Ok(true);
}
}
Ok(false)
}
The in
operator is evaluated by evaluating its comparison target, and all of its comparison values, and testing whether the evaluated target appears in the list of evaluated values:
"in" => {
for comparison_value in right_vals {
let right_vals = comparison_value.as_array().ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "expected array".into(),
details: serde_json::Value::Null,
}),
))?;
for right_val in right_vals {
if left_val == right_val {
return Ok(true);
}
}
}
Ok(false)
}
String comparison operators are evaluated similarly:
"contains" | "icontains" | "starts_with" | "istarts_with" | "ends_with" | "iends_with" => {
if let Some(left_str) = left_val.as_str() {
for right_val in right_vals {
let right_str = right_val.as_str().ok_or_else(|| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "value is not a string".into(),
details: serde_json::Value::Null,
}),
)
})?;
let op = operator.as_str();
let left_str_lower = left_str.to_lowercase();
let right_str_lower = right_str.to_lowercase();
if op == "contains" && left_str.contains(right_str)
|| op == "icontains" && left_str_lower.contains(&right_str_lower)
|| op == "starts_with" && left_str.starts_with(right_str)
|| op == "istarts_with" && left_str_lower.starts_with(&right_str_lower)
|| op == "ends_with" && left_str.ends_with(right_str)
|| op == "iends_with" && left_str_lower.ends_with(&right_str_lower)
{
return Ok(true);
}
}
Ok(false)
} else {
Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: format!(
"comparison operator {operator} is only supported on strings"
),
details: serde_json::Value::Null,
}),
))
}
}
The reference implementation provides a single custom binary operator as an example, which is the like
operator on strings:
"like" => {
for regex_val in right_vals {
let column_str = left_val.as_str().ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "column is not a string".into(),
details: serde_json::Value::Null,
}),
))?;
let regex_str = regex_val.as_str().ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "regex is not a string".into(),
details: serde_json::Value::Null,
}),
))?;
let regex = Regex::new(regex_str).map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid regular expression".into(),
details: serde_json::Value::Null,
}),
)
})?;
if regex.is_match(column_str) {
return Ok(true);
}
}
Ok(false)
}
Scalar Array Comparison Operators
The next category of expressions are the scalar array comparison operators. First we must evaluate the comparison target and then we can evaluate the array comparison itself.
models::Expression::ArrayComparison { column, comparison } => {
let left_val =
eval_comparison_target(collection_relationships, variables, state, column, item)?;
eval_array_comparison(
collection_relationships,
variables,
&left_val,
comparison,
state,
scopes,
item,
)
}
Evaluating the array comparison is done using eval_array_comparison
, which handles the two standard array operators: contains
and is_empty
.
contains
simply evaluates the comparison value and then tests whether the array from the comparison target contains any of the comparison values.
models::ArrayComparison::Contains { value } => {
let right_vals = eval_comparison_value(
collection_relationships,
variables,
value,
state,
scopes,
item,
)?;
for right_val in right_vals {
if left_val_array.contains(&right_val) {
return Ok(true);
}
}
Ok(false)
}
is_empty
simply checks whether the comparison target array is empty:
models::ArrayComparison::IsEmpty => Ok(left_val_array.is_empty()),
EXISTS
expressions
An EXISTS
expression is evaluated by recursively evaluating a Query
on another source of rows, and testing to see whether the resulting RowSet
contains any rows.
models::Expression::Exists {
in_collection,
predicate,
} => {
let query = models::Query {
aggregates: None,
fields: Some(IndexMap::new()),
limit: None,
offset: None,
order_by: None,
predicate: predicate.clone().map(|e| *e),
groups: None,
};
let collection = eval_in_collection(
collection_relationships,
item,
variables,
state,
in_collection,
)?;
let row_set = execute_query(
collection_relationships,
variables,
state,
&query,
Root::PushCurrentRow(scopes),
collection,
)?;
let rows: Vec<IndexMap<_, _>> = row_set.rows.ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "expected 'rows'".into(),
details: serde_json::Value::Null,
}),
))?;
Ok(!rows.is_empty())
}
Note in particular, we push the current row onto the stack of scopes before executing the inner query, so that references to columns in those scopes can be resolved correctly.
The source of the rows is defined by `in_collection`, which we evaluate with `eval_in_collection` in order to get the rows to evaluate the inner query against. There are four different sources of rows.
ExistsInCollection::Related
The first source of rows is a related collection. We first find the specified relationship, and then use `eval_path_element` to get the rows across that relationship from the current row:
models::ExistsInCollection::Related {
field_path,
relationship,
arguments,
} => {
let relationship = collection_relationships.get(relationship).ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "relationship is undefined".into(),
details: serde_json::Value::Null,
}),
))?;
let source = vec![item.clone()];
eval_path_element(
collection_relationships,
variables,
state,
relationship,
arguments,
&source,
field_path.as_deref(),
&None,
)
}
ExistsInCollection::Unrelated
The second source of rows is an unrelated collection. This simply returns all rows in that collection by using `get_collection_by_name`:
models::ExistsInCollection::Unrelated {
collection,
arguments,
} => {
let arguments = arguments
.iter()
.map(|(k, v)| Ok((k.clone(), eval_relationship_argument(variables, item, v)?)))
.collect::<Result<BTreeMap<_, _>>>()?;
get_collection_by_name(collection, &arguments, state)
}
ExistsInCollection::NestedCollection
The third source of rows is a nested collection. This allows us to source our rows from a nested array of objects in a column on the current row. We do this using `eval_column_field_path`.
ndc_models::ExistsInCollection::NestedCollection {
column_name,
field_path,
arguments,
} => {
let value =
eval_column_field_path(variables, item, column_name, Some(field_path), arguments)?;
serde_json::from_value(value).map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "nested collection must be an array of objects".into(),
details: serde_json::Value::Null,
}),
)
})
}
ExistsInCollection::NestedScalarCollection
The fourth source of rows is a nested scalar collection. This allows us to read a nested array of scalars from a column on the current row (using `eval_column_field_path`) and create a virtual row for each element in the array, placing the array element into a `__value` field on the row:
models::ExistsInCollection::NestedScalarCollection {
field_path,
column_name,
arguments,
} => {
let value =
eval_column_field_path(variables, item, column_name, Some(field_path), arguments)?;
let value_array = value.as_array().ok_or_else(|| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "nested scalar collection column value must be an array".into(),
details: serde_json::Value::Null,
}),
)
})?;
let wrapped_array_values = value_array
.iter()
.map(|v| BTreeMap::from([(models::FieldName::from("__value"), v.clone())]))
.collect();
Ok(wrapped_array_values)
}
Pagination
Once the irrelevant rows have been filtered out, the `execute_query` function applies the `limit` and `offset` arguments by calling the `paginate` function:
let paginated: Vec<Row> = paginate(filtered.into_iter(), query.limit, query.offset);
The `paginate` function is implemented using the `skip` and `take` functions on iterators:
fn paginate<I: Iterator>(collection: I, limit: Option<u32>, offset: Option<u32>) -> Vec<I::Item> {
let start = offset.unwrap_or(0).try_into().unwrap();
match limit {
Some(n) => collection.skip(start).take(n.try_into().unwrap()).collect(),
None => collection.skip(start).collect(),
}
}
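To make the `skip`/`take` behaviour concrete, here is the same function exercised against a small vector (the function body is copied verbatim from above; the data is made up):

```rust
fn paginate<I: Iterator>(collection: I, limit: Option<u32>, offset: Option<u32>) -> Vec<I::Item> {
    let start = offset.unwrap_or(0).try_into().unwrap();
    match limit {
        Some(n) => collection.skip(start).take(n.try_into().unwrap()).collect(),
        None => collection.skip(start).collect(),
    }
}

fn main() {
    let rows = vec![1, 2, 3, 4, 5];
    // offset 1, limit 2: skip the first row, keep the next two
    assert_eq!(paginate(rows.iter().copied(), Some(2), Some(1)), vec![2, 3]);
    // no limit: everything after the offset
    assert_eq!(paginate(rows.iter().copied(), None, Some(3)), vec![4, 5]);
    // offset past the end: an empty result, not an error
    assert_eq!(paginate(rows.into_iter(), Some(10), Some(99)), Vec::<i32>::new());
}
```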
Aggregates
Now that we have computed the sorted, filtered, and paginated rows of the original collection, we can compute any aggregates over those rows.
Each aggregate is computed in turn by the `eval_aggregate` function, and added to the list of all aggregates to return:
let aggregates = query
.aggregates
.as_ref()
.map(|aggregates| eval_aggregates(variables, aggregates, &paginated))
.transpose()?;
The `eval_aggregate` function works by pattern matching on the type of the aggregate being computed:
- A `star_count` aggregate simply counts all rows,
- A `column_count` aggregate computes the subset of rows where the named column is non-null, and returns the count of only those rows,
- A `single_column` aggregate is computed by delegating to the `eval_aggregate_function` function, which computes a custom aggregate operator over the values of the selected column taken from all rows.
fn eval_aggregate(
variables: &BTreeMap<models::VariableName, serde_json::Value>,
aggregate: &models::Aggregate,
rows: &[Row],
) -> Result<serde_json::Value> {
match aggregate {
models::Aggregate::StarCount {} => Ok(serde_json::Value::from(rows.len())),
models::Aggregate::ColumnCount {
column,
arguments,
field_path,
distinct,
} => {
let values = rows
.iter()
.map(|row| {
eval_column_field_path(variables, row, column, field_path.as_deref(), arguments)
})
.collect::<Result<Vec<_>>>()?;
let non_null_values = values.iter().filter(|value| !value.is_null());
let agg_value = if *distinct {
non_null_values
.map(|value| {
serde_json::to_string(value).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "unable to encode value".into(),
details: serde_json::Value::Null,
}),
)
})
})
.collect::<Result<HashSet<_>>>()?
.len()
} else {
non_null_values.count()
};
serde_json::to_value(agg_value).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "unable to encode value".into(),
details: serde_json::Value::Null,
}),
)
})
}
models::Aggregate::SingleColumn {
column,
arguments,
field_path,
function,
} => {
let values = rows
.iter()
.map(|row| {
eval_column_field_path(variables, row, column, field_path.as_deref(), arguments)
})
.collect::<Result<Vec<_>>>()?;
eval_aggregate_function(function, &values)
}
}
}
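The `column_count` logic can be illustrated in isolation. This sketch uses `Option<i64>` as a stand-in for nullable JSON values (an assumption made for brevity): nulls are dropped first, and `distinct` deduplicates what remains before counting:

```rust
use std::collections::HashSet;

// Sketch of the `column_count` semantics: count non-null values,
// optionally deduplicating them first.
fn column_count(values: &[Option<i64>], distinct: bool) -> usize {
    let non_null = values.iter().filter_map(|v| v.as_ref());
    if distinct {
        non_null.collect::<HashSet<_>>().len()
    } else {
        non_null.count()
    }
}

fn main() {
    let values = [Some(1), None, Some(2), Some(2), None];
    assert_eq!(column_count(&values, false), 3); // nulls are excluded
    assert_eq!(column_count(&values, true), 2); // duplicates collapse too
}
```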
The `eval_aggregate_function` function discovers the type of data being aggregated and then dispatches to a specific function that implements aggregation for that type.
fn eval_aggregate_function(
function: &models::AggregateFunctionName,
values: &[serde_json::Value],
) -> Result<serde_json::Value> {
if let Some(first_value) = values.iter().next() {
if first_value.is_i64() {
let int_values = values
.iter()
.map(|value| {
value.as_i64().ok_or_else(|| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "column is not an integer".into(),
details: serde_json::Value::Null,
}),
)
})
})
.collect::<Result<Vec<i64>>>()?;
eval_integer_aggregate_function(function, int_values)
}
...
For example, integer aggregation is implemented by `eval_integer_aggregate_function`. In it, the `min`, `max`, `sum`, and `avg` functions are implemented.
#[allow(clippy::cast_precision_loss)]
fn eval_integer_aggregate_function(
function: &models::AggregateFunctionName,
int_values: Vec<i64>,
) -> Result<serde_json::Value> {
match function.as_str() {
"min" => Ok(serde_json::Value::from(int_values.into_iter().min())),
"max" => Ok(serde_json::Value::from(int_values.into_iter().max())),
"sum" => Ok(serde_json::Value::from(int_values.into_iter().sum::<i64>())),
"avg" => {
let count: f64 = int_values.len() as f64; // Potential precision loss (u64 -> f64)
let sum: f64 = int_values.into_iter().sum::<i64>() as f64; // Potential precision loss (i64 -> f64)
let avg = sum / count;
Ok(serde_json::Value::from(avg))
}
_ => Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid integer aggregation function".into(),
details: serde_json::Value::Null,
}),
)),
}
}
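As a quick check of the arithmetic, here is the `avg` branch in isolation, with the same `i64`-to-`f64` casts. Unlike the excerpt above, this sketch guards against an empty input; the reference implementation never hits that case, since `eval_aggregate_function` only dispatches when a first value exists:

```rust
// The `avg` branch in isolation: sum and count are cast to f64 before
// dividing, which can lose precision for very large inputs.
#[allow(clippy::cast_precision_loss)]
fn integer_avg(int_values: &[i64]) -> Option<f64> {
    if int_values.is_empty() {
        return None; // avoid producing NaN on an empty input
    }
    let count = int_values.len() as f64;
    let sum = int_values.iter().sum::<i64>() as f64;
    Some(sum / count)
}

fn main() {
    assert_eq!(integer_avg(&[1, 2, 3, 4]), Some(2.5));
    assert_eq!(integer_avg(&[]), None);
}
```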
Field Selection
In addition to computing aggregates, we can also return fields selected directly from the rows themselves.
This is done by mapping over the computed rows, and using the `eval_field` function to evaluate each selected field in turn:
let rows = query
.fields
.as_ref()
.map(|fields| {
let mut rows: Vec<IndexMap<models::FieldName, models::RowFieldValue>> = vec![];
for item in &paginated {
let row = eval_row(fields, collection_relationships, variables, state, item)?;
rows.push(row);
}
Ok(rows)
})
.transpose()?;
The `eval_field` function works by pattern matching on the field type:
- A `column` is selected using the `eval_column` function (or `eval_nested_field` if there are nested fields to fetch),
- A `relationship` field is selected by evaluating the related collection using `eval_path_element` (we will cover this in the next section), and then recursively executing a query using `execute_query`:
fn eval_field(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
field: &models::Field,
item: &Row,
) -> Result<models::RowFieldValue> {
match field {
models::Field::Column {
column,
fields,
arguments,
} => {
let col_val = eval_column(variables, item, column, arguments)?;
match fields {
None => Ok(models::RowFieldValue(col_val)),
Some(nested_field) => eval_nested_field(
collection_relationships,
variables,
state,
col_val,
nested_field,
),
}
}
models::Field::Relationship {
relationship,
arguments,
query,
} => {
let relationship = collection_relationships.get(relationship).ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "relationship is undefined".into(),
details: serde_json::Value::Null,
}),
))?;
let source = vec![item.clone()];
let collection = eval_path_element(
collection_relationships,
variables,
state,
relationship,
arguments,
&source,
None,
&None,
)?;
let row_set = execute_query(
collection_relationships,
variables,
state,
query,
Root::Reset,
collection,
)?;
let row_set_json = serde_json::to_value(row_set).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode rowset".into(),
details: serde_json::Value::Null,
}),
)
})?;
Ok(models::RowFieldValue(row_set_json))
}
}
}
Grouping
In addition to field selection and computing aggregates, we also need to return the results of any requested grouping operations.
This is done by delegating to the `eval_groups` function:
let groups = query
.groups
.as_ref()
.map(|grouping| {
eval_groups(
collection_relationships,
variables,
state,
grouping,
&paginated,
)
})
.transpose()?;
`eval_groups` takes a set of rows, and proceeds largely like `execute_query` itself.
First, rows are partitioned into groups:
fn eval_groups(
collection_relationships: &BTreeMap<models::RelationshipName, ndc_models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
grouping: &ndc_models::Grouping,
paginated: &[Row],
) -> Result<Vec<ndc_models::Group>> {
let chunks: Vec<Chunk> = paginated
.iter()
.chunk_by(|row| {
eval_dimensions(
collection_relationships,
variables,
state,
row,
&grouping.dimensions,
)
.expect("cannot eval dimensions")
})
.into_iter()
.map(|(dimensions, rows)| Chunk {
dimensions,
rows: rows.cloned().collect(),
})
.collect();
The `eval_dimensions` function computes a vector of dimensions for each row:
fn eval_dimensions(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
row: &Row,
dimensions: &[ndc_models::Dimension],
) -> Result<Vec<serde_json::Value>> {
let mut values = vec![];
for dimension in dimensions {
let value = eval_dimension(collection_relationships, variables, state, row, dimension)?;
values.push(value);
}
Ok(values)
}
The only type of dimension we need to handle is a column. First the value of the column is computed by delegating to `eval_column_at_path`, and then any extraction function is evaluated using the `eval_extraction` function:
fn eval_dimension(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
row: &Row,
dimension: &models::Dimension,
) -> Result<serde_json::Value> {
match dimension {
models::Dimension::Column {
column_name,
arguments,
field_path,
path,
extraction,
} => {
let value = eval_column_at_path(
collection_relationships,
variables,
state,
row,
path,
column_name,
arguments,
field_path.as_deref(),
)?;
eval_extraction(extraction, value)
}
}
}
Next, the partitions are sorted, using the `group_sort` function, which is very similar to its row-based counterpart `sort`:
let sorted = group_sort(variables, chunks, &grouping.order_by)?;
Next, groups are aggregated and filtered:
let mut groups: Vec<models::Group> = vec![];
for chunk in &sorted {
let dimensions = chunk.dimensions.clone();
let mut aggregates: IndexMap<models::FieldName, serde_json::Value> = IndexMap::new();
for (aggregate_name, aggregate) in &grouping.aggregates {
aggregates.insert(
aggregate_name.clone(),
eval_aggregate(variables, aggregate, &chunk.rows)?,
);
}
if let Some(predicate) = &grouping.predicate {
if eval_group_expression(variables, predicate, &chunk.rows)? {
groups.push(models::Group {
dimensions: dimensions.clone(),
aggregates,
});
}
} else {
groups.push(models::Group {
dimensions: dimensions.clone(),
aggregates,
});
}
}
The `eval_group_expression` function is also very similar to the `eval_expression` function, which performs a similar operation on rows.
Finally, the groups are paginated and returned:
Finally, the groups are paginated and returned:
let paginated: Vec<models::Group> =
paginate(groups.into_iter(), grouping.limit, grouping.offset);
Ok(paginated)
}
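Note that `chunk_by` (from the itertools crate) groups *consecutive* rows that share the same dimension values. A stdlib sketch of that partitioning step, with rows simplified to `(key, value)` pairs for illustration, might look like:

```rust
// Stdlib sketch of the partitioning step: consecutive rows with equal
// dimension values are collected into one chunk, mirroring the
// consecutive-run semantics of itertools' `chunk_by`.
fn chunk_consecutive(rows: &[(&str, i64)]) -> Vec<(String, Vec<i64>)> {
    let mut chunks: Vec<(String, Vec<i64>)> = vec![];
    for (key, value) in rows {
        match chunks.last_mut() {
            Some((last_key, values)) if last_key.as_str() == *key => values.push(*value),
            _ => chunks.push((key.to_string(), vec![*value])),
        }
    }
    chunks
}

fn main() {
    let rows = [("a", 1), ("a", 2), ("b", 3)];
    let chunks = chunk_consecutive(&rows);
    assert_eq!(chunks.len(), 2);
    assert_eq!(chunks[0], ("a".to_string(), vec![1, 2]));
    assert_eq!(chunks[1], ("b".to_string(), vec![3]));
}
```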
Relationships
Relationships appear in many places in the `QueryRequest`, but are always computed using the `eval_path` function.
`eval_path` accepts a list of `PathElement`s, each of which describes the traversal of a single edge of the collection-relationship graph. `eval_path` computes the collection at the final node of this path through the graph.
It does this by successively evaluating each edge in turn using the `eval_path_element` function:
fn eval_path(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
path: &[models::PathElement],
items: &[Row],
) -> Result<Vec<Row>> {
let mut result: Vec<Row> = items.to_vec();
for path_element in path {
let relationship = collection_relationships
.get(&path_element.relationship)
.ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid relationship name in path".into(),
details: serde_json::Value::Null,
}),
))?;
result = eval_path_element(
collection_relationships,
variables,
state,
relationship,
&path_element.arguments,
&result,
path_element.field_path.as_deref(),
&path_element.predicate,
)?;
}
Ok(result)
}
The `eval_path_element` function computes a collection from a single relationship, one source row at a time. If a `field_path` exists, the source row is replaced by descending through the nested objects as specified by the field path (using `eval_row_field_path`). Once this is done, all relationship arguments are evaluated, and the target collection is computed using `get_collection_by_name`. Finally, the column mapping is evaluated on any resulting rows.
fn eval_path_element(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
relationship: &models::Relationship,
arguments: &BTreeMap<models::ArgumentName, models::RelationshipArgument>,
source: &[Row],
field_path: Option<&[models::FieldName]>,
predicate: &Option<Box<models::Expression>>,
) -> Result<Vec<Row>> {
let mut matching_rows: Vec<Row> = vec![];
// Note: Join strategy
//
// Rows can be related in two ways: 1) via a column mapping, and
// 2) via collection arguments. Because collection arguments can be computed
// using the columns on the source side of a relationship, in general
// we need to compute the target collection once for each source row.
// This join strategy can result in some target rows appearing in the
// resulting row set more than once, if two source rows are both related
// to the same target row.
//
// In practice, this is not an issue, either because a) the relationship
// is computed in the course of evaluating a predicate, and all predicates are
// implicitly or explicitly existentially quantified, or b) if the
// relationship is computed in the course of evaluating an ordering, the path
// should consist of all object relationships, and possibly terminated by a
// single array relationship, so there should be no double counting.
for src_row in source {
let src_row = eval_row_field_path(field_path, src_row)?;
let mut all_arguments = BTreeMap::new();
for (argument_name, argument_value) in &relationship.arguments {
if all_arguments
.insert(
argument_name.clone(),
eval_relationship_argument(variables, &src_row, argument_value)?,
)
.is_some()
{
return Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "duplicate argument names".into(),
details: serde_json::Value::Null,
}),
));
}
}
for (argument_name, argument_value) in arguments {
if all_arguments
.insert(
argument_name.clone(),
eval_relationship_argument(variables, &src_row, argument_value)?,
)
.is_some()
{
return Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "duplicate argument names".into(),
details: serde_json::Value::Null,
}),
));
}
}
let target =
get_collection_by_name(&relationship.target_collection, &all_arguments, state)?;
for tgt_row in &target {
if eval_column_mapping(relationship, &src_row, tgt_row)?
&& if let Some(expression) = predicate {
eval_expression(
collection_relationships,
variables,
state,
expression,
&[],
tgt_row,
)?
} else {
true
}
{
matching_rows.push(tgt_row.clone());
}
}
}
Ok(matching_rows)
}
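Stripped of arguments, field paths, and predicates, the core of this function is a nested-loop join over the column mapping. A minimal stdlib sketch follows; the column names and data are invented, and the mapping is simplified to a single column on each side:

```rust
use std::collections::BTreeMap;

type Row = BTreeMap<String, i64>;

// Minimal nested-loop join over a column mapping: a target row matches a
// source row when every mapped source column equals the corresponding
// target column. As noted above, duplicate target rows can appear when
// two source rows relate to the same target row.
fn join(mapping: &[(String, String)], source: &[Row], target: &[Row]) -> Vec<Row> {
    let mut matching = vec![];
    for src in source {
        for tgt in target {
            if mapping.iter().all(|(s, t)| src.get(s) == tgt.get(t)) {
                matching.push(tgt.clone());
            }
        }
    }
    matching
}

fn main() {
    let mapping = vec![("author_id".to_string(), "id".to_string())];
    let articles = vec![BTreeMap::from([("author_id".to_string(), 1)])];
    let authors = vec![
        BTreeMap::from([("id".to_string(), 1)]),
        BTreeMap::from([("id".to_string(), 2)]),
    ];
    assert_eq!(join(&mapping, &articles, &authors).len(), 1);
}
```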
Mutations
In this section, we will break down the implementation of the `/mutation` endpoint.
The mutation endpoint is handled by the `post_mutation` function:
async fn post_mutation(
State(state): State<Arc<Mutex<AppState>>>,
Json(request): Json<models::MutationRequest>,
) -> Result<Json<models::MutationResponse>> {
This function receives the application state, and the `MutationRequest` structure.
The function iterates over the collection of requested `MutationOperation` structures, and handles each one in turn, adding each result to the `operation_results` field in the response:
if request.operations.len() > 1 {
Err((
StatusCode::NOT_IMPLEMENTED,
Json(models::ErrorResponse {
message: "transactional mutations are not supported".into(),
details: serde_json::Value::Null,
}),
))
} else {
let mut state = state.lock().await;
let mut operation_results = vec![];
for operation in &request.operations {
let operation_result = execute_mutation_operation(
&mut state,
&request.collection_relationships,
operation,
)?;
operation_results.push(operation_result);
}
Ok(Json(models::MutationResponse { operation_results }))
}
}
The `execute_mutation_operation` function is responsible for executing an individual operation. In the next section, we'll break that function down.
Handling Operations
The `execute_mutation_operation` function is responsible for handling a single `MutationOperation`, and returning the corresponding `MutationOperationResults`:
fn execute_mutation_operation(
state: &mut AppState,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
operation: &models::MutationOperation,
) -> Result<models::MutationOperationResults> {
match operation {
models::MutationOperation::Procedure {
name,
arguments,
fields,
} => execute_procedure(state, name, arguments, fields, collection_relationships),
}
}
The function matches on the type of the operation, and delegates to the appropriate function. Currently, the only type of operation is `Procedure`, so the function delegates to the `execute_procedure` function. In the next section, we will break down the implementation of that function.
Procedures
The `execute_procedure` function is responsible for executing a single procedure:
fn execute_procedure(
state: &mut AppState,
name: &models::ProcedureName,
arguments: &BTreeMap<models::ArgumentName, serde_json::Value>,
fields: &Option<models::NestedField>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
) -> std::result::Result<models::MutationOperationResults, (StatusCode, Json<models::ErrorResponse>)>
The function receives the application `state`, along with the `name` of the procedure to invoke, a list of `arguments`, a list of `fields` to return, and a list of `collection_relationships`.
The function matches on the name of the procedure, and fails if the name is not recognized. We will walk through each procedure in turn.
{
match name.as_str() {
"upsert_article" => {
execute_upsert_article(state, arguments, fields, collection_relationships)
}
"delete_articles" => {
execute_delete_articles(state, arguments, fields, collection_relationships)
}
_ => Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "unknown procedure".into(),
details: serde_json::Value::Null,
}),
)),
}
}
upsert_article
The `upsert_article` procedure is implemented by the `execute_upsert_article` function.
The `execute_upsert_article` function reads the `article` argument from the `arguments` list, failing if it is not found or invalid.
It then inserts or updates that article in the application state, depending on whether an article with that `id` already exists.
Finally, it delegates to the `eval_nested_field` function to evaluate any nested fields, and returns the selected fields in the result:
fn execute_upsert_article(
state: &mut AppState,
arguments: &BTreeMap<models::ArgumentName, serde_json::Value>,
fields: &Option<models::NestedField>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
) -> std::result::Result<models::MutationOperationResults, (StatusCode, Json<models::ErrorResponse>)>
{
let article = arguments.get("article").ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "Expected argument 'article'".into(),
details: serde_json::Value::Null,
}),
))?;
let article_obj: Row = serde_json::from_value(article.clone()).map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "article must be an object".into(),
details: serde_json::Value::Null,
}),
)
})?;
let id = article_obj.get("id").ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "article missing field 'id'".into(),
details: serde_json::Value::Null,
}),
))?;
let id_int = id
.as_i64()
.ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "id must be an integer".into(),
details: serde_json::Value::Null,
}),
))?
.try_into()
.map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "id out of range".into(),
details: serde_json::Value::Null,
}),
)
})?;
let old_row = state.articles.insert(id_int, article_obj);
Ok(models::MutationOperationResults::Procedure {
result: old_row.map_or(Ok(serde_json::Value::Null), |old_row| {
let old_row_value = serde_json::to_value(old_row).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode response".into(),
details: serde_json::Value::Null,
}),
)
})?;
let old_row_fields = match fields {
None => Ok(models::RowFieldValue(old_row_value)),
Some(nested_field) => eval_nested_field(
collection_relationships,
&BTreeMap::new(),
state,
old_row_value,
nested_field,
),
}?;
Ok(old_row_fields.0)
})?,
})
}
delete_articles
The `delete_articles` procedure is implemented by the `execute_delete_articles` function.
It is provided as an example of a procedure which takes an argument of a predicate type.
The `execute_delete_articles` function reads the `where` argument from the `arguments` list, failing if it is not found or invalid.
It then deletes all articles in the application state which match the predicate, and returns a list of the deleted rows.
This function delegates to the `eval_nested_field` function to evaluate any nested fields, and returns the selected fields in the result:
fn execute_delete_articles(
state: &mut AppState,
arguments: &BTreeMap<models::ArgumentName, serde_json::Value>,
fields: &Option<models::NestedField>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
) -> std::result::Result<models::MutationOperationResults, (StatusCode, Json<models::ErrorResponse>)>
{
let predicate_value = arguments.get("where").ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "Expected argument 'where'".into(),
details: serde_json::Value::Null,
}),
))?;
let predicate: models::Expression =
serde_json::from_value(predicate_value.clone()).map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "Bad predicate".into(),
details: serde_json::Value::Null,
}),
)
})?;
let mut removed: Vec<Row> = vec![];
let state_snapshot = state.clone();
for article in state_snapshot.articles.values() {
if eval_expression(
&BTreeMap::new(),
&BTreeMap::new(),
&state_snapshot,
&predicate,
&[],
article,
)? {
removed.push(article.clone());
}
}
// Remove the matching articles from the application state
state.articles.retain(|_id, article| !removed.contains(article));
let removed_value = serde_json::to_value(removed).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode response".into(),
details: serde_json::Value::Null,
}),
)
})?;
let removed_fields = match fields {
None => Ok(models::RowFieldValue(removed_value)),
Some(nested_field) => eval_nested_field(
collection_relationships,
&BTreeMap::new(),
&state_snapshot,
removed_value,
nested_field,
),
}?;
Ok(models::MutationOperationResults::Procedure {
result: removed_fields.0,
})
}
Explain
The `/query/explain` and `/mutation/explain` endpoints are not implemented in the reference implementation, because their respective request objects are interpreted directly. There is no intermediate representation (such as SQL) which could be described as an "execution plan".
The `query.explain` and `mutation.explain` capabilities are turned off in the capabilities endpoint, and the `/query/explain` and `/mutation/explain` endpoints throw an error:
async fn post_query_explain(
Json(_request): Json<models::QueryRequest>,
) -> Result<Json<models::ExplainResponse>> {
Err((
StatusCode::NOT_IMPLEMENTED,
Json(models::ErrorResponse {
message: "explain is not supported".into(),
details: serde_json::Value::Null,
}),
))
}
Health and Metrics
Service Health
The `/health` endpoint has nothing to check, because the reference implementation does not need to connect to any other services. Therefore, once the reference implementation is running, it can always report a healthy status:
async fn get_health() -> StatusCode {
StatusCode::OK
}
In practice, a connector should make sure that any upstream services can be successfully contacted, and respond accordingly.
Metrics
The reference implementation maintains some generic access metrics in its application state:
- `metrics.total_requests` counts the number of requests ever served, and
- `metrics.active_requests` counts the number of requests currently being served.
The metrics endpoint reports these metrics using the Rust prometheus crate:
async fn get_metrics(State(state): State<Arc<Mutex<AppState>>>) -> Result<String> {
let state = state.lock().await;
state.metrics.as_text().ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode metrics".into(),
details: serde_json::Value::Null,
}),
))
}
To maintain these metrics, it uses a simple metrics middleware:
async fn metrics_middleware(
state: State<Arc<Mutex<AppState>>>,
request: axum::extract::Request,
next: axum::middleware::Next,
) -> axum::response::Response {
// Don't hold the lock to update metrics, since the
// lock doesn't protect the metrics anyway.
let metrics = {
let state = state.lock().await;
state.metrics.clone()
};
metrics.total_requests.inc();
metrics.active_requests.inc();
let response = next.run(request).await;
metrics.active_requests.dec();
response
}
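The increment-around-handler pattern used by this middleware can be sketched with stdlib atomics standing in for the prometheus counters (which are likewise safe to update without holding the application lock):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Stand-in for the prometheus counters: `total_requests` only ever grows,
// while `active_requests` is incremented on entry and decremented on exit.
struct Metrics {
    total_requests: AtomicU64,
    active_requests: AtomicU64,
}

fn handle_request(metrics: &Metrics) {
    metrics.total_requests.fetch_add(1, Ordering::Relaxed);
    metrics.active_requests.fetch_add(1, Ordering::Relaxed);
    // ... the inner handler would run here ...
    metrics.active_requests.fetch_sub(1, Ordering::Relaxed);
}

fn main() {
    let metrics = Metrics {
        total_requests: AtomicU64::new(0),
        active_requests: AtomicU64::new(0),
    };
    handle_request(&metrics);
    handle_request(&metrics);
    assert_eq!(metrics.total_requests.load(Ordering::Relaxed), 2);
    assert_eq!(metrics.active_requests.load(Ordering::Relaxed), 0);
}
```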
Types
Aggregate
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[skip_serializing_none]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Aggregate")]
pub enum Aggregate {
ColumnCount {
/// The column to apply the count aggregate function to
column: FieldName,
/// Arguments to satisfy the column specified by 'column'
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Whether or not only distinct items should be counted
distinct: bool,
},
SingleColumn {
/// The column to apply the aggregation function to
column: FieldName,
/// Arguments to satisfy the column specified by 'column'
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Single column aggregate function name.
function: AggregateFunctionName,
},
StarCount {},
}
AggregateCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Aggregate Capabilities")]
pub struct AggregateCapabilities {
/// Does the connector support filtering based on aggregated values
pub filter_by: Option<LeafCapability>,
/// Does the connector support aggregations over groups
pub group_by: Option<GroupByCapabilities>,
}
AggregateCapabilitiesSchemaInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Aggregate Capabilities Schema Info")]
pub struct AggregateCapabilitiesSchemaInfo {
/// The scalar type which should be used for the return type of count
/// (star_count and column_count) operations.
pub count_scalar_type: ScalarTypeName,
}
AggregateFunctionDefinition
/// The definition of an aggregation function on a scalar type
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Aggregate Function Definition")]
pub enum AggregateFunctionDefinition {
Min,
Max,
Sum {
/// The scalar type of the result of this function, which should have
/// one of the type representations Int64 or Float64, depending on
/// whether this function is defined on a scalar type with an integer or
/// floating-point representation, respectively.
result_type: ScalarTypeName,
},
Average {
/// The scalar type of the result of this function, which should have
/// the type representation Float64
result_type: ScalarTypeName,
},
Custom {
/// The scalar or object type of the result of this function
result_type: Type,
},
}
Argument
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Argument")]
pub enum Argument {
/// The argument is provided by reference to a variable.
/// Only used if the 'query.variables' capability is supported.
Variable { name: VariableName },
/// The argument is provided as a literal value
Literal { value: serde_json::Value },
}
ArgumentInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Argument Info")]
pub struct ArgumentInfo {
/// Argument description
pub description: Option<String>,
/// The name of the type of this argument
#[serde(rename = "type")]
pub argument_type: Type,
}
ArrayComparison
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Array Comparison")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ArrayComparison {
/// Check if the array contains the specified value.
/// Only used if the 'query.nested_fields.filter_by.nested_arrays.contains' capability is supported.
Contains { value: ComparisonValue },
/// Check if the array is empty.
/// Only used if the 'query.nested_fields.filter_by.nested_arrays.is_empty' capability is supported.
IsEmpty,
}
Capabilities
CapabilitiesResponse
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Capabilities Response")]
pub struct CapabilitiesResponse {
pub version: String,
pub capabilities: Capabilities,
}
CapabilitySchemaInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Capability Schema Info")]
pub struct CapabilitySchemaInfo {
/// Schema information relevant to query capabilities
pub query: Option<QueryCapabilitiesSchemaInfo>,
}
ComparisonOperatorDefinition
/// The definition of a comparison operator on a scalar type
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Comparison Operator Definition")]
pub enum ComparisonOperatorDefinition {
Equal,
In,
LessThan,
LessThanOrEqual,
GreaterThan,
GreaterThanOrEqual,
Contains,
ContainsInsensitive,
StartsWith,
StartsWithInsensitive,
EndsWith,
EndsWithInsensitive,
Custom {
/// The type of the argument to this operator
argument_type: Type,
},
}
ComparisonTarget
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Comparison Target")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ComparisonTarget {
/// The comparison targets a column.
Column {
/// The name of the column
name: FieldName,
/// Arguments to satisfy the column specified by 'name'
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested field within an object column.
/// Only non-empty if the 'query.nested_fields.filter_by' capability is supported.
field_path: Option<Vec<FieldName>>,
},
/// The comparison targets the result of aggregation.
/// Only used if the 'query.aggregates.filter_by' capability is supported.
Aggregate {
/// Non-empty collection of relationships to traverse
path: Vec<PathElement>,
/// The aggregation method to use
aggregate: Aggregate,
},
}
ComparisonValue
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Comparison Value")]
pub enum ComparisonValue {
/// The value to compare against should be drawn from another column
Column {
/// Any relationships to traverse to reach this column.
/// Only non-empty if the 'relationships.relation_comparisons' capability is supported.
path: Vec<PathElement>,
/// The name of the column
name: FieldName,
/// Arguments to satisfy the column specified by 'name'
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested field within an object column.
/// Only non-empty if the 'query.nested_fields.filter_by' capability is supported.
field_path: Option<Vec<FieldName>>,
/// The scope in which this column exists, identified
/// by a top-down index into the stack of scopes.
/// The stack grows inside each `Expression::Exists`,
/// so scope 0 (the default) refers to the current collection,
/// and each subsequent index refers to the collection outside
/// its predecessor's immediately enclosing `Expression::Exists`
/// expression.
/// Only used if the 'query.exists.named_scopes' capability is supported.
scope: Option<usize>,
},
/// A scalar value to compare against
Scalar { value: serde_json::Value },
/// A value to compare against that is to be drawn from the query's variables.
/// Only used if the 'query.variables' capability is supported.
Variable { name: VariableName },
}
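A minimal sketch of the three serialized variants, following the serde attributes above (`#[skip_serializing_none]` and `skip_serializing_if` allow the optional/empty fields of the `Column` variant to be omitted); the column and variable names are hypothetical:

```python
import json

scalar_value = {"type": "scalar", "value": "Blue Train"}
# Requires the 'query.variables' capability
variable_value = {"type": "variable", "name": "album_title"}
# Column reference: 'path' is a required Vec, while 'arguments',
# 'field_path' and 'scope' are omitted here (empty/None defaults).
column_value = {"type": "column", "name": "title", "path": []}

assert all(json.loads(json.dumps(v)) == v
           for v in (scalar_value, variable_value, column_value))
```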
Dimension
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[skip_serializing_none]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Dimension")]
pub enum Dimension {
Column {
/// Any (object) relationships to traverse to reach this column.
/// Only non-empty if the 'relationships' capability is supported.
path: Vec<PathElement>,
/// The name of the column
column_name: FieldName,
/// Arguments to satisfy the column specified by 'column_name'
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// The name of the extraction function to apply to the selected value, if any
extraction: Option<ExtractionFunctionName>,
},
}
ErrorResponse
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Error Response")]
pub struct ErrorResponse {
/// A human-readable summary of the error
pub message: String,
/// Any additional structured information about the error
pub details: serde_json::Value,
}
ExistsCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Exists Capabilities")]
pub struct ExistsCapabilities {
/// Does the connector support named scopes in column references inside
/// EXISTS predicates
pub named_scopes: Option<LeafCapability>,
/// Does the connector support ExistsInCollection::Unrelated
pub unrelated: Option<LeafCapability>,
/// Does the connector support ExistsInCollection::NestedCollection
pub nested_collections: Option<LeafCapability>,
/// Does the connector support filtering over nested scalar arrays using existential quantification.
/// This means the connector must support ExistsInCollection::NestedScalarCollection.
pub nested_scalar_collections: Option<LeafCapability>,
}
ExistsInCollection
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Exists In Collection")]
pub enum ExistsInCollection {
/// The rows to evaluate the exists predicate against come from a related collection.
/// Only used if the 'relationships' capability is supported.
Related {
#[serde(skip_serializing_if = "Option::is_none", default)]
/// Path to a nested field within an object column that must be navigated
/// before the relationship is navigated.
/// Only non-empty if the 'relationships.nested.filtering' capability is supported.
field_path: Option<Vec<FieldName>>,
/// The name of the relationship to follow
relationship: RelationshipName,
/// Values to be provided to any collection arguments
arguments: BTreeMap<ArgumentName, RelationshipArgument>,
},
/// The rows to evaluate the exists predicate against come from an unrelated collection
/// Only used if the 'query.exists.unrelated' capability is supported.
Unrelated {
/// The name of a collection
collection: CollectionName,
/// Values to be provided to any collection arguments
arguments: BTreeMap<ArgumentName, RelationshipArgument>,
},
/// The rows to evaluate the exists predicate against come from a nested array field.
/// Only used if the 'query.exists.nested_collections' capability is supported.
NestedCollection {
column_name: FieldName,
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested collection via object columns
#[serde(skip_serializing_if = "Vec::is_empty", default)]
field_path: Vec<FieldName>,
},
/// Specifies a column that contains a nested array of scalars. The
/// array will be brought into scope of the nested expression where
/// each element becomes an object with one '__value' column that
/// contains the element value.
/// Only used if the 'query.exists.nested_scalar_collections' capability is supported.
NestedScalarCollection {
column_name: FieldName,
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested collection via object columns
#[serde(skip_serializing_if = "Vec::is_empty", default)]
field_path: Vec<FieldName>,
},
}
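The `NestedScalarCollection` variant is the subtlest of these: each element of the scalar array is exposed to the nested predicate as a row with a single `__value` column. A hedged sketch of what such an exists expression might look like on the wire, assuming a hypothetical `tags` string-array column and a connector-defined `eq` operator:

```python
exists_predicate = {
    "type": "exists",
    "in_collection": {
        "type": "nested_scalar_collection",
        "column_name": "tags",  # hypothetical array-of-strings column
    },
    "predicate": {
        "type": "binary_comparison_operator",
        # Each array element is in scope as a row with one '__value' column
        "column": {"type": "column", "name": "__value"},
        "operator": "eq",  # hypothetical operator defined on the scalar type
        "value": {"type": "scalar", "value": "jazz"},
    },
}
```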
ExplainResponse
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Explain Response")]
pub struct ExplainResponse {
/// A list of human-readable key-value pairs describing
/// a query execution plan. For example, a connector for
/// a relational database might return the generated SQL
/// and/or the output of the `EXPLAIN` command. An API-based
/// connector might encode a list of statically-known API
/// calls which would be made.
pub details: BTreeMap<String, String>,
}
Expression
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Expression")]
pub enum Expression {
And {
expressions: Vec<Expression>,
},
Or {
expressions: Vec<Expression>,
},
Not {
expression: Box<Expression>,
},
UnaryComparisonOperator {
column: ComparisonTarget,
operator: UnaryComparisonOperator,
},
BinaryComparisonOperator {
column: ComparisonTarget,
operator: ComparisonOperatorName,
value: ComparisonValue,
},
/// A comparison against a nested array column.
/// Only used if the 'query.nested_fields.filter_by.nested_arrays' capability is supported.
ArrayComparison {
column: ComparisonTarget,
comparison: ArrayComparison,
},
Exists {
in_collection: ExistsInCollection,
predicate: Option<Box<Expression>>,
},
}
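Expressions compose recursively. A sketch of a compound predicate ("artist_id equals 5 and name is not null") in the serialized form the serde attributes imply; the column names and the `eq`/`is_null` operator names are illustrative assumptions:

```python
predicate = {
    "type": "and",
    "expressions": [
        {
            "type": "binary_comparison_operator",
            "column": {"type": "column", "name": "artist_id"},
            "operator": "eq",  # hypothetical connector-defined operator
            "value": {"type": "scalar", "value": 5},
        },
        {
            "type": "not",
            "expression": {
                "type": "unary_comparison_operator",
                "column": {"type": "column", "name": "name"},
                "operator": "is_null",
            },
        },
    ],
}
```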
ExtractionFunctionDefinition
/// The definition of an extraction function on a scalar type
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Extraction Function Definition")]
pub enum ExtractionFunctionDefinition {
Nanosecond {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Microsecond {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Second {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Minute {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Hour {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Day {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Week {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Month {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Quarter {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Year {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
DayOfWeek {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
DayOfYear {
/// The result type, which must be a defined scalar type in the schema response.
result_type: ScalarTypeName,
},
Custom {
/// The scalar or object type of the result of this function
result_type: Type,
},
}
Field
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Field")]
pub enum Field {
/// A field satisfied by returning the value of a row's column.
Column {
column: FieldName,
/// When the type of the column is a (possibly-nullable) array or object,
/// the caller can request a subset of the complete column data,
/// by specifying fields to fetch here.
/// If omitted, the column data will be fetched in full.
fields: Option<NestedField>,
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
},
/// A field satisfied by navigating a relationship from the current row to a related collection.
/// Only used if the 'relationships' capability is supported.
Relationship {
query: Box<Query>,
/// The name of the relationship to follow for the subquery
relationship: RelationshipName,
/// Values to be provided to any collection arguments
arguments: BTreeMap<ArgumentName, RelationshipArgument>,
},
}
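For object-typed columns, the caller can select a subset of the nested data via a `NestedField`. A sketch of a column field that selects only `city` out of a hypothetical `address` object column (omitting `fields` entirely would fetch the column in full):

```python
field = {
    "type": "column",
    "column": "address",  # hypothetical object-typed column
    "fields": {
        # NestedField::Object, selecting a subset of the object's fields
        "type": "object",
        "fields": {
            "city": {"type": "column", "column": "city"},
        },
    },
}
```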
ForeignKeyConstraint
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Foreign Key Constraint")]
pub struct ForeignKeyConstraint {
/// The columns on which to define the foreign key.
/// This is a mapping from fields on the object type to columns on the foreign collection.
/// The column on the foreign collection is specified via a field path (ie. an array of field
/// names that descend through nested object fields). The field path must only contain a single item,
/// meaning a column on the foreign collection's type, unless the 'relationships.nested'
/// capability is supported, in which case multiple items can be used to denote a nested object field.
pub column_mapping: BTreeMap<FieldName, Vec<FieldName>>,
/// The name of a collection
pub foreign_collection: CollectionName,
}
FunctionInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Function Info")]
pub struct FunctionInfo {
/// The name of the function
pub name: FunctionName,
/// Description of the function
pub description: Option<String>,
/// Any arguments that this collection requires
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
/// The name of the function's result type
pub result_type: Type,
}
Group
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Group")]
pub struct Group {
/// Values of dimensions which identify this group
pub dimensions: Vec<serde_json::Value>,
/// Aggregates computed within this group
pub aggregates: IndexMap<FieldName, serde_json::Value>,
}
GroupComparisonTarget
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Aggregate Comparison Target")]
pub enum GroupComparisonTarget {
Aggregate { aggregate: Aggregate },
}
GroupComparisonValue
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Aggregate Comparison Value")]
pub enum GroupComparisonValue {
/// A scalar value to compare against
Scalar { value: serde_json::Value },
/// A value to compare against that is to be drawn from the query's variables.
/// Only used if the 'query.variables' capability is supported.
Variable { name: VariableName },
}
GroupExpression
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Group Expression")]
pub enum GroupExpression {
And {
expressions: Vec<GroupExpression>,
},
Or {
expressions: Vec<GroupExpression>,
},
Not {
expression: Box<GroupExpression>,
},
UnaryComparisonOperator {
target: GroupComparisonTarget,
operator: UnaryComparisonOperator,
},
BinaryComparisonOperator {
target: GroupComparisonTarget,
operator: ComparisonOperatorName,
value: GroupComparisonValue,
},
}
Grouping
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Grouping")]
pub struct Grouping {
/// Dimensions along which to partition the data
pub dimensions: Vec<Dimension>,
/// Aggregates to compute in each group
pub aggregates: IndexMap<FieldName, Aggregate>,
/// Optionally specify a predicate to apply after grouping rows.
/// Only used if the 'query.aggregates.group_by.filter' capability is supported.
pub predicate: Option<GroupExpression>,
/// Optionally specify how groups should be ordered
/// Only used if the 'query.aggregates.group_by.order' capability is supported.
pub order_by: Option<GroupOrderBy>,
/// Optionally limit to N groups
/// Only used if the 'query.aggregates.group_by.paginate' capability is supported.
pub limit: Option<u32>,
/// Optionally offset from the Nth group
/// Only used if the 'query.aggregates.group_by.paginate' capability is supported.
pub offset: Option<u32>,
}
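Putting `Dimension` and `Grouping` together, a sketch of a grouping that partitions rows by a hypothetical `genre` column and counts the rows in each group; the `star_count` aggregate shape is assumed from the `Aggregate` type defined earlier in this reference:

```python
grouping = {
    "dimensions": [
        # Dimension::Column; 'path' is required, other optional fields omitted
        {"type": "column", "column_name": "genre", "path": []}
    ],
    "aggregates": {
        # One aggregate per group, keyed by a caller-chosen field name
        "count": {"type": "star_count"}
    },
    # predicate / order_by / limit / offset omitted: they are optional
    # and gated behind the 'query.aggregates.group_by.*' capabilities
}
```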
GroupByCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Group By Capabilities")]
pub struct GroupByCapabilities {
/// Does the connector support post-grouping predicates
pub filter: Option<LeafCapability>,
/// Does the connector support post-grouping ordering
pub order: Option<LeafCapability>,
/// Does the connector support post-grouping pagination
pub paginate: Option<LeafCapability>,
}
GroupOrderBy
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Group Order By")]
pub struct GroupOrderBy {
/// The elements to order by, in priority order
pub elements: Vec<GroupOrderByElement>,
}
GroupOrderByElement
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Group Order By Element")]
pub struct GroupOrderByElement {
pub order_direction: OrderDirection,
pub target: GroupOrderByTarget,
}
GroupOrderByTarget
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Group Order By Target")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum GroupOrderByTarget {
Dimension {
/// The index of the dimension to order by, selected from the
/// dimensions provided in the `Grouping` request.
index: usize,
},
Aggregate {
/// Aggregation method to apply
aggregate: Aggregate,
},
}
LeafCapability
/// A unit value to indicate a particular leaf capability is supported.
/// This is an empty struct to allow for future sub-capabilities.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
pub struct LeafCapability {}
MutationCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Capabilities")]
pub struct MutationCapabilities {
/// Does the connector support executing multiple mutations in a transaction.
pub transactional: Option<LeafCapability>,
/// Does the connector support explaining mutations
pub explain: Option<LeafCapability>,
}
MutationOperation
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Operation")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum MutationOperation {
Procedure {
/// The name of a procedure
name: ProcedureName,
/// Any named procedure arguments
arguments: BTreeMap<ArgumentName, serde_json::Value>,
/// The fields to return from the result, or null to return everything
fields: Option<NestedField>,
},
}
MutationOperationResults
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Operation Results")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum MutationOperationResults {
Procedure { result: serde_json::Value },
}
MutationRequest
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Request")]
pub struct MutationRequest {
/// The mutation operations to perform
pub operations: Vec<MutationOperation>,
/// The relationships between collections involved in the entire mutation request.
/// Only used if the 'relationships' capability is supported.
pub collection_relationships: BTreeMap<RelationshipName, Relationship>,
}
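A sketch of a minimal mutation request invoking a single procedure; the procedure name and arguments are hypothetical, and `fields` is omitted (allowed by `#[skip_serializing_none]`) to return the full result:

```python
mutation_request = {
    "operations": [
        {
            "type": "procedure",
            "name": "upsert_article",  # hypothetical procedure name
            "arguments": {"title": "Hello", "published": True},
            # "fields" omitted: return everything
        }
    ],
    "collection_relationships": {},
}
```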
MutationResponse
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Response")]
pub struct MutationResponse {
/// The results of each mutation operation, in the same order as they were received
pub operation_results: Vec<MutationOperationResults>,
}
NestedArray
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "NestedArray")]
pub struct NestedArray {
pub fields: Box<NestedField>,
}
NestedArrayFilterByCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Nested Array Filter By Capabilities")]
pub struct NestedArrayFilterByCapabilities {
/// Does the connector support filtering over nested arrays by checking if the array contains a value.
/// This must be supported for all types that can be contained in an array that implement an 'eq'
/// comparison operator.
pub contains: Option<LeafCapability>,
/// Does the connector support filtering over nested arrays by checking if the array is empty.
/// This must be supported no matter what type is contained in the array.
pub is_empty: Option<LeafCapability>,
}
NestedCollection
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "NestedCollection")]
pub struct NestedCollection {
pub query: Query,
}
NestedField
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "NestedField")]
pub enum NestedField {
Object(NestedObject),
Array(NestedArray),
/// Perform a query over the nested array's rows.
/// Only used if the 'query.nested_fields.nested_collections' capability is supported.
Collection(NestedCollection),
}
NestedFieldCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Nested Field Capabilities")]
pub struct NestedFieldCapabilities {
/// Does the connector support filtering by values of nested fields
pub filter_by: Option<NestedFieldFilterByCapabilities>,
/// Does the connector support ordering by values of nested fields
pub order_by: Option<LeafCapability>,
/// Does the connector support aggregating values within nested fields
pub aggregates: Option<LeafCapability>,
/// Does the connector support nested collection queries using
/// `NestedField::NestedCollection`
pub nested_collections: Option<LeafCapability>,
}
NestedFieldFilterByCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Nested Field Filter By Capabilities")]
pub struct NestedFieldFilterByCapabilities {
/// Does the connector support filtering over nested arrays (ie. Expression::ArrayComparison)
pub nested_arrays: Option<NestedArrayFilterByCapabilities>,
}
NestedObject
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "NestedObject")]
pub struct NestedObject {
pub fields: IndexMap<FieldName, Field>,
}
NestedRelationshipCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Nested Relationship Capabilities")]
pub struct NestedRelationshipCapabilities {
/// Does the connector support navigating a relationship from inside a nested object inside a nested array
pub array: Option<LeafCapability>,
/// Does the connector support filtering over a relationship that starts from inside a nested object
pub filtering: Option<LeafCapability>,
/// Does the connector support ordering over a relationship that starts from inside a nested object
pub ordering: Option<LeafCapability>,
}
ObjectField
/// The definition of an object field
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Object Field")]
pub struct ObjectField {
/// Description of this field
pub description: Option<String>,
/// The type of this field
#[serde(rename = "type")]
pub r#type: Type,
/// The arguments available to the field - Matches implementation from CollectionInfo
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
}
ObjectType
/// The definition of an object type
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Object Type")]
pub struct ObjectType {
/// Description of this type
pub description: Option<String>,
/// Fields defined on this object type
pub fields: BTreeMap<FieldName, ObjectField>,
/// Any foreign keys defined for this object type's columns
pub foreign_keys: BTreeMap<String, ForeignKeyConstraint>,
}
OrderBy
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By")]
pub struct OrderBy {
/// The elements to order by, in priority order
pub elements: Vec<OrderByElement>,
}
OrderByElement
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By Element")]
pub struct OrderByElement {
pub order_direction: OrderDirection,
pub target: OrderByTarget,
}
OrderByTarget
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By Target")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum OrderByTarget {
/// The ordering is performed over a column.
Column {
/// Any (object) relationships to traverse to reach this column.
/// Only non-empty if the 'relationships' capability is supported.
/// 'PathElement.field_path' will only be non-empty if the 'relationships.nested.ordering' capability is supported.
path: Vec<PathElement>,
/// The name of the column
name: FieldName,
/// Arguments to satisfy the column specified by 'name'
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested field within an object column.
/// Only non-empty if the 'query.nested_fields.order_by' capability is supported.
field_path: Option<Vec<FieldName>>,
},
/// The ordering is performed over the result of an aggregation.
/// Only used if the 'relationships.order_by_aggregate' capability is supported.
Aggregate {
/// Non-empty collection of relationships to traverse.
/// Only non-empty if the 'relationships' capability is supported.
/// 'PathElement.field_path' will only be non-empty if the 'relationships.nested.ordering' capability is supported.
path: Vec<PathElement>,
/// The aggregation method to use
aggregate: Aggregate,
},
}
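A sketch of an `OrderBy` value sorting by a hypothetical `created_at` column in descending order; `path` is a required list (empty here, since no relationship is traversed), while `arguments` and `field_path` are omitted:

```python
order_by = {
    "elements": [
        {
            "order_direction": "desc",
            "target": {"type": "column", "name": "created_at", "path": []},
        }
    ]
}
```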
OrderDirection
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Order Direction")]
#[serde(rename_all = "snake_case")]
pub enum OrderDirection {
Asc,
Desc,
}
PathElement
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "Path Element")]
pub struct PathElement {
#[serde(skip_serializing_if = "Option::is_none", default)]
/// Path to a nested field within an object column that must be navigated
/// before the relationship is navigated.
/// Only non-empty if the 'relationships.nested' capability is supported
/// (plus perhaps one of the sub-capabilities, depending on the feature using the PathElement).
pub field_path: Option<Vec<FieldName>>,
/// The name of the relationship to follow
pub relationship: RelationshipName,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, RelationshipArgument>,
/// A predicate expression to apply to the target collection
pub predicate: Option<Box<Expression>>,
}
ProcedureInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Procedure Info")]
pub struct ProcedureInfo {
/// The name of the procedure
pub name: ProcedureName,
/// Procedure description
pub description: Option<String>,
/// Any arguments that this collection requires
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
/// The name of the result type
pub result_type: Type,
}
Query
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query")]
pub struct Query {
/// Aggregate fields of the query.
/// Only used if the 'query.aggregates' capability is supported.
pub aggregates: Option<IndexMap<FieldName, Aggregate>>,
/// Fields of the query
pub fields: Option<IndexMap<FieldName, Field>>,
/// Optionally limit to N results
pub limit: Option<u32>,
/// Optionally offset from the Nth result
pub offset: Option<u32>,
/// Optionally specify how rows should be ordered
pub order_by: Option<OrderBy>,
/// Optionally specify a predicate to apply to the rows
pub predicate: Option<Expression>,
/// Optionally group and aggregate the selected rows.
/// Only used if the 'query.aggregates.group_by' capability is supported.
pub groups: Option<Grouping>,
}
QueryCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Capabilities")]
pub struct QueryCapabilities {
/// Does the connector support aggregate queries
pub aggregates: Option<AggregateCapabilities>,
/// Does the connector support queries which use variables
pub variables: Option<LeafCapability>,
/// Does the connector support explaining queries
pub explain: Option<LeafCapability>,
/// Does the connector support nested fields
#[serde(default)]
pub nested_fields: NestedFieldCapabilities,
/// Does the connector support EXISTS predicates
#[serde(default)]
pub exists: ExistsCapabilities,
}
QueryCapabilitiesSchemaInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Capabilities Schema Info")]
pub struct QueryCapabilitiesSchemaInfo {
/// Schema information relevant to aggregate query capabilities
pub aggregates: Option<AggregateCapabilitiesSchemaInfo>,
}
QueryRequest
/// This is the request body of the query POST endpoint
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Request")]
pub struct QueryRequest {
/// The name of a collection
pub collection: CollectionName,
/// The query syntax tree
pub query: Query,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, Argument>,
/// Any relationships between collections involved in the query request.
/// Only used if the 'relationships' capability is supported.
pub collection_relationships: BTreeMap<RelationshipName, Relationship>,
/// One set of named variables for each rowset to fetch. Each variable set
/// should be substituted in turn, and a fresh set of rows returned.
/// Only used if the 'query.variables' capability is supported.
pub variables: Option<Vec<BTreeMap<VariableName, serde_json::Value>>>,
}
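Tying the pieces together, a sketch of a minimal query request body against a hypothetical `articles` collection; the optional `Query` fields (`order_by`, `predicate`, and so on) are simply omitted thanks to `#[skip_serializing_none]`:

```python
query_request = {
    "collection": "articles",  # hypothetical collection name
    "arguments": {},
    "collection_relationships": {},
    "query": {
        "fields": {
            "id": {"type": "column", "column": "id"},
            "title": {"type": "column", "column": "title"},
        },
        "limit": 10,
    },
}
```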
QueryResponse
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Response")]
/// Query responses may return multiple RowSets when using queries with variables.
/// Otherwise, there should be exactly one RowSet.
pub struct QueryResponse(pub Vec<RowSet>);
Relationship
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship")]
pub struct Relationship {
/// A mapping between columns on the source row to columns on the target collection.
/// The column on the target collection is specified via a field path (ie. an array of field
/// names that descend through nested object fields). The field path will only contain a single item,
/// meaning a column on the target collection's type, unless the 'relationships.nested'
/// capability is supported, in which case multiple items denote a nested object field.
pub column_mapping: BTreeMap<FieldName, Vec<FieldName>>,
pub relationship_type: RelationshipType,
/// The name of a collection
pub target_collection: CollectionName,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, RelationshipArgument>,
}
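A sketch of a serialized object relationship from a hypothetical `articles` collection to `authors`, illustrating the field-path lists in `column_mapping` (a single-item path, since no nested-object targeting is assumed here):

```python
relationship = {
    # Source column 'author_id' maps to the single-item field path ["id"]
    # on the target collection's object type.
    "column_mapping": {"author_id": ["id"]},
    "relationship_type": "object",
    "target_collection": "authors",
    "arguments": {},
}
```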
RelationshipArgument
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship Argument")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum RelationshipArgument {
/// The argument is provided by reference to a variable.
/// Only used if the 'query.variables' capability is supported.
Variable {
name: VariableName,
},
/// The argument is provided as a literal value
Literal {
value: serde_json::Value,
},
/// The argument is provided based on a column of the source collection
Column {
name: FieldName,
},
}
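Because of the `#[serde(tag = "type", rename_all = "snake_case")]` attributes, the three variants serialize as tagged JSON objects. A sketch of each (names and values are illustrative):

```json
[
  { "type": "variable", "name": "$author_id" },
  { "type": "literal", "value": 42 },
  { "type": "column", "name": "author_id" }
]
```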
RelationshipCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship Capabilities")]
pub struct RelationshipCapabilities {
/// Does the connector support comparisons that involve related collections (ie. joins)?
pub relation_comparisons: Option<LeafCapability>,
/// Does the connector support ordering by an aggregated array relationship?
pub order_by_aggregate: Option<LeafCapability>,
/// Does the connector support navigating a relationship from inside a nested object?
pub nested: Option<NestedRelationshipCapabilities>,
}
RelationshipType
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Relationship Type")]
#[serde(rename_all = "snake_case")]
pub enum RelationshipType {
Object,
Array,
}
RowFieldValue
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Row Field Value")]
pub struct RowFieldValue(pub serde_json::Value);
impl RowFieldValue {
/// In the case where this field value was obtained using a
/// [`Field::Relationship`], the returned JSON will be a [`RowSet`].
/// We cannot express [`RowFieldValue`] as an enum, because
/// [`RowFieldValue`] overlaps with values which have object types.
pub fn as_rowset(self) -> Option<RowSet> {
serde_json::from_value(self.0).ok()
}
}
RowSet
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Row Set")]
pub struct RowSet {
/// The results of the aggregates returned by the query
pub aggregates: Option<IndexMap<FieldName, serde_json::Value>>,
/// The rows returned by the query, corresponding to the query's fields
pub rows: Option<Vec<IndexMap<FieldName, RowFieldValue>>>,
/// The results of any grouping operation
pub groups: Option<Vec<Group>>,
}
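Since `RowSet` is marked with `#[skip_serializing_none]`, absent parts are omitted entirely. An illustrative row set for a query requesting both rows and aggregates (field names are hypothetical) might look like:

```json
{
  "aggregates": { "article_count": 2 },
  "rows": [
    { "id": 1, "title": "Hello" },
    { "id": 2, "title": "World" }
  ]
}
```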
ScalarType
/// The definition of a scalar type, i.e. types that can be used as the types of columns.
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Scalar Type")]
pub struct ScalarType {
/// A description of valid values for this scalar type.
pub representation: TypeRepresentation,
/// A map from aggregate function names to their definitions. Result type names must be defined scalar types declared in ScalarTypesCapabilities.
pub aggregate_functions: BTreeMap<AggregateFunctionName, AggregateFunctionDefinition>,
/// A map from comparison operator names to their definitions. Argument type names must be defined scalar types declared in ScalarTypesCapabilities.
pub comparison_operators: BTreeMap<ComparisonOperatorName, ComparisonOperatorDefinition>,
/// A map from extraction function names to their definitions.
#[serde(default)]
pub extraction_functions: BTreeMap<ExtractionFunctionName, ExtractionFunctionDefinition>,
}
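As a rough sketch of how a string scalar type might be declared (the function and operator names here, and the exact shapes of their definitions, are illustrative assumptions):

```json
{
  "representation": { "type": "string" },
  "aggregate_functions": {
    "max": { "type": "max" }
  },
  "comparison_operators": {
    "eq": { "type": "equal" },
    "like": {
      "type": "custom",
      "argument_type": { "type": "named", "name": "String" }
    }
  }
}
```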
SchemaResponse
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Schema Response")]
pub struct SchemaResponse {
/// A list of scalar types which will be used as the types of collection columns
pub scalar_types: BTreeMap<ScalarTypeName, ScalarType>,
/// A list of object types which can be used as the types of arguments, or return types of procedures.
/// Names should not overlap with scalar type names.
pub object_types: BTreeMap<ObjectTypeName, ObjectType>,
/// Collections which are available for queries
pub collections: Vec<CollectionInfo>,
/// Functions (i.e. collections which return a single column and row)
pub functions: Vec<FunctionInfo>,
/// Procedures which are available for execution as part of mutations
pub procedures: Vec<ProcedureInfo>,
/// Schema data which is relevant to features enabled by capabilities
pub capabilities: Option<CapabilitySchemaInfo>,
}
CollectionInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Collection Info")]
pub struct CollectionInfo {
/// The name of the collection
///
/// Note: these names are abstract - there is no requirement that this name correspond to
/// the name of an actual collection in the database.
pub name: CollectionName,
/// Description of the collection
pub description: Option<String>,
/// Any arguments that this collection requires
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
/// The name of the collection's object type
#[serde(rename = "type")]
pub collection_type: ObjectTypeName,
/// Any uniqueness constraints enforced on this collection
pub uniqueness_constraints: BTreeMap<String, UniquenessConstraint>,
}
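An illustrative `CollectionInfo` entry (the collection, type, and constraint names are hypothetical); note that the `collection_type` field is renamed to `type` in JSON, and `description` is omitted when absent:

```json
{
  "name": "articles",
  "arguments": {},
  "type": "article",
  "uniqueness_constraints": {
    "ArticleByID": { "unique_columns": ["id"] }
  }
}
```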
Type
/// Types track the valid representations of values as JSON
#[derive(
Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Type")]
pub enum Type {
/// A named type
Named {
/// The name can refer to a scalar or object type
name: TypeName,
},
/// A nullable type
Nullable {
/// The type of the non-null inhabitants of this type
underlying_type: Box<Type>,
},
/// An array type
Array {
/// The type of the elements of the array
element_type: Box<Type>,
},
/// A predicate type for a given object type
Predicate {
/// The object type name
object_type_name: ObjectTypeName,
},
}
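Types compose recursively via the tagged serialization. For example, a nullable array of strings serializes as:

```json
{
  "type": "nullable",
  "underlying_type": {
    "type": "array",
    "element_type": { "type": "named", "name": "String" }
  }
}
```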
TypeRepresentation
/// Representations of scalar types
#[derive(
Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Type Representation")]
pub enum TypeRepresentation {
/// JSON booleans
Boolean,
/// Any JSON string
String,
/// An 8-bit signed integer with a minimum value of -2^7 and a maximum value of 2^7 - 1
Int8,
/// A 16-bit signed integer with a minimum value of -2^15 and a maximum value of 2^15 - 1
Int16,
/// A 32-bit signed integer with a minimum value of -2^31 and a maximum value of 2^31 - 1
Int32,
/// A 64-bit signed integer with a minimum value of -2^63 and a maximum value of 2^63 - 1
Int64,
/// An IEEE-754 single-precision floating-point number
Float32,
/// An IEEE-754 double-precision floating-point number
Float64,
/// Arbitrary-precision integer string
#[serde(rename = "biginteger")]
BigInteger,
/// Arbitrary-precision decimal string
#[serde(rename = "bigdecimal")]
BigDecimal,
/// UUID string (8-4-4-4-12)
#[serde(rename = "uuid")]
UUID,
/// ISO 8601 date
Date,
/// ISO 8601 timestamp
Timestamp,
/// ISO 8601 timestamp-with-timezone
#[serde(rename = "timestamptz")]
TimestampTZ,
/// GeoJSON, per RFC 7946
Geography,
/// GeoJSON Geometry object, per RFC 7946
Geometry,
/// Base64-encoded bytes
Bytes,
/// Arbitrary JSON
#[serde(rename = "json")]
JSON,
/// One of the specified string values
Enum { one_of: Vec<String> },
}
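Most representations serialize as a bare tag, for example `{ "type": "int32" }` or `{ "type": "uuid" }`. The `Enum` representation additionally carries its allowed values (the values below are illustrative):

```json
{ "type": "enum", "one_of": ["ascending", "descending"] }
```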
UnaryComparisonOperator
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Unary Comparison Operator")]
#[serde(rename_all = "snake_case")]
pub enum UnaryComparisonOperator {
IsNull,
}
UniquenessConstraint
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Uniqueness Constraint")]
pub struct UniquenessConstraint {
/// A list of columns which this constraint requires to be unique
pub unique_columns: Vec<FieldName>,
}
JSON Schema
CapabilitiesResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Capabilities Response",
"type": "object",
"required": [
"capabilities",
"version"
],
"properties": {
"version": {
"type": "string"
},
"capabilities": {
"$ref": "#/definitions/Capabilities"
}
},
"definitions": {
"AggregateCapabilities": {
"title": "Aggregate Capabilities",
"type": "object",
"properties": {
"filter_by": {
"description": "Does the connector support filtering based on aggregated values",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"group_by": {
"description": "Does the connector support aggregations over groups",
"anyOf": [
{
"$ref": "#/definitions/GroupByCapabilities"
},
{
"type": "null"
}
]
}
}
},
"Capabilities": {
"title": "Capabilities",
"description": "Describes the features of the specification which a data connector implements.",
"type": "object",
"required": [
"mutation",
"query"
],
"properties": {
"query": {
"$ref": "#/definitions/QueryCapabilities"
},
"mutation": {
"$ref": "#/definitions/MutationCapabilities"
},
"relationships": {
"anyOf": [
{
"$ref": "#/definitions/RelationshipCapabilities"
},
{
"type": "null"
}
]
}
}
},
"ExistsCapabilities": {
"title": "Exists Capabilities",
"type": "object",
"properties": {
"named_scopes": {
"description": "Does the connector support named scopes in column references inside EXISTS predicates",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"unrelated": {
"description": "Does the connector support ExistsInCollection::Unrelated",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"nested_collections": {
"description": "Does the connector support ExistsInCollection::NestedCollection",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"nested_scalar_collections": {
"description": "Does the connector support filtering over nested scalar arrays using existential quantification. This means the connector must support ExistsInCollection::NestedScalarCollection.",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"GroupByCapabilities": {
"title": "Group By Capabilities",
"type": "object",
"properties": {
"filter": {
"description": "Does the connector support post-grouping predicates",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"order": {
"description": "Does the connector support post-grouping ordering",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"paginate": {
"description": "Does the connector support post-grouping pagination",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"LeafCapability": {
"description": "A unit value to indicate a particular leaf capability is supported. This is an empty struct to allow for future sub-capabilities.",
"type": "object"
},
"MutationCapabilities": {
"title": "Mutation Capabilities",
"type": "object",
"properties": {
"transactional": {
"description": "Does the connector support executing multiple mutations in a transaction.",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"explain": {
"description": "Does the connector support explaining mutations",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"NestedArrayFilterByCapabilities": {
"title": "Nested Array Filter By Capabilities",
"type": "object",
"properties": {
"contains": {
"description": "Does the connector support filtering over nested arrays by checking if the array contains a value. This must be supported for all types that can be contained in an array that implement an 'eq' comparison operator.",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"is_empty": {
"description": "Does the connector support filtering over nested arrays by checking if the array is empty. This must be supported no matter what type is contained in the array.",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"NestedFieldCapabilities": {
"title": "Nested Field Capabilities",
"type": "object",
"properties": {
"filter_by": {
"description": "Does the connector support filtering by values of nested fields",
"anyOf": [
{
"$ref": "#/definitions/NestedFieldFilterByCapabilities"
},
{
"type": "null"
}
]
},
"order_by": {
"description": "Does the connector support ordering by values of nested fields",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"aggregates": {
"description": "Does the connector support aggregating values within nested fields",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"nested_collections": {
"description": "Does the connector support nested collection queries using `NestedField::NestedCollection`",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"NestedFieldFilterByCapabilities": {
"title": "Nested Field Filter By Capabilities",
"type": "object",
"properties": {
"nested_arrays": {
"description": "Does the connector support filtering over nested arrays (ie. Expression::ArrayComparison)",
"anyOf": [
{
"$ref": "#/definitions/NestedArrayFilterByCapabilities"
},
{
"type": "null"
}
]
}
}
},
"NestedRelationshipCapabilities": {
"title": "Nested Relationship Capabilities",
"type": "object",
"properties": {
"array": {
"description": "Does the connector support navigating a relationship from inside a nested object inside a nested array",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"filtering": {
"description": "Does the connector support filtering over a relationship that starts from inside a nested object",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"ordering": {
"description": "Does the connector support ordering over a relationship that starts from inside a nested object",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"QueryCapabilities": {
"title": "Query Capabilities",
"type": "object",
"properties": {
"aggregates": {
"description": "Does the connector support aggregate queries",
"anyOf": [
{
"$ref": "#/definitions/AggregateCapabilities"
},
{
"type": "null"
}
]
},
"variables": {
"description": "Does the connector support queries which use variables",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"explain": {
"description": "Does the connector support explaining queries",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"nested_fields": {
"description": "Does the connector support nested fields",
"default": {},
"allOf": [
{
"$ref": "#/definitions/NestedFieldCapabilities"
}
]
},
"exists": {
"description": "Does the connector support EXISTS predicates",
"default": {},
"allOf": [
{
"$ref": "#/definitions/ExistsCapabilities"
}
]
}
}
},
"RelationshipCapabilities": {
"title": "Relationship Capabilities",
"type": "object",
"properties": {
"relation_comparisons": {
"description": "Does the connector support comparisons that involve related collections (ie. joins)?",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"order_by_aggregate": {
"description": "Does the connector support ordering by an aggregated array relationship?",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"nested": {
"description": "Does the connector support navigating a relationship from inside a nested object",
"anyOf": [
{
"$ref": "#/definitions/NestedRelationshipCapabilities"
},
{
"type": "null"
}
]
}
}
}
}
}
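A minimal capabilities response conforming to this schema might look like the following sketch (the particular capabilities enabled here are illustrative):

```json
{
  "version": "0.2.0",
  "capabilities": {
    "query": { "variables": {} },
    "mutation": {},
    "relationships": { "relation_comparisons": {} }
  }
}
```

Each leaf capability is an empty object, present when supported and omitted otherwise.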
ErrorResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Error Response",
"type": "object",
"required": [
"details",
"message"
],
"properties": {
"message": {
"description": "A human-readable summary of the error",
"type": "string"
},
"details": {
"description": "Any additional structured information about the error"
}
}
}
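An illustrative error response (the message and details are hypothetical); `details` may be any JSON value:

```json
{
  "message": "Internal error",
  "details": { "cause": "connection refused by upstream database" }
}
```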
ExplainResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Explain Response",
"type": "object",
"required": [
"details"
],
"properties": {
"details": {
"description": "A list of human-readable key-value pairs describing a query execution plan. For example, a connector for a relational database might return the generated SQL and/or the output of the `EXPLAIN` command. An API-based connector might encode a list of statically-known API calls which would be made.",
"type": "object",
"additionalProperties": {
"type": "string"
}
}
}
}
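For example, a connector backed by a relational database might return a response along these lines (the key and SQL shown are illustrative):

```json
{
  "details": {
    "generated_sql": "SELECT title FROM articles WHERE id = $1"
  }
}
```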
MutationRequest
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Mutation Request",
"type": "object",
"required": [
"collection_relationships",
"operations"
],
"properties": {
"operations": {
"description": "The mutation operations to perform",
"type": "array",
"items": {
"$ref": "#/definitions/MutationOperation"
}
},
"collection_relationships": {
"description": "The relationships between collections involved in the entire mutation request. Only used if the 'relationships' capability is supported.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Relationship"
}
}
},
"definitions": {
"Aggregate": {
"title": "Aggregate",
"oneOf": [
{
"type": "object",
"required": [
"column",
"distinct",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column_count"
]
},
"column": {
"description": "The column to apply the count aggregate function to",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"distinct": {
"description": "Whether or not only distinct items should be counted",
"type": "boolean"
}
}
},
{
"type": "object",
"required": [
"column",
"function",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"single_column"
]
},
"column": {
"description": "The column to apply the aggregation function to",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"function": {
"description": "Single column aggregate function name.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"star_count"
]
}
}
}
]
},
"Argument": {
"title": "Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
}
]
},
"ArrayComparison": {
"title": "Array Comparison",
"oneOf": [
{
"description": "Check if the array contains the specified value. Only used if the 'query.nested_fields.filter_by.nested_arrays.contains' capability is supported.",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"contains"
]
},
"value": {
"$ref": "#/definitions/ComparisonValue"
}
}
},
{
        "description": "Check if the array is empty. Only used if the 'query.nested_fields.filter_by.nested_arrays.is_empty' capability is supported.",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"is_empty"
]
}
}
}
]
},
"ComparisonTarget": {
"title": "Comparison Target",
"oneOf": [
{
"description": "The comparison targets a column.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.filter_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
}
},
{
"description": "The comparison targets the result of aggregation. Only used if the 'query.aggregates.filter_by' capability is supported.",
"type": "object",
"required": [
"aggregate",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"path": {
"description": "Non-empty collection of relationships to traverse",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"aggregate": {
"description": "The aggregation method to use",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"ComparisonValue": {
"title": "Comparison Value",
"oneOf": [
{
"description": "The value to compare against should be drawn from another column",
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
            "description": "Any relationships to traverse to reach this column. Only non-empty if the 'relationships.relation_comparisons' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.filter_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"scope": {
"description": "The scope in which this column exists, identified by an top-down index into the stack of scopes. The stack grows inside each `Expression::Exists`, so scope 0 (the default) refers to the current collection, and each subsequent index refers to the collection outside its predecessor's immediately enclosing `Expression::Exists` expression. Only used if the 'query.exists.named_scopes' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint",
"minimum": 0.0
}
}
},
{
"description": "A scalar value to compare against",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"scalar"
]
},
"value": true
}
},
{
"description": "A value to compare against that is to be drawn from the query's variables. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
}
]
},
"Dimension": {
"title": "Dimension",
"oneOf": [
{
"type": "object",
"required": [
"column_name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
"description": "Any (object) relationships to traverse to reach this column. Only non-empty if the 'relationships' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"column_name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column_name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"extraction": {
"description": "The name of the extraction function to apply to the selected value, if any",
"type": [
"string",
"null"
]
}
}
}
]
},
"ExistsInCollection": {
"title": "Exists In Collection",
"oneOf": [
{
"description": "The rows to evaluate the exists predicate against come from a related collection. Only used if the 'relationships' capability is supported.",
"type": "object",
"required": [
"arguments",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"related"
]
},
"field_path": {
"description": "Path to a nested field within an object column that must be navigated before the relationship is navigated. Only non-empty if the 'relationships.nested.filtering' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"relationship": {
"description": "The name of the relationship to follow",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
      "description": "The rows to evaluate the exists predicate against come from an unrelated collection. Only used if the 'query.exists.unrelated' capability is supported.",
"type": "object",
"required": [
"arguments",
"collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unrelated"
]
},
"collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"description": "The rows to evaluate the exists predicate against come from a nested array field. Only used if the 'query.exists.nested_collections' capability is supported.",
"type": "object",
"required": [
"column_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nested_collection"
]
},
"column_name": {
"type": "string"
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
},
{
"description": "Specifies a column that contains a nested array of scalars. The array will be brought into scope of the nested expression where each element becomes an object with one '__value' column that contains the element value. Only used if the 'query.exists.nested_scalar_collections' capability is supported.",
"type": "object",
"required": [
"column_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nested_scalar_collection"
]
},
"column_name": {
"type": "string"
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
}
]
},
"Expression": {
"title": "Expression",
"oneOf": [
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"and"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"or"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expression",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"not"
]
},
"expression": {
"$ref": "#/definitions/Expression"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"$ref": "#/definitions/UnaryComparisonOperator"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"binary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"type": "string"
},
"value": {
"$ref": "#/definitions/ComparisonValue"
}
}
},
{
"description": "A comparison against a nested array column. Only used if the 'query.nested_fields.filter_by.nested_arrays' capability is supported.",
"type": "object",
"required": [
"column",
"comparison",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array_comparison"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"comparison": {
"$ref": "#/definitions/ArrayComparison"
}
}
},
{
"type": "object",
"required": [
"in_collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"exists"
]
},
"in_collection": {
"$ref": "#/definitions/ExistsInCollection"
},
"predicate": {
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
}
]
},
"Field": {
"title": "Field",
"oneOf": [
{
"description": "A field satisfied by returning the value of a row's column.",
"type": "object",
"required": [
"column",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"column": {
"type": "string"
},
"fields": {
"description": "When the type of the column is a (possibly-nullable) array or object, the caller can request a subset of the complete column data, by specifying fields to fetch here. If omitted, the column data will be fetched in full.",
"anyOf": [
{
"$ref": "#/definitions/NestedField"
},
{
"type": "null"
}
]
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
}
}
},
{
"description": "A field satisfied by navigating a relationship from the current row to a related collection. Only used if the 'relationships' capability is supported.",
"type": "object",
"required": [
"arguments",
"query",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"relationship"
]
},
"query": {
"$ref": "#/definitions/Query"
},
"relationship": {
"description": "The name of the relationship to follow for the subquery",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
}
]
},
"GroupComparisonTarget": {
"title": "Aggregate Comparison Target",
"oneOf": [
{
"type": "object",
"required": [
"aggregate",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"aggregate": {
"$ref": "#/definitions/Aggregate"
}
}
}
]
},
"GroupComparisonValue": {
"title": "Aggregate Comparison Value",
"oneOf": [
{
"description": "A scalar value to compare against",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"scalar"
]
},
"value": true
}
},
{
"description": "A value to compare against that is to be drawn from the query's variables. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
}
]
},
"GroupExpression": {
"title": "Group Expression",
"oneOf": [
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"and"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/GroupExpression"
}
}
}
},
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"or"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/GroupExpression"
}
}
}
},
{
"type": "object",
"required": [
"expression",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"not"
]
},
"expression": {
"$ref": "#/definitions/GroupExpression"
}
}
},
{
"type": "object",
"required": [
"operator",
"target",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unary_comparison_operator"
]
},
"target": {
"$ref": "#/definitions/GroupComparisonTarget"
},
"operator": {
"$ref": "#/definitions/UnaryComparisonOperator"
}
}
},
{
"type": "object",
"required": [
"operator",
"target",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"binary_comparison_operator"
]
},
"target": {
"$ref": "#/definitions/GroupComparisonTarget"
},
"operator": {
"type": "string"
},
"value": {
"$ref": "#/definitions/GroupComparisonValue"
}
}
}
]
},
"GroupOrderBy": {
"title": "Group Order By",
"type": "object",
"required": [
"elements"
],
"properties": {
"elements": {
"description": "The elements to order by, in priority order",
"type": "array",
"items": {
"$ref": "#/definitions/GroupOrderByElement"
}
}
}
},
"GroupOrderByElement": {
"title": "Group Order By Element",
"type": "object",
"required": [
"order_direction",
"target"
],
"properties": {
"order_direction": {
"$ref": "#/definitions/OrderDirection"
},
"target": {
"$ref": "#/definitions/GroupOrderByTarget"
}
}
},
"GroupOrderByTarget": {
"title": "Group Order By Target",
"oneOf": [
{
"type": "object",
"required": [
"index",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"dimension"
]
},
"index": {
"description": "The index of the dimension to order by, selected from the dimensions provided in the `Grouping` request.",
"type": "integer",
"format": "uint",
"minimum": 0.0
}
}
},
{
"type": "object",
"required": [
"aggregate",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"aggregate": {
"description": "Aggregation method to apply",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"Grouping": {
"title": "Grouping",
"type": "object",
"required": [
"aggregates",
"dimensions"
],
"properties": {
"dimensions": {
"description": "Dimensions along which to partition the data",
"type": "array",
"items": {
"$ref": "#/definitions/Dimension"
}
},
"aggregates": {
"description": "Aggregates to compute in each group",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Aggregate"
}
},
"predicate": {
"description": "Optionally specify a predicate to apply after grouping rows. Only used if the 'query.aggregates.group_by.filter' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/GroupExpression"
},
{
"type": "null"
}
]
},
"order_by": {
"description": "Optionally specify how groups should be ordered. Only used if the 'query.aggregates.group_by.order' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/GroupOrderBy"
},
{
"type": "null"
}
]
},
"limit": {
"description": "Optionally limit to N groups. Only used if the 'query.aggregates.group_by.paginate' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"offset": {
"description": "Optionally offset from the Nth group. Only used if the 'query.aggregates.group_by.paginate' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
}
}
},
"MutationOperation": {
"title": "Mutation Operation",
"oneOf": [
{
"type": "object",
"required": [
"arguments",
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"procedure"
]
},
"name": {
"description": "The name of a procedure",
"type": "string"
},
"arguments": {
"description": "Any named procedure arguments",
"type": "object",
"additionalProperties": true
},
"fields": {
"description": "The fields to return from the result, or null to return everything",
"anyOf": [
{
"$ref": "#/definitions/NestedField"
},
{
"type": "null"
}
]
}
}
}
]
},
"NestedField": {
"title": "NestedField",
"oneOf": [
{
"title": "NestedObject",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"object"
]
},
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Field"
}
}
}
},
{
"title": "NestedArray",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array"
]
},
"fields": {
"$ref": "#/definitions/NestedField"
}
}
},
{
"title": "NestedCollection",
"description": "Perform a query over the nested array's rows. Only used if the 'query.nested_fields.nested_collections' capability is supported.",
"type": "object",
"required": [
"query",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"collection"
]
},
"query": {
"$ref": "#/definitions/Query"
}
}
}
]
},
"OrderBy": {
"title": "Order By",
"type": "object",
"required": [
"elements"
],
"properties": {
"elements": {
"description": "The elements to order by, in priority order",
"type": "array",
"items": {
"$ref": "#/definitions/OrderByElement"
}
}
}
},
"OrderByElement": {
"title": "Order By Element",
"type": "object",
"required": [
"order_direction",
"target"
],
"properties": {
"order_direction": {
"$ref": "#/definitions/OrderDirection"
},
"target": {
"$ref": "#/definitions/OrderByTarget"
}
}
},
"OrderByTarget": {
"title": "Order By Target",
"oneOf": [
{
"description": "The ordering is performed over a column.",
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
"description": "Any (object) relationships to traverse to reach this column. Only non-empty if the 'relationships' capability is supported. 'PathElement.field_path' will only be non-empty if the 'relationships.nested.ordering' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.order_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
}
},
{
"description": "The ordering is performed over the result of an aggregation. Only used if the 'relationships.order_by_aggregate' capability is supported.",
"type": "object",
"required": [
"aggregate",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"path": {
"description": "Non-empty collection of relationships to traverse. Only non-empty if the 'relationships' capability is supported. 'PathElement.field_path' will only be non-empty if the 'relationships.nested.ordering' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"aggregate": {
"description": "The aggregation method to use",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"OrderDirection": {
"title": "Order Direction",
"type": "string",
"enum": [
"asc",
"desc"
]
},
"PathElement": {
"title": "Path Element",
"type": "object",
"required": [
"arguments",
"relationship"
],
"properties": {
"field_path": {
"description": "Path to a nested field within an object column that must be navigated before the relationship is navigated. Only non-empty if the 'relationships.nested' capability is supported (plus perhaps one of the sub-capabilities, depending on the feature using the PathElement).",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"relationship": {
"description": "The name of the relationship to follow",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
},
"predicate": {
"description": "A predicate expression to apply to the target collection",
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
},
"Query": {
"title": "Query",
"type": "object",
"properties": {
"aggregates": {
"description": "Aggregate fields of the query. Only used if the 'query.aggregates' capability is supported.",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Aggregate"
}
},
"fields": {
"description": "Fields of the query",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Field"
}
},
"limit": {
"description": "Optionally limit to N results",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"offset": {
"description": "Optionally offset from the Nth result",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"order_by": {
"description": "Optionally specify how rows should be ordered",
"anyOf": [
{
"$ref": "#/definitions/OrderBy"
},
{
"type": "null"
}
]
},
"predicate": {
"description": "Optionally specify a predicate to apply to the rows",
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
},
"groups": {
"description": "Optionally group and aggregate the selected rows. Only used if the 'query.aggregates.group_by' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/Grouping"
},
{
"type": "null"
}
]
}
}
},
"Relationship": {
"title": "Relationship",
"type": "object",
"required": [
"arguments",
"column_mapping",
"relationship_type",
"target_collection"
],
"properties": {
"column_mapping": {
"description": "A mapping from columns on the source row to columns on the target collection. The column on the target collection is specified via a field path (i.e. an array of field names that descend through nested object fields). The field path will only contain a single item, meaning a column on the target collection's type, unless the 'relationships.nested' capability is supported, in which case multiple items denote a nested object field.",
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
}
},
"relationship_type": {
"$ref": "#/definitions/RelationshipType"
},
"target_collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
"RelationshipArgument": {
"title": "Relationship Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"type": "string"
}
}
}
]
},
"RelationshipType": {
"title": "Relationship Type",
"type": "string",
"enum": [
"object",
"array"
]
},
"UnaryComparisonOperator": {
"title": "Unary Comparison Operator",
"type": "string",
"enum": [
"is_null"
]
}
}
}
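As an illustration of the `MutationOperation` shape defined above, a single procedure operation might look as follows (the procedure name, argument names, and returned field names are hypothetical, not part of the specification):

```json
{
  "type": "procedure",
  "name": "upsert_article",
  "arguments": {
    "article": { "id": 1, "title": "Hello" }
  },
  "fields": {
    "type": "object",
    "fields": {
      "id": { "type": "column", "column": "id" }
    }
  }
}
```

Here `fields` is a `NestedField` of type `object`, requesting only the `id` column from the procedure's result; omitting `fields` would return the result in full.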
MutationResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Mutation Response",
"type": "object",
"required": [
"operation_results"
],
"properties": {
"operation_results": {
"description": "The results of each mutation operation, in the same order as they were received",
"type": "array",
"items": {
"$ref": "#/definitions/MutationOperationResults"
}
}
},
"definitions": {
"MutationOperationResults": {
"title": "Mutation Operation Results",
"oneOf": [
{
"type": "object",
"required": [
"result",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"procedure"
]
},
"result": true
}
}
]
}
}
}
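For example, a response to a mutation request containing a single procedure operation might look like this (the shape of the `result` value is illustrative; it is determined by the procedure's result type):

```json
{
  "operation_results": [
    {
      "type": "procedure",
      "result": { "affected_rows": 1 }
    }
  ]
}
```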
QueryRequest
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Query Request",
"description": "This is the request body of the query POST endpoint",
"type": "object",
"required": [
"arguments",
"collection",
"collection_relationships",
"query"
],
"properties": {
"collection": {
"description": "The name of a collection",
"type": "string"
},
"query": {
"description": "The query syntax tree",
"allOf": [
{
"$ref": "#/definitions/Query"
}
]
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"collection_relationships": {
"description": "Any relationships between collections involved in the query request. Only used if the 'relationships' capability is supported.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Relationship"
}
},
"variables": {
"description": "One set of named variables for each rowset to fetch. Each variable set should be substituted in turn, and a fresh set of rows returned. Only used if the 'query.variables' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "object",
"additionalProperties": true
}
}
},
"definitions": {
"Aggregate": {
"title": "Aggregate",
"oneOf": [
{
"type": "object",
"required": [
"column",
"distinct",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column_count"
]
},
"column": {
"description": "The column to apply the count aggregate function to",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"distinct": {
"description": "Whether or not only distinct items should be counted",
"type": "boolean"
}
}
},
{
"type": "object",
"required": [
"column",
"function",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"single_column"
]
},
"column": {
"description": "The column to apply the aggregation function to",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"function": {
"description": "Single column aggregate function name.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"star_count"
]
}
}
}
]
},
"Argument": {
"title": "Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
}
]
},
"ArrayComparison": {
"title": "Array Comparison",
"oneOf": [
{
"description": "Check if the array contains the specified value. Only used if the 'query.nested_fields.filter_by.nested_arrays.contains' capability is supported.",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"contains"
]
},
"value": {
"$ref": "#/definitions/ComparisonValue"
}
}
},
{
"description": "Check if the array is empty. Only used if the 'query.nested_fields.filter_by.nested_arrays.is_empty' capability is supported.",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"is_empty"
]
}
}
}
]
},
"ComparisonTarget": {
"title": "Comparison Target",
"oneOf": [
{
"description": "The comparison targets a column.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.filter_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
}
},
{
"description": "The comparison targets the result of aggregation. Only used if the 'query.aggregates.filter_by' capability is supported.",
"type": "object",
"required": [
"aggregate",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"path": {
"description": "Non-empty collection of relationships to traverse",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"aggregate": {
"description": "The aggregation method to use",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"ComparisonValue": {
"title": "Comparison Value",
"oneOf": [
{
"description": "The value to compare against should be drawn from another column",
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
"description": "Any relationships to traverse to reach this column. Only non-empty if the 'relationships.relation_comparisons' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.filter_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"scope": {
"description": "The scope in which this column exists, identified by a top-down index into the stack of scopes. The stack grows inside each `Expression::Exists`, so scope 0 (the default) refers to the current collection, and each subsequent index refers to the collection outside its predecessor's immediately enclosing `Expression::Exists` expression. Only used if the 'query.exists.named_scopes' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint",
"minimum": 0.0
}
}
},
{
"description": "A scalar value to compare against",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"scalar"
]
},
"value": true
}
},
{
"description": "A value to compare against that is to be drawn from the query's variables. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
}
]
},
"Dimension": {
"title": "Dimension",
"oneOf": [
{
"type": "object",
"required": [
"column_name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
"description": "Any (object) relationships to traverse to reach this column. Only non-empty if the 'relationships' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"column_name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column_name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"extraction": {
"description": "The name of the extraction function to apply to the selected value, if any",
"type": [
"string",
"null"
]
}
}
}
]
},
"ExistsInCollection": {
"title": "Exists In Collection",
"oneOf": [
{
"description": "The rows to evaluate the exists predicate against come from a related collection. Only used if the 'relationships' capability is supported.",
"type": "object",
"required": [
"arguments",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"related"
]
},
"field_path": {
"description": "Path to a nested field within an object column that must be navigated before the relationship is navigated. Only non-empty if the 'relationships.nested.filtering' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"relationship": {
"description": "The name of the relationship to follow",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"description": "The rows to evaluate the exists predicate against come from an unrelated collection. Only used if the 'query.exists.unrelated' capability is supported.",
"type": "object",
"required": [
"arguments",
"collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unrelated"
]
},
"collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"description": "The rows to evaluate the exists predicate against come from a nested array field. Only used if the 'query.exists.nested_collections' capability is supported.",
"type": "object",
"required": [
"column_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nested_collection"
]
},
"column_name": {
"type": "string"
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
},
{
"description": "Specifies a column that contains a nested array of scalars. The array will be brought into scope of the nested expression where each element becomes an object with one '__value' column that contains the element value. Only used if the 'query.exists.nested_scalar_collections' capability is supported.",
"type": "object",
"required": [
"column_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nested_scalar_collection"
]
},
"column_name": {
"type": "string"
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
}
]
},
"Expression": {
"title": "Expression",
"oneOf": [
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"and"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"or"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expression",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"not"
]
},
"expression": {
"$ref": "#/definitions/Expression"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"$ref": "#/definitions/UnaryComparisonOperator"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"binary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"type": "string"
},
"value": {
"$ref": "#/definitions/ComparisonValue"
}
}
},
{
"description": "A comparison against a nested array column. Only used if the 'query.nested_fields.filter_by.nested_arrays' capability is supported.",
"type": "object",
"required": [
"column",
"comparison",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array_comparison"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"comparison": {
"$ref": "#/definitions/ArrayComparison"
}
}
},
{
"type": "object",
"required": [
"in_collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"exists"
]
},
"in_collection": {
"$ref": "#/definitions/ExistsInCollection"
},
"predicate": {
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
}
]
},
"Field": {
"title": "Field",
"oneOf": [
{
"description": "A field satisfied by returning the value of a row's column.",
"type": "object",
"required": [
"column",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"column": {
"type": "string"
},
"fields": {
"description": "When the type of the column is a (possibly-nullable) array or object, the caller can request a subset of the complete column data, by specifying fields to fetch here. If omitted, the column data will be fetched in full.",
"anyOf": [
{
"$ref": "#/definitions/NestedField"
},
{
"type": "null"
}
]
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
}
}
},
{
"description": "A field satisfied by navigating a relationship from the current row to a related collection. Only used if the 'relationships' capability is supported.",
"type": "object",
"required": [
"arguments",
"query",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"relationship"
]
},
"query": {
"$ref": "#/definitions/Query"
},
"relationship": {
"description": "The name of the relationship to follow for the subquery",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
}
]
},
"GroupComparisonTarget": {
"title": "Aggregate Comparison Target",
"oneOf": [
{
"type": "object",
"required": [
"aggregate",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"aggregate": {
"$ref": "#/definitions/Aggregate"
}
}
}
]
},
"GroupComparisonValue": {
"title": "Aggregate Comparison Value",
"oneOf": [
{
"description": "A scalar value to compare against",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"scalar"
]
},
"value": true
}
},
{
"description": "A value to compare against that is to be drawn from the query's variables. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
}
]
},
"GroupExpression": {
"title": "Group Expression",
"oneOf": [
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"and"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/GroupExpression"
}
}
}
},
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"or"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/GroupExpression"
}
}
}
},
{
"type": "object",
"required": [
"expression",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"not"
]
},
"expression": {
"$ref": "#/definitions/GroupExpression"
}
}
},
{
"type": "object",
"required": [
"operator",
"target",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unary_comparison_operator"
]
},
"target": {
"$ref": "#/definitions/GroupComparisonTarget"
},
"operator": {
"$ref": "#/definitions/UnaryComparisonOperator"
}
}
},
{
"type": "object",
"required": [
"operator",
"target",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"binary_comparison_operator"
]
},
"target": {
"$ref": "#/definitions/GroupComparisonTarget"
},
"operator": {
"type": "string"
},
"value": {
"$ref": "#/definitions/GroupComparisonValue"
}
}
}
]
},
"GroupOrderBy": {
"title": "Group Order By",
"type": "object",
"required": [
"elements"
],
"properties": {
"elements": {
"description": "The elements to order by, in priority order",
"type": "array",
"items": {
"$ref": "#/definitions/GroupOrderByElement"
}
}
}
},
"GroupOrderByElement": {
"title": "Group Order By Element",
"type": "object",
"required": [
"order_direction",
"target"
],
"properties": {
"order_direction": {
"$ref": "#/definitions/OrderDirection"
},
"target": {
"$ref": "#/definitions/GroupOrderByTarget"
}
}
},
"GroupOrderByTarget": {
"title": "Group Order By Target",
"oneOf": [
{
"type": "object",
"required": [
"index",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"dimension"
]
},
"index": {
"description": "The index of the dimension to order by, selected from the dimensions provided in the `Grouping` request.",
"type": "integer",
"format": "uint",
"minimum": 0.0
}
}
},
{
"type": "object",
"required": [
"aggregate",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"aggregate": {
"description": "Aggregation method to apply",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"Grouping": {
"title": "Grouping",
"type": "object",
"required": [
"aggregates",
"dimensions"
],
"properties": {
"dimensions": {
"description": "Dimensions along which to partition the data",
"type": "array",
"items": {
"$ref": "#/definitions/Dimension"
}
},
"aggregates": {
"description": "Aggregates to compute in each group",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Aggregate"
}
},
"predicate": {
"description": "Optionally specify a predicate to apply after grouping rows. Only used if the 'query.aggregates.group_by.filter' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/GroupExpression"
},
{
"type": "null"
}
]
},
"order_by": {
"description": "Optionally specify how groups should be ordered. Only used if the 'query.aggregates.group_by.order' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/GroupOrderBy"
},
{
"type": "null"
}
]
},
"limit": {
"description": "Optionally limit to N groups. Only used if the 'query.aggregates.group_by.paginate' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"offset": {
"description": "Optionally offset from the Nth group. Only used if the 'query.aggregates.group_by.paginate' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
}
}
},
"NestedField": {
"title": "NestedField",
"oneOf": [
{
"title": "NestedObject",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"object"
]
},
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Field"
}
}
}
},
{
"title": "NestedArray",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array"
]
},
"fields": {
"$ref": "#/definitions/NestedField"
}
}
},
{
"title": "NestedCollection",
"description": "Perform a query over the nested array's rows. Only used if the 'query.nested_fields.nested_collections' capability is supported.",
"type": "object",
"required": [
"query",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"collection"
]
},
"query": {
"$ref": "#/definitions/Query"
}
}
}
]
},
"OrderBy": {
"title": "Order By",
"type": "object",
"required": [
"elements"
],
"properties": {
"elements": {
"description": "The elements to order by, in priority order",
"type": "array",
"items": {
"$ref": "#/definitions/OrderByElement"
}
}
}
},
"OrderByElement": {
"title": "Order By Element",
"type": "object",
"required": [
"order_direction",
"target"
],
"properties": {
"order_direction": {
"$ref": "#/definitions/OrderDirection"
},
"target": {
"$ref": "#/definitions/OrderByTarget"
}
}
},
"OrderByTarget": {
"title": "Order By Target",
"oneOf": [
{
"description": "The ordering is performed over a column.",
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
"description": "Any (object) relationships to traverse to reach this column. Only non-empty if the 'relationships' capability is supported. 'PathElement.field_path' will only be non-empty if the 'relationships.nested.ordering' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.order_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
}
},
{
"description": "The ordering is performed over the result of an aggregation. Only used if the 'relationships.order_by_aggregate' capability is supported.",
"type": "object",
"required": [
"aggregate",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"path": {
"description": "Non-empty collection of relationships to traverse. Only non-empty if the 'relationships' capability is supported. 'PathElement.field_path' will only be non-empty if the 'relationships.nested.ordering' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"aggregate": {
"description": "The aggregation method to use",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"OrderDirection": {
"title": "Order Direction",
"type": "string",
"enum": [
"asc",
"desc"
]
},
"PathElement": {
"title": "Path Element",
"type": "object",
"required": [
"arguments",
"relationship"
],
"properties": {
"field_path": {
"description": "Path to a nested field within an object column that must be navigated before the relationship is navigated. Only non-empty if the 'relationships.nested' capability is supported (plus perhaps one of the sub-capabilities, depending on the feature using the PathElement).",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"relationship": {
"description": "The name of the relationship to follow",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
},
"predicate": {
"description": "A predicate expression to apply to the target collection",
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
},
"Query": {
"title": "Query",
"type": "object",
"properties": {
"aggregates": {
"description": "Aggregate fields of the query. Only used if the 'query.aggregates' capability is supported.",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Aggregate"
}
},
"fields": {
"description": "Fields of the query",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Field"
}
},
"limit": {
"description": "Optionally limit to N results",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"offset": {
"description": "Optionally offset from the Nth result",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"order_by": {
"description": "Optionally specify how rows should be ordered",
"anyOf": [
{
"$ref": "#/definitions/OrderBy"
},
{
"type": "null"
}
]
},
"predicate": {
"description": "Optionally specify a predicate to apply to the rows",
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
},
"groups": {
"description": "Optionally group and aggregate the selected rows. Only used if the 'query.aggregates.group_by' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/Grouping"
},
{
"type": "null"
}
]
}
}
},
"Relationship": {
"title": "Relationship",
"type": "object",
"required": [
"arguments",
"column_mapping",
"relationship_type",
"target_collection"
],
"properties": {
"column_mapping": {
"description": "A mapping between columns on the source row to columns on the target collection. The column on the target collection is specified via a field path (ie. an array of field names that descend through nested object fields). The field path will only contain a single item, meaning a column on the target collection's type, unless the 'relationships.nested' capability is supported, in which case multiple items denotes a nested object field.",
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
}
},
"relationship_type": {
"$ref": "#/definitions/RelationshipType"
},
"target_collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
"RelationshipArgument": {
"title": "Relationship Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"type": "string"
}
}
}
]
},
"RelationshipType": {
"title": "Relationship Type",
"type": "string",
"enum": [
"object",
"array"
]
},
"UnaryComparisonOperator": {
"title": "Unary Comparison Operator",
"type": "string",
"enum": [
"is_null"
]
}
}
}
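As a sketch of how the `OrderBy` definitions above fit together, the following hypothetical value (the column names `"age"` and `"name"` and the relationship name `"author"` are made up for illustration) orders rows by one local column and one column reached through an object relationship:

```typescript
// A hypothetical OrderBy value conforming to the schema above:
// order by the "age" column descending, then by the "name" column of a
// related collection (via the "author" relationship) ascending.
const orderBy = {
  elements: [
    {
      order_direction: "desc",
      target: { type: "column", name: "age", path: [] },
    },
    {
      order_direction: "asc",
      target: {
        type: "column",
        name: "name",
        // Non-empty path requires the 'relationships' capability.
        path: [{ relationship: "author", arguments: {} }],
      },
    },
  ],
};

// Elements are applied in priority order: "age" first, then "name".
console.log(orderBy.elements.map((e) => e.order_direction));
```

Note that `order_direction` is constrained to `"asc"` or `"desc"` by the `OrderDirection` enum.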
QueryResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Query Response",
"description": "Query responses may return multiple RowSets when using queries with variables. Else, there should always be exactly one RowSet",
"type": "array",
"items": {
"$ref": "#/definitions/RowSet"
},
"definitions": {
"Group": {
"title": "Group",
"type": "object",
"required": [
"aggregates",
"dimensions"
],
"properties": {
"dimensions": {
"description": "Values of dimensions which identify this group",
"type": "array",
"items": true
},
"aggregates": {
"description": "Aggregates computed within this group",
"type": "object",
"additionalProperties": true
}
}
},
"RowFieldValue": {
"title": "Row Field Value"
},
"RowSet": {
"title": "Row Set",
"type": "object",
"properties": {
"aggregates": {
"description": "The results of the aggregates returned by the query",
"type": [
"object",
"null"
],
"additionalProperties": true
},
"rows": {
"description": "The rows returned by the query, corresponding to the query's fields",
"type": [
"array",
"null"
],
"items": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RowFieldValue"
}
}
},
"groups": {
"description": "The results of any grouping operation",
"type": [
"array",
"null"
],
"items": {
"$ref": "#/definitions/Group"
}
}
}
}
}
}
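A sketch of a `QueryResponse` value conforming to this schema: a query issued without variables yields a single `RowSet`. The field names `"id"` and `"title"` below are hypothetical.

```typescript
// A hypothetical QueryResponse: one RowSet (no variables were used),
// carrying two rows and no aggregates or groups.
const queryResponse = [
  {
    rows: [
      { id: 1, title: "First" },
      { id: 2, title: "Second" },
    ],
  },
];

// Had the request carried N variable sets, the response would instead
// contain N RowSets, one per variable set, in the same order.
console.log(queryResponse.length);
```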
QueryRequest
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Query Request",
"description": "This is the request body of the query POST endpoint",
"type": "object",
"required": [
"arguments",
"collection",
"collection_relationships",
"query"
],
"properties": {
"collection": {
"description": "The name of a collection",
"type": "string"
},
"query": {
"description": "The query syntax tree",
"allOf": [
{
"$ref": "#/definitions/Query"
}
]
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"collection_relationships": {
"description": "Any relationships between collections involved in the query request. Only used if the 'relationships' capability is supported.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Relationship"
}
},
"variables": {
"description": "One set of named variables for each rowset to fetch. Each variable set should be subtituted in turn, and a fresh set of rows returned. Only used if the 'query.variables' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "object",
"additionalProperties": true
}
}
},
"definitions": {
"Aggregate": {
"title": "Aggregate",
"oneOf": [
{
"type": "object",
"required": [
"column",
"distinct",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column_count"
]
},
"column": {
"description": "The column to apply the count aggregate function to",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"distinct": {
"description": "Whether or not only distinct items should be counted",
"type": "boolean"
}
}
},
{
"type": "object",
"required": [
"column",
"function",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"single_column"
]
},
"column": {
"description": "The column to apply the aggregation function to",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"function": {
"description": "Single column aggregate function name.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"star_count"
]
}
}
}
]
},
"Argument": {
"title": "Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
}
]
},
"ArrayComparison": {
"title": "Array Comparison",
"oneOf": [
{
"description": "Check if the array contains the specified value. Only used if the 'query.nested_fields.filter_by.nested_arrays.contains' capability is supported.",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"contains"
]
},
"value": {
"$ref": "#/definitions/ComparisonValue"
}
}
},
{
"description": "Check is the array is empty. Only used if the 'query.nested_fields.filter_by.nested_arrays.is_empty' capability is supported.",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"is_empty"
]
}
}
}
]
},
"ComparisonTarget": {
"title": "Comparison Target",
"oneOf": [
{
"description": "The comparison targets a column.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.filter_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
}
},
{
"description": "The comparison targets the result of aggregation. Only used if the 'query.aggregates.filter_by' capability is supported.",
"type": "object",
"required": [
"aggregate",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"path": {
"description": "Non-empty collection of relationships to traverse",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"aggregate": {
"description": "The aggregation method to use",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"ComparisonValue": {
"title": "Comparison Value",
"oneOf": [
{
"description": "The value to compare against should be drawn from another column",
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
"description": "Any relationships to traverse to reach this column. Only non-empty if the 'relationships.relation_comparisons' is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.filter_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"scope": {
"description": "The scope in which this column exists, identified by an top-down index into the stack of scopes. The stack grows inside each `Expression::Exists`, so scope 0 (the default) refers to the current collection, and each subsequent index refers to the collection outside its predecessor's immediately enclosing `Expression::Exists` expression. Only used if the 'query.exists.named_scopes' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint",
"minimum": 0.0
}
}
},
{
"description": "A scalar value to compare against",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"scalar"
]
},
"value": true
}
},
{
"description": "A value to compare against that is to be drawn from the query's variables. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
}
]
},
"Dimension": {
"title": "Dimension",
"oneOf": [
{
"type": "object",
"required": [
"column_name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
"description": "Any (object) relationships to traverse to reach this column. Only non-empty if the 'relationships' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"column_name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'column_name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"extraction": {
"description": "The name of the extraction function to apply to the selected value, if any",
"type": [
"string",
"null"
]
}
}
}
]
},
"ExistsInCollection": {
"title": "Exists In Collection",
"oneOf": [
{
"description": "The rows to evaluate the exists predicate against come from a related collection. Only used if the 'relationships' capability is supported.",
"type": "object",
"required": [
"arguments",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"related"
]
},
"field_path": {
"description": "Path to a nested field within an object column that must be navigated before the relationship is navigated. Only non-empty if the 'relationships.nested.filtering' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"relationship": {
"description": "The name of the relationship to follow",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"description": "The rows to evaluate the exists predicate against come from an unrelated collection Only used if the 'query.exists.unrelated' capability is supported.",
"type": "object",
"required": [
"arguments",
"collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unrelated"
]
},
"collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"description": "The rows to evaluate the exists predicate against come from a nested array field. Only used if the 'query.exists.nested_collections' capability is supported.",
"type": "object",
"required": [
"column_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nested_collection"
]
},
"column_name": {
"type": "string"
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
},
{
"description": "Specifies a column that contains a nested array of scalars. The array will be brought into scope of the nested expression where each element becomes an object with one '__value' column that contains the element value. Only used if the 'query.exists.nested_scalar_collections' capability is supported.",
"type": "object",
"required": [
"column_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nested_scalar_collection"
]
},
"column_name": {
"type": "string"
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
}
]
},
"Expression": {
"title": "Expression",
"oneOf": [
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"and"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"or"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expression",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"not"
]
},
"expression": {
"$ref": "#/definitions/Expression"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"$ref": "#/definitions/UnaryComparisonOperator"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"binary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"type": "string"
},
"value": {
"$ref": "#/definitions/ComparisonValue"
}
}
},
{
"description": "A comparison against a nested array column. Only used if the 'query.nested_fields.filter_by.nested_arrays' capability is supported.",
"type": "object",
"required": [
"column",
"comparison",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array_comparison"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"comparison": {
"$ref": "#/definitions/ArrayComparison"
}
}
},
{
"type": "object",
"required": [
"in_collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"exists"
]
},
"in_collection": {
"$ref": "#/definitions/ExistsInCollection"
},
"predicate": {
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
}
]
},
"Field": {
"title": "Field",
"oneOf": [
{
"description": "A field satisfied by returning the value of a row's column.",
"type": "object",
"required": [
"column",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"column": {
"type": "string"
},
"fields": {
"description": "When the type of the column is a (possibly-nullable) array or object, the caller can request a subset of the complete column data, by specifying fields to fetch here. If omitted, the column data will be fetched in full.",
"anyOf": [
{
"$ref": "#/definitions/NestedField"
},
{
"type": "null"
}
]
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
}
}
},
{
"description": "A field satisfied by navigating a relationship from the current row to a related collection. Only used if the 'relationships' capability is supported.",
"type": "object",
"required": [
"arguments",
"query",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"relationship"
]
},
"query": {
"$ref": "#/definitions/Query"
},
"relationship": {
"description": "The name of the relationship to follow for the subquery",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
}
]
},
"GroupComparisonTarget": {
"title": "Aggregate Comparison Target",
"oneOf": [
{
"type": "object",
"required": [
"aggregate",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"aggregate": {
"$ref": "#/definitions/Aggregate"
}
}
}
]
},
"GroupComparisonValue": {
"title": "Aggregate Comparison Value",
"oneOf": [
{
"description": "A scalar value to compare against",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"scalar"
]
},
"value": true
}
},
{
"description": "A value to compare against that is to be drawn from the query's variables. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
}
]
},
"GroupExpression": {
"title": "Group Expression",
"oneOf": [
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"and"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/GroupExpression"
}
}
}
},
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"or"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/GroupExpression"
}
}
}
},
{
"type": "object",
"required": [
"expression",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"not"
]
},
"expression": {
"$ref": "#/definitions/GroupExpression"
}
}
},
{
"type": "object",
"required": [
"operator",
"target",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unary_comparison_operator"
]
},
"target": {
"$ref": "#/definitions/GroupComparisonTarget"
},
"operator": {
"$ref": "#/definitions/UnaryComparisonOperator"
}
}
},
{
"type": "object",
"required": [
"operator",
"target",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"binary_comparison_operator"
]
},
"target": {
"$ref": "#/definitions/GroupComparisonTarget"
},
"operator": {
"type": "string"
},
"value": {
"$ref": "#/definitions/GroupComparisonValue"
}
}
}
]
},
"GroupOrderBy": {
"title": "Group Order By",
"type": "object",
"required": [
"elements"
],
"properties": {
"elements": {
"description": "The elements to order by, in priority order",
"type": "array",
"items": {
"$ref": "#/definitions/GroupOrderByElement"
}
}
}
},
"GroupOrderByElement": {
"title": "Group Order By Element",
"type": "object",
"required": [
"order_direction",
"target"
],
"properties": {
"order_direction": {
"$ref": "#/definitions/OrderDirection"
},
"target": {
"$ref": "#/definitions/GroupOrderByTarget"
}
}
},
"GroupOrderByTarget": {
"title": "Group Order By Target",
"oneOf": [
{
"type": "object",
"required": [
"index",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"dimension"
]
},
"index": {
"description": "The index of the dimension to order by, selected from the dimensions provided in the `Grouping` request.",
"type": "integer",
"format": "uint",
"minimum": 0.0
}
}
},
{
"type": "object",
"required": [
"aggregate",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"aggregate": {
"description": "Aggregation method to apply",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"Grouping": {
"title": "Grouping",
"type": "object",
"required": [
"aggregates",
"dimensions"
],
"properties": {
"dimensions": {
"description": "Dimensions along which to partition the data",
"type": "array",
"items": {
"$ref": "#/definitions/Dimension"
}
},
"aggregates": {
"description": "Aggregates to compute in each group",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Aggregate"
}
},
"predicate": {
"description": "Optionally specify a predicate to apply after grouping rows. Only used if the 'query.aggregates.group_by.filter' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/GroupExpression"
},
{
"type": "null"
}
]
},
"order_by": {
"description": "Optionally specify how groups should be ordered Only used if the 'query.aggregates.group_by.order' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/GroupOrderBy"
},
{
"type": "null"
}
]
},
"limit": {
"description": "Optionally limit to N groups Only used if the 'query.aggregates.group_by.paginate' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"offset": {
"description": "Optionally offset from the Nth group Only used if the 'query.aggregates.group_by.paginate' capability is supported.",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
}
}
},
"NestedField": {
"title": "NestedField",
"oneOf": [
{
"title": "NestedObject",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"object"
]
},
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Field"
}
}
}
},
{
"title": "NestedArray",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array"
]
},
"fields": {
"$ref": "#/definitions/NestedField"
}
}
},
{
"title": "NestedCollection",
"description": "Perform a query over the nested array's rows. Only used if the 'query.nested_fields.nested_collections' capability is supported.",
"type": "object",
"required": [
"query",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"collection"
]
},
"query": {
"$ref": "#/definitions/Query"
}
}
}
]
},
"OrderBy": {
"title": "Order By",
"type": "object",
"required": [
"elements"
],
"properties": {
"elements": {
"description": "The elements to order by, in priority order",
"type": "array",
"items": {
"$ref": "#/definitions/OrderByElement"
}
}
}
},
"OrderByElement": {
"title": "Order By Element",
"type": "object",
"required": [
"order_direction",
"target"
],
"properties": {
"order_direction": {
"$ref": "#/definitions/OrderDirection"
},
"target": {
"$ref": "#/definitions/OrderByTarget"
}
}
},
"OrderByTarget": {
"title": "Order By Target",
"oneOf": [
{
"description": "The ordering is performed over a column.",
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"path": {
"description": "Any (object) relationships to traverse to reach this column. Only non-empty if the 'relationships' capability is supported. 'PathElement.field_path' will only be non-empty if the 'relationships.nested.ordering' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"name": {
"description": "The name of the column",
"type": "string"
},
"arguments": {
"description": "Arguments to satisfy the column specified by 'name'",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested field within an object column. Only non-empty if the 'query.nested_fields.order_by' capability is supported.",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
}
},
{
"description": "The ordering is performed over the result of an aggregation. Only used if the 'relationships.order_by_aggregate' capability is supported.",
"type": "object",
"required": [
"aggregate",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"aggregate"
]
},
"path": {
"description": "Non-empty collection of relationships to traverse. Only non-empty if the 'relationships' capability is supported. 'PathElement.field_path' will only be non-empty if the 'relationships.nested.ordering' capability is supported.",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
},
"aggregate": {
"description": "The aggregation method to use",
"allOf": [
{
"$ref": "#/definitions/Aggregate"
}
]
}
}
}
]
},
"OrderDirection": {
"title": "Order Direction",
"type": "string",
"enum": [
"asc",
"desc"
]
},
"PathElement": {
"title": "Path Element",
"type": "object",
"required": [
"arguments",
"relationship"
],
"properties": {
"field_path": {
"description": "Path to a nested field within an object column that must be navigated before the relationship is navigated. Only non-empty if the 'relationships.nested' capability is supported (plus perhaps one of the sub-capabilities, depending on the feature using the PathElement).",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"relationship": {
"description": "The name of the relationship to follow",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
},
"predicate": {
"description": "A predicate expression to apply to the target collection",
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
},
"Query": {
"title": "Query",
"type": "object",
"properties": {
"aggregates": {
"description": "Aggregate fields of the query. Only used if the 'query.aggregates' capability is supported.",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Aggregate"
}
},
"fields": {
"description": "Fields of the query",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Field"
}
},
"limit": {
"description": "Optionally limit to N results",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"offset": {
"description": "Optionally offset from the Nth result",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"order_by": {
"description": "Optionally specify how rows should be ordered",
"anyOf": [
{
"$ref": "#/definitions/OrderBy"
},
{
"type": "null"
}
]
},
"predicate": {
"description": "Optionally specify a predicate to apply to the rows",
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
},
"groups": {
"description": "Optionally group and aggregate the selected rows. Only used if the 'query.aggregates.group_by' capability is supported.",
"anyOf": [
{
"$ref": "#/definitions/Grouping"
},
{
"type": "null"
}
]
}
}
},
"Relationship": {
"title": "Relationship",
"type": "object",
"required": [
"arguments",
"column_mapping",
"relationship_type",
"target_collection"
],
"properties": {
"column_mapping": {
"description": "A mapping between columns on the source row to columns on the target collection. The column on the target collection is specified via a field path (ie. an array of field names that descend through nested object fields). The field path will only contain a single item, meaning a column on the target collection's type, unless the 'relationships.nested' capability is supported, in which case multiple items denotes a nested object field.",
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
}
},
"relationship_type": {
"$ref": "#/definitions/RelationshipType"
},
"target_collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
"RelationshipArgument": {
"title": "Relationship Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable. Only used if the 'query.variables' capability is supported.",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"type": "string"
}
}
}
]
},
"RelationshipType": {
"title": "Relationship Type",
"type": "string",
"enum": [
"object",
"array"
]
},
"UnaryComparisonOperator": {
"title": "Unary Comparison Operator",
"type": "string",
"enum": [
"is_null"
]
}
}
}
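To illustrate the Relationship and RelationshipArgument definitions above, here is a hypothetical object relationship value; the author_id column, authors collection, and id field are invented for the example:

```json
{
  "column_mapping": { "author_id": ["id"] },
  "relationship_type": "object",
  "target_collection": "authors",
  "arguments": {}
}
```

Because the 'relationships.nested' capability is not assumed here, each field path in column_mapping contains exactly one item.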
SchemaResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Schema Response",
"type": "object",
"required": [
"collections",
"functions",
"object_types",
"procedures",
"scalar_types"
],
"properties": {
"scalar_types": {
"description": "A list of scalar types which will be used as the types of collection columns",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ScalarType"
}
},
"object_types": {
"description": "A list of object types which can be used as the types of arguments, or return types of procedures. Names should not overlap with scalar type names.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ObjectType"
}
},
"collections": {
"description": "Collections which are available for queries",
"type": "array",
"items": {
"$ref": "#/definitions/CollectionInfo"
}
},
"functions": {
"description": "Functions (i.e. collections which return a single column and row)",
"type": "array",
"items": {
"$ref": "#/definitions/FunctionInfo"
}
},
"procedures": {
"description": "Procedures which are available for execution as part of mutations",
"type": "array",
"items": {
"$ref": "#/definitions/ProcedureInfo"
}
},
"capabilities": {
"description": "Schema data which is relevant to features enabled by capabilities",
"anyOf": [
{
"$ref": "#/definitions/CapabilitySchemaInfo"
},
{
"type": "null"
}
]
}
},
"definitions": {
"AggregateCapabilitiesSchemaInfo": {
"title": "Aggregate Capabilities Schema Info",
"type": "object",
"required": [
"count_scalar_type"
],
"properties": {
"count_scalar_type": {
"description": "The scalar type which should be used for the return type of count (star_count and column_count) operations.",
"type": "string"
}
}
},
"AggregateFunctionDefinition": {
"title": "Aggregate Function Definition",
"description": "The definition of an aggregation function on a scalar type",
"oneOf": [
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"min"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"max"
]
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"sum"
]
},
"result_type": {
"description": "The scalar type of the result of this function, which should have one of the type representations Int64 or Float64, depending on whether this function is defined on a scalar type with an integer or floating-point representation, respectively.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"average"
]
},
"result_type": {
"description": "The scalar type of the result of this function, which should have the type representation Float64",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"custom"
]
},
"result_type": {
"description": "The scalar or object type of the result of this function",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
}
]
},
"ArgumentInfo": {
"title": "Argument Info",
"type": "object",
"required": [
"type"
],
"properties": {
"description": {
"description": "Argument description",
"type": [
"string",
"null"
]
},
"type": {
"description": "The name of the type of this argument",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
"CapabilitySchemaInfo": {
"title": "Capability Schema Info",
"type": "object",
"properties": {
"query": {
"description": "Schema information relevant to query capabilities",
"anyOf": [
{
"$ref": "#/definitions/QueryCapabilitiesSchemaInfo"
},
{
"type": "null"
}
]
}
}
},
"CollectionInfo": {
"title": "Collection Info",
"type": "object",
"required": [
"arguments",
"name",
"type",
"uniqueness_constraints"
],
"properties": {
"name": {
"description": "The name of the collection\n\nNote: these names are abstract - there is no requirement that this name correspond to the name of an actual collection in the database.",
"type": "string"
},
"description": {
"description": "Description of the collection",
"type": [
"string",
"null"
]
},
"arguments": {
"description": "Any arguments that this collection requires",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ArgumentInfo"
}
},
"type": {
"description": "The name of the collection's object type",
"type": "string"
},
"uniqueness_constraints": {
"description": "Any uniqueness constraints enforced on this collection",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/UniquenessConstraint"
}
}
}
},
"ComparisonOperatorDefinition": {
"title": "Comparison Operator Definition",
"description": "The definition of a comparison operator on a scalar type",
"oneOf": [
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"equal"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"in"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"less_than"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"less_than_or_equal"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"greater_than"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"greater_than_or_equal"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"contains"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"contains_insensitive"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"starts_with"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"starts_with_insensitive"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"ends_with"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"ends_with_insensitive"
]
}
}
},
{
"type": "object",
"required": [
"argument_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"custom"
]
},
"argument_type": {
"description": "The type of the argument to this operator",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
}
]
},
"ExtractionFunctionDefinition": {
"title": "Extraction Function Definition",
"description": "The definition of an aggregation function on a scalar type",
"oneOf": [
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nanosecond"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"microsecond"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"second"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"minute"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"hour"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"day"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"week"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"month"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"quarter"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"year"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"day_of_week"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"day_of_year"
]
},
"result_type": {
"description": "The result type, which must be a defined scalar type in the schema response.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"result_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"custom"
]
},
"result_type": {
"description": "The scalar or object type of the result of this function",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
}
]
},
"ForeignKeyConstraint": {
"title": "Foreign Key Constraint",
"type": "object",
"required": [
"column_mapping",
"foreign_collection"
],
"properties": {
"column_mapping": {
"description": "The columns on which you want want to define the foreign key. This is a mapping between fields on object type to columns on the foreign collection. The column on the foreign collection is specified via a field path (ie. an array of field names that descend through nested object fields). The field path must only contain a single item, meaning a column on the foreign collection's type, unless the 'relationships.nested' capability is supported, in which case multiple items can be used to denote a nested object field.",
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
}
},
"foreign_collection": {
"description": "The name of a collection",
"type": "string"
}
}
},
"FunctionInfo": {
"title": "Function Info",
"type": "object",
"required": [
"arguments",
"name",
"result_type"
],
"properties": {
"name": {
"description": "The name of the function",
"type": "string"
},
"description": {
"description": "Description of the function",
"type": [
"string",
"null"
]
},
"arguments": {
"description": "Any arguments that this collection requires",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ArgumentInfo"
}
},
"result_type": {
"description": "The name of the function's result type",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
"ObjectField": {
"title": "Object Field",
"description": "The definition of an object field",
"type": "object",
"required": [
"type"
],
"properties": {
"description": {
"description": "Description of this field",
"type": [
"string",
"null"
]
},
"type": {
"description": "The type of this field",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
},
"arguments": {
"description": "The arguments available to the field - Matches implementation from CollectionInfo",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ArgumentInfo"
}
}
}
},
"ObjectType": {
"title": "Object Type",
"description": "The definition of an object type",
"type": "object",
"required": [
"fields",
"foreign_keys"
],
"properties": {
"description": {
"description": "Description of this type",
"type": [
"string",
"null"
]
},
"fields": {
"description": "Fields defined on this object type",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ObjectField"
}
},
"foreign_keys": {
"description": "Any foreign keys defined for this object type's columns",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ForeignKeyConstraint"
}
}
}
},
"ProcedureInfo": {
"title": "Procedure Info",
"type": "object",
"required": [
"arguments",
"name",
"result_type"
],
"properties": {
"name": {
"description": "The name of the procedure",
"type": "string"
},
"description": {
"description": "Column description",
"type": [
"string",
"null"
]
},
"arguments": {
"description": "Any arguments that this collection requires",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ArgumentInfo"
}
},
"result_type": {
"description": "The name of the result type",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
"QueryCapabilitiesSchemaInfo": {
"title": "Query Capabilities Schema Info",
"type": "object",
"properties": {
"aggregates": {
"description": "Schema information relevant to aggregate query capabilities",
"anyOf": [
{
"$ref": "#/definitions/AggregateCapabilitiesSchemaInfo"
},
{
"type": "null"
}
]
}
}
},
"ScalarType": {
"title": "Scalar Type",
"description": "The definition of a scalar type, i.e. types that can be used as the types of columns.",
"type": "object",
"required": [
"aggregate_functions",
"comparison_operators",
"representation"
],
"properties": {
"representation": {
"description": "A description of valid values for this scalar type.",
"allOf": [
{
"$ref": "#/definitions/TypeRepresentation"
}
]
},
"aggregate_functions": {
"description": "A map from aggregate function names to their definitions. Result type names must be defined scalar types declared in ScalarTypesCapabilities.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/AggregateFunctionDefinition"
}
},
"comparison_operators": {
"description": "A map from comparison operator names to their definitions. Argument type names must be defined scalar types declared in ScalarTypesCapabilities.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ComparisonOperatorDefinition"
}
},
"extraction_functions": {
"description": "A map from extraction function names to their definitions.",
"default": {},
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ExtractionFunctionDefinition"
}
}
}
},
"Type": {
"title": "Type",
"description": "Types track the valid representations of values as JSON",
"oneOf": [
{
"description": "A named type",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"named"
]
},
"name": {
"description": "The name can refer to a scalar or object type",
"type": "string"
}
}
},
{
"description": "A nullable type",
"type": "object",
"required": [
"type",
"underlying_type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nullable"
]
},
"underlying_type": {
"description": "The type of the non-null inhabitants of this type",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
{
"description": "An array type",
"type": "object",
"required": [
"element_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array"
]
},
"element_type": {
"description": "The type of the elements of the array",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
{
"description": "A predicate type for a given object type",
"type": "object",
"required": [
"object_type_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"predicate"
]
},
"object_type_name": {
"description": "The object type name",
"type": "string"
}
}
}
]
},
"TypeRepresentation": {
"title": "Type Representation",
"description": "Representations of scalar types",
"oneOf": [
{
"description": "JSON booleans",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"boolean"
]
}
}
},
{
"description": "Any JSON string",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"string"
]
}
}
},
{
"description": "A 8-bit signed integer with a minimum value of -2^7 and a maximum value of 2^7 - 1",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"int8"
]
}
}
},
{
"description": "A 16-bit signed integer with a minimum value of -2^15 and a maximum value of 2^15 - 1",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"int16"
]
}
}
},
{
"description": "A 32-bit signed integer with a minimum value of -2^31 and a maximum value of 2^31 - 1",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"int32"
]
}
}
},
{
"description": "A 64-bit signed integer with a minimum value of -2^63 and a maximum value of 2^63 - 1",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"int64"
]
}
}
},
{
"description": "An IEEE-754 single-precision floating-point number",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"float32"
]
}
}
},
{
"description": "An IEEE-754 double-precision floating-point number",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"float64"
]
}
}
},
{
"description": "Arbitrary-precision integer string",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"biginteger"
]
}
}
},
{
"description": "Arbitrary-precision decimal string",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"bigdecimal"
]
}
}
},
{
"description": "UUID string (8-4-4-4-12)",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"uuid"
]
}
}
},
{
"description": "ISO 8601 date",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"date"
]
}
}
},
{
"description": "ISO 8601 timestamp",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"timestamp"
]
}
}
},
{
"description": "ISO 8601 timestamp-with-timezone",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"timestamptz"
]
}
}
},
{
"description": "GeoJSON, per RFC 7946",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"geography"
]
}
}
},
{
"description": "GeoJSON Geometry object, per RFC 7946",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"geometry"
]
}
}
},
{
"description": "Base64-encoded bytes",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"bytes"
]
}
}
},
{
"description": "Arbitrary JSON",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"json"
]
}
}
},
{
"description": "One of the specified string values",
"type": "object",
"required": [
"one_of",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"enum"
]
},
"one_of": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
]
},
"UniquenessConstraint": {
"title": "Uniqueness Constraint",
"type": "object",
"required": [
"unique_columns"
],
"properties": {
"unique_columns": {
"description": "A list of columns which this constraint requires to be unique",
"type": "array",
"items": {
"type": "string"
}
}
}
}
}
}
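As a sketch of how these definitions fit together, a minimal SchemaResponse might look like the following; the String scalar type, article object type, and articles collection are invented for the example:

```json
{
  "scalar_types": {
    "String": {
      "representation": { "type": "string" },
      "aggregate_functions": {},
      "comparison_operators": {
        "eq": { "type": "equal" }
      }
    }
  },
  "object_types": {
    "article": {
      "description": "An article",
      "fields": {
        "id": { "type": { "type": "named", "name": "String" } },
        "title": { "type": { "type": "named", "name": "String" } }
      },
      "foreign_keys": {}
    }
  },
  "collections": [
    {
      "name": "articles",
      "type": "article",
      "arguments": {},
      "uniqueness_constraints": {
        "ArticleByID": { "unique_columns": ["id"] }
      }
    }
  ],
  "functions": [],
  "procedures": []
}
```

All five required top-level properties are present; functions and procedures may be empty arrays when the data source exposes neither.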