Overview
NOTE
This specification contains the low-level details for connector authors, and is intended as a complete reference.
Users looking to build their own connectors might want to also look at some additional resources:
- Hasura Connector Hub contains a list of currently available connectors
- Let's Build a Connector is a step-by-step guide to creating a connector using TypeScript
Hasura data connectors allow you to extend the functionality of the Hasura server by providing web services which can resolve new sources of data. By following this specification, those sources of data can be added to your Hasura graph, and the usual Hasura features such as relationships and permissions will be supported for your data source.
This specification is designed to be as general as possible, supporting many different types of data source, while still being targeted enough to provide useful features with high performance guarantees. It is important to note that data connectors are designed for tabular data which supports efficient filtering and sorting. If you are able to model your data source given these constraints, then it will be a good fit for a data connector, but if not, you might like to consider a GraphQL remote source integration with Hasura instead.
API Specification
Version |
---|
0.1.0 |
A data connector encapsulates a data source by implementing the protocol in this specification.
A data connector must implement several web service endpoints:
- A capabilities endpoint, which describes which features the data source is capable of implementing.
- A schema endpoint, which describes the resources provided by the data source, and the shape of the data they contain.
- A query endpoint, which reads data from one of the relations described by the schema endpoint.
- A query/explain endpoint, which explains a query plan, without actually executing it.
- A mutation endpoint, which modifies the data in one of the relations described by the schema endpoint.
- A mutation/explain endpoint, which explains a mutation plan, without actually executing it.
- A metrics endpoint, which exposes runtime metrics about the data connector.
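The endpoint surface above can be sketched as a simple route table. This is an illustrative sketch only: the handler names are hypothetical, and the specification fixes only the methods and paths.

```python
# Hypothetical route table for the endpoints a connector must serve.
# Handler names are illustrative; the spec only fixes methods and paths.
ROUTES = {
    ("GET", "/capabilities"): "get_capabilities",
    ("GET", "/schema"): "get_schema",
    ("POST", "/query"): "handle_query",
    ("POST", "/query/explain"): "explain_query",
    ("POST", "/mutation"): "handle_mutation",
    ("POST", "/mutation/explain"): "explain_mutation",
    ("GET", "/metrics"): "get_metrics",
    ("GET", "/health"): "get_health",
}

def dispatch(method: str, path: str) -> str:
    """Return the handler name for a request, or raise KeyError (a 404)."""
    return ROUTES[(method, path)]
```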
Changelog
0.1.6
Specification
- `EXISTS` expressions can now query nested collections
0.1.5
Rust Libraries
- Add newtypes for string types
- Expose the specification version in `ndc-models`
0.1.4
Specification
- Aggregates over nested fields
ndc-test
- Replay test folders in alphabetical order
Fixes
- Add `impl Default` for `NestedFieldCapabilities`
0.1.3
Specification
- Support field-level arguments
- Support filtering and ordering by values of nested fields
- Added a `biginteger` type representation
ndc-test
- Validate all response types
- Release pipeline for ndc-test CLI
Rust Libraries
- Upgrade Rust to v1.78.0, and the Rust dependencies to their latest versions
- Add back features for native-tls vs rustls
0.1.2
Specification
- More type representations were added, and some were deprecated.
Rust Libraries
- Upgrade to Rust v1.77
- The `ndc-client` library was removed. Clients are advised to use the new `ndc-models` library for type definitions, and to use an HTTP client library of their choice directly.
0.1.1
Specification
- Equality operators were more precisely specified
- Scalar types can now specify representations
ndc-test
- Aggregate tests are gated behind the aggregates capability
- Automatic tests are now generated for exists predicates
- Automatic tests are now generated for `single_column` aggregates
Rust Libraries
- `rustls` is supported instead of `native-tls` using a Cargo feature.
- Upgrade `opentelemetry` to v0.22.0
- `colored` dependency removed in favor of `colorful`
0.1.0
Terminology
Tables are now known as collections.
Collection Names
Collection names are now single strings instead of arrays of strings. The array structure was previously used to represent qualification by a schema or database name, but the structure was not used anywhere on the client side, and had no semantic meaning. GDC now abstracts over these concepts, and expects relations to be named by strings.
No Configuration
The configuration header convention was removed. Connectors are now expected to manage their own configuration, and a connector URL fully represents that connector with its pre-specified configuration.
No Database Concepts in GDC
GDC no longer sends any metadata to indicate database-specific concepts. For example, a collection used to indicate whether it was a table or a view. Such metadata would be passed back in the query IR, to help the connector disambiguate which database object to query. When we proposed adding functions, we would have had to add a new type to disambiguate nullary functions from collections, and so on. Instead, we now expect connectors to understand their own schema, and to understand the query IR that they receive, as long as it is compatible with their GDC schema.
Column types are no longer sent in the query and mutation requests.
Tables, views and functions are unified under a single concept called "collections". GDC does not care how queries and mutations on relations are implemented.
Collection Arguments
Collection arguments were added to relations in order to support use cases like table-valued functions and certain REST endpoints. Relationships can determine collection arguments.
Functions
Collections which return a single column and a single row are also called "functions", and identified separately in the schema response.
Field Arguments
Field arguments were added to fields in order to support use cases like computed fields.
Operators
The equality operator is now expected on every scalar type implicitly.
Note: it was already implicitly supported by any connector advertising the variables capability, which imposes column equality constraints in each row set fetched in a forall query.
The equality operator will have semantics assigned for the purposes of testing.
Scalars can define additional operators, whose semantics are opaque.
Procedures
Procedures were added to the list of available mutation operation types.
Schema
- Scalar types were moved to the schema endpoint
- The `object_types` field was added to the schema endpoint
Raw Queries
The raw query endpoint was removed, since it cannot be given any useful semantics across all implementations.
Datasets
The datasets endpoints were removed from the specification, because there was no way to use them usefully without prior knowledge of their implementation.
Basics
Data connectors are implemented as HTTP services. To refer to a running data connector, it suffices to specify its base URL. All required endpoints are specified relative to this base URL.
All endpoints should accept JSON (in the case of POST request bodies) and return JSON using the `application/json` content type. The particular format of each JSON document will be specified for each endpoint.
Versioning
This specification is versioned using semantic versioning, and a data connector claims compatibility with a semantic version range via its capabilities endpoint.
Non-breaking changes to the specification may be achieved via the addition of new capabilities, which a connector will be assumed not to implement if the corresponding field is not present in its capabilities endpoint.
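As a sketch, a client consuming the version reported by the capabilities endpoint might perform a compatibility check like the following. The helper names are hypothetical, and the rule encoded (for 0.x versions, the minor number is the breaking-change boundary) is the usual semantic-versioning convention, not a statement from this specification.

```python
def parse_version(v: str) -> tuple[int, int, int]:
    """Parse a 'major.minor.patch' version string into a tuple."""
    major, minor, patch = (int(p) for p in v.split("."))
    return major, minor, patch

def is_compatible(connector_version: str, required_version: str) -> bool:
    """True if a connector claiming connector_version can serve a client
    that needs required_version, under common semver conventions."""
    c, r = parse_version(connector_version), parse_version(required_version)
    if c[0] != r[0]:
        return False  # different major versions are incompatible
    if c[0] == 0 and c[1] != r[1]:
        return False  # for 0.x versions, the minor number is the breaking boundary
    return c >= r     # otherwise, newer-or-equal is acceptable
```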
Error Handling
Status Codes
Data connectors should use standard HTTP error codes to signal error conditions back to the Hasura server. In particular, the following error codes should be used in the indicated scenarios:
Response Code | Meaning | Used when |
---|---|---|
200 | OK | The request was handled successfully according to this specification. |
400 | Bad Request | The request did not match the data connector's expectation based on this specification. |
403 | Forbidden | The request could not be handled because a permission check failed - for example, a mutation might fail because a check constraint was not met. |
409 | Conflict | The request could not be handled because it would create a conflicting state for the data source - for example, a mutation might fail because a foreign key constraint was not met. |
422 | Unprocessable Content | The request could not be handled because, while the request was well-formed, it was not semantically correct. For example, a value for a custom scalar type was provided, but with an incorrect type. |
500 | Internal Server Error | The request could not be handled because of an error on the server. |
501 | Not Supported | The request could not be handled because it relies on an unsupported capability. Note: this ought to indicate an error on the caller side, since the caller should not generate requests which are incompatible with the indicated capabilities. |
502 | Bad Gateway | The request could not be handled because an upstream service was unavailable or returned an unexpected response, e.g., a connection to a database server failed |
Response Body
In the case of an error, data connectors should return an `ErrorResponse` as JSON in the response body.
Service Health
Data connectors must provide a health endpoint which can be used to indicate service health and readiness to any client applications.
Request
GET /health
Response
If the data connector is available and ready to accept requests, then the health endpoint should return status code `200 OK`.
Otherwise, it should ideally return status code `503 Service Unavailable`, or some other appropriate HTTP error code.
Metrics
Data connectors should provide a metrics endpoint which reports relevant metrics in a textual format. Data connectors can report any metrics which are deemed relevant, or none at all, with the exception of any reserved keys.
Request
GET /metrics
Response
The metrics endpoint should return a content type of `text/plain`, and return any metrics in the Prometheus textual format.
Reserved keys
Metric names prefixed with `hasura_` are reserved for future use, and should not be included in the response.
Example
# HELP query_total The number of /query requests served
# TYPE query_total counter
query_total 10000 1685405427000
# HELP mutation_total The number of /mutation requests served
# TYPE mutation_total counter
mutation_total 5000 1685405427000
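The reserved-prefix rule can be enforced mechanically: a connector could check its own exposition text before serving it. This is an illustrative sketch with a hypothetical helper, handling only the simple lines of the Prometheus text format.

```python
RESERVED_PREFIX = "hasura_"

def reserved_metric_names(exposition: str) -> list[str]:
    """Return metric names in a Prometheus text exposition that violate
    the reserved-prefix rule."""
    bad = []
    for line in exposition.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        # metric name is the first token, before any label braces
        name = line.split()[0].split("{")[0]
        if name.startswith(RESERVED_PREFIX):
            bad.append(name)
    return bad

sample = """# HELP query_total The number of /query requests served
# TYPE query_total counter
query_total 10000
hasura_internal 1
"""
```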
Telemetry
Hasura uses OpenTelemetry to coordinate the collection of traces and metrics with data connectors.
Trace Collection
Trace collection is out of the scope of this specification currently. This may change in a future revision.
Trace Propagation
Hasura uses the W3C TraceContext specification to implement trace propagation. Data connectors should propagate tracing headers in this format to any downstream services.
Capabilities
The capabilities endpoint provides metadata about the features which the data connector (and data source) support.
Request
GET /capabilities
Response
Example
{
"version": "0.1.6",
"capabilities": {
"query": {
"aggregates": {},
"variables": {},
"nested_fields": {
"filter_by": {},
"order_by": {},
"aggregates": {}
},
"exists": {
"nested_collections": {}
}
},
"mutation": {},
"relationships": {
"relation_comparisons": {},
"order_by_aggregate": {}
}
}
}
Response Fields
Name | Description |
---|---|
version | A semantic version number of this specification which the data connector claims to implement |
capabilities.query.aggregates | Whether the data connector supports aggregate queries |
capabilities.query.exists.nested_collections | Whether the data connector supports exists expressions against nested collections |
capabilities.query.variables | Whether the data connector supports queries with variables |
capabilities.query.explain | Whether the data connector is capable of describing query plans |
capabilities.query.nested_fields.filter_by | Whether the data connector is capable of filtering by nested fields |
capabilities.query.nested_fields.order_by | Whether the data connector is capable of ordering by nested fields |
capabilities.mutation.transactional | Whether the data connector is capable of executing multiple mutations in a transaction |
capabilities.mutation.explain | Whether the data connector is capable of describing mutation plans |
capabilities.relationships | Whether the data connector supports relationships |
capabilities.relationships.order_by_aggregate | Whether order by clauses can include aggregates |
capabilities.relationships.relation_comparisons | Whether comparisons can include columns reachable via relationships |
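Since an absent capability field means the feature is unsupported, a client-side check reduces to a null-safe path lookup. A sketch, using a hypothetical helper and a capabilities value abridged from the example above:

```python
# Abridged capabilities response, per the example above
caps = {
    "version": "0.1.6",
    "capabilities": {
        "query": {"aggregates": {}, "variables": {}},
        "mutation": {},
        "relationships": {"relation_comparisons": {}},
    },
}

def supports(response: dict, path: str) -> bool:
    """True if the dotted capability path is present in the response.
    A missing key at any level means the connector does not implement it."""
    node = response["capabilities"]
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return True
```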
See also
- Type `Capabilities`
- Type `CapabilitiesResponse`
- Type `QueryCapabilities`
- Type `NestedFieldCapabilities`
- Type `MutationCapabilities`
- Type `RelationshipCapabilities`
Types
Several definitions in this specification make mention of types. Types are used to categorize the sorts of data returned and accepted by a data connector.
Scalar and named object types are defined in the schema response , and referred to by name at the point of use.
Array types, nullable types and predicate types are constructed at the point of use.
Named Types
To refer to a named (scalar or object) type, use the type `named`, and provide the name:
{
"type": "named",
"name": "String"
}
Array Types
To refer to an array type, use the type `array`, and refer to the type of the elements of the array in the `element_type` field:
{
"type": "array",
"element_type": {
"type": "named",
"name": "String"
}
}
Nullable Types
To refer to a nullable type, use the type `nullable`, and refer to the type of the underlying (non-null) inhabitants in the `underlying_type` field:
{
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "String"
}
}
Nullable and array types can be nested. For example, to refer to a nullable array of nullable strings:
{
"type": "nullable",
"underlying_type": {
"type": "array",
"element_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "String"
}
}
}
}
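Because these constructions compose mechanically, small helper constructors can make them easier to build. A sketch (the helper names are not part of the specification):

```python
def named(name: str) -> dict:
    """A reference to a named scalar or object type."""
    return {"type": "named", "name": name}

def array_of(element_type: dict) -> dict:
    """An array type with the given element type."""
    return {"type": "array", "element_type": element_type}

def nullable(underlying_type: dict) -> dict:
    """A nullable wrapper around the given type."""
    return {"type": "nullable", "underlying_type": underlying_type}

# A nullable array of nullable strings, as in the example above:
ty = nullable(array_of(nullable(named("String"))))
```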
Predicate Types
A predicate type can be used to represent valid predicates (of type `Expression`) for an object type. A value of a predicate type is represented, in inputs and return values, as a JSON value which parses as an `Expression`. Valid expressions are those which refer to the columns of the object type.
To refer to a predicate type, use the type `predicate`, and provide the name of the object type:
{
"type": "predicate",
"object_type_name": "article"
}
Note: predicate types are intended primarily for use in arguments to functions and procedures, but they can be used anywhere a `Type` is expected, including in output types.
See also
- Type `Type`
- Scalar types
- Object types
Schema
The schema endpoint defines any types used by the data connector, and describes the collections and their columns, functions, and any procedures.
The schema endpoint is used to specify the behavior of a data connector, so that it can be tested, verified, and used by tools such as code generators. It is primarily provided by data connector implementors as a development and specification tool, and it is not expected to be used at "runtime", in the same sense that the `/query` and `/mutation` endpoints would be.
Request
GET /schema
Response
See SchemaResponse
Example
{
"scalar_types": {
"Int": {
"representation": {
"type": "int32"
},
"aggregate_functions": {
"max": {
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
},
"min": {
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
},
"comparison_operators": {
"eq": {
"type": "equal"
},
"in": {
"type": "in"
}
}
},
"String": {
"representation": {
"type": "string"
},
"aggregate_functions": {},
"comparison_operators": {
"eq": {
"type": "equal"
},
"in": {
"type": "in"
},
"like": {
"type": "custom",
"argument_type": {
"type": "named",
"name": "String"
}
}
}
}
},
"object_types": {
"article": {
"description": "An article",
"fields": {
"author_id": {
"description": "The article's author ID",
"type": {
"type": "named",
"name": "Int"
}
},
"id": {
"description": "The article's primary key",
"type": {
"type": "named",
"name": "Int"
}
},
"title": {
"description": "The article's title",
"type": {
"type": "named",
"name": "String"
}
}
}
},
"author": {
"description": "An author",
"fields": {
"first_name": {
"description": "The author's first name",
"type": {
"type": "named",
"name": "String"
}
},
"id": {
"description": "The author's primary key",
"type": {
"type": "named",
"name": "Int"
}
},
"last_name": {
"description": "The author's last name",
"type": {
"type": "named",
"name": "String"
}
}
}
},
"institution": {
"description": "An institution",
"fields": {
"departments": {
"description": "The institution's departments",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "String"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
},
"id": {
"description": "The institution's primary key",
"type": {
"type": "named",
"name": "Int"
}
},
"location": {
"description": "The institution's location",
"type": {
"type": "named",
"name": "location"
}
},
"name": {
"description": "The institution's name",
"type": {
"type": "named",
"name": "String"
}
},
"staff": {
"description": "The institution's staff",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "staff_member"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
}
}
},
"location": {
"description": "A location",
"fields": {
"campuses": {
"description": "The location's campuses",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "String"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
},
"city": {
"description": "The location's city",
"type": {
"type": "named",
"name": "String"
}
},
"country": {
"description": "The location's country",
"type": {
"type": "named",
"name": "String"
}
}
}
},
"staff_member": {
"description": "A staff member",
"fields": {
"first_name": {
"description": "The staff member's first name",
"type": {
"type": "named",
"name": "String"
}
},
"last_name": {
"description": "The staff member's last name",
"type": {
"type": "named",
"name": "String"
}
},
"specialities": {
"description": "The staff member's specialities",
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "String"
}
},
"arguments": {
"limit": {
"type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
}
}
}
}
},
"collections": [
{
"name": "articles",
"description": "A collection of articles",
"arguments": {},
"type": "article",
"uniqueness_constraints": {
"ArticleByID": {
"unique_columns": [
"id"
]
}
},
"foreign_keys": {
"Article_AuthorID": {
"column_mapping": {
"author_id": "id"
},
"foreign_collection": "authors"
}
}
},
{
"name": "authors",
"description": "A collection of authors",
"arguments": {},
"type": "author",
"uniqueness_constraints": {
"AuthorByID": {
"unique_columns": [
"id"
]
}
},
"foreign_keys": {}
},
{
"name": "institutions",
"description": "A collection of institutions",
"arguments": {},
"type": "institution",
"uniqueness_constraints": {
"InstitutionByID": {
"unique_columns": [
"id"
]
}
},
"foreign_keys": {}
},
{
"name": "articles_by_author",
"description": "Articles parameterized by author",
"arguments": {
"author_id": {
"type": {
"type": "named",
"name": "Int"
}
}
},
"type": "article",
"uniqueness_constraints": {},
"foreign_keys": {}
}
],
"functions": [
{
"name": "latest_article_id",
"description": "Get the ID of the most recent article",
"arguments": {},
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
},
{
"name": "latest_article",
"description": "Get the most recent article",
"arguments": {},
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "article"
}
}
}
],
"procedures": [
{
"name": "upsert_article",
"description": "Insert or update an article",
"arguments": {
"article": {
"description": "The article to insert or update",
"type": {
"type": "named",
"name": "article"
}
}
},
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "article"
}
}
},
{
"name": "delete_articles",
"description": "Delete articles which match a predicate",
"arguments": {
"where": {
"description": "The predicate",
"type": {
"type": "predicate",
"object_type_name": "article"
}
}
},
"result_type": {
"type": "array",
"element_type": {
"type": "named",
"name": "article"
}
}
}
]
}
Response Fields
Name | Description |
---|---|
scalar_types | Scalar Types |
object_types | Object Types |
collections | Collections |
functions | Functions |
procedures | Procedures |
Scalar Types
The schema should describe any irreducible scalar types. Scalar types can be used as the types of columns, or in general as the types of object fields.
Scalar types define several types of operations, which extend the capabilities of the query and mutation APIs: comparison operators and aggregation functions.
Type Representations
A scalar type definition can include an optional type representation. The representation, if provided, indicates to potential callers what values can be expected in responses, and what values are considered acceptable in requests.
If the representation is omitted, it defaults to `json`.
Supported Representations
type | Description | JSON representation |
---|---|---|
boolean | Boolean | Boolean |
string | String | String |
int8 | An 8-bit signed integer with a minimum value of -2^7 and a maximum value of 2^7 - 1 | Number |
int16 | A 16-bit signed integer with a minimum value of -2^15 and a maximum value of 2^15 - 1 | Number |
int32 | A 32-bit signed integer with a minimum value of -2^31 and a maximum value of 2^31 - 1 | Number |
int64 | A 64-bit signed integer with a minimum value of -2^63 and a maximum value of 2^63 - 1 | String |
float32 | An IEEE-754 single-precision floating-point number | Number |
float64 | An IEEE-754 double-precision floating-point number | Number |
biginteger | Arbitrary-precision integer string | String |
bigdecimal | Arbitrary-precision decimal string | String |
uuid | UUID string (8-4-4-4-12 format) | String |
date | ISO 8601 date | String |
timestamp | ISO 8601 timestamp | String |
timestamptz | ISO 8601 timestamp-with-timezone | String |
geography | GeoJSON, per RFC 7946 | JSON |
geometry | GeoJSON Geometry object, per RFC 7946 | JSON |
bytes | Base64-encoded bytes | String |
json | Arbitrary JSON | JSON |
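To illustrate a few of the rows above, a connector might validate JSON-decoded values against a declared representation. This is a partial, hypothetical sketch covering only three representations, not a complete validator:

```python
def valid_for_representation(rep: str, value) -> bool:
    """Check a JSON-decoded value against a few type representations."""
    if rep == "int32":
        # a JSON Number within signed 32-bit bounds (bool is excluded,
        # since Python bools are ints)
        return (isinstance(value, int) and not isinstance(value, bool)
                and -2**31 <= value <= 2**31 - 1)
    if rep == "int64":
        # 64-bit integers travel as JSON Strings, avoiding precision loss
        # in JSON numbers
        try:
            return isinstance(value, str) and -2**63 <= int(value) <= 2**63 - 1
        except ValueError:
            return False
    if rep == "boolean":
        return isinstance(value, bool)
    raise NotImplementedError(rep)
```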
Enum Representations
A scalar type with a representation of type `enum` accepts one of a set of string values, specified by the `one_of` argument.
For example, this representation indicates that the only three valid values are the strings `"foo"`, `"bar"` and `"baz"`:
{
"type": "enum",
"one_of": ["foo", "bar", "baz"]
}
Deprecated Representations
The following representations are deprecated as of version 0.1.2:
type | Description | JSON representation |
---|---|---|
number | Any JSON number | Number |
integer | Any JSON number with no decimal part | Number |
Connectors should use the sized integer and floating-point types instead.
Comparison Operators
Comparison operators extend the query AST with the ability to express new binary comparison expressions in the predicate.
For example, a data connector might augment a `String` scalar type with a `LIKE` operator which tests for a fuzzy match based on a regular expression.
A comparison operator is either a standard operator, or a custom operator.
To define a comparison operator, add a `ComparisonOperatorDefinition` to the `comparison_operators` field of the schema response.
For example:
{
"scalar_types": {
"String": {
"aggregate_functions": {},
"comparison_operators": {
"like": {
"type": "custom",
"argument_type": {
"type": "named",
"name": "String"
}
}
}
}
},
...
}
Standard Comparison Operators
Equal
An operator defined using type `equal` tests if a column value is equal to a scalar value, another column value, or a variable.
Note: syntactic equality
Specifically, a predicate expression which uses an operator of type `equal` should implement syntactic equality:
- An expression which tests for equality of a column with a scalar value or variable should return that scalar value exactly (equal as JSON values) for all rows in each corresponding row set, whenever the same column is selected.
- An expression which tests for equality of a column with another column should return the same values in both columns (equal as JSON values) for all rows in each corresponding row set, whenever both of those columns are selected.
This type of equality is quite strict, and it might not be possible to implement such an operator for all scalar types. For example, a case-insensitive string type's natural case-insensitive equality operator would not meet the criteria above. In such cases, the scalar type should not provide an equal operator.
In
An operator defined using type `in` tests if a column value is a member of an array of values. The array is specified either as a scalar, a variable, or as the value of another column.
It should accept an array type as its argument, whose element type is the scalar type for which it is defined. It should be equivalent to a disjunction of individual equality tests on the elements of the provided array, where the equality test is an equivalence relation in the same sense as above.
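The equivalence described above can be stated directly: evaluating `in` against an array should agree with evaluating a disjunction of individual `equal` tests. A sketch over plain Python values, using Python `==` as a stand-in for the scalar type's equal operator:

```python
def eval_in(column_value, array) -> bool:
    """Membership test, as the `in` operator would evaluate it."""
    return any(column_value == candidate for candidate in array)

def eval_equal_disjunction(column_value, array) -> bool:
    """The equivalent OR of individual equality tests."""
    result = False
    for candidate in array:
        result = result or (column_value == candidate)
    return result

values = ["alice", "bob", "carol"]
```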
Custom Comparison Operators
Data connectors can also define custom comparison operators using type `custom`. A custom operator is defined by its argument type, and its semantics are otherwise unspecified.
Aggregation Functions
Aggregation functions extend the query AST with the ability to express new aggregates within the `aggregates` portion of a query. They also allow sorting the query results via the `order_by` query field.
Note: data connectors are required to implement the count and count-distinct aggregations for columns of all scalar types, and those operators are distinguished in the query AST. There is no need to define these aggregates as aggregation functions.
For example, a data connector might augment a `Float` scalar type with a `SUM` function which aggregates a sum of a collection of floating-point numbers.
An aggregation function is defined by its result type - that is, the type of the aggregated data.
To define an aggregation function, add an `AggregateFunctionDefinition` to the `aggregate_functions` field of the schema response.
For example:
{
"scalar_types": {
"Float": {
"aggregate_functions": {
"sum": {
"result_type": {
"type": "named",
"name": "Float"
}
}
},
"comparison_operators": {}
}
},
...
}
See also
Object Types
The schema should define any named object types which will be used as the types of collection row sets, or procedure inputs or outputs.
An object type consists of a name and a collection of named fields. Each field is defined by its type, and any arguments.
Note: field arguments are only used in a query context. Objects with field arguments cannot be used as input types, and fields with arguments cannot be used to define column mappings, or in nested field references.
To define an object type, add an `ObjectType` to the `object_types` field of the schema response.
Example
{
"object_types": {
"coords": {
"description": "Latitude and longitude",
"fields": {
"latitude": {
"description": "Latitude in degrees north of the equator",
"arguments": {},
"type": {
"type": "named",
"name": "Float"
}
},
"longitude": {
"description": "Longitude in degrees east of the Greenwich meridian",
"arguments": {},
"type": {
"type": "named",
"name": "Float"
}
}
}
},
...
},
...
}
Extended Example
Object types can refer to other object types in the types of their fields, and make use of other type structure such as array types and nullable types.
In the context of array types, it can be useful to use arguments on fields to allow the caller to customize the response.
For example, here we define a type `widget`, and a second type which contains a `widgets` field, parameterized by a `limit` argument:
{
"object_types": {
"widget": {
"description": "Description of a widget",
"fields": {
"id": {
"description": "Primary key",
"arguments": {},
"type": {
"type": "named",
"name": "ID"
}
},
"name": {
"description": "Name of this widget",
"arguments": {},
"type": {
"type": "named",
"name": "String"
}
}
}
},
"inventory": {
"description": "The items in stock",
"fields": {
"widgets": {
"description": "Those widgets currently in stock",
"arguments": {
"limit": {
"description": "The maximum number of widgets to fetch",
"argument_type": {
"type": "named",
"name": "Int"
}
}
},
"type": {
"type": "array",
"element_type": {
"type": "named",
"name": "widget"
}
}
}
}
}
},
...
}
See also
- Type `ObjectType`
- Type `ObjectField`
Collections
The schema should define the metadata for any collections which can be queried using the query endpoint, or mutated using the mutation endpoint.
Each collection is defined by its name, any collection arguments, the object type of its rows, and some additional metadata related to permissions and constraints.
To describe a collection, add a `CollectionInfo` structure to the `collections` field of the schema response.
Requirements
- The `type` field should name an object type which is defined in the schema response.
Example
{
"collections": [
{
"name": "articles",
"description": "A collection of articles",
"arguments": {},
"type": "article",
"deletable": false,
"uniqueness_constraints": {
"ArticleByID": {
"unique_columns": [
"id"
]
}
},
"foreign_keys": {}
},
{
"name": "authors",
"description": "A collection of authors",
"arguments": {},
"type": "author",
"deletable": false,
"uniqueness_constraints": {
"AuthorByID": {
"unique_columns": [
"id"
]
}
},
"foreign_keys": {}
}
],
...
}
See also
- Type `CollectionInfo`
Functions
Functions are a special case of collections, which are identified separately in the schema for convenience.
A function is a collection which returns a single row and a single column, named `__value`. Like collections, functions can have arguments. Unlike collections, functions cannot be used by the mutation endpoint, do not describe constraints, and only provide a type for the `__value` column, not the name of an object type.
Note: even though a function acts like a collection returning a row type with a single column, there is no need to define and name such a type in the `object_types` section of the schema response.
To describe a function, add a `FunctionInfo` structure to the `functions` field of the schema response.
Example
{
"functions": [
{
"name": "latest_article_id",
"description": "Get the ID of the most recent article",
"arguments": {},
"result_type": {
"type": "nullable",
"underlying_type": {
"type": "named",
"name": "Int"
}
}
}
],
...
}
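Because a function is a collection with exactly one row and one `__value` column, a query against it produces a row set of fixed shape. A sketch of wrapping a computed result (the helper name is hypothetical):

```python
def function_row_set(value) -> dict:
    """Wrap a function's result in the single-row, single-column shape
    used for function results."""
    return {"rows": [{"__value": value}]}

# e.g. a response row set for latest_article_id returning 42
row_set = function_row_set(42)
```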
See also
- Type `FunctionInfo`
Procedures
The schema should define metadata for each procedure which the data connector implements.
Each procedure is defined by its name, any argument types, and a result type.
To describe a procedure, add a `ProcedureInfo` structure to the `procedures` field of the schema response.
Example
{
"procedures": [
{
"name": "upsert_article",
"description": "Insert or update an article",
"arguments": {
"article": {
"description": "The article to insert or update",
"type": {
"type": "named",
"name": "article"
}
}
},
"result_type": {
"type": "named",
"name": "article"
}
}
],
...
}
See also
- Type `ProcedureInfo`
Queries
The query endpoint accepts a query request, containing expressions to be evaluated in the context of the data source, and returns a response consisting of relevant rows of data.
The structure and requirements for specific fields listed below will be covered in subsequent chapters.
Request
POST /query
Request
See QueryRequest
Request Fields
Name | Description |
---|---|
collection | The name of a collection to query |
query | The query syntax tree |
arguments | Values to be provided to any top-level collection arguments |
collection_relationships | Any relationships between collections involved in the query request |
variables | One set of named variables for each row set to fetch. Each variable set should be substituted in turn, and a fresh set of rows returned. |
Response
See QueryResponse
Requirements
- If the request specifies variables, then the response must contain one RowSet for each collection of variables provided. If not, the data connector should respond as if variables were set to a single empty collection of variables: [{}].
- If the request specifies fields, then the response must contain rows according to the schema advertised for the requested collection.
- If the request specifies aggregates, then the response must contain aggregates, with one response key per requested aggregate, using the same keys. See aggregates.
- If the request specifies arguments, then the implementation must validate the provided arguments against the types specified by the collection's schema. See arguments.
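The variables requirement above can be illustrated with a minimal sketch. This is not part of the specification: the helper names (run_query, matches, project) are invented, and only an eq comparison against a variable value is implemented.

```python
def run_query(rows, request):
    """Return a QueryResponse: one RowSet per variable set."""
    # When `variables` is absent, behave as if it were a single empty set: [{}]
    variable_sets = request.get("variables") or [{}]
    query = request["query"]
    row_sets = []
    for variables in variable_sets:
        matched = [r for r in rows if matches(r, query.get("predicate"), variables)]
        projected = [project(r, query["fields"]) for r in matched]
        row_sets.append({"rows": projected})
    return row_sets

def matches(row, predicate, variables):
    # Only an `eq` comparison against a variable value is sketched here.
    if predicate is None:
        return True
    lhs = row[predicate["column"]["name"]]
    rhs = variables[predicate["value"]["name"]]
    return lhs == rhs

def project(row, fields):
    # Select the requested columns, keyed by the aliases used in the request.
    return {alias: row[f["column"]] for alias, f in fields.items()}
```

Given two variable sets, the same query is evaluated twice, producing two independent RowSets in the response.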
Field Selection
A Query can specify which fields to fetch. The available fields are either
- the columns on the selected collection (i.e. those advertised in the corresponding CollectionInfo structure in the schema response), or
- fields from related collections.

The requested fields are specified as a collection of Field structures in the fields property on the Query.
Field Arguments
Arguments can be supplied to fields via the arguments
key. These match the format described in the arguments documentation.
The schema response will specify which fields take arguments via its respective arguments
key.
If a field has any arguments defined, then the arguments field must be provided wherever that field is referenced. All arguments are required, including nullable arguments.
Nested Fields
Queries can specify nested field selections for columns which have structured types (that is, not simply a scalar type or a nullable scalar type).
In order to specify nested field selections, use the fields property of the Field structure, which is a NestedField structure.
If fields
is omitted, the entire structure of the column's data should be returned.
If fields
is provided, its value should be compatible with the type of the column:
- For an object-typed column (whether nullable or not), the fields property should contain a NestedField with type object. The fields property of the NestedField specifies a Field structure for each requested nested field from the objects.
- For an array-typed column (whether nullable or not), the fields property should contain a NestedField with type array. The fields property of the NestedField should contain another NestedField structure, compatible with the type of the elements of the array. The selection function denoted by this nested NestedField structure should be applied to each element of each array.
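The recursion implied by the rules above can be sketched as follows. This is an illustrative helper (the names select_field and select_nested are invented), handling only the object and array cases described:

```python
def select_field(row, field):
    """Apply a Field of type "column", honoring an optional nested `fields`."""
    value = row[field["column"]]
    nested = field.get("fields")
    # If `fields` is omitted, return the entire structure of the column's data.
    return value if nested is None else select_nested(value, nested)

def select_nested(value, nested):
    if value is None:
        # Nullable object/array columns pass null through unchanged.
        return None
    if nested["type"] == "object":
        # Select each requested nested field from the object.
        return {alias: select_field(value, f) for alias, f in nested["fields"].items()}
    if nested["type"] == "array":
        # Apply the inner selection function to each element of the array.
        return [select_nested(elem, nested["fields"]) for elem in value]
    raise ValueError(f"unsupported NestedField type {nested['type']!r}")
```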
Examples
Simple column selection
Here is an example of a query which selects some columns from the articles
collection of the reference data connector:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
},
"collection_relationships": {}
}
Example with Nested Object Types
Here is an example of a query which selects some columns from a nested object inside the rows of the institutions
collection of the reference data connector:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"location": {
"type": "column",
"column": "location",
"fields": {
"type": "object",
"fields": {
"city": {
"type": "column",
"column": "city"
},
"campuses": {
"type": "column",
"column": "campuses",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
}
},
"location_all": {
"type": "column",
"column": "location"
}
}
},
"collection_relationships": {}
}
Notice that the location
column is fetched twice: once to illustrate the use of the fields
property, to fetch a subset of data, and again in the location_all
field, which omits the fields
property and fetches the entire structure.
Example with Nested Array Types
Here is an example of a query which selects some columns from a nested array inside the rows of the institutions
collection of the reference data connector:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"staff": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
},
"fields": {
"type": "array",
"fields": {
"type": "object",
"fields": {
"last_name": {
"type": "column",
"column": "last_name"
},
"fields_of_study": {
"type": "column",
"column": "specialities",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
}
}
},
"departments": {
"type": "column",
"column": "departments",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
},
"collection_relationships": {}
}
Notice that the staff
column is fetched using a fields
property of type array
. For each staff member in each institution row, we apply the selection function denoted by its fields
property (of type object
). Specifically, the last_name
and specialities
properties are selected for each staff member.
Example with Field Arguments
Here is an example of a query which selects some columns from a nested array inside the rows of the institutions
collection of the reference data connector and uses the limit
field argument to limit the number of items returned:
{
"$schema": "../../../../ndc-models/tests/json_schema/query_request.jsonschema",
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"staff": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": 1
}
},
"fields": {
"type": "array",
"fields": {
"type": "object",
"fields": {
"last_name": {
"type": "column",
"column": "last_name"
},
"fields_of_study": {
"type": "column",
"column": "specialities",
"arguments": {
"limit": {
"type": "literal",
"value": 2
}
}
}
}
}
}
},
"departments": {
"type":"column",
"column": "departments",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
}
},
"collection_relationships": {}
}
Requirements
- If the QueryRequest contains a Query which specifies fields, then each RowSet in the response should contain the rows property, and each row should contain all of the requested fields.
See also
- Type
Query
- Type
RowFieldValue
- Type
RowSet
Filtering
A Query
can specify a predicate expression which should be used to filter rows in the response.
A predicate expression can be one of
- An application of a comparison operator to a column and a value, or
- An EXISTS expression, or
- A conjunction of other expressions, or
- A disjunction of other expressions, or
- A negation of another expression
The predicate expression is specified in the predicate
field of the Query
object.
Comparison Operators
Unary Operators
Unary comparison operators are denoted by expressions with a type
field of unary_comparison_operator
.
The only supported unary operator currently is is_null, which returns true when a column value is null:
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "title"
}
}
Binary Operators
Binary comparison operators are denoted by expressions with a type
field of binary_comparison_operator
.
The set of available operators depends on the type of the column involved in the expression. The operator
property should specify the name of one of the binary operators from the field's scalar type definition.
The type ComparisonValue
describes the valid inhabitants of the value
field. The value
field should be an expression which evaluates to a value whose type is compatible with the definition of the comparison operator.
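To illustrate, a connector might resolve the operator name against its scalar type definitions along these lines. The shape of SCALAR_TYPES below is a hypothetical fragment of a schema response, and lookup_operator is an invented helper:

```python
# Hypothetical fragment of a schema response's scalar type definitions.
SCALAR_TYPES = {
    "String": {
        "comparison_operators": {
            "eq":   {"type": "equal"},
            "like": {"type": "custom", "argument_type": {"type": "named", "name": "String"}},
        }
    }
}

def lookup_operator(scalar_type, operator_name):
    """Resolve the operator named in a binary_comparison_operator expression
    against the column's scalar type definition."""
    operators = SCALAR_TYPES[scalar_type]["comparison_operators"]
    if operator_name not in operators:
        raise ValueError(f"unknown operator {operator_name!r} for type {scalar_type}")
    return operators[operator_name]
```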
Equality Operators
This example makes use of an eq
operator, which is defined using the equal
semantics, to test a single column for equality with a scalar value:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "id",
"path": []
},
"operator": "eq",
"value": {
"type": "scalar",
"value": 1
}
}
},
"collection_relationships": {}
}
Set Membership Operators
This example uses an in
operator, which is defined using the in
semantics, to test a single column for membership in a set of values:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "author_id",
"path": []
},
"operator": "in",
"value": {
"type": "scalar",
"value": [1, 2]
}
}
},
"collection_relationships": {}
}
Custom Operators
This example uses a custom like
operator:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title",
"path": []
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
},
"collection_relationships": {}
}
Columns in Operators
Comparison operators compare columns to values. The column on the left hand side of any operator is described by a ComparisonTarget
, and the various cases will be explained next.
Referencing a column from the same collection
If the ComparisonTarget
has type column
, and the path
property is empty, then the name
property refers to a column in the current collection.
Referencing a column from a related collection
If the ComparisonTarget has type column, and the path property is non-empty, then the name property refers to a column in a related collection. The path consists of a collection of PathElements, each of which references a named relationship, any collection arguments, and a predicate expression to be applied to any relevant rows in the related collection.
When a PathElement
references an array relationship, the enclosing operator should be considered existentially quantified over all related rows.
Referencing a column from the root collection
If the ComparisonTarget
has type root_collection_column
, then the name
property refers to a column in the root collection.
The root collection is defined as the collection in scope at the nearest enclosing Query
, and the column should be chosen from the row in that collection which was in scope when that Query
was being evaluated.
Referencing nested fields within columns
If the field_path
property is empty or not present then the target is the value of the named column.
If field_path
is non-empty then it refers to a path to a nested field within the named column.
(A ComparisonTarget
may only have a non-empty field_path
if the connector supports capability query.nested_fields.filter_by
.)
Values in Binary Operators
Binary (including array-valued) operators compare columns to values, but there are several types of valid values:
- Scalar values, as seen in the examples above, compare the column to a specific value,
- Variable values compare the column to the current value of a variable,
- Column values compare the column to another column, possibly selected from a different collection. Column values are also described by a
ComparisonTarget
.
EXISTS
expressions
An EXISTS
expression tests whether a row exists in some possibly-related collection, and is denoted by an expression with a type
field of exists
.
EXISTS
expressions can query related or unrelated collections.
Related Collections
Related collections are related to the original collection by a relationship in the collection_relationships
field of the top-level QueryRequest
.
For example, this query fetches authors who have written articles whose titles contain the string "Functional"
:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "related",
"arguments": {},
"relationship": "author_articles"
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title",
"path": []
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Unrelated Collections
For example, this query also fetches authors who have written articles whose titles contain the string "Functional", but the predicate references the articles collection directly by name, correlating it with the authors collection via a root_collection_column reference:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "unrelated",
"arguments": {},
"collection": "articles"
},
"predicate": {
"type": "and",
"expressions": [
{
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "author_id",
"path": []
},
"operator": "eq",
"value": {
"type": "column",
"column": {
"type": "root_collection_column",
"name": "id"
}
}
},
{
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title",
"path": []
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
]
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Nested Collections
If the query.exists.nested_collections
capability is enabled, then exists expressions can reference nested collections.
For example, this query finds institutions
which employ at least one staff member whose last name contains the letter s
:
{
"collection": "institutions",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"name": {
"type": "column",
"column": "name"
},
"staff": {
"type": "column",
"column": "staff",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "nested_collection",
"arguments": {
"limit": {
"type": "literal",
"value": null
}
},
"column_name": "staff"
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "last_name",
"path": []
},
"operator": "like",
"value": {
"type": "scalar",
"value": "s"
}
}
}
},
"collection_relationships": {
}
}
Conjunction of expressions
To express the conjunction of multiple expressions, specify a type
field of and
, and provide the expressions in the expressions
field.
For example, to test if the first_name
column is null and the last_name
column is also null:
{
"type": "and",
"expressions": [
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "first_name"
}
},
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "last_name"
}
}
]
}
Disjunction of expressions
To express the disjunction of multiple expressions, specify a type
field of or
, and provide the expressions in the expressions
field.
For example, to test if the first_name column is null or the last_name column is null:
{
"type": "or",
"expressions": [
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "first_name"
}
},
{
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "last_name"
}
}
]
}
Negation
To express the negation of an expression, specify a type field of not, and provide that expression in the expression field.
For example, to test if the first_name
column is not null:
{
"type": "not",
"expression": {
"type": "unary_comparison_operator",
"operator": "is_null",
"column": {
"name": "first_name"
}
}
}
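Putting the expression types above together, a row-at-a-time evaluator might be sketched as follows. This is illustrative only: EXISTS expressions and non-empty paths are omitted, and the eq/in/like semantics are simplistic stand-ins for whatever the connector's scalar types actually define.

```python
def eval_expr(expr, row, variables=None):
    """Evaluate a predicate Expression against a single row."""
    t = expr["type"]
    if t == "and":
        return all(eval_expr(e, row, variables) for e in expr["expressions"])
    if t == "or":
        return any(eval_expr(e, row, variables) for e in expr["expressions"])
    if t == "not":
        return not eval_expr(expr["expression"], row, variables)
    if t == "unary_comparison_operator":
        assert expr["operator"] == "is_null"
        return row[expr["column"]["name"]] is None
    if t == "binary_comparison_operator":
        lhs = row[expr["column"]["name"]]
        rhs = eval_value(expr["value"], row, variables)
        op = expr["operator"]
        if op == "eq":
            return lhs == rhs
        if op == "in":
            return lhs in rhs
        if op == "like":
            return rhs in lhs  # simplistic stand-in for LIKE semantics
        raise ValueError(f"unsupported operator {op!r}")
    raise ValueError(f"unsupported expression type {t!r}")

def eval_value(value, row, variables):
    """Resolve a ComparisonValue: scalar, variable, or (same-row) column."""
    if value["type"] == "scalar":
        return value["value"]
    if value["type"] == "variable":
        return (variables or {})[value["name"]]
    if value["type"] == "column":
        return row[value["column"]["name"]]
    raise ValueError(value["type"])
```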
See also
- Type
Expression
Sorting
A Query
can specify how rows should be sorted in the response.
The requested ordering can be found in the order_by
field of the Query
object.
Computing the Ordering
To compute the ordering from the order_by
field, data connectors should implement the following ordering between rows:
- Consider each element of the order_by.elements array in turn.
- For each OrderByElement:
  - If element.target.type is column, then to compare two rows, compare the value in the selected column. See type column below.
  - If element.target.type is star_count_aggregate, compare two rows by comparing the row count of a related collection. See type star_count_aggregate below.
  - If element.target.type is single_column_aggregate, compare two rows by comparing a single column aggregate. See type single_column_aggregate below.
Type column
The property element.target.name
refers to a column name.
If the connector supports capability query.nested_fields.order_by
then the target may also reference nested fields within a column using the field_path
property.
If element.order_direction is asc, then the row with the smaller column comes first.
If element.order_direction is desc, then the row with the smaller column comes second.
If the column values are incomparable, continue to the next OrderByElement
.
The data connector should document, for each scalar type, a comparison function to use for any two values of that scalar type.
For example, a data connector might choose to use the obvious ordering for a scalar integer-valued type, but to use the database-given ordering for a string-valued type, based on a certain choice of collation.
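The ordering algorithm can be sketched as a comparator. This is a simplified illustration covering only column targets with an empty path; the function names are hypothetical:

```python
from functools import cmp_to_key

def compare_rows(a, b, elements):
    """Compare two rows by taking each OrderByElement in turn; an element
    that is not decisive passes control to the next one."""
    for element in elements:
        name = element["target"]["name"]
        x, y = a[name], b[name]
        if x == y:
            continue  # not decisive: move on to the next OrderByElement
        smaller_first = -1 if x < y else 1
        # asc: the row with the smaller column value comes first;
        # desc: it comes second.
        return smaller_first if element["order_direction"] == "asc" else -smaller_first
    return 0

def sort_rows(rows, order_by):
    return sorted(rows, key=cmp_to_key(lambda a, b: compare_rows(a, b, order_by["elements"])))
```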
For example, the following query
requests that a collection of articles be ordered by title
descending:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"order_by": {
"elements": [
{
"target": {
"type": "column",
"name": "title",
"path": []
},
"order_direction": "desc"
}
]
}
},
"collection_relationships": {}
}
The selected column can be chosen from a related collection by specifying the path
property. path
consists of a list of named relationships.
For example, this query sorts articles by their author's last names, and then by their first names, by traversing the relationship from articles to authors:
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
},
"author": {
"type": "relationship",
"arguments": {},
"relationship": "article_author",
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
}
}
}
}
},
"order_by": {
"elements": [
{
"target": {
"type": "column",
"name": "last_name",
"path": [
{
"arguments": {},
"relationship": "article_author",
"predicate": {
"type": "and",
"expressions": []
}
}
]
},
"order_direction": "asc"
},
{
"target": {
"type": "column",
"name": "first_name",
"path": [
{
"arguments": {},
"relationship": "article_author",
"predicate": {
"type": "and",
"expressions": []
}
}
]
},
"order_direction": "asc"
}
]
}
},
"collection_relationships": {
"article_author": {
"arguments": {},
"column_mapping": {
"author_id": "id"
},
"relationship_type": "object",
"source_collection_or_type": "article",
"target_collection": "authors"
}
}
}
Type star_count_aggregate
An ordering of type star_count_aggregate
orders rows by a count of rows in some related collection. If the respective counts are incomparable, the ordering should continue to the next OrderByElement
.
For example, this query sorts article authors by their total article count:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles_aggregate": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"aggregates": {
"count": {
"type": "star_count"
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "star_count_aggregate",
"path": [
{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Type single_column_aggregate
An ordering of type single_column_aggregate
orders rows by an aggregate computed over rows in some related collection. If the respective aggregates are incomparable, the ordering should continue to the next OrderByElement
.
For example, this query sorts article authors by their maximum article ID:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles_aggregate": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"aggregates": {
"max_id": {
"type": "single_column",
"column": "id",
"function": "max"
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "asc",
"target": {
"type": "single_column_aggregate",
"column": "id",
"function": "max",
"path": [
{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Requirements
- Rows in the response should be ordered according to the algorithm described above.
- The order_by field should not affect the set of rows which are returned, except for their order.
- If the order_by field is not provided then rows should be returned in an unspecified but deterministic order. For example, an implementation might choose to return rows in the order of their primary key or creation timestamp by default.
See also
- Type
OrderBy
- Type
OrderByElement
- Type
OrderByTarget
Pagination
The limit
and offset
parameters on the Query
object control pagination:
- limit specifies the maximum number of rows to return from a query in the rows property.
- offset specifies the index of the first row to return.
Both limit
and offset
affect the rows returned, and also the rows considered by aggregations.
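As a sketch, applying the pagination window before any aggregation might look like this (paginate is a hypothetical helper; offset is applied first, then limit):

```python
def paginate(rows, query):
    """Apply offset then limit; the resulting window is what both the
    rows property and any aggregates are computed from."""
    offset = query.get("offset") or 0
    rows = rows[offset:]
    limit = query.get("limit")
    if limit is not None:
        rows = rows[:limit]
    return rows
```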
Requirements
- If
limit
is specified, the response should contain at most that many rows.
See also
- Type
Query
Aggregates
In addition to fetching multiple rows of raw data from a collection, the query API supports fetching aggregated data.
Aggregates are requested in the aggregates
field of the Query
object.
There are three types of aggregate:
- single_column aggregates apply an aggregation function (as defined by the column's scalar type in the schema response) to a column,
- column_count aggregates count the number of rows with non-null values in the specified columns. If the distinct flag is set, then the count should only count unique non-null values of those columns,
- star_count aggregates count all matched rows.
If the connector supports capability query.nested_fields.aggregates
then single_column
and column_count
aggregates may also reference nested fields within a column using the field_path
property.
Example
The following query object requests the aggregated sum of all order totals, along with the count of all orders, and the count of all orders which have associated invoices (via the nullable invoice_id
column):
{
"collection": ["orders"],
"collection_relationships": {},
"query": {
"aggregates": {
"orders_total": {
"type": "single_column",
"function": "sum",
"column": "total"
},
"invoiced_orders_count": {
"type": "column_count",
"columns": ["invoice_id"]
},
"orders_count": {
"type": "star_count"
}
}
}
}
In this case, the query has no predicate, so all three aggregates would be computed over all rows.
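The three aggregate types might be computed as in the following sketch (compute_aggregate is a hypothetical helper; only sum is shown for single_column, since the available functions come from the scalar type definitions):

```python
def compute_aggregate(rows, aggregate):
    """Compute one aggregate over the rows matched by the Query."""
    t = aggregate["type"]
    if t == "star_count":
        # Count all matched rows.
        return len(rows)
    if t == "column_count":
        # Count rows whose named columns are all non-null;
        # with `distinct`, count only unique combinations.
        values = [tuple(r[c] for c in aggregate["columns"]) for r in rows]
        values = [v for v in values if all(x is not None for x in v)]
        if aggregate.get("distinct"):
            values = set(values)
        return len(values)
    if t == "single_column":
        # Aggregation functions come from the scalar type definition;
        # only "sum" is sketched here.
        assert aggregate["function"] == "sum"
        return sum(r[aggregate["column"]] for r in rows)
    raise ValueError(f"unsupported aggregate type {t!r}")
```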
Requirements
- Each aggregate should be computed over all rows that match the Query.
- Each requested aggregate must be returned in the aggregates property on the QueryResponse object, using the same key as used to request it.
See also
- Type
Aggregate
Arguments
Collection arguments parameterize an entire collection, and must be provided in queries wherever the collection is referenced, either directly, or via relationships.
Field arguments parameterize a single field, and must be provided wherever that field is referenced.
Collection Arguments
Collection arguments should be provided in the QueryRequest
anywhere a collection is referenced. The set of provided arguments should be compatible with the list of arguments required by the corresponding collection in the schema response.
Specifying arguments to the top-level collection
Collection arguments should be provided as key-value pairs in the arguments
property of the top-level QueryRequest
object:
{
"collection": "articles_by_author",
"arguments": {
"author_id": {
"type": "literal",
"value": 1
}
},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
},
"collection_relationships": {}
}
Relationships
Relationships can specify values for arguments on their target collection:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"column_mapping": {
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles_by_author"
}
}
}
Any arguments which are not defined by the relationship itself should be specified where the relationship is used. For example, here the author_id
argument can be moved from the relationship definition to the field which uses it:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles_by_author"
}
}
}
Collection arguments in predicates
Arguments must be specified in predicates whenever a reference to a secondary collection is required.
For example, in an EXISTS
expression, if the target collection has arguments:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "related",
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"relationship": "author_articles"
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title",
"path": []
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles_by_author"
}
}
}
Or when a predicate expression matches a column from a related collection:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title",
"path": [
{
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles_by_author"
}
}
}
Collection arguments in order_by
Arguments must be specified when an OrderByElement
references a related collection.
For example, when ordering by an aggregate of rows in a related collection, and that collection has arguments:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
}
},
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "star_count_aggregate",
"path": [
{
"arguments": {
"author_id": {
"type": "column",
"name": "id"
}
},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles_by_author"
}
}
}
Field Arguments
Field arguments can be provided to any field requested (in addition to those described for top-level collections). These are specified in the schema response and their use is described in field selection. Their specification and usage matches that of collection arguments above.
Relationships
Queries can request data from other collections via relationships. A relationship identifies rows in one collection (the "source collection") with possibly-many related rows in a second collection (the "target collection") in two ways:
- Columns in the two collections can be related via column mappings, and
- Collection arguments to the target collection can be computed via the row of the source collection.
Defining Relationships
Relationships are defined (and given names) in the top-level QueryRequest
object, and then referred to by name everywhere they are used. To define a relationship, add a Relationship
object to the collection_relationships
property of the QueryRequest
object.
Column Mappings
A column mapping is a set of pairs of columns - each consisting of one column from the source collection and one column from the target collection - which must be pairwise equal in order for a pair of rows to be considered equal.
For example, we can fetch each author
with its list of related articles
by establishing a column mapping between the author's primary key and the article's author_id
column:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
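The pairwise-equality semantics of a column mapping can be sketched as a simple filter over the target collection (related_rows is a hypothetical helper name):

```python
def related_rows(source_row, target_rows, column_mapping):
    """Rows of the target collection related to one source row: every pair
    of columns in the mapping must be pairwise equal."""
    return [
        t for t in target_rows
        if all(source_row[src] == t[tgt] for src, tgt in column_mapping.items())
    ]
```

With the mapping {"id": "author_id"} from the example above, each author row is paired with the article rows whose author_id equals that author's id.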
Collection Arguments
See collection arguments for examples.
Advanced relationship use cases
Relationships are not used only for fetching data - they are used in practically all features of data connectors, as we will see below.
Relationships in predicates
Filters can reference columns across relationships. For example, here we fetch all authors who have written articles with the word "Functional"
in the title:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title",
"path": [{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}]
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
EXISTS
expressions in predicates can query related collections. Here we find all authors who have written any article with "Functional"
in the title:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
},
"predicate": {
"type": "exists",
"in_collection": {
"type": "related",
"arguments": {},
"relationship": "author_articles"
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "title",
"path": []
},
"operator": "like",
"value": {
"type": "scalar",
"value": "Functional"
}
}
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
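Operationally, an `exists` expression holds for a source row when at least one row of the related collection satisfies the inner predicate. A minimal sketch with illustrative types (the substring-based `like` here is a simplification for the example; connectors are free to define `like` however their data source does):

```rust
use std::collections::BTreeMap;

// Illustrative row type: column name -> string value.
type Row = BTreeMap<String, String>;

/// EXISTS over a related collection: true if any related row satisfies
/// the predicate.
fn eval_exists(related_rows: &[Row], predicate: impl Fn(&Row) -> bool) -> bool {
    related_rows.iter().any(predicate)
}

/// A stand-in for the `like` operator, treated as a substring match
/// for simplicity.
fn like(haystack: &str, needle: &str) -> bool {
    haystack.contains(needle)
}
```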
Relationships in order_by
Sorting can be defined in terms of row counts and aggregates over related collections.
For example, here we order authors by the number of articles they have written:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles_aggregate": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"aggregates": {
"count": {
"type": "star_count"
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "desc",
"target": {
"type": "star_count_aggregate",
"path": [
{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
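The `star_count_aggregate` target can be read as: for each author, count the related article rows reachable along the path (after applying the path's predicate, which is empty here), then sort by that count. An illustrative sketch, hard-coding the `id` to `author_id` column mapping from the example:

```rust
use std::collections::BTreeMap;

// Illustrative row type: column name -> integer value.
type Row = BTreeMap<String, i32>;

/// Sorts `authors` descending by the number of `articles` rows whose
/// `author_id` matches the author's `id` (the column mapping above).
fn sort_by_article_count(mut authors: Vec<Row>, articles: &[Row]) -> Vec<Row> {
    authors.sort_by_key(|author| {
        let count = articles
            .iter()
            .filter(|article| article.get("author_id") == author.get("id"))
            .count();
        // Reverse gives a descending sort, matching "order_direction": "desc".
        std::cmp::Reverse(count)
    });
    authors
}
```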
We can also order by custom aggregate functions applied to related collections. For example, here we order authors by their most recent (maximum) article ID:
{
"collection": "authors",
"arguments": {},
"query": {
"fields": {
"first_name": {
"type": "column",
"column": "first_name"
},
"last_name": {
"type": "column",
"column": "last_name"
},
"articles_aggregate": {
"type": "relationship",
"arguments": {},
"relationship": "author_articles",
"query": {
"aggregates": {
"max_id": {
"type": "single_column",
"column": "id",
"function": "max"
}
}
}
}
},
"order_by": {
"elements": [
{
"order_direction": "asc",
"target": {
"type": "single_column_aggregate",
"column": "id",
"function": "max",
"path": [
{
"arguments": {},
"relationship": "author_articles",
"predicate": {
"type": "and",
"expressions": []
}
}
]
}
}
]
}
},
"collection_relationships": {
"author_articles": {
"arguments": {},
"column_mapping": {
"id": "author_id"
},
"relationship_type": "array",
"source_collection_or_type": "author",
"target_collection": "articles"
}
}
}
Variables
A QueryRequest
can optionally specify one or more sets of variables which can be referenced throughout the Query
object.
Query variables will only be provided if the query.variables
capability is advertised in the capabilities response.
The intent is that the data connector should attempt to perform multiple versions of the query in parallel - one instance of the query for each set of variables. For each set of variables, each variable value should be substituted wherever it is referenced in the query - for example in a ComparisonValue
.
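For instance, a `ComparisonValue` of type `variable` would be resolved by looking the name up in the current variable set, while a `scalar` value passes through unchanged. A minimal sketch with simplified types (the real `ComparisonValue` carries arbitrary JSON values, not integers):

```rust
use std::collections::BTreeMap;

/// A simplified stand-in for the spec's `ComparisonValue`.
enum ComparisonValue {
    Scalar(i32),
    Variable(String),
}

/// Resolves a comparison value against the current variable set.
/// Returns `None` if a referenced variable is missing.
fn resolve(value: &ComparisonValue, variables: &BTreeMap<String, i32>) -> Option<i32> {
    match value {
        ComparisonValue::Scalar(v) => Some(*v),
        ComparisonValue::Variable(name) => variables.get(name).copied(),
    }
}
```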
Example
In the following query, we fetch two rowsets of article data. In each rowset, the rows are filtered based on the article's id column, and the prescribed id is determined by a variable. The choice of id varies between rowsets.
The result contains one rowset containing the article with ID 1, and a second containing the article with ID 2.
{
"collection": "articles",
"arguments": {},
"query": {
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
},
"predicate": {
"type": "binary_comparison_operator",
"column": {
"type": "column",
"name": "id",
"path": []
},
"operator": "eq",
"value": {
"type": "variable",
"name": "$article_id"
}
}
},
"collection_relationships": {},
"variables": [
{
"$article_id": 1
},
{
"$article_id": 2
}
]
}
Requirements
- If `variables` are provided in the `QueryRequest`, then the `QueryResponse` should contain one `RowSet` for each set of variables.
- If `variables` are not provided, the data connector should return a single `RowSet`.
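These requirements amount to a simple fan-out: run the query once per variable set, or once against an empty variable set if none were provided. A sketch of that shape, where `execute` stands in for actual query execution producing one row set:

```rust
use std::collections::BTreeMap;

// Illustrative variable set: variable name -> integer value.
type VariableSet = BTreeMap<String, i32>;

/// Produces one result per variable set, defaulting to a single run against
/// an empty variable set when no variables are provided.
fn fan_out<T>(
    variables: Option<Vec<VariableSet>>,
    execute: impl Fn(&VariableSet) -> T,
) -> Vec<T> {
    let variable_sets = variables.unwrap_or_else(|| vec![BTreeMap::new()]);
    variable_sets.iter().map(execute).collect()
}
```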
Functions
A function is invoked in a query request in exactly the same way as any other collection - recall that a function is simply a collection which returns a single row, and a single column, named __value
.
Because a function returns a single row, many query capabilities are limited in their usefulness:
- It would not make sense to specify `limit` or `offset`,
- Sorting has no effect,
- Filtering can only remove the whole result row, based on some condition expressed in terms of the result.
However, some query features are still useful in the context of functions:
- The caller can request a subset of the full result, by using nested field queries,
- A function can be the source or target of a relationship,
- Function arguments are specified in the same way as collection arguments, and can also be specified using variables.
Examples
A function returning a scalar value
This example uses the latest_article_id
function, which returns a scalar type:
{
"arguments": {},
"query": {
"fields": {
"__value": {
"type": "column",
"column": "__value"
}
}
},
"collection_relationships": {}
}
The response JSON includes the requested data in the special __value
field:
[
{
"rows": [
{
"__value": 3
}
]
}
]
A function returning an object type
This example uses the latest_article
function instead, which returns the full article
object. To query the object structure, it uses a nested field request:
{
"arguments": {},
"query": {
"fields": {
"__value": {
"type": "column",
"column": "__value",
"fields": {
"type": "object",
"fields": {
"id": {
"type": "column",
"column": "id"
},
"title": {
"type": "column",
"column": "title"
}
}
}
}
}
},
"collection_relationships": {}
}
Again, the response is sent in the __value
field:
[
{
"rows": [
{
"__value": {
"id": 3,
"title": "The Design And Implementation Of Programming Languages"
}
}
]
}
]
Mutations
The mutation endpoint accepts a mutation request, containing a collection of mutation operations to be performed transactionally in the context of the data source, and returns a response containing a result for each operation.
The structure and requirements for specific fields listed below will be covered in subsequent chapters.
Request
POST /mutation
Request
See MutationRequest
Request Fields
| Name | Description |
|---|---|
| operations | A list of mutation operations to perform |
| collection_relationships | Any relationships between collections involved in the mutation request |
Mutation Operations
Each operation is described by a MutationOperation
structure, which can be one of several types. However, currently procedures are the only supported operation type.
Multiple Operations
If the mutation.transactional
capability is enabled, then the caller may provide multiple operations in a single request.
Otherwise, the caller must provide exactly one operation.
The intent is that multiple operations ought to be performed together in a single transaction.
That is, they should all succeed, or all fail together. If any operation fails, then a single ErrorResponse
should capture
the failure, and none of the operations should effect any changes to the data source.
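For an in-memory connector, one way to get these all-or-nothing semantics is to apply every operation to a working copy of the state and commit the copy only once every operation has succeeded. A sketch of that pattern (illustrative, not the reference implementation):

```rust
/// Applies every operation to a clone of the state; commits only if all
/// succeed, so a failure leaves the original state untouched.
fn run_transaction<S: Clone, R, E>(
    state: &mut S,
    operations: &[impl Fn(&mut S) -> Result<R, E>],
) -> Result<Vec<R>, E> {
    let mut working = state.clone();
    let mut results = Vec::new();
    for op in operations {
        // The `?` aborts before committing, discarding the working copy.
        results.push(op(&mut working)?);
    }
    *state = working;
    Ok(results)
}
```

A real connector backed by a database would instead map this onto the database's own transaction mechanism.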
Response
See MutationResponse
Requirements
- The `operation_results` field of the `MutationResponse` should contain one `MutationOperationResults` structure for each requested operation in the `MutationRequest`.
Procedures
A procedure which is described in the schema can be invoked using a MutationOperation
.
The operation should specify the procedure name, any arguments, and a list of Field
s to be returned.
Note: just as for functions, fields to return can include relationships or nested fields. However, unlike functions, procedures do not need to wrap their result in a __value
field, so top-level fields can be extracted without use of nested field queries.
Requirements
- The `MutationResponse` structure will contain a `MutationOperationResults` structure for the procedure response. This structure should have type `procedure` and contain a `result` field with a result of the type indicated in the schema response.
Explain
There are two endpoints related to explain:
- The `/query/explain` endpoint, which accepts a query request.
- The `/mutation/explain` endpoint, which accepts a mutation request.
Both endpoints return a representation of the execution plan without actually executing the query or mutation.
Request
POST /query/explain
See QueryRequest
Request
POST /mutation/explain
See MutationRequest
Response
See ExplainResponse
Tutorial
In this tutorial, we will walk through the reference implementation of the specification, which will illustrate how to implement data connectors from scratch.
The reference implementation is written in Rust, but it should be possible to follow along using any language of your choice, as long as you can implement a basic web server and implement serializers and deserializers for the data formats involved.
It is recommended that you follow along chapter-by-chapter, as each will build on the last.
Setup
To compile and run the reference implementation, you will need to install a Rust toolchain, and then run:
git clone git@github.com:hasura/ndc-spec.git
cd ndc-spec/ndc-reference
cargo build
cargo run
Alternatively, you can run the reference implementation entirely inside a Docker container:
git clone git@github.com:hasura/ndc-spec.git
cd ndc-spec
docker build -t reference_connector .
docker run -it reference_connector
Either way, you should have a working data connector running on http://localhost:8100/, which you can test as follows:
curl http://localhost:8100/schema
Testing
Testing tools are provided in the specification repository to aid in the development of connectors.
ndc-test
The ndc-test
executable performs basic validation of the data returned by the capabilities and schema endpoints, and issues some basic queries.
To test a connector, provide its endpoint to ndc-test
on the command line:
ndc-test --endpoint <ENDPOINT>
For example, running the reference connector and passing its URL to ndc-test
, we will see that it issues test queries against the articles
and authors
collections:
ndc-test test --endpoint http://localhost:8100
Capabilities
├ Fetching /capabilities ... ... OK
├ Validating capabilities ... OK
Schema
├ Fetching /schema ... OK
├ Validating schema ...
│ ├ object_types ... OK
│ ├ Collections ...
│ │ ├ articles ...
│ │ │ ├ Arguments ... OK
│ │ │ ├ Collection type ... OK
│ │ ├ authors ...
│ │ │ ├ Arguments ... OK
│ │ │ ├ Collection type ... OK
│ │ ├ articles_by_author ...
│ │ │ ├ Arguments ... OK
│ │ │ ├ Collection type ... OK
│ ├ Functions ...
│ │ ├ latest_article_id ...
│ │ │ ├ Result type ... OK
│ │ │ ├ Arguments ... OK
│ │ ├ Procedures ...
│ │ │ ├ upsert_article ...
│ │ │ │ ├ Result type ... OK
│ │ │ │ ├ Arguments ... OK
Query
├ articles ...
│ ├ Simple queries ...
│ │ ├ Select top N ... OK
│ │ ├ Predicates ... OK
│ ├ Aggregate queries ...
│ │ ├ star_count ... OK
├ authors ...
│ ├ Simple queries ...
│ │ ├ Select top N ... OK
│ │ ├ Predicates ... OK
│ ├ Aggregate queries ...
│ │ ├ star_count ... OK
├ articles_by_author ...
However, ndc-test
cannot validate the entire schema. For example, it will not issue queries against the articles_by_author
collection, because it does not have any way to synthesize inputs for its required collection argument.
Getting Started
The reference implementation will serve queries and mutations based on in-memory data read from newline-delimited JSON files.
First, we will define some types to represent the data in the newline-delimited JSON files. Rows of JSON data will be stored in memory as ordered maps:
type Row = BTreeMap<models::FieldName, serde_json::Value>;
Our application state will consist of collections of various types of rows:
#[derive(Debug, Clone)]
pub struct AppState {
pub articles: BTreeMap<i32, Row>,
pub authors: BTreeMap<i32, Row>,
pub institutions: BTreeMap<i32, Row>,
pub metrics: Metrics,
}
In our main
function, the data connector reads the initial data from the newline-delimited JSON files, and creates the AppState
:
fn init_app_state() -> AppState {
// Read the JSON data files
let articles = read_json_lines(ARTICLES_JSON).unwrap();
let authors = read_json_lines(AUTHORS_JSON).unwrap();
let institutions = read_json_lines(INSTITUTIONS_JSON).unwrap();
let metrics = Metrics::new().unwrap();
AppState {
articles,
authors,
institutions,
metrics,
}
}
Finally, we start a web server with the endpoints which are required by this specification:
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn Error>> {
let app_state = Arc::new(Mutex::new(init_app_state()));
let app = Router::new()
.route("/health", get(get_health))
.route("/metrics", get(get_metrics))
.route("/capabilities", get(get_capabilities))
.route("/schema", get(get_schema))
.route("/query", post(post_query))
.route("/query/explain", post(post_query_explain))
.route("/mutation", post(post_mutation))
.route("/mutation/explain", post(post_mutation_explain))
.layer(axum::middleware::from_fn_with_state(
app_state.clone(),
metrics_middleware,
))
.with_state(app_state);
// Start the server on `localhost:<PORT>`.
// This says it's binding to an IPv6 address, but will actually listen to
// any IPv4 or IPv6 address.
let host = net::IpAddr::V6(net::Ipv6Addr::UNSPECIFIED);
let port = env::var("PORT")
.map(|s| s.parse())
.unwrap_or(Ok(DEFAULT_PORT))?;
let addr = net::SocketAddr::new(host, port);
let server = axum::Server::bind(&addr).serve(app.into_make_service());
println!("Serving on {}", server.local_addr());
server.with_graceful_shutdown(shutdown_handler()).await?;
Ok(())
}
Note: the application state is stored in an Arc<Mutex<_>>
, so that we can perform locking reads and writes in multiple threads.
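The same sharing pattern can be seen in miniature with the standard library. This sketch uses `std::sync::Mutex` for simplicity, whereas the reference implementation uses tokio's async `Mutex` (whose `lock()` is awaited rather than unwrapped), but the ownership model is identical:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Demonstrates the shared-state pattern: multiple threads lock the same
/// state through an `Arc<Mutex<_>>`, mirroring how each request handler
/// locks the application state. Returns the final number of entries.
fn concurrent_pushes(n: usize) -> usize {
    let state = Arc::new(Mutex::new(Vec::new()));
    let handles: Vec<_> = (0..n)
        .map(|i| {
            // Each thread gets its own reference-counted handle to the state.
            let state = Arc::clone(&state);
            thread::spawn(move || state.lock().unwrap().push(i))
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let len = state.lock().unwrap().len();
    len
}
```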
In the next chapters, we will look at the implementation of each of these endpoints in turn.
Capabilities
The capabilities endpoint should return data describing which features the data connector can implement, along with the version of this specification that the data connector claims to implement.
The reference implementation returns a static CapabilitiesResponse
:
async fn get_capabilities() -> Json<models::CapabilitiesResponse> {
Json(models::CapabilitiesResponse {
version: models::VERSION.into(),
capabilities: models::Capabilities {
query: models::QueryCapabilities {
aggregates: Some(models::LeafCapability {}),
variables: Some(models::LeafCapability {}),
explain: None,
nested_fields: models::NestedFieldCapabilities {
filter_by: Some(models::LeafCapability {}),
order_by: Some(models::LeafCapability {}),
aggregates: Some(models::LeafCapability {}),
},
exists: models::ExistsCapabilities {
nested_collections: Some(models::LeafCapability {}),
},
},
mutation: models::MutationCapabilities {
transactional: None,
explain: None,
},
relationships: Some(models::RelationshipCapabilities {
order_by_aggregate: Some(models::LeafCapability {}),
relation_comparisons: Some(models::LeafCapability {}),
}),
},
})
}
Note: the reference implementation supports all capabilities with the exception of query.explain
and mutation.explain
. This is because all queries are run in memory by naively interpreting the query request - there is no better description of the query plan than the raw query request itself!
Schema
The schema endpoint should return data describing the data connector's scalar and object types, along with any collections, functions and procedures which are exposed.
async fn get_schema() -> Json<models::SchemaResponse> {
// ...
Json(models::SchemaResponse {
scalar_types,
object_types,
collections,
functions,
procedures,
})
}
Scalar Types
We define two scalar types: String
and Int
.
String
supports a custom like
comparison operator, and Int
supports custom aggregation operators min
and max
.
let scalar_types = BTreeMap::from_iter([
(
"String".into(),
models::ScalarType {
representation: Some(models::TypeRepresentation::String),
aggregate_functions: BTreeMap::new(),
comparison_operators: BTreeMap::from_iter([
("eq".into(), models::ComparisonOperatorDefinition::Equal),
("in".into(), models::ComparisonOperatorDefinition::In),
(
"like".into(),
models::ComparisonOperatorDefinition::Custom {
argument_type: models::Type::Named {
name: "String".into(),
},
},
),
]),
},
),
(
"Int".into(),
models::ScalarType {
representation: Some(models::TypeRepresentation::Int32),
aggregate_functions: BTreeMap::from_iter([
(
"max".into(),
models::AggregateFunctionDefinition {
result_type: models::Type::Nullable {
underlying_type: Box::new(models::Type::Named {
name: "Int".into(),
}),
},
},
),
(
"min".into(),
models::AggregateFunctionDefinition {
result_type: models::Type::Nullable {
underlying_type: Box::new(models::Type::Named {
name: "Int".into(),
}),
},
},
),
]),
comparison_operators: BTreeMap::from_iter([
("eq".into(), models::ComparisonOperatorDefinition::Equal),
("in".into(), models::ComparisonOperatorDefinition::In),
]),
},
),
]);
Object Types
For each collection, we define an object type for its rows. In addition, we define object types for any nested types which we use:
let object_types = BTreeMap::from_iter([
("article".into(), article_type),
("author".into(), author_type),
("institution".into(), institution_type),
("location".into(), location_type),
("staff_member".into(), staff_member_type),
]);
Author
let author_type = models::ObjectType {
description: Some("An author".into()),
fields: BTreeMap::from_iter([
(
"id".into(),
models::ObjectField {
description: Some("The author's primary key".into()),
r#type: models::Type::Named { name: "Int".into() },
arguments: BTreeMap::new(),
},
),
(
"first_name".into(),
models::ObjectField {
description: Some("The author's first name".into()),
r#type: models::Type::Named {
name: "String".into(),
},
arguments: BTreeMap::new(),
},
),
(
"last_name".into(),
models::ObjectField {
description: Some("The author's last name".into()),
r#type: models::Type::Named {
name: "String".into(),
},
arguments: BTreeMap::new(),
},
),
]),
};
Article
let article_type = models::ObjectType {
description: Some("An article".into()),
fields: BTreeMap::from_iter([
(
"id".into(),
models::ObjectField {
description: Some("The article's primary key".into()),
r#type: models::Type::Named { name: "Int".into() },
arguments: BTreeMap::new(),
},
),
(
"title".into(),
models::ObjectField {
description: Some("The article's title".into()),
r#type: models::Type::Named {
name: "String".into(),
},
arguments: BTreeMap::new(),
},
),
(
"author_id".into(),
models::ObjectField {
description: Some("The article's author ID".into()),
r#type: models::Type::Named { name: "Int".into() },
arguments: BTreeMap::new(),
},
),
]),
};
Institution
Note: the fields with array types have field-level arguments (array_arguments
) in order to support nested array operations.
let institution_type = models::ObjectType {
description: Some("An institution".into()),
fields: BTreeMap::from_iter([
(
"id".into(),
models::ObjectField {
description: Some("The institution's primary key".into()),
r#type: models::Type::Named { name: "Int".into() },
arguments: BTreeMap::new(),
},
),
(
"name".into(),
models::ObjectField {
description: Some("The institution's name".into()),
r#type: models::Type::Named {
name: "String".into(),
},
arguments: BTreeMap::new(),
},
),
(
"location".into(),
models::ObjectField {
description: Some("The institution's location".into()),
r#type: models::Type::Named {
name: "location".into(),
},
arguments: BTreeMap::new(),
},
),
(
"staff".into(),
models::ObjectField {
description: Some("The institution's staff".into()),
r#type: models::Type::Array {
element_type: Box::new(models::Type::Named {
name: "staff_member".into(),
}),
},
arguments: array_arguments.clone(),
},
),
(
"departments".into(),
models::ObjectField {
description: Some("The institution's departments".into()),
r#type: models::Type::Array {
element_type: Box::new(models::Type::Named {
name: "String".into(),
}),
},
arguments: array_arguments.clone(),
},
),
]),
};
Collections
We define each collection's schema using the type information defined above:
let collections = vec![
articles_collection,
authors_collection,
institutions_collection,
articles_by_author_collection,
];
Author
let authors_collection = models::CollectionInfo {
name: "authors".into(),
description: Some("A collection of authors".into()),
collection_type: "author".into(),
arguments: BTreeMap::new(),
foreign_keys: BTreeMap::new(),
uniqueness_constraints: BTreeMap::from_iter([(
"AuthorByID".into(),
models::UniquenessConstraint {
unique_columns: vec!["id".into()],
},
)]),
};
Article
let articles_collection = models::CollectionInfo {
name: "articles".into(),
description: Some("A collection of articles".into()),
collection_type: "article".into(),
arguments: BTreeMap::new(),
foreign_keys: BTreeMap::from_iter([(
"Article_AuthorID".into(),
models::ForeignKeyConstraint {
foreign_collection: "authors".into(),
column_mapping: BTreeMap::from_iter([("author_id".into(), "id".into())]),
},
)]),
uniqueness_constraints: BTreeMap::from_iter([(
"ArticleByID".into(),
models::UniquenessConstraint {
unique_columns: vec!["id".into()],
},
)]),
};
articles_by_author
We define one additional collection, articles_by_author
, which is provided as an example of a collection with an argument:
let articles_by_author_collection = models::CollectionInfo {
name: "articles_by_author".into(),
description: Some("Articles parameterized by author".into()),
collection_type: "article".into(),
arguments: BTreeMap::from_iter([(
"author_id".into(),
models::ArgumentInfo {
argument_type: models::Type::Named { name: "Int".into() },
description: None,
},
)]),
foreign_keys: BTreeMap::new(),
uniqueness_constraints: BTreeMap::new(),
};
Institution
let institutions_collection = models::CollectionInfo {
name: "institutions".into(),
description: Some("A collection of institutions".into()),
collection_type: "institution".into(),
arguments: BTreeMap::new(),
foreign_keys: BTreeMap::new(),
uniqueness_constraints: BTreeMap::from_iter([(
"InstitutionByID".into(),
models::UniquenessConstraint {
unique_columns: vec!["id".into()],
},
)]),
};
Functions
The schema defines a list of functions, each including its input and output types.
Get Latest Article
As an example, we define a latest_article_id
function, which returns a single integer representing the maximum article ID.
let latest_article_id_function = models::FunctionInfo {
name: "latest_article_id".into(),
description: Some("Get the ID of the most recent article".into()),
result_type: models::Type::Nullable {
underlying_type: Box::new(models::Type::Named { name: "Int".into() }),
},
arguments: BTreeMap::new(),
};
A second example returns the full corresponding article, to illustrate functions returning structured types:
let latest_article_function = models::FunctionInfo {
name: "latest_article".into(),
description: Some("Get the most recent article".into()),
result_type: models::Type::Nullable {
underlying_type: Box::new(models::Type::Named {
name: "article".into(),
}),
},
arguments: BTreeMap::new(),
};
Procedures
The schema defines a list of procedures, each including its input and output types.
Upsert Article
As an example, we define an upsert procedure for the articles collection defined above. The procedure accepts an input argument of type article, and returns a nullable article, representing the state of the article before the update, if it was already present.
let upsert_article = models::ProcedureInfo {
name: "upsert_article".into(),
description: Some("Insert or update an article".into()),
arguments: BTreeMap::from_iter([(
"article".into(),
models::ArgumentInfo {
description: Some("The article to insert or update".into()),
argument_type: models::Type::Named {
name: "article".into(),
},
},
)]),
result_type: models::Type::Nullable {
underlying_type: Box::new(models::Type::Named {
name: "article".into(),
}),
},
};
Queries
The reference implementation of the /query
endpoint may seem complicated, because there is a lot of functionality packed into a single endpoint. However, we will break the implementation down into small sections, each of which should be easily understood.
We start by looking at the type signature of the post_query
function, which is the top-level function implementing the query endpoint:
pub async fn post_query(
State(state): State<Arc<Mutex<AppState>>>,
Json(request): Json<models::QueryRequest>,
) -> Result<Json<models::QueryResponse>> {
This function accepts a QueryRequest
and must produce a QueryResponse
.
In the next section, we will start to break down this problem step-by-step.
Query Variables
The first step in post_query
is to reduce the problem from a query with multiple sets of query variables to only a single set.
The post_query
function iterates over all variable sets, and for each one, produces a RowSet
of rows corresponding to that set of variables. Each RowSet
is then added to the final QueryResponse
:
pub async fn post_query(
State(state): State<Arc<Mutex<AppState>>>,
Json(request): Json<models::QueryRequest>,
) -> Result<Json<models::QueryResponse>> {
let state = state.lock().await;
let variable_sets = request.variables.unwrap_or(vec![BTreeMap::new()]);
let mut row_sets = vec![];
for variables in &variable_sets {
let row_set = execute_query_with_variables(
&request.collection,
&request.arguments,
&request.collection_relationships,
&request.query,
variables,
&state,
)?;
row_sets.push(row_set);
}
Ok(Json(models::QueryResponse(row_sets)))
}
In order to compute the RowSet
for a given set of variables, the function delegates to a function named execute_query_with_variables
:
fn execute_query_with_variables(
collection: &models::CollectionName,
arguments: &BTreeMap<models::ArgumentName, models::Argument>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
query: &models::Query,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
) -> Result<models::RowSet> {
In the next section, we will break down the implementation of this function.
Evaluating Arguments
Now that we have reduced the problem to a single set of query variables, we must evaluate any collection arguments, and in turn, evaluate the collection of rows that we will be working with.
From there, we will be able to apply predicates, sort and paginate rows. But one step at a time!
The first step is to evaluate each argument, which the execute_query_with_variables
function does by delegating to the eval_argument
function:
fn execute_query_with_variables(
collection: &models::CollectionName,
arguments: &BTreeMap<models::ArgumentName, models::Argument>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
query: &models::Query,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
) -> Result<models::RowSet> {
let mut argument_values = BTreeMap::new();
for (argument_name, argument_value) in arguments {
if argument_values
.insert(
argument_name.clone(),
eval_argument(variables, argument_value)?,
)
.is_some()
{
return Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "duplicate argument names".into(),
details: serde_json::Value::Null,
}),
));
}
}
let collection = get_collection_by_name(collection, &argument_values, state)?;
execute_query(
collection_relationships,
variables,
state,
query,
Root::CurrentRow,
collection,
)
}
Once this is complete, and we have a collection of evaluated argument_values
, we can delegate to the get_collection_by_name
function. This function performs the work of computing the full collection, by pattern matching on the name of the collection:
fn get_collection_by_name(
collection_name: &models::CollectionName,
arguments: &BTreeMap<models::ArgumentName, serde_json::Value>,
state: &AppState,
) -> Result<Vec<Row>> {
match collection_name.as_str() {
"articles" => Ok(state.articles.values().cloned().collect()),
"authors" => Ok(state.authors.values().cloned().collect()),
"institutions" => Ok(state.institutions.values().cloned().collect()),
"articles_by_author" => {
let author_id = arguments.get("author_id").ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "missing argument author_id".into(),
details: serde_json::Value::Null,
}),
))?;
let author_id_int: i32 = author_id
.as_i64()
.ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "author_id must be an integer".into(),
details: serde_json::Value::Null,
}),
))?
.try_into()
.map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "author_id out of range".into(),
details: serde_json::Value::Null,
}),
)
})?;
let mut articles_by_author = vec![];
for article in state.articles.values() {
let article_author_id = article.get("author_id").ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "author_id not found".into(),
details: serde_json::Value::Null,
}),
))?;
let article_author_id_int: i32 = article_author_id
.as_i64()
.ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "author_id must be an integer".into(),
details: serde_json::Value::Null,
}),
))?
.try_into()
.map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "author_id out of range".into(),
details: serde_json::Value::Null,
}),
)
})?;
if article_author_id_int == author_id_int {
articles_by_author.push(article.clone());
}
}
Ok(articles_by_author)
}
"latest_article_id" => {
let latest_id = state.articles.keys().max();
let latest_id_value = serde_json::to_value(latest_id).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: " ".into(),
details: serde_json::Value::Null,
}),
)
})?;
Ok(vec![BTreeMap::from_iter([(
"__value".into(),
latest_id_value,
)])])
}
"latest_article" => {
let latest = state
.articles
.iter()
.max_by_key(|(&id, _)| id)
.map(|(_, article)| article);
let latest_value = serde_json::to_value(latest).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: " ".into(),
details: serde_json::Value::Null,
}),
)
})?;
Ok(vec![BTreeMap::from_iter([(
"__value".into(),
latest_value,
)])])
}
_ => Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid collection name".into(),
details: serde_json::Value::Null,
}),
)),
}
}
Note 1: the articles_by_author
collection is the only example here which has to apply any arguments. It is provided as an example of a collection which accepts an author_id
argument, and it must validate that the argument is present, and that it is an integer.
Note 2: the latest_article_id
collection is provided as an example of a function. It is a collection like all the others, but must follow the rules for functions: it must consist of a single row, with a single column named __value
.
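The single-row, single-column shape required of a function's result can be sketched in std-only Rust (a plain String stands in for the JSON value here, purely for illustration):

```rust
use std::collections::BTreeMap;

// A function's result set is a collection like any other, but must contain
// exactly one row with a single column named "__value".
fn function_result(value: &str) -> Vec<BTreeMap<String, String>> {
    vec![BTreeMap::from([("__value".to_string(), value.to_string())])]
}

fn main() {
    let result = function_result("3");
    assert_eq!(result.len(), 1);
    assert!(result[0].contains_key("__value"));
}
```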
Once we have computed the full collection, we can move onto evaluating the query in the context of that collection, using the execute_query
function:
fn execute_query(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
query: &models::Query,
root: Root,
collection: Vec<Row>,
) -> Result<models::RowSet> {
In the next section, we will break down the implementation of execute_query
.
Executing Queries
In this section, we will break down the implementation of the execute_query
function:
fn execute_query(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
query: &models::Query,
root: Root,
collection: Vec<Row>,
) -> Result<models::RowSet> {
At this point, we have already computed the full collection, which is passed via the collection
argument. Now, we need to evaluate the Query
in the context of this collection.
The Query
describes the predicate which should be applied to all rows, the sort order, pagination options, along with any aggregates to compute and fields to return.
The first step is to sort the collection.
Note: we could also start by filtering, and then sort the filtered rows. Which is more efficient depends on the data and the query, and choosing between these approaches would be the job of a query planner in a real database engine. However, this is out of scope here, so we make an arbitrary choice, and sort the data first.
Sorting
The first step is to sort the rows in the full collection:
let sorted = sort(
collection_relationships,
variables,
state,
collection,
&query.order_by,
)?;
The Query
object defines the sort order in terms of a list of OrderByElement
s. See the sorting specification for details on how this ought to be interpreted.
The sort
function
The sort
function implements a simple insertion sort, computing the ordering for each pair of rows, and inserting each row at the correct place:
fn sort(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
collection: Vec<Row>,
order_by: &Option<models::OrderBy>,
) -> Result<Vec<Row>> {
match order_by {
None => Ok(collection),
Some(order_by) => {
let mut copy = vec![];
for item_to_insert in collection {
let mut index = 0;
for other in &copy {
if let Ordering::Greater = eval_order_by(
collection_relationships,
variables,
state,
order_by,
other,
&item_to_insert,
)? {
break;
}
index += 1;
}
copy.insert(index, item_to_insert);
}
Ok(copy)
}
}
}
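The same insertion-sort shape, reduced to integers, can be seen in a std-only sketch (not part of the reference implementation): each item is inserted before the first element that compares Greater, which also makes the sort stable.

```rust
use std::cmp::Ordering;

// Insert each item before the first existing element that compares Greater.
fn insertion_sort_by<T>(items: Vec<T>, cmp: impl Fn(&T, &T) -> Ordering) -> Vec<T> {
    let mut copy: Vec<T> = vec![];
    for item_to_insert in items {
        let mut index = 0;
        for other in &copy {
            if let Ordering::Greater = cmp(other, &item_to_insert) {
                break;
            }
            index += 1;
        }
        copy.insert(index, item_to_insert);
    }
    copy
}

fn main() {
    assert_eq!(insertion_sort_by(vec![3, 1, 2], |a, b| a.cmp(b)), vec![1, 2, 3]);
    // Flipping the comparison yields a descending order:
    assert_eq!(insertion_sort_by(vec![3, 1, 2], |a, b| b.cmp(a)), vec![3, 2, 1]);
}
```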
sort
delegates to the eval_order_by
function to compute the ordering between two rows:
Evaluating the Ordering
To compare two rows, the eval_order_by
computes each OrderByElement
in turn, and compares the rows in order, or in reverse order, depending on whether the ordering is ascending or descending.
The function returns the first Ordering
which makes the two rows distinct (if any):
fn eval_order_by(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
order_by: &models::OrderBy,
t1: &Row,
t2: &Row,
) -> Result<Ordering> {
let mut result = Ordering::Equal;
for element in &order_by.elements {
let v1 = eval_order_by_element(collection_relationships, variables, state, element, t1)?;
let v2 = eval_order_by_element(collection_relationships, variables, state, element, t2)?;
let x = match element.order_direction {
models::OrderDirection::Asc => compare(v1, v2)?,
models::OrderDirection::Desc => compare(v2, v1)?,
};
result = result.then(x);
}
Ok(result)
}
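The use of result.then(x) means each successive OrderByElement only breaks ties left by the earlier ones: then keeps the first non-Equal comparison. A minimal std-only sketch:

```rust
use std::cmp::Ordering;

// Ordering::then keeps the first non-Equal comparison, so a later element
// only matters when all earlier elements compared Equal.
fn combined(first: Ordering, second: Ordering) -> Ordering {
    first.then(second)
}

fn main() {
    assert_eq!(combined(Ordering::Equal, Ordering::Less), Ordering::Less);
    assert_eq!(combined(Ordering::Greater, Ordering::Less), Ordering::Greater);
    assert_eq!(combined(Ordering::Equal, Ordering::Equal), Ordering::Equal);
}
```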
The ordering for a single OrderByElement
is computed by the eval_order_by_element
function.
We won't cover every branch of this function in detail here, but it works by pattern matching on the type of ordering being used.
Ordering by a column
As an example, here is the function eval_order_by_column
which evaluates ordering by a column:
fn eval_order_by_column(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
item: &Row,
path: Vec<models::PathElement>,
name: models::FieldName,
field_path: Option<Vec<models::FieldName>>,
) -> Result<serde_json::Value> {
let rows: Vec<Row> = eval_path(collection_relationships, variables, state, &path, item)?;
if rows.len() > 1 {
return Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "cannot order by column: expected at most one row".into(),
details: serde_json::Value::Null,
}),
));
}
match rows.first() {
Some(row) => eval_column_field_path(row, &name, &field_path, &BTreeMap::new()),
None => Ok(serde_json::Value::Null),
}
}
This code computes the target table, possibly by traversing relationships using eval_path
(we will cover this function later when we cover relationships), and validates that we computed a single row before selecting the value of the chosen column.
Now that we have sorted the full collection, we can apply the predicate to filter down the collection of rows. We will cover this in the next section.
Filtering
The next step is to filter the rows based on the provided predicate expression:
let filtered: Vec<Row> = (match &query.predicate {
None => Ok(sorted),
Some(expr) => {
let mut filtered: Vec<Row> = vec![];
for item in sorted {
let root = match root {
Root::CurrentRow => &item,
Root::ExplicitRow(root) => root,
};
if eval_expression(
collection_relationships,
variables,
state,
expr,
root,
&item,
)? {
filtered.push(item);
}
}
Ok(filtered)
}
})?;
As we can see, the function delegates to the eval_expression
function in order to evaluate the predicate on each row.
Evaluating expressions
The eval_expression
function evaluates a predicate by pattern matching on the type of the expression expr
, and returns a boolean value indicating whether the current row matches the predicate:
fn eval_expression(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
expr: &models::Expression,
root: &Row,
item: &Row,
) -> Result<bool> {
Logical expressions
The first category of expression types are the logical expressions - and (conjunction), or (disjunction) and not (negation) - whose evaluators are straightforward:
- To evaluate a conjunction/disjunction of subexpressions, we evaluate all of the subexpressions to booleans, and find the conjunction/disjunction of those boolean values respectively.
- To evaluate the negation of a subexpression, we evaluate the subexpression to a boolean value, and negate the boolean.
match expr {
models::Expression::And { expressions } => {
for expr in expressions {
if !eval_expression(collection_relationships, variables, state, expr, root, item)? {
return Ok(false);
}
}
Ok(true)
}
models::Expression::Or { expressions } => {
for expr in expressions {
if eval_expression(collection_relationships, variables, state, expr, root, item)? {
return Ok(true);
}
}
Ok(false)
}
models::Expression::Not { expression } => {
let b = eval_expression(
collection_relationships,
variables,
state,
expression,
root,
item,
)?;
Ok(!b)
}
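Note that the early-return loops give the conventional identities for empty expression lists: an empty conjunction is true, and an empty disjunction is false. A std-only sketch of the same loop shape, with booleans standing in for evaluated subexpressions:

```rust
// Conjunction: false as soon as any subexpression is false; true for [].
fn eval_and(exprs: &[bool]) -> bool {
    for e in exprs {
        if !*e {
            return false;
        }
    }
    true
}

// Disjunction: true as soon as any subexpression is true; false for [].
fn eval_or(exprs: &[bool]) -> bool {
    for e in exprs {
        if *e {
            return true;
        }
    }
    false
}

fn main() {
    assert!(eval_and(&[]));
    assert!(!eval_or(&[]));
    assert!(!eval_and(&[true, false]));
    assert!(eval_or(&[false, true]));
}
```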
Unary Operators
The next category of expressions are the unary operators. The only unary operator is the IsNull
operator, which is evaluated by evaluating the operator's comparison target, and then comparing the result to null
:
models::Expression::UnaryComparisonOperator { column, operator } => match operator {
models::UnaryComparisonOperator::IsNull => {
let vals = eval_comparison_target(
collection_relationships,
variables,
state,
column,
root,
item,
)?;
Ok(vals.iter().any(serde_json::Value::is_null))
}
},
To evaluate the comparison target, we delegate to the eval_comparison_target
function, which pattern matches:
- A column is evaluated using the eval_path function, which we will cover when we talk about relationships.
- A root collection column (that is, a column from the root collection, or the collection used by the nearest enclosing Query) is evaluated using eval_column. You may have noticed the additional argument, root, which has been passed down through every function call so far - this is to track the root collection for exactly this case.
fn eval_comparison_target(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
target: &models::ComparisonTarget,
root: &Row,
item: &Row,
) -> Result<Vec<serde_json::Value>> {
match target {
models::ComparisonTarget::Column {
name,
field_path,
path,
} => {
let rows = eval_path(collection_relationships, variables, state, path, item)?;
let mut values = vec![];
for row in &rows {
let value = eval_column_field_path(row, name, field_path, &BTreeMap::new())?;
values.push(value);
}
Ok(values)
}
models::ComparisonTarget::RootCollectionColumn { name, field_path } => {
let value = eval_column_field_path(root, name, field_path, &BTreeMap::new())?;
Ok(vec![value])
}
}
}
Binary Operators
The next category of expressions are the binary operators. Binary operators can be standard or custom.
The only standard binary operators are the equal
and in
operators.
equal
is evaluated by evaluating its comparison target and comparison value, and comparing them for equality:
models::Expression::BinaryComparisonOperator {
column,
operator,
value,
} => match operator.as_str() {
"eq" => {
let left_vals = eval_comparison_target(
collection_relationships,
variables,
state,
column,
root,
item,
)?;
let right_vals = eval_comparison_value(
collection_relationships,
variables,
state,
value,
root,
item,
)?;
for left_val in &left_vals {
for right_val in &right_vals {
if left_val == right_val {
return Ok(true);
}
}
}
Ok(false)
}
The in
operator is evaluated by evaluating its comparison target, and all of its comparison values, and testing whether the evaluated target appears in the list of evaluated values:
"in" => {
let left_vals = eval_comparison_target(
collection_relationships,
variables,
state,
column,
root,
item,
)?;
let right_val_sets = eval_comparison_value(
collection_relationships,
variables,
state,
value,
root,
item,
)?;
for comparison_value in &right_val_sets {
let right_vals = comparison_value.as_array().ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "expected array".into(),
details: serde_json::Value::Null,
}),
))?;
for left_val in &left_vals {
for right_val in right_vals {
if left_val == right_val {
return Ok(true);
}
}
}
}
Ok(false)
}
The reference implementation provides a single custom binary operator as an example, which is the like
operator on strings:
"like" => {
let column_vals = eval_comparison_target(
collection_relationships,
variables,
state,
column,
root,
item,
)?;
let regex_vals = eval_comparison_value(
collection_relationships,
variables,
state,
value,
root,
item,
)?;
for column_val in &column_vals {
for regex_val in &regex_vals {
let column_str = column_val.as_str().ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "column is not a string".into(),
details: serde_json::Value::Null,
}),
))?;
let regex_str = regex_val.as_str().ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "regular expression is not a string".into(),
details: serde_json::Value::Null,
}),
))?;
let regex = Regex::new(regex_str).map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid regular expression".into(),
details: serde_json::Value::Null,
}),
)
})?;
if regex.is_match(column_str) {
return Ok(true);
}
}
}
Ok(false)
}
_ => Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid binary comparison operator".into(),
details: serde_json::Value::Null,
}),
)),
EXISTS
expressions
An EXISTS
expression is evaluated by recursively evaluating a Query
on a related collection, and testing to see whether the resulting RowSet
contains any rows:
models::Expression::Exists {
in_collection,
predicate,
} => {
let query = models::Query {
aggregates: None,
fields: Some(IndexMap::new()),
limit: None,
offset: None,
order_by: None,
predicate: predicate.clone().map(|e| *e),
};
let collection = eval_in_collection(
collection_relationships,
item,
variables,
state,
in_collection,
)?;
let row_set = execute_query(
collection_relationships,
variables,
state,
&query,
Root::ExplicitRow(root),
collection,
)?;
let rows: Vec<IndexMap<_, _>> = row_set.rows.ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "expected rows in RowSet".into(),
details: serde_json::Value::Null,
}),
))?;
Ok(!rows.is_empty())
Pagination
Once the irrelevant rows have been filtered out, the execute_query
function applies the limit
and offset
arguments by calling the paginate function:
let paginated: Vec<Row> = paginate(filtered.into_iter(), query.limit, query.offset);
The paginate
function is implemented using the skip
and take
functions on iterators:
fn paginate<I: Iterator<Item = Row>>(
collection: I,
limit: Option<u32>,
offset: Option<u32>,
) -> Vec<Row> {
let start = offset.unwrap_or(0).try_into().unwrap();
match limit {
Some(n) => collection.skip(start).take(n.try_into().unwrap()).collect(),
None => collection.skip(start).collect(),
}
}
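The skip/take behavior is easiest to see with the Row type replaced by plain integers (a std-only sketch of the same function, for illustration only):

```rust
// A copy of paginate specialized to integers: skip `offset` items, then
// take at most `limit` items if a limit was provided.
fn paginate<I: Iterator<Item = i32>>(
    collection: I,
    limit: Option<u32>,
    offset: Option<u32>,
) -> Vec<i32> {
    let start = offset.unwrap_or(0) as usize;
    match limit {
        Some(n) => collection.skip(start).take(n as usize).collect(),
        None => collection.skip(start).collect(),
    }
}

fn main() {
    let rows = vec![10, 20, 30, 40, 50];
    // offset 1, limit 2: skip one row, then take two
    assert_eq!(paginate(rows.clone().into_iter(), Some(2), Some(1)), vec![20, 30]);
    // no limit: everything after the offset
    assert_eq!(paginate(rows.into_iter(), None, Some(3)), vec![40, 50]);
}
```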
Aggregates
Now that we have computed the sorted, filtered, and paginated rows of the original collection, we can compute any aggregates over those rows.
Each aggregate is computed in turn by the eval_aggregate
function, and added to the list of all aggregates to return:
let aggregates = query
.aggregates
.as_ref()
.map(|aggregates| {
let mut row: IndexMap<models::FieldName, serde_json::Value> = IndexMap::new();
for (aggregate_name, aggregate) in aggregates {
row.insert(
aggregate_name.clone(),
eval_aggregate(aggregate, &paginated)?,
);
}
Ok(row)
})
.transpose()?;
The eval_aggregate
function works by pattern matching on the type of the aggregate being computed:
- A star_count aggregate simply counts all rows,
- A column_count aggregate computes the subset of rows where the named column is non-null, and returns the count of only those rows,
- A single_column aggregate is computed by delegating to the eval_aggregate_function function, which computes a custom aggregate operator over the values of the selected column taken from all rows.
fn eval_aggregate(aggregate: &models::Aggregate, paginated: &[Row]) -> Result<serde_json::Value> {
match aggregate {
models::Aggregate::StarCount {} => Ok(serde_json::Value::from(paginated.len())),
models::Aggregate::ColumnCount {
column,
field_path,
distinct,
} => {
let values = paginated
.iter()
.map(|row| eval_column_field_path(row, column, field_path, &BTreeMap::new()))
.collect::<Result<Vec<_>>>()?;
let non_null_values = values.iter().filter(|value| !value.is_null());
let agg_value = if *distinct {
non_null_values
.map(|value| {
serde_json::to_string(value).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "unable to encode value".into(),
details: serde_json::Value::Null,
}),
)
})
})
.collect::<Result<HashSet<_>>>()?
.len()
} else {
non_null_values.count()
};
serde_json::to_value(agg_value).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode value".into(),
details: serde_json::Value::Null,
}),
)
})
}
models::Aggregate::SingleColumn {
column,
field_path,
function,
} => {
let values = paginated
.iter()
.map(|row| eval_column_field_path(row, column, field_path, &BTreeMap::new()))
.collect::<Result<Vec<_>>>()?;
eval_aggregate_function(function, values)
}
}
}
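The distinct branch relies on collecting into a HashSet to deduplicate. A std-only sketch of just that step, with string values standing in for the serialized JSON values:

```rust
use std::collections::HashSet;

// Deduplicate by collecting into a HashSet, then count the set's size -
// the same shape as the `distinct` branch of column_count above.
fn count_distinct(values: &[&str]) -> usize {
    values.iter().collect::<HashSet<_>>().len()
}

fn main() {
    assert_eq!(count_distinct(&["a", "b", "a"]), 2);
    assert_eq!(count_distinct(&[]), 0);
}
```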
The eval_aggregate_function
function implements the custom aggregate operators min
and max
, which are provided for integer-valued columns:
fn eval_aggregate_function(
function: &models::AggregateFunctionName,
values: Vec<serde_json::Value>,
) -> Result<serde_json::Value> {
let int_values = values
.iter()
.map(|value| {
value
.as_i64()
.ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "column is not an integer".into(),
details: serde_json::Value::Null,
}),
))?
.try_into()
.map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "column value out of range".into(),
details: serde_json::Value::Null,
}),
)
})
})
.collect::<Result<Vec<i32>>>()?;
let agg_value = match function.as_str() {
"min" => Ok(int_values.iter().min()),
"max" => Ok(int_values.iter().max()),
_ => Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid aggregation function".into(),
details: serde_json::Value::Null,
}),
)),
}?;
serde_json::to_value(agg_value).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode value".into(),
details: serde_json::Value::Null,
}),
)
})
}
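The i64-to-i32 narrowing used throughout these handlers is worth calling out: try_into rejects values outside i32's range rather than silently wrapping, which is what turns an out-of-range column value into a BAD_REQUEST. A std-only sketch:

```rust
use std::convert::TryInto;

// Narrow an i64 (as produced by serde_json's as_i64) to an i32,
// rejecting out-of-range values instead of wrapping.
fn narrow(value: i64) -> Result<i32, String> {
    value
        .try_into()
        .map_err(|_| "column value out of range".to_string())
}

fn main() {
    assert_eq!(narrow(42), Ok(42));
    assert!(narrow(5_000_000_000).is_err());
}
```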
Field Selection
In addition to computing aggregates, we can also return fields selected directly from the rows themselves.
This is done by mapping over the computed rows, and using the eval_field
function to evaluate each selected field in turn:
let rows = query
.fields
.as_ref()
.map(|fields| {
let mut rows: Vec<IndexMap<models::FieldName, models::RowFieldValue>> = vec![];
for item in &paginated {
let row = eval_row(fields, collection_relationships, variables, state, item)?;
rows.push(row);
}
Ok(rows)
})
.transpose()?;
The eval_field
function works by pattern matching on the field type:
- A column is selected using the eval_column function (or eval_nested_field if there are nested fields to fetch)
- A relationship field is selected by evaluating the related collection using eval_path_element (we will cover this in the next section), and then recursively executing a query using execute_query:
fn eval_field(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
field: &models::Field,
item: &Row,
) -> Result<models::RowFieldValue> {
match field {
models::Field::Column {
column,
fields,
arguments,
} => {
let col_val = eval_column(variables, item, column, arguments)?;
match fields {
None => Ok(models::RowFieldValue(col_val)),
Some(nested_field) => eval_nested_field(
collection_relationships,
variables,
state,
col_val,
nested_field,
),
}
}
models::Field::Relationship {
relationship,
arguments,
query,
} => {
let relationship = collection_relationships.get(relationship).ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid relationship name".into(),
details: serde_json::Value::Null,
}),
))?;
let source = vec![item.clone()];
let collection = eval_path_element(
collection_relationships,
variables,
state,
relationship,
arguments,
&source,
&None,
)?;
let rows = execute_query(
collection_relationships,
variables,
state,
query,
Root::CurrentRow,
collection,
)?;
let rows_json = serde_json::to_value(rows).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode rowset".into(),
details: serde_json::Value::Null,
}),
)
})?;
Ok(models::RowFieldValue(rows_json))
}
}
}
Relationships
Relationships appear in many places in the QueryRequest
, but are always computed using the eval_path
function.
eval_path
accepts a list of PathElement
s, each of which describes the traversal of a single edge of the collection-relationship graph. eval_path
computes the collection at the final node of this path through the graph.
It does this by successively evaluating each edge in turn using the eval_path_element
function:
fn eval_path(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
path: &[models::PathElement],
item: &Row,
) -> Result<Vec<Row>> {
let mut result: Vec<Row> = vec![item.clone()];
for path_element in path {
let relationship = collection_relationships
.get(&path_element.relationship)
.ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "invalid relationship name in path".into(),
details: serde_json::Value::Null,
}),
))?;
result = eval_path_element(
collection_relationships,
variables,
state,
relationship,
&path_element.arguments,
&result,
&path_element.predicate,
)?;
}
Ok(result)
}
The eval_path_element
function computes a collection from a single relationship, one source row at a time, by evaluating all relationship arguments, computing the target collection using get_collection_by_name
, and evaluating any column mapping on any resulting rows:
fn eval_path_element(
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
variables: &BTreeMap<models::VariableName, serde_json::Value>,
state: &AppState,
relationship: &models::Relationship,
arguments: &BTreeMap<models::ArgumentName, models::RelationshipArgument>,
source: &[Row],
predicate: &Option<Box<models::Expression>>,
) -> Result<Vec<Row>> {
let mut matching_rows: Vec<Row> = vec![];
// Note: Join strategy
//
// Rows can be related in two ways: 1) via a column mapping, and
// 2) via collection arguments. Because collection arguments can be computed
// using the columns on the source side of a relationship, in general
// we need to compute the target collection once for each source row.
// This join strategy can result in some target rows appearing in the
// resulting row set more than once, if two source rows are both related
// to the same target row.
//
// In practice, this is not an issue, either because a) the relationship
// is computed in the course of evaluating a predicate, and all predicates are
// implicitly or explicitly existentially quantified, or b) if the
// relationship is computed in the course of evaluating an ordering, the path
// should consist of all object relationships, and possibly terminated by a
// single array relationship, so there should be no double counting.
for src_row in source {
let mut all_arguments = BTreeMap::new();
for (argument_name, argument_value) in &relationship.arguments {
if all_arguments
.insert(
argument_name.clone(),
eval_relationship_argument(variables, src_row, argument_value)?,
)
.is_some()
{
return Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "duplicate argument names".into(),
details: serde_json::Value::Null,
}),
));
}
}
for (argument_name, argument_value) in arguments {
if all_arguments
.insert(
argument_name.clone(),
eval_relationship_argument(variables, src_row, argument_value)?,
)
.is_some()
{
return Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "duplicate argument names".into(),
details: serde_json::Value::Null,
}),
));
}
}
let target =
get_collection_by_name(&relationship.target_collection, &all_arguments, state)?;
for tgt_row in &target {
if eval_column_mapping(relationship, src_row, tgt_row)?
&& if let Some(expression) = predicate {
eval_expression(
collection_relationships,
variables,
state,
expression,
tgt_row,
tgt_row,
)?
} else {
true
}
{
matching_rows.push(tgt_row.clone());
}
}
}
Ok(matching_rows)
}
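Stripped of arguments and predicates, eval_path_element is a nested-loop join over a column mapping. The following std-only sketch shows that core step; the column names (id, author_id) are illustrative, not part of the specification:

```rust
use std::collections::BTreeMap;

type Row = BTreeMap<String, i32>;

// For each source row, keep every target row whose mapped column matches -
// the join strategy described in the comment above, one source row at a time.
fn join(source: &[Row], target: &[Row]) -> Vec<Row> {
    let mut matching_rows = vec![];
    for src_row in source {
        for tgt_row in target {
            if src_row.get("id") == tgt_row.get("author_id") {
                matching_rows.push(tgt_row.clone());
            }
        }
    }
    matching_rows
}

fn row(pairs: &[(&str, i32)]) -> Row {
    pairs.iter().map(|(k, v)| (k.to_string(), *v)).collect()
}

fn main() {
    let authors = vec![row(&[("id", 1)]), row(&[("id", 2)])];
    let articles = vec![
        row(&[("author_id", 1), ("article_id", 10)]),
        row(&[("author_id", 2), ("article_id", 11)]),
        row(&[("author_id", 1), ("article_id", 12)]),
    ];
    // Each source row contributes its matching target rows.
    assert_eq!(join(&authors, &articles).len(), 3);
}
```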
Mutations
In this section, we will break down the implementation of the /mutation
endpoint.
The mutation endpoint is handled by the post_mutation
function:
async fn post_mutation(
State(state): State<Arc<Mutex<AppState>>>,
Json(request): Json<models::MutationRequest>,
) -> Result<Json<models::MutationResponse>> {
This function receives the application state, and the MutationRequest
structure.
The function iterates over the collection of requested MutationOperation
structures, and handles each one in turn, adding each result to the operation_results
field in the response:
if request.operations.len() > 1 {
Err((
StatusCode::NOT_IMPLEMENTED,
Json(models::ErrorResponse {
message: "transactional mutations are not supported".into(),
details: serde_json::Value::Null,
}),
))
} else {
let mut state = state.lock().await;
let mut operation_results = vec![];
for operation in &request.operations {
let operation_result = execute_mutation_operation(
&mut state,
&request.collection_relationships,
operation,
)?;
operation_results.push(operation_result);
}
Ok(Json(models::MutationResponse { operation_results }))
}
}
The execute_mutation_operation
function is responsible for executing an individual operation. In the next section, we'll break that function down.
Handling Operations
The execute_mutation_operation
function is responsible for handling a single MutationOperation
, and returning the corresponding MutationOperationResults
:
fn execute_mutation_operation(
state: &mut AppState,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
operation: &models::MutationOperation,
) -> Result<models::MutationOperationResults> {
match operation {
models::MutationOperation::Procedure {
name,
arguments,
fields,
} => execute_procedure(state, name, arguments, fields, collection_relationships),
}
}
The function matches on the type of the operation, and delegates to the appropriate function. Currently, the only type of operation is Procedure
, so the function delegates to the execute_procedure
function. In the next section, we will break down the implementation of that function.
Procedures
The execute_procedure
function is responsible for executing a single procedure:
fn execute_procedure(
state: &mut AppState,
name: &models::ProcedureName,
arguments: &BTreeMap<models::ArgumentName, serde_json::Value>,
fields: &Option<models::NestedField>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
) -> std::result::Result<models::MutationOperationResults, (StatusCode, Json<models::ErrorResponse>)>
The function receives the application state
, along with the name
of the procedure to invoke, a list of arguments
, a list of fields
to return, and a list of collection_relationships
.
The function matches on the name of the procedure, and fails if the name is not recognized. We will walk through each procedure in turn.
{
match name.as_str() {
"upsert_article" => {
execute_upsert_article(state, arguments, fields, collection_relationships)
}
"delete_articles" => {
execute_delete_articles(state, arguments, fields, collection_relationships)
}
_ => Err((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "unknown procedure".into(),
details: serde_json::Value::Null,
}),
)),
}
}
upsert_article
The upsert_article
procedure is implemented by the execute_upsert_article
function.
The execute_upsert_article
function reads the article
argument from the arguments
list, failing if it is not found or invalid.
It then inserts or updates that article in the application state, depending on whether an article with that id
already exists.
Finally, it delegates to the eval_nested_field
function to evaluate any nested fields, and returns the selected fields in the result:
fn execute_upsert_article(
state: &mut AppState,
arguments: &BTreeMap<models::ArgumentName, serde_json::Value>,
fields: &Option<models::NestedField>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
) -> std::result::Result<models::MutationOperationResults, (StatusCode, Json<models::ErrorResponse>)>
{
let article = arguments.get("article").ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "Expected argument 'article'".into(),
details: serde_json::Value::Null,
}),
))?;
let article_obj: Row = serde_json::from_value(article.clone()).map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "article must be an object".into(),
details: serde_json::Value::Null,
}),
)
})?;
let id = article_obj.get("id").ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "article missing field 'id'".into(),
details: serde_json::Value::Null,
}),
))?;
let id_int = id
.as_i64()
.ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "id must be an integer".into(),
details: serde_json::Value::Null,
}),
))?
.try_into()
.map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "id out of range".into(),
details: serde_json::Value::Null,
}),
)
})?;
let old_row = state.articles.insert(id_int, article_obj);
Ok(models::MutationOperationResults::Procedure {
result: old_row.map_or(Ok(serde_json::Value::Null), |old_row| {
let old_row_value = serde_json::to_value(old_row).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode response".into(),
details: serde_json::Value::Null,
}),
)
})?;
let old_row_fields = match fields {
None => Ok(models::RowFieldValue(old_row_value)),
Some(nested_field) => eval_nested_field(
collection_relationships,
&BTreeMap::new(),
state,
old_row_value,
nested_field,
),
}?;
Ok(old_row_fields.0)
})?,
})
}
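The upsert itself rests on a small detail of the standard library: BTreeMap::insert returns the previous value for the key, if any, which is how the function recovers the old row for its response. A std-only sketch:

```rust
use std::collections::BTreeMap;

// BTreeMap::insert returns the value previously stored under the key (if
// any), so a single call both upserts and reports what was replaced.
fn upsert(articles: &mut BTreeMap<i32, String>, id: i32, article: &str) -> Option<String> {
    articles.insert(id, article.to_string())
}

fn main() {
    let mut articles = BTreeMap::new();
    // First insert: nothing was replaced.
    assert_eq!(upsert(&mut articles, 1, "first"), None);
    // Second insert under the same id: the old row comes back.
    assert_eq!(upsert(&mut articles, 1, "second"), Some("first".to_string()));
    assert_eq!(articles.get(&1).map(String::as_str), Some("second"));
}
```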
delete_articles
The delete_articles
procedure is implemented by the execute_delete_articles
function.
It is provided as an example of a procedure with a predicate type as the type of an argument.
The execute_delete_articles
function reads the where
argument from the arguments
list, failing if it is not found or invalid.
It then deletes all articles in the application state which match the predicate, and returns a list of the deleted rows.
This function delegates to the eval_nested_field
function to evaluate any nested fields, and returns the selected fields in the result:
fn execute_delete_articles(
state: &mut AppState,
arguments: &BTreeMap<models::ArgumentName, serde_json::Value>,
fields: &Option<models::NestedField>,
collection_relationships: &BTreeMap<models::RelationshipName, models::Relationship>,
) -> std::result::Result<models::MutationOperationResults, (StatusCode, Json<models::ErrorResponse>)>
{
let predicate_value = arguments.get("where").ok_or((
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "Expected argument 'where'".into(),
details: serde_json::Value::Null,
}),
))?;
let predicate: models::Expression =
serde_json::from_value(predicate_value.clone()).map_err(|_| {
(
StatusCode::BAD_REQUEST,
Json(models::ErrorResponse {
message: "Bad predicate".into(),
details: serde_json::Value::Null,
}),
)
})?;
let mut removed: Vec<Row> = vec![];
let state_snapshot = state.clone();
for article in state.articles.values_mut() {
if eval_expression(
&BTreeMap::new(),
&BTreeMap::new(),
&state_snapshot,
&predicate,
article,
article,
)? {
removed.push(article.clone());
}
}
    // Remove the matched rows from the application state.
    state
        .articles
        .retain(|_, article| !removed.contains(article));
let removed_value = serde_json::to_value(removed).map_err(|_| {
(
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode response".into(),
details: serde_json::Value::Null,
}),
)
})?;
let removed_fields = match fields {
None => Ok(models::RowFieldValue(removed_value)),
Some(nested_field) => eval_nested_field(
collection_relationships,
&BTreeMap::new(),
&state_snapshot,
removed_value,
nested_field,
),
}?;
Ok(models::MutationOperationResults::Procedure {
result: removed_fields.0,
})
}
Explain
The /query/explain
and /mutation/explain
endpoints are not implemented in the reference implementation, because their respective request objects are interpreted directly. There is no intermediate representation (such as SQL) which could be described as an "execution plan".
The query.explain
and mutation.explain
capabilities are turned off in the capabilities endpoint,
and the /query/explain
and /mutation/explain
endpoints throw an error:
async fn post_query_explain(
Json(_request): Json<models::QueryRequest>,
) -> Result<Json<models::ExplainResponse>> {
Err((
StatusCode::NOT_IMPLEMENTED,
Json(models::ErrorResponse {
message: "explain is not supported".into(),
details: serde_json::Value::Null,
}),
))
}
Health and Metrics
Service Health
The /health endpoint has nothing to check, because the reference implementation does not need to connect to any other services. Therefore, once the reference implementation is running, it can always report a healthy status:
async fn get_health() -> StatusCode {
StatusCode::OK
}
In practice, a connector should make sure that any upstream services can be successfully contacted, and respond accordingly.
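As a sketch of that idea, a handler might report 503 Service Unavailable when its backing service cannot be reached. The upstream address and the plain-TCP probe below are illustrative assumptions, not part of the specification; a real connector would use its actual client (for example, a database ping):

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

// Probe a hypothetical upstream service with a plain TCP connect.
fn upstream_reachable(addr: &SocketAddr) -> bool {
    TcpStream::connect_timeout(addr, Duration::from_secs(1)).is_ok()
}

// Sketch of a health handler: 200 OK if the upstream answers,
// 503 Service Unavailable otherwise.
fn get_health_status(upstream: &SocketAddr) -> u16 {
    if upstream_reachable(upstream) {
        200
    } else {
        503
    }
}
```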
Metrics
The reference implementation maintains some generic access metrics in its application state:
- metrics.total_requests counts the number of requests ever served, and
- metrics.active_requests counts the number of requests currently being served.
The metrics endpoint reports these metrics using the Rust prometheus crate:
async fn get_metrics(State(state): State<Arc<Mutex<AppState>>>) -> Result<String> {
let state = state.lock().await;
state.metrics.as_text().ok_or((
StatusCode::INTERNAL_SERVER_ERROR,
Json(models::ErrorResponse {
message: "cannot encode metrics".into(),
details: serde_json::Value::Null,
}),
))
}
To maintain these metrics, it uses a simple metrics middleware:
async fn metrics_middleware<T>(
state: State<Arc<Mutex<AppState>>>,
request: axum::http::Request<T>,
next: axum::middleware::Next<T>,
) -> axum::response::Response {
// Don't hold the lock to update metrics, since the
// lock doesn't protect the metrics anyway.
let metrics = {
let state = state.lock().await;
state.metrics.clone()
};
metrics.total_requests.inc();
metrics.active_requests.inc();
let response = next.run(request).await;
metrics.active_requests.dec();
response
}
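The counter discipline that the middleware relies on can be sketched without the prometheus crate. The Metrics struct below is a hypothetical stand-in: because the counters are shared atomics, a clone taken outside the application-state lock still updates the same underlying values, which is exactly what the middleware above assumes:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

// Hypothetical stand-in for the prometheus counters: shared atomics,
// so cloning out of the state lock is safe.
#[derive(Clone, Default)]
pub struct Metrics {
    pub total_requests: Arc<AtomicU64>,
    pub active_requests: Arc<AtomicU64>,
}

impl Metrics {
    // On entry: count the request and mark it active.
    pub fn on_request(&self) {
        self.total_requests.fetch_add(1, Ordering::Relaxed);
        self.active_requests.fetch_add(1, Ordering::Relaxed);
    }

    // On exit: the request is no longer active.
    pub fn on_response(&self) {
        self.active_requests.fetch_sub(1, Ordering::Relaxed);
    }
}
```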
Types
Aggregate
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[skip_serializing_none]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Aggregate")]
pub enum Aggregate {
ColumnCount {
/// The column to apply the count aggregate function to
column: FieldName,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Whether or not only distinct items should be counted
distinct: bool,
},
SingleColumn {
/// The column to apply the aggregation function to
column: FieldName,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Single column aggregate function name.
function: AggregateFunctionName,
},
StarCount {},
}
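Because of the serde attributes (tag = "type", snake_case), an aggregate is serialized with a discriminator field. For example, a SingleColumn aggregate might be written as follows; the column and function names are illustrative, since aggregate functions are defined by each connector's schema:

```json
{
  "type": "single_column",
  "column": "price",
  "function": "avg"
}
```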
AggregateFunctionDefinition
/// The definition of an aggregation function on a scalar type
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Aggregate Function Definition")]
pub struct AggregateFunctionDefinition {
/// The scalar or object type of the result of this function
pub result_type: Type,
}
Argument
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Argument")]
pub enum Argument {
/// The argument is provided by reference to a variable
Variable { name: VariableName },
/// The argument is provided as a literal value
Literal { value: serde_json::Value },
}
ArgumentInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Argument Info")]
pub struct ArgumentInfo {
/// Argument description
pub description: Option<String>,
/// The name of the type of this argument
#[serde(rename = "type")]
pub argument_type: Type,
}
Capabilities
/// Describes the features of the specification which a data connector implements.
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Capabilities")]
pub struct Capabilities {
pub query: QueryCapabilities,
pub mutation: MutationCapabilities,
pub relationships: Option<RelationshipCapabilities>,
}
CapabilitiesResponse
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Capabilities Response")]
pub struct CapabilitiesResponse {
pub version: String,
pub capabilities: Capabilities,
}
ComparisonOperatorDefinition
/// The definition of a comparison operator on a scalar type
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Comparison Operator Definition")]
pub enum ComparisonOperatorDefinition {
Equal,
In,
Custom {
/// The type of the argument to this operator
argument_type: Type,
},
}
ComparisonTarget
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Comparison Target")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ComparisonTarget {
Column {
/// The name of the column
name: FieldName,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Any relationships to traverse to reach this column
path: Vec<PathElement>,
},
RootCollectionColumn {
/// The name of the column
name: FieldName,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
},
}
ComparisonValue
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Comparison Value")]
pub enum ComparisonValue {
Column { column: ComparisonTarget },
Scalar { value: serde_json::Value },
Variable { name: VariableName },
}
ErrorResponse
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Error Response")]
pub struct ErrorResponse {
/// A human-readable summary of the error
pub message: String,
/// Any additional structured information about the error
pub details: serde_json::Value,
}
ExistsCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Exists Capabilities")]
pub struct ExistsCapabilities {
/// Does the connector support ExistsInCollection::NestedCollection
pub nested_collections: Option<LeafCapability>,
}
ExistsInCollection
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Exists In Collection")]
pub enum ExistsInCollection {
Related {
relationship: RelationshipName,
/// Values to be provided to any collection arguments
arguments: BTreeMap<ArgumentName, RelationshipArgument>,
},
Unrelated {
/// The name of a collection
collection: CollectionName,
/// Values to be provided to any collection arguments
arguments: BTreeMap<ArgumentName, RelationshipArgument>,
},
NestedCollection {
column_name: FieldName,
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
/// Path to a nested collection via object columns
#[serde(skip_serializing_if = "Vec::is_empty", default)]
field_path: Vec<FieldName>,
},
}
ExplainResponse
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Explain Response")]
pub struct ExplainResponse {
/// A list of human-readable key-value pairs describing
/// a query execution plan. For example, a connector for
/// a relational database might return the generated SQL
/// and/or the output of the `EXPLAIN` command. An API-based
/// connector might encode a list of statically-known API
/// calls which would be made.
pub details: BTreeMap<String, String>,
}
Expression
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Expression")]
pub enum Expression {
And {
expressions: Vec<Expression>,
},
Or {
expressions: Vec<Expression>,
},
Not {
expression: Box<Expression>,
},
UnaryComparisonOperator {
column: ComparisonTarget,
operator: UnaryComparisonOperator,
},
BinaryComparisonOperator {
column: ComparisonTarget,
operator: ComparisonOperatorName,
value: ComparisonValue,
},
Exists {
in_collection: ExistsInCollection,
predicate: Option<Box<Expression>>,
},
}
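As an example of the serialized form, the following expression combines a unary and a binary comparison. The column names and the eq operator are illustrative; binary operators are defined per scalar type by the connector's schema:

```json
{
  "type": "and",
  "expressions": [
    {
      "type": "unary_comparison_operator",
      "column": { "type": "column", "name": "deleted_at", "path": [] },
      "operator": "is_null"
    },
    {
      "type": "binary_comparison_operator",
      "column": { "type": "column", "name": "author_id", "path": [] },
      "operator": "eq",
      "value": { "type": "scalar", "value": 1 }
    }
  ]
}
```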
Field
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Field")]
pub enum Field {
Column {
column: FieldName,
/// When the type of the column is a (possibly-nullable) array or object,
/// the caller can request a subset of the complete column data,
/// by specifying fields to fetch here.
/// If omitted, the column data will be fetched in full.
fields: Option<NestedField>,
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
arguments: BTreeMap<ArgumentName, Argument>,
},
Relationship {
query: Box<Query>,
/// The name of the relationship to follow for the subquery
relationship: RelationshipName,
/// Values to be provided to any collection arguments
arguments: BTreeMap<ArgumentName, RelationshipArgument>,
},
}
ForeignKeyConstraint
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Foreign Key Constraint")]
pub struct ForeignKeyConstraint {
/// The columns on which you want to define the foreign key.
pub column_mapping: BTreeMap<FieldName, FieldName>,
/// The name of a collection
pub foreign_collection: CollectionName,
}
FunctionInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Function Info")]
pub struct FunctionInfo {
/// The name of the function
pub name: FunctionName,
/// Description of the function
pub description: Option<String>,
/// Any arguments that this function requires
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
/// The name of the function's result type
pub result_type: Type,
}
LeafCapability
/// A unit value to indicate a particular leaf capability is supported.
/// This is an empty struct to allow for future sub-capabilities.
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
pub struct LeafCapability {}
MutationOperation
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Operation")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum MutationOperation {
Procedure {
/// The name of a procedure
name: ProcedureName,
/// Any named procedure arguments
arguments: BTreeMap<ArgumentName, serde_json::Value>,
/// The fields to return from the result, or null to return everything
fields: Option<NestedField>,
},
}
MutationOperationResults
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Operation Results")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum MutationOperationResults {
Procedure { result: serde_json::Value },
}
MutationRequest
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Request")]
pub struct MutationRequest {
/// The mutation operations to perform
pub operations: Vec<MutationOperation>,
/// The relationships between collections involved in the entire mutation request
pub collection_relationships: BTreeMap<RelationshipName, Relationship>,
}
MutationResponse
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Response")]
pub struct MutationResponse {
/// The results of each mutation operation, in the same order as they were received
pub operation_results: Vec<MutationOperationResults>,
}
NestedArray
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "NestedArray")]
pub struct NestedArray {
pub fields: Box<NestedField>,
}
NestedField
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "NestedField")]
pub enum NestedField {
Object(NestedObject),
Array(NestedArray),
}
NestedFieldCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Nested Field Capabilities")]
pub struct NestedFieldCapabilities {
/// Does the connector support filtering by values of nested fields
pub filter_by: Option<LeafCapability>,
/// Does the connector support ordering by values of nested fields
pub order_by: Option<LeafCapability>,
/// Does the connector support aggregating values within nested fields
pub aggregates: Option<LeafCapability>,
}
MutationCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Capabilities")]
pub struct MutationCapabilities {
/// Does the connector support executing multiple mutations in a transaction.
pub transactional: Option<LeafCapability>,
/// Does the connector support explaining mutations
pub explain: Option<LeafCapability>,
}
RelationshipCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship Capabilities")]
pub struct RelationshipCapabilities {
/// Does the connector support comparisons that involve related collections (ie. joins)?
pub relation_comparisons: Option<LeafCapability>,
/// Does the connector support ordering by an aggregated array relationship?
pub order_by_aggregate: Option<LeafCapability>,
}
SchemaResponse
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Schema Response")]
pub struct SchemaResponse {
/// A list of scalar types which will be used as the types of collection columns
pub scalar_types: BTreeMap<ScalarTypeName, ScalarType>,
/// A list of object types which can be used as the types of arguments, or return types of procedures.
/// Names should not overlap with scalar type names.
pub object_types: BTreeMap<ObjectTypeName, ObjectType>,
/// Collections which are available for queries
pub collections: Vec<CollectionInfo>,
/// Functions (i.e. collections which return a single column and row)
pub functions: Vec<FunctionInfo>,
/// Procedures which are available for execution as part of mutations
pub procedures: Vec<ProcedureInfo>,
}
ScalarType
/// The definition of a scalar type, i.e. types that can be used as the types of columns.
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Scalar Type")]
pub struct ScalarType {
/// A description of valid values for this scalar type.
/// Defaults to `TypeRepresentation::JSON` if omitted
pub representation: Option<TypeRepresentation>,
/// A map from aggregate function names to their definitions. Result type names must be defined scalar types declared in ScalarTypesCapabilities.
pub aggregate_functions: BTreeMap<AggregateFunctionName, AggregateFunctionDefinition>,
/// A map from comparison operator names to their definitions. Argument type names must be defined scalar types declared in ScalarTypesCapabilities.
pub comparison_operators: BTreeMap<ComparisonOperatorName, ComparisonOperatorDefinition>,
}
TypeRepresentation
/// Representations of scalar types
#[derive(
Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Type Representation")]
pub enum TypeRepresentation {
/// JSON booleans
Boolean,
/// Any JSON string
String,
/// Any JSON number
#[deprecated(since = "0.1.2", note = "Use sized numeric types instead")]
Number,
/// Any JSON number, with no decimal part
#[deprecated(since = "0.1.2", note = "Use sized numeric types instead")]
Integer,
/// An 8-bit signed integer with a minimum value of -2^7 and a maximum value of 2^7 - 1
Int8,
/// A 16-bit signed integer with a minimum value of -2^15 and a maximum value of 2^15 - 1
Int16,
/// A 32-bit signed integer with a minimum value of -2^31 and a maximum value of 2^31 - 1
Int32,
/// A 64-bit signed integer with a minimum value of -2^63 and a maximum value of 2^63 - 1
Int64,
/// An IEEE-754 single-precision floating-point number
Float32,
/// An IEEE-754 double-precision floating-point number
Float64,
/// Arbitrary-precision integer string
#[serde(rename = "biginteger")]
BigInteger,
/// Arbitrary-precision decimal string
#[serde(rename = "bigdecimal")]
BigDecimal,
/// UUID string (8-4-4-4-12)
#[serde(rename = "uuid")]
UUID,
/// ISO 8601 date
Date,
/// ISO 8601 timestamp
Timestamp,
/// ISO 8601 timestamp-with-timezone
#[serde(rename = "timestamptz")]
TimestampTZ,
/// GeoJSON, per RFC 7946
Geography,
/// GeoJSON Geometry object, per RFC 7946
Geometry,
/// Base64-encoded bytes
Bytes,
/// Arbitrary JSON
#[serde(rename = "json")]
JSON,
/// One of the specified string values
Enum { one_of: Vec<String> },
}
ObjectType
/// The definition of an object type
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Object Type")]
pub struct ObjectType {
/// Description of this type
pub description: Option<String>,
/// Fields defined on this object type
pub fields: BTreeMap<FieldName, ObjectField>,
}
ObjectField
/// The definition of an object field
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Object Field")]
pub struct ObjectField {
/// Description of this field
pub description: Option<String>,
/// The type of this field
#[serde(rename = "type")]
pub r#type: Type,
/// The arguments available to the field - Matches implementation from CollectionInfo
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
}
Type
/// Types track the valid representations of values as JSON
#[derive(
Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Type")]
pub enum Type {
/// A named type
Named {
/// The name can refer to a scalar or object type
name: TypeName,
},
/// A nullable type
Nullable {
/// The type of the non-null inhabitants of this type
underlying_type: Box<Type>,
},
/// An array type
Array {
/// The type of the elements of the array
element_type: Box<Type>,
},
/// A predicate type for a given object type
Predicate {
/// The object type name
object_type_name: ObjectTypeName,
},
}
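As an example of the serialized form, a nullable array of a named scalar type (assuming the schema declares a scalar type called String) is written:

```json
{
  "type": "nullable",
  "underlying_type": {
    "type": "array",
    "element_type": { "type": "named", "name": "String" }
  }
}
```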
CollectionInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Collection Info")]
pub struct CollectionInfo {
/// The name of the collection
///
/// Note: these names are abstract - there is no requirement that this name correspond to
/// the name of an actual collection in the database.
pub name: CollectionName,
/// Description of the collection
pub description: Option<String>,
/// Any arguments that this collection requires
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
/// The name of the collection's object type
#[serde(rename = "type")]
pub collection_type: ObjectTypeName,
/// Any uniqueness constraints enforced on this collection
pub uniqueness_constraints: BTreeMap<String, UniquenessConstraint>,
/// Any foreign key constraints enforced on this collection
pub foreign_keys: BTreeMap<String, ForeignKeyConstraint>,
}
UniquenessConstraint
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Uniqueness Constraint")]
pub struct UniquenessConstraint {
/// A list of columns which this constraint requires to be unique
pub unique_columns: Vec<FieldName>,
}
ProcedureInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Procedure Info")]
pub struct ProcedureInfo {
/// The name of the procedure
pub name: ProcedureName,
/// Description of the procedure
pub description: Option<String>,
/// Any arguments that this procedure requires
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
/// The name of the result type
pub result_type: Type,
}
QueryRequest
/// This is the request body of the query POST endpoint
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Request")]
pub struct QueryRequest {
/// The name of a collection
pub collection: CollectionName,
/// The query syntax tree
pub query: Query,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, Argument>,
/// Any relationships between collections involved in the query request
pub collection_relationships: BTreeMap<RelationshipName, Relationship>,
/// One set of named variables for each rowset to fetch. Each variable set
/// should be substituted in turn, and a fresh set of rows returned.
pub variables: Option<Vec<BTreeMap<VariableName, serde_json::Value>>>,
}
Query
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query")]
pub struct Query {
/// Aggregate fields of the query
pub aggregates: Option<IndexMap<FieldName, Aggregate>>,
/// Fields of the query
pub fields: Option<IndexMap<FieldName, Field>>,
/// Optionally limit to N results
pub limit: Option<u32>,
/// Optionally offset from the Nth result
pub offset: Option<u32>,
pub order_by: Option<OrderBy>,
pub predicate: Option<Expression>,
}
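As an example of the serialized form, the following query requests the first ten rows of a hypothetical title column, ordered ascending; the field and column names are illustrative:

```json
{
  "fields": {
    "title": { "type": "column", "column": "title" }
  },
  "limit": 10,
  "offset": 0,
  "order_by": {
    "elements": [
      {
        "order_direction": "asc",
        "target": { "type": "column", "name": "title", "path": [] }
      }
    ]
  }
}
```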
NestedObject
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "NestedObject")]
pub struct NestedObject {
pub fields: IndexMap<FieldName, Field>,
}
OrderBy
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By")]
pub struct OrderBy {
/// The elements to order by, in priority order
pub elements: Vec<OrderByElement>,
}
OrderByElement
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By Element")]
pub struct OrderByElement {
pub order_direction: OrderDirection,
pub target: OrderByTarget,
}
OrderByTarget
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By Target")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum OrderByTarget {
Column {
/// The name of the column
name: FieldName,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Any relationships to traverse to reach this column
path: Vec<PathElement>,
},
SingleColumnAggregate {
/// The column to apply the aggregation function to
column: FieldName,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Single column aggregate function name.
function: AggregateFunctionName,
/// Non-empty collection of relationships to traverse
path: Vec<PathElement>,
},
StarCountAggregate {
/// Non-empty collection of relationships to traverse
path: Vec<PathElement>,
},
}
OrderDirection
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Order Direction")]
#[serde(rename_all = "snake_case")]
pub enum OrderDirection {
Asc,
Desc,
}
UnaryComparisonOperator
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Unary Comparison Operator")]
#[serde(rename_all = "snake_case")]
pub enum UnaryComparisonOperator {
IsNull,
}
PathElement
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "Path Element")]
pub struct PathElement {
/// The name of the relationship to follow
pub relationship: RelationshipName,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, RelationshipArgument>,
/// A predicate expression to apply to the target collection
pub predicate: Option<Box<Expression>>,
}
QueryResponse
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Response")]
/// Query responses may return multiple RowSets when using queries with variables.
/// Otherwise, there should always be exactly one RowSet.
pub struct QueryResponse(pub Vec<RowSet>);
RowSet
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Row Set")]
pub struct RowSet {
/// The results of the aggregates returned by the query
pub aggregates: Option<IndexMap<FieldName, serde_json::Value>>,
/// The rows returned by the query, corresponding to the query's fields
pub rows: Option<Vec<IndexMap<FieldName, RowFieldValue>>>,
}
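As an example, a response to a query without variables, whose fields selected a single title column, consists of exactly one row set; the row contents here are illustrative:

```json
[
  {
    "rows": [
      { "title": "The Next 700 Programming Languages" },
      { "title": "Why Functional Programming Matters" }
    ]
  }
]
```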
RowFieldValue
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Row Field Value")]
pub struct RowFieldValue(pub serde_json::Value);
impl RowFieldValue {
/// In the case where this field value was obtained using a
/// [`Field::Relationship`], the returned JSON will be a [`RowSet`].
/// We cannot express [`RowFieldValue`] as an enum, because
/// [`RowFieldValue`] overlaps with values which have object types.
pub fn as_rowset(self) -> Option<RowSet> {
serde_json::from_value(self.0).ok()
}
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Explain Response")]
pub struct ExplainResponse {
/// A list of human-readable key-value pairs describing
/// a query execution plan. For example, a connector for
/// a relational database might return the generated SQL
/// and/or the output of the `EXPLAIN` command. An API-based
/// connector might encode a list of statically-known API
/// calls which would be made.
pub details: BTreeMap<String, String>,
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Request")]
pub struct MutationRequest {
/// The mutation operations to perform
pub operations: Vec<MutationOperation>,
/// The relationships between collections involved in the entire mutation request
pub collection_relationships: BTreeMap<RelationshipName, Relationship>,
}
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Operation")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum MutationOperation {
Procedure {
/// The name of a procedure
name: ProcedureName,
/// Any named procedure arguments
arguments: BTreeMap<ArgumentName, serde_json::Value>,
/// The fields to return from the result, or null to return everything
fields: Option<NestedField>,
},
}
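A minimal mutation request invoking a single hypothetical procedure named `upsert_article` (the procedure and argument names are invented for illustration) could look like this:

```json
{
  "operations": [
    {
      "type": "procedure",
      "name": "upsert_article",
      "arguments": {
        "title": "Hello, world"
      }
    }
  ],
  "collection_relationships": {}
}
```

Omitting `fields` asks the connector to return the procedure's result in full.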
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship")]
pub struct Relationship {
/// A mapping from columns on the source collection to columns on the target collection
pub column_mapping: BTreeMap<FieldName, FieldName>,
pub relationship_type: RelationshipType,
/// The name of a collection
pub target_collection: CollectionName,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, RelationshipArgument>,
}
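For instance, an object relationship from a hypothetical `articles` collection to its author (collection and column names invented) would serialize as:

```json
{
  "column_mapping": {
    "author_id": "id"
  },
  "relationship_type": "object",
  "target_collection": "authors",
  "arguments": {}
}
```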
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship Argument")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum RelationshipArgument {
/// The argument is provided by reference to a variable
Variable {
name: VariableName,
},
/// The argument is provided as a literal value
Literal {
value: serde_json::Value,
},
/// The argument is provided based on a column of the source collection
Column {
name: FieldName,
},
}
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Relationship Type")]
#[serde(rename_all = "snake_case")]
pub enum RelationshipType {
Object,
Array,
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Response")]
pub struct MutationResponse {
/// The results of each mutation operation, in the same order as they were received
pub operation_results: Vec<MutationOperationResults>,
}
#[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Mutation Operation Results")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum MutationOperationResults {
Procedure { result: serde_json::Value },
}
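A mutation response serializes one result per operation, in request order. For a request containing a single procedure operation, it might look like this (the `result` payload is invented; its shape depends entirely on the procedure's declared result type):

```json
{
  "operation_results": [
    {
      "type": "procedure",
      "result": { "affected_rows": 1 }
    }
  ]
}
```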
macro_rules! newtype {
($name: ident over $oldtype: ident) => {
#[derive(
Clone,
Debug,
Default,
Hash,
Eq,
Ord,
PartialEq,
PartialOrd,
Serialize,
Deserialize,
RefCast,
)]
#[repr(transparent)]
pub struct $name($oldtype);
impl JsonSchema for $name {
fn schema_name() -> String {
String::schema_name()
}
fn json_schema(gen: &mut schemars::gen::SchemaGenerator) -> schemars::schema::Schema {
String::json_schema(gen)
}
fn is_referenceable() -> bool {
String::is_referenceable()
}
fn schema_id() -> std::borrow::Cow<'static, str> {
String::schema_id()
}
}
impl AsRef<$oldtype> for $name {
fn as_ref(&self) -> &$oldtype {
&self.0
}
}
impl From<&str> for $name {
fn from(value: &str) -> Self {
$name(value.into())
}
}
impl From<$oldtype> for $name {
fn from(value: $oldtype) -> Self {
$name(value)
}
}
impl From<$name> for $oldtype {
fn from(value: $name) -> Self {
value.0
}
}
impl std::fmt::Display for $name {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.0.fmt(f)
}
}
impl Borrow<str> for $name {
fn borrow(&self) -> &str {
self.0.as_str()
}
}
impl Borrow<$oldtype> for $name {
fn borrow(&self) -> &$oldtype {
&self.0
}
}
impl $name {
pub fn new(value: $oldtype) -> Self {
$name(value)
}
pub fn as_str(&self) -> &str {
self.0.as_str()
}
pub fn into_inner(self) -> $oldtype {
self.0
}
pub fn inner(&self) -> &$oldtype {
&self.0
}
}
};
($name: ident) => {
newtype! {$name over SmolStr}
impl From<String> for $name {
fn from(value: String) -> Self {
$name(value.into())
}
}
impl From<$name> for String {
fn from(value: $name) -> Self {
value.0.into()
}
}
};
}
newtype! {AggregateFunctionName}
newtype! {ArgumentName}
newtype! {CollectionName}
newtype! {ComparisonOperatorName}
newtype! {FieldName}
newtype! {FunctionName over CollectionName}
newtype! {ObjectTypeName over TypeName}
newtype! {ProcedureName}
newtype! {RelationshipName}
newtype! {ScalarTypeName over TypeName}
newtype! {TypeName}
newtype! {VariableName}
impl From<String> for FunctionName {
fn from(value: String) -> Self {
FunctionName(value.into())
}
}
impl From<FunctionName> for String {
fn from(value: FunctionName) -> Self {
value.0.into()
}
}
impl From<String> for ObjectTypeName {
fn from(value: String) -> Self {
ObjectTypeName(value.into())
}
}
impl From<ObjectTypeName> for String {
fn from(value: ObjectTypeName) -> Self {
value.0.into()
}
}
impl From<String> for ScalarTypeName {
fn from(value: String) -> Self {
ScalarTypeName(value.into())
}
}
impl From<ScalarTypeName> for String {
fn from(value: ScalarTypeName) -> Self {
value.0.into()
}
}
#[cfg(test)]
mod tests {
use std::io::Write;
use std::path::PathBuf;
use goldenfile::Mint;
use schemars::schema_for;
use super::*;
#[test]
fn test_json_schemas() {
let test_dir = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("tests");
let mut mint = Mint::new(test_dir);
test_json_schema(
&mut mint,
schema_for!(ErrorResponse),
"error_response.jsonschema",
);
test_json_schema(
&mut mint,
schema_for!(SchemaResponse),
"schema_response.jsonschema",
);
test_json_schema(
&mut mint,
schema_for!(CapabilitiesResponse),
"capabilities_response.jsonschema",
);
test_json_schema(
&mut mint,
schema_for!(QueryRequest),
"query_request.jsonschema",
);
test_json_schema(
&mut mint,
schema_for!(QueryResponse),
"query_response.jsonschema",
);
test_json_schema(
&mut mint,
schema_for!(ExplainResponse),
"explain_response.jsonschema",
);
test_json_schema(
&mut mint,
schema_for!(MutationRequest),
"mutation_request.jsonschema",
);
test_json_schema(
&mut mint,
schema_for!(MutationResponse),
"mutation_response.jsonschema",
);
}
fn test_json_schema(mint: &mut Mint, schema: schemars::schema::RootSchema, filename: &str) {
let expected_path = PathBuf::from_iter(["json_schema", filename]);
let mut expected = mint.new_goldenfile(expected_path).unwrap();
write!(
expected,
"{}",
serde_json::to_string_pretty(&schema).unwrap()
)
.unwrap();
}
}
NestedObject
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "NestedObject")]
pub struct NestedObject {
pub fields: IndexMap<FieldName, Field>,
}
ObjectField
/// The definition of an object field
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Object Field")]
pub struct ObjectField {
/// Description of this field
pub description: Option<String>,
/// The type of this field
#[serde(rename = "type")]
pub r#type: Type,
/// The arguments available to the field - matches the implementation from CollectionInfo
#[serde(skip_serializing_if = "BTreeMap::is_empty", default)]
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
}
ObjectType
/// The definition of an object type
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Object Type")]
pub struct ObjectType {
/// Description of this type
pub description: Option<String>,
/// Fields defined on this object type
pub fields: BTreeMap<FieldName, ObjectField>,
}
OrderBy
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By")]
pub struct OrderBy {
/// The elements to order by, in priority order
pub elements: Vec<OrderByElement>,
}
OrderByElement
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By Element")]
pub struct OrderByElement {
pub order_direction: OrderDirection,
pub target: OrderByTarget,
}
OrderByTarget
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Order By Target")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum OrderByTarget {
Column {
/// The name of the column
name: FieldName,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Any relationships to traverse to reach this column
path: Vec<PathElement>,
},
SingleColumnAggregate {
/// The column to apply the aggregation function to
column: FieldName,
/// Path to a nested field within an object column
field_path: Option<Vec<FieldName>>,
/// Single column aggregate function name.
function: AggregateFunctionName,
/// Non-empty collection of relationships to traverse
path: Vec<PathElement>,
},
StarCountAggregate {
/// Non-empty collection of relationships to traverse
path: Vec<PathElement>,
},
}
OrderDirection
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Order Direction")]
#[serde(rename_all = "snake_case")]
pub enum OrderDirection {
Asc,
Desc,
}
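Putting these pieces together, an `OrderBy` that sorts by a hypothetical `published_at` column descending, then by `title` ascending (column names invented), would serialize as:

```json
{
  "elements": [
    {
      "order_direction": "desc",
      "target": { "type": "column", "name": "published_at", "path": [] }
    },
    {
      "order_direction": "asc",
      "target": { "type": "column", "name": "title", "path": [] }
    }
  ]
}
```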
PathElement
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[serde(rename_all = "snake_case")]
#[schemars(title = "Path Element")]
pub struct PathElement {
/// The name of the relationship to follow
pub relationship: RelationshipName,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, RelationshipArgument>,
/// A predicate expression to apply to the target collection
pub predicate: Option<Box<Expression>>,
}
ProcedureInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Procedure Info")]
pub struct ProcedureInfo {
/// The name of the procedure
pub name: ProcedureName,
/// Description of the procedure
pub description: Option<String>,
/// Any arguments that this collection requires
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
/// The name of the result type
pub result_type: Type,
}
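As a sketch, a schema entry for a hypothetical `upsert_article` procedure taking an `article` object and returning the stored row (all names here are invented) could be:

```json
{
  "name": "upsert_article",
  "arguments": {
    "article": {
      "type": { "type": "named", "name": "article" }
    }
  },
  "result_type": { "type": "named", "name": "article" }
}
```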
Query
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query")]
pub struct Query {
/// Aggregate fields of the query
pub aggregates: Option<IndexMap<FieldName, Aggregate>>,
/// Fields of the query
pub fields: Option<IndexMap<FieldName, Field>>,
/// Optionally limit to N results
pub limit: Option<u32>,
/// Optionally offset from the Nth result
pub offset: Option<u32>,
pub order_by: Option<OrderBy>,
pub predicate: Option<Expression>,
}
QueryCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Capabilities")]
pub struct QueryCapabilities {
/// Does the connector support aggregate queries
pub aggregates: Option<LeafCapability>,
/// Does the connector support queries which use variables
pub variables: Option<LeafCapability>,
/// Does the connector support explaining queries
pub explain: Option<LeafCapability>,
/// Does the connector support nested fields
#[serde(default)]
pub nested_fields: NestedFieldCapabilities,
/// Does the connector support EXISTS predicates
#[serde(default)]
pub exists: ExistsCapabilities,
}
QueryRequest
/// This is the request body of the query POST endpoint
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Request")]
pub struct QueryRequest {
/// The name of a collection
pub collection: CollectionName,
/// The query syntax tree
pub query: Query,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, Argument>,
/// Any relationships between collections involved in the query request
pub collection_relationships: BTreeMap<RelationshipName, Relationship>,
/// One set of named variables for each rowset to fetch. Each variable set
/// should be substituted in turn, and a fresh set of rows returned.
pub variables: Option<Vec<BTreeMap<VariableName, serde_json::Value>>>,
}
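For example, a request fetching the `title` field of a hypothetical `articles` collection once per variable set (collection, field, and variable names invented) might be:

```json
{
  "collection": "articles",
  "query": {
    "fields": {
      "title": { "type": "column", "column": "title" }
    },
    "limit": 10
  },
  "arguments": {},
  "collection_relationships": {},
  "variables": [
    { "author_id": 1 },
    { "author_id": 2 }
  ]
}
```

The response would then contain one `RowSet` per entry in `variables`.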
QueryResponse
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Query Response")]
/// Query responses may return multiple RowSets when using queries with variables.
/// Otherwise, there should always be exactly one RowSet.
pub struct QueryResponse(pub Vec<RowSet>);
Relationship
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship")]
pub struct Relationship {
/// A mapping from columns on the source collection to columns on the target collection
pub column_mapping: BTreeMap<FieldName, FieldName>,
pub relationship_type: RelationshipType,
/// The name of a collection
pub target_collection: CollectionName,
/// Values to be provided to any collection arguments
pub arguments: BTreeMap<ArgumentName, RelationshipArgument>,
}
RelationshipArgument
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship Argument")]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum RelationshipArgument {
/// The argument is provided by reference to a variable
Variable {
name: VariableName,
},
/// The argument is provided as a literal value
Literal {
value: serde_json::Value,
},
/// The argument is provided based on a column of the source collection
Column {
name: FieldName,
},
}
RelationshipCapabilities
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Relationship Capabilities")]
pub struct RelationshipCapabilities {
/// Does the connector support comparisons that involve related collections (i.e. joins)?
pub relation_comparisons: Option<LeafCapability>,
/// Does the connector support ordering by an aggregated array relationship?
pub order_by_aggregate: Option<LeafCapability>,
}
RelationshipType
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Relationship Type")]
#[serde(rename_all = "snake_case")]
pub enum RelationshipType {
Object,
Array,
}
RowFieldValue
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Row Field Value")]
pub struct RowFieldValue(pub serde_json::Value);
impl RowFieldValue {
/// In the case where this field value was obtained using a
/// [`Field::Relationship`], the returned JSON will be a [`RowSet`].
/// We cannot express [`RowFieldValue`] as an enum, because
/// [`RowFieldValue`] overlaps with values which have object types.
pub fn as_rowset(self) -> Option<RowSet> {
serde_json::from_value(self.0).ok()
}
}
RowSet
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Row Set")]
pub struct RowSet {
/// The results of the aggregates returned by the query
pub aggregates: Option<IndexMap<FieldName, serde_json::Value>>,
/// The rows returned by the query, corresponding to the query's fields
pub rows: Option<Vec<IndexMap<FieldName, RowFieldValue>>>,
}
ScalarType
/// The definition of a scalar type, i.e. types that can be used as the types of columns.
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Scalar Type")]
pub struct ScalarType {
/// A description of valid values for this scalar type.
/// Defaults to `TypeRepresentation::JSON` if omitted
pub representation: Option<TypeRepresentation>,
/// A map from aggregate function names to their definitions. Result type names must be defined scalar types declared in ScalarTypesCapabilities.
pub aggregate_functions: BTreeMap<AggregateFunctionName, AggregateFunctionDefinition>,
/// A map from comparison operator names to their definitions. Argument type names must be defined scalar types declared in ScalarTypesCapabilities.
pub comparison_operators: BTreeMap<ComparisonOperatorName, ComparisonOperatorDefinition>,
}
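For instance, a 32-bit integer scalar type supporting a `max` aggregate function and an equality operator (the function and operator names follow common convention but are illustrative) might be declared as:

```json
{
  "representation": { "type": "int32" },
  "aggregate_functions": {
    "max": {
      "result_type": { "type": "named", "name": "Int" }
    }
  },
  "comparison_operators": {
    "eq": { "type": "equal" }
  }
}
```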
SchemaResponse
#[derive(Clone, Debug, Default, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Schema Response")]
pub struct SchemaResponse {
/// A list of scalar types which will be used as the types of collection columns
pub scalar_types: BTreeMap<ScalarTypeName, ScalarType>,
/// A list of object types which can be used as the types of arguments, or return types of procedures.
/// Names should not overlap with scalar type names.
pub object_types: BTreeMap<ObjectTypeName, ObjectType>,
/// Collections which are available for queries
pub collections: Vec<CollectionInfo>,
/// Functions (i.e. collections which return a single column and row)
pub functions: Vec<FunctionInfo>,
/// Procedures which are available for execution as part of mutations
pub procedures: Vec<ProcedureInfo>,
}
CollectionInfo
#[skip_serializing_none]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Collection Info")]
pub struct CollectionInfo {
/// The name of the collection
///
/// Note: these names are abstract - there is no requirement that this name correspond to
/// the name of an actual collection in the database.
pub name: CollectionName,
/// Description of the collection
pub description: Option<String>,
/// Any arguments that this collection requires
pub arguments: BTreeMap<ArgumentName, ArgumentInfo>,
/// The name of the collection's object type
#[serde(rename = "type")]
pub collection_type: ObjectTypeName,
/// Any uniqueness constraints enforced on this collection
pub uniqueness_constraints: BTreeMap<String, UniquenessConstraint>,
/// Any foreign key constraints enforced on this collection
pub foreign_keys: BTreeMap<String, ForeignKeyConstraint>,
}
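A collection entry for a hypothetical `articles` collection with a primary-key uniqueness constraint (all names invented) could look like:

```json
{
  "name": "articles",
  "arguments": {},
  "type": "article",
  "uniqueness_constraints": {
    "ArticleByID": { "unique_columns": ["id"] }
  },
  "foreign_keys": {}
}
```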
Type
/// Types track the valid representations of values as JSON
#[derive(
Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Type")]
pub enum Type {
/// A named type
Named {
/// The name can refer to a scalar or object type
name: TypeName,
},
/// A nullable type
Nullable {
/// The type of the non-null inhabitants of this type
underlying_type: Box<Type>,
},
/// An array type
Array {
/// The type of the elements of the array
element_type: Box<Type>,
},
/// A predicate type for a given object type
Predicate {
/// The object type name
object_type_name: ObjectTypeName,
},
}
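Because the constructors nest, compound types are written inside-out. A nullable array of strings, for example, serializes as:

```json
{
  "type": "nullable",
  "underlying_type": {
    "type": "array",
    "element_type": { "type": "named", "name": "String" }
  }
}
```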
TypeRepresentation
/// Representations of scalar types
#[derive(
Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[serde(tag = "type", rename_all = "snake_case")]
#[schemars(title = "Type Representation")]
pub enum TypeRepresentation {
/// JSON booleans
Boolean,
/// Any JSON string
String,
/// Any JSON number
#[deprecated(since = "0.1.2", note = "Use sized numeric types instead")]
Number,
/// Any JSON number, with no decimal part
#[deprecated(since = "0.1.2", note = "Use sized numeric types instead")]
Integer,
/// An 8-bit signed integer with a minimum value of -2^7 and a maximum value of 2^7 - 1
Int8,
/// A 16-bit signed integer with a minimum value of -2^15 and a maximum value of 2^15 - 1
Int16,
/// A 32-bit signed integer with a minimum value of -2^31 and a maximum value of 2^31 - 1
Int32,
/// A 64-bit signed integer with a minimum value of -2^63 and a maximum value of 2^63 - 1
Int64,
/// An IEEE-754 single-precision floating-point number
Float32,
/// An IEEE-754 double-precision floating-point number
Float64,
/// Arbitrary-precision integer string
#[serde(rename = "biginteger")]
BigInteger,
/// Arbitrary-precision decimal string
#[serde(rename = "bigdecimal")]
BigDecimal,
/// UUID string (8-4-4-4-12)
#[serde(rename = "uuid")]
UUID,
/// ISO 8601 date
Date,
/// ISO 8601 timestamp
Timestamp,
/// ISO 8601 timestamp-with-timezone
#[serde(rename = "timestamptz")]
TimestampTZ,
/// GeoJSON, per RFC 7946
Geography,
/// GeoJSON Geometry object, per RFC 7946
Geometry,
/// Base64-encoded bytes
Bytes,
/// Arbitrary JSON
#[serde(rename = "json")]
JSON,
/// One of the specified string values
Enum { one_of: Vec<String> },
}
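For example, a representation restricting a scalar to two string values serializes with the snake_case tag shown above:

```json
{
  "type": "enum",
  "one_of": ["draft", "published"]
}
```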
UnaryComparisonOperator
#[derive(
Clone, Copy, Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Serialize, Deserialize, JsonSchema,
)]
#[schemars(title = "Unary Comparison Operator")]
#[serde(rename_all = "snake_case")]
pub enum UnaryComparisonOperator {
IsNull,
}
UniquenessConstraint
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
#[schemars(title = "Uniqueness Constraint")]
pub struct UniquenessConstraint {
/// A list of columns which this constraint requires to be unique
pub unique_columns: Vec<FieldName>,
}
JSON Schema
CapabilitiesResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Capabilities Response",
"type": "object",
"required": [
"capabilities",
"version"
],
"properties": {
"version": {
"type": "string"
},
"capabilities": {
"$ref": "#/definitions/Capabilities"
}
},
"definitions": {
"Capabilities": {
"title": "Capabilities",
"description": "Describes the features of the specification which a data connector implements.",
"type": "object",
"required": [
"mutation",
"query"
],
"properties": {
"query": {
"$ref": "#/definitions/QueryCapabilities"
},
"mutation": {
"$ref": "#/definitions/MutationCapabilities"
},
"relationships": {
"anyOf": [
{
"$ref": "#/definitions/RelationshipCapabilities"
},
{
"type": "null"
}
]
}
}
},
"QueryCapabilities": {
"title": "Query Capabilities",
"type": "object",
"properties": {
"aggregates": {
"description": "Does the connector support aggregate queries",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"variables": {
"description": "Does the connector support queries which use variables",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"explain": {
"description": "Does the connector support explaining queries",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"nested_fields": {
"description": "Does the connector support nested fields",
"default": {},
"allOf": [
{
"$ref": "#/definitions/NestedFieldCapabilities"
}
]
},
"exists": {
"description": "Does the connector support EXISTS predicates",
"default": {},
"allOf": [
{
"$ref": "#/definitions/ExistsCapabilities"
}
]
}
}
},
"LeafCapability": {
"description": "A unit value to indicate a particular leaf capability is supported. This is an empty struct to allow for future sub-capabilities.",
"type": "object"
},
"NestedFieldCapabilities": {
"title": "Nested Field Capabilities",
"type": "object",
"properties": {
"filter_by": {
"description": "Does the connector support filtering by values of nested fields",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"order_by": {
"description": "Does the connector support ordering by values of nested fields",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"aggregates": {
"description": "Does the connector support aggregating values within nested fields",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"ExistsCapabilities": {
"title": "Exists Capabilities",
"type": "object",
"properties": {
"nested_collections": {
"description": "Does the connector support ExistsInCollection::NestedCollection",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"MutationCapabilities": {
"title": "Mutation Capabilities",
"type": "object",
"properties": {
"transactional": {
"description": "Does the connector support executing multiple mutations in a transaction.",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"explain": {
"description": "Does the connector support explaining mutations",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
},
"RelationshipCapabilities": {
"title": "Relationship Capabilities",
"type": "object",
"properties": {
"relation_comparisons": {
"description": "Does the connector support comparisons that involve related collections (i.e. joins)?",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
},
"order_by_aggregate": {
"description": "Does the connector support ordering by an aggregated array relationship?",
"anyOf": [
{
"$ref": "#/definitions/LeafCapability"
},
{
"type": "null"
}
]
}
}
}
}
}
ErrorResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Error Response",
"type": "object",
"required": [
"details",
"message"
],
"properties": {
"message": {
"description": "A human-readable summary of the error",
"type": "string"
},
"details": {
"description": "Any additional structured information about the error"
}
}
}
ExplainResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Explain Response",
"type": "object",
"required": [
"details"
],
"properties": {
"details": {
"description": "A list of human-readable key-value pairs describing a query execution plan. For example, a connector for a relational database might return the generated SQL and/or the output of the `EXPLAIN` command. An API-based connector might encode a list of statically-known API calls which would be made.",
"type": "object",
"additionalProperties": {
"type": "string"
}
}
}
}
MutationRequest
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Mutation Request",
"type": "object",
"required": [
"collection_relationships",
"operations"
],
"properties": {
"operations": {
"description": "The mutation operations to perform",
"type": "array",
"items": {
"$ref": "#/definitions/MutationOperation"
}
},
"collection_relationships": {
"description": "The relationships between collections involved in the entire mutation request",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Relationship"
}
}
},
"definitions": {
"MutationOperation": {
"title": "Mutation Operation",
"oneOf": [
{
"type": "object",
"required": [
"arguments",
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"procedure"
]
},
"name": {
"description": "The name of a procedure",
"type": "string"
},
"arguments": {
"description": "Any named procedure arguments",
"type": "object",
"additionalProperties": true
},
"fields": {
"description": "The fields to return from the result, or null to return everything",
"anyOf": [
{
"$ref": "#/definitions/NestedField"
},
{
"type": "null"
}
]
}
}
}
]
},
"NestedField": {
"title": "NestedField",
"oneOf": [
{
"title": "NestedObject",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"object"
]
},
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Field"
}
}
}
},
{
"title": "NestedArray",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array"
]
},
"fields": {
"$ref": "#/definitions/NestedField"
}
}
}
]
},
"Field": {
"title": "Field",
"oneOf": [
{
"type": "object",
"required": [
"column",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"column": {
"type": "string"
},
"fields": {
"description": "When the type of the column is a (possibly-nullable) array or object, the caller can request a subset of the complete column data, by specifying fields to fetch here. If omitted, the column data will be fetched in full.",
"anyOf": [
{
"$ref": "#/definitions/NestedField"
},
{
"type": "null"
}
]
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
}
}
},
{
"type": "object",
"required": [
"arguments",
"query",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"relationship"
]
},
"query": {
"$ref": "#/definitions/Query"
},
"relationship": {
"description": "The name of the relationship to follow for the subquery",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
}
]
},
"Argument": {
"title": "Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
}
]
},
"Query": {
"title": "Query",
"type": "object",
"properties": {
"aggregates": {
"description": "Aggregate fields of the query",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Aggregate"
}
},
"fields": {
"description": "Fields of the query",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Field"
}
},
"limit": {
"description": "Optionally limit to N results",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"offset": {
"description": "Optionally offset from the Nth result",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"order_by": {
"anyOf": [
{
"$ref": "#/definitions/OrderBy"
},
{
"type": "null"
}
]
},
"predicate": {
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
},
"Aggregate": {
"title": "Aggregate",
"oneOf": [
{
"type": "object",
"required": [
"column",
"distinct",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column_count"
]
},
"column": {
"description": "The column to apply the count aggregate function to",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"distinct": {
"description": "Whether or not only distinct items should be counted",
"type": "boolean"
}
}
},
{
"type": "object",
"required": [
"column",
"function",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"single_column"
]
},
"column": {
"description": "The column to apply the aggregation function to",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"function": {
"description": "Single column aggregate function name.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"star_count"
]
}
}
}
]
},
"OrderBy": {
"title": "Order By",
"type": "object",
"required": [
"elements"
],
"properties": {
"elements": {
"description": "The elements to order by, in priority order",
"type": "array",
"items": {
"$ref": "#/definitions/OrderByElement"
}
}
}
},
"OrderByElement": {
"title": "Order By Element",
"type": "object",
"required": [
"order_direction",
"target"
],
"properties": {
"order_direction": {
"$ref": "#/definitions/OrderDirection"
},
"target": {
"$ref": "#/definitions/OrderByTarget"
}
}
},
"OrderDirection": {
"title": "Order Direction",
"type": "string",
"enum": [
"asc",
"desc"
]
},
"OrderByTarget": {
"title": "Order By Target",
"oneOf": [
{
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"path": {
"description": "Any relationships to traverse to reach this column",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
}
}
},
{
"type": "object",
"required": [
"column",
"function",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"single_column_aggregate"
]
},
"column": {
"description": "The column to apply the aggregation function to",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"function": {
"description": "Single column aggregate function name.",
"type": "string"
},
"path": {
"description": "Non-empty collection of relationships to traverse",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
}
}
},
{
"type": "object",
"required": [
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"star_count_aggregate"
]
},
"path": {
"description": "Non-empty collection of relationships to traverse",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
}
}
}
]
},
"PathElement": {
"title": "Path Element",
"type": "object",
"required": [
"arguments",
"relationship"
],
"properties": {
"relationship": {
"description": "The name of the relationship to follow",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
},
"predicate": {
"description": "A predicate expression to apply to the target collection",
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
},
"RelationshipArgument": {
"title": "Relationship Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"type": "string"
}
}
}
]
},
"Expression": {
"title": "Expression",
"oneOf": [
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"and"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"or"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expression",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"not"
]
},
"expression": {
"$ref": "#/definitions/Expression"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"$ref": "#/definitions/UnaryComparisonOperator"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"binary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"type": "string"
},
"value": {
"$ref": "#/definitions/ComparisonValue"
}
}
},
{
"type": "object",
"required": [
"in_collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"exists"
]
},
"in_collection": {
"$ref": "#/definitions/ExistsInCollection"
},
"predicate": {
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
}
]
},
"ComparisonTarget": {
"title": "Comparison Target",
"oneOf": [
{
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"path": {
"description": "Any relationships to traverse to reach this column",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
}
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"root_collection_column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
}
}
]
},
"UnaryComparisonOperator": {
"title": "Unary Comparison Operator",
"type": "string",
"enum": [
"is_null"
]
},
"ComparisonValue": {
"title": "Comparison Value",
"oneOf": [
{
"type": "object",
"required": [
"column",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
}
}
},
{
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"scalar"
]
},
"value": true
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
}
]
},
"ExistsInCollection": {
"title": "Exists In Collection",
"oneOf": [
{
"type": "object",
"required": [
"arguments",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"related"
]
},
"relationship": {
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"type": "object",
"required": [
"arguments",
"collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unrelated"
]
},
"collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"type": "object",
"required": [
"column_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nested_collection"
]
},
"column_name": {
"type": "string"
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
}
]
},
"Relationship": {
"title": "Relationship",
"type": "object",
"required": [
"arguments",
"column_mapping",
"relationship_type",
"target_collection"
],
"properties": {
"column_mapping": {
"description": "A mapping from columns on the source collection to columns on the target collection",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"relationship_type": {
"$ref": "#/definitions/RelationshipType"
},
"target_collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
"RelationshipType": {
"title": "Relationship Type",
"type": "string",
"enum": [
"object",
"array"
]
}
}
}
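As an illustration of the `Expression` and `ComparisonTarget` definitions above, here is a hypothetical predicate which matches rows whose `name` column equals a scalar value and whose `deleted_at` column is null. The column names and the `eq` operator name are assumptions chosen for the example; actual operator names come from the connector's schema response:

```json
{
  "type": "and",
  "expressions": [
    {
      "type": "binary_comparison_operator",
      "column": { "type": "column", "name": "name", "path": [] },
      "operator": "eq",
      "value": { "type": "scalar", "value": "Alice" }
    },
    {
      "type": "unary_comparison_operator",
      "column": { "type": "column", "name": "deleted_at", "path": [] },
      "operator": "is_null"
    }
  ]
}
```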
MutationResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Mutation Response",
"type": "object",
"required": [
"operation_results"
],
"properties": {
"operation_results": {
"description": "The results of each mutation operation, in the same order as they were received",
"type": "array",
"items": {
"$ref": "#/definitions/MutationOperationResults"
}
}
},
"definitions": {
"MutationOperationResults": {
"title": "Mutation Operation Results",
"oneOf": [
{
"type": "object",
"required": [
"result",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"procedure"
]
},
"result": true
}
}
]
}
}
}
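A minimal example of a conforming `MutationResponse` might look as follows. The shape of each `result` value is connector-specific (the schema permits any JSON value), so the `affected_rows` payload here is purely illustrative:

```json
{
  "operation_results": [
    {
      "type": "procedure",
      "result": { "affected_rows": 1 }
    }
  ]
}
```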
QueryRequest
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Query Request",
"description": "This is the request body of the query POST endpoint",
"type": "object",
"required": [
"arguments",
"collection",
"collection_relationships",
"query"
],
"properties": {
"collection": {
"description": "The name of a collection",
"type": "string"
},
"query": {
"description": "The query syntax tree",
"allOf": [
{
"$ref": "#/definitions/Query"
}
]
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"collection_relationships": {
"description": "Any relationships between collections involved in the query request",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Relationship"
}
},
"variables": {
"description": "One set of named variables for each rowset to fetch. Each variable set should be substituted in turn, and a fresh set of rows returned.",
"type": [
"array",
"null"
],
"items": {
"type": "object",
"additionalProperties": true
}
}
},
"definitions": {
"Query": {
"title": "Query",
"type": "object",
"properties": {
"aggregates": {
"description": "Aggregate fields of the query",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Aggregate"
}
},
"fields": {
"description": "Fields of the query",
"type": [
"object",
"null"
],
"additionalProperties": {
"$ref": "#/definitions/Field"
}
},
"limit": {
"description": "Optionally limit to N results",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"offset": {
"description": "Optionally offset from the Nth result",
"type": [
"integer",
"null"
],
"format": "uint32",
"minimum": 0.0
},
"order_by": {
"anyOf": [
{
"$ref": "#/definitions/OrderBy"
},
{
"type": "null"
}
]
},
"predicate": {
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
},
"Aggregate": {
"title": "Aggregate",
"oneOf": [
{
"type": "object",
"required": [
"column",
"distinct",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column_count"
]
},
"column": {
"description": "The column to apply the count aggregate function to",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"distinct": {
"description": "Whether or not only distinct items should be counted",
"type": "boolean"
}
}
},
{
"type": "object",
"required": [
"column",
"function",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"single_column"
]
},
"column": {
"description": "The column to apply the aggregation function to",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"function": {
"description": "Single column aggregate function name.",
"type": "string"
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"star_count"
]
}
}
}
]
},
"Field": {
"title": "Field",
"oneOf": [
{
"type": "object",
"required": [
"column",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"column": {
"type": "string"
},
"fields": {
"description": "When the type of the column is a (possibly-nullable) array or object, the caller can request a subset of the complete column data, by specifying fields to fetch here. If omitted, the column data will be fetched in full.",
"anyOf": [
{
"$ref": "#/definitions/NestedField"
},
{
"type": "null"
}
]
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
}
}
},
{
"type": "object",
"required": [
"arguments",
"query",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"relationship"
]
},
"query": {
"$ref": "#/definitions/Query"
},
"relationship": {
"description": "The name of the relationship to follow for the subquery",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
}
]
},
"NestedField": {
"title": "NestedField",
"oneOf": [
{
"title": "NestedObject",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"object"
]
},
"fields": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Field"
}
}
}
},
{
"title": "NestedArray",
"type": "object",
"required": [
"fields",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array"
]
},
"fields": {
"$ref": "#/definitions/NestedField"
}
}
}
]
},
"Argument": {
"title": "Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
}
]
},
"RelationshipArgument": {
"title": "Relationship Argument",
"oneOf": [
{
"description": "The argument is provided by reference to a variable",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
},
{
"description": "The argument is provided as a literal value",
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"literal"
]
},
"value": true
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"type": "string"
}
}
}
]
},
"OrderBy": {
"title": "Order By",
"type": "object",
"required": [
"elements"
],
"properties": {
"elements": {
"description": "The elements to order by, in priority order",
"type": "array",
"items": {
"$ref": "#/definitions/OrderByElement"
}
}
}
},
"OrderByElement": {
"title": "Order By Element",
"type": "object",
"required": [
"order_direction",
"target"
],
"properties": {
"order_direction": {
"$ref": "#/definitions/OrderDirection"
},
"target": {
"$ref": "#/definitions/OrderByTarget"
}
}
},
"OrderDirection": {
"title": "Order Direction",
"type": "string",
"enum": [
"asc",
"desc"
]
},
"OrderByTarget": {
"title": "Order By Target",
"oneOf": [
{
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"path": {
"description": "Any relationships to traverse to reach this column",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
}
}
},
{
"type": "object",
"required": [
"column",
"function",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"single_column_aggregate"
]
},
"column": {
"description": "The column to apply the aggregation function to",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"function": {
"description": "Single column aggregate function name.",
"type": "string"
},
"path": {
"description": "Non-empty collection of relationships to traverse",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
}
}
},
{
"type": "object",
"required": [
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"star_count_aggregate"
]
},
"path": {
"description": "Non-empty collection of relationships to traverse",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
}
}
}
]
},
"PathElement": {
"title": "Path Element",
"type": "object",
"required": [
"arguments",
"relationship"
],
"properties": {
"relationship": {
"description": "The name of the relationship to follow",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
},
"predicate": {
"description": "A predicate expression to apply to the target collection",
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
},
"Expression": {
"title": "Expression",
"oneOf": [
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"and"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expressions",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"or"
]
},
"expressions": {
"type": "array",
"items": {
"$ref": "#/definitions/Expression"
}
}
}
},
{
"type": "object",
"required": [
"expression",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"not"
]
},
"expression": {
"$ref": "#/definitions/Expression"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"$ref": "#/definitions/UnaryComparisonOperator"
}
}
},
{
"type": "object",
"required": [
"column",
"operator",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"binary_comparison_operator"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
},
"operator": {
"type": "string"
},
"value": {
"$ref": "#/definitions/ComparisonValue"
}
}
},
{
"type": "object",
"required": [
"in_collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"exists"
]
},
"in_collection": {
"$ref": "#/definitions/ExistsInCollection"
},
"predicate": {
"anyOf": [
{
"$ref": "#/definitions/Expression"
},
{
"type": "null"
}
]
}
}
}
]
},
"ComparisonTarget": {
"title": "Comparison Target",
"oneOf": [
{
"type": "object",
"required": [
"name",
"path",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
},
"path": {
"description": "Any relationships to traverse to reach this column",
"type": "array",
"items": {
"$ref": "#/definitions/PathElement"
}
}
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"root_collection_column"
]
},
"name": {
"description": "The name of the column",
"type": "string"
},
"field_path": {
"description": "Path to a nested field within an object column",
"type": [
"array",
"null"
],
"items": {
"type": "string"
}
}
}
}
]
},
"UnaryComparisonOperator": {
"title": "Unary Comparison Operator",
"type": "string",
"enum": [
"is_null"
]
},
"ComparisonValue": {
"title": "Comparison Value",
"oneOf": [
{
"type": "object",
"required": [
"column",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"column"
]
},
"column": {
"$ref": "#/definitions/ComparisonTarget"
}
}
},
{
"type": "object",
"required": [
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"scalar"
]
},
"value": true
}
},
{
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"variable"
]
},
"name": {
"type": "string"
}
}
}
]
},
"ExistsInCollection": {
"title": "Exists In Collection",
"oneOf": [
{
"type": "object",
"required": [
"arguments",
"relationship",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"related"
]
},
"relationship": {
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"type": "object",
"required": [
"arguments",
"collection",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"unrelated"
]
},
"collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
{
"type": "object",
"required": [
"column_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nested_collection"
]
},
"column_name": {
"type": "string"
},
"arguments": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
}
]
},
"Relationship": {
"title": "Relationship",
"type": "object",
"required": [
"arguments",
"column_mapping",
"relationship_type",
"target_collection"
],
"properties": {
"column_mapping": {
"description": "A mapping from columns on the source collection to columns on the target collection",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"relationship_type": {
"$ref": "#/definitions/RelationshipType"
},
"target_collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
"RelationshipType": {
"title": "Relationship Type",
"type": "string",
"enum": [
"object",
"array"
]
}
}
}
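Putting the pieces together, here is a hypothetical `QueryRequest` which selects two columns from an `articles` collection, ordered by title, limited to ten rows. The collection and column names are assumptions for the sake of example; note that `arguments` and `collection_relationships` are required even when empty:

```json
{
  "collection": "articles",
  "query": {
    "fields": {
      "id": { "type": "column", "column": "id" },
      "title": { "type": "column", "column": "title" }
    },
    "limit": 10,
    "order_by": {
      "elements": [
        {
          "order_direction": "asc",
          "target": { "type": "column", "name": "title", "path": [] }
        }
      ]
    }
  },
  "arguments": {},
  "collection_relationships": {}
}
```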
QueryResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Query Response",
"description": "Query responses may return multiple RowSets when using queries with variables. Otherwise, there should always be exactly one RowSet",
"type": "array",
"items": {
"$ref": "#/definitions/RowSet"
},
"definitions": {
"RowSet": {
"title": "Row Set",
"type": "object",
"properties": {
"aggregates": {
"description": "The results of the aggregates returned by the query",
"type": [
"object",
"null"
],
"additionalProperties": true
},
"rows": {
"description": "The rows returned by the query, corresponding to the query's fields",
"type": [
"array",
"null"
],
"items": {
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RowFieldValue"
}
}
}
}
},
"RowFieldValue": {
"title": "Row Field Value"
}
}
}
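For a request without variables, a conforming response is a single-element array containing one `RowSet`. The row data below is illustrative, corresponding to a query which selected `id` and `title` fields:

```json
[
  {
    "rows": [
      { "id": 1, "title": "First article" },
      { "id": 2, "title": "Second article" }
    ]
  }
]
```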
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/Argument"
}
},
"field_path": {
"description": "Path to a nested collection via object columns",
"type": "array",
"items": {
"type": "string"
}
}
}
}
]
},
"Relationship": {
"title": "Relationship",
"type": "object",
"required": [
"arguments",
"column_mapping",
"relationship_type",
"target_collection"
],
"properties": {
"column_mapping": {
"description": "A mapping between columns on the source collection to columns on the target collection",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"relationship_type": {
"$ref": "#/definitions/RelationshipType"
},
"target_collection": {
"description": "The name of a collection",
"type": "string"
},
"arguments": {
"description": "Values to be provided to any collection arguments",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/RelationshipArgument"
}
}
}
},
"RelationshipType": {
"title": "Relationship Type",
"type": "string",
"enum": [
"object",
"array"
]
}
}
}
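For illustration, here is an instance that should validate against the `Expression` schema above: an `and` of a `binary_comparison_operator` and a `unary_comparison_operator`. The column names (`title`, `deleted_at`) and the operator name `eq` are hypothetical; actual operator names come from the connector's own `comparison_operators` declarations.

```json
{
  "type": "and",
  "expressions": [
    {
      "type": "binary_comparison_operator",
      "column": { "type": "column", "name": "title", "path": [] },
      "operator": "eq",
      "value": { "type": "scalar", "value": "The Green Mile" }
    },
    {
      "type": "unary_comparison_operator",
      "column": { "type": "column", "name": "deleted_at", "path": [] },
      "operator": "is_null"
    }
  ]
}
```

Note that `path` is required on a `column` comparison target even when no relationships are traversed, in which case it is the empty array.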
SchemaResponse
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Schema Response",
"type": "object",
"required": [
"collections",
"functions",
"object_types",
"procedures",
"scalar_types"
],
"properties": {
"scalar_types": {
"description": "A list of scalar types which will be used as the types of collection columns",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ScalarType"
}
},
"object_types": {
"description": "A list of object types which can be used as the types of arguments, or return types of procedures. Names should not overlap with scalar type names.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ObjectType"
}
},
"collections": {
"description": "Collections which are available for queries",
"type": "array",
"items": {
"$ref": "#/definitions/CollectionInfo"
}
},
"functions": {
"description": "Functions (i.e. collections which return a single column and row)",
"type": "array",
"items": {
"$ref": "#/definitions/FunctionInfo"
}
},
"procedures": {
"description": "Procedures which are available for execution as part of mutations",
"type": "array",
"items": {
"$ref": "#/definitions/ProcedureInfo"
}
}
},
"definitions": {
"ScalarType": {
"title": "Scalar Type",
"description": "The definition of a scalar type, i.e. types that can be used as the types of columns.",
"type": "object",
"required": [
"aggregate_functions",
"comparison_operators"
],
"properties": {
"representation": {
"description": "A description of valid values for this scalar type. Defaults to `TypeRepresentation::JSON` if omitted",
"anyOf": [
{
"$ref": "#/definitions/TypeRepresentation"
},
{
"type": "null"
}
]
},
"aggregate_functions": {
"description": "A map from aggregate function names to their definitions. Result type names must be defined scalar types declared in ScalarTypesCapabilities.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/AggregateFunctionDefinition"
}
},
"comparison_operators": {
"description": "A map from comparison operator names to their definitions. Argument type names must be defined scalar types declared in ScalarTypesCapabilities.",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ComparisonOperatorDefinition"
}
}
}
},
"TypeRepresentation": {
"title": "Type Representation",
"description": "Representations of scalar types",
"oneOf": [
{
"description": "JSON booleans",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"boolean"
]
}
}
},
{
"description": "Any JSON string",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"string"
]
}
}
},
{
"description": "Any JSON number",
"deprecated": true,
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"number"
]
}
}
},
{
"description": "Any JSON number, with no decimal part",
"deprecated": true,
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"integer"
]
}
}
},
{
"description": "A 8-bit signed integer with a minimum value of -2^7 and a maximum value of 2^7 - 1",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"int8"
]
}
}
},
{
"description": "A 16-bit signed integer with a minimum value of -2^15 and a maximum value of 2^15 - 1",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"int16"
]
}
}
},
{
"description": "A 32-bit signed integer with a minimum value of -2^31 and a maximum value of 2^31 - 1",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"int32"
]
}
}
},
{
"description": "A 64-bit signed integer with a minimum value of -2^63 and a maximum value of 2^63 - 1",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"int64"
]
}
}
},
{
"description": "An IEEE-754 single-precision floating-point number",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"float32"
]
}
}
},
{
"description": "An IEEE-754 double-precision floating-point number",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"float64"
]
}
}
},
{
"description": "Arbitrary-precision integer string",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"biginteger"
]
}
}
},
{
"description": "Arbitrary-precision decimal string",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"bigdecimal"
]
}
}
},
{
"description": "UUID string (8-4-4-4-12)",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"uuid"
]
}
}
},
{
"description": "ISO 8601 date",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"date"
]
}
}
},
{
"description": "ISO 8601 timestamp",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"timestamp"
]
}
}
},
{
"description": "ISO 8601 timestamp-with-timezone",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"timestamptz"
]
}
}
},
{
"description": "GeoJSON, per RFC 7946",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"geography"
]
}
}
},
{
"description": "GeoJSON Geometry object, per RFC 7946",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"geometry"
]
}
}
},
{
"description": "Base64-encoded bytes",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"bytes"
]
}
}
},
{
"description": "Arbitrary JSON",
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"json"
]
}
}
},
{
"description": "One of the specified string values",
"type": "object",
"required": [
"one_of",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"enum"
]
},
"one_of": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
]
},
"AggregateFunctionDefinition": {
"title": "Aggregate Function Definition",
"description": "The definition of an aggregation function on a scalar type",
"type": "object",
"required": [
"result_type"
],
"properties": {
"result_type": {
"description": "The scalar or object type of the result of this function",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
"Type": {
"title": "Type",
"description": "Types track the valid representations of values as JSON",
"oneOf": [
{
"description": "A named type",
"type": "object",
"required": [
"name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"named"
]
},
"name": {
"description": "The name can refer to a scalar or object type",
"type": "string"
}
}
},
{
"description": "A nullable type",
"type": "object",
"required": [
"type",
"underlying_type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"nullable"
]
},
"underlying_type": {
"description": "The type of the non-null inhabitants of this type",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
{
"description": "An array type",
"type": "object",
"required": [
"element_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"array"
]
},
"element_type": {
"description": "The type of the elements of the array",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
{
"description": "A predicate type for a given object type",
"type": "object",
"required": [
"object_type_name",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"predicate"
]
},
"object_type_name": {
"description": "The object type name",
"type": "string"
}
}
}
]
},
"ComparisonOperatorDefinition": {
"title": "Comparison Operator Definition",
"description": "The definition of a comparison operator on a scalar type",
"oneOf": [
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"equal"
]
}
}
},
{
"type": "object",
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"in"
]
}
}
},
{
"type": "object",
"required": [
"argument_type",
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"custom"
]
},
"argument_type": {
"description": "The type of the argument to this operator",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
}
]
},
"ObjectType": {
"title": "Object Type",
"description": "The definition of an object type",
"type": "object",
"required": [
"fields"
],
"properties": {
"description": {
"description": "Description of this type",
"type": [
"string",
"null"
]
},
"fields": {
"description": "Fields defined on this object type",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ObjectField"
}
}
}
},
"ObjectField": {
"title": "Object Field",
"description": "The definition of an object field",
"type": "object",
"required": [
"type"
],
"properties": {
"description": {
"description": "Description of this field",
"type": [
"string",
"null"
]
},
"type": {
"description": "The type of this field",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
},
"arguments": {
"description": "The arguments available to the field - Matches implementation from CollectionInfo",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ArgumentInfo"
}
}
}
},
"ArgumentInfo": {
"title": "Argument Info",
"type": "object",
"required": [
"type"
],
"properties": {
"description": {
"description": "Argument description",
"type": [
"string",
"null"
]
},
"type": {
"description": "The name of the type of this argument",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
"CollectionInfo": {
"title": "Collection Info",
"type": "object",
"required": [
"arguments",
"foreign_keys",
"name",
"type",
"uniqueness_constraints"
],
"properties": {
"name": {
"description": "The name of the collection\n\nNote: these names are abstract - there is no requirement that this name correspond to the name of an actual collection in the database.",
"type": "string"
},
"description": {
"description": "Description of the collection",
"type": [
"string",
"null"
]
},
"arguments": {
"description": "Any arguments that this collection requires",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ArgumentInfo"
}
},
"type": {
"description": "The name of the collection's object type",
"type": "string"
},
"uniqueness_constraints": {
"description": "Any uniqueness constraints enforced on this collection",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/UniquenessConstraint"
}
},
"foreign_keys": {
"description": "Any foreign key constraints enforced on this collection",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ForeignKeyConstraint"
}
}
}
},
"UniquenessConstraint": {
"title": "Uniqueness Constraint",
"type": "object",
"required": [
"unique_columns"
],
"properties": {
"unique_columns": {
"description": "A list of columns which this constraint requires to be unique",
"type": "array",
"items": {
"type": "string"
}
}
}
},
"ForeignKeyConstraint": {
"title": "Foreign Key Constraint",
"type": "object",
"required": [
"column_mapping",
"foreign_collection"
],
"properties": {
"column_mapping": {
"description": "The columns on which you want want to define the foreign key.",
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"foreign_collection": {
"description": "The name of a collection",
"type": "string"
}
}
},
"FunctionInfo": {
"title": "Function Info",
"type": "object",
"required": [
"arguments",
"name",
"result_type"
],
"properties": {
"name": {
"description": "The name of the function",
"type": "string"
},
"description": {
"description": "Description of the function",
"type": [
"string",
"null"
]
},
"arguments": {
"description": "Any arguments that this collection requires",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ArgumentInfo"
}
},
"result_type": {
"description": "The name of the function's result type",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
},
"ProcedureInfo": {
"title": "Procedure Info",
"type": "object",
"required": [
"arguments",
"name",
"result_type"
],
"properties": {
"name": {
"description": "The name of the procedure",
"type": "string"
},
"description": {
"description": "Column description",
"type": [
"string",
"null"
]
},
"arguments": {
"description": "Any arguments that this collection requires",
"type": "object",
"additionalProperties": {
"$ref": "#/definitions/ArgumentInfo"
}
},
"result_type": {
"description": "The name of the result type",
"allOf": [
{
"$ref": "#/definitions/Type"
}
]
}
}
}
}
}
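To make the `SchemaResponse` schema concrete, here is a minimal instance that should validate against it: one scalar type, one object type, and one collection, with the names (`String`, `article`, `articles`, `ArticleByID`) chosen purely for illustration. All five top-level properties are required, so `functions` and `procedures` must be present even when empty.

```json
{
  "scalar_types": {
    "String": {
      "representation": { "type": "string" },
      "aggregate_functions": {},
      "comparison_operators": {
        "eq": { "type": "equal" }
      }
    }
  },
  "object_types": {
    "article": {
      "description": "An article",
      "fields": {
        "id": { "type": { "type": "named", "name": "String" } },
        "title": { "type": { "type": "named", "name": "String" } }
      }
    }
  },
  "collections": [
    {
      "name": "articles",
      "arguments": {},
      "type": "article",
      "uniqueness_constraints": {
        "ArticleByID": { "unique_columns": ["id"] }
      },
      "foreign_keys": {}
    }
  ],
  "functions": [],
  "procedures": []
}
```

Note how the collection's `type` and the field types refer back, by name, to entries in `object_types` and `scalar_types` respectively; a connector's schema is self-describing in this way.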