EVENT Basics - Event Types

Zalando’s architecture centers around decoupled microservices, and in that context we favour asynchronous, event-driven approaches. The guidelines focus on how to design and publish events intended to be shared for others to consume.

Events are defined using an item called an Event Type. The Event Type allows events to have their structure declared with a schema by producers and understood by consumers. An Event Type declares standard information, such as a name, an owning application (and by implication, an owning team), a schema defining the event’s custom data, and a compatibility mode declaring how the schema will be evolved. Event Types also allow the declaration of validation and enrichment strategies for events, along with supplemental information such as how events can be partitioned in an event stream.

Event Types belong to a well known Event Category (such as a data change category), which provides extra information that is common to that kind of event.

Event Types can be published and made available as API resources for teams to use, typically in an Event Type Registry. Each event published can then be validated against the overall structure of its event type and the schema for its custom data.

The basic model described above was originally developed in the Nakadi project, which acts as a reference implementation (see also {nakadi-api}[Nakadi API (internal_link)]) of the event type registry, and as a validating publish/subscribe broker for event producers and consumers.

{MUST} define events compliant with overall API guidelines

Events must be consistent with other API data and the API Guidelines in general (as far as applicable).

Everything expressed in the [introduction] to these Guidelines is applicable to event data interchange between services. This is because our events, just like our APIs, represent a commitment to express what our systems do, and designing high-quality, useful events allows us to develop new and interesting products and services.

What distinguishes events from other kinds of data is the delivery style used: asynchronous publish-subscribe messaging. But there is no reason why they could not be made available using a REST API, for example via a search request or as a paginated feed, and it will be common to base events on the models created for the service’s REST API.

Existing guideline sections, such as [api-design-principles] and [compatibility], are applicable to events as well.

{MUST} treat events as part of the service interface

Events are part of a service’s interface to the outside world, equivalent in standing to a service’s REST API. Services publishing data for integration must treat their events as a first-class design concern, just as they would an API. For example, this means approaching events with the "API first" principle in mind, as described in the [introduction].

{MUST} make event schema available for review

Services publishing event data for use by others must make the event schema as well as the event type definition available for review.

{MUST} specify and register events as event types

In Zalando’s architecture, events are registered using a structure called an Event Type. The Event Type declares standard information as follows:

  • A well known event category, such as a general or data change category.

  • The name of the event type.

  • The definition of the event target audience.

  • An owning application, and by implication, an owning team.

  • A schema defining the event payload.

  • The compatibility mode for the type.

Event Types allow easier discovery of event information and ensure that information is well-structured, consistent, and can be validated. The core Event Type structure is shown below as an OpenAPI object definition:

EventType:
  description: |
    An event type defines the schema and its runtime properties. The required
    fields are the minimum set the creator of an event type is expected to
    supply.
  required:
    - name
    - category
    - owning_application
    - schema
  properties:
    name:
      description: |
        Name of this EventType. The name must follow the functional naming
        pattern `<functional-name>.<event-name>` to preserve global
        uniqueness and readability.
      type: string
      pattern: '[a-z][a-z0-9-]*\.[a-z][a-z0-9-]*(\.[Vv][0-9.]+)?'
      example: |
        transactions-order.order-cancelled
        customer-personal-data.email-changed.v2
    audience:
      type: string
      x-extensible-enum:
        - component-internal
        - business-unit-internal
        - company-internal
        - external-partner
        - external-public
      description: |
        Intended target audience of the event type, analogous to the audience
        definition for REST APIs in rule #219 --
        see https://opensource.zalando.com/restful-api-guidelines/#219
    owning_application:
      description: |
        Name of the application (e.g. as would be used in an infrastructure
        application or service registry) owning this `EventType`.
      type: string
      example: price-service
    category:
      description: Defines the category of this EventType.
      type: string
      x-extensible-enum:
        - data
        - general
    compatibility_mode:
      description: |
        The compatibility mode to evolve the schema.
      type: string
      x-extensible-enum:
        - compatible
        - forward
        - none
      default: forward
    schema:
      description: The most recent payload schema for this EventType.
      type: object
      properties:
        version:
          description: Values are based on semantic versioning (e.g. "1.2.1").
          type: string
          default: '1.0.0'
        created_at:
          description: Creation timestamp of the schema.
          type: string
          readOnly: true
          format: date-time
          example: '1996-12-19T16:39:57-08:00'
        type:
          description: |
             The schema language of the schema definition. Currently only
             json_schema (JSON Schema v04) syntax is defined, but in the
             future there could be others.
          type: string
          x-extensible-enum:
            - json_schema
        schema:
          description: |
              The schema as string in the syntax defined in the field type.
          type: string
      required:
        - type
        - schema
    ordering_key_fields:
      type: array
      description: |
        Indicates which field is used for application-level ordering of
        events. It is typically a single field, but multiple fields for a
        compound ordering key are also supported (first item is most
        significant).

        This is an informational-only event type attribute for the
        specification of application-level ordering. The Nakadi transport
        layer is not affected; it delivers events to consumers in the order
        they were published.

        The scope of the ordering is all events (of all partitions), unless
        it is restricted to data instance scope in combination with the
        `ordering_instance_ids` attribute below.

        This field can be modified at any moment, but event type owners are
        expected to notify consumers in advance about the change.

        *Background:* Event ordering is often created on the application
        level using ascending counters, and data providers/consumers do not
        need to rely on the event publication order. A typical example is
        data instance change events used to keep a data store replica in
        sync. Here you have an order defined per instance using data object
        change counters (aka row update version), and the order of event
        publication is not relevant, because consumers for data
        synchronization skip older instance versions when they reconstruct
        the data object replica state.

      items:
        type: string
        description: |
          Indicates a single ordering field. It is a simple path (dot
          separated) to the JSON leaf element of the whole event object,
          including the contained metadata and data (in case of a data
          change event) objects. It must point to a field of type string or
          number/integer (as for those the ordering is obvious), and must be
          present in the schema.
        example: "data.order_change_counter"
    ordering_instance_ids:
      type: array
      description: |
        Indicates which field represents the data instance identifier and the
        scope in which `ordering_key_fields` provides a strict order. It is
        typically a single field, but multiple fields for compound identifier
        keys are also supported.

        This is an informational-only event type attribute without specific
        Nakadi semantics for the specification of application-level ordering.
        It can only be used in combination with `ordering_key_fields`.

        This field can be modified at any moment, but event type owners are
        expected to notify consumers in advance about the change.
      items:
        type: string
        description: |
          Indicates a single key field. It is a simple path (dot separated)
          to the JSON leaf element of the whole event object, including the
          contained metadata and data (in case of a data change event)
          objects, and it must be present in the schema.
        example: "data.order_number"
    created_at:
      description: When this event type was created.
      type: string
      format: date-time
    updated_at:
      description: When this event type was last updated.
      type: string
      format: date-time

APIs such as registries supporting event types may extend the model, including the set of supported categories and schema formats. For example, the Nakadi API’s event category registration also allows the declaration of validation and enrichment strategies for events, along with supplemental information such as how events are partitioned in the stream (see {SHOULD} use the hash partition strategy for data change events).
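For illustration, a minimal event type registration conforming to the structure above might look as follows (the application name and payload fields are hypothetical; a concrete registry such as Nakadi may require additional fields):

name: transactions-order.order-cancelled
owning_application: order-service
audience: company-internal
category: general
compatibility_mode: compatible
schema:
  version: '1.0.0'
  type: json_schema
  schema: |
    {
      "properties": {
        "order_number": {"type": "string"},
        "reason": {"type": "string"}
      },
      "required": ["order_number"]
    }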

{MUST} follow naming convention for event type names

Event type names must (or should, see [223] for details and definition) conform to the functional naming convention, depending on the audience, as follows:

<event-type-name>       ::= <functional-event-name> | <application-event-name>

<functional-event-name> ::= <functional-name>.<event-name>[.<version>]

<event-name>            ::= [a-z][a-z0-9-]* -- free event name (functional name)

<version>               ::= [Vv][0-9.]* -- major version of non-compatible schemas

Hint: The following convention (e.g. used by legacy STUPS infrastructure) is deprecated and only allowed for internal event type names:

<application-event-name> ::= [<organization-id>.]<application-id>.<event-name>
<organization-id>  ::= [a-z][a-z0-9-]* -- organization identifier, e.g. team identifier
<application-id>   ::= [a-z][a-z0-9-]* -- application identifier

Note: consistent naming should be used whenever the same entity is exposed by a data change event and a RESTful API.

{MUST} indicate ownership of event types

Event definitions must have clear ownership - this can be indicated via the owning_application field of the EventType.

Typically there is one producer application, which owns the EventType and is responsible for its definition, akin to how RESTful API definitions are managed. However, the owner may also be a particular service from a set of multiple services that are producing the same kind of event.

{MUST} carefully define the compatibility mode

Event type owners must pay attention to the choice of compatibility mode. The mode provides a means to evolve the schema. The range of modes is designed to be flexible enough so that producers can evolve schemas while not inadvertently breaking existing consumers:

  • none: Any schema modification is accepted, even if it might break existing producers or consumers. When validating events, undefined properties are accepted unless declared in the schema.

  • forward: A schema S1 is forward compatible if the previously registered schema, S0, can read events defined by S1 - that is, consumers can read events tagged with the latest schema version using the previous version, as long as they follow the robustness principle described in the guideline’s [api-design-principles].

  • compatible: This means changes are fully compatible. A new schema, S1, is fully compatible when every event published since the first schema version will validate against the latest schema. In compatible mode, only the addition of new optional properties and definitions to an existing schema is allowed. Other changes are forbidden.

{MUST} ensure event schema conforms to OpenAPI schema object

To align the event schema specifications to API specifications, we use the Schema Object as defined by the OpenAPI Specification to define event schemas. This is particularly useful for events that represent data changes about resources also used in other APIs.

The OpenAPI Schema Object is an extended subset of JSON Schema Draft 4. For convenience, we highlight some important differences below. Please refer to the OpenAPI Schema Object specification for details.

As the OpenAPI Schema Object specification removes some JSON Schema keywords, the following properties must not be used in event schemas:

  • additionalItems

  • contains

  • patternProperties

  • dependencies

  • propertyNames

  • const

  • not

  • oneOf

On the other side, the Schema Object redefines some JSON Schema keywords:

  • additionalProperties: For event types that declare compatibility guarantees, there are recommended constraints around the use of this field. See {SHOULD} avoid additionalProperties in event type schemas for details.

Finally, the Schema Object extends JSON Schema with some keywords:

  • readOnly: events are logically immutable, so readOnly can be considered redundant, but harmless.

  • discriminator: to support polymorphism, as an alternative to oneOf.

  • ^x-: patterned objects in the form of vendor extensions can be used in event type schema, but it might be the case that general purpose validators do not understand them to enforce a validation check, and fall back to must-ignore processing. A future version of the guidelines may define well known vendor extensions for events.

{SHOULD} avoid additionalProperties in event type schemas

Event type schema should avoid using additionalProperties declarations, in order to support schema evolution.

Events are often intermediated by publish/subscribe systems and are commonly captured in logs or long-term storage to be read later. In particular, the schemas used by publishers and consumers can drift over time. As a result, compatibility and extensibility issues that happen less frequently with client-server style APIs become important and regular considerations for event design. The guidelines recommend the following to enable event schema evolution:

  • Publishers who intend to provide compatibility and allow their schemas to evolve safely over time must not declare an additionalProperties field with a value of true (i.e., a wildcard extension point). Instead, they must define new optional fields and update their schemas in advance of publishing those fields.

  • Consumers must ignore fields they cannot process and not raise errors. This can happen if they are processing events with an older copy of the event schema than the one containing the new definitions specified by the publishers.

The above constraint does not mean fields can never be added in future revisions of an event type schema - additive, compatible changes are allowed - only that the new schema for an event type must define a field before it is published within an event. By the same token, a consumer must ignore fields it does not know about from its copy of the schema, just as it would as an API client - that is, it cannot treat the absence of an additionalProperties field as though the event type schema was closed for extension.

Requiring event publishers to define their fields ahead of publishing avoids the problem of field redefinition. This is when a publisher defines a field to be of a different type than one that was already being emitted, or changes the type of an undefined field. Both of these are prevented by not using additionalProperties.
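As an illustrative sketch of this workflow (the field names are hypothetical): instead of relying on a wildcard extension point, the publisher adds the new optional field to the schema before emitting it in events:

# Schema version 1.0.0 -- no additionalProperties declaration
properties:
  order_number:
    type: string

# Schema version 1.1.0 -- the new optional field is defined before any event
# containing it is published
properties:
  order_number:
    type: string
  cancellation_reason:
    type: string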

See also rule [111] in the [compatibility] section for further guidelines on the use of additionalProperties.

{MUST} use semantic versioning of event type schemas

Event schemas must be versioned, analogous to [116] for REST API definitions. The compatibility mode interacts with the revision numbers in the schema version field, which follows semantic versioning (MAJOR.MINOR.PATCH):

  • Changing an event type with compatibility mode compatible or forward can lead to a PATCH or MINOR version revision. MAJOR breaking changes are not allowed.

  • Changing an event type with compatibility mode none can lead to PATCH, MINOR or MAJOR level changes.

The following examples illustrate these relations:

  • Changes to the event type’s title or description are considered PATCH level.

  • Adding new optional fields to an event type’s schema is considered a MINOR level change.

  • All other changes are considered MAJOR level, such as renaming or removing fields, or adding new required fields.
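For illustration, the relations might play out as follows for a hypothetical event type (the concrete changes are examples, not an exhaustive list):

schema:
  # 1.0.0 -> 1.0.1 (PATCH): improved a field description
  # 1.0.1 -> 1.1.0 (MINOR): added the new optional field `cancellation_reason`
  # 1.1.0 -> 2.0.0 (MAJOR): renamed `reason` to `cancellation_reason`;
  #                         only possible with compatibility mode `none`
  version: '1.1.0'
  type: json_schema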

EVENT Basics - Event Categories

An event category describes a generic class of event types. The guidelines define two such categories:

  • General Event: a general purpose category.

  • Data Change Event: a category to inform about data entity changes, used e.g. for data-replication-based data integration.

{MUST} ensure that events conform to an event category

A category describes a predefined structure (e.g. including event metadata as part of the event payload) that event publishers must conform to, along with standard information specific to the event category (e.g. the operation for data change events).

The general event category

The structure of the General Event Category is shown below as an OpenAPI Schema Object definition:

GeneralEvent:
  description: |
    A general kind of event. Event kinds based on this event define their
    custom schema payload as the top level of the document, with the
    "metadata" field being required and reserved for standard metadata. An
    instance of an event based on the event type thus conforms to both the
    EventMetadata definition and the custom schema definition.
    Hint: In earlier versions this category was called the Business Category.
  required:
    - metadata
  properties:
    metadata:
        $ref: '#/definitions/EventMetadata'

Event types based on the General Event Category define their custom schema payload at the top-level of the document, with the metadata field being reserved for standard information (the contents of metadata are described further down in this section).
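For illustration, an instance of an event based on a general event type might look like this (the event type name is taken from the earlier example; the payload fields are hypothetical):

{
  "metadata": {
    "eid": "105a76d8-db49-4144-ace7-e683e8f4ba46",
    "event_type": "transactions-order.order-cancelled",
    "occurred_at": "1996-12-19T16:39:57-08:00"
  },
  "order_number": "10410064419",
  "reason": "customer-request"
}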

Note:

  • The General Event was called a Business Event in earlier versions of the guidelines. Implementation experience has shown that the category’s structure gets used for other kinds of events, hence the name has been generalized to reflect how teams are using it.

  • The General Event is still useful and recommended for the purpose of defining events that drive a business process.

  • The Nakadi broker still refers to the General Category as the Business Category and uses the keyword business for event type registration. Other than that, the JSON structures are identical.

See {MUST} use general events to signal steps in business processes for more guidance on how to use the category.

The data change event category

The Data Change Event Category structure is shown below as an OpenAPI Schema Object:

DataChangeEvent:
  description: |
    Represents a change to an entity. The required fields are those
    expected to be sent by the producer, other fields may be added
    by intermediaries such as a publish/subscribe broker. An instance
    of an event based on the event type conforms to both the
    DataChangeEvent's definition and the custom schema definition.
  required:
    - metadata
    - data_op
    - data_type
    - data
  properties:
    metadata:
      description: The metadata for this event.
      $ref: '#/definitions/EventMetadata'
    data:
      description: |
        Contains custom payload for the event type. The payload must conform
        to a schema associated with the event type declared in the metadata
        object's `event_type` field.
      type: object
    data_type:
      description: name of the (business) data entity that has been mutated
      type: string
      example: 'sales_order.order'
    data_op:
      type: string
      enum: ['C', 'U', 'D', 'S']
      description: |
        The type of operation executed on the entity:

        - C: Creation of an entity
        - U: An update to an entity.
        - D: Deletion of an entity.
        - S: A snapshot of an entity at a point in time.

The Data Change Event Category differs structurally from the General Event Category by defining a field called data as a container for the custom payload, as well as specific information related to data changes in the data_op field.
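For illustration, a data change event instance signalling an update might look like this (the event type name and entity fields are hypothetical; data_type reuses the example above):

{
  "metadata": {
    "eid": "105a76d8-db49-4144-ace7-e683e8f4ba46",
    "event_type": "sales-order.order-updated",
    "occurred_at": "1996-12-19T16:39:57-08:00"
  },
  "data_type": "sales_order.order",
  "data_op": "U",
  "data": {
    "order_number": "10410064419",
    "order_change_counter": 42,
    "status": "shipped"
  }
}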

The following guidelines specifically apply to Data Change Events:

  • {MUST} use data change events to signal mutations

  • {MUST} provide explicit event ordering for data change events

  • {SHOULD} use the hash partition strategy for data change events

Event metadata

{MUST} provide mandatory event metadata

The General and Data Change event categories share a common structure for metadata representing generic event information. Parts of the metadata are provided by the Nakadi event messaging platform, but the event identifier (eid) and event creation timestamp (occurred_at) have to be provided by the event producers. The metadata structure is defined below as an OpenAPI Schema Object:

EventMetadata:
  type: object
  description: |
    Carries metadata for an Event along with common fields. The required
    fields are those expected to be sent by the producer, other fields may be
    added by intermediaries such as publish/subscribe broker.
  required:
    - eid
    - occurred_at
  properties:
    eid:
      description: Identifier of this event.
      type: string
      format: uuid
      example: '105a76d8-db49-4144-ace7-e683e8f4ba46'
    event_type:
      description: The name of the EventType of this Event.
      type: string
      example: 'example.important-business-event'
    occurred_at:
      description: |
         Technical timestamp of when the event object was created during processing
         of the business event by the producer application. Note, it may differ from
         the timestamp when the related real-world business event happened (e.g. when
         the packet was handed over to the customer), which should be passed separately
         via an event type specific attribute.
         Depending on the producer implementation, the timestamp is typically some
         milliseconds earlier than when the event is published and received by the
         API event post endpoint server -- see below.
      type: string
      format: date-time
      example: '1996-12-19T16:39:57-08:00'
    received_at:
      description: |
        Timestamp of when the event was received via the API event post endpoints.
        It is automatically enriched, and events will be rejected if set by the
        event producer.
      type: string
      readOnly: true
      format: date-time
      example: '1996-12-19T16:39:57-08:00'
    version:
      description: |
        Version of the schema used for validating this event. This may be
        enriched upon reception by intermediaries. This string uses semantic
        versioning.
      type: string
      readOnly: true
    parent_eids:
      description: |
        Event identifiers of the Event that caused the generation of
        this Event. Set by the producer.
      type: array
      items:
        type: string
        format: uuid
      example: ['105a76d8-db49-4144-ace7-e683e8f4ba46']
    flow_id:
      description: |
        A flow-id for this event (corresponds to the X-Flow-Id HTTP header).
      type: string
      example: 'JAh6xH4OQhCJ9PutIV_RYw'
    partition:
      description: |
        Indicates the partition assigned to this Event. Used for systems
        where an event type's events can be sub-divided into partitions.
      type: string
      example: '0'

Please note that intermediaries acting between the producer of an event and its ultimate consumers may perform operations like validation of events and enrichment of an event’s metadata. For example, brokers such as Nakadi can validate and enrich events with arbitrary additional fields that are not specified here, and may set default or other values if some of the specified fields are not supplied. How such systems work is outside the scope of these guidelines, but producers and consumers working with such systems should consult their documentation for additional information.

{MUST} provide unique event identifiers

Event publishers must provide the eid (event identifier) as a standard event object field and part of the event metadata. The eid must be a unique identifier for the event in the scope of the event type / stream lifetime.

Event producers must use the same eid when publishing the same event object multiple times. For instance, producers must ensure that publish event retries — e.g. due to temporary Nakadi or network failures or fail-overs — use the same eid value as the initial (possibly failed) attempt.

The eid supports event consumers in detecting and being robust against event duplicates — see {MUST} be robust against duplicates when consuming events.

Hint: Using the same eid for retries can be ensured, for instance, by generating a UUID either (i) as part of the event object construction and using some form of intermediate persistence, like an event publish retry queue, or (ii) via a deterministic function that computes the UUID value from the event fields as input (without random salt).

{MUST} use general events to signal steps in business processes

When publishing events that represent steps in a business process, event types must be based on the General Event category. All events of a single business process will conform to the following rules:

  • Business events must contain a specific identifier field (a business process id or "bp-id") similar to flow-id to allow for efficient aggregation of all events in a business process execution.

  • Business events must contain a means to correctly order events in a business process execution. In distributed settings where monotonically increasing values (such as a high precision timestamp that is assured to move forwards) cannot be obtained, the parent_eids data structure allows causal relationships to be declared between events.

  • Business events should only contain information that is new to the business process execution at the specific step/arrival point.

  • Each business process sequence should be started by a business event containing all relevant context information.

  • Business events must be published reliably by the service.

At the moment we cannot state whether it’s best practice to publish all the events for a business process using a single event type and represent the specific steps with a state field, or whether to use multiple event types to represent each step. For now we suggest assessing each option and sticking to one for a given business process.

{SHOULD} provide explicit event ordering for general events

Consumer applications processing events need the order information to reconstruct the business event stream, for instance, to replay events in error situations or to execute analytical use cases outside the context of the original event stream consumption. All general events (formerly known as business events) should be provided with explicit information about the business ordering of the events. To accomplish this, the event type definition

  • must specify the ordering_key_fields property to indicate which field(s) contain the ordering key, and

  • should specify the ordering_instance_ids property to define which field(s) represent the business entity instance identifier.

Note: The ordering_instance_ids restrict the scope in which the ordering_key_fields provide the strict order. If undefined, the ordering is assumed to be provided in the scope of all events.
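A sketch of the corresponding event type fragment, reusing the example paths from the EventType definition above:

ordering_key_fields:
  - "data.order_change_counter"   # strictly increasing per entity instance
ordering_instance_ids:
  - "data.order_number"           # restricts the strict order to one order instance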

The business ordering information can be provided, among other ways, by maintaining:

  • a strictly monotonically increasing version of entity instances (e.g. created as row update counter by a database), or

  • a strictly monotonically increasing sequence counter (maintained per partition or event type).

Hint: Timestamps are often a bad choice, since in distributed systems events may occur at the same time, clocks are not exactly synchronized, or clocks may jump forward and backward to compensate for drift or leap seconds. If you nevertheless use timestamps to indicate event ordering, you must carefully ensure that the designated event order is not disturbed by these effects, and you must use the UTC time zone format.

Note: The received_at and partition_offset metadata set by Nakadi typically differ from the business event ordering, since (1) Nakadi is a distributed concurrent system without atomic, ordered event creation and (2) the application’s implementation of event publishing may not exactly reflect the business order. The business ordering information is application knowledge, and is implemented in the scope of event partitions or specific entities, but may also comprise all events if scaling requirements are low.

{MUST} use data change events to signal mutations

You must use data change events to signal changes of stored entity instances and facilitate e.g. change data capture (CDC). Event sourced change data capture is crucial for our data integration architecture as it supports the logical replication (and reconstruction) of the application datastores to the data analytics and AI platform as transactional source datasets.

  • Change events must be provided when publishing events that represent created, updated, or deleted data.

  • Change events must provide the complete entity data including the identifier of the changed instance to allow aggregation of all related events for the entity.

  • Change events {MUST} provide explicit event ordering for data change events.

  • Change events must be published reliably by the service.

{MUST} provide explicit event ordering for data change events

While the order information is recommended for business events, it must be provided for data change events. The ordering information defines the (create, update, delete) change order of the data entity instances managed via the application’s transactional datastore. It is needed for change data capture to keep transactional dataset replicas in sync as source for data analytics.

For details about how to provide the data change ordering information, please check {SHOULD} provide explicit event ordering for general events.

Exception: In situations where the transactional data is 'append only', i.e. entity instances are only created but never updated or deleted, the ordering information need not be provided.

{SHOULD} use the hash partition strategy for data change events

The hash partition strategy allows a producer to define which fields in an event are used as input to compute a logical partition the event should be added to. Partitions are useful as they allow supporting systems to scale their throughput while providing local ordering for event entities.

The hash option is particularly useful for data changes, as it allows all related events for an entity to be consistently assigned to a partition, providing an ordered stream of events for that entity. This is because, while each partition has a total ordering, ordering across partitions is not assured by a supporting system, so it is possible for events sent across partitions to appear to consumers in a different order than the one in which they arrived at the server.

When using the hash strategy the partition key in almost all cases should represent the entity being changed and not a per event or change identifier such as the eid field or a timestamp. This ensures data changes arrive at the same partition for a given entity and can be consumed effectively by clients.

There may be exceptional cases where data change events could have their partition strategy set to the producer-defined or random options, but generally hash is the right option - that is, while the guideline here is a "should", it can be read as "must, unless you have a very good reason".
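In Nakadi, for example, this is declared on the event type via the partition strategy and partition key fields; a sketch, reusing the hypothetical order entity from earlier examples:

partition_strategy: hash
partition_key_fields:
  - "data.order_number"   # partition by the changed entity, not by eid or timestamp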

EVENT Design

{SHOULD} avoid writing sensitive data to events

Event data security is supported by Nakadi Event Bus mechanisms for access control and authorization of publishing or consuming events. However, we avoid writing sensitive data (e.g. personal data like e-mail or address) to events unless it is needed for the business. Sensitive data creates additional obligations for access control and compliance, and generally increases data protection risks.

{MUST} be robust against duplicates when consuming events

Duplicate events are multiple occurrences of the same event object representing the same (business or data change) event instance.

Event consumers must be robust against duplicate events. Depending on the use case, being robust implies that event consumers need to deduplicate events, i.e. to ignore duplicates in event processing. For instance, for accounting reports high accuracy is required by the business and duplicates need to be explicitly ignored, whereas customer behavior reporting (click rates) might not care about event duplicates, since accuracy in the per-mille range is not needed.

Hint: Event consumers are supported in deduplication by the unique event identifier (eid) that event producers must provide as part of the event metadata - see {MUST} provide unique event identifiers.

Context: One source of duplicate events are the event producers, for instance, due to publish call retries or publisher client failovers. Another source is Nakadi’s 'at-least-once' delivery promise (as provided by most distributed message broker systems). This also currently applies to the Data Lake event materialization as raw event datasets (in Delta or JSON format) for Data Analytics. From an event consumer point of view, duplicate events may also be created when consumers reprocess parts of the event stream, for instance, due to inaccurate continuation after failures. Event publishers and infrastructure systems should keep event duplication to a minimum, typically below the per-mille range. (In Nov. 2022, for instance, we observed a <0.2 ‰ daily event duplicate rate (95th percentile) for high-volume events.)

{SHOULD} design for idempotent out-of-order processing

Events that are designed for idempotent out-of-order processing allow for extremely resilient systems: If processing an event fails, consumers and producers can skip/delay/retry it without stopping the world or corrupting the processing result.

To enable this freedom of processing, you must explicitly design for idempotent out-of-order processing: Either your events must contain enough information to infer their original order during consumption or your domain must be designed in a way that order becomes irrelevant.

As a common example, similar to data change events, idempotent out-of-order processing can be supported by sending the following information:

  • the identifier of the process/resource/entity the event refers to,

  • a strictly monotonically increasing ordering key, and

  • the current state of the process/resource/entity.

A receiver that is interested in the current state can then ignore events that are older than the last processed event of each resource. A receiver interested in the history of a resource can use the ordering key to recreate a (partially) ordered sequence of events.
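A sketch with hypothetical fields: a consumer that has already processed the event with ordering key 7 for this order simply discards the event with key 5 if it arrives later, while a history-oriented consumer can sort by the key:

{"order_number": "10410064419", "order_change_counter": 5, "status": "paid"}
{"order_number": "10410064419", "order_change_counter": 7, "status": "shipped"}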

{MUST} ensure that events define useful business resources

Events are intended to be used by other services including business process/data analytics and monitoring. They should be based around the resources and business processes you have defined for your service domain and adhere to its natural lifecycle (see also [139] and [140]).

As there is a cost in creating an explosion of event types and topics, prefer to define event types that are abstract/generic enough to be valuable for multiple use cases, and avoid publishing event types without a clear need.

{SHOULD} ensure that data change events match the APIs resources

A data change event’s representation of an entity should correspond to the REST API representation.

There’s value in having the fewest number of published structures for a service. Consumers of the service will be working with fewer representations, and the service owners will have less API surface to maintain. In particular, you should only publish events that are interesting in the domain and abstract away from implementation or local details - there’s no need to reflect every change that happens within your system.

There are cases where it could make sense to define data change events that don’t directly correspond to your API resource representations. Some examples are -

  • Where the API resource representations are very different from the datastore representation, but the physical data is easier to reliably process for data integration.

  • Publishing aggregated data. For example, a data change to an individual entity might cause an event to be published that contains a coarser representation than that defined for an API.

  • Events that are the result of a computation, such as a matching algorithm, or the generation of enriched data, and which might not be stored as an entity by the service.

{MUST} maintain backwards compatibility for events

Changes to events must be based around making additive and backward compatible changes. This follows the guideline, "Must: Don’t Break Backward Compatibility" from the [compatibility] guidelines.

In the context of events, compatibility issues are complicated by the fact that producers and consumers of events are highly asynchronous and can’t use content-negotiation techniques that are available to REST style clients and servers. This places a higher bar on producers to maintain compatibility as they will not be in a position to serve versioned media types on demand.

For event schemas, these are considered backward compatible changes, as seen by consumers -

  • Adding new optional fields to JSON objects.

  • Changing the order of fields (field order in objects is arbitrary).

  • Changing the order of values with the same type in an array.

  • Removing optional fields.

  • Removing an individual value from an enumeration.

These are considered backwards-incompatible changes, as seen by consumers -

  • Removing required fields from JSON objects.

  • Changing the default value of a field.

  • Changing the type of a field, object, enum or array.

  • Changing the order of values with different types in an array (also known as a tuple).

  • Adding a new optional field to redefine the meaning of an existing field (also known as a co-occurrence constraint).

  • Adding a value to an enumeration (note that x-extensible-enum is not available in JSON Schema).
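For illustration, one of the incompatible changes as a before/after sketch (the field is hypothetical): changing a field’s type breaks consumers that validate against the old schema, and is only possible with compatibility mode none and a MAJOR version bump:

# before (schema version 1.x)
order_change_counter:
  type: string

# after (requires schema version 2.0.0 and compatibility mode `none`)
order_change_counter:
  type: integer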