Adaptors
To get data from a database or to write data to a destination, Transporter uses Adaptors. These are software connectors built into Transporter to handle input and output; running `transporter about` will list the available Adaptors. An adaptor can be configured as a Source, where it reads from a database and sends messages into the Transporter pipeline, or as a Sink, where it receives messages from the Transporter pipeline and writes them to a database.
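Sources and Sinks are wired together in a `pipeline.js` file, which Transporter evaluates as JavaScript. A minimal sketch of such a pipeline (the adaptor names follow the Transporter convention, but the URIs and node names here are placeholders, and option names may vary between Transporter versions):

```javascript
// pipeline.js: one Source feeding one Sink.
// Each adaptor is created by calling its constructor with a config object.
var source = mongodb({
  "uri": "mongodb://localhost/mydb"   // placeholder URI
})

var sink = file({
  "uri": "stdout://"                  // write messages to standard output
})

// Wire the source to the sink; the "/.*/" namespace filter matches everything.
t.Source("source", source, "/.*/").Save("sink", sink, "/.*/")
```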
The Elasticsearch adaptor can only act as a Sink, writing to Elasticsearch. Incoming messages from the pipeline become JSON documents in an Elasticsearch index.
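A sketch of an Elasticsearch Sink in `pipeline.js`; the URI is a placeholder, and `transporter about elasticsearch` lists the options your version actually supports:

```javascript
// Elasticsearch can only be a Sink: the target index is part of the URI.
var es = elasticsearch({
  "uri": "https://user:password@localhost:9200/myindex"  // placeholder credentials and index
})
```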
The MongoDB adaptor can read from (Source) and write to (Sink) MongoDB databases of all types. Messages are converted to and from JSON documents by the adaptor. When reading from a MongoDB instance, it can be put into tail mode which, once the initial data has been copied, allows it to generate a synchronizing stream of changes. When writing, the MongoDB adaptor can be put into bulk mode to send multiple updates or new records in a single batch.
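The two MongoDB modes might be configured like this (a sketch with placeholder URIs; the `tail` and `bulk` option names follow the Transporter README but may differ by version):

```javascript
// MongoDB as a tailing Source: copies existing data, then follows changes.
var source = mongodb({
  "uri": "mongodb://localhost/sourcedb",
  "tail": true        // keep streaming changes after the initial copy
})

// MongoDB as a bulk Sink: batches writes instead of sending them one at a time.
var sink = mongodb({
  "uri": "mongodb://localhost/sinkdb",
  "bulk": true
})
```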
The PostgreSQL/Postgres adaptor can read from (Source) and write to (Sink) PostgreSQL databases. When reading, Postgres table rows are converted to a flat JSON document of column names as keys and row values as values. Using logical replication, reading can create a stream of changes in the table. When writing, the message's top level key/values are mapped to column names and values.
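A sketch of a PostgreSQL Source that streams changes via logical replication; the URI and slot name are placeholders, and the option names are assumptions to be checked against `transporter about postgres` for your version:

```javascript
// PostgreSQL as a Source: rows become flat JSON documents
// (column names as keys, row values as values).
var pg = postgres({
  "uri": "postgres://user:password@localhost/mydb",
  "tail": true,                        // stream changes via logical replication
  "replication_slot": "transporter"    // placeholder replication slot name
})
```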
The RethinkDB adaptor can read from (Source) and write to (Sink) RethinkDB databases. Messages are converted to and from JSON documents by the adaptor. When reading from a RethinkDB cluster, it can be put into tail mode which, once the initial data has been copied, allows it to generate a synchronizing stream of changes.
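A RethinkDB Source in tail mode might look like this (a sketch; URI is a placeholder):

```javascript
// RethinkDB as a tailing Source: initial copy, then a stream of changes.
var rethink = rethinkdb({
  "uri": "rethink://localhost:28015/mydb",
  "tail": true        // follow changes after the initial copy
})
```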
The file adaptor can read (Source) and write (Sink) disk files, standard in/out or pipes. When writing, the adaptor writes a single JSON string generated from the Message contents. When reading, the adaptor assumes each line read is a JSON object, parses it and populates the Message with the result.
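The file adaptor's two roles can be sketched in a single pipeline; the paths are placeholders, and the `file://` and `stdout://` URI schemes follow the Transporter convention:

```javascript
// File adaptor as a Source: each line is parsed as a JSON object.
var input = file({
  "uri": "file:///tmp/input.json"   // placeholder path; one JSON object per line
})

// File adaptor as a Sink: each message is written as a single JSON string.
var output = file({
  "uri": "stdout://"
})

t.Source("in", input, "/.*/").Save("out", output, "/.*/")
```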
The RabbitMQ adaptor handles publish/subscribe messages to and from the Transporter. It reads and writes message bodies as JSON objects encoded as strings. The routing key for RabbitMQ can be set in the configuration or extracted from a named field in the messages.
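Both ways of setting the routing key can be sketched as follows; the URI is a placeholder, and the `routing_key` and `key_in_field` option names are assumptions to verify with `transporter about rabbitmq`:

```javascript
// RabbitMQ Sink with a fixed routing key set in the configuration.
var mq = rabbitmq({
  "uri": "amqp://guest:guest@localhost:5672/",  // placeholder credentials
  "routing_key": "transporter",                 // fixed routing key
  "key_in_field": false   // true would extract the key from a named message field instead
})
```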
Transporter is open source, built by the good people of Compose (and you, if you want to contribute).