diff --git a/.README.md.html b/.README.md.html
deleted file mode 100644
index 2d1e3b0..0000000
--- a/.README.md.html
+++ /dev/null
@@ -1,801 +0,0 @@
- Solace Sink Kafka Connector v1.0
- Synopsis
-This project provides a Solace/Kafka Sink Connector (adapter) that makes use of the Kafka Connect libraries. The Solace/Kafka adapter consumes Kafka topic records and streams the data events to the Solace Event Mesh as a Topic and/or Queue data event.
-On the Solace side of the Sink Connector the adapter is using Solace's high performance Java API to stream Solace messages to a Solace Broker (PubSub+ appliance, software or Solace Cloud service). Unlike many other message brokers, Solace supports transparent protocol and API messaging transformations. Therefore, any message that reaches the Solace broker is not limited to being consumed from the Solace broker only by Java clients using the same JCSMP libraries that were used to send the messages to the Solace Broker. Solace supports transparent interoperability with many message transports and languages/APIs. Therefore, from the single Solace Sink Connector any Kafka Topic (Key or not Keyed) Sink Record is instantly available for consumption by any consumer that uses one of the Solace supported open standards languages or transport protocols.
-Consider the following diagram:
-
-It does not matter that the Kafka record was consumed by the Connector and sent using Java JCSMP transport to a Solace broker (appliance, software or cloud). The Solace event message can transparently be consumed by a cell phone, a REST server, or an AMQP, JMS or MQTT client, etc. as a real-time asynchronous data event.
-The Solace Sink Connector also ties Kafka records into the Solace Event Mesh. The Event Mesh is a clustered group of Solace PubSub+ Brokers that transparently, in real-time, route data events to any Service that is part of the Event Mesh. Solace PubSub+ Brokers (Appliances, Software and SolaceCloud) are connected to each other as a multi-connected mesh that to individual services (consumers or producers of data events) appears to be a single Event Broker. Event messages are seamlessly transported within the entire Solace Event Mesh regardless of where the event is created and where the process exists that has registered interest in consuming the event. Simply by registering interest in receiving events, the entire Event Mesh becomes aware of the registration request and will know how to securely route the appropriate events generated by the Solace Sink Connector.
-The Solace Sink Connector allows the creation and storage of a new Kafka record to become an event in the Solace Event Mesh. The Solace Sink Connector provides the ability to transparently push any new Kafka Record that is placed onto a Kafka Topic into the Solace Event Mesh. That new event can be consumed by any other service that is connected to the Solace Event Mesh and has registered interest in the event. As a result, all other services that are part of the Event Mesh will be able to receive the Kafka Records through this single Solace Sink Connector. There is no longer a requirement for separate Kafka Sink Connectors for each of the separate services. The single Solace Sink Connector is all that is required. Once the Record is in the Event Mesh, it is available to all other services.
-The Solace Sink Connector eliminates the complexity and overhead of maintaining separate Sink Connectors for each and every service that may be interested in the same data that is placed into a Kafka Topic. There is the added benefit of access to services where there is no Kafka Sink Connector available, thereby eliminating the need to create and maintain a new connector for new services that may be interested in Kafka Records.
-Consider the following:
-
-A single Solace Sink Connector will be able to move the new Kafka Record to any downstream service via a single connector.
-The Solace Sink Connector also ties into Solace's location transparency for the Event Mesh PubSub+ brokers. Solace supports a wide range of brokers for deployment. There are three major categories of Solace PubSub+ brokers: dedicated extreme performance hardware appliances, high performance software brokers that are deployed as software images (deployable under most Hypervisors, Cloud IaaS and PaaS layers and in Docker) and a fully managed Cloud MaaS (Messaging as a Service).
-It does not matter what Solace Broker is used or where it is deployed, it can become part of the Solace Event Mesh. Therefore, there are no restrictions on where the Solace Sink Connector is deployed or what PubSub+ broker is used to connect Kafka to the Solace Event Bus. The Solace Event Mesh infrastructure will allow, via the Solace Sink Connector, Kafka events to be consumed by any Service anywhere that is part of the Event Mesh.
-Consider the following:
-
-It does not matter if the Kafka record storage event was generated by a Solace Sink Connector in the Cloud or on premise. It does not matter if the Solace Sink Connector was connected to a Solace PubSub+ broker that was an appliance, on premise or Cloud software, or the Cloud-managed MaaS; it will immediately, in real time, be available to all Solace Event Mesh connected services that are located anywhere.
-It is important to mention that there is also a Solace Source Connector for Kafka available. The Solace Source Connector allows registration of interest in specific events on the Solace Event Mesh. When these events of interest are consumed by the Solace Source Connector, they are placed as a Kafka Record onto a Kafka Topic. These events that are stored in Kafka are now transparently available to any application that is consuming Kafka records directly from the Kafka brokers. Please refer to the Solace Source Connector GitHub repository for more details.
- Usage
-This is a Gradle project that references all the required dependencies. To check the code style and find bugs you can use:
-./gradlew clean check
-
-To actually create the Connector Jar file use:
-./gradlew clean jar
-
- Deployment
-The Solace Sink Connector has been tested in three environments: Apache Kafka, Confluent Kafka and the AWS Confluent Platform. For testing, it is recommended to use the single node deployment of Apache or Confluent Kafka software.
-To deploy the Connector, as described in the Kafka documentation, it is necessary to move the Connector jar file and the required third party jar files to a directory that is part of the Worker-defined classpath. Details for installing the Solace Sink Connector are described in the next two sub sections.
- Apache Kafka
-For Apache Kafka, the software is typically found, for example for the 2.11 version, under the root directory: "/opt/kafka-apache/kafka_2.11-1.1.0". Typically the Solace Sink Connector would be placed under the "libs" directory under the root directory. All required Solace JCSMP JAR files should be placed under the same "libs" directory. The properties file for the connector would typically be placed under the "config" directory below the root directory.
-To start the connector in stand-alone mode while in the "bin" directory the command would be similar to:
-./connect-standalone.sh ../config/connect-standalone.properties ../config/solaceSink.properties
-
-In this case "solaceSink.properties" is the configuration file that you created to define the connectors behavior. Please refer to the sample included in this project.
-When the connector starts in stand-alone mode, all output goes to the console. If there are errors they should be visible on the console. If you do not want the output to console, simply add the "-daemon" option and all output will be directed to the logs directory.
- Confluent Kafka
-The Confluent Kafka software is typically placed under the root directory: "/opt/confluent/confluent-4.1.1". In this case it is for the 4.1.1 version of Confluent. By default, the Confluent software is started in distributed mode with the REST Gateway started.
-The Solace Sink Connector would typically be placed in the "/opt/confluent/confluent-4.1.1/share/java/kafka-connect-solace" directory. You will need to create the "kafka-connect-solace" directory. You must place all the required Solace JCSMP JAR files under this same directory. If you plan to run the Sink Connector in stand-alone mode, it is suggested to place the properties file under the same directory.
-After the Solace files are installed and if you are familiar with Kafka administration, it is recommended to restart the Confluent Connect software if Confluent is running in Distributed mode. Alternatively, it is simpler to just start and restart the Confluent software with the "confluent" command.
-At this point you can test to confirm the Solace Sink Connector is available for use in distributed mode with the command:
-curl http://18.218.82.209:8083/connector-plugins | jq
-
-In this case the IP address is one of the nodes running the Distributed mode Worker process. If the Connector is loaded correctly, you should see something similar to:
-
-At this point, it is now possible to start the connector in distributed mode with a command similar to:
-curl -X POST -H "Content-Type: application/json" -d @solace_sink_properties.json http://18.218.82.209:8083/connectors
-
-Again, the IP address is one of the nodes running the Distributed mode Worker process. The connector's JSON configuration file, in this case, is called "solace_sink_properties.json".
-You can determine if the Sink Connector is running with the following command:
-
-
curl 18.218.82.209:8083/connectors/solaceSinkConnector/status | jq
-
-If there was an error in starting, the details will be returned with this command. If the Sink Connector was successfully started the status of the connector and task processes will be "running":
-
- Configuration
-The Solace Sink Connector configuration is managed by the configuration file. For stand-alone Kafka deployments a properties file is used. A sample is enclosed with the project.
-For distributed Kafka deployments the connector can be deployed via REST as a JSON configuration file. A sample is enclosed with the project.
- Solace Configuration for the Sink Connector
-The Solace configuration of the connector's Solace Session, Transport and Security properties are all available and defined in the SolaceSinkConstants.java file. These are equivalent to the details for the Solace JCSMPSessionProperties class. Details and documentation for this JCSMPProperties class can be found here:
-Solace Java API
-For tuning, performance and scaling (multiple tasks is supported with this connector) of the Solace Sink Connector, please refer to the Solace PubSub+ documentation that can be found here:
-Solace PubSub+ Documentation
-There is a bare minimum requirement to configure access to the Solace PubSub+ broker. A username, password, VPN (Solace Virtual Private Network - a "virtual broker" used in Solace multi-tenancy configurations) and host reference are mandatory configuration details. An example of the required configuration file entries is as follows:
-
-
sol.username=user1
-sol.password=password1
-sol.vpn_name=kafkavpn
-sol.host=160.101.136.33
-
-If you have just installed a Solace PubSub+ broker and you are not that familiar with Solace administration, you can test your Sink Connector by using "default" as the value for the username, password and VPN name. The host should match the IP address of the broker.
-For connectivity to Kafka, the Sink Connector has four basic configuration requirements: the name for the Connector Plugin, the name of the Java Class for the connector, the number of Tasks the connector should deploy and the name of the Kafka Topic. The following is an example for the Solace Sink Connector:
-
-
name=solaceSinkConnector
-connector.class=com.solace.sink.connector.SolaceSinkConnector
-tasks.max=1
-topics=solacetest
-
-A more detailed example is included with this project. This project also includes a JSON configuration file.
- Solace Record Processor
-The processing of the Kafka Sink Record to create a Solace message is handled by an interface defined in SolaceRecordProcessor.java - This is a simple interface that is used to create the Solace event message from the Kafka record. There are three examples included of classes that implement this interface:
-
- - SolSimpleRecordProcessor.java - Takes the Kafka Sink record as a binary payload with a Binary Schema for the value (which becomes the Solace message payload) and a Binary Schema for the record key and creates and sends the appropriate Solace event message. The Kafka Sink Record key and value schema can be changed via the configuration file.
- - SolSimpleKeyedRecordProcessor - A more complex sample that allows the flexibility of changing the Source Record Key Schema and which value from the Solace message to use as a key. The option of no key in the record is also possible. The Kafka Key is also used as a Correlation ID in the Solace messages and the original Kafka Topic is included for reference in the Solace Message as UserData in the Solace message header.
 - SolSimpleKeyedRecordProcessorDTO - This is the same as the "SolSimpleKeyedRecordProcessor", but it adds a DTO (Deliver-to-One) flag to the Solace Event message. The DTO flag is part of topic consumer scaling. For more details refer to the Solace documentation. The Solace Source Connector also supports consuming Solace Event Messages with the DTO flag. More details are available in GitHub where you will find the Solace Source Task code and details.
-
-The desired message processor is loaded at runtime based on the configuration of the JSON or properties configuration file, for example:
-
-
sol.record_processor_class=com.solace.sink.connector.recordprocessor.SolSimpleKeyedRecordProcessor
-
-It is possible to create more custom Record Processors based on your Kafka record requirements for keying and/or value serialization and the desired format of the Solace event message. Simply add the new record processor classes to the project. The desired record processor is installed at run time based on the configuration file.
-More information on Kafka Connect can be found here:
-Apache Kafka Connect
-Confluent Kafka Connect
- Scaling the Sink Connector
-The Solace Sink Connector will scale when more performance is required. There is only so much throughput that can be pushed through the Connect API. The Solace broker supports far greater throughput than can be afforded through a single instance of the Connect API. The Kafka Broker can also produce records at a rate far greater than available through a single instance of the Connector. Therefore, multiple instances of the Sink Connector will increase throughput from the Kafka broker to the Solace PubSub+ broker.
-Multiple Connector tasks are automatically deployed and spread across all available Connect Workers simply by indicating the number of desired tasks in the connector configuration file.
-When the Sink Connector is consuming from Kafka and the event records are expected to be placed into a Solace Queue, there are no special requirements for the Solace Queue definition. As more instances of the connector are defined in the configuration, they will each simultaneously push event messages into the defined queue.
-The Solace Sink Connector can also be configured to generate Solace Topic event messages when new records are placed into Kafka. There is no special setup required on the Solace Broker for the multiple scaled connector instances to scale the performance of Solace topic-based event messages.
-When a Solace Sink Connector is scaled, it will automatically use a Kafka Consumer Group to allow Kafka to move the records for the multiple Topic Partitions in parallel.
- Sending Solace Event Messages
-The Kafka Connect API automatically keeps track of the offset that the Sink Connector has read and processed. If the Sink Connector stops or is restarted, the Connect API will start passing records to the Sink Connector based on the last saved offset. Generally, the offset is saved on a timed basis every 10 seconds. This is tunable in the Connect Worker configuration file.
-When the Solace Sink Connector is sending Solace Topic message data events, the chances of duplication and message loss will mimic the underlying reliability and QoS configured for the Kafka Topic and are also controlled by the timer for flushing the offset value to disk.
-It is also possible to send Kafka Topic messages to Solace queues. A Solace Queue guarantees order of delivery, provides High Availability and Disaster Recovery (depending on the setup of the PubSub+ brokers) and provides an acknowledgment to the message producer (in this case the Solace Sink Connector) when the event is stored in all HA and DR members and flushed to disk. This is a higher guarantee than is provided by Kafka even for Kafka idempotent delivery.
-When the Solace Sink Connector is sending data events to Queues, the messages are sent using a Session Transaction. When 200 events are processed, the Solace Connector automatically forces a flush of the offset and then commits the Solace transaction. If the timer goes off before the 200 messages are sent, the same flush/commit is executed. Therefore, there should be no duplicates sent to the Event Mesh. However, data loss is a factor of the Kafka Topic's reliability and QoS configuration.
-If there is any error or failure and the Offset location is not synced, the Solace transaction will roll back messages in the queue up until the last offset flush. After the connector is restarted, processing will begin again from the last stored Offset.
-It is recommended to use Solace Topics when sending events if high throughput is required and the Kafka Topic is configured for high performance. When a Kafka topic is configured for its highest throughput it will potentially result in loss or duplication within the processing of records in the Kafka Topic.
-Increasing the reliability of the Kafka Topic processing reduces the potential loss or duplication, but will also greatly reduce throughput. When Kafka reliability is critical, it may be recommended to mimic this reliability with the Solace Sink Connector and configure the connector to send the Kafka records to the Event Mesh using Solace Queues.
- Dynamic Destinations
-By default, the Sink Connector will send messages from the Kafka Records to the Destinations (Topic or Queues) defined in the configuration file (Properties or JSON file). In some cases, it may be desirable to send each Kafka Record to a different Solace Topic based on the details in the Kafka Record. This would mean that rather than using the static Solace Topic defined in the configuration file, a dynamic Solace Topic would need to be created for each record.
-Generally, a Solace Topic is a hierarchical meta-data representation that describes the message payload. Therefore, it is generally possible to form a Solace Topic that matches the "rules" defined to generate a topic from the data in the payload. In this way each Kafka Record from the same Kafka Topic could be targeted to a potentially different Solace Topic.
-To make use of dynamic topics in the Solace Record Processors, it is necessary to update the configuration to indicate to the Solace Sink Connector to ignore the configuration destination references with the following entry:
-
-
sol.dynamic_destination=true
-
-This entry in the configuration indicates that the actual destination must be defined in the record processor. To add the dynamic Solace Topic in the record processor, it is necessary to add the details into the user defined Solace Header, for example:
-
-
SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
- try {
- userHeader.putString("k_topic", record.topic());
- userHeader.putInteger("k_partition", record.kafkaPartition());
- userHeader.putLong("k_offset", record.kafkaOffset());
- userHeader.putDestination("dynamicDestination", topic);
- } catch (SDTException e) {
- log.info("Received Solace SDTException {}, with the following: {} ",
- e.getCause(), e.getStackTrace());
- }
-
-In this case the "topic" is the Solace Topic that was created based on data in the Kafka record. Please refer to sample record processor for more details:
-
-
SolDynamicDestinationRecordProcessor.java
-
-The sample is included with this project.
-It is important to note that if the destination is a Solace Queue, the network topic name for queues can be used. For example, if the queue is "testQueue", the dynamic topic would be "$P2P/QUE/testQueue".
- Message Replay
-By default, the Solace Sink Connector will start sending Solace events based on the last Kafka Topic offset that was flushed before the connector was stopped. It is possible to use the Solace Sink Connector to replay messages from the Kafka Topic.
-Adding a configuration entry allows the Solace Sink Connector to start processing from an offset position that is different from the last offset that was stored before the connector was stopped. This is controlled by adding the following entry to the connector configuration file:
-
-
sol.kafka_replay_offset=<offset>
-
-The offset is a Java Long value. A value of 0 will result in the replay of the entire Kafka Topic. A positive value will result in the replay from that offset value for the Kafka Topic. The same offset value will be used against all active partitions for that Kafka Topic.
-To make it easier to determine offset values for the Kafka Topic records, the three Record Processor samples included with this project include the sending of Solace message events that include the Kafka Topic, Partition and Offset for every Kafka record that corresponds to the specific Solace event message. The Kafka information can be stored in multiple places in the Solace event message without adding the details to the data portion of the event. The three record processing samples add the data to the UserData Solace transport header and the Solace user-specific transport header that is sent as a "User Property Map".
-A message dump of a Solace event message generated by the Sink Connector using one of the sample Record Processors would be similar to:
-
- Contributing
-Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
- Authors
-See the list of contributors who participated in this project.
- License
-This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.
- Resources
-For more information about Solace technology in general please visit these resources:
-
-
-
diff --git a/.gitignore b/.gitignore
index 67b4156..2f4dd92 100644
--- a/.gitignore
+++ b/.gitignore
@@ -26,7 +26,7 @@ tmp/**/*
*~.nib
local.properties
.classpath
-.settings/
+.settings
.loadpath
.checkstyle
@@ -35,3 +35,7 @@ local.properties
# Locally stored "Eclipse launch configurations"
*.launch
+/build/
+
+# Unzipped test connector
+src/integrationTest/resources/pubsubplus-connector-kafka*/
\ No newline at end of file
diff --git a/.travis.yml b/.travis.yml
index 28e8d5d..2f7fb1a 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,8 +1,30 @@
language: java
install: true
-
+sudo: required
+services:
+ - docker
jdk:
- openjdk8
script:
- - ./gradlew clean check jar
+ - ./gradlew clean integrationTest --tests com.solace.connector.kafka.connect.sink.it.SinkConnectorIT
+
+after_success:
+- >
+ if [ "$TRAVIS_PULL_REQUEST" = "false" ] && [ "$TRAVIS_BRANCH" = "master" ]; then
+ git config --global user.email "travis@travis-ci.org";
+ git config --global user.name "travis-ci";
+ mkdir gh-pages; # Now update gh-pages
+ git clone --quiet --branch=gh-pages https://${GH_TOKEN}@github.com/SolaceProducts/pubsubplus-connector-kafka-sink gh-pages > /dev/null 2>&1;
+ rm gh-pages/downloads/pubsubplus-connector-kafka-sink*
+ mv build/distributions/pubsubplus-connector-kafka-sink* gh-pages/downloads
+ cd gh-pages;
+ pushd downloads
+ cp index.template index.html; FILENAME=`find . | grep *.zip | cut -d'/' -f2 | sed 's/.\{4\}$//'`; sed -i "s/CONNECTOR_NAME/$FILENAME/g" index.html;
+ popd;
+ git add -f .;
+ git commit -m "Latest connector distribution on successful travis build $TRAVIS_BUILD_NUMBER auto-pushed to gh-pages";
+ git remote add origin-pages https://${GH_TOKEN}@github.com/SolaceProducts/pubsubplus-connector-kafka-sink.git > /dev/null 2>&1;
+ git push --quiet --set-upstream origin-pages gh-pages;
+ echo "Updated and pushed GH pages!";
+ fi
diff --git a/README.md b/README.md
index 052efd3..7267b00 100644
--- a/README.md
+++ b/README.md
@@ -1,300 +1,362 @@
-[![Build Status](https://travis-ci.org/SolaceLabs/pubsubplus-connector-kafka-sink.svg?branch=development)](https://travis-ci.org/SolaceLabs/pubsubplus-connector-kafka-sink)
+[![Build Status](https://travis-ci.org/SolaceProducts/pubsubplus-connector-kafka-sink.svg?branch=master)](https://travis-ci.org/SolaceProducts/pubsubplus-connector-kafka-sink)
-# PubSub+ Connector Kafka Sink v1.0
-## Synopsis
+# PubSub+ Connector Kafka Sink
-This project provides a Solace/Kafka Sink Connector (adapter) that makes use of the Kafka Connect libraries. The Solace/Kafka adapter consumes Kafka topic records and streams the data events to the Solace Event Mesh as a Topic and/or Queue data event.
+This project provides a Kafka to Solace PubSub+ Event Broker [Sink Connector](//kafka.apache.org/documentation.html#connect_concepts) (adapter) that makes use of the [Kafka Connect API](//kafka.apache.org/documentation/#connect).
-On the Solace side of the Sink Connector the adapter is using Solace's high performance Java API to stream Solace messages to a Solace Broker (PubSub+ appliance, software or Solace Cloud service). Unlike many other message brokers, Solace supports transparent protocol and API messaging transformations. Therefore, any message that reaches the Solace broker is not limited to being consumed from the Solace broker only by Java clients using the same JCSMP libraries
-that were used to send the messages to the Solace Broker. Solace supports transparent interoperability with many
-message transports and languages/APIs. Therefore, from the single Solace Sink Connector any Kafka Topic (Key or not Keyed) Sink Record is instantly available for consumption by any consumer that uses one of the Solace supported open standards languages or transport protocols.
+**Note**: there is also a PubSub+ Kafka Source Connector available from the [PubSub+ Connector Kafka Source](https://github.com/SolaceProducts/pubsubplus-connector-kafka-source) GitHub repository.
-Consider the following diagram:
+Contents:
-![Architecture Overview](resources/KSink3.png)
+ * [Overview](#overview)
+ * [Use Cases](#use-cases)
+ * [Downloads](#downloads)
+ * [Quick Start](#quick-start)
+ * [Parameters](#parameters)
+ * [User Guide](#user-guide)
+ + [Deployment](#deployment)
+ + [Troubleshooting](#troubleshooting)
+ + [Event Processing](#event-processing)
+ + [Performance and Reliability Considerations](#performance-and-reliability-considerations)
+ + [Security Considerations](#security-considerations)
+ * [Developers Guide](#developers-guide)
-It does not matter that the Kafka record was consumed by the Connector and sent using Java JCSMP transport to a Solace broker (appliance, software or cloud). The Solace event message can transparently be consumed by a Cell Phone, a REST Server or an AMQP, JMS, MQTT message, etc. as a real-time asynchronous data event.
+## Overview
-The Solace Sink Connector also ties Kafka records into the Solace Event Mesh. The Event Mesh is a clustered group of Solace PubSub+ Brokers that transparently, in real-time, route data events to any Service that is part of the Event Mesh. Solace PubSub+ Brokers (Appliances, Software and SolaceCloud) are connected to each other as a multi-connected mesh that to individual services (consumers or producers of data events) appears to be a single Event Broker. Event messages are seamlessly transported within the entire Solace Event Mesh regardless of where the event is created and where the process exists that has registered interest in consuming the event. Simply by registering interest in receiving events, the entire Event Mesh becomes aware of the registration request and will know how to securely route the appropriate events generated by the Solace Sink Connector.
+The Solace/Kafka adapter consumes Kafka topic records and streams them to the PubSub+ Event Mesh as topic and/or queue data events.
-The Solace Sink Connector allows the creation and storage of a new Kafka record to become an event in the Solace Event Mesh. The Solace Sink Connector provides the ability to transparently push any new Kafka Record that is placed onto a Kafka Topic into the Solace Event Mesh. That new event can be consumed by any other service that is connected to the Solace Event Mesh and has registered interest in the event. As a result, all other services that are part of the Event Mesh will be able to receive the Kafka Records through this single Solace Sink Connector. There is no longer a requirement for separate Kafka Sink Connectors for each of the separate services. The single Solace Sink Connector is all that is required. Once the Record is in the Event Mesh, it is available to all other services.
+The connector was created using PubSub+ high performance Java API to move data to PubSub+.
-The Solace Sink Connector eliminates the complexity and overhead of maintaining separate Sink Connectors for each and every service that may be interested in the same data that is placed into a Kafka Topic. There is the added benefit of access to services where there is no Kafka Sink Connector available, thereby eliminating the need to create and maintain a new connector for new services that may be interested in Kafka Records.
+## Use Cases
-Consider the following:
+#### Protocol and API Messaging Transformations
-![Event Mesh](resources/EventMesh.png)
+Unlike many other message brokers, the Solace PubSub+ Event Broker supports transparent protocol and API messaging transformations.
+As the following diagram shows, any Kafka topic (keyed or non-keyed) sink record is instantly available for consumption by a consumer that uses one of the Solace supported open standards languages or transport protocols.
-A single Solace Sink Connector will be able to move the new Kafka Record to any downstream service via a single connector.
+![Messaging Transformations](/doc/images/KSink.png)
-The Solace Sink Connector also ties into Solace's location transparency for the Event Mesh PubSub+ brokers. Solace supports a wide range of brokers for deployment. There are three major categories of Solace PubSub+ brokers: dedicated extreme performance hardware appliances, high performance software brokers that are deployed as software images (deployable under most Hypervisors, Cloud IaaS and PaaS layers and in Docker) and provided as a fully managed Cloud MaaS (Messaging as a Service).
+#### Tying Kafka into the PubSub+ Event Mesh
-It does not matter what Solace Broker is used or where it is deployed, it can become part of the Solace Event Mesh. Therefore, there are no restrictions on where the Solace Sink Connector is deployed or what PubSub+ broker is used to connect Kafka to the Solace Event Bus. The Solace Event Mesh infrastructure will allow, via the Solace Sink Connector, Kafka events to be consumed by any Service anywhere that is part of the Event Mesh.
+The [PubSub+ Event Mesh](//docs.solace.com/Solace-PubSub-Platform.htm#PubSub-mesh) is a clustered group of PubSub+ Event Brokers, which appears to individual services (consumers or producers of data events) to be a single transparent event broker. The Event Mesh routes data events in real-time to any of its client services. The Solace PubSub+ brokers can be any of the three categories: dedicated extreme performance hardware appliances, high performance software brokers that are deployed as software images (deployable under most Hypervisors, Cloud IaaS and PaaS layers and in Docker) or provided as a fully-managed Cloud MaaS (Messaging as a Service).
-Consider the following:
+When an application registers interest in receiving events, the entire Event Mesh becomes aware of the registration request and knows how to securely route the appropriate events generated by the Solace Sink Connector.
-![Location Independence](resources/SolaceCloud2.png)
+![Messaging Transformations](/doc/images/EventMesh.png)
-It does not matter if the Kafka record storage event was generated by a Solace Sink Connector in the Cloud or on premise. It does not matter if the Solace Sink Connector was connected to a Solace PubSub+ broker that was an appliance, on premise or Cloud software, or the Cloud-managed MaaS; it will immediately, in real time, be available to all Solace Event Mesh connected services that are located anywhere.
+Because of the Solace architecture, a single PubSub+ Sink Connector can move a new Kafka record to any downstream service.
-It is important to mention that there is also a Solace Source Connector for Kafka available. The Solace Source Connector allows registration of interest in specific events on the Solace Event Mesh. When these events of interest are consumed by the Solace Source Connector, they are placed as a Kafka Record onto a Kafka Topic. These events that are stored in Kafka are now transparently available to any application that is consuming Kafka records directly from the Kafka brokers. Please refer to the Solace Source Connector GitHub repository for more details.
+#### Distributing Messages to IoT Devices
-## Usage
+PubSub+ brokers support bi-directional messaging and the unique addressing of millions of devices through fine-grained filtering. Using the Sink Connector, messages created from Kafka records can be efficiently distributed to a controlled set of destinations.
-This is a Gradle project that references all the required dependencies. To check the code style and find bugs you can use:
+![Messaging Transformations](/doc/images/IoT-Command-Control.png)
-```
-./gradlew clean check
-```
+## Downloads
-To actually create the Connector Jar file use:
+The PubSub+ Kafka Sink Connector is available as a ZIP or TAR package from the [downloads](//solaceproducts.github.io/pubsubplus-connector-kafka-sink/downloads/) page.
-```
-./gradlew clean jar
-```
+The package includes the jar libraries, documentation with license information, and sample property files. Download and extract it into a directory that is on the `plugin.path` of your `connect-standalone` or `connect-distributed` properties file.
-## Deployment
+## Quick Start
-The Solace Sink Connector has been tested in three environments: Apache Kafka, Confluent Kafka and the AWS Confluent Platform. For testing, it is recommended to use the single node deployment of Apache or Confluent Kafka software.
+This example demonstrates an end-to-end scenario similar to the [Protocol and API messaging transformations](#protocol-and-api-messaging-transformations) use case, using the WebSocket API to receive an exported Kafka record as a message at the PubSub+ event broker.
-To deploy the Connector, as described in the Kafka documentation, it is necessary to move the Connector jar file and the required third party jar files to a directory that is part of the Worker-defined classpath. Details for installing the Solace Sink Connector are described in the next two sub sections.
+It builds on the open source [Apache Kafka Quickstart tutorial](//kafka.apache.org/quickstart) and walks through getting started in a standalone environment for development purposes. For setting up a distributed environment for production purposes, refer to the User Guide section.
-#### Apache Kafka
+**Note**: The steps are similar if using [Confluent Kafka](//www.confluent.io/download/); there may be differences in the root directory where the Kafka binaries (`bin`) and properties (`etc/kafka`) are located.
-For Apache Kafka, the software is typically found, for example for the 2.11 version, under the root directory: "/opt/kafka-apache/kafka_2.11-1.1.0". Typically the Solace Sink Connector would be placed under the "libs" directory under the root directory. All required Solace JCSMP JAR files should be placed under the same "libs" directory. The properties file for the connector would typically be placed under the "config" directory below the root directory.
+**Steps**
-To start the connector in stand-alone mode while in the "bin" directory the command would be similar to:
+1. Install Kafka. Follow the [Apache tutorial](//kafka.apache.org/quickstart#quickstart_download) to download the Kafka release code, start the Zookeeper and Kafka servers in separate command line sessions, then create a topic named `test` and verify it exists.
-```
-./connect-standalone.sh ../config/connect-standalone.properties ../config/solaceSink.properties
-```
+2. Install PubSub+ Sink Connector. Designate and create a directory for the PubSub+ Sink Connector (here, we assume it is named `connectors`). Edit the `config/connect-standalone.properties` file, and ensure the `plugin.path` parameter value includes the absolute path of the `connectors` directory.
+[Download]( https://solaceproducts.github.io/pubsubplus-connector-kafka-sink/downloads ) and extract the PubSub+ Sink Connector into the `connectors` directory.
-In this case "solaceSink.properties" is the configuration file that you created to define the connectors behavior. Please refer to the sample included in this project.
+3. Acquire access to a PubSub+ message broker. If you don't already have one available, the easiest option is to get a free-tier service in a few minutes in [PubSub+ Cloud](//solace.com/try-it-now/), following the instructions in [Creating Your First Messaging Service](https://docs.solace.com/Solace-Cloud/ggs_signup.htm).
-When the connector starts in stand-alone mode, all output goes to the console. If there are errors they should be visible on the console. If you do not want the output to console, simply add the "-daemon" option and all output will be directed to the logs directory.
+4. Configure the PubSub+ Sink Connector:
-#### Confluent Kafka
+ a) Locate the following connection information of your messaging service for the "Solace Java API" (this is the API the connector uses internally):
+ * Username
+ * Password
+ * Message VPN
+ * one of the Host URIs
-The Confluent Kafka software is typically placed under the root directory: "/opt/confluent/confluent-4.1.1". In this case it is for the 4.1.1 version of Confluent. By default, the Confluent software is started in distributed mode with the REST Gateway started.
+ b) Edit the PubSub+ Sink Connector properties file located at `connectors/pubsubplus-connector-kafka-sink-/etc/solace_sink.properties`, updating the following parameters so the connector can access the PubSub+ event broker:
+ * `sol.username`
+ * `sol.password`
+ * `sol.vpn_name`
+ * `sol.host`
-The Solace Sink Connector would typically be placed in the "/opt/confluent/confluent-4.1.1/share/java/kafka-connect-solace" directory. You will need to create the "kafka-connect-solace" directory. You must place all the required Solace JCSMP JAR files under this same directory. If you plan to run the Sink Connector in stand-alone mode, it is suggested to place the properties file under the same directory.
+ **Note**: In the configured source and destination information, `topics` is the Kafka source topic (`test`) created in Step 1, and the `sol.topics` parameter specifies the destination topic on PubSub+ (`sinktest`).
-After the Solace files are installed and if you are familiar with Kafka administration, it is recommended to restart the Confluent Connect software if Confluent is running in Distributed mode. Alternatively, it is simpler to just start and restart the Confluent software with the "confluent" command.
+5. Start the connector in standalone mode. In a command line session run:
+ ```sh
+ bin/connect-standalone.sh \
+ config/connect-standalone.properties \
+ connectors/pubsubplus-connector-kafka-sink-/etc/solace_sink.properties
+ ```
+ After startup, the logs will eventually contain the following line:
+ ```
+ ================Session is Connected
+ ```
-At this point you can test to confirm the Solace Sink Connector is available for use in distributed mode with the command:
+6. To watch messages arriving into PubSub+, we use the "Try Me!" test service of the browser-based administration console to subscribe to messages to the `sinktest` topic. Behind the scenes, "Try Me!" uses the JavaScript WebSocket API.
-```
-curl http://18.218.82.209:8083/connector-plugins | jq
-```
+ * If you are using PubSub+ Cloud for your messaging service, follow the instructions in [Trying Out Your Messaging Service](//docs.solace.com/Solace-Cloud/ggs_tryme.htm).
+ * If you are using an existing event broker, log into its [PubSub+ Manager admin console](//docs.solace.com/Solace-PubSub-Manager/PubSub-Manager-Overview.htm#mc-main-content) and follow the instructions in [How to Send and Receive Test Messages](//docs.solace.com/Solace-PubSub-Manager/PubSub-Manager-Overview.htm#Test-Messages).
-In this case the IP address is one of the nodes running the Distributed mode Worker process. If the Connector is loaded correctly, you should see something similar to:
+ In both cases, ensure the topic is set to `sinktest`, which the connector is publishing to.
-![Connector List](resources/RESTConnectorListSmall.png)
+7. Demo time! Start to write messages to the Kafka `test` topic. Get back to the Kafka [tutorial](//kafka.apache.org/quickstart#quickstart_send), type and send `Hello world!`.
-At this point, it is now possible to start the connector in distributed mode with a command similar to:
+ The "Try Me!" consumer from Step 6 should now display the new message arriving to PubSub+ through the PubSub+ Kafka Sink Connector:
+ ```
+ Hello world!
+ ```
-```
-curl -X POST -H "Content-Type: application/json" -d @solace_sink_properties.json http://18.218.82.209:8083/connectors
-```
+## Parameters
-Again, the IP address is one of the nodes running the Distributed mode Worker process. The connector's JSON configuration file, in this case, is called "solace_sink_properties.json".
+The Connector parameters consist of [Kafka-defined parameters](https://kafka.apache.org/documentation/#connect_configuring) and PubSub+ connector-specific parameters.
-You can determine if the Sink Connector is running with the following command:
+Refer to the in-line documentation of the [sample PubSub+ Kafka Sink Connector properties file](/etc/solace_sink.properties) and additional information in the [Configuration](#Configuration) section.
-```ini
-curl 18.218.82.209:8083/connectors/solaceSinkConnector/status | jq
-```
+## User Guide
-If there was an error in starting, the details will be returned with this command. If the Sink Connector was successfully started the status of the connector and task processes will be "running":
+### Deployment
-![Connector Status](resources/RESTStatusSmall.png)
+The PubSub+ Sink Connector deployment has been tested on Apache Kafka 2.4 and Confluent Kafka 5.4 platforms. The Kafka software is typically placed under the root directory: `/opt//`.
-## Configuration
+Kafka distributions may be available as install bundles, Docker images, Kubernetes deployments, etc. They all support Kafka Connect which includes the scripts, tools and sample properties for Kafka connectors.
-The Solace Sink Connector configuration is managed by the configuration file. For stand-alone Kafka deployments a properties file is used. A sample is enclosed with the project.
+Kafka provides two options for connector deployment: [standalone mode and distributed mode](//kafka.apache.org/documentation/#connect_running).
-For distributed Kafka deployments the connector can be deployed via REST as a JSON configuration file. A sample is enclosed with the project.
+* In standalone mode, recommended for development or testing only, configuration is provided together in the Kafka `connect-standalone.properties` and in the PubSub+ Sink Connector `solace_sink.properties` files and passed to the `connect-standalone` Kafka shell script running on a single worker node (machine), as seen in the [Quick Start](#quick-start).
-#### Solace Configuration for the Sink Connector
+* In distributed mode, the Kafka configuration is provided in `connect-distributed.properties` and passed to the `connect-distributed` Kafka shell script, which is started on each worker node. The `group.id` parameter identifies worker nodes belonging to the same group. The script starts a REST server on each worker node and PubSub+ Sink Connector configuration is passed to any one of the worker nodes in the group through REST requests in JSON format.
-The Solace configuration of the connector's Solace Session, Transport and Security properties are all available and defined in the **SolaceSinkConstants.java** file. These are equivalent to the details for the Solace **JCSMPSessionProperties** class. Details and documentation for this JCSMPProperties class can be found here:
+To deploy the Connector, for each target machine, [download]( https://solaceproducts.github.io/pubsubplus-connector-kafka-sink/downloads) and extract the PubSub+ Sink Connector into a directory and ensure the `plugin.path` parameter value in the `connect-*.properties` includes the absolute path to that directory. Note that Kafka Connect, i.e., the `connect-standalone` or `connect-distributed` Kafka shell scripts, must be restarted (or an equivalent action from a Kafka console is required) if the PubSub+ Sink Connector deployment is updated.
-[Solace Java API](https://docs.solace.com/API-Developer-Online-Ref-Documentation/java/index.html)
+Some PubSub+ Sink Connector configurations may require the deployment of additional specific files, like keystores, truststores, Kerberos config files, etc. It does not matter where these additional files are located, but they must be available on all Kafka Connect Cluster nodes and placed in the same location on all the nodes because they are referenced by absolute location and configured only once through one REST request for all.
-For tuning, performance and scaling (multiple tasks is supported with this connector) of the Solace Sink Connector, please refer to the Solace PubSub+ documentation that can be found here:
+#### REST JSON Configuration
-[Solace PubSub+ Documentation](https://docs.solace.com/)
+First test to confirm the PubSub+ Sink Connector is available for use in distributed mode with the command:
+```ini
+curl http://18.218.82.209:8083/connector-plugins | jq
+```
-There is a bare minimum requirement to configure access to the Solace PubSub+ broker. A username, password, VPN (Solace Virtual Private Network - a "virtual broker" used in Solace multi-tenancy configurations) and host reference are mandatory configuration details. An example of the required configuration file entries is as follows:
+In this case the IP address is one of the nodes running the distributed mode worker process, and the port defaults to 8083 or as specified in the `rest.port` property in `connect-distributed.properties`. If the connector is loaded correctly, you should see a response similar to:
-```ini
-sol.username=user1
-sol.password=password1
-sol.vpn_name=kafkavpn
-sol.host=160.101.136.33
+```
+ {
+ "class": "com.solace.connector.kafka.connect.sink.SolaceSinkConnector",
+ "type": "sink",
+ "version": "2.0.0"
+ },
```
-If you have just installed a Solace PubSub+ broker and you are not that familiar with Solace administration, you can test your Sink Connector by using "default" as value for the username, password and VPN name. The host should match the IP address of the broker.
+At this point, it is now possible to start the connector in distributed mode with a command similar to:
+
+```ini
+curl -X POST -H "Content-Type: application/json" \
+ -d @solace_sink_properties.json \
+ http://18.218.82.209:8083/connectors
+```
-For connectivity to Kafka, the Sink Connector has four basic configuration requirements: name for the Connector Plugin, the name of the Java Class
-for the connector, the number of Tasks the connector should deploy and the name of the Kafka Topic. The following is an example for the Solace Sink Connector:
+The connector's JSON configuration file, in this case, is called `solace_sink_properties.json`. A sample is available [here](/etc/solace_sink_properties.json), which can be extended with the same properties as described in the [Parameters section](#parameters).
+Determine whether the Sink Connector is running with the following command:
```ini
-name=solaceSinkConnector
-connector.class=com.solace.sink.connector.SolaceSinkConnector
-tasks.max=1
-topics=solacetest
+curl 18.218.82.209:8083/connectors/solaceSinkConnector/status | jq
```
+If there was an error in starting, the details are returned with this command.
-A more detailed example is included with this project. This project also includes a JSON configuration file.
+### Troubleshooting
-### Security Considerations
-
-The Sink Connector supports both PKI and Kerberos for more secure authentication beyond the simple user name/password. The PKI/TLS support is well documented in
-the Solace literature, and will not be repeated here. All the PKI required configuration parameters are part of the configuration variable for the Solace session and transport as referenced above in the Configuration Section. Sample parameters are found in the included properties and JSON configuration files.
+In standalone mode, the connect logs are written to the console. If you do not want to send the output to the console, simply add the `-daemon` option to have all output directed to the logs directory.
-Kerberos authentication support is also available. It requires a bit more configuration than PKI since it is not defined as part of the Solace session or transport. Typical Kerberos client applications require details about the Kerberos configuration and details for the authentication. Since the Sink Connector is a server application (i.e. no direct user interaction) a Kerberos keytab file is required as part of the authentication.
+In distributed mode, the logs location is determined by the `connect-log4j.properties` located at the `config` directory in the Apache Kafka distribution or under `etc/kafka/` in the Confluent distribution.
-The enclosed configuration files are samples that will allow automatic Kerberos authentication for the Sink Connector when it is deployed to the Connect Cluster. The sample files included are
-the "krb5.conf" and "login.conf". It does not matter where the files are located, but they must be available on all Kafka Connect CLuster nodes and placed in the same location on all the nodes. The files are then referenced in the connector configuration files, for example:
+If logs are redirected to the standard output, here is a sample log4j.properties snippet to direct them to a file:
+```
+log4j.rootLogger=INFO, file
+log4j.appender.file=org.apache.log4j.RollingFileAppender
+log4j.appender.file.File=/var/log/kafka/connect.log
+log4j.appender.file.layout=org.apache.log4j.PatternLayout
+log4j.appender.file.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
+log4j.appender.file.MaxFileSize=10MB
+log4j.appender.file.MaxBackupIndex=5
+log4j.appender.file.append=true
+```
-```ini
-sol.kerberos.login.conf=/opt/kerberos/login.conf
-sol.kerberos.krb5.conf=/opt/kerberos/krb5.conf
+To troubleshoot PubSub+ connection issues, increase the logging level to DEBUG by adding the following line:
+```
+log4j.logger.com.solacesystems.jcsmp=DEBUG
```
+Ensure that you set it back to INFO or WARN for production.
-There is also one other important configuration file entry that is required to tell the Solace connector to use Kerberos Authentication, which is also part of the Solace parameters mentioned in the Configuration Section of this document. The property is:
+### Event Processing
-```ini
-sol.authentication_scheme=AUTHENTICATION_SCHEME_GSS_KRB
-```
+#### Record Processors
-Sample configuration files that include the required Kerberos parameters are also included with this project:
+There are many ways to map topics, partitions, keys, and values of Kafka records to PubSub+ messages, depending on the application.
-```ini
-solace_sink_kerb5_poroperties.json
-solace_sink_kerb5.properties
-```
+The PubSub+ Sink Connector comes with three sample record processors that can be used as-is, or as a starting point to develop a customized record processor.
-Kerberos has some very specific requirements to operate correctly. If these are also not configured, the Kerberos Authentication will not operate correctly:
-* DNS must be operating correctly both in the Kafka brokers and on the Solace PS+ broker.
-* Time services are recommended for use with the Kafka Cluster nodes and the Solace PS+ broker. If there is too much drift in the time between the nodes Kerberos will fail.
-* You must use the DNS name in the Solace PS+ host URI in the Connector configuration file and not the IP address
-* You must use the full Kerberos user name (including the Realm) in the configuration property, obviously no password is required.
+* **SolSimpleRecordProcessor**: Takes the Kafka sink record as a binary payload with a binary schema for the value, which becomes the PubSub+ message payload. The key and value schema can be changed via the configuration file.
-The security setup and operation between the PS+ broker and the Sink Connector, and between Kafka and the Sink Connector, operate completely independently.
-The security setup between the Sink Connector and the Kafka Brokers is controlled by the Kafka Connect Libraries. These are exposed in the configuration file as parameters based on the Kafka-documented parameters and configuration. Please refer to the Kafka documentation for details on securing the Connector to the Kafka brokers for both PKI/TLS and Kerberos.
+* **SolSimpleKeyedRecordProcessor**: A more complex sample that allows the flexibility of mapping the sink record key to PubSub+ message contents. In this sample, the Kafka record key is set as the Correlation ID in the Solace messages. The option of no key in the record is also possible.
-#### Solace Record Processor
+* **SolDynamicDestinationRecordProcessor**: By default, the Sink Connector sends messages to destinations (Topics or Queues) defined in the configuration file. This example shows how to route each Kafka record to a potentially different PubSub+ topic based on the record binary payload, as sketched below. In this imaginary transportation example, the records are distributed to buses listening to topics like `ctrl/bus//`, where the `busId` is encoded in the first 4 bytes of the record value and the `command` in the rest. Note that `sol.dynamic_destination=true` must be specified in the configuration file to enable this mode (otherwise destinations are taken from `sol.topics` or `sol.queue`).
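+
+The following is a minimal, illustrative sketch (not the connector's actual implementation) of how such a record processor might derive the PubSub+ topic from the record value and hand it to the connector through the Solace user header, using the `dynamicDestination` user-header entry described for this mode. The 4-byte `busId`/`command` split and the class and method names here are assumptions made for this example only:
+```java
+import java.nio.charset.StandardCharsets;
+
+import com.solacesystems.jcsmp.JCSMPFactory;
+import com.solacesystems.jcsmp.SDTException;
+import com.solacesystems.jcsmp.SDTMap;
+import com.solacesystems.jcsmp.Topic;
+
+public class DynamicDestinationSketch {
+  // Hypothetical helper: build the user header that carries the dynamic destination.
+  static SDTMap buildUserHeader(byte[] recordValue) throws SDTException {
+    if (recordValue == null || recordValue.length < 4) {
+      throw new IllegalArgumentException("record value too short for busId/command encoding");
+    }
+    // Assumed encoding from the example above: first 4 bytes = busId, remainder = command.
+    String busId = new String(recordValue, 0, 4, StandardCharsets.UTF_8);
+    String command = new String(recordValue, 4, recordValue.length - 4, StandardCharsets.UTF_8).trim();
+    Topic topic = JCSMPFactory.onlyInstance().createTopic("ctrl/bus/" + busId + "/" + command);
+
+    SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
+    // The connector picks the destination up from this entry when sol.dynamic_destination=true.
+    userHeader.putDestination("dynamicDestination", topic);
+    return userHeader;
+  }
+}
+```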
-The processing of the Kafka Sink Record to create a Solace message is handled by an interface defined in **SolaceRecordProcessor.java** - This is a simple interface that is used to create the Solace event message from the Kafka record. There are three examples included of classes that implement this interface:
+In all processors the original Kafka topic, partition and offset are included for reference in the PubSub+ Message as UserData in the Solace message header, sent as a "User Property Map". The message dump is similar to:
+```
+Destination: Topic 'sinktest'
+AppMessageType: ResendOfKafkaTopic: test
+Priority: 4
+Class Of Service: USER_COS_1
+DeliveryMode: DIRECT
+Message Id: 4
+User Property Map: 3 entries
+ Key 'k_offset' (Long): 0
+ Key 'k_topic' (String): test
+ Key 'k_partition' (Integer): 0
+
+Binary Attachment: len=11
+ 48 65 6c 6c 6f 20 57 6f 72 6c 64 Hello.World
+```
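+
+For reference, a subscribing application using the Solace Java API can read these Kafka coordinates back from the User Property Map of a received message. A minimal sketch follows; the class and method names are illustrative only:
+```java
+import com.solacesystems.jcsmp.BytesXMLMessage;
+import com.solacesystems.jcsmp.SDTException;
+import com.solacesystems.jcsmp.SDTMap;
+
+public class KafkaOriginReaderSketch {
+  // Illustrative helper: extract the Kafka topic/partition/offset headers set by the
+  // sample record processors from a received PubSub+ message.
+  static String describeOrigin(BytesXMLMessage message) throws SDTException {
+    SDTMap props = message.getProperties(); // the "User Property Map" shown in the dump above
+    if (props == null) {
+      return "no user properties present";
+    }
+    return String.format("Kafka topic=%s partition=%d offset=%d",
+        props.getString("k_topic"),
+        props.getInteger("k_partition"),
+        props.getLong("k_offset"));
+  }
+}
+```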
-* **SolSimpleRecordProcessor.java** - Takes the Kafka Sink record as a binary payload with a Binary Schema for the value (which becomes the Solace message payload) and a Binary Schema for the record key and creates and sends the appropriate Solace event message. The Kafka Sink Record key and value schema can be changed via the configuration file.
-* **SolSimpleKeyedRecordProcessor** - A more complex sample that allows the flexibility of changing the Source Record Key Schema and which value from the Solace message to use as a key. The option of no key in the record is also possible. The Kafka Key is also used as a Correlation ID in the Solace messages and the original Kafka Topic is included for reference in the Solace Message as UserData in the Solace message header.
-* **SolSimpleKeyedRecordProcessorDTO** - This is the same as the "SolSimpleKeyedRecordProcessor", but it adds a DTO (Deliver-to-One) flag to the Solace Event message. The DTO flag is part of topic consumer scaling. For more details refer to the Solace documentation. The Solace Source Connector also support consuming Solace Event Messages with the DTO flag. More details are available in in GitHub where you will find the Solace Source Task code and details.
+The desired record processor is loaded at runtime based on the JSON or properties configuration file, for example:
+```
+sol.record_processor_class=com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleRecordProcessor
+```
+It is possible to create more custom record processors based on your Kafka record requirements for keying and/or value serialization and the desired format of the PubSub+ event message. Simply add the new record processor classes to the project. The desired record processor is loaded at runtime based on the configuration file.
-The desired message processor is loaded at runtime based on the configuration of the JSON or properties configuration file, for example:
+Refer to the [Developers Guide](#developers-guide) for more information about building the Sink Connector and extending record processors.
-```ini
-sol.record_processor_class=com.solace.sink.connector.recordprocessor.SolSimpleKeyedRecordProcessor
+#### Message Replay
+
+By default, the Sink Connector starts to send events based on the last Kafka topic offset that was flushed before the connector was stopped. It is possible to use the Sink Connector to replay messages from the Kafka topic.
+
+To start processing from an offset position that is different from the one stored before the connector was stopped, add the following configuration entry:
+```
+sol.kafka_replay_offset=
```
-It is possible to create more custom Record Processors based on your Kafka record requirements for keying and/or value serialization and the desired format of the Solace event message. Simply add the new record processor classes to the project. The desired record processor is installed at run time based on the configuration file.
+A value of 0 results in the replay of the entire Kafka Topic. A positive value results in the replay from that offset value for the Kafka Topic. The same offset value is used against all active partitions for that Kafka Topic.
-More information on Kafka Connect can be found here:
+### Performance and Reliability Considerations
-[Apache Kafka Connect](https://kafka.apache.org/documentation/)
+#### Sending to PubSub+ Topics
-[Confluent Kafka Connect](https://docs.confluent.io/current/connect/index.html)
+We recommend using PubSub+ Topics if high throughput is required and the Kafka Topic is configured for high performance. Message duplication and loss mimic the underlying reliability and QoS configured for the Kafka topic.
-#### Scaling the Sink Connector
+#### Sending to PubSub+ Queue
-The Solace Sink Connector will scale when more performance is required. There is only so much throughput that can be pushed through the Connect API. The Solace broker supports far greater throughput than can be afforted through a single instance of the Connect API. The Kafka Broker can also produce records at a rate far greater than available through a single instance of the Connector. Therefore, multiple instances of the Sink Connector will increase throughput from the Kafka broker to the Solace PubSub+ broker.
+When Kafka record reliability is critical, we recommend configuring the Sink Connector to send records to the Event Mesh using PubSub+ queues, at the cost of reduced throughput.
-Multiple Connector tasks are automatically deployed and spread across all available Connect Workers simply by indicating the number of desired tasks in the connector configuration file.
+A PubSub+ queue guarantees order of delivery, provides High Availability and Disaster Recovery (depending on the setup of the PubSub+ brokers) and provides an acknowledgment to the connector when the event is stored in all HA and DR members and flushed to disk. This is a higher guarantee than is provided by Kafka, even for Kafka idempotent delivery.
-When the Sink Connector is consuming from Kafka and the event records are expected to be placed in to a Solace Queue, there are no special requirements for the Solace Queue definition. As more instance of the connector are defined in the configuration, they will each simultaneously push event messages into the defined queue.
+The connector uses local transactions to deliver to the queue by default. The transaction is committed when messages are flushed by Kafka Connect (see below how to tune the flush interval) or when the number of outstanding messages reaches the `sol.autoflush.size` configuration value (default 200).
-The Solace Sink Connector can also be configured to generate Solace Topic event messages when new records are placed into Kafka. There is no special setup on the Solace Broker that the multiple scaled connector instances to scale the performance of Solace topic-based event messages.
+Note that generally one connector can send to only one queue.
-When a Solace Sink Connector is scaled, it will automatically use a Kafka Consumer Group to allow Kafka to move the records for the multiple Topic Partitions in parallel.
+##### Recovery from Kafka Connect API or Kafka Broker Failure
-#### Sending Solace Event Messages
+The Kafka Connect API automatically keeps track of the offset that the Sink Connector has read and processed. If the connector stops or is restarted, the Connect API starts passing records to the connector based on the last saved offset.
-The Kafka Connect API automatically keeps track of the offset that the Sink Connector has read and processed. If the Sink Connector stops or is restarted, the Connect API will start passing records to the Sink Connector based on the last saved offset. Generally, the offset is saved on a timed basis every 10 seconds. This is tunable in the Connect Worker configuration file.
+The time interval to save the last offset can be tuned via the `offset.flush.interval.ms` parameter (default 60,000 ms) in the worker's `connect-distributed.properties` configuration file.
-When the Solace Sink Connector is sending Solace Topic message data events, the chances of duplication and message loss will mimic the underlying reliability and QoS configured for the Kafka Topic and is also controlled by the timer for flushing the offset value to disk.
+Recovery may result in duplicate PubSub+ events published to the Event Mesh. As described [above](#record-processors), the Solace message header "User Property Map" carries the unique Kafka record information (topic, partition and offset), which enables identifying and filtering out duplicates.
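+
+As an illustration only (not part of the connector), a downstream JCSMP consumer could combine these header properties into a unique key to recognize re-delivered records; the class below is hypothetical:
+```java
+import com.solacesystems.jcsmp.BytesXMLMessage;
+import com.solacesystems.jcsmp.SDTException;
+import com.solacesystems.jcsmp.SDTMap;
+
+public class KafkaCoordinateKey {
+  // Build a "topic-partition-offset" key from the user properties set by the
+  // Sink Connector, or return null if the message does not carry them.
+  public static String of(BytesXMLMessage msg) throws SDTException {
+    SDTMap props = msg.getProperties();
+    if (props == null || !props.containsKey("k_offset")) {
+      return null;
+    }
+    return props.getString("k_topic") + "-"
+        + props.getInteger("k_partition") + "-"
+        + props.getLong("k_offset");
+  }
+}
+```
+A consumer can keep a set of recently seen keys and drop any message whose key has already been processed.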
-It is also possible to send Kafka Topic messages to Solace queues. A Solace Queue guarantees order of deliver, provides High Availability and Disaster Recovery (depending on the setup of the PubSub+ brokers) and provides an acknowledgment to the message producer (in this case the Solace Sink Connector) when the event is stored in all HA and DR members and flushed to disk. This is a higher guarantee than is provided by Kafka even for Kafka idempotent delivery.
+#### Multiple Workers
-When the Solace Sink Connector is sending data events to Queues, the messages are send using a Session Transaction. When 200 events are processed, the Solace Connector automatically forces a flush of the offset and then commits the Solace transactions. If the timer goes off before the 200 messages are send the same flush/commit is executed. Therefore, there should be no duplicates sent to the Service Mesh. However, data loss is a factor of the Kafka Topic's reliability and QoS configuration.
+The Sink Connector can scale when more performance is required. Throughput is limited with a single instance of the Connect API; the Kafka Broker can produce records, and the Solace PubSub+ broker can consume messages, at a far greater rate than a single connector instance can handle.
-If there is any error or failure and the Offset location is not synced, the Solace transaction will roll back messages in the queue up until the last offset flush. After the connector is restarted, processing will begin again from the last stored Offset.
+You can automatically deploy and spread multiple connector tasks across all available Connect API workers simply by setting the number of desired tasks in the connector configuration file. There are no special PubSub+ queue or topic configuration requirements to support scaling.
-It is recommended to use Solace Topics when sending events if high throughput is required and the Kafka Topic is configured for high performance. When a Kafka topic is configured for it's highest throughput it will potentially result in loss or duplication within the processing of records in the Kafka Topic.
+On the Kafka side, the Connect API automatically uses a Kafka consumer group to move records from multiple topic partitions in parallel.
-Increasing the reliability of the Kafka Topic processing to reduce the potential loss or duplication, but will also greatly reduce throughput. When Kafka reliability is critical, it may be recommended to mimic this reliability with the Solace Sink Connector and configure the connector to send the Kafka records to the Event Mesh using Solace Queues.
+### Security Considerations
-#### Dynamic Destinations
+The security setup and operation between the PubSub+ broker and the Sink Connector, and between the Kafka brokers and the Sink Connector, are completely independent.
+
+When connecting to the PubSub+ event broker, the Sink Connector supports both PKI and Kerberos for more secure authentication than simple username/password.
-By default, the Sink Connector will send messages from the Kafka Records to the Destinations (Topic or Queues) defined in the configuration file (Properties or JSON file). In some cases, it may be desirable to send each Kafka Record to a different Solace Topic based on the details in the Kafka Record. This would mean that rather than using the static Solace Topic defined in the configuration file, a dynamic Solace Topic would need to be created for each record.
+The security setup between the Sink Connector and the Kafka brokers is controlled by the Kafka Connect libraries. These are exposed in the configuration file as parameters based on the Kafka-documented parameters and configuration. Please refer to the [Kafka documentation](//docs.confluent.io/current/connect/security.html) for details on securing the Sink Connector to the Kafka brokers for both PKI/TLS and Kerberos.
-Generally, a Solace Topic is a hierarchical meta-data representation that describes the message payload. Therefore, it is generally possible to form a Solace Topic that matches the "rules" defined to generate a topic from the data in the payload. In this way each Kafka Record from the same Kafka Topic could be targeted to a potentially different Solace Topic.
+#### PKI/TLS
-To make use of dynamic topics in the Solace Record Processors, it is necessary to update the configuration to indicate to the Solace Sink Connector to ignore the configuration destination references with the following entry:
+The PKI/TLS support is well documented in the [Solace Documentation](//docs.solace.com/Configuring-and-Managing/TLS-SSL-Service-Connections.htm), and will not be repeated here. All the required PKI configuration parameters are part of the configuration variables for the Solace session and transport, as referenced above in the [Parameters section](#parameters). Sample parameters are found in the included [properties file](/etc/solace_sink.properties).
-```ini
-sol.dynamic_destination=true
-```
+#### Kerberos Authentication
+
+Kerberos authentication support requires a bit more configuration than PKI since it is not defined as part of the Solace session or transport.
-This entry in the configuration indicates that the actual destination must be defined in the record processor. To add the dynamic Solace Topic in the record processor, it necessary to add the details into the user defined Solace Header, for example:
+Typical Kerberos client applications require details about the Kerberos configuration and the authentication. Since the Sink Connector is a server application (i.e., there is no direct user interaction), a Kerberos _keytab_ file is required as part of the authentication on each Kafka Connect Cluster worker node where the connector is deployed.
+The included [krb5.conf](/etc/krb5.conf) and [login.conf](/etc/login.conf) configuration files are samples that allow automatic Kerberos authentication for the Sink Connector when it is deployed to the Connect Cluster. Together with the _keytab_ file, they must also be available on all Kafka Connect cluster nodes, placed at the same location (any path) on all nodes. The files are then referenced in the Sink Connector properties, for example:
```ini
- SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
- try {
- userHeader.putString("k_topic", record.topic());
- userHeader.putInteger("k_partition", record.kafkaPartition());
- userHeader.putLong("k_offset", record.kafkaOffset());
- userHeader.putDestination("dynamicDestination", topic);
- } catch (SDTException e) {
- log.info("Received Solace SDTException {}, with the following: {} ",
- e.getCause(), e.getStackTrace());
- }
+sol.kerberos.login.conf=/opt/kerberos/login.conf
+sol.kerberos.krb5.conf=/opt/kerberos/krb5.conf
```
-In this case the "topic" is the Solace Topic that was created based on data in the Kafka record. Please refer to sample record processor for more details:
-
+The following property entry is also required to specify Kerberos Authentication:
```ini
-SolDynamicDestinationRecordProcessor.java
+sol.authentication_scheme=AUTHENTICATION_SCHEME_GSS_KRB
```
-The sample is included with this project.
+Kerberos has some very specific requirements to operate correctly. Some additional tips are as follows:
+* DNS must be operating correctly both in the Kafka brokers and on the Solace PS+ broker.
+* Time services are recommended for use with the Kafka Cluster nodes and the Solace PS+ broker. If there is too much drift in the time between the nodes, Kerberos will fail.
+* You must use the DNS name and not the IP address in the Solace PS+ host URI in the Connector configuration file.
+* You must use the full Kerberos user name (including the Realm) in the configuration property; obviously, no password is required.
-It is important to note that if the destination is a Solace Queue, the network topic name for queues can be used. For example, if the queue is "testQueue", the dynamic topic would be "#P2P/QUE/testQueue".
+## Developers Guide
-#### Message Replay
+### Build and Test the Project
-By default, the Solace Sink Connector will start sending Solace events based on the last Kafka Topic offset that was flushed before the connector was stopped. It is possible to use the Solace Sink Connector to replay messages from the Kafka Topic.
+JDK 8 or higher is required for this project.
-Adding a configuration entry allows the Solace Sink Connector to start processing from an offset position that is different from the last offset that was stored before the connector was stopped. This is controlled by adding the following entry to the connector configuration file:
+First, clone this GitHub repo:
+```
+git clone https://github.com/SolaceProducts/pubsubplus-connector-kafka-sink.git
+cd pubsubplus-connector-kafka-sink
+```
-```ini
-sol.Kafka_replay_offset=
+Then run the build script:
+```
+gradlew clean build
```
-The offset is a Java Long value. A value of 0 will result in the replay of the entire Kafka Topic. A positive value will result in the replay from that offset value for the Kafka Topic. The same offset value will be used against all active partitions for that Kafka Topic.
-To make is easier to determine offset values for the Kafka Topic records, the three Record Processor samples included with this project include the sending of Solace message events that includes the Kafka Topic, Partition and Offset for every Kafka record that corresponds to the specific Solace event message. The Kafka information can be stored in multiple places in the Solace event message without adding the details to the data portion of the event. The three record processing samples add the data to the UserData Solace transport header and the Solace user-specific transport header that is sent as a "User Property Map".
+This script creates artifacts in the `build` directory, including the deployable packaged PubSub+ Sink Connector archives under `build/distributions`.
-A message dump from the Sink Connector generated Solace event messages that is generated using one of the sample Record Processors would be similar to:
+An integration test suite is also included, which spins up a Docker-based deployment environment that includes a PubSub+ event broker, Zookeeper, a Kafka broker, and Kafka Connect. It deploys the connector to Kafka Connect and runs end-to-end tests.
+```
+gradlew clean integrationTest --tests com.solace.connector.kafka.connect.sink.it.SinkConnectorIT
+```
-![Event Message Dump](resources/replayDump.png)
+### Build a New Record Processor
-## Additional Information
+The processing of a Kafka record to create a PubSub+ message is handled by an interface defined in [`SolRecordProcessorIF.java`](/src/main/java/com/solace/connector/kafka/connect/sink/SolRecordProcessorIF.java). This is a simple interface that is used to create the PubSub+ message from the Kafka sink record. This project includes three examples of classes that implement this interface:
+
+* [SolSimpleRecordProcessor](/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolSimpleRecordProcessor.java)
+* [SolSimpleKeyedRecordProcessor](/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolSimpleKeyedRecordProcessor.java)
+* [SolDynamicDestinationRecordProcessor](/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolDynamicDestinationRecordProcessor.java)
-For additional information, use cases and explanatory videos, please visit the [Solace/Kafka Integration Guide](https://docs.solace.com/Developer-Tools/Integration-Guides/Kafka-Connect.htm).
+You can use these examples as starting points for implementing your own custom record processors.
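+
+As a rough starting point, the following sketch shows the shape of a pass-through processor. It assumes `SolRecordProcessorIF` declares a single `processRecord(String skey, SinkRecord record)` method returning a JCSMP message, as the bundled samples suggest; check the interface source for the exact signature before reusing it:
+```java
+import com.solacesystems.jcsmp.BytesXMLMessage;
+import com.solacesystems.jcsmp.JCSMPFactory;
+import org.apache.kafka.connect.sink.SinkRecord;
+
+public class PassThroughRecordProcessor implements SolRecordProcessorIF {
+  @Override
+  public BytesXMLMessage processRecord(String skey, SinkRecord record) {
+    // Copy the Kafka record value verbatim into the PubSub+ message payload.
+    BytesXMLMessage msg = JCSMPFactory.onlyInstance().createMessage(BytesXMLMessage.class);
+    Object value = record.value();
+    byte[] payload = (value instanceof byte[]) ? (byte[]) value
+        : String.valueOf(value).getBytes();
+    msg.writeAttachment(payload);
+    return msg;
+  }
+}
+```
+Once packaged with the connector, such a class is activated by pointing `sol.record_processor_class` at its fully qualified name.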
+More information on Kafka sink connector development can be found here:
+- [Apache Kafka Connect](https://kafka.apache.org/documentation/)
+- [Confluent Kafka Connect](https://docs.confluent.io/current/connect/index.html)
+
+## Additional Information
+
+For additional information, use cases and explanatory videos, please visit the [PubSub+/Kafka Integration Guide](https://docs.solace.com/Developer-Tools/Integration-Guides/Kafka-Connect.htm).
## Contributing
@@ -306,13 +368,12 @@ See the list of [contributors](../../graphs/contributors) who participated in th
## License
-This project is licensed under the Apache License, Version 2.0. - See the [LICENSE](LICENSE) file for details.
+This project is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for details.
## Resources
-For more information about Solace technology in general please visit these resources:
+For more information about Solace technology in general, please visit these resources:
- The [Solace Developers website](https://www.solace.dev/)
- Understanding [Solace technology]( https://solace.com/products/tech/)
- Ask the [Solace Community]( https://solace.community/)
-
diff --git a/build.gradle b/build.gradle
index 4b4250c..d0d8836 100644
--- a/build.gradle
+++ b/build.gradle
@@ -1,110 +1,86 @@
-/*
- * This build file was generated by the Gradle 'init' task.
- *
- * This generated file contains a sample Java Library project to get you started.
- * For more details take a look at the Java Libraries chapter in the Gradle
- * user guapply plugin: e available at https://docs.gradle.org/3.5/userguapply plugin: e/java_library_plugin.html
- */
-
-// Apply the java-library plugin to add support for Java Library
-apply plugin: 'java-library'
apply plugin: 'java'
-apply plugin: 'checkstyle'
-apply plugin: 'findbugs'
-apply plugin: 'pmd'
-apply plugin: 'jacoco'
-//apply plugin: 'net.researchgate.release' version '2.4.0'
-//apply plugin: "com.jfrog.bintray" version '1.7'
-apply plugin: 'maven'
-apply plugin: 'maven-publish'
+apply plugin: 'distribution'
+apply plugin: 'org.unbroken-dome.test-sets'
ext {
- //kafkaVersion = '0.10.0.0'
- //kafkaVersion = '0.11.0.0'
- //kafkaVersion = '1.1.0'
- kafkaVersion = '2.0.0'
+ kafkaVersion = '2.4.1'
+ solaceJavaAPIVersion = '10.6.0'
}
-// In this section you declare where to find the dependencies of your project
repositories {
- // Use jcenter for resolving your dependencies.
- // You can declare any Maven/Ivy/file repository here.
- jcenter()
+ mavenLocal()
+ mavenCentral()
}
-dependencies {
- // This dependency is exported to consumers, that is to say found on their compile classpath.
- api 'org.apache.commons:commons-math3:3.6.1'
-
- // This dependency is used internally, and not exposed to consumers on their own compile classpath.
- implementation 'com.google.guava:guava:21.0'
-
- // Use JUnit test framework
- testImplementation 'junit:junit:4.12'
+buildscript {
+ repositories {
+ maven {
+ url "https://plugins.gradle.org/m2/"
+ }
+ }
+ dependencies {
+ classpath "com.github.spotbugs:spotbugs-gradle-plugin:3.0.0"
+ classpath "org.unbroken-dome.test-sets:org.unbroken-dome.test-sets.gradle.plugin:2.2.1"
+ }
}
-// In this section you declare where to find the dependencies of your project
-repositories {
- // Use jcenter for resolving your dependencies.
- // You can declare any Maven/Ivy/file repository here.
- jcenter()
- mavenCentral()
-
- maven { url "https://mvnrepository.com/artifact/com.solacesystems/sol-jcsmp" }
- // https://mvnrepository.com/artifact/com.solacesystems/sol-jcsmp
-
-
+testSets {
+ integrationTest
}
dependencies {
- // This dependency is exported to consumers, that is to say found on their compile classpath.
- //api 'org.apache.commons:commons-math3:3.6.1'
-
- // This dependency is used internally, and not exposed to consumers on their own compile classpath.
- implementation 'com.google.guava:guava:21.0'
-
- // Use JUnit test framework
- testImplementation 'junit:junit:4.12'
-
- testCompile group: 'junit', name: 'junit', version: '4.12'
+ integrationTestImplementation 'junit:junit:4.12'
+ integrationTestImplementation 'org.junit.jupiter:junit-jupiter-api:5.5.2'
+ integrationTestImplementation 'org.junit.jupiter:junit-jupiter-engine:5.5.2'
+ integrationTestImplementation 'org.junit.jupiter:junit-jupiter-params:5.5.2'
+ integrationTestImplementation 'org.junit.platform:junit-platform-engine:1.5.2'
+ integrationTestImplementation 'org.mockito:mockito-core:3.2.4'
+ integrationTestImplementation 'org.mockito:mockito-junit-jupiter:3.2.4'
+ integrationTestImplementation 'org.testcontainers:testcontainers:1.12.4'
+ integrationTestImplementation 'org.testcontainers:junit-jupiter:1.12.4'
+ integrationTestImplementation 'org.slf4j:slf4j-api:1.7.28'
+ integrationTestImplementation 'org.slf4j:slf4j-simple:1.7.28'
+ integrationTestImplementation 'org.apache.commons:commons-configuration2:2.6'
+ integrationTestImplementation 'commons-beanutils:commons-beanutils:1.9.4'
+ integrationTestImplementation 'com.google.code.gson:gson:2.3.1'
+ integrationTestImplementation 'commons-io:commons-io:2.4'
+ integrationTestImplementation 'com.squareup.okhttp3:okhttp:4.4.0'
+ integrationTestImplementation "org.apache.kafka:kafka-clients:$kafkaVersion"
compile "org.apache.kafka:connect-api:$kafkaVersion"
- compile 'org.eclipse.paho:org.eclipse.paho.client.mqttv3:1.0.2'
- compile 'org.bouncycastle:bcprov-jdk15on:1.54'
- compile 'org.bouncycastle:bcpkix-jdk15on:1.54'
- compile 'org.bouncycastle:bcpg-jdk15on:1.54'
- compile 'commons-io:commons-io:2.4'
- compile 'org.slf4j:slf4j-api:1.7.14'
- testCompile 'org.slf4j:slf4j-simple:1.7.14'
- compile group: 'com.solacesystems', name: 'sol-jcsmp', version: '10.4.0'
- //compile 'com.puppycrawl.tools:checkstyle:8.12'
+ compile "com.solacesystems:sol-jcsmp:$solaceJavaAPIVersion"
}
-tasks.withType(FindBugs) {
- reports {
- xml.enabled = true
- html.enabled = false
+task('prepDistForIntegrationTesting') {
+ dependsOn assembleDist
+ doLast {
+ copy {
+ from zipTree(file('build/distributions').listFiles().findAll {it.name.endsWith('.zip')}[0])
+ into (file('src/integrationTest/resources'))
+ }
+ copy {
+ from zipTree(file('build/distributions').listFiles().findAll {it.name.endsWith('.zip')}[0])
+ into (file('build/resources/integrationTest'))
+ }
}
}
-
-task copyRuntimeLibs(type: Copy) {
- into "$buildDir/output/lib"
- from configurations.runtime
+project.integrationTest {
+ useJUnitPlatform()
+ outputs.upToDateWhen { false }
+ dependsOn prepDistForIntegrationTesting
}
-checkstyle {
- repositories {
- mavenCentral()
- }
- configurations {
- checkstyle
+distributions {
+ main {
+ contents {
+ from('etc/solace_sink.properties') { into 'etc' }
+ from('etc/solace_sink_properties.json') { into 'etc' }
+ from('doc/distribution-readme.md') { into 'doc' }
+ from('LICENSE') { into 'doc' }
+ into('lib') {
+ from jar
+ from(project.configurations.runtime)
+ }
+ // from jar
+ }
}
- dependencies {
- //checkstyle 'com.puppycrawl.tools:checkstyle:6.12.1'
- checkstyle 'com.puppycrawl.tools:checkstyle:8.12'
-
- }
-}
-
-processResources {
- expand project.properties
}
diff --git a/config/checkstyle/checkstyle.xml b/config/checkstyle/checkstyle.xml
deleted file mode 100644
index c4bb069..0000000
--- a/config/checkstyle/checkstyle.xml
+++ /dev/null
@@ -1,255 +0,0 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/doc/distribution-readme.md b/doc/distribution-readme.md
new file mode 100644
index 0000000..3a71a8c
--- /dev/null
+++ b/doc/distribution-readme.md
@@ -0,0 +1,11 @@
+# Solace PubSub+ Connector Kafka Sink
+
+This package provides a Kafka to Solace PubSub+ Event Broker Sink Connector.
+
+For detailed description refer to the project GitHub page at [https://github.com/SolaceProducts/pubsubplus-connector-kafka-sink](https://github.com/SolaceProducts/pubsubplus-connector-kafka-sink)
+
+Package directory contents:
+
+- doc: this readme and license information
+- lib: Sink Connector jar file and dependencies
+- etc: sample configuration properties and JSON file
diff --git a/resources/EventMesh.png b/doc/images/EventMesh.png
similarity index 100%
rename from resources/EventMesh.png
rename to doc/images/EventMesh.png
diff --git a/doc/images/IoT-Command-Control.png b/doc/images/IoT-Command-Control.png
new file mode 100644
index 0000000..9784e58
Binary files /dev/null and b/doc/images/IoT-Command-Control.png differ
diff --git a/resources/KSink3.png b/doc/images/KSink.png
similarity index 100%
rename from resources/KSink3.png
rename to doc/images/KSink.png
diff --git a/krb5.conf b/etc/krb5.conf
similarity index 100%
rename from krb5.conf
rename to etc/krb5.conf
diff --git a/login.conf b/etc/login.conf
similarity index 100%
rename from login.conf
rename to etc/login.conf
diff --git a/etc/solace_sink.properties b/etc/solace_sink.properties
new file mode 100644
index 0000000..d57d775
--- /dev/null
+++ b/etc/solace_sink.properties
@@ -0,0 +1,136 @@
+# PubSub+ Kafka Sink Connector parameters
+# GitHub project https://github.com/SolaceProducts/pubsubplus-connector-kafka-sink
+#######################################################################################
+
+# Kafka connect params
+# Refer to https://kafka.apache.org/documentation/#connect_configuring
+name=solaceSinkConnector
+connector.class=com.solace.connector.kafka.connect.sink.SolaceSinkConnector
+tasks.max=1
+value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
+key.converter=org.apache.kafka.connect.storage.StringConverter
+
+# If tasks.max>1 related tasks will share the same group.id.
+group.id=solSinkConnectorGroup
+
+# Kafka topics to read from
+topics=test
+
+# PubSub+ connection information
+sol.host=tcp://192.168.99.113:55555
+sol.username=default
+sol.password=default
+sol.vpn_name=default
+
+# Comma separated list of PubSub+ topics to send to
+# If tasks.max>1, use shared subscriptions otherwise each task's subscription will receive same message
+# Refer to https://docs.solace.com/PubSub-Basics/Direct-Messages.htm#Shared
+sol.topics=sinktest
+
+# PubSub+ queue name to send to, must exist on event broker
+#sol.queue=testQ
+
+# PubSub+ Kafka Sink connector record processor
+# Refer to https://github.com/SolaceProducts/pubsubplus-connector-kafka-sink
+sol.record_processor_class=com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleRecordProcessor
+
+# When using SolSimpleKeyedRecordProcessor, defines how to convert a Kafka record key
+# to which part of a PubSub+ message
+# Allowable values include: NONE, DESTINATION, CORRELATION_ID, CORRELATION_ID_AS_BYTES
+#sol.kafka_message_key=NONE
+
+# Set to true only if using SolDynamicDestinationRecordProcessor and dynamic destinations
+#sol.dynamic_destination=false
+
+# Whether to use transacted session and transactions to publish messages to PubSub+ queue
+#sol.use_transactions_for_queue=true
+
+# Max outstanding number of transacted messages if using transactions to reliably publish records to a queue. Must be <255
+# If the outstanding messages limit is reached, the connector will auto-commit rather than wait for the Kafka Connect initiated "flush".
+#sol.autoflush.size=200
+
+# Starting offset to publish records to PubSub+. If not specified, only new messages will be published.
+# If specified it applies to all partitions: set to the desired position or 0 to publish all records from the beginning
+#sol.kafka_replay_offset=
+
+# Connector TLS session to PubSub+ message broker properties
+# Specify if required when using TLS / Client certificate authentication
+# May require setup of keystore and truststore on each host where the connector is deployed
+# Refer to https://docs.solace.com/Overviews/TLS-SSL-Message-Encryption-Overview.htm
+# and https://docs.solace.com/Overviews/Client-Authentication-Overview.htm#Client-Certificate
+#sol.authentication_scheme=
+#sol.ssl_connection_downgrade_to=
+#sol.ssl_excluded_protocols=
+#sol.ssl_cipher_suites=
+#sol.ssl_validate_certificate=
+#sol.ssl_validate_certicate_date=
+#sol.ssl_trust_store=
+#sol.ssl_trust_store_password=
+#sol.ssl_trust_store_format=
+#sol.ssl_trusted_common_name_list=
+#sol.ssl_key_store=
+#sol.ssl_key_store_password=
+#sol.ssl_key_store_format=
+#sol.ssl_key_store_normalized_format=
+#sol.ssl_private_key_alias=
+#sol.ssl_private_key_password=
+
+# Connector Kerberos authentication of PubSub+ message broker properties
+# Specify if required when using Kerberos authentication
+# Refer to https://docs.solace.com/Overviews/Client-Authentication-Overview.htm#Kerberos
+# Example:
+#sol.authentication_scheme=AUTHENTICATION_SCHEME_GSS_KRB
+#sol.kerberos.login.conf=/opt/kerberos/login.conf
+#sol.kerberos.krb5.conf=/opt/kerberos/krb5.conf
+#sol.krb_service_name=
+
+# Solace Java properties to tune for creating a channel connection
+# Leave at default unless required
+# Look up meaning at https://docs.solace.com/API-Developer-Online-Ref-Documentation/java/com/solacesystems/jcsmp/JCSMPChannelProperties.html
+#sol.channel_properties.connect_timout_in_millis=
+#sol.channel_properties.read_timeout_in_millis=
+#sol.channel_properties.connect_retries=
+#sol.channel_properties.reconnect_retries=
+#sol.channnel_properties.connect_retries_per_host=
+#sol.channel_properties.reconnect_retry_wait_in_millis=
+#sol.channel_properties.keep_alive_interval_in_millis=
+#sol.channel_properties.keep_alive_limit=
+#sol.channel_properties.send_buffer=
+#sol.channel_properties.receive_buffer=
+#sol.channel_properties.tcp_no_delay=
+#sol.channel_properties.compression_level=
+
+# Solace Java tuning properties
+# Leave at default unless required
+# Look up meaning at https://docs.solace.com/API-Developer-Online-Ref-Documentation/java/com/solacesystems/jcsmp/JCSMPProperties.html
+#sol.message_ack_mode=
+#sol.session_name=
+#sol.localhost=
+#sol.client_name=
+#sol.generate_sender_id=
+#sol.generate_rcv_timestamps=
+#sol.generate_send_timestamps=
+#sol.generate_sequence_numbers=
+#sol.calculate_message_expiration=
+#sol.reapply_subscriptions=
+#sol.pub_multi_thread=
+#sol.pub_use_immediate_direct_pub=
+#sol.message_callback_on_reactor=
+#sol.ignore_duplicate_subscription_error=
+#sol.ignore_subscription_not_found_error=
+#sol.no_local=
+#sol.ack_event_mode=
+#sol.sub_ack_window_size=
+#sol.pub_ack_window_size=
+#sol.sub_ack_time=
+#sol.pub_ack_time=
+#sol.sub_ack_window_threshold=
+#sol.max_resends=
+#sol.gd_reconnect_fail_action=
+#sol.susbcriber_local_priority=
+#sol.susbcriber_network_priority=
+#sol.subscriber_dto_override=
+#sol.supported_ack_event_mode
+#sol.publisher_window_size
+
+
diff --git a/etc/solace_sink_properties.json b/etc/solace_sink_properties.json
new file mode 100644
index 0000000..2f4f289
--- /dev/null
+++ b/etc/solace_sink_properties.json
@@ -0,0 +1,17 @@
+{
+ "name": "solaceSinkConnector",
+ "config": {
+ "connector.class": "com.solace.connector.kafka.connect.sink.SolaceSinkConnector",
+ "tasks.max": "1",
+ "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
+ "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+ "group.id": "solSinkConnectorGroup",
+ "topics": "sinktest",
+ "sol.host": "tcp://192.168.99.113:55555",
+ "sol.username": "default",
+ "sol.password": "default",
+ "sol.vpn_name": "default",
+ "sol.topics": "sinktest",
+ "sol.record_processor_class": "com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleRecordProcessor"
+ }
+}
\ No newline at end of file
diff --git a/gradle.properties b/gradle.properties
index 0ca44e8..e997a9a 100644
--- a/gradle.properties
+++ b/gradle.properties
@@ -1 +1 @@
-version=1.0.2
\ No newline at end of file
+version=2.0.0
\ No newline at end of file
diff --git a/gradle/wrapper/gradle-wrapper.jar b/gradle/wrapper/gradle-wrapper.jar
index e377d49..cc4fdc2 100644
Binary files a/gradle/wrapper/gradle-wrapper.jar and b/gradle/wrapper/gradle-wrapper.jar differ
diff --git a/gradle/wrapper/gradle-wrapper.properties b/gradle/wrapper/gradle-wrapper.properties
index c6723cf..1b16c34 100644
--- a/gradle/wrapper/gradle-wrapper.properties
+++ b/gradle/wrapper/gradle-wrapper.properties
@@ -1,6 +1,5 @@
-#Fri Aug 24 10:25:31 EDT 2018
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
+distributionUrl=https\://services.gradle.org/distributions/gradle-6.1.1-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
-distributionUrl=https\://services.gradle.org/distributions/gradle-3.5-bin.zip
diff --git a/gradlew b/gradlew
index 4453cce..2fe81a7 100755
--- a/gradlew
+++ b/gradlew
@@ -1,5 +1,21 @@
#!/usr/bin/env sh
+#
+# Copyright 2015 the original author or authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
##############################################################################
##
## Gradle start up script for UN*X
@@ -28,16 +44,16 @@ APP_NAME="Gradle"
APP_BASE_NAME=`basename "$0"`
# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
-DEFAULT_JVM_OPTS=""
+DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"'
# Use the maximum available, or set MAX_FD != -1 to use that value.
MAX_FD="maximum"
-warn ( ) {
+warn () {
echo "$*"
}
-die ( ) {
+die () {
echo
echo "$*"
echo
@@ -109,8 +125,8 @@ if $darwin; then
GRADLE_OPTS="$GRADLE_OPTS \"-Xdock:name=$APP_NAME\" \"-Xdock:icon=$APP_HOME/media/gradle.icns\""
fi
-# For Cygwin, switch paths to Windows format before running java
-if $cygwin ; then
+# For Cygwin or MSYS, switch paths to Windows format before running java
+if [ "$cygwin" = "true" -o "$msys" = "true" ] ; then
APP_HOME=`cygpath --path --mixed "$APP_HOME"`
CLASSPATH=`cygpath --path --mixed "$CLASSPATH"`
JAVACMD=`cygpath --unix "$JAVACMD"`
@@ -138,35 +154,30 @@ if $cygwin ; then
else
eval `echo args$i`="\"$arg\""
fi
- i=$((i+1))
+ i=`expr $i + 1`
done
case $i in
- (0) set -- ;;
- (1) set -- "$args0" ;;
- (2) set -- "$args0" "$args1" ;;
- (3) set -- "$args0" "$args1" "$args2" ;;
- (4) set -- "$args0" "$args1" "$args2" "$args3" ;;
- (5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
- (6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
- (7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
- (8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
- (9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
+ 0) set -- ;;
+ 1) set -- "$args0" ;;
+ 2) set -- "$args0" "$args1" ;;
+ 3) set -- "$args0" "$args1" "$args2" ;;
+ 4) set -- "$args0" "$args1" "$args2" "$args3" ;;
+ 5) set -- "$args0" "$args1" "$args2" "$args3" "$args4" ;;
+ 6) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" ;;
+ 7) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" ;;
+ 8) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" ;;
+ 9) set -- "$args0" "$args1" "$args2" "$args3" "$args4" "$args5" "$args6" "$args7" "$args8" ;;
esac
fi
# Escape application args
-save ( ) {
+save () {
for i do printf %s\\n "$i" | sed "s/'/'\\\\''/g;1s/^/'/;\$s/\$/' \\\\/" ; done
echo " "
}
-APP_ARGS=$(save "$@")
+APP_ARGS=`save "$@"`
# Collect all arguments for the java command, following the shell quoting and substitution rules
eval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS "\"-Dorg.gradle.appname=$APP_BASE_NAME\"" -classpath "\"$CLASSPATH\"" org.gradle.wrapper.GradleWrapperMain "$APP_ARGS"
-# by default we should be in the correct project dir, but when run from Finder on Mac, the cwd is wrong
-if [ "$(uname)" = "Darwin" ] && [ "$HOME" = "$PWD" ]; then
- cd "$(dirname "$0")"
-fi
-
exec "$JAVACMD" "$@"
diff --git a/gradlew.bat b/gradlew.bat
index e95643d..24467a1 100644
--- a/gradlew.bat
+++ b/gradlew.bat
@@ -1,3 +1,19 @@
+@rem
+@rem Copyright 2015 the original author or authors.
+@rem
+@rem Licensed under the Apache License, Version 2.0 (the "License");
+@rem you may not use this file except in compliance with the License.
+@rem You may obtain a copy of the License at
+@rem
+@rem https://www.apache.org/licenses/LICENSE-2.0
+@rem
+@rem Unless required by applicable law or agreed to in writing, software
+@rem distributed under the License is distributed on an "AS IS" BASIS,
+@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+@rem See the License for the specific language governing permissions and
+@rem limitations under the License.
+@rem
+
@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@@ -14,7 +30,7 @@ set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%
@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
+set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"
@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome
diff --git a/resources/RESTConnectorList.png b/resources/RESTConnectorList.png
deleted file mode 100644
index 0ed9e0b..0000000
Binary files a/resources/RESTConnectorList.png and /dev/null differ
diff --git a/resources/RESTConnectorListSmall.png b/resources/RESTConnectorListSmall.png
deleted file mode 100644
index 5ca5543..0000000
Binary files a/resources/RESTConnectorListSmall.png and /dev/null differ
diff --git a/resources/RESTStatus.png b/resources/RESTStatus.png
deleted file mode 100644
index 77b5781..0000000
Binary files a/resources/RESTStatus.png and /dev/null differ
diff --git a/resources/RESTStatusSmall.png b/resources/RESTStatusSmall.png
deleted file mode 100644
index b60176b..0000000
Binary files a/resources/RESTStatusSmall.png and /dev/null differ
diff --git a/resources/SolaceAPI.png b/resources/SolaceAPI.png
deleted file mode 100644
index 1dd36a1..0000000
Binary files a/resources/SolaceAPI.png and /dev/null differ
diff --git a/resources/SolaceCloud1.png b/resources/SolaceCloud1.png
deleted file mode 100644
index 87502bc..0000000
Binary files a/resources/SolaceCloud1.png and /dev/null differ
diff --git a/resources/SolaceCloud2.png b/resources/SolaceCloud2.png
deleted file mode 100644
index eddce74..0000000
Binary files a/resources/SolaceCloud2.png and /dev/null differ
diff --git a/resources/replayDump.png b/resources/replayDump.png
deleted file mode 100644
index 666f2a8..0000000
Binary files a/resources/replayDump.png and /dev/null differ
diff --git a/settings.gradle b/settings.gradle
index cbdd080..c35d8be 100644
--- a/settings.gradle
+++ b/settings.gradle
@@ -1,18 +1 @@
-/*
- * This settings file was generated by the Gradle 'init' task.
- *
- * The settings file is used to specify which projects to include in your build.
- * In a single project build this file can be empty or even removed.
- *
- * Detailed information about configuring a multi-project build in Gradle can be found
- * in the user guide at https://docs.gradle.org/3.5/userguide/multi_project_builds.html
- */
-
-/*
-// To declare projects as part of a multi-project build use the 'include' method
-include 'shared'
-include 'api'
-include 'services:webservice'
-*/
-
-rootProject.name = 'SolaceSinkConnector'
+rootProject.name = 'pubsubplus-connector-kafka-sink'
diff --git a/solace.properties b/solace.properties
deleted file mode 100644
index a523274..0000000
--- a/solace.properties
+++ /dev/null
@@ -1,47 +0,0 @@
-name=solaceSinkConnector
-connector.class=com.solace.sink.connector.SolaceSinkConnector
-tasks.max=1
-topics=solacetest
-sol.host=160.101.136.33
-#sol.host=tcps://160.101.136.33:55443
-sol.username=heinz1
-sol.password=heinz1
-sol.vpn_name=heinzvpn
-sol.topics=soltest, soltest1,solacetest2
-sol.queue=testQ
-sol.record_processor_class=com.solace.sink.connector.recordprocessor.SolSimpleRecordProcessor
-#sol.record_processor_class=com.solace.sink.connector.recordprocessor.SolSimpleKeyedRecordProcessor
-sol.generate_send_timestamps=true
-sol.generate_rcv_timestamps=true
-sol.sub_ack_window_size=255
-sol.generate_sequence_numbers=true
-sol.calculate_message_expiration=true
-sol.subscriber_dto_override=true
-sol.channel_properties.connect_retries=-1
-sol.channel_properties.reconnect_retries=-1
-# Message key can be: NONE, DESTINATION, CORRELATION_ID, CORRELATION_ID_AS_BYTES
-#sol.kafka_message_key=DESTINATION
-sol.kafka_message_key=NONE
-#sol.ssl_validate_certificate=false
-#sol.ssl_validate_certicate_date=false
-sol.ssl_trust_store=/opt/PKI/skeltonCA/heinz1.ts
-sol.ssl_trust_store_pasword=sasquatch
-sol.ssl_trust_store_format=JKS
-#sol.ssl_trusted_command_name_list
-sol.ssl_key_store=/opt/PKI/skeltonCA/heinz1.ks
-sol.ssl_key_store_password=sasquatch
-sol.ssl_key_store_format=JKS
-sol.ssl_key_store_normalized_format=JKS
-sol.ssl_private_key_alias=heinz1
-sol.ssl_private_key_password=sasquatch
-#sol.authentication_scheme=AUTHENTICATION_SCHEME_CLIENT_CERTIFICATE
-#consumer.group.id=solSinkConnector
-group.id=solSinkConnector
-key.converter.schemas.enable=true
-value.converter.schemas.enable=true
-#key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
-value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
-#key.converter=org.apache.kafka.connect.json.JsonConverter
-key.converter=org.apache.kafka.connect.storage.StringConverter
-#value.converter=org.apache.kafka.connect.json.JsonConverter
-#value.converter=org.apache.kafka.connect.storage.StringConverter
\ No newline at end of file
diff --git a/solace_sink_kerb5.properties b/solace_sink_kerb5.properties
deleted file mode 100644
index c4f793c..0000000
--- a/solace_sink_kerb5.properties
+++ /dev/null
@@ -1,40 +0,0 @@
-name=solaceSinkConnector
-offset.flush.interval.ms=1000
-connector.class=com.solace.sink.connector.SolaceSinkConnector
-asks.max=1
-#topics=solacetest
-#sol.kakfa_replay_offset=300
-sol.kakfa_replay_offset=332
-topics=solacetest
-sol.host=vmr90.heinz.org
-#sol.host=160.101.136.33
-#sol.host=tcps://160.101.136.33:55443
-sol.username=testKerb@HEINZ.ORG
-#sol.password=heinz1
-#sol.vpn_name=heinzvpn
-sol.vpn_name=heinzKerberos
-sol.topics=soltest, soltest1,solacetest2
-sol.queue=testQ
-sol.record_processor_class=com.solace.sink.connector.recordprocessor.SolSimpleRecordProcessor
-#sol.record_processor_class=com.solace.sink.connector.recordProcessor.SolSimpleKeyedRecordProcessorDTO
-sol.generate_send_timestamps=true
-sol.sol_generate_rcv_timestamps=true
-sol.sub_ack_window_size=255
-sol.generate_sequence_numbers=true
-sol.calculate_message_expiration=true
-sol.subscriber_dto_override=true
-sol.channel_properties.connect_retries=-1
-sol.channel_properties.reconnect_retries=-1
-sol.authentication_scheme=AUTHENTICATION_SCHEME_GSS_KRB
-#consumer.group.id=solSinkConnector
-group.id=solSinkConnector
-key.converter.schemas.enable=true
-value.converter.schemas.enable=true
-#key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
-value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
-#key.converter=org.apache.kafka.connect.json.JsonConverter
-key.converter=org.apache.kafka.connect.storage.StringConverter
-#value.converter=org.apache.kafka.connect.json.JsonConverter
-#value.converter=org.apache.kafka.connect.storage.StringConverter
-sol.kerberos.login.conf=/opt/kerberos/login.conf
-sol.kerberos.krb5.conf=/opt/kerberos/krb5.conf
\ No newline at end of file
diff --git a/solace_sink_kerb5_properties.json b/solace_sink_kerb5_properties.json
deleted file mode 100644
index 103ce0d..0000000
--- a/solace_sink_kerb5_properties.json
+++ /dev/null
@@ -1,33 +0,0 @@
-{
- "name": "solaceSinkConnector",
- "config": {
- "connector.class": "com.solace.sink.connector.SolaceSinkConnector",
- "offset.flush.interval.ms" : "1000",
- "tasks.max": "1",
- "topics": "solacetest",
- "sol.host": "vmr90.heinz.org",
- "sol.username": "testKerb@HEINZ.ORG",
- "sol.vpn_name": "heinzKerberos",
- "sol.topics": "soltestSink",
- "sol.record_processor_class": "com.solace.sink.connector.recordprocessor.SolSimpleRecordProcessor",
- "sol.generate_send_timestamps": "true",
- "sol.sol_generate_rcv_timestamps": "true",
- "sol.sub_ack_window_size": "255",
- "sol.generate_sequence_numbers": "true",
- "sol.calculate_message_expiration": "true",
- "sol.subscriber_dto_override": "true",
- "sol.channel_properties.connect_retries": "-1",
- "sol.channel_properties.reconnect_retries": "-1",
- "sol.kafka_message_key": "DESTINATION",
- "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
- "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
- "key.converter.schemas.enable": "true",
- "value.converter.schemas.enable": "true",
- "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
- "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
- "sol.kerberos.login.conf": "/opt/kerberos/login.conf",
- "sol.kerberos.krb5.conf": "/opt/kerberos/krb5.conf",
- "sol.authentication_scheme": "AUTHENTICATION_SCHEME_GSS_KRB"
- }
-
-}
\ No newline at end of file
diff --git a/solace_sink_properties.json b/solace_sink_properties.json
deleted file mode 100644
index 04c0b49..0000000
--- a/solace_sink_properties.json
+++ /dev/null
@@ -1,43 +0,0 @@
-{
- "name": "solaceSinkConnector",
- "config": {
- "connector.class": "com.solace.sink.connector.SolaceSinkConnector",
- "tasks.max": "2",
- "topics": "solacetest",
- "sol.host": "160.101.136.33",
- "sol.username": "heinz1",
- "sol.password": "heinz1",
- "sol.vpn_name": "heinzvpn",
- "sol.topics": "soltest, soltest1,solacetest2",
- "sol.queue": "testQ",
- "sol.record_processor_class": "com.solace.sink.connector.recordprocessor.SolSimpleRecordProcessor",
- "sol.generate_send_timestamps": "true",
- "sol.generate_rcv_timestamps": "true",
- "sol.sub_ack_window_size": "255",
- "sol.generate_sequence_numbers": "true",
- "sol.calculate_message_expiration": "true",
- "sol.subscriber_dto_override": "true",
- "sol.channel_properties.connect_retries": "-1",
- "sol.channel_properties.reconnect_retries": "-1",
- "sol.kafka_message_key": "DESTINATION",
- "sol.ssl_trust_store": "/opt/PKI/skeltonCA/heinz1.ts",
- "sol.ssl_trust_store_pasword": "sasquatch",
- "sol.ssl_trust_store_format": "JKS",
- "sol.ssl_key_store": "/opt/PKI/skeltonCA/heinz1.ks",
- "sol.ssl_key_store_password": "sasquatch",
- "sol.ssl_key_store_format": "JKS",
- "sol.ssl_key_store_normalized_format": "JKS",
- "sol.ssl_private_key_alias": "heinz1",
- "sol.ssl_private_key_password": "sasquatch",
- "internal.key.converter.schemas.enable": "false",
- "internal.value.converter.schemas.enable": "false",
- "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
- "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
- "key.converter.schemas.enable": "true",
- "value.converter.schemas.enable": "true",
- "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
- "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
-
- }
-
-}
\ No newline at end of file
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/DockerizedPlatformSetupApache.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/DockerizedPlatformSetupApache.java
new file mode 100644
index 0000000..b305e7e
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/DockerizedPlatformSetupApache.java
@@ -0,0 +1,63 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Disabled;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.Nested;
+import org.junit.jupiter.api.Test;
+import org.testcontainers.containers.BindMode;
+import org.testcontainers.containers.FixedHostPortGenericContainer;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.wait.strategy.Wait;
+import org.testcontainers.junit.jupiter.Container;
+
+public class DockerizedPlatformSetupApache implements MessagingServiceFullLocalSetupApache {
+
+ @Container
+ public final static GenericContainer<?> KAFKA_CONNECT_REST = new FixedHostPortGenericContainer<>("bitnami/kafka:2")
+ .withEnv("KAFKA_CFG_ZOOKEEPER_CONNECT", dockerIpAddress + ":2181")
+ .withEnv("ALLOW_PLAINTEXT_LISTENER", "yes")
+ .withCommand("/bin/sh", "-c", //"sleep 10000")
+ "sed -i 's/bootstrap.servers=.*/bootstrap.servers=" + dockerIpAddress
+ + ":39092/g' /opt/bitnami/kafka/config/connect-distributed.properties; "
+ + "echo 'plugin.path=/opt/bitnami/kafka/jars' >> /opt/bitnami/kafka/config/connect-distributed.properties; "
+ + "echo 'rest.port=28083' >> /opt/bitnami/kafka/config/connect-distributed.properties; "
+ + "/opt/bitnami/kafka/bin/connect-distributed.sh /opt/bitnami/kafka/config/connect-distributed.properties")
+ .withFixedExposedPort(28083,28083)
+ .withExposedPorts(28083)
+////
+// // Enable remote debug session at default port 5005
+// .withEnv("KAFKA_DEBUG", "y")
+// .withEnv("DEBUG_SUSPEND_FLAG", "y")
+////
+ .withClasspathResourceMapping(Tools.getUnzippedConnectorDirName() + "/lib",
+ "/opt/bitnami/kafka/jars/pubsubplus-connector-kafka", BindMode.READ_ONLY)
+// .withStartupTimeout(Duration.ofSeconds(120))
+ .waitingFor( Wait.forLogMessage(".*Finished starting connectors and tasks.*", 1) )
+ ;
+
+ @BeforeAll
+ static void setUp() {
+ assert(KAFKA_CONNECT_REST != null); // Required to instantiate
+ }
+
+ @DisplayName("Local MessagingService connection tests")
+ @Nested
+ class MessagingServiceConnectionTests {
+ @DisplayName("Setup the dockerized platform")
+ @Test
+ @Disabled
+ void setupDockerizedPlatformTest() {
+ String host = COMPOSE_CONTAINER_PUBSUBPLUS.getServiceHost("solbroker_1", 8080);
+ assertNotNull(host);
+ try {
+ Thread.sleep(36000000l);
+ } catch (InterruptedException e) {
+ e.printStackTrace();
+ }
+
+ }
+ }
+}
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/DockerizedPlatformSetupConfluent.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/DockerizedPlatformSetupConfluent.java
new file mode 100644
index 0000000..d7ead46
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/DockerizedPlatformSetupConfluent.java
@@ -0,0 +1,72 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.Nested;
+import org.junit.jupiter.api.Test;
+import org.testcontainers.containers.BindMode;
+import org.testcontainers.containers.FixedHostPortGenericContainer;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.wait.strategy.Wait;
+import org.testcontainers.junit.jupiter.Container;
+
+public class DockerizedPlatformSetupConfluent implements MessagingServiceFullLocalSetupConfluent {
+
+ @DisplayName("Local MessagingService connection tests")
+ @Nested
+ class MessagingServiceConnectionTests {
+
+ @Container
+ public final GenericContainer<?> connector = new FixedHostPortGenericContainer<>("confluentinc/cp-kafka-connect-base:5.4.0")
+ .withEnv("CONNECT_BOOTSTRAP_SERVERS",
+ COMPOSE_CONTAINER_KAFKA.getServiceHost("kafka_1", 39092) + ":39092")
+ .withFixedExposedPort(28083,28083)
+ .withFixedExposedPort(5005,5005)
+ .withExposedPorts(28083,5005)
+ .withEnv("CONNECT_REST_PORT", "28083")
+//
+// // Enable remote debug session at default port 5005
+// .withEnv("KAFKA_DEBUG", "y")
+// .withEnv("DEBUG_SUSPEND_FLAG", "y")
+//
+ .withEnv("CONNECT_GROUP_ID", "quickstart-avro")
+ .withEnv("CONNECT_CONFIG_STORAGE_TOPIC", "quickstart-avro-config")
+ .withEnv("CONNECT_OFFSET_STORAGE_TOPIC", "quickstart-avro-offsets")
+ .withEnv("CONNECT_STATUS_STORAGE_TOPIC", "quickstart-avro-status")
+ .withEnv("CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR", "1")
+ .withEnv("CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR", "1")
+ .withEnv("CONNECT_STATUS_STORAGE_REPLICATION_FACTOR", "1")
+ .withEnv("CONNECT_KEY_CONVERTER", "io.confluent.connect.avro.AvroConverter")
+ .withEnv("CONNECT_VALUE_CONVERTER", "io.confluent.connect.avro.AvroConverter")
+ .withEnv("CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL",
+ "http://" + COMPOSE_CONTAINER_KAFKA.getServiceHost("schema-registry_1", 8081)
+ + ":8081")
+ .withEnv("CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL",
+ "http://" + COMPOSE_CONTAINER_KAFKA.getServiceHost("schema-registry_1", 8081)
+ + ":8081")
+ .withEnv("CONNECT_INTERNAL_KEY_CONVERTER", "org.apache.kafka.connect.json.JsonConverter")
+ .withEnv("CONNECT_INTERNAL_VALUE_CONVERTER", "org.apache.kafka.connect.json.JsonConverter")
+//
+ .withEnv("CONNECT_REST_ADVERTISED_HOST_NAME", "localhost")
+ .withEnv("CONNECT_LOG4J_ROOT_LOGLEVEL", "INFO")
+ .withEnv("CONNECT_PLUGIN_PATH", "/usr/share/java,/etc/kafka-connect/jars")
+ .withClasspathResourceMapping("pubsubplus-connector-kafka-sink/lib",
+ "/etc/kafka-connect/jars/pubsubplus-connector-kafka", BindMode.READ_ONLY)
+// .waitingFor( Wait.forHealthcheck() );
+ .waitingFor( Wait.forLogMessage(".*Kafka Connect started.*", 1) );
+
+ @DisplayName("Setup the dockerized platform")
+ @Test
+ void setupDockerizedPlatformTest() {
+ String host = COMPOSE_CONTAINER_PUBSUBPLUS.getServiceHost("solbroker_1", 8080);
+ assertNotNull(host);
+ try {
+ Thread.sleep(36000000l);
+ } catch (InterruptedException e) {
+ // TODO Auto-generated catch block
+ e.printStackTrace();
+ }
+
+ }
+ }
+}
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/MessagingServiceFullLocalSetupApache.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/MessagingServiceFullLocalSetupApache.java
new file mode 100644
index 0000000..a8fd68a
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/MessagingServiceFullLocalSetupApache.java
@@ -0,0 +1,48 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+
+import java.io.File;
+import org.junit.jupiter.api.BeforeAll;
+import org.testcontainers.containers.DockerComposeContainer;
+import org.testcontainers.junit.jupiter.Container;
+import org.testcontainers.junit.jupiter.Testcontainers;
+import org.testcontainers.containers.wait.strategy.Wait;
+
+@Testcontainers
+public interface MessagingServiceFullLocalSetupApache extends TestConstants {
+
+ @Container
+ public static final DockerComposeContainer COMPOSE_CONTAINER_PUBSUBPLUS =
+ new DockerComposeContainer(
+ new File(FULL_DOCKER_COMPOSE_FILE_PATH + "docker-compose-solace.yml"))
+ .withEnv("PUBSUB_NETWORK_NAME", PUBSUB_NETWORK_NAME)
+ .withEnv("PUBSUB_HOSTNAME", PUBSUB_HOSTNAME)
+ .withEnv("PUBSUB_TAG", PUBSUB_TAG)
+ .withServices(SERVICES)
+ .withLocalCompose(true)
+ .withPull(false)
+ .waitingFor("solbroker_1",
+ Wait.forLogMessage(".*System startup complete.*", 1) );
+
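+    // Resolve an address reachable from other containers; Testcontainers may report "localhost", which is replaced by the host's routable IP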
+ public static final String dockerReportedAddress = COMPOSE_CONTAINER_PUBSUBPLUS.getServiceHost("solbroker_1", 8080);
+    public static final String dockerIpAddress = ("localhost".equals(dockerReportedAddress) || "127.0.0.1".equals(dockerReportedAddress) ?
+            Tools.getIpAddress() : dockerReportedAddress);
+
+ @Container
+ public static final DockerComposeContainer COMPOSE_CONTAINER_KAFKA =
+ new DockerComposeContainer(
+ new File(FULL_DOCKER_COMPOSE_FILE_PATH + "docker-compose-kafka-apache.yml"))
+ .withEnv("KAFKA_TOPIC", KAFKA_SINK_TOPIC)
+ .withEnv("KAFKA_HOST", dockerIpAddress)
+ .withLocalCompose(true)
+ .waitingFor("schema-registry_1",
+ Wait.forHttp("/subjects").forStatusCode(200));
+
+ @BeforeAll
+ static void checkContainer() {
+ String host = COMPOSE_CONTAINER_PUBSUBPLUS.getServiceHost("solbroker_1", 8080);
+ assertNotNull(host);
+ }
+}
+
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/MessagingServiceFullLocalSetupConfluent.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/MessagingServiceFullLocalSetupConfluent.java
new file mode 100644
index 0000000..130a75a
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/MessagingServiceFullLocalSetupConfluent.java
@@ -0,0 +1,59 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import static org.junit.jupiter.api.Assertions.assertNotNull;
+
+import java.io.File;
+
+import org.junit.jupiter.api.BeforeAll;
+import org.testcontainers.containers.DockerComposeContainer;
+import org.testcontainers.junit.jupiter.Container;
+import org.testcontainers.junit.jupiter.Testcontainers;
+import org.testcontainers.containers.wait.strategy.Wait;
+
+@Testcontainers
+public interface MessagingServiceFullLocalSetupConfluent extends TestConstants {
+
+ @Container
+ public static final DockerComposeContainer COMPOSE_CONTAINER_PUBSUBPLUS =
+ new DockerComposeContainer(
+ new File(FULL_DOCKER_COMPOSE_FILE_PATH + "docker-compose-solace.yml"))
+ .withEnv("PUBSUB_NETWORK_NAME", PUBSUB_NETWORK_NAME)
+ .withEnv("PUBSUB_HOSTNAME", PUBSUB_HOSTNAME)
+ .withEnv("PUBSUB_TAG", PUBSUB_TAG)
+ .withServices(SERVICES)
+ .withLocalCompose(true)
+ .withPull(false)
+ .waitingFor("solbroker_1",
+ Wait.forLogMessage(".*System startup complete.*", 1) );
+
+ @Container
+ public static final DockerComposeContainer COMPOSE_CONTAINER_KAFKA =
+ new DockerComposeContainer(
+ new File(FULL_DOCKER_COMPOSE_FILE_PATH + "docker-compose-kafka-confluent.yml"))
+ .withEnv("KAFKA_TOPIC", KAFKA_SINK_TOPIC)
+ .withEnv("KAFKA_HOST", COMPOSE_CONTAINER_PUBSUBPLUS.getServiceHost("solbroker_1", 8080))
+ .withLocalCompose(true)
+ .waitingFor("schema-registry_1",
+ Wait.forHttp("/subjects").forStatusCode(200));
+
+ @BeforeAll
+ static void checkContainer() {
+ String host = COMPOSE_CONTAINER_PUBSUBPLUS.getServiceHost("solbroker_1", 8080);
+ assertNotNull(host);
+ }
+}
+
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/SinkConnectorIT.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/SinkConnectorIT.java
new file mode 100644
index 0000000..2b869ad
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/SinkConnectorIT.java
@@ -0,0 +1,357 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import org.apache.kafka.clients.producer.RecordMetadata;
+import org.junit.jupiter.api.AfterAll;
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Disabled;
+import org.junit.jupiter.api.DisplayName;
+import org.junit.jupiter.api.Nested;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.TestInstance;
+import org.junit.jupiter.api.TestInstance.Lifecycle;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.testcontainers.shaded.com.google.common.collect.ImmutableMap;
+
+import com.solacesystems.jcsmp.BytesXMLMessage;
+import com.solacesystems.jcsmp.JCSMPException;
+import com.solacesystems.jcsmp.SDTException;
+import com.solacesystems.jcsmp.SDTMap;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.fail;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+public class SinkConnectorIT extends DockerizedPlatformSetupApache implements TestConstants {
+
+ static Logger logger = LoggerFactory.getLogger(SinkConnectorIT.class.getName());
+  // connectorDeployment creates a Kafka test topic "kafkaTestTopic", which is used by the tests below
+ static SolaceConnectorDeployment connectorDeployment = new SolaceConnectorDeployment();
+ static TestKafkaProducer kafkaProducer = new TestKafkaProducer(connectorDeployment.kafkaTestTopic);
+ static TestSolaceConsumer solaceConsumer = new TestSolaceConsumer();
+ // Used to request additional verification types
+ static enum AdditionalCheck { ATTACHMENTBYTEBUFFER, CORRELATIONID }
+
+ ////////////////////////////////////////////////////
+ // Main setup/teardown
+
+ @BeforeAll
+ static void setUp() {
+ try {
+ connectorDeployment.waitForConnectorRestIFUp();
+ connectorDeployment.provisionKafkaTestTopic();
+ // Start consumer
+ // Ensure test queue exists on PubSub+
+ solaceConsumer.initialize("tcp://" + MessagingServiceFullLocalSetupConfluent.COMPOSE_CONTAINER_PUBSUBPLUS
+ .getServiceHost("solbroker_1", 55555) + ":55555", "default", "default", "default");
+ solaceConsumer.provisionQueue(SOL_QUEUE);
+ solaceConsumer.start();
+ kafkaProducer.start();
+ Thread.sleep(1000l);
+ } catch (JCSMPException | InterruptedException e1) {
+ e1.printStackTrace();
+ }
+ }
+
+ @AfterAll
+ static void cleanUp() {
+ kafkaProducer.close();
+ solaceConsumer.stop();
+ }
+
+
+
+ ////////////////////////////////////////////////////
+ // Test types
+
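+  // Sends a single record to Kafka, then verifies that the connector delivered it to the expected
+  // PubSub+ queue and/or topics with the Kafka topic/partition/offset user headers and any requested
+  // additional payload checks.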
+ void messageToKafkaTest(String expectedSolaceQueue, String[] expectedSolaceTopics, String kafkaKey, String kafkaValue,
+                  Map<AdditionalCheck, String> additionalChecks) {
+ try {
+ // Clean catch queues first
+      // TODO: fix possible concurrency issue with cleaning/writing the queue later
+ TestSolaceConsumer.solaceReceivedQueueMessages.clear();
+ TestSolaceConsumer.solaceReceivedTopicMessages.clear();
+
+ // Received messages
+      List<BytesXMLMessage> receivedMessages = new ArrayList<>();
+
+ // Send Kafka message
+ RecordMetadata metadata = kafkaProducer.sendMessageToKafka(kafkaKey, kafkaValue);
+ assertNotNull(metadata);
+
+ // Wait for PubSub+ to report messages - populate queue and topics if provided
+ if (expectedSolaceQueue != null) {
+ BytesXMLMessage queueMessage = TestSolaceConsumer.solaceReceivedQueueMessages.poll(5,TimeUnit.SECONDS);
+ assertNotNull(queueMessage);
+ receivedMessages.add(queueMessage);
+ } else {
+ assert(TestSolaceConsumer.solaceReceivedQueueMessages.size() == 0);
+ }
+ for(String s : expectedSolaceTopics) {
+ BytesXMLMessage newTopicMessage = TestSolaceConsumer.solaceReceivedTopicMessages.poll(5,TimeUnit.SECONDS);
+ assertNotNull(newTopicMessage);
+ receivedMessages.add(newTopicMessage);
+ }
+
+ // Evaluate messages
+ // ensure each solacetopic got a respective message
+ for(String topicname : expectedSolaceTopics) {
+ boolean topicFound = false;
+ for (BytesXMLMessage message : receivedMessages) {
+ if (message.getDestination().getName().equals(topicname)) {
+ topicFound = true;
+ break;
+ }
+ }
+ if (!topicFound) fail("Nothing was delivered to topic " + topicname);
+ }
+ // check message contents
+ for (BytesXMLMessage message : receivedMessages) {
+ SDTMap userHeader = message.getProperties();
+ assert(userHeader.getString("k_topic").contentEquals(metadata.topic()));
+ assert(userHeader.getString("k_partition").contentEquals(Long.toString(metadata.partition())));
+ assert(userHeader.getString("k_offset").contentEquals(Long.toString(metadata.offset())));
+ assert(message.getApplicationMessageType().contains(metadata.topic()));
+ // additional checks as requested
+ if (additionalChecks != null) {
+          for (Map.Entry<AdditionalCheck, String> check : additionalChecks.entrySet()) {
+ if (check.getKey() == AdditionalCheck.ATTACHMENTBYTEBUFFER) {
+ // Verify contents of the message AttachmentByteBuffer
+ assert(Arrays.equals((byte[])message.getAttachmentByteBuffer().array(),check.getValue().getBytes()));
+ }
+ if (check.getKey() == AdditionalCheck.CORRELATIONID) {
+ // Verify contents of the message correlationId
+ assert(message.getCorrelationId().contentEquals(check.getValue()));
+ }
+ }
+ }
+ }
+
+ } catch (InterruptedException e) {
+ e.printStackTrace();
+ } catch (SDTException e) {
+ e.printStackTrace();
+ }
+ }
+
+ ////////////////////////////////////////////////////
+ // Scenarios
+
+ @DisplayName("Sink SimpleMessageProcessor tests")
+ @Nested
+ @TestInstance(Lifecycle.PER_CLASS)
+ class SinkConnectorSimpleMessageProcessorTests {
+
+ String topics[] = {SOL_ROOT_TOPIC+"/TestTopic1/SubTopic", SOL_ROOT_TOPIC+"/TestTopic2/SubTopic"};
+
+ @BeforeAll
+ void setUp() {
+ Properties prop = new Properties();
+ prop.setProperty("sol.record_processor_class", "com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleRecordProcessor");
+ prop.setProperty("sol.dynamic_destination", "false");
+ prop.setProperty("sol.topics", String.join(", ", topics));
+ prop.setProperty("sol.queue", SOL_QUEUE);
+ connectorDeployment.startConnector(prop);
+ }
+
+
+ @DisplayName("TextMessage-QueueAndTopics-SolSampleSimpleMessageProcessor")
+ @Test
+ void kafkaConsumerTextMessageToTopicTest() {
+ messageToKafkaTest(SOL_QUEUE, topics,
+ // kafka key and value
+ "Key", "Hello TextMessageToTopicTest world!",
+ // additional checks
+ ImmutableMap.of(AdditionalCheck.ATTACHMENTBYTEBUFFER, "Hello TextMessageToTopicTest world!"));
+ }
+ }
+
+
+ @DisplayName("Sink KeyedMessageProcessor-NONE tests")
+ @Nested
+ @TestInstance(Lifecycle.PER_CLASS)
+ class SinkConnectorNoneKeyedMessageProcessorTests {
+
+ String topics[] = {SOL_ROOT_TOPIC+"/TestTopic1/SubTopic", SOL_ROOT_TOPIC+"/TestTopic2/SubTopic"};
+
+ @BeforeAll
+ void setUp() {
+ Properties prop = new Properties();
+ prop.setProperty("sol.record_processor_class", "com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleKeyedRecordProcessor");
+ prop.setProperty("sol.dynamic_destination", "false");
+ prop.setProperty("sol.topics", String.join(", ", topics));
+ prop.setProperty("sol.kafka_message_key", "NONE");
+ prop.setProperty("sol.queue", SOL_QUEUE);
+ connectorDeployment.startConnector(prop);
+ }
+
+
+ @DisplayName("TextMessage-QueueAndTopics-KeyedMessageProcessor-NONE")
+ @Test
+ void kafkaConsumerTextMessageToTopicTest() {
+ messageToKafkaTest(SOL_QUEUE, topics,
+ // kafka key and value
+ "Key", "Hello TextMessageToTopicTest world!",
+ // additional checks
+ ImmutableMap.of(AdditionalCheck.ATTACHMENTBYTEBUFFER, "Hello TextMessageToTopicTest world!"));
+ }
+ }
+
+
+ @DisplayName("Sink KeyedMessageProcessor-DESTINATION tests")
+ @Nested
+ @TestInstance(Lifecycle.PER_CLASS)
+ class SinkConnectorDestinationKeyedMessageProcessorTests {
+
+ String topics[] = {SOL_ROOT_TOPIC+"/TestTopic1/SubTopic", SOL_ROOT_TOPIC+"/TestTopic2/SubTopic"};
+
+ @BeforeAll
+ void setUp() {
+ Properties prop = new Properties();
+ prop.setProperty("sol.record_processor_class", "com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleKeyedRecordProcessor");
+ prop.setProperty("sol.dynamic_destination", "false");
+ prop.setProperty("sol.topics", String.join(", ", topics));
+ prop.setProperty("sol.kafka_message_key", "DESTINATION");
+ prop.setProperty("sol.queue", SOL_QUEUE);
+ connectorDeployment.startConnector(prop);
+ }
+
+
+ @DisplayName("TextMessage-QueueAndTopics-KeyedMessageProcessor-DESTINATION")
+ @Test
+ void kafkaConsumerTextMessageToTopicTest() {
+ messageToKafkaTest(SOL_QUEUE, topics,
+ // kafka key and value
+ "Destination", "Hello TextMessageToTopicTest world!",
+ // additional checks with expected values
+ ImmutableMap.of(AdditionalCheck.ATTACHMENTBYTEBUFFER, "Hello TextMessageToTopicTest world!",
+ AdditionalCheck.CORRELATIONID, "Destination"));
+ }
+ }
+
+
+ @DisplayName("Sink KeyedMessageProcessor-CORRELATION_ID tests")
+ @Nested
+ @TestInstance(Lifecycle.PER_CLASS)
+ class SinkConnectorCorrelationIdKeyedMessageProcessorTests {
+
+ String topics[] = {SOL_ROOT_TOPIC+"/TestTopic1/SubTopic", SOL_ROOT_TOPIC+"/TestTopic2/SubTopic"};
+
+ @BeforeAll
+ void setUp() {
+ Properties prop = new Properties();
+ prop.setProperty("sol.record_processor_class", "com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleKeyedRecordProcessor");
+ prop.setProperty("sol.dynamic_destination", "false");
+ prop.setProperty("sol.topics", String.join(", ", topics));
+ prop.setProperty("sol.kafka_message_key", "CORRELATION_ID");
+ prop.setProperty("sol.queue", SOL_QUEUE);
+ connectorDeployment.startConnector(prop);
+ }
+
+
+ @DisplayName("TextMessage-QueueAndTopics-KeyedMessageProcessor-CORRELATION_ID")
+ @Test
+ void kafkaConsumerTextMessageToTopicTest() {
+ messageToKafkaTest(SOL_QUEUE, topics,
+ // kafka key and value
+ "TestCorrelationId", "Hello TextMessageToTopicTest world!",
+ // additional checks with expected values
+ ImmutableMap.of(AdditionalCheck.ATTACHMENTBYTEBUFFER, "Hello TextMessageToTopicTest world!",
+ AdditionalCheck.CORRELATIONID, "TestCorrelationId"));
+ }
+ }
+
+
+ @DisplayName("Sink KeyedMessageProcessor-CORRELATION_ID_AS_BYTES tests")
+ @Nested
+ @TestInstance(Lifecycle.PER_CLASS)
+ class SinkConnectorCorrelationIdAsBytesKeyedMessageProcessorTests {
+
+ String topics[] = {SOL_ROOT_TOPIC+"/TestTopic1/SubTopic", SOL_ROOT_TOPIC+"/TestTopic2/SubTopic"};
+
+ @BeforeAll
+ void setUp() {
+ Properties prop = new Properties();
+ prop.setProperty("sol.record_processor_class", "com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleKeyedRecordProcessor");
+ prop.setProperty("sol.dynamic_destination", "false");
+ prop.setProperty("sol.topics", String.join(", ", topics));
+ prop.setProperty("sol.kafka_message_key", "CORRELATION_ID_AS_BYTES");
+ prop.setProperty("sol.queue", SOL_QUEUE);
+ connectorDeployment.startConnector(prop);
+ }
+
+
+ @DisplayName("TextMessage-QueueAndTopics-KeyedMessageProcessor-CORRELATION_ID_AS_BYTES")
+ @Test
+ void kafkaConsumerTextMessageToTopicTest() {
+ messageToKafkaTest(SOL_QUEUE, topics,
+ // kafka key and value
+ "TestCorrelationId", "Hello TextMessageToTopicTest world!",
+ // additional checks with expected values
+ ImmutableMap.of(AdditionalCheck.ATTACHMENTBYTEBUFFER, "Hello TextMessageToTopicTest world!",
+ AdditionalCheck.CORRELATIONID, "TestCorrelationId"));
+ }
+ }
+
+
+ @DisplayName("Sink DynamicDestinationMessageProcessor tests")
+ @Nested
+ @TestInstance(Lifecycle.PER_CLASS)
+ class SinkDynamicDestinationMessageProcessorMessageProcessorTests {
+
+ String topics[] = {SOL_ROOT_TOPIC+"/TestTopic1/SubTopic", SOL_ROOT_TOPIC+"/TestTopic2/SubTopic"};
+
+ @BeforeAll
+ void setUp() {
+ Properties prop = new Properties();
+ prop.setProperty("sol.record_processor_class", "com.solace.connector.kafka.connect.sink.recordprocessor.SolDynamicDestinationRecordProcessor");
+ prop.setProperty("sol.dynamic_destination", "true");
+ prop.setProperty("sol.topics", String.join(", ", topics));
+ prop.setProperty("sol.queue", SOL_QUEUE);
+ connectorDeployment.startConnector(prop);
+ }
+
+
+ @DisplayName("TextMessage-DynamicDestinationMessageProcessor-start")
+ @Test
+ void kafkaConsumerTextMessageToTopicTest() {
+ messageToKafkaTest(
+ // expected list of delivery queue and topics
+ null, new String[] {"ctrl/bus/1234/start"},
+ // kafka key and value
+ "ignore", "1234:start",
+ // additional checks with expected values
+ ImmutableMap.of(AdditionalCheck.ATTACHMENTBYTEBUFFER, "start"));
+ }
+
+ @DisplayName("TextMessage-DynamicDestinationMessageProcessor-stop")
+ @Test
+ void kafkaConsumerTextMessageToTopicTest2() {
+ messageToKafkaTest(
+ // expected list of delivery queue and topics
+ null, new String[] {"ctrl/bus/1234/stop"},
+ // kafka key and value
+ "ignore", "1234:stop",
+ // additional checks with expected values
+ ImmutableMap.of(AdditionalCheck.ATTACHMENTBYTEBUFFER, "stop"));
+ }
+
+ @DisplayName("TextMessage-DynamicDestinationMessageProcessor-other")
+ @Test
+ void kafkaConsumerTextMessageToTopicTest3() {
+ messageToKafkaTest(
+ // expected list of delivery queue and topics
+ null, new String[] {"comms/bus/1234"},
+ // kafka key and value
+ "ignore", "1234:other",
+ // additional checks with expected values
+ ImmutableMap.of(AdditionalCheck.ATTACHMENTBYTEBUFFER, "other"));
+ }
+ }
+
+}
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/SolaceConnectorDeployment.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/SolaceConnectorDeployment.java
new file mode 100644
index 0000000..d4df7be
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/SolaceConnectorDeployment.java
@@ -0,0 +1,147 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import java.io.File;
+import java.io.IOException;
+import java.time.Instant;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import org.apache.commons.io.FileUtils;
+import org.apache.kafka.clients.admin.AdminClient;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.gson.Gson;
+import com.google.gson.JsonElement;
+import com.google.gson.JsonObject;
+import com.google.gson.JsonParser;
+
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class SolaceConnectorDeployment implements TestConstants {
+
+ static Logger logger = LoggerFactory.getLogger(SolaceConnectorDeployment.class.getName());
+
+ static String kafkaTestTopic = KAFKA_SINK_TOPIC + "-" + Instant.now().getEpochSecond();
+ OkHttpClient client = new OkHttpClient();
+ String connectorAddress = new TestConfigProperties().getProperty("kafka.connect_rest_url");
+
+ public void waitForConnectorRestIFUp() {
+ Request request = new Request.Builder().url("http://" + connectorAddress + "/connector-plugins").build();
+ Response response = null;
+ do {
+ try {
+ Thread.sleep(1000l);
+ response = client.newCall(request).execute();
+ } catch (IOException | InterruptedException e) {
+ // Continue looping
+ }
+ } while (response == null || !response.isSuccessful());
+ }
+
+ public void provisionKafkaTestTopic() {
+ // Create a new kafka test topic to use
+ String bootstrapServers = MessagingServiceFullLocalSetupConfluent.COMPOSE_CONTAINER_KAFKA.getServiceHost("kafka_1",
+ 39092) + ":39092";
+ Properties properties = new Properties();
+ properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+ AdminClient adminClient = AdminClient.create(properties);
+ NewTopic newTopic = new NewTopic(kafkaTestTopic, 1, (short) 1); // new NewTopic(topicName, numPartitions,
+ // replicationFactor)
+    List<NewTopic> newTopics = new ArrayList<>();
+ newTopics.add(newTopic);
+ adminClient.createTopics(newTopics);
+ adminClient.close();
+ }
+
+ void startConnector() {
+ startConnector(null); // Defaults only, no override
+ }
+
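+  // Builds the connector configuration from the packaged JSON sample, applies test defaults and any
+  // per-test overrides, then (re)creates the connector through the Kafka Connect REST API.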
+ void startConnector(Properties props) {
+ String configJson = null;
+ // Prep config files
+ try {
+ // Configure .json connector params
+ File jsonFile = new File(
+ UNZIPPEDCONNECTORDESTINATION + "/" + Tools.getUnzippedConnectorDirName() + "/" + CONNECTORJSONPROPERTIESFILE);
+ String jsonString = FileUtils.readFileToString(jsonFile);
+ JsonElement jtree = new JsonParser().parse(jsonString);
+ JsonElement jconfig = jtree.getAsJsonObject().get("config");
+ JsonObject jobject = jconfig.getAsJsonObject();
+ // Set properties defaults
+ jobject.addProperty("sol.host", "tcp://" + new TestConfigProperties().getProperty("sol.host") + ":55555");
+ jobject.addProperty("sol.username", SOL_ADMINUSER_NAME);
+ jobject.addProperty("sol.password", SOL_ADMINUSER_PW);
+ jobject.addProperty("sol.vpn_name", SOL_VPN);
+ jobject.addProperty("topics", kafkaTestTopic);
+ jobject.addProperty("sol.topics", SOL_TOPICS);
+ jobject.addProperty("sol.autoflush.size", "1");
+ jobject.addProperty("sol.message_processor_class", CONN_MSGPROC_CLASS);
+ jobject.addProperty("sol.kafka_message_key", CONN_KAFKA_MSGKEY);
+ jobject.addProperty("value.converter", "org.apache.kafka.connect.converters.ByteArrayConverter");
+ jobject.addProperty("key.converter", "org.apache.kafka.connect.storage.StringConverter");
+ // Override properties if provided
+ if (props != null) {
+ props.forEach((key, value) -> {
+ jobject.addProperty((String) key, (String) value);
+ });
+ }
+ Gson gson = new Gson();
+ configJson = gson.toJson(jtree);
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+
+ // Configure and start the solace connector
+ try {
+ // check presence of Solace plugin: curl
+ // http://18.218.82.209:8083/connector-plugins | jq
+ Request request = new Request.Builder().url("http://" + connectorAddress + "/connector-plugins").build();
+ Response response;
+ response = client.newCall(request).execute();
+ assert (response.isSuccessful());
+ String results = response.body().string();
+ logger.info("Available connector plugins: " + results);
+ assert (results.contains("solace"));
+
+ // Delete a running connector, if any
+ Request deleterequest = new Request.Builder()
+ .url("http://" + connectorAddress + "/connectors/solaceSinkConnector").delete().build();
+ Response deleteresponse = client.newCall(deleterequest).execute();
+ logger.info("Delete response: " + deleteresponse);
+
+ // configure plugin: curl -X POST -H "Content-Type: application/json" -d
+ // @solace_source_properties.json http://18.218.82.209:8083/connectors
+ Request configrequest = new Request.Builder().url("http://" + connectorAddress + "/connectors")
+ .post(RequestBody.create(configJson, MediaType.parse("application/json"))).build();
+ Response configresponse = client.newCall(configrequest).execute();
+ // if (!configresponse.isSuccessful()) throw new IOException("Unexpected code "
+ // + configresponse);
+ String configresults = configresponse.body().string();
+ logger.info("Connector config results: " + configresults);
+ // check success
+ Thread.sleep(5000); // Give some time to start
+ } catch (IOException e) {
+ e.printStackTrace();
+ } catch (InterruptedException e) {
+      Thread.currentThread().interrupt();
+      e.printStackTrace();
+ }
+ }
+
+}
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestConfigProperties.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestConfigProperties.java
new file mode 100644
index 0000000..f44c291
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestConfigProperties.java
@@ -0,0 +1,66 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import java.io.FileReader;
+import java.io.IOException;
+import java.util.Properties;
+
+public class TestConfigProperties {
+
+ static String testConfigPropertiesFile = "src/integrationTest/resources/manual-setup.properties";
+ // This class helps determine the docker host's IP address and avoids getting "localhost"
+ static class DockerHost {
+ static public String getIpAddress() {
+ String dockerReportedAddress = MessagingServiceFullLocalSetupConfluent.COMPOSE_CONTAINER_KAFKA
+ .getServiceHost("kafka_1", 9092);
+      if ("localhost".equals(dockerReportedAddress) || "127.0.0.1".equals(dockerReportedAddress)) {
+        return Tools.getIpAddress();
+      } else {
+        return dockerReportedAddress;
+      }
+ }
+ }
+
+
+ private Properties properties = new Properties();
+
+ TestConfigProperties() {
+ try(FileReader fileReader = new FileReader(testConfigPropertiesFile)){
+ properties.load(fileReader);
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+ }
+
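+  // Returns the configured property if present in manual-setup.properties, otherwise falls back to defaults suitable for the local dockerized test setup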
+ String getProperty(String name) {
+ String configuredProperty = properties.getProperty(name);
+ if (configuredProperty != null) {
+ return configuredProperty;
+ }
+ switch(name) {
+ case "sol.host":
+ // No port here
+ return DockerHost.getIpAddress();
+
+ case "sol.username":
+ return "default";
+
+ case "sol.password":
+ return "default";
+
+ case "sol.vpn_name":
+ return "default";
+
+ case "kafka.connect_rest_url":
+ return (DockerHost.getIpAddress() + ":28083");
+
+ case "kafka.bootstrap_servers":
+ return (DockerHost.getIpAddress() + ":39092");
+
+ default:
+ return null;
+ }
+ }
+}
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestConstants.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestConstants.java
new file mode 100644
index 0000000..2b6dbbd
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestConstants.java
@@ -0,0 +1,31 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+public interface TestConstants {
+
+ public static final String PUBSUB_TAG = "latest";
+ public static final String PUBSUB_HOSTNAME = "solbroker";
+ public static final String PUBSUB_NETWORK_NAME = "solace_msg_network";
+ public static final String FULL_DOCKER_COMPOSE_FILE_PATH = "src/integrationTest/resources/";
+ public static final String[] SERVICES = new String[]{"solbroker"};
+ public static final long MAX_STARTUP_TIMEOUT_MSEC = 120000l;
+ public static final String DIRECT_MESSAGING_HTTP_HEALTH_CHECK_URI = "/health-check/direct-active";
+ public static final int DIRECT_MESSAGING_HTTP_HEALTH_CHECK_PORT = 5550;
+ public static final String GUARANTEED_MESSAGING_HTTP_HEALTH_CHECK_URI = "/health-check/guaranteed-active";
+ public static final int GUARANTEED_MESSAGING_HTTP_HEALTH_CHECK_PORT = 5550;
+
+ public static final String CONNECTORSOURCE = "build/distributions/pubsubplus-connector-kafka-sink.zip";
+
+ public static final String UNZIPPEDCONNECTORDESTINATION = "src/integrationTest/resources";
+ public static final String CONNECTORPROPERTIESFILE = "etc/solace_sink.properties";
+ public static final String CONNECTORJSONPROPERTIESFILE = "etc/solace_sink_properties.json";
+
+ public static final String SOL_ADMINUSER_NAME = "default";
+ public static final String SOL_ADMINUSER_PW = "default";
+ public static final String SOL_VPN = "default";
+ public static final String KAFKA_SINK_TOPIC = "kafka-test-topic-sink";
+ public static final String SOL_ROOT_TOPIC = "pubsubplus-test-topic-sink";
+ public static final String SOL_TOPICS = "pubsubplus-test-topic-sink";
+ public static final String SOL_QUEUE = "pubsubplus-test-queue-sink";
+  public static final String CONN_MSGPROC_CLASS = "com.solace.connector.kafka.connect.sink.recordprocessor.SolSimpleRecordProcessor";
+ public static final String CONN_KAFKA_MSGKEY = "DESTINATION";
+}
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestKafkaProducer.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestKafkaProducer.java
new file mode 100644
index 0000000..0c8cc8c
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestKafkaProducer.java
@@ -0,0 +1,66 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.clients.producer.RecordMetadata;
+import org.apache.kafka.common.serialization.ByteArraySerializer;
+import org.apache.kafka.common.serialization.StringSerializer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Properties;
+import java.util.concurrent.ExecutionException;
+
+public class TestKafkaProducer implements TestConstants {
+
+ static Logger logger = LoggerFactory.getLogger(TestKafkaProducer.class.getName());
+ private String kafkaTopic;
+  private KafkaProducer<byte[], byte[]> producer;
+
+ public TestKafkaProducer(String kafkaTestTopic) {
+ kafkaTopic = kafkaTestTopic;
+ }
+
+ public void start() {
+ String bootstrapServers = MessagingServiceFullLocalSetupConfluent.COMPOSE_CONTAINER_KAFKA.getServiceHost("kafka_1", 39092)
+ + ":39092";
+
+ // create Producer properties
+ Properties properties = new Properties();
+ properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
+ properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
+ properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
+
+ // create safe Producer
+ properties.setProperty(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
+ properties.setProperty(ProducerConfig.ACKS_CONFIG, "all");
+ properties.setProperty(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
+ properties.setProperty(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
+
+ // high throughput producer (at the expense of a bit of latency and CPU usage)
+// properties.setProperty(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
+// properties.setProperty(ProducerConfig.LINGER_MS_CONFIG, "20");
+// properties.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32*1024)); // 32 KB batch size
+
+ // create the producer
+    producer = new KafkaProducer<>(properties);
+ }
+
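+  // Sends a single key/value record and blocks until the broker acknowledges it, returning the record metadata used for verification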
+ public RecordMetadata sendMessageToKafka(String msgKey, String msgValue) {
+ assert(msgValue != null);
+ RecordMetadata recordmetadata = null;
+ try {
+ recordmetadata = producer.send(new ProducerRecord<>(kafkaTopic, msgKey.getBytes(), msgValue.getBytes())).get();
+ logger.info("Message sent to Kafka topic " + kafkaTopic);
+ } catch (InterruptedException | ExecutionException e) {
+ e.printStackTrace();
+ }
+ return recordmetadata;
+ }
+
+ public void close() {
+ producer.close();
+ }
+}
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestSolaceConsumer.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestSolaceConsumer.java
new file mode 100644
index 0000000..1c56dd7
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/TestSolaceConsumer.java
@@ -0,0 +1,121 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.solacesystems.jcsmp.BytesXMLMessage;
+import com.solacesystems.jcsmp.ConsumerFlowProperties;
+import com.solacesystems.jcsmp.EndpointProperties;
+import com.solacesystems.jcsmp.FlowReceiver;
+import com.solacesystems.jcsmp.JCSMPException;
+import com.solacesystems.jcsmp.JCSMPFactory;
+import com.solacesystems.jcsmp.JCSMPProperties;
+import com.solacesystems.jcsmp.JCSMPSession;
+import com.solacesystems.jcsmp.Queue;
+import com.solacesystems.jcsmp.XMLMessageConsumer;
+import com.solacesystems.jcsmp.XMLMessageListener;
+
+public class TestSolaceConsumer {
+
+ // Queue to communicate received messages
+  public static BlockingQueue<BytesXMLMessage> solaceReceivedTopicMessages = new ArrayBlockingQueue<>(10);
+  public static BlockingQueue<BytesXMLMessage> solaceReceivedQueueMessages = new ArrayBlockingQueue<>(10);
+
+ static Logger logger = LoggerFactory.getLogger(SinkConnectorIT.class.getName());
+ private JCSMPSession session;
+ private XMLMessageConsumer topicSubscriber;
+ private FlowReceiver queueConsumer;
+ private String queueName;
+
+ public void initialize(String host, String user, String password, String messagevpn) {
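+    // Note: connection details are currently taken from TestConfigProperties rather than from the method parameters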
+ TestConfigProperties configProps = new TestConfigProperties();
+ final JCSMPProperties properties = new JCSMPProperties();
+ properties.setProperty(JCSMPProperties.HOST, "tcp://" + configProps.getProperty("sol.host") + ":55555"); // host:port
+ properties.setProperty(JCSMPProperties.USERNAME, configProps.getProperty("sol.username")); // client-username
+ properties.setProperty(JCSMPProperties.VPN_NAME, configProps.getProperty("sol.vpn_name")); // message-vpn
+ properties.setProperty(JCSMPProperties.PASSWORD, configProps.getProperty("sol.password")); // client-password
+ try {
+ session = JCSMPFactory.onlyInstance().createSession(properties);
+ session.connect();
+ } catch (JCSMPException e1) {
+ e1.printStackTrace();
+ }
+ }
+
+ public void provisionQueue(String queueName) throws JCSMPException {
+ this.queueName = queueName;
+ final Queue queue = JCSMPFactory.onlyInstance().createQueue(queueName);
+ // Provision queue in case it doesn't exist, and do not fail if it already exists
+ final EndpointProperties endpointProps = new EndpointProperties();
+ endpointProps.setPermission(EndpointProperties.PERMISSION_CONSUME);
+ endpointProps.setAccessType(EndpointProperties.ACCESSTYPE_EXCLUSIVE);
+ session.provision(queue, endpointProps, JCSMPSession.FLAG_IGNORE_ALREADY_EXISTS);
+ logger.info("Ensured Solace queue " + queueName + " exists.");
+ }
+
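+  // Starts a topic subscriber (root test topic plus the dynamic destination topics) and a client-ack queue flow; received messages are handed to the static queues above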
+ public void start() throws JCSMPException {
+
+ // Create and start topic subscriber
+
+ topicSubscriber = session.getMessageConsumer(new XMLMessageListener() {
+ @Override
+ public void onReceive(BytesXMLMessage msg) {
+ logger.info("Message received to topic: " + msg.getDestination());
+ solaceReceivedTopicMessages.add(msg);
+ }
+ @Override
+ public void onException(JCSMPException e) {
+ System.out.printf("Consumer received exception: %s%n",e);
+ }
+ });
+ // Subscribe to all topics starting a common root
+ session.addSubscription(JCSMPFactory.onlyInstance().createTopic(TestConstants.SOL_ROOT_TOPIC + "/>"));
+ // Also add subscriptions for DynamicDestination record processor testing
+ session.addSubscription(JCSMPFactory.onlyInstance().createTopic("ctrl" + "/>"));
+ session.addSubscription(JCSMPFactory.onlyInstance().createTopic("comms" + "/>"));
+ logger.info("Topic subscriber connected. Awaiting message...");
+ topicSubscriber.start();
+
+ // Create and start queue consumer
+ final ConsumerFlowProperties flow_prop = new ConsumerFlowProperties();
+ flow_prop.setEndpoint(JCSMPFactory.onlyInstance().createQueue(queueName));
+ flow_prop.setAckMode(JCSMPProperties.SUPPORTED_MESSAGE_ACK_CLIENT);
+ EndpointProperties endpoint_props = new EndpointProperties();
+ endpoint_props.setAccessType(EndpointProperties.ACCESSTYPE_EXCLUSIVE);
+ queueConsumer = session.createFlow(new XMLMessageListener() {
+ @Override
+ public void onReceive(BytesXMLMessage msg) {
+ logger.info("Queue message received");
+ solaceReceivedQueueMessages.add(msg);
+ msg.ackMessage();
+ }
+
+ @Override
+ public void onException(JCSMPException e) {
+ System.out.printf("Consumer received exception: %s%n", e);
+ }
+ }, flow_prop, endpoint_props);
+
+ // Start the consumer
+ logger.info("Queue receiver connected. Awaiting message...");
+ queueConsumer.start();
+ }
+
+ public void stop() {
+ queueConsumer.stop();
+ topicSubscriber.stop();
+ session.closeSession();
+ }
+
+}
diff --git a/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/Tools.java b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/Tools.java
new file mode 100644
index 0000000..c187830
--- /dev/null
+++ b/src/integrationTest/java/com/solace/connector/kafka/connect/sink/it/Tools.java
@@ -0,0 +1,49 @@
+package com.solace.connector.kafka.connect.sink.it;
+
+import java.io.IOException;
+import java.net.InterfaceAddress;
+import java.net.NetworkInterface;
+import java.net.SocketException;
+import java.nio.file.DirectoryStream;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.Set;
+
+public class Tools {
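+  // Returns a non-loopback IPv4 address of an active network interface, used where "localhost" would not be reachable from inside the containers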
+ static public String getIpAddress() {
+    Set<String> hostAddresses = new HashSet<>();
+    try {
+      for (NetworkInterface ni : Collections.list(NetworkInterface.getNetworkInterfaces())) {
+        if (!ni.isLoopback() && ni.isUp() && ni.getHardwareAddress() != null) {
+          for (InterfaceAddress ia : ni.getInterfaceAddresses()) {
+            if (ia.getBroadcast() != null) {  // If limited to IPV4
+              hostAddresses.add(ia.getAddress().getHostAddress());
+            }
+          }
+        }
+      }
+    } catch (SocketException e) {
+      // Ignore interfaces that cannot be queried
+    }
+    return hostAddresses.iterator().next();
+ }
+
+ static public String getUnzippedConnectorDirName() {
+ String connectorUnzippedPath = null;
+ try {
+      DirectoryStream<Path> dirs = Files.newDirectoryStream(
+ Paths.get(TestConstants.UNZIPPEDCONNECTORDESTINATION), "pubsubplus-connector-kafka-*");
+ for (Path entry: dirs) {
+ connectorUnzippedPath = entry.toString();
+ break; //expecting only one
+ }
+ } catch (IOException e) {
+ e.printStackTrace();
+ }
+ if (connectorUnzippedPath.contains("\\")) {
+ return connectorUnzippedPath.substring(connectorUnzippedPath.lastIndexOf("\\") + 1);
+ }
+ return connectorUnzippedPath.substring(connectorUnzippedPath.lastIndexOf("/") + 1);
+ }
+}
diff --git a/src/integrationTest/resources/docker-compose-kafka-apache.yml b/src/integrationTest/resources/docker-compose-kafka-apache.yml
new file mode 100644
index 0000000..18c2a2e
--- /dev/null
+++ b/src/integrationTest/resources/docker-compose-kafka-apache.yml
@@ -0,0 +1,29 @@
+version: '3.7'
+
+services:
+ zookeeper:
+ image: bitnami/zookeeper:3
+ ports:
+ - 2181:2181
+ environment:
+ ZOOKEEPER_CLIENT_PORT: 2181
+ ZOOKEEPER_TICK_TIME: 2000
+ ALLOW_ANONYMOUS_LOGIN: 'yes'
+ kafka:
+ image: bitnami/kafka:2
+ ports:
+ - 9092:9092
+ - 29092:29092
+ - 39092:39092
+ environment:
+ KAFKA_CFG_BROKER_ID: 1
+ KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
+ ALLOW_PLAINTEXT_LISTENER: 'yes'
+ KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,PLAINTEXT_EXTHOST:PLAINTEXT
+ KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,PLAINTEXT_HOST://:29092,PLAINTEXT_EXTHOST://:39092
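+      # Three listeners: kafka:9092 inside the compose network, localhost:29092 on the docker host, and $KAFKA_HOST:39092 advertised for clients outside this network (tests and the Connect container)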
+ KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092,PLAINTEXT_EXTHOST://$KAFKA_HOST:39092
+# KAFKA_CFG_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092,PLAINTEXT_EXTHOST://$KAFKA_HOST:39092
+# KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
+# KAFKA_CFG_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
+ depends_on:
+ - zookeeper
diff --git a/src/integrationTest/resources/docker-compose-kafka-confluent.yml b/src/integrationTest/resources/docker-compose-kafka-confluent.yml
new file mode 100644
index 0000000..f474f44
--- /dev/null
+++ b/src/integrationTest/resources/docker-compose-kafka-confluent.yml
@@ -0,0 +1,71 @@
+version: '3.7'
+
+services:
+ zookeeper:
+ image: confluentinc/cp-zookeeper:5.4.0
+ ports:
+ - 2181:2181
+ environment:
+ ZOOKEEPER_CLIENT_PORT: 2181
+ ZOOKEEPER_TICK_TIME: 2000
+ kafka:
+ image: confluentinc/cp-kafka:5.4.0
+ ports:
+ - 9092:9092
+ - 29092:29092
+ - 39092:39092
+ environment:
+ KAFKA_BROKER_ID: 1
+ KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
+ KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT,PLAINTEXT_EXTHOST:PLAINTEXT
+ KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092,PLAINTEXT_EXTHOST://$KAFKA_HOST:39092
+ KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
+ KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
+ KAFKA_TOPIC: $KAFKA_TOPIC
+ depends_on:
+ - zookeeper
+ kafka-setup:
+ image: confluentinc/cp-kafka:5.4.0
+ hostname: kafka-setup
+ depends_on:
+ - kafka
+ - zookeeper
+ command: "bash -c 'echo Waiting for Kafka to be ready... && \
+ cub kafka-ready -b kafka:9092 1 30 && \
+ kafka-topics --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic $KAFKA_TOPIC && \
+ sleep 30'"
+ environment:
+ # The following settings are listed here only to satisfy the image's requirements.
+ # We override the image's `command` anyways, hence this container will not start a broker.
+ KAFKA_BROKER_ID: ignored
+ KAFKA_ZOOKEEPER_CONNECT: ignored
+
+ schema-registry:
+ image: confluentinc/cp-schema-registry:5.4.0
+ ports:
+ - 8081:8081
+ environment:
+ SCHEMA_REGISTRY_HOST_NAME: localhost
+ SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
+ SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka:9092
+ depends_on:
+ - kafka
+
+ control-center:
+ image: confluentinc/cp-enterprise-control-center:latest
+ hostname: control-center
+ depends_on:
+ - zookeeper
+ - kafka
+ - schema-registry
+ ports:
+ - "9021:9021"
+ environment:
+ CONTROL_CENTER_BOOTSTRAP_SERVERS: 'kafka:9092'
+ CONTROL_CENTER_ZOOKEEPER_CONNECT: 'zookeeper:2181'
+ CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
+ CONTROL_CENTER_REPLICATION_FACTOR: 1
+ CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
+ CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
+ CONFLUENT_METRICS_TOPIC_REPLICATION: 1
+ PORT: 9021
\ No newline at end of file
diff --git a/src/integrationTest/resources/docker-compose-solace.yml b/src/integrationTest/resources/docker-compose-solace.yml
new file mode 100644
index 0000000..67b4105
--- /dev/null
+++ b/src/integrationTest/resources/docker-compose-solace.yml
@@ -0,0 +1,25 @@
+version: '3.5'
+
+services:
+ solbroker:
+ image: solace/solace-pubsub-standard:$PUBSUB_TAG
+ hostname: $PUBSUB_HOSTNAME
+ env_file:
+ - ./solace.env
+ ports:
+ - "2222:2222"
+ - "8080:8080"
+ - "55003:55003"
+ - "55443:55443"
+ - "55445:55445"
+ - "55555:55555"
+ - "55556:55556"
+ - "5672:5672"
+ - "5550:5550"
+ - "8008:8008"
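+    # The PubSub+ software broker needs shared memory and raised ulimits to start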
+ shm_size: 2g
+ ulimits:
+ memlock: -1
+ nofile:
+ soft: 2448
+ hard: 42192
diff --git a/src/integrationTest/resources/logback-test.xml b/src/integrationTest/resources/logback-test.xml
new file mode 100644
index 0000000..985c68e
--- /dev/null
+++ b/src/integrationTest/resources/logback-test.xml
@@ -0,0 +1,12 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
+        <encoder>
+            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger -%msg%n%rEx{full, org}</pattern>
+        </encoder>
+    </appender>
+
+    <root level="INFO">
+        <appender-ref ref="STDOUT" />
+    </root>
+</configuration>
\ No newline at end of file
diff --git a/src/integrationTest/resources/manual-setup.properties b/src/integrationTest/resources/manual-setup.properties
new file mode 100644
index 0000000..94b0e6c
--- /dev/null
+++ b/src/integrationTest/resources/manual-setup.properties
@@ -0,0 +1,6 @@
+#sol.host=mr1u6o37qn3lar.-cloud-clmessaging.solace.cloud
+sol.username=test
+sol.password=test
+#sol.vpn_name=b-1
+#kafka.connect_rest_host=A:28083
+#kafka.bootstrap_servers=B:39092
\ No newline at end of file
diff --git a/src/integrationTest/resources/solace.env b/src/integrationTest/resources/solace.env
new file mode 100644
index 0000000..863a835
--- /dev/null
+++ b/src/integrationTest/resources/solace.env
@@ -0,0 +1,4 @@
+username_admin_globalaccesslevel=admin
+username_admin_password=admin
+system_scaling_maxconnectioncount=100
+logging_debug_output=all
\ No newline at end of file
diff --git a/src/main/java/com/solace/sink/connector/SolProducerEventCallbackHandler.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolProducerEventCallbackHandler.java
similarity index 96%
rename from src/main/java/com/solace/sink/connector/SolProducerEventCallbackHandler.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolProducerEventCallbackHandler.java
index 5944efa..328b1b8 100644
--- a/src/main/java/com/solace/sink/connector/SolProducerEventCallbackHandler.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolProducerEventCallbackHandler.java
@@ -17,7 +17,7 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
import com.solacesystems.jcsmp.JCSMPProducerEventHandler;
import com.solacesystems.jcsmp.ProducerEventArgs;
diff --git a/src/main/java/com/solace/sink/connector/SolRecordProcessor.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolRecordProcessorIF.java
similarity index 91%
rename from src/main/java/com/solace/sink/connector/SolRecordProcessor.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolRecordProcessorIF.java
index b6f72e0..ef2df8f 100644
--- a/src/main/java/com/solace/sink/connector/SolRecordProcessor.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolRecordProcessorIF.java
@@ -17,13 +17,13 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
import com.solacesystems.jcsmp.BytesXMLMessage;
import org.apache.kafka.connect.sink.SinkRecord;
-public interface SolRecordProcessor {
+public interface SolRecordProcessorIF {
BytesXMLMessage processRecord(String skey, SinkRecord record);
diff --git a/src/main/java/com/solace/sink/connector/SolSessionEventCallbackHandler.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolSessionEventCallbackHandler.java
similarity index 96%
rename from src/main/java/com/solace/sink/connector/SolSessionEventCallbackHandler.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolSessionEventCallbackHandler.java
index 205b0be..6981134 100644
--- a/src/main/java/com/solace/sink/connector/SolSessionEventCallbackHandler.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolSessionEventCallbackHandler.java
@@ -17,7 +17,7 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
import com.solacesystems.jcsmp.SessionEvent;
import com.solacesystems.jcsmp.SessionEventArgs;
diff --git a/src/main/java/com/solace/sink/connector/SolSessionCreate.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolSessionHandler.java
similarity index 79%
rename from src/main/java/com/solace/sink/connector/SolSessionCreate.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolSessionHandler.java
index 08f7608..00a5a1b 100644
--- a/src/main/java/com/solace/sink/connector/SolSessionCreate.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolSessionHandler.java
@@ -17,37 +17,34 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
-import com.solacesystems.jcsmp.InvalidPropertiesException;
import com.solacesystems.jcsmp.JCSMPChannelProperties;
import com.solacesystems.jcsmp.JCSMPException;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.JCSMPProperties;
import com.solacesystems.jcsmp.JCSMPSession;
+import com.solacesystems.jcsmp.JCSMPSessionStats;
+import com.solacesystems.jcsmp.statistics.StatType;
import com.solacesystems.jcsmp.transaction.TransactedSession;
+import java.util.Enumeration;
+
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-public class SolSessionCreate {
- private static final Logger log = LoggerFactory.getLogger(SolSessionCreate.class);
+public class SolSessionHandler {
+ private static final Logger log = LoggerFactory.getLogger(SolSessionHandler.class);
- private SolaceSinkConfig lconfig;
+ private SolaceSinkConnectorConfig lconfig;
final JCSMPProperties properties = new JCSMPProperties();
final JCSMPChannelProperties chanProperties = new JCSMPChannelProperties();
- private JCSMPSession session;
- private TransactedSession txSession;
-
- private enum KeyHeader {
- NONE, DESTINATION, CORRELATION_ID, CORRELATION_ID_AS_BYTES
- }
+ private JCSMPSession session = null;
+ private TransactedSession txSession = null;
- protected KeyHeader keyheader = KeyHeader.NONE;
-
- public SolSessionCreate(SolaceSinkConfig lconfig) {
+ public SolSessionHandler(SolaceSinkConnectorConfig lconfig) {
this.lconfig = lconfig;
}
@@ -160,78 +157,55 @@ public void configureSession() {
}
/**
- * Connect JCSMPSession.
+ * Create and connect JCSMPSession
+ * @return
+ * @throws JCSMPException
*/
- public void connectSession() {
-
- System.setProperty("java.security.auth.login.config",
- lconfig.getString(SolaceSinkConstants.SOL_KERBEROS_LOGIN_CONFIG));
- System.setProperty("java.security.krb5.conf",
- lconfig.getString(SolaceSinkConstants.SOL_KERBEROS_KRB5_CONFIG));
-
- boolean connected = false;
- try {
+ public void connectSession() throws JCSMPException {
+ System.setProperty("java.security.auth.login.config",
+ lconfig.getString(SolaceSinkConstants.SOL_KERBEROS_LOGIN_CONFIG));
+ System.setProperty("java.security.krb5.conf",
+ lconfig.getString(SolaceSinkConstants.SOL_KERBEROS_KRB5_CONFIG));
+
session = JCSMPFactory.onlyInstance().createSession(properties,
null, new SolSessionEventCallbackHandler());
- connected = true;
- } catch (InvalidPropertiesException e) {
- connected = false;
- log.info("=============Received Solace exception {}, with the following: {} ",
- e.getCause(), e.getStackTrace());
- }
- if (connected) {
- try {
- session.connect();
-
- connected = true;
- } catch (JCSMPException e) {
- log.info("=============Received Solace exception {}, with the "
- + "following: {} ", e.getCause(), e.getStackTrace());
- connected = false;
- }
- }
-
- if (connected && lconfig.getString(SolaceSinkConstants.SOl_QUEUE) != null) {
- try {
- txSession = session.createTransactedSession();
- log.info("================Transacted Session is Connected");
- } catch (JCSMPException e) {
- log.info(
- "================Transacted Session FAILED to Connect, "
- + "make sure transacted sessions is enabled for Solace Client");
- log.info("Received Solace exception {}, with the "
- + "following: {} ", e.getCause(), e.getStackTrace());
- connected = false;
- }
+ session.connect();
+ }
- } else {
- log.info(
- "================Transacted Session was not created, "
- + "either because of failure in creation or no queue consumers registered");
- txSession = null;
- }
+ /**
+ * Create transacted session
+ * @return TransactedSession
+ * @throws JCSMPException
+ */
+ public void createTxSession() throws JCSMPException {
+ txSession = session.createTransactedSession();
+ }
+ public JCSMPSession getSession() {
+ return session;
}
public TransactedSession getTxSession() {
return txSession;
}
- public JCSMPSession getSession() {
- return session;
+ public void printStats() {
+ if (session != null) {
+ JCSMPSessionStats lastStats = session.getSessionStats();
+      Enumeration<StatType> estats = StatType.elements();
+ while (estats.hasMoreElements()) {
+ StatType statName = estats.nextElement();
+ log.info("\t" + statName.getLabel() + ": " + lastStats.getStat(statName));
+ }
+ log.info("\n");
+ }
}
-
+
/**
* Shutdown Session.
- *
- * @return boolean of shutdown result
*/
- public boolean shutdown() {
-
+ public void shutdown() {
session.closeSession();
-
- return true;
-
}
}
diff --git a/src/main/java/com/solace/sink/connector/SolStreamingMessageCallbackHandler.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolStreamingMessageCallbackHandler.java
similarity index 97%
rename from src/main/java/com/solace/sink/connector/SolStreamingMessageCallbackHandler.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolStreamingMessageCallbackHandler.java
index b360ddd..7629ed5 100644
--- a/src/main/java/com/solace/sink/connector/SolStreamingMessageCallbackHandler.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolStreamingMessageCallbackHandler.java
@@ -17,7 +17,7 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
import com.solacesystems.jcsmp.JCSMPException;
import com.solacesystems.jcsmp.JCSMPStreamingPublishEventHandler;
diff --git a/src/main/java/com/solace/sink/connector/SolaceSinkConnector.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConnector.java
similarity index 92%
rename from src/main/java/com/solace/sink/connector/SolaceSinkConnector.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConnector.java
index f2e6617..e6d785c 100644
--- a/src/main/java/com/solace/sink/connector/SolaceSinkConnector.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConnector.java
@@ -17,7 +17,7 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
import java.util.ArrayList;
import java.util.HashMap;
@@ -34,7 +34,7 @@
public class SolaceSinkConnector extends SinkConnector {
private static final Logger log = LoggerFactory.getLogger(SolaceSinkConnector.class);
- SolaceSinkConfig sconfig;
+ SolaceSinkConnectorConfig sconfig;
private Map<String, String> sconfigProperties;
@Override
@@ -46,7 +46,7 @@ public String version() {
public void start(Map<String, String> props) {
log.info("==================== Start a SolaceSinkConnector");
sconfigProperties = props;
- sconfig = new SolaceSinkConfig(props);
+ sconfig = new SolaceSinkConnectorConfig(props);
}
@Override
@@ -74,7 +74,7 @@ public void stop() {
@Override
public ConfigDef config() {
log.info("==================== Requesting Config for SolaceSinkConnector");
- return SolaceSinkConfig.config;
+ return SolaceSinkConnectorConfig.config;
}
}
diff --git a/src/main/java/com/solace/sink/connector/SolaceSinkConfig.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConnectorConfig.java
similarity index 95%
rename from src/main/java/com/solace/sink/connector/SolaceSinkConfig.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConnectorConfig.java
index cecebf8..a3a4a5e 100644
--- a/src/main/java/com/solace/sink/connector/SolaceSinkConfig.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConnectorConfig.java
@@ -17,7 +17,7 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
import java.util.Map;
@@ -28,15 +28,15 @@
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-public class SolaceSinkConfig extends AbstractConfig {
+public class SolaceSinkConnectorConfig extends AbstractConfig {
- private static final Logger log = LoggerFactory.getLogger(SolaceSinkConfig.class);
+ private static final Logger log = LoggerFactory.getLogger(SolaceSinkConnectorConfig.class);
/**
* Create Solace Configuration Properties from JSON or Properties file.
* @param properties returns Properties
*/
- public SolaceSinkConfig(Map<String, String> properties) {
+ public SolaceSinkConnectorConfig(Map<String, String> properties) {
super(config, properties);
log.info("==================Initialize Connnector properties");
@@ -61,7 +61,7 @@ public static ConfigDef solaceConfigDef() {
.define(SolaceSinkConstants.SOl_QUEUE,
Type.STRING, null, Importance.MEDIUM, "Solace queue to consume from")
.define(SolaceSinkConstants.SOL_RECORD_PROCESSOR,
- Type.CLASS, SolRecordProcessor.class, Importance.HIGH,
+ Type.CLASS, SolRecordProcessorIF.class, Importance.HIGH,
"default Solace message processor to use against Kafka Sink Records")
.define(SolaceSinkConstants.SOL_LOCALHOST, Type.STRING, null, Importance.LOW,
"The hostname or IP address of the machine on which the application "
@@ -115,6 +115,10 @@ public static ConfigDef solaceConfigDef() {
.define(SolaceSinkConstants.SOL_SUB_ACK_WINDOW_SIZE,
Type.INT, 255, Importance.LOW,
"The size of the sliding subscriber ACK window. The valid range is 1-255")
+ .define(SolaceSinkConstants.SOL_QUEUE_MESSAGES_AUTOFLUSH_SIZE,
+ Type.INT, 200, Importance.LOW,
+ "Number of outstanding transacted messages before autoflush. Must be lower than "
+ + "max PubSub+ transaction size (255). The valid range is 1-200")
.define(SolaceSinkConstants.SOl_AUTHENTICATION_SCHEME,
Type.STRING, "AUTHENTICATION_SCHEME_BASIC",
Importance.MEDIUM, "String property specifying the authentication scheme.")
@@ -127,6 +131,9 @@ public static ConfigDef solaceConfigDef() {
"Session property specifying a transport protocol that SSL session "
+ "connection will be downgraded to after client authentication. "
+ "Allowed values: TRANSPORT_PROTOCOL_PLAIN_TEXT.")
+ .define(SolaceSinkConstants.SOl_USE_TRANSACTIONS_FOR_QUEUE,
+ Type.BOOLEAN, true, Importance.LOW,
+ "Specifies if writing messages to queue destination shall use transactions.")
.define(SolaceSinkConstants.SOL_CHANNEL_PROPERTY_connectTimeoutInMillis,
Type.INT, 30000, Importance.MEDIUM,
"Timeout value (in ms) for creating an initial connection to Solace")
diff --git a/src/main/java/com/solace/sink/connector/SolaceSinkConstants.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConstants.java
similarity index 95%
rename from src/main/java/com/solace/sink/connector/SolaceSinkConstants.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConstants.java
index 5698c43..3050bb0 100644
--- a/src/main/java/com/solace/sink/connector/SolaceSinkConstants.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkConstants.java
@@ -17,7 +17,7 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
/**
* SolaceSinkConstants is responsible for correct configuration management.
@@ -62,6 +62,7 @@ public class SolaceSinkConstants {
public static final String SOl_AUTHENTICATION_SCHEME = "sol.authentication_scheme";
public static final String SOL_KRB_SERVICE_NAME = "sol.krb_service_name";
public static final String SOL_SSL_CONNECTION_DOWNGRADE_TO = "sol.ssl_connection_downgrade_to";
+ public static final String SOl_USE_TRANSACTIONS_FOR_QUEUE = "sol.use_transactions_for_queue";
// Low Importance Solace TLS Protocol properties
// public static final String SOL_SSL_PROTOCOL = "sol.ssl_protocol";
@@ -70,7 +71,7 @@ public class SolaceSinkConstants {
public static final String SOL_SSL_VALIDATE_CERTIFICATE = "sol.ssl_validate_certificate";
public static final String SOL_SSL_VALIDATE_CERTIFICATE_DATE = "sol.ssl_validate_certicate_date";
public static final String SOL_SSL_TRUST_STORE = "sol.ssl_trust_store";
- public static final String SOL_SSL_TRUST_STORE_PASSWORD = "sol.ssl_trust_store_pasword";
+ public static final String SOL_SSL_TRUST_STORE_PASSWORD = "sol.ssl_trust_store_password";
public static final String SOL_SSL_TRUST_STORE_FORMAT = "sol.ssl_trust_store_format";
public static final String SOL_SSL_TRUSTED_COMMON_NAME_LIST = "sol.ssl_trusted_common_name_list";
public static final String SOL_SSL_KEY_STORE = "sol.ssl_key_store";
@@ -109,7 +110,8 @@ public class SolaceSinkConstants {
// Low Importance Persistent Message Properties
public static final String SOL_SUB_ACK_WINDOW_SIZE = "sol.sub_ack_window_size";
- public static final String SOL_PUB_ACK_WINDOW_SIZE = "sol.sub_ack_window_size";
+ public static final String SOL_PUB_ACK_WINDOW_SIZE = "sol.pub_ack_window_size";
+ public static final String SOL_QUEUE_MESSAGES_AUTOFLUSH_SIZE = "sol.autoflush.size";
public static final String SOL_SUB_ACK_TIME = "sol.sub_ack_time";
public static final String SOL_PUB_ACK_TIME = "sol.pub_ack_time";
public static final String SOL_SUB_ACK_WINDOW_THRESHOLD = "sol.sub_ack_window_threshold";
@@ -133,7 +135,7 @@ public class SolaceSinkConstants {
// Low importance, offset for replay - if null, continue from last offset when was last stopped
// value of 0 is start from beginning
- public static final String SOL_KAFKA_REPLAY_OFFSET = "sol.kakfa_replay_offset";
+ public static final String SOL_KAFKA_REPLAY_OFFSET = "sol.kafka_replay_offset";
// Allow SolRecordProcessor to control the creation of destinations rather than SolaceSinkSender
// Requires a destination property in the user SDTMap with a key "dynamicDestination"
diff --git a/src/main/java/com/solace/sink/connector/SolaceSinkSender.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkSender.java
similarity index 51%
rename from src/main/java/com/solace/sink/connector/SolaceSinkSender.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkSender.java
index 6139bc4..2316494 100644
--- a/src/main/java/com/solace/sink/connector/SolaceSinkSender.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkSender.java
@@ -17,30 +17,23 @@
* under the License.
*/
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.DeliveryMode;
import com.solacesystems.jcsmp.Destination;
import com.solacesystems.jcsmp.JCSMPException;
import com.solacesystems.jcsmp.JCSMPFactory;
-import com.solacesystems.jcsmp.JCSMPSession;
import com.solacesystems.jcsmp.ProducerFlowProperties;
import com.solacesystems.jcsmp.Queue;
import com.solacesystems.jcsmp.SDTException;
import com.solacesystems.jcsmp.SDTMap;
import com.solacesystems.jcsmp.Topic;
import com.solacesystems.jcsmp.XMLMessageProducer;
-import com.solacesystems.jcsmp.transaction.TransactedSession;
-
import java.util.ArrayList;
-import java.util.HashMap;
import java.util.List;
-import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
-import org.apache.kafka.clients.consumer.OffsetAndMetadata;
-import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -48,179 +41,148 @@
public class SolaceSinkSender {
private static final Logger log = LoggerFactory.getLogger(SolaceSinkSender.class);
- private SolaceSinkConfig sconfig;
- private XMLMessageProducer producer;
- private XMLMessageProducer txProducer;
- private JCSMPSession session;
+ private SolaceSinkConnectorConfig sconfig;
+ private XMLMessageProducer topicProducer;
+ private XMLMessageProducer queueProducer;
+ private SolSessionHandler sessionHandler;
private BytesXMLMessage message;
private List<Topic> topics = new ArrayList<>();
- private Queue solQueue;
+ private Queue solQueue = null;
private boolean useTxforQueue = false;
private Class<?> cprocessor;
- private SolRecordProcessor processor;
+ private SolRecordProcessorIF processor;
private String kafkaKey;
- private TransactedSession txSession;
- private SolaceSinkTask sinkTask;
- private AtomicInteger msgCounter = new AtomicInteger();
- private Map<TopicPartition, OffsetAndMetadata> offsets
- = new HashMap<TopicPartition, OffsetAndMetadata>();
+ private AtomicInteger txMsgCounter = new AtomicInteger();
/**
* Class that sends Solace Messages from Kafka Records.
* @param sconfig JCSMP Configuration
- * @param session JCSMPSession
- * @param txSession TransactedSession
- * @param sinkTask Connector Sink Task
+ * @param sessionHandler SolSessionHandler
+ * @param useTxforQueue true if messages to the queue destination are sent in a transacted session
+ * @throws JCSMPException if the JCSMP message producer cannot be created
*/
- public SolaceSinkSender(SolaceSinkConfig sconfig, JCSMPSession session,
- TransactedSession txSession, SolaceSinkTask sinkTask) {
+ public SolaceSinkSender(SolaceSinkConnectorConfig sconfig, SolSessionHandler sessionHandler,
+ boolean useTxforQueue) throws JCSMPException {
this.sconfig = sconfig;
- this.txSession = txSession;
- this.sinkTask = sinkTask;
- this.session = session;
-
- kafkaKey = this.sconfig.getString(SolaceSinkConstants.SOL_KAFKA_MESSAGE_KEY);
-
- if (sconfig.getString(SolaceSinkConstants.SOl_QUEUE) != null) {
- solQueue = JCSMPFactory.onlyInstance().createQueue(
- sconfig.getString(SolaceSinkConstants.SOl_QUEUE));
- }
-
- ProducerFlowProperties flowProps = new ProducerFlowProperties();
- flowProps.setAckEventMode(sconfig.getString(SolaceSinkConstants.SOL_ACK_EVENT_MODE));
- flowProps.setWindowSize(sconfig.getInt(SolaceSinkConstants.SOL_PUBLISHER_WINDOW_SIZE));
-
- try {
- producer = session.getMessageProducer(new SolStreamingMessageCallbackHandler());
- if (sconfig.getString(SolaceSinkConstants.SOl_QUEUE) != null) {
- txProducer = txSession.createProducer(flowProps, new SolStreamingMessageCallbackHandler(),
- new SolProducerEventCallbackHandler());
- log.info("=================txSession status: {}", txSession.getStatus().toString());
- }
-
- } catch (JCSMPException e) {
- log.info("Received Solace exception {}, with the following: {} ",
- e.getCause(), e.getStackTrace());
- }
-
+ this.sessionHandler = sessionHandler;
+ this.useTxforQueue = useTxforQueue;
+ kafkaKey = sconfig.getString(SolaceSinkConstants.SOL_KAFKA_MESSAGE_KEY);
+ topicProducer = sessionHandler.getSession().getMessageProducer(new SolStreamingMessageCallbackHandler());
cprocessor = (this.sconfig.getClass(SolaceSinkConstants.SOL_RECORD_PROCESSOR));
try {
- processor = (SolRecordProcessor) cprocessor.newInstance();
+ processor = (SolRecordProcessorIF) cprocessor.newInstance();
} catch (InstantiationException | IllegalAccessException e) {
- log.info("=================Received exception while creating record processing class {}, "
+ log.info("================ Received exception while creating record processing class {}, "
+ "with the following: {} ",
e.getCause(), e.getStackTrace());
}
}
/**
- * Generate Solace topics from topic string.
+ * Generate PubSub+ topics from the configured topic string.
*/
- public void createTopics() {
+ public void setupDestinationTopics() {
String solaceTopics = sconfig.getString(SolaceSinkConstants.SOL_TOPICS);
String[] stopics = solaceTopics.split(",");
int counter = 0;
-
while (stopics.length > counter) {
topics.add(JCSMPFactory.onlyInstance().createTopic(stopics[counter].trim()));
counter++;
}
}
-
- public void useTx(boolean tx) {
- this.useTxforQueue = tx;
+
+ /**
+ * Generate PubSub+ queue destination.
+ */
+ public void setupDestinationQueue() throws JCSMPException {
+ solQueue = JCSMPFactory.onlyInstance().createQueue(sconfig.getString(SolaceSinkConstants.SOl_QUEUE));
+ ProducerFlowProperties flowProps = new ProducerFlowProperties();
+ flowProps.setAckEventMode(sconfig.getString(SolaceSinkConstants.SOL_ACK_EVENT_MODE));
+ flowProps.setWindowSize(sconfig.getInt(SolaceSinkConstants.SOL_PUBLISHER_WINDOW_SIZE));
+ if (useTxforQueue) {
+ // Using transacted session for queue
+ queueProducer = sessionHandler.getTxSession().createProducer(flowProps, new SolStreamingMessageCallbackHandler(),
+ new SolProducerEventCallbackHandler());
+ log.info("================ txSession status: {}", sessionHandler.getTxSession().getStatus().toString());
+ } else {
+ // Not using transacted session for queue
+ queueProducer = sessionHandler.getSession().createProducer(flowProps, new SolStreamingMessageCallbackHandler(),
+ new SolProducerEventCallbackHandler());
+ }
}
/**
* Send Solace Message from Kafka Record.
- * @param record Kakfa Records
+ * @param record Kafka Records
*/
public void sendRecord(SinkRecord record) {
message = processor.processRecord(kafkaKey, record);
- offsets.put(new TopicPartition(record.topic(), record.kafkaPartition()),
- new OffsetAndMetadata(record.kafkaOffset()));
- log.trace("=================record details, topic: {}, Partition: {}, "
+ log.trace("================ Processed record details, topic: {}, Partition: {}, "
+ "Offset: {}", record.topic(),
record.kafkaPartition(), record.kafkaOffset());
if (message.getAttachmentContentLength() == 0 || message.getAttachmentByteBuffer() == null) {
- log.info("==============Received record that had no data....discarded");
+ log.info("================ Received record that had no data....discarded");
return;
}
- /*
- if (message.getUserData() == null) {
- log.trace("============Receive a Kafka record with no data ... discarded");
- return;
- }
- */
-
- // Use Dynamic destination from SolRecordProcessor
if (sconfig.getBoolean(SolaceSinkConstants.SOL_DYNAMIC_DESTINATION)) {
+ // Process use Dynamic destination from SolRecordProcessor
SDTMap userMap = message.getProperties();
Destination dest = null;
try {
dest = userMap.getDestination("dynamicDestination");
} catch (SDTException e) {
- log.info("=================Received exception retrieving Dynamic Destination: "
+ log.info("================ Received exception retrieving Dynamic Destination: "
+ "{}, with the following: {} ",
e.getCause(), e.getStackTrace());
}
try {
- producer.send(message, dest);
+ topicProducer.send(message, dest);
} catch (JCSMPException e) {
- log.trace(
- "=================Received exception while sending message to topic {}: "
+ log.info(
+ "================ Received exception while sending message to topic {}: "
+ "{}, with the following: {} ",
dest.getName(), e.getCause(), e.getStackTrace());
}
-
-
- }
-
-
- if (useTxforQueue && !(sconfig.getBoolean(SolaceSinkConstants.SOL_DYNAMIC_DESTINATION))) {
- try {
- message.setDeliveryMode(DeliveryMode.PERSISTENT);
- txProducer.send(message, solQueue);
- msgCounter.getAndIncrement();
- log.trace("===============Count of TX message is now: {}", msgCounter.get());
- } catch (JCSMPException e) {
- log.info("=================Received exception while sending message to queue {}: "
- + "{}, with the following: {} ",
- solQueue.getName(), e.getCause(), e.getStackTrace());
- }
-
- }
-
- if (topics.size() != 0 && message.getDestination() == null
- && !(sconfig.getBoolean(SolaceSinkConstants.SOL_DYNAMIC_DESTINATION))) {
- message.setDeliveryMode(DeliveryMode.DIRECT);
- int count = 0;
- while (topics.size() > count) {
+ } else {
+ // Process when Dynamic destination is not set
+ if (solQueue != null) {
try {
- producer.send(message, topics.get(count));
- count++;
+ message.setDeliveryMode(DeliveryMode.PERSISTENT);
+ queueProducer.send(message, solQueue);
+ if (useTxforQueue) {
+ txMsgCounter.getAndIncrement();
+ log.trace("================ Count of TX message is now: {}", txMsgCounter.get());
+ }
} catch (JCSMPException e) {
- log.trace(
- "=================Received exception while sending message to topic {}: "
+ log.info("================ Received exception while sending message to queue {}: "
+ "{}, with the following: {} ",
- topics.get(count).getName(), e.getCause(), e.getStackTrace());
-
+ solQueue.getName(), e.getCause(), e.getStackTrace());
+ }
+ }
+ if (topics.size() != 0 && message.getDestination() == null) {
+ message.setDeliveryMode(DeliveryMode.DIRECT);
+ int count = 0;
+ while (topics.size() > count) {
+ try {
+ topicProducer.send(message, topics.get(count));
+ } catch (JCSMPException e) {
+ log.trace(
+ "================ Received exception while sending message to topic {}: "
+ + "{}, with the following: {} ",
+ topics.get(count).getName(), e.getCause(), e.getStackTrace());
+ }
+ count++;
}
- count++;
}
-
}
-
-
// Solace limits transaction size to 255 messages so need to force commit
- if (useTxforQueue && msgCounter.get() > 200) {
- log.debug("================Manually Flushing Offsets");
- sinkTask.flush(offsets);
+ if (useTxforQueue && txMsgCounter.get() > sconfig.getInt(SolaceSinkConstants.SOL_QUEUE_MESSAGES_AUTOFLUSH_SIZE) - 1) {
+ log.debug("================ Queue transaction autoflush size reached, flushing offsets from connector");
+ commit();
}
-
}
/**
@@ -231,11 +193,11 @@ public synchronized boolean commit() {
boolean commited = true;
try {
if (useTxforQueue) {
- txSession.commit();
+ sessionHandler.getTxSession().commit();
commited = true;
- msgCounter.set(0);
+ txMsgCounter.set(0);
log.debug("Comitted Solace records for transaction with status: {}",
- txSession.getStatus().name());
+ sessionHandler.getTxSession().getStatus().name());
}
} catch (JCSMPException e) {
log.info("Received Solace TX exception {}, with the following: {} ",
@@ -249,21 +211,15 @@ public synchronized boolean commit() {
/**
* Shutdown TXProducer and Topic Producer.
- * @return Boolean of Shutdown Status
*/
- public boolean shutdown() {
- if (txProducer != null) {
- txProducer.close();
+ public void shutdown() {
+ if (queueProducer != null) {
+ queueProducer.close();
}
-
- if (producer != null) {
- producer.close();
+ if (topicProducer != null) {
+ topicProducer.close();
}
-
- session.closeSession();
-
-
- return true;
}
}
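When sol.dynamic_destination=true, the sender above no longer chooses the destination itself; it reads whatever Destination the record processor stored under the "dynamicDestination" key of the message's SDTMap. A minimal sketch of a custom SolRecordProcessorIF that does this is shown below; the class name and the "kafka/" topic prefix are purely illustrative and not part of the connector.

package com.solace.connector.kafka.connect.sink.recordprocessor;

import com.solace.connector.kafka.connect.sink.SolRecordProcessorIF;
import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.SDTException;
import com.solacesystems.jcsmp.SDTMap;
import com.solacesystems.jcsmp.Topic;

import java.nio.charset.StandardCharsets;

import org.apache.kafka.connect.sink.SinkRecord;

public class SolKafkaTopicDynamicDestinationProcessor implements SolRecordProcessorIF {
  @Override
  public BytesXMLMessage processRecord(String skey, SinkRecord record) {
    BytesXMLMessage msg = JCSMPFactory.onlyInstance().createMessage(BytesXMLMessage.class);
    // Route to a PubSub+ topic derived from the Kafka topic name (prefix is illustrative)
    Topic topic = JCSMPFactory.onlyInstance().createTopic("kafka/" + record.topic());
    SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
    try {
      // "dynamicDestination" is the header key SolaceSinkSender looks up
      userHeader.putDestination("dynamicDestination", topic);
    } catch (SDTException e) {
      // putDestination declares SDTException; ignored in this bare sketch
    }
    msg.setProperties(userHeader);
    // Naive payload handling for the sketch: send the record value's string form
    msg.writeAttachment(String.valueOf(record.value()).getBytes(StandardCharsets.UTF_8));
    return msg;
  }
}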
diff --git a/src/main/java/com/solace/sink/connector/SolaceSinkTask.java b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkTask.java
similarity index 53%
rename from src/main/java/com/solace/sink/connector/SolaceSinkTask.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkTask.java
index 2a73cee..025f8e0 100644
--- a/src/main/java/com/solace/sink/connector/SolaceSinkTask.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/SolaceSinkTask.java
@@ -17,15 +17,10 @@
* under the License.
*/
-package com.solace.sink.connector;
-
-import com.solacesystems.jcsmp.JCSMPSession;
-import com.solacesystems.jcsmp.JCSMPSessionStats;
-import com.solacesystems.jcsmp.statistics.StatType;
-import com.solacesystems.jcsmp.transaction.TransactedSession;
+package com.solace.connector.kafka.connect.sink;
+import com.solacesystems.jcsmp.JCSMPException;
import java.util.Collection;
-import java.util.Enumeration;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;
@@ -40,14 +35,12 @@
public class SolaceSinkTask extends SinkTask {
private static final Logger log = LoggerFactory.getLogger(SolaceSinkTask.class);
- private SolSessionCreate sessionRef;
- private TransactedSession txSession = null;
- private JCSMPSession session;
- private SolaceSinkSender sender;
- private boolean txEnabled = false;
+ private SolSessionHandler solSessionHandler;
+ private SolaceSinkSender solSender;
+ private boolean useTxforQueue = false;
private SinkTaskContext context;
- SolaceSinkConfig sconfig;
+ SolaceSinkConnectorConfig connectorConfig;
@Override
public String version() {
@@ -56,27 +49,48 @@ public String version() {
@Override
public void start(Map props) {
- sconfig = new SolaceSinkConfig(props);
-
- sessionRef = new SolSessionCreate(sconfig);
- sessionRef.configureSession();
- sessionRef.connectSession();
- txSession = sessionRef.getTxSession();
- session = sessionRef.getSession();
- if (txSession != null) {
- log.info("======================TransactedSession JCSMPSession Connected");
+ connectorConfig = new SolaceSinkConnectorConfig(props);
+ solSessionHandler = new SolSessionHandler(connectorConfig);
+ try {
+ solSessionHandler.configureSession();
+ solSessionHandler.connectSession();
+ } catch (JCSMPException e) {
+ failStart(e, "================ Failed to create JCSMPSession");
}
-
- sender = new SolaceSinkSender(sconfig, session, txSession, this);
-
- if (sconfig.getString(SolaceSinkConstants.SOL_TOPICS) != null) {
- sender.createTopics();
+ log.info("================ JCSMPSession Connected");
+
+ if (connectorConfig.getString(SolaceSinkConstants.SOl_QUEUE) != null) {
+ // Use transactions for queue destination
+ useTxforQueue = connectorConfig.getBoolean(SolaceSinkConstants.SOl_USE_TRANSACTIONS_FOR_QUEUE);
+ if (useTxforQueue) {
+ try {
+ solSessionHandler.createTxSession();
+ log.info("================ Transacted Session has been Created for PubSub+ queue destination");
+ } catch (JCSMPException e) {
+ failStart(e, "================ Failed to create Transacted Session for PubSub+ queue destination, "
+ + "make sure transacted sessions are enabled");
+ }
+ }
}
- if (sconfig.getString(SolaceSinkConstants.SOl_QUEUE) != null) {
- txEnabled = true;
- sender.useTx(txEnabled);
+
+ try {
+ solSender = new SolaceSinkSender(connectorConfig, solSessionHandler, useTxforQueue);
+ if (connectorConfig.getString(SolaceSinkConstants.SOL_TOPICS) != null) {
+ solSender.setupDestinationTopics();
+ }
+ if (connectorConfig.getString(SolaceSinkConstants.SOl_QUEUE) != null) {
+ solSender.setupDestinationQueue();
+ }
+ } catch (JCSMPException e) {
+ failStart(e, "Failed to setup sender to PubSub+");
}
-
+ }
+
+ private void failStart(JCSMPException e, String logMessage) {
+ log.info("Received Solace exception {}, with the "
+ + "following: {} ", e.getCause(), e.getStackTrace());
+ log.info( "message");
+ stop(); // Connector cannot continue
}
@Override
@@ -85,38 +99,22 @@ public void put(Collection<SinkRecord> records) {
log.trace("Putting record to topic {}, partition {} and offset {}", r.topic(),
r.kafkaPartition(),
r.kafkaOffset());
- sender.sendRecord(r);
+ solSender.sendRecord(r);
}
-
}
@Override
public void stop() {
- if (session != null) {
- JCSMPSessionStats lastStats = session.getSessionStats();
- Enumeration<StatType> estats = StatType.elements();
- log.info("Final Statistics summary:");
-
- while (estats.hasMoreElements()) {
- StatType statName = estats.nextElement();
- System.out.println("\t" + statName.getLabel() + ": " + lastStats.getStat(statName));
- }
- log.info("\n");
+ log.info("================ Shutting down PubSub+ Sink Connector");
+ if (solSender != null) {
+ solSender.shutdown();
}
- boolean ok = true;
- log.info("==================Shutting down Solace Source Connector");
-
- if (sender != null) {
- ok = sender.shutdown();
+ if (solSessionHandler != null) {
+ log.info("Final Statistics summary:\n");
+ solSessionHandler.printStats();
+ solSessionHandler.shutdown();
}
- if (session != null) {
- ok = sessionRef.shutdown();
- }
-
- if (!(ok)) {
- log.info("Solace session failed to shutdown");
- }
-
+ log.info("PubSub+ Sink Connector stopped");
}
/**
@@ -129,15 +127,13 @@ public synchronized void flush(Map<TopicPartition, OffsetAndMetadata> currentOff
log.debug("Flushing up to topic {}, partition {} and offset {}", tp.topic(),
tp.partition(), om.offset());
}
-
- if (sconfig.getString(SolaceSinkConstants.SOl_QUEUE) != null) {
- boolean commited = sender.commit();
+ if (useTxforQueue) {
+ boolean commited = solSender.commit();
if (!commited) {
- log.info("==============error in commiting transaction, shutting down");
+ log.info("Error in commiting transaction, shutting down");
stop();
}
}
-
}
/**
@@ -156,8 +152,8 @@ public void initialize(SinkTaskContext context) {
* @param partitions List of TopicPartitions for Topic
*/
public void open(Collection<TopicPartition> partitions) {
- Long offsetLong = sconfig.getLong(SolaceSinkConstants.SOL_KAFKA_REPLAY_OFFSET);
- log.debug("================Starting for replay Offset: " + offsetLong);
+ Long offsetLong = connectorConfig.getLong(SolaceSinkConstants.SOL_KAFKA_REPLAY_OFFSET);
+ log.debug("================ Starting for replay Offset: " + offsetLong);
if (offsetLong != null) {
Set<TopicPartition> parts = context.assignment();
Iterator<TopicPartition> partsIt = parts.iterator();
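open() above reads sol.kafka_replay_offset and walks the assigned partitions; the hunk ends before the rewind itself, so the following is only a hedged illustration of the underlying Kafka Connect mechanism (SinkTaskContext.offset), not a claim about the connector's exact code.

import java.util.Collection;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkTaskContext;

class ReplayOffsetSketch {
  static void rewindAll(SinkTaskContext context, Long offsetLong) {
    if (offsetLong == null) {
      return; // null means: continue from the last committed offsets
    }
    Collection<TopicPartition> parts = context.assignment();
    for (TopicPartition tp : parts) {
      // Ask Connect to restart consumption of this partition at the requested offset
      context.offset(tp, offsetLong);
    }
  }
}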
diff --git a/src/main/java/com/solace/sink/connector/VersionUtil.java b/src/main/java/com/solace/connector/kafka/connect/sink/VersionUtil.java
similarity index 69%
rename from src/main/java/com/solace/sink/connector/VersionUtil.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/VersionUtil.java
index 6597ddc..c68289a 100644
--- a/src/main/java/com/solace/sink/connector/VersionUtil.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/VersionUtil.java
@@ -1,4 +1,4 @@
-package com.solace.sink.connector;
+package com.solace.connector.kafka.connect.sink;
public class VersionUtil {
/**
@@ -7,7 +7,7 @@ public class VersionUtil {
*/
public static String getVersion() {
- return "1.0.2";
+ return "2.0.0";
}
}
diff --git a/src/main/java/com/solace/sink/connector/recordprocessor/SolDynamicDestinationRecordProcessor.java b/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolDynamicDestinationRecordProcessor.java
similarity index 63%
rename from src/main/java/com/solace/sink/connector/recordprocessor/SolDynamicDestinationRecordProcessor.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolDynamicDestinationRecordProcessor.java
index c6b6e42..dcdce4e 100644
--- a/src/main/java/com/solace/sink/connector/recordprocessor/SolDynamicDestinationRecordProcessor.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolDynamicDestinationRecordProcessor.java
@@ -17,10 +17,9 @@
* under the License.
*/
-package com.solace.sink.connector.recordprocessor;
-
-import com.solace.sink.connector.SolRecordProcessor;
+package com.solace.connector.kafka.connect.sink.recordprocessor;
+import com.solace.connector.kafka.connect.sink.SolRecordProcessorIF;
import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.SDTException;
@@ -39,10 +38,10 @@
* Note: this example expects a record written to a Kafka topic that has the format:
* "busId" "Message", where there is a space in between the strings.
*
- * It requires the configuration property "sol.dynamic_destination=true" to be set.
+ * It also requires the configuration property "sol.dynamic_destination=true" to be set.
*/
-public class SolDynamicDestinationRecordProcessor implements SolRecordProcessor {
+public class SolDynamicDestinationRecordProcessor implements SolRecordProcessorIF {
private static final Logger log =
LoggerFactory.getLogger(SolDynamicDestinationRecordProcessor.class);
@@ -50,52 +49,39 @@ public class SolDynamicDestinationRecordProcessor implements SolRecordProcessor
public BytesXMLMessage processRecord(String skey, SinkRecord record) {
BytesXMLMessage msg = JCSMPFactory.onlyInstance().createMessage(BytesXMLMessage.class);
- // Add Record Topic,Parition,Offset to Solace Msg in case we need to track offset restart
- // limited in Kafka Topic size, replace using SDT below.
- //String userData = "T:" + record.topic() + ",P:" + record.kafkaPartition()
- // + ",O:" + record.kafkaOffset();
- //msg.setUserData(userData.getBytes(StandardCharsets.UTF_8));
-
+ // Add Record Topic,Partition,Offset to Solace Msg
+ String kafkaTopic = record.topic();
+ msg.setApplicationMessageType("ResendOfKafkaTopic: " + kafkaTopic);
-
- Object v = record.value();
+ Object recordValue = record.value();
String payload = "";
Topic topic;
- if (v instanceof byte[]) {
- payload = new String((byte[]) v, StandardCharsets.UTF_8);
- } else if (v instanceof ByteBuffer) {
- payload = new String(((ByteBuffer) v).array(),StandardCharsets.UTF_8);
+ if (recordValue instanceof byte[]) {
+ payload = new String((byte[]) recordValue, StandardCharsets.UTF_8);
+ } else if (recordValue instanceof ByteBuffer) {
+ payload = new String(((ByteBuffer) recordValue).array(),StandardCharsets.UTF_8);
}
-
- log.debug("==============================Payload: " + payload);
+ log.debug("================ Payload: " + payload);
String busId = payload.substring(0, 4);
-
String busMsg = payload.substring(5, payload.length());
- log.debug("=================bus message: " + busMsg);
+ log.debug("================ Bus message: " + busMsg);
if (busMsg.toLowerCase().contains("stop")) {
- msg.writeAttachment(busMsg.getBytes(StandardCharsets.UTF_8));
topic = JCSMPFactory.onlyInstance().createTopic("ctrl/bus/" + busId + "/stop");
- log.debug("=========================Dynamic Topic = " + topic.getName());
+ log.debug("================ Dynamic Topic = " + topic.getName());
} else if (busMsg.toLowerCase().contains("start")) {
- msg.writeAttachment(busMsg.getBytes(StandardCharsets.UTF_8));
topic = JCSMPFactory.onlyInstance().createTopic("ctrl/bus/" + busId + "/start");
- log.debug("=========================Dynamic Topic = " + topic.getName());
+ log.debug("================ Dynamic Topic = " + topic.getName());
} else {
topic = JCSMPFactory.onlyInstance().createTopic("comms/bus/" + busId);
- log.debug("=========================Dynamic Topic = " + topic.getName());
+ log.debug("================ Dynamic Topic = " + topic.getName());
}
-
-
-
-
- // Add Record Topic,Partition,Offset to Solace Msg as header properties
- // in case we need to track offset restart
+ // Also include topic in dynamicDestination header
SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
try {
- userHeader.putString("k_topic", record.topic());
+ userHeader.putString("k_topic", kafkaTopic);
userHeader.putInteger("k_partition", record.kafkaPartition());
userHeader.putLong("k_offset", record.kafkaOffset());
userHeader.putDestination("dynamicDestination", topic);
@@ -103,18 +89,9 @@ public BytesXMLMessage processRecord(String skey, SinkRecord record) {
log.info("Received Solace SDTException {}, with the following: {} ",
e.getCause(), e.getStackTrace());
}
-
- String kafkaTopic = record.topic();
-
- msg.setApplicationMessageType("ResendOfKakfaTopic: " + kafkaTopic);
-
msg.setProperties(userHeader);
-
- log.debug("=================bus message: " + busMsg);
-
msg.writeAttachment(busMsg.getBytes(StandardCharsets.UTF_8));
-
return msg;
}
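The class comment above spells out the expected record format ("busId" + space + "Message") and the need for sol.dynamic_destination=true. A small hedged sketch, using arbitrary Kafka topic/partition/offset values, shows how a "start" command for bus 1234 ends up targeting ctrl/bus/1234/start:

import java.nio.charset.StandardCharsets;

import com.solace.connector.kafka.connect.sink.recordprocessor.SolDynamicDestinationRecordProcessor;
import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.SDTException;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.sink.SinkRecord;

public class DynamicDestinationSketch {
  public static void main(String[] args) throws SDTException {
    // 4-character bus id, one space, then the command text the processor parses
    byte[] value = "1234 start".getBytes(StandardCharsets.UTF_8);
    SinkRecord record = new SinkRecord("bus-events", 0, null, null,
        Schema.BYTES_SCHEMA, value, 42L);

    BytesXMLMessage msg =
        new SolDynamicDestinationRecordProcessor().processRecord("NONE", record);
    // Prints: ctrl/bus/1234/start
    System.out.println(msg.getProperties().getDestination("dynamicDestination").getName());
  }
}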
diff --git a/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolSimpleKeyedRecordProcessor.java b/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolSimpleKeyedRecordProcessor.java
new file mode 100644
index 0000000..5be0781
--- /dev/null
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolSimpleKeyedRecordProcessor.java
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package com.solace.connector.kafka.connect.sink.recordprocessor;
+
+import com.solace.connector.kafka.connect.sink.SolRecordProcessorIF;
+import com.solacesystems.jcsmp.BytesXMLMessage;
+import com.solacesystems.jcsmp.JCSMPFactory;
+import com.solacesystems.jcsmp.SDTException;
+import com.solacesystems.jcsmp.SDTMap;
+
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+
+import org.apache.kafka.connect.data.Schema;
+import org.apache.kafka.connect.sink.SinkRecord;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class SolSimpleKeyedRecordProcessor implements SolRecordProcessorIF {
+ private static final Logger log = LoggerFactory.getLogger(SolSimpleKeyedRecordProcessor.class);
+
+ public enum KeyHeader {
+ NONE, DESTINATION, CORRELATION_ID, CORRELATION_ID_AS_BYTES
+ }
+
+ protected KeyHeader keyheader = KeyHeader.NONE; // default
+
+ @Override
+ public BytesXMLMessage processRecord(String skey, SinkRecord record) {
+ if (skey.equals("NONE")) {
+ this.keyheader = KeyHeader.NONE;
+ } else if (skey.equals("DESTINATION")) {
+ this.keyheader = KeyHeader.DESTINATION;
+ } else if (skey.equals("CORRELATION_ID")) {
+ this.keyheader = KeyHeader.CORRELATION_ID;
+ } else if (skey.equals("CORRELATION_ID_AS_BYTES")) {
+ this.keyheader = KeyHeader.CORRELATION_ID_AS_BYTES;
+ }
+
+ BytesXMLMessage msg = JCSMPFactory.onlyInstance().createMessage(BytesXMLMessage.class);
+ // Add Record Topic,Partition,Offset to Solace Msg
+ String kafkaTopic = record.topic();
+ SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
+ try {
+ userHeader.putString("k_topic", kafkaTopic);
+ userHeader.putInteger("k_partition", record.kafkaPartition());
+ userHeader.putLong("k_offset", record.kafkaOffset());
+ } catch (SDTException e) {
+ log.info("Received Solace SDTException {}, with the following: {} ",
+ e.getCause(), e.getStackTrace());
+ }
+ msg.setProperties(userHeader);
+ msg.setApplicationMessageType("ResendOfKafkaTopic: " + kafkaTopic);
+
+ Object recordKey = record.key();
+ Schema keySchema = record.keySchema();
+
+ // If Topic was Keyed, use the key for correlationID
+ if (keyheader == KeyHeader.CORRELATION_ID || keyheader == KeyHeader.CORRELATION_ID_AS_BYTES) {
+ if (recordKey != null) {
+ if (keySchema == null) {
+ log.trace("No schema info {}", recordKey);
+ if (recordKey instanceof byte[]) {
+ msg.setCorrelationId(new String((byte[]) recordKey, StandardCharsets.UTF_8));
+ } else if (recordKey instanceof ByteBuffer) {
+ msg.setCorrelationId(new String(((ByteBuffer) recordKey).array(), StandardCharsets.UTF_8));
+ } else {
+ msg.setCorrelationId(recordKey.toString());
+ }
+ } else if (keySchema.type() == Schema.Type.BYTES) {
+ if (recordKey instanceof byte[]) {
+ msg.setCorrelationId(new String((byte[]) recordKey, StandardCharsets.UTF_8));
+ } else if (recordKey instanceof ByteBuffer) {
+ msg.setCorrelationId(new String(((ByteBuffer) recordKey).array(), StandardCharsets.UTF_8));
+ }
+ } else if (keySchema.type() == Schema.Type.STRING) {
+ msg.setCorrelationId((String) recordKey);
+ } else {
+ log.trace("No applicable schema type {}", keySchema.type());
+ // Nothing to do with no applicable schema type
+ }
+ } else {
+ // Nothing to do with null recordKey
+ }
+ } else if (keyheader == KeyHeader.DESTINATION && keySchema.type() == Schema.Type.STRING) {
+ // Destination is already determined by sink settings so set just the correlationId.
+ // Receiving app can evaluate it
+ msg.setCorrelationId((String) recordKey);
+ } else {
+ // Do nothing in all other cases
+ }
+
+ Schema valueSchema = record.valueSchema();
+ Object recordValue = record.value();
+ // get message body details from record
+ if (recordValue != null) {
+ if (valueSchema == null) {
+ log.trace("No schema info {}", recordValue);
+ if (recordValue instanceof byte[]) {
+ msg.writeAttachment((byte[]) recordValue);
+ } else if (recordValue instanceof ByteBuffer) {
+ msg.writeAttachment((byte[]) ((ByteBuffer) recordValue).array());
+ } else if (recordValue instanceof String) {
+ msg.writeAttachment(((String) recordValue).getBytes());
+ } else {
+ // Unknown recordValue type
+ msg.reset();
+ }
+ } else if (valueSchema.type() == Schema.Type.BYTES) {
+ if (recordValue instanceof byte[]) {
+ msg.writeAttachment((byte[]) recordValue);
+ } else if (recordValue instanceof ByteBuffer) {
+ msg.writeAttachment((byte[]) ((ByteBuffer) recordValue).array());
+ }
+ } else if (valueSchema.type() == Schema.Type.STRING) {
+ msg.writeAttachment(((String) recordValue).getBytes());
+ } else {
+ // Do nothing in all other cases
+ msg.reset();
+ }
+ } else {
+ // Invalid message
+ msg.reset();
+ }
+
+ return msg;
+ }
+
+}
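The new keyed processor copies the Kafka record key into the Solace correlation ID (for the CORRELATION_ID settings of SolaceSinkConstants.SOL_KAFKA_MESSAGE_KEY) and always attaches k_topic, k_partition and k_offset as user-header properties. A consumer-side sketch of reading these back from an already-received BytesXMLMessage, independent of how the JCSMP consumer was set up, could look like this:

import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.SDTException;
import com.solacesystems.jcsmp.SDTMap;

class KeyedMessageInspector {
  static void inspect(BytesXMLMessage msg) throws SDTException {
    // With the key handling set to CORRELATION_ID, the Kafka record key lands here
    String kafkaKey = msg.getCorrelationId();
    SDTMap headers = msg.getProperties();
    String kafkaTopic = headers.getString("k_topic");
    Integer kafkaPartition = headers.getInteger("k_partition");
    Long kafkaOffset = headers.getLong("k_offset");
    System.out.println("key=" + kafkaKey + " from " + kafkaTopic
        + "-" + kafkaPartition + "@" + kafkaOffset);
  }
}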
diff --git a/src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleRecordProcessor.java b/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolSimpleRecordProcessor.java
similarity index 56%
rename from src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleRecordProcessor.java
rename to src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolSimpleRecordProcessor.java
index d533c49..28910fc 100644
--- a/src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleRecordProcessor.java
+++ b/src/main/java/com/solace/connector/kafka/connect/sink/recordprocessor/SolSimpleRecordProcessor.java
@@ -17,41 +17,32 @@
* under the License.
*/
-package com.solace.sink.connector.recordprocessor;
+package com.solace.connector.kafka.connect.sink.recordprocessor;
-import com.solace.sink.connector.SolRecordProcessor;
+import com.solace.connector.kafka.connect.sink.SolRecordProcessorIF;
import com.solacesystems.jcsmp.BytesXMLMessage;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.SDTException;
import com.solacesystems.jcsmp.SDTMap;
import java.nio.ByteBuffer;
-import java.nio.charset.StandardCharsets;
-
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.sink.SinkRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
-public class SolSimpleRecordProcessor implements SolRecordProcessor {
+public class SolSimpleRecordProcessor implements SolRecordProcessorIF {
private static final Logger log = LoggerFactory.getLogger(SolSimpleRecordProcessor.class);
@Override
public BytesXMLMessage processRecord(String skey, SinkRecord record) {
BytesXMLMessage msg = JCSMPFactory.onlyInstance().createMessage(BytesXMLMessage.class);
-
- // Add Record Topic,Parition,Offset to Solace Msg in case we need to track offset restart
- // limited in Kafka Topic size, replace using SDT below.
- //String userData = "T:" + record.topic() + ",P:" + record.kafkaPartition()
- // + ",O:" + record.kafkaOffset();
- //msg.setUserData(userData.getBytes(StandardCharsets.UTF_8));
-
- // Add Record Topic,Partition,Offset to Solace Msg as header properties
- // in case we need to track offset restart
+ // Add Record Topic,Partition,Offset to Solace Msg
+ String kafkaTopic = record.topic();
SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
try {
- userHeader.putString("k_topic", record.topic());
+ userHeader.putString("k_topic", kafkaTopic);
userHeader.putInteger("k_partition", record.kafkaPartition());
userHeader.putLong("k_offset", record.kafkaOffset());
} catch (SDTException e) {
@@ -59,31 +50,41 @@ public BytesXMLMessage processRecord(String skey, SinkRecord record) {
e.getCause(), e.getStackTrace());
}
msg.setProperties(userHeader);
+ msg.setApplicationMessageType("ResendOfKafkaTopic: " + kafkaTopic);
- Schema s = record.valueSchema();
- String kafkaTopic = record.topic();
-
- msg.setApplicationMessageType("ResendOfKakfaTopic: " + kafkaTopic);
- Object v = record.value();
- log.debug("Value schema {}", s);
- if (v == null) {
- msg.reset();
- return msg;
- } else if (s == null) {
- log.debug("No schema info {}", v);
- if (v instanceof byte[]) {
- msg.writeAttachment((byte[]) v);
-
- } else if (v instanceof ByteBuffer) {
- msg.writeAttachment((byte[]) ((ByteBuffer) v).array());
- }
- } else if (s.type() == Schema.Type.BYTES) {
- if (v instanceof byte[]) {
- msg.writeAttachment((byte[]) v);
- } else if (v instanceof ByteBuffer) {
- msg.writeAttachment((byte[]) ((ByteBuffer) v).array());
+ Schema valueSchema = record.valueSchema();
+ Object recordValue = record.value();
+ // get message body details from record
+ if (recordValue != null) {
+ if (valueSchema == null) {
+ log.trace("No schema info {}", recordValue);
+ if (recordValue instanceof byte[]) {
+ msg.writeAttachment((byte[]) recordValue);
+ } else if (recordValue instanceof ByteBuffer) {
+ msg.writeAttachment((byte[]) ((ByteBuffer) recordValue).array());
+ } else if (recordValue instanceof String) {
+ msg.writeAttachment(((String) recordValue).getBytes());
+ } else {
+ // Unknown recordValue type
+ msg.reset();
+ }
+ } else if (valueSchema.type() == Schema.Type.BYTES) {
+ if (recordValue instanceof byte[]) {
+ msg.writeAttachment((byte[]) recordValue);
+ } else if (recordValue instanceof ByteBuffer) {
+ msg.writeAttachment((byte[]) ((ByteBuffer) recordValue).array());
+ }
+ } else if (valueSchema.type() == Schema.Type.STRING) {
+ msg.writeAttachment(((String) recordValue).getBytes());
+ } else {
+ // Do nothing in all other cases
+ msg.reset();
}
+ } else {
+ // Invalid message
+ msg.reset();
}
+
return msg;
}
diff --git a/src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleKeyedRecordProcessor.java b/src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleKeyedRecordProcessor.java
deleted file mode 100644
index c517cfb..0000000
--- a/src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleKeyedRecordProcessor.java
+++ /dev/null
@@ -1,141 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-package com.solace.sink.connector.recordprocessor;
-
-import com.solace.sink.connector.SolRecordProcessor;
-import com.solacesystems.jcsmp.BytesXMLMessage;
-import com.solacesystems.jcsmp.JCSMPFactory;
-import com.solacesystems.jcsmp.SDTException;
-import com.solacesystems.jcsmp.SDTMap;
-
-import java.nio.ByteBuffer;
-import java.nio.charset.StandardCharsets;
-
-import org.apache.kafka.connect.data.Schema;
-import org.apache.kafka.connect.sink.SinkRecord;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class SolSimpleKeyedRecordProcessor implements SolRecordProcessor {
- private static final Logger log = LoggerFactory.getLogger(SolSimpleKeyedRecordProcessor.class);
-
- public enum KeyHeader {
- NONE, DESTINATION, CORRELATION_ID, CORRELATION_ID_AS_BYTES
- }
-
- protected KeyHeader keyheader = KeyHeader.NONE;
-
- @Override
- public BytesXMLMessage processRecord(String skey, SinkRecord record) {
- if (skey.equals("NONE")) {
- this.keyheader = KeyHeader.NONE;
- } else if (skey.equals("DESTINATION")) {
- this.keyheader = KeyHeader.DESTINATION;
- } else if (skey.equals("CORRELATION_ID")) {
- this.keyheader = KeyHeader.CORRELATION_ID;
- } else if (skey.equals("CORRELATION_ID_AS_BYTES")) {
- this.keyheader = KeyHeader.CORRELATION_ID_AS_BYTES;
- }
-
- BytesXMLMessage msg = JCSMPFactory.onlyInstance().createMessage(BytesXMLMessage.class);
-
- Object vk = record.key();
-
- // Add Record Topic,Partition,Offset to Solace Msg in case we need to track offset restart
- // limited in Kafka Topic size, replace using SDT below.
- //String userData = "T:" + record.topic() + ",P:" + record.kafkaPartition()
- // + ",O:" + record.kafkaOffset();
- //msg.setUserData(userData.getBytes(StandardCharsets.UTF_8));
-
- // Add Record Topic,Partition,Offset to Solace Msg as header properties
- // in case we need to track offset restart
- SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
- try {
- userHeader.putString("k_topic", record.topic());
- userHeader.putInteger("k_partition", record.kafkaPartition());
- userHeader.putLong("k_offset", record.kafkaOffset());
- } catch (SDTException e) {
- log.info("Received Solace SDTException {}, with the following: {} ",
- e.getCause(), e.getStackTrace());
- }
- msg.setProperties(userHeader);
-
- String kafkaTopic = record.topic();
- Schema sk = record.keySchema();
-
- msg.setApplicationMessageType("ResendOfKakfaTopic: " + kafkaTopic);
-
-
-
- // If Topic was Keyed, use the key for correlationID
- if (keyheader != KeyHeader.NONE && keyheader != KeyHeader.DESTINATION) {
-
- if (vk != null) {
- if (sk == null) {
- log.trace("No schema info {}", vk);
- if (vk instanceof byte[]) {
- msg.setCorrelationId(new String((byte[]) vk, StandardCharsets.UTF_8));
- } else if (vk instanceof ByteBuffer) {
- msg.setCorrelationId(new String(((ByteBuffer) vk).array(), StandardCharsets.UTF_8));
- } else {
- msg.setCorrelationId(vk.toString());
- }
- } else if (sk.type() == Schema.Type.BYTES) {
- if (vk instanceof byte[]) {
- msg.setCorrelationId(new String((byte[]) vk, StandardCharsets.UTF_8));
- } else if (vk instanceof ByteBuffer) {
- msg.setCorrelationId(new String(((ByteBuffer) vk).array(), StandardCharsets.UTF_8));
- }
- } else if (sk.type() == Schema.Type.STRING) {
- msg.setCorrelationId((String) vk);
- }
- }
-
- } else if (keyheader == KeyHeader.DESTINATION && sk.type() == Schema.Type.STRING) {
- msg.setCorrelationId((String) vk);
- }
-
- Schema s = record.valueSchema();
- Object v = record.value();
- // get message body details from record
- log.debug("Value schema {}", s);
- if (v == null) {
- msg.reset();
- return msg;
- } else if (s == null) {
- log.debug("No schema info {}", v);
- if (v instanceof byte[]) {
- msg.writeAttachment((byte[]) v);
-
- } else if (v instanceof ByteBuffer) {
- msg.writeAttachment((byte[]) ((ByteBuffer) v).array());
- }
- } else if (s.type() == Schema.Type.BYTES) {
- if (v instanceof byte[]) {
- msg.writeAttachment((byte[]) v);
- } else if (v instanceof ByteBuffer) {
- msg.writeAttachment((byte[]) ((ByteBuffer) v).array());
- }
- }
-
- return msg;
- }
-
-}
diff --git a/src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleKeyedRecordProcessorDto.java b/src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleKeyedRecordProcessorDto.java
deleted file mode 100644
index 529c6da..0000000
--- a/src/main/java/com/solace/sink/connector/recordprocessor/SolSimpleKeyedRecordProcessorDto.java
+++ /dev/null
@@ -1,141 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-package com.solace.sink.connector.recordprocessor;
-
-import com.solace.sink.connector.SolRecordProcessor;
-import com.solacesystems.jcsmp.BytesXMLMessage;
-import com.solacesystems.jcsmp.JCSMPFactory;
-import com.solacesystems.jcsmp.SDTException;
-import com.solacesystems.jcsmp.SDTMap;
-
-import java.nio.ByteBuffer;
-import java.nio.charset.StandardCharsets;
-
-import org.apache.kafka.connect.data.Schema;
-import org.apache.kafka.connect.sink.SinkRecord;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-public class SolSimpleKeyedRecordProcessorDto implements SolRecordProcessor {
-
- private static final Logger log = LoggerFactory.getLogger(SolSimpleKeyedRecordProcessor.class);
-
- public enum KeyHeader {
- NONE, DESTINATION, CORRELATION_ID, CORRELATION_ID_AS_BYTES
- }
-
- protected KeyHeader keyheader = KeyHeader.NONE;
-
- @Override
- public BytesXMLMessage processRecord(String skey, SinkRecord record) {
- if (skey.equals("NONE")) {
- this.keyheader = KeyHeader.NONE;
- } else if (skey.equals("DESTINATION")) {
- this.keyheader = KeyHeader.DESTINATION;
- } else if (skey.equals("CORRELATION_ID")) {
- this.keyheader = KeyHeader.CORRELATION_ID;
- } else if (skey.equals("CORRELATION_ID_AS_BYTES")) {
- this.keyheader = KeyHeader.CORRELATION_ID_AS_BYTES;
- }
-
- BytesXMLMessage msg = JCSMPFactory.onlyInstance().createMessage(BytesXMLMessage.class);
-
- Object vk = record.key();
-
- // Add Record Topic,Parition,Offset to Solace Msg in case we need to track offset restart
- // limited in Kafka Topic size, replace using SDT below.
- //String userData = "T:" + record.topic() + ",P:"
- // + record.kafkaPartition() + ",O:" + record.kafkaOffset();
- //msg.setUserData(userData.getBytes(StandardCharsets.UTF_8));
-
- // Add Record Topic,Partition,Offset to Solace Msg as header properties
- // in case we need to track offset restart
- SDTMap userHeader = JCSMPFactory.onlyInstance().createMap();
- try {
- userHeader.putString("k_topic", record.topic());
- userHeader.putInteger("k_partition", record.kafkaPartition());
- userHeader.putLong("k_offset", record.kafkaOffset());
- } catch (SDTException e) {
- // TODO Auto-generated catch block
- e.printStackTrace();
- }
- msg.setProperties(userHeader);
-
- String kafkaTopic = record.topic();
-
- msg.setApplicationMessageType("ResendOfKakfaTopic: " + kafkaTopic);
- msg.setDeliverToOne(true); // Added DTO flag for topic consumer scaling
-
- Schema s = record.valueSchema();
- Schema sk = record.keySchema();
- Object v = record.value();
- // If Topic was Keyed, use the key for correlationID
- if (keyheader != KeyHeader.NONE && keyheader != KeyHeader.DESTINATION) {
-
- if (vk != null) {
- if (sk == null) {
- log.trace("No schema info {}", vk);
- if (vk instanceof byte[]) {
- msg.setCorrelationId(new String((byte[]) vk, StandardCharsets.UTF_8));
- } else if (vk instanceof ByteBuffer) {
- msg.setCorrelationId(new String(((ByteBuffer) vk).array(), StandardCharsets.UTF_8));
- } else {
- msg.setCorrelationId(vk.toString());
- }
- } else if (sk.type() == Schema.Type.BYTES) {
- if (vk instanceof byte[]) {
- msg.setCorrelationId(new String((byte[]) vk, StandardCharsets.UTF_8));
- } else if (vk instanceof ByteBuffer) {
- msg.setCorrelationId(new String(((ByteBuffer) vk).array(), StandardCharsets.UTF_8));
- }
- } else if (sk.type() == Schema.Type.STRING) {
- msg.setCorrelationId((String) vk);
- }
- }
-
- } else if (keyheader == KeyHeader.DESTINATION && sk.type() == Schema.Type.STRING) {
- msg.setCorrelationId((String) vk);
- }
-
- // get message body details from record
- log.debug("Value schema {}", s);
- if (v == null) {
- msg.reset();
- return msg;
- } else if (s == null) {
- log.debug("No schema info {}", v);
- if (v instanceof byte[]) {
- msg.writeAttachment((byte[]) v);
-
- } else if (v instanceof ByteBuffer) {
- msg.writeAttachment((byte[]) ((ByteBuffer) v).array());
- }
- } else if (s.type() == Schema.Type.BYTES) {
- if (v instanceof byte[]) {
- msg.writeAttachment((byte[]) v);
- } else if (v instanceof ByteBuffer) {
- msg.writeAttachment((byte[]) ((ByteBuffer) v).array());
- }
- }
-
- return msg;
- }
-
-}