We recommend that customers migrate to 1.14.1 to avoid known bugs in version 1.14.0.
The Amazon Kinesis Client Library for Java (Amazon KCL) enables Java developers to easily consume and process data from Amazon Kinesis.
- Provides an easy-to-use programming model for processing data using Amazon Kinesis
- Helps with scale-out and fault-tolerant processing
- Sign up for AWS — Before you begin, you need an AWS account. For more information about creating an AWS account and retrieving your AWS credentials, see AWS Account and Credentials in the AWS SDK for Java Developer Guide.
- Sign up for Amazon Kinesis — Go to the Amazon Kinesis console to sign up for the service and create an Amazon Kinesis stream. For more information, see Create an Amazon Kinesis Stream in the Amazon Kinesis Developer Guide.
- Minimum requirements — To use the Amazon Kinesis Client Library, you'll need Java 1.8+. For more information about Amazon Kinesis Client Library requirements, see Before You Begin in the Amazon Kinesis Developer Guide.
- Using the Amazon Kinesis Client Library — The best way to get familiar with the Amazon Kinesis Client Library is to read Developing Record Consumer Applications in the Amazon Kinesis Developer Guide; a minimal record-processor sketch is also shown below.
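To give a concrete sense of the programming model, here is a minimal sketch of a record processor and worker written against the KCL 1.x `v2` interfaces. The application name, stream name, and per-batch checkpointing are illustrative placeholders rather than recommendations; see Developing Record Consumer Applications for the full walkthrough.

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor;
import com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
import com.amazonaws.services.kinesis.model.Record;

public class SampleConsumer {

    // A record processor handles all records for a single shard.
    static class SampleRecordProcessor implements IRecordProcessor {
        @Override
        public void initialize(InitializationInput initializationInput) {
            System.out.println("Initializing for shard " + initializationInput.getShardId());
        }

        @Override
        public void processRecords(ProcessRecordsInput processRecordsInput) {
            for (Record record : processRecordsInput.getRecords()) {
                // KPL-aggregated records have already been de-aggregated into user records here.
                String data = StandardCharsets.UTF_8.decode(record.getData()).toString();
                System.out.println(record.getPartitionKey() + ": " + data);
            }
            try {
                // Checkpointing every batch keeps the sketch short; real applications
                // usually checkpoint on an interval.
                processRecordsInput.getCheckpointer().checkpoint();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        @Override
        public void shutdown(ShutdownInput shutdownInput) {
            // Shutdown handling (e.g. checkpointing at SHARD_END) goes here.
        }
    }

    public static void main(String[] args) {
        // Placeholder application and stream names.
        KinesisClientLibConfiguration config = new KinesisClientLibConfiguration(
                "SampleApplication",
                "SampleStream",
                new DefaultAWSCredentialsProviderChain(),
                "worker-" + UUID.randomUUID());

        IRecordProcessorFactory factory = SampleRecordProcessor::new;

        Worker worker = new Worker.Builder()
                .recordProcessorFactory(factory)
                .config(config)
                .build();
        worker.run();
    }
}
```

The `Worker` coordinates leases across workers and creates one record processor instance per shard that it owns.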
After you've downloaded the code from GitHub, you can build it using Maven. To disable GPG signing in the build, use this command: `mvn clean install -Dgpg.skip=true`
For producer-side developers using the Kinesis Producer Library (KPL), the KCL integrates without additional effort. When the KCL retrieves an aggregated Amazon Kinesis record consisting of multiple KPL user records, it will automatically invoke the KPL to extract the individual user records before returning them to the user.
To make it easier for developers to write record processors in other languages, we have implemented a Java-based daemon, called the MultiLangDaemon, that does all the heavy lifting. Our approach has the daemon spawn a sub-process, which in turn runs the record processor, which can be written in any language. The MultiLangDaemon process and the record processor sub-process communicate with each other over STDIN and STDOUT using a defined protocol. There is a one-to-one correspondence among record processors, child processes, and shards. For Python developers specifically, we have abstracted these implementation details away and expose an interface that enables you to focus on writing record processing logic in Python. This approach enables KCL to be language agnostic, while providing identical features and a similar parallel processing model across all languages.
- #1371 Fix a bug in debug and trace logging levels for worker
- #1224 Modify RecordProcessorCheckpointer#advancePosition Metrics usage to ensure proper closure
- #1345 Generate wrappers from proto files instead of shipping them directly
- #1346 Upgrade com.google.protobuf:protobuf-java from 3.23.4 to 4.27.1
- #1338 Upgrade org.apache.logging.log4j:log4j-api from 2.20.0 to 2.23.1
- #1327 Upgrade com.google.guava:guava from 33.0.0-jre to 33.2.0-jre
- #1283 Upgrade com.fasterxml.jackson.core:jackson-core from 2.15.2 to 2.17.0
- #1284 Upgrade aws-java-sdk.version from 1.12.647 to 1.12.681
- #1288 Upgrade commons-logging:commons-logging from 1.2 to 1.3.1
- #1289 Upgrade org.projectlombok:lombok from 1.18.22 to 1.18.32
- #1248 Upgrade org.apache.maven.plugins:maven-surefire-plugin from 2.22.2 to 3.2.5
- #1234 Upgrade org.apache.maven.plugins:maven-javadoc-plugin from 3.4.1 to 3.6.3
- #1137 Upgrade maven-failsafe-plugin from 2.22.2 to 3.1.2
- #1134 Upgrade jackson-core from 2.15.0 to 2.15.2
- #1119 Upgrade maven-source-plugin from 3.2.1 to 3.3.0
- #1165 Upgrade protobuf-java from 3.19.6 to 3.23.4
- #1214 Added backoff logic for ShardSyncTaskIntegrationTest
- #1214 Upgrade Guava version from 31.0.1 to 32.1.1
- #1252 Upgrade aws-java-sdk from 1.12.406 to 1.12.647
- #1108 Add support for Stream ARNs
- #1111 More consistent testing behavior with HashRangesAreAlwaysComplete
- #1054 Upgrade log4j-core from 2.17.1 to 2.20.0
- #1103 Upgrade jackson-core from 2.13.0 to 2.15.0
- #943 Upgrade nexus-staging-maven-plugin from 1.6.8 to 1.6.13
- #1044 Upgrade aws-java-sdk.version from 1.12.406 to 1.12.408
- #1055 Upgrade maven-compiler-plugin from 3.10.0 to 3.11.0
- Updated aws-java-sdk from 1.12.130 to 1.12.406
- Updated com.google.protobuf from 3.19.4 to 3.19.6
- #995 All other changes for DynamoDBStreamsKinesisAdapter compatibility
- #970 PeriodicShardSyncManager Changes Needed for DynamoDBStreamsKinesisAdapter
- Bump log4j-core from 2.17.0 to 2.17.1
- Bump protobuf-java from 3.19.1 to 3.19.4
- Bump maven-compiler-plugin from 3.8.1 to 3.10.0
- #881 Update log4j test dependency from 2.16.0 to 2.17.0 and some other dependencies
- #876 Update log4j test dependency from 2.15.0 to 2.16.0
- #872 Update log4j test dependency from 1.2.17 to 2.15.0
- #873 Upgrading version of AWS Java SDK to 1.12.128
- Milestone#61
  - #816 Updated the Worker shutdown logic to make sure that the `LeaseCleanupManager` also terminates all the threads that it has started.
  - #821 Upgrading version of AWS Java SDK to 1.12.3
- Milestone#60
  - #811 Fixing a bug in `KinesisProxy` that can lead to undetermined behavior during partial failures.
  - #811 Adding guardrails to handle duplicate shards from the service.
- Milestone#57
  - #790 Fixing a bug that caused paginated `ListShards` calls with the `ShardFilter` parameter to fail when the lease table was being initialized.
- Fix for cross-DDB-table interference when multiple KCL applications are run in the same JVM.
- Fix and guards to avoid potential checkpoint rewind during shard end, which may block child shard processing.
- Fix for thread cycle wastage on InitializeTask for deleted shards.
- Improved logging in LeaseCleanupManager to indicate why certain shards are not cleaned up from the lease table.
- Behavior of shard synchronization is moving from each worker independently learning about all existing shards to workers only discovering the children of shards that each worker owns. This optimizes memory usage, lease table IOPS usage, and the number of calls made to Kinesis for streams with high shard counts and/or frequent resharding.
- When bootstrapping an empty lease table, KCL utilizes the ListShards API's filtering option (the ShardFilter optional request parameter) to retrieve and create leases only for a snapshot of shards open at the time specified by the ShardFilter parameter. The ShardFilter parameter enables you to filter the response of the ListShards API using the Type parameter. KCL uses the Type filter parameter and the following of its valid values to identify and return a snapshot of open shards that might require new leases (a sketch of the equivalent ListShards call appears after these release notes).
  - Currently, the following shard filters are supported:
    - `AT_TRIM_HORIZON` - the response includes all the shards that were open at `TRIM_HORIZON`.
    - `AT_LATEST` - the response includes only the currently open shards of the data stream.
    - `AT_TIMESTAMP` - the response includes all shards whose start timestamp is less than or equal to the given timestamp and whose end timestamp is greater than or equal to the given timestamp or which are still open.
  - `ShardFilter` is used when creating leases for an empty lease table to initialize leases for a snapshot of shards specified at `KinesisClientLibConfiguration#initialPositionInStreamExtended`.
  - For more information about ShardFilter, see the official AWS documentation on ShardFilter.
- Introducing support for the `ChildShards` response of the `GetRecords` API to perform lease/shard synchronization that happens at `SHARD_END` for closed shards, allowing a KCL worker to only create leases for the child shards of the shard it finished processing.
  - For KCL 1.x applications, this uses the `ChildShards` response of the `GetRecords` API.
  - For more information, see the official AWS documentation on GetRecords and ChildShard.
- KCL now also performs additional periodic shard/lease scans in order to identify any potential holes in the lease table, to ensure the complete hash range of the stream is being processed, and to create leases for them if required. When `KinesisClientLibConfiguration#shardSyncStrategyType` is set to `ShardSyncStrategyType.SHARD_END`, `PeriodicShardSyncManager#leasesRecoveryAuditorInconsistencyConfidenceThreshold` will be used to determine the threshold for the number of consecutive scans containing holes in the lease table after which to enforce a shard sync. When `KinesisClientLibConfiguration#shardSyncStrategyType` is set to `ShardSyncStrategyType.PERIODIC`, `leasesRecoveryAuditorInconsistencyConfidenceThreshold` is ignored (see the configuration sketch after these release notes).
  - New configuration options are available to configure `PeriodicShardSyncManager` in `KinesisClientLibConfiguration`:

    | Name | Default | Description |
    | ---- | ------- | ----------- |
    | leasesRecoveryAuditorInconsistencyConfidenceThreshold | 3 | Confidence threshold for the periodic auditor job to determine if leases for a stream in the lease table are inconsistent. If the auditor finds the same set of inconsistencies consecutively for a stream this many times, it triggers a shard sync. Only used for `ShardSyncStrategyType.SHARD_END`. |

  - New CloudWatch metrics are also now emitted to monitor the health of `PeriodicShardSyncManager`:

    | Name | Description |
    | ---- | ----------- |
    | NumStreamsWithPartialLeases | Number of streams that had holes in their hash ranges. |
    | NumStreamsToSync | Number of streams which underwent a full shard sync. |
- Introducing deferred lease cleanup. Leases will be deleted asynchronously by `LeaseCleanupManager` upon reaching `SHARD_END`, when a shard has either expired past the stream's retention period or been closed as the result of a resharding operation (see the configuration sketch after these release notes).
  - New configuration options are available to configure `LeaseCleanupManager`:

    | Name | Default | Description |
    | ---- | ------- | ----------- |
    | leaseCleanupIntervalMillis | 1 minute | Interval at which to run the lease cleanup thread. |
    | completedLeaseCleanupIntervalMillis | 5 minutes | Interval at which to check whether a lease is completed. |
    | garbageLeaseCleanupIntervalMillis | 30 minutes | Interval at which to check whether a lease is garbage (i.e. trimmed past the stream's retention period). |
- Including an optimization to `KinesisShardSyncer` to only create leases for one layer of shards.
- Changing the default shard prioritization strategy to `NoOpShardPrioritization` to allow prioritization of completed shards. Customers who are upgrading to this version and are reading from `TRIM_HORIZON` should continue using `ParentsFirstShardPrioritization` while upgrading (see the configuration sketch after these release notes).
- Upgrading version of AWS SDK to 1.11.844.
- #719 Upgrading version of Google Protobuf to 3.11.4.
- #712 Allowing KCL to consider lease tables in `UPDATING` status healthy.
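As referenced in the release notes above, when bootstrapping an empty lease table KCL issues filtered ListShards calls. The sketch below shows an equivalent call made directly with the AWS SDK for Java, purely to illustrate the ShardFilter behavior; it is not KCL's internal code, and the stream name is a placeholder.

```java
import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.ListShardsRequest;
import com.amazonaws.services.kinesis.model.ListShardsResult;
import com.amazonaws.services.kinesis.model.Shard;
import com.amazonaws.services.kinesis.model.ShardFilter;
import com.amazonaws.services.kinesis.model.ShardFilterType;

public class ShardFilterExample {
    public static void main(String[] args) {
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

        // Request only the shards that were open at TRIM_HORIZON; AT_LATEST and
        // AT_TIMESTAMP work the same way with the corresponding filter type.
        ListShardsRequest request = new ListShardsRequest()
                .withStreamName("SampleStream") // placeholder stream name
                .withShardFilter(new ShardFilter().withType(ShardFilterType.AT_TRIM_HORIZON));

        ListShardsResult result = kinesis.listShards(request);
        for (Shard shard : result.getShards()) {
            System.out.println("Open shard: " + shard.getShardId());
        }
        // For streams with many shards, the result may be paginated via NextToken.
    }
}
```

Switching the filter type to `AT_LATEST` or `AT_TIMESTAMP` returns the corresponding snapshot of shards described above.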
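The configuration sketch referenced in the release notes above shows how the new shard-sync, lease-cleanup, and shard-prioritization options might be set on `KinesisClientLibConfiguration`. The fluent setter names used here (for example `withShardSyncStrategyType`, `withLeasesRecoveryAuditorInconsistencyConfidenceThreshold`, `withLeaseCleanupIntervalMillis`) and the `ParentsFirstShardPrioritization` constructor argument are assumptions inferred from the option names in the tables above; check the `KinesisClientLibConfiguration` Javadoc for the exact method names and defaults before relying on them.

```java
import java.util.UUID;
import java.util.concurrent.TimeUnit;

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ParentsFirstShardPrioritization;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.ShardSyncStrategyType;

public class ConfigurationExample {
    public static KinesisClientLibConfiguration buildConfig() {
        return new KinesisClientLibConfiguration(
                "SampleApplication",                    // placeholder application name
                "SampleStream",                         // placeholder stream name
                new DefaultAWSCredentialsProviderChain(),
                "worker-" + UUID.randomUUID())
            // Assumed setters for the shard-sync options; the auditor threshold
            // below only applies to ShardSyncStrategyType.SHARD_END.
            .withShardSyncStrategyType(ShardSyncStrategyType.SHARD_END)
            .withLeasesRecoveryAuditorInconsistencyConfidenceThreshold(3)
            // Assumed setters for the deferred lease cleanup intervals
            // (defaults per the table above: 1, 5, and 30 minutes respectively).
            .withLeaseCleanupIntervalMillis(TimeUnit.MINUTES.toMillis(1))
            .withCompletedLeaseCleanupIntervalMillis(TimeUnit.MINUTES.toMillis(5))
            .withGarbageLeaseCleanupIntervalMillis(TimeUnit.MINUTES.toMillis(30))
            // Keep parents-first prioritization while upgrading from TRIM_HORIZON;
            // the constructor argument (assumed) is the maximum shard depth to process.
            .withShardPrioritizationStrategy(new ParentsFirstShardPrioritization(1));
    }
}
```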