Changes and additions to the library will be listed here.
- Add client methods to manage configs (#759)
- Fix logger again (#762)
- Fix SSL authentication for ruby < 2.4.0 (#742)
- Add metrics for prometheus/client (#739)
- Do not add nil message entries when ignoring old messages (#746)
- Make SCRAM authentication thread-safe (#743)
- Optionally verify hostname on SSL certs (#733)
- Support sending offsets from the producer within a transaction (#723)
- Support zstd compression (#724)
- Verify SSL Certificates (#730)
- Introduce regex matching in `Consumer#subscribe` (#700)
- Only rejoin group on error if we're not in shutdown mode (#711)
- Use `maxTimestamp` for `logAppendTime` timestamps (#706)
- Limit the number of retries in the async producer (#708)
- Support SASL OAuthBearer Authentication (#710)
- Distribute partitions across consumer groups when there are few partitions per topic (#681)
- Fix an issue where a consumer would fail to fetch any messages (#689)
- Instrumentation for heartbeat event
- Synchronously stop the fetcher to prevent race condition when processing commands
- Instrument batch fetching (#694)
- Fix wrong encoding calculation that leads to message corruption (#682, #680).
- Change the log level of the 'Committing offsets' message to debug (#640).
- Avoid Ruby warnings about unused vars (#679).
- Synchronously commit offsets after HeartbeatError (#676).
- Discard messages that were fetched under a previous consumer group generation (#665).
- Support specifying an ssl client certificates key passphrase (#667).
- Synchronize access to @worker_thread and @timer_thread in AsyncProducer to prevent creating multiple threads (#661).
- Handle case when paused partition does not belong to group on resume (#656).
- Fix compatibility version in documentation (#651).
- Fix message set backward compatibility (#648).
- Refresh metadata on connection error when listing topics (#644).
- Compatibility with dogstatsd-ruby v4.0.0.
- Fix consuming duplication due to redundant messages returned from Kafka (#636).
- Refresh cluster info on fetch error (#641).
- Exactly Once Delivery and Transactional Messaging Support (#608).
- Support extra client certificates in the SSL Context when authenticating with Kafka (#633).
- Drop support for Kafka 0.10 in favor of native support for Kafka 0.11.
- Support record headers (#604).
- Add instrumenter and logger when async message delivery fails (#603).
- Upgrade and rename GroupCoordinator API to FindCoordinator API (#606).
- Refresh cluster metadata after topic re-assignment (#609).
- Disable SASL over SSL with a new config (#613).
- Allow listing brokers in a cluster (#626).
- Fix Fetcher's message skipping (#625).
- Handle case where consumer doesn't know about the topic (#597 + 0e302cbd0f31315bf81c1d1645520413ad6b58f0)
- Fix bug related to partition assignment.
- Fix bug that caused consumers to jump back and reprocess messages (#595).
- Allow configuring the max size of the queue connecting the fetcher thread with the consumer.
- Add support for the Describe Groups API (#583).
- Add list groups API (#582).
- Use mutable String constructor (#584).
- Fix bug with exponential pausing causing pauses never to stop.
- Fetch messages asynchronously (#526).
- Add support for exponential backoff in pauses (#566).
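
  The exponential pause backoff entry above can be sketched in plain Ruby. This is a hypothetical illustration of the idea (the method name, parameters, and cap are assumptions, not the gem's internals): each consecutive pause of the same partition doubles the previous duration, optionally capped at a maximum.

  ```ruby
  # Hypothetical sketch of exponential backoff for partition pauses;
  # not ruby-kafka's actual implementation.
  def pause_duration(base_timeout, backoff_count, max_timeout: nil)
    # Double the base timeout for each prior pause of this partition.
    duration = base_timeout * (2 ** backoff_count)
    max_timeout ? [duration, max_timeout].min : duration
  end

  pause_duration(10, 0)                    # first pause: 10s
  pause_duration(10, 3)                    # fourth pause: 80s
  pause_duration(10, 6, max_timeout: 300)  # would be 640s, capped at 300s
  ```

  Capping matters in practice: without a maximum, a persistently failing partition would end up paused for hours, which is the bug fixed in the "exponential pausing causing pauses never to stop" entry above.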
- Instrument pause durations (#574).
- Support PLAINTEXT and SSL URI schemes (#550).
- Add support for config entries in the topic creation API (#540).
- Don't fail on retry when the cluster is secured (#545).
- Add support for the topic deletion API (#528).
- Add support for the partition creation API (#533).
- Allow passing in the seed brokers in a positional argument (#538).
- Instrument the start of message/batch processing (#496).
- Mark `Client#fetch_messages` as stable.
- Fix the list topics API (#508).
- Add support for LZ4 compression (#499).
- Refactor compression codec lookup (#509).
- Fix compressed message set offset bug (#506).
- Test against multiple versions of Kafka.
- Fix double-processing of messages after a consumer exception (#518).
- Track consumer offsets in Datadog.
Requires Kafka 0.10.1+ due to usage of a few new APIs.
- Fix bug when using compression (#458).
- Update to v3 of the Fetch API, allowing a per-request `max_bytes` setting (#468).
- Make `#deliver_message` more resilient using retries and backoff.
- Add support for SASL SCRAM authentication (#465).
- Refactor and simplify SASL code.
- Fix issue when a consumer resets a partition to its default offset.
- Allow specifying a create time for messages (#481).
- Drop support for Kafka 0.9 in favor of Kafka 0.10 (#381)!
- Handle cases where there are no partitions to fetch from by sleeping a bit (#439).
- Handle problems with the broker cache (#440).
- Shut down more quickly (#438).
- Restart the async producer thread automatically after errors.
- Include the offset lag in batch consumer metrics (Statsd).
- Make the default `max_wait_time` more sane.
- Fix issue with cached default offset lookups (#431).
- Upgrade to Datadog client version 3.
- Fix connection issue on SASL connections (#401).
- Add more instrumentation of consumer groups (#407).
- Improve error logging (#385)
- Allow seeking the consumer position (#386).
- Reopen idle connections after 5 minutes (#399).
- Support SASL authentication (#334 and #370)
- Allow loading SSL certificates from files (#371)
- Add Statsd metric reporting (#373)
- Re-commit previously committed offsets periodically with an interval of half the offset retention time, starting with the first commit (#318).
- Expose offset retention time in the Consumer API (#316).
- Don't get blocked when there's temporarily no leader for a topic (#336).
- Fix SSL socket timeout (#283).
- Update to the latest Datadog gem (#296).
- Automatically detect private key type (#297).
- Only fetch messages for subscribed topics (#309).
- Allow setting a timeout on a partition pause (#272).
- Allow pausing consumption of a partition (#268).
- Automatically recover from invalid consumer checkpoints.
- Minimize the number of times messages are reprocessed after a consumer group resync.
- Improve instrumentation of the async producer.
- Fix a bug in the consumer.
- Fix bug in the simple consumer loop.
- Handle brokers becoming unavailable while in a consumer loop (#228).
- Handle edge case when consuming from the end of a topic (#230).
- Ensure the library can be loaded without Bundler (#224).
- Add an API for fetching the last offset in a partition (#232).
- Improve the default durability setting. The producer setting `required_acks` now defaults to `:all` (#210).
- Handle rebalances in the producer (#196) (Mpampis Kostas).
- Add simplified producer and consumer APIs for simple use cases.
- Add out-of-the-box Datadog reporting.
- Improve producer performance.
- Keep separate connection pools for consumers and producers initialized from the same client.
- Handle connection errors automatically in the async producer.
- Default to port 9092 if no port is provided for a seed broker.
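
  The default-port behavior above can be sketched in plain Ruby (a hypothetical helper, not the gem's actual parsing code): a seed broker given as `"host:port"` keeps its port, while a bare `"host"` falls back to Kafka's standard port 9092.

  ```ruby
  # Hypothetical sketch of seed-broker parsing with a default port;
  # not ruby-kafka's actual implementation.
  DEFAULT_PORT = 9092

  def parse_broker(str)
    host, port = str.split(":")
    # Fall back to the standard Kafka port when none is given.
    [host, (port || DEFAULT_PORT).to_i]
  end

  parse_broker("kafka1:9093")  # => ["kafka1", 9093]
  parse_broker("kafka2")       # => ["kafka2", 9092]
  ```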
- Fix bug that caused partition information to not be reliably updated.
- Fix bug that caused the async producer to not work with Unicorn (#166).
- Fix bug that caused committed consumer offsets to be lost (#167).
- Instrument buffer overflows in the producer.
- Make the producer buffer more resilient in the face of isolated topic errors.
- Allow clearing a producer's buffer (Martin Nowak).
- Improved Consumer API.
- Instrument producer errors.
- Experimental batch consumer API.
- Simplify the heartbeat algorithm.
- Handle partial messages at the end of message sets received from the brokers.
- Add support for encryption and authentication with SSL (Tom Crayford).
- Allow configuring consumer offset commit policies.
- Instrument consumer message processing.
- Fixed an issue causing exceptions when no logger was specified.
- Add instrumentation of message compression.
- New! Consumer API – still alpha level. Expect many changes.