Releases: mochi-mqtt/server
v2.1.0
What's Changed
- Adds a new unix socket listener (`listeners.NewUnixSock`) to allow more efficient local processing, by @zgwit in #124 (see the sketch after this list)
- Fixes an issue where processSubscribe was not correctly determining whether a subscription existed when multiple topics were passed for subscription, by @wind-c in #123
- Removes the inefficient implementation of the `OnExpireInflights` hook and uses the more appropriate `OnQosDropped` hook instead, by @mochi-co in #127 per #125
- Fixes an issue where the Connect property RequestResponseInfo was being applied to all packets instead of just the Connack packet, by @mochi-co in #128 per #128
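A minimal sketch of serving MQTT over the new unix socket listener. The `NewUnixSock(id, path)` signature, the `/tmp/mochi.sock` path, and the surrounding setup calls are assumptions drawn from the v2 README rather than verbatim from #124 (older v2 tags import `github.com/mochi-co/mqtt/v2`):

```go
package main

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/hooks/auth"
	"github.com/mochi-mqtt/server/v2/listeners"
)

func main() {
	server := mqtt.New(nil)
	_ = server.AddHook(new(auth.AllowHook), nil) // allow all connections, for brevity

	// Serve local clients over a unix socket instead of (or alongside) TCP.
	if err := server.AddListener(listeners.NewUnixSock("unix1", "/tmp/mochi.sock")); err != nil {
		log.Fatal(err)
	}

	if err := server.Serve(); err != nil {
		log.Fatal(err)
	}

	select {} // block forever so the listeners keep serving
}
```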
Full Changelog: v2.0.7...v2.1.0
Tests
- Builds
- Unit Tests Passing
- Paho Interoperability Passing
v2.0.7
What's Changed
- Add the OnUnsubscribed hook to the unsubscribeClient method, by @wind-c in #122 (a hook sketch follows below)
- UnsubscribeClient is now an exported method.
Many thanks to @wind-c!
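A hypothetical sketch of a hook consuming the new OnUnsubscribed event. The `mqtt.HookBase` embedding, the `Provides` discriminator, and the `OnUnsubscribed(cl, pk)` signature are assumptions based on the v2 hooks system and may differ slightly at this tag:

```go
package hooks // illustrative package name

import (
	"log"

	mqtt "github.com/mochi-mqtt/server/v2"
	"github.com/mochi-mqtt/server/v2/packets"
)

// UnsubLogger logs whenever a client unsubscribes from topic filters.
type UnsubLogger struct {
	mqtt.HookBase
}

// ID identifies the hook in server logs.
func (h *UnsubLogger) ID() string { return "unsub-logger" }

// Provides indicates which hook events this hook implements.
func (h *UnsubLogger) Provides(b byte) bool {
	return b == mqtt.OnUnsubscribed
}

// OnUnsubscribed is called after a client's subscriptions are removed.
func (h *UnsubLogger) OnUnsubscribed(cl *mqtt.Client, pk packets.Packet) {
	log.Printf("client %s unsubscribed from %v", cl.ID, pk.Filters)
}
```

Register it as usual with `_ = server.AddHook(new(UnsubLogger), nil)`.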
Full Changelog: v2.0.6...v2.0.7
Tests
- Builds
- Unit Tests Passing
- Paho Interoperability Passing
v2.0.6
What's Changed
- Enforce server max packet size as per #120, by @mochi-co in #121 (see the sketch after this list)
- Minor test changes.
- Adjustment to ClientDisconnect to adhere to PassiveClientDisconnect compatibility mode.
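A hedged fragment showing how the enforced limit might be configured. The `Options.Capabilities.MaximumPacketSize` field name and placement are assumptions; the enforcement itself is what #121 added:

```go
package main

import (
	mqtt "github.com/mochi-mqtt/server/v2"
)

func main() {
	server := mqtt.New(nil)

	// Disconnect clients that send packets larger than 1 KB.
	// (Field placement is an assumption; see lead-in.)
	server.Options.Capabilities.MaximumPacketSize = 1024

	// ... add listeners and call server.Serve() as usual ...
	_ = server
}
```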
Full Changelog: v2.0.5...v2.0.6
Tests
- Builds
- Unit Tests Passing
- Paho Interoperability Passing - V3, V5
v2.0.5
What's Changed
- Fix mis-typed OnPublished hook, update version, and fanpool defaults, by @mochi-co in #119
- Fix websocket malformed packet bug by @tommyminds and @svanharmelen in #116
New Contributors
- @tommyminds and @svanharmelen made their first contributions in #116 🎉
Full Changelog: v2.0.4...v2.0.5
v2.0.4
- Restores the `server.Publish(topic string, payload []byte, retain bool, qos byte) error` method from v1.3.2 as a convenience function which utilizes server.InjectPacket, by @mochi-co for #113 (see the usage sketch after this list)
- Refactors Client creation to allow developers to more easily create and use Clients and InlineClients, as passing server.ops was difficult and NewClient, NewInlineClient, and newClientStub presented unnecessary code duplication. Use `server.NewClient` instead of `mqtt.NewClient`, by @mochi-co. Many thanks to @chenji1990 for their supportive PR regarding this matter!
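Usage of the restored convenience method, matching the signature quoted above. The topic and payload are illustrative only; `server` is assumed to be an already-running instance:

```go
// Publish a retained "hello" at QoS 0 directly from the embedding program.
if err := server.Publish("direct/greeting", []byte("hello"), true, 0); err != nil {
	log.Println(err)
}
```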
Full Changelog: v2.0.3...v2.0.4
Tests
- Builds
- Unit Tests Passing
- Paho Interoperability Passing - V3, V5
v2.0.3
v2.0.1
What's Changed
- Everything. Thank you for your patience - by @mochi-co
- Provides full compliance with MQTT v5 specification
- Fixes all outstanding v1 related issues.
Full Changelog: v1.3.2...v2.0.1
This much-awaited release represents a total ground-up rewrite of the entire project in order to primarily support all of the features and compliance requirements detailed in the MQTT v5 specification. As such, it represents an absolute breaking change from the v1 series of the broker.
In particular, the following have changed which may interrupt your existing implementations:
- Auth interfaces, persistence interfaces, and the events callback system have been replaced with the new universal Hooks system.
- Inline Publish has been replaced by Inject Packet (see the sketch after this list).
- The way the server is initiated and configured has changed.
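For those migrating from v1's Inline Publish, a hedged sketch of the Inject Packet replacement. The `InjectPacket(cl, pk)` shape and the `packets.Packet` field names are assumptions based on the v2 API; `cl` would be a client such as one obtained from `server.NewClient`:

```go
// Inject a publish packet as though it arrived from client cl.
err := server.InjectPacket(cl, packets.Packet{
	FixedHeader: packets.FixedHeader{Type: packets.Publish},
	TopicName:   "direct/publish",
	Payload:     []byte("injected message"),
})
if err != nil {
	log.Println(err)
}
```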
Please refer to the new readme for full information on all changes, and open an issue if you have any questions or feedback! 🙂
Tests
- Builds
- Unit Tests Passing
- Paho Interoperability Passing - V3, V5
v1.3.2
What's Changed
- Provides a fix for new data races in inflight messages system by @mochi-co in #99 and @muXxer in #96
- Increased size of InlineMessages publish buffer from 1024 to 4096 in light of #95 by @mochi-co
Full Changelog: v1.3.1...v1.3.2
This patch release addresses issues #98 and #96, in which a data race introduced by #90 caused fatal errors when the primary server routine and client routines attempted to access the client inflight messages map at the same time.
The issue does not affect users in controlled environments who do not use QOS values higher than 0, but users who do should upgrade to this release at their earliest convenience.
The issue was reproduced by placing the server under heavy load with inovex/mqtt-stresser, using qos=2 for both the publisher and subscriber (see #98).
Profiling the solution before and after shows a small (though negligible) reduction in mallocs (a good thing). When running the above-mentioned stress tests, no data races or crashes are detected.
With many thanks to @muXxer for identifying the issue and providing a potential solution in #96.
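For readers unfamiliar with the failure mode, a generic illustration of this race class (not the actual patch): unsynchronized map access from two goroutines is fatal in Go, so the inflight map must be guarded, e.g. with a `sync.RWMutex`:

```go
package clients // illustrative package name

import "sync"

// InflightMessage stands in for the broker's queued QOS > 0 message type.
type InflightMessage struct {
	Packet []byte
	Sent   int64
}

// Inflight guards the per-client inflight map so the server routine and
// client routines can access it concurrently without a data race.
type Inflight struct {
	sync.RWMutex
	internal map[uint16]InflightMessage // keyed by packet id
}

// Set stores an inflight message under its packet id.
func (i *Inflight) Set(key uint16, in InflightMessage) {
	i.Lock()
	defer i.Unlock()
	i.internal[key] = in
}

// Get retrieves an inflight message by packet id.
func (i *Inflight) Get(key uint16) (InflightMessage, bool) {
	i.RLock()
	defer i.RUnlock()
	in, ok := i.internal[key]
	return in, ok
}
```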
Tests
- Builds
- Unit Tests Passing
- PAHO Interoperability Passing
v1.3.1
What's Changed
- Keep in sync server.System.Inflight by @bkupidura in #92
- Small fix for paho compatibility (check correct cleansession var on disconnect) by @mochi-co
New Contributors
- @bkupidura made their first contribution in #92 🥳
Full Changelog: v1.3.0...v1.3.1
Tests
- Builds
- Unit Tests Passing
- PAHO Interoperability Passing
v1.3.0
What's Changed - The Big Inflight Messages Release
- Adds Inflight TTL expiry times for queued inflight messages by @mochi-co as per #86 and #76
- Periodically resend queued inflight messages to connected clients by @mochi-co as per #86 and #76
- Fixes long-standing flaky test by @mochi-co as per #25
- Changes to the Persistence interface in order to facilitate cleaning of inflight messages by @mochi-co.
- Upgrading to this new release will clean any stuck/accumulated QOS messages in your store.
Full Changelog: v1.2.3...v1.3.0
In discussion #76 (Inflight messages cleaning) and issue #86 (Stale inflight messages), we discovered that in some cases inflight messages were able to accumulate with no way of purging them from memory or the persistence store (if in use).
The main reasons this occurred were:
- In order to meet the MQTT 3.1.1 spec, QOS > 0 Publish messages sent to QOS > 0 subscriptions are queued for later delivery to clients when they reconnect. If a client never reconnected, the messages would queue indefinitely.
- Less frequently, QOS messages which failed to resolve were only resent if the client reconnected. If the client never reconnected, they were kept indefinitely.
To this end, the following changes have been implemented:
- The server internal event loop now periodically attempts to resend pending inflight messages to connected clients. After 6 attempts, the inflight message is presumed to be defective/unwanted and dropped. As resends are only attempted with connected clients, this does not affect the expected behaviour of QOS message queueing.
- An Inflight TTL feature has been added which allows the server to drop any inflight messages which are still queued after a given duration. The server now takes a new `InflightTTL` option specifying the number of seconds an unresolved message should be kept before being dropped. A new case has been added to the internal event loop to periodically scan the client inflight messages memory and delete any expired inflight messages based on this value.
- The Persistence interface has been updated to provide a `ClearExpiredInflight(expiry int64) error` method for clearing expired inflight messages in layers which do not support native TTL solutions (bolt), and a `setInflightTTL` method used to automatically propagate the known inflight TTL from the server to persistence layers (a sketch follows below). These interface changes are breaking changes, and as such the minor version has been increased to 3.
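A hedged sketch of how a simple in-memory persistence layer might satisfy the new expiry method. Only `ClearExpiredInflight` is shown; the store and message types here are illustrative, not the bolt implementation:

```go
package persistence // illustrative package name

import "sync"

// Message stands in for a stored inflight message; Sent records the unix
// timestamp at which it was queued.
type Message struct {
	Sent    int64
	Payload []byte
}

// MemoryStore is an illustrative in-memory persistence layer.
type MemoryStore struct {
	sync.Mutex
	inflight map[string]Message
}

// ClearExpiredInflight deletes any inflight message queued at or before
// the given expiry unix timestamp, per the interface method above.
func (s *MemoryStore) ClearExpiredInflight(expiry int64) error {
	s.Lock()
	defer s.Unlock()
	for id, m := range s.inflight {
		if m.Sent <= expiry {
			delete(s.inflight, id)
		}
	}
	return nil
}
```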
For those wishing to set the TTL of their inflight messages to a specific value, `examples/tcp/main.go` has a demonstration:
```go
// An example of configuring various server options...
options := &mqtt.Options{
	BufferSize:      0,       // Use default values
	BufferBlockSize: 0,       // Use default values
	InflightTTL:     60 * 15, // Set an example custom 15-min TTL for inflight messages
}

server := mqtt.NewServer(options)
```
With huge thanks to @Flaer and @bkupidura for their work investigating this issue.
As usual, please open an issue with any concerns, bugs, or ideas! :)
Tests
- Builds
- Unit Tests Passing
- PAHO Interoperability Passing