Previous change logs can be found at CHANGELOG-3.3.

The minimum recommended etcd versions to run in production are 3.2.28+, 3.3.18+, and 3.4.2+.


v3.4.8 (2020-05-18)

See code changes and v3.4 upgrade guide for any breaking changes.

etcdctl

Package clientv3

etcd server

Package Auth

Metrics, Monitoring

Go


v3.4.7 (2020-04-01)

See code changes and v3.4 upgrade guide for any breaking changes.

etcd server

Package wal

Metrics, Monitoring

Go


v3.4.6 (2020-03-29)

See code changes and v3.4 upgrade guide for any breaking changes.

Package lease

Go


v3.4.5 (2020-03-18)

See code changes and v3.4 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read the change logs below and the v3.4 upgrade guide.

etcd server

client v3

etcdctl v3

Metrics, Monitoring

See List of metrics for all metrics per release.

gRPC Proxy

Go


v3.4.4 (2020-02-24)

See code changes and v3.4 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read the change logs below and the v3.4 upgrade guide.

etcd server

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

Auth


v3.4.3 (2019-10-24)

See code changes and v3.4 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read the change logs below and the v3.4 upgrade guide.

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

Go


v3.4.2 (2019-10-11)

See code changes and v3.4 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read the change logs below and the v3.4 upgrade guide.

etcdctl v3

etcdserver

  • Add tracing to range, put and compact requests in etcdserver.

Go

client v3


v3.4.1 (2019-09-17)

See code changes and v3.4 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read the change logs below and the v3.4 upgrade guide.

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

etcd server

Package embed

Dependency

Go


v3.4.0 (2019-08-30)

See code changes and v3.4 upgrade guide for any breaking changes.

Again, before running upgrades from any previous release, please make sure to read the change logs below and the v3.4 upgrade guide.

Documentation

Improved

Breaking Changes

Dependency

Metrics, Monitoring

See List of metrics for all metrics per release.

Note that any etcd_debugging_* metrics are experimental and subject to change.

Security, Authentication

See security doc for more details.

  • Support TLS cipher suite whitelisting.
  • Add etcd --host-whitelist flag, etcdserver.Config.HostWhitelist, and embed.Config.HostWhitelist, to prevent "DNS Rebinding" attacks (a programmatic sketch follows this list).
    • Any website can simply create an authorized DNS name and direct DNS to "localhost" (or any other address). Then, all HTTP endpoints of an etcd server listening on "localhost" become accessible, and thus vulnerable to DNS rebinding attacks (CVE-2018-5702).
    • The client origin enforcement policy works as follows:
      • If the client connection is secured via HTTPS, allow any hostname.
      • If the client connection is not secure and "HostWhitelist" is not empty, only allow HTTP requests whose Host field is listed in the whitelist.
    • By default, "HostWhitelist" is "*", which means an insecure server allows all client HTTP requests.
    • Note that the client origin policy is enforced whether authentication is enabled or not, for tighter control.
    • When specifying hostnames, loopback addresses are not added automatically. To allow loopback interfaces, add them to the whitelist manually (e.g. "localhost", "127.0.0.1", etc.).
    • e.g. with etcd --host-whitelist example.com, the server rejects all HTTP requests whose Host field is not example.com (including requests to "localhost").
  • Support etcd --cors in v3 HTTP requests (gRPC gateway).
  • Support ttl field for etcd Authentication JWT token.
    • e.g. etcd --auth-token jwt,pub-key=<pub key path>,priv-key=<priv key path>,sign-method=<sign method>,ttl=5m.
  • Allow empty token provider in etcdserver.ServerConfig.AuthToken.
  • Fix TLS reload when certificate SAN field only includes IP addresses but no domain names.
    • In Go, the server calls (*tls.Config).GetCertificate for TLS reload if and only if the server's (*tls.Config).Certificates field is empty, or the client supplies a valid SNI in a non-empty (*tls.ClientHelloInfo).ServerName. Previously, etcd always populated (*tls.Config).Certificates as non-empty on the initial client TLS handshake. Thus, the client was always expected to supply a matching SNI in order to pass the TLS verification and trigger (*tls.Config).GetCertificate to reload TLS assets.
    • However, a certificate whose SAN field includes only IP addresses and no domain names results in a *tls.ClientHelloInfo with an empty ServerName field, thus failing to trigger the TLS reload on the initial TLS handshake; this becomes a problem when expired certificates need to be replaced online.
    • Now, (*tls.Config).Certificates is left empty on the initial client TLS handshake, first to trigger (*tls.Config).GetCertificate, and then to populate the rest of the certificates on every new TLS connection, even when the client SNI is empty (e.g. the cert only includes IPs).
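
For embedded etcd, the same protections can be configured programmatically through the embed package. Below is a minimal sketch, assuming the v3.4 go.etcd.io/etcd/embed package (where HostWhitelist is a set keyed by hostname); the data directory, hostname, and key paths are illustrative:

```go
package main

import (
	"log"

	"go.etcd.io/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"

	// Reject plain-HTTP requests whose Host header is not listed here.
	// Loopback names are not added automatically; whitelist them
	// explicitly if loopback access is needed.
	cfg.HostWhitelist = map[string]struct{}{
		"example.com": {},
	}

	// JWT auth tokens with a TTL could be enabled like this:
	// cfg.AuthToken = "jwt,pub-key=app.rsa.pub,priv-key=app.rsa,sign-method=RS256,ttl=5m"

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
	log.Println("etcd is ready; non-whitelisted Host headers are rejected")
}
```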

etcd server

  • Add rpctypes.ErrLeaderChanged.
    • Now, linearizable requests with read index fail fast when there is a leadership change, instead of waiting until the context times out.
  • Add etcd --initial-election-tick-advance flag to configure initial election tick fast-forward.
    • By default, etcd --initial-election-tick-advance=true: the local member fast-forwards its election ticks to speed up the "initial" leader election trigger.
    • This benefits the case of larger election ticks. For instance, a cross-datacenter deployment may require a longer election timeout of 10 seconds. If enabled, the local node does not need to wait the full 10 seconds; instead, it fast-forwards its election ticks to the 8-second mark, leaving only 2 seconds before a leader election.
    • The major assumptions are that either the cluster has no active leader, so advancing ticks enables a faster leader election, or the cluster already has an established leader and a rejoining follower is likely to receive heartbeats from the leader after the tick advance and before the election timeout.
    • However, when the network from the leader to a rejoining follower is congested, and the follower does not receive a leader heartbeat within the remaining election ticks, a disruptive election has to happen, affecting cluster availability.
    • Now, this can be disabled by setting etcd --initial-election-tick-advance=false.
    • Disabling this slows down the initial bootstrap process for cross-datacenter deployments; make the tradeoff by configuring etcd --initial-election-tick-advance at the cost of slow initial bootstrap.
    • A single-node cluster advances ticks regardless.
    • This addresses disruptive rejoining follower nodes.
  • Add etcd --pre-vote flag to enable to run an additional Raft election phase.
    • For instance, a flaky (or rejoining) member may drop in and out and start a campaign. This member will end up with a higher term and ignore all incoming messages with lower terms. In this case, a new leader eventually needs to be elected, which is disruptive to cluster availability. Raft implements a Pre-Vote phase to prevent this kind of disruption. If enabled, Raft runs an additional election phase to check whether a pre-candidate can get enough votes to win an election.
    • etcd --pre-vote=false by default.
    • v3.5 will enable etcd --pre-vote=true by default.
  • Add etcd --experimental-compaction-batch-limit to set the maximum number of revisions deleted in each compaction batch.
  • Reduce the default compaction batch size from 10k revisions to 1k revisions to improve p99 latency during compactions, and reduce the wait between compaction batches from 100ms to 10ms.
  • Add etcd --discovery-srv-name flag to support custom DNS SRV name with discovery.
    • If not given, etcd queries _etcd-server-ssl._tcp.[YOUR_HOST] and _etcd-server._tcp.[YOUR_HOST].
    • If etcd --discovery-srv-name="foo", then it queries _etcd-server-ssl-foo._tcp.[YOUR_HOST] and _etcd-server-foo._tcp.[YOUR_HOST].
    • Useful for operating multiple etcd clusters under the same domain.
  • Support TLS cipher suite whitelisting.
  • Support etcd --cors in v3 HTTP requests (gRPC gateway).
  • Rename etcd --log-output to etcd --log-outputs to support multiple log outputs.
    • etcd --log-output will be deprecated in v3.5.
  • Add etcd --logger flag to support structured logging and multiple log outputs on the server side (see the embed logging sketch after this list).
    • etcd --logger=capnslog will be deprecated in v3.5.
    • The main motivation is to promote automated etcd monitoring, rather than combing through server logs after things start breaking. Future development will make etcd log as little as possible, and make etcd easier to monitor with metrics and alerts.
    • etcd --logger=capnslog --log-outputs=default is the default setting and matches the previous etcd server logging format.
    • --log-outputs=default is not supported when etcd --logger=zap is used.
      • Use etcd --logger=zap --log-outputs=stderr instead.
      • Or, use etcd --logger=zap --log-outputs=systemd/journal to send logs to the local systemd journal.
      • Previously, if the etcd parent process ID (PPID) was 1 (e.g. run under systemd), etcd --logger=capnslog --log-outputs=default redirected server logs to the local systemd journal, and if the write to journald failed, it fell back to os.Stderr.
      • However, even with PPID 1, dialing the systemd journal can fail (e.g. when running embedded etcd in a Docker container). Then every single log write fails and falls back to os.Stderr, which is inefficient.
      • To avoid this problem, systemd journal logging must be configured manually.
    • etcd --logger=zap --log-outputs=stderr logs server operations in JSON-encoded format and writes them to os.Stderr. Use this to override journald log redirects.
    • etcd --logger=zap --log-outputs=stdout logs server operations in JSON-encoded format and writes them to os.Stdout. Use this to override journald log redirects.
    • etcd --logger=zap --log-outputs=a.log logs server operations in JSON-encoded format and writes them to the specified file a.log.
    • etcd --logger=zap --log-outputs=a.log,b.log,c.log,stdout writes server logs to the files a.log, b.log, and c.log and to os.Stdout at the same time, in JSON-encoded format.
    • etcd --logger=zap --log-outputs=/dev/null discards all server logs.
  • Add etcd --log-level flag to support log level.
    • v3.5 will deprecate etcd --debug flag in favor of etcd --log-level=debug.
  • Add etcd --backend-batch-limit flag.
  • Add etcd --backend-batch-interval flag.
  • Fix mvcc "unsynced" watcher restore operation.
    • "unsynced" watcher is watcher that needs to be in sync with events that have happened.
    • That is, "unsynced" watcher is the slow watcher that was requested on old revision.
    • "unsynced" watcher restore operation was not correctly populating its underlying watcher group.
    • Which possibly causes missing events from "unsynced" watchers.
    • A node gets network partitioned with a watcher on a future revision, and falls behind receiving a leader snapshot after partition gets removed. When applying this snapshot, etcd watch storage moves current synced watchers to unsynced since sync watchers might have become stale during network partition. And reset synced watcher group to restart watcher routines. Previously, there was a bug when moving from synced watcher group to unsynced, thus client would miss events when the watcher was requested to the network-partitioned node.
  • Fix mvcc server panic from restore operation.
    • Let's assume that a watcher was requested with a future revision X and sent to node A, which became network-partitioned thereafter. Meanwhile, the cluster makes progress. When the partition is removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, the etcd server panicked during the snapshot restore operation.
    • Now, this server-side panic has been fixed.
  • Fix server panic on invalid Election Proclaim/Resign HTTP(S) requests.
    • Previously, malformed HTTP requests to the Election API could trigger a panic in the etcd server.
    • e.g. curl -L http://localhost:2379/v3/election/proclaim -X POST -d '{"value":""}', curl -L http://localhost:2379/v3/election/resign -X POST -d '{"value":""}'.
  • Fix revision-based compaction retention parsing.
    • Previously, etcd --auto-compaction-mode revision --auto-compaction-retention 1 was translated to revision retention 3600000000000.
    • Now, etcd --auto-compaction-mode revision --auto-compaction-retention 1 is correctly parsed as revision retention 1.
  • Prevent overflow from large TTL values in Lease Grant (see the client sketch after this list).
    • The TTL parameter to a Grant request is in units of seconds.
    • Leases with TTL values large enough to overflow math.MaxInt64 in the expiry computation expire in unexpected ways.
    • The server now returns rpctypes.ErrLeaseTTLTooLarge to the client when the requested TTL is larger than 9,000,000,000 seconds (more than 285 years).
    • Again, an etcd lease is meant for short-period keepalives or sessions, in the range of seconds or minutes, not for hours or days!
  • Fix expired lease revoke.
  • Enable etcd server raft.Config.CheckQuorum when starting with ForceNewCluster.
  • Allow non-WAL files in etcd --wal-dir directory.
    • Previously, existing files such as lost+found in the WAL directory prevented etcd server boot.
    • Now, a WAL directory that contains only lost+found or files not suffixed with .wal is considered non-initialized.
  • Fix ETCD_CONFIG_FILE env variable parsing in etcd.
  • Fix race condition in rafthttp transport pause/resume.
  • Fix server crash from creating an empty role.
    • Previously, creating a role with an empty name crashed the etcd server with error code Unavailable.
    • Now, creating a role with an empty name is not allowed and fails with error code InvalidArgument.
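
For embedded etcd, these logging flags map onto embed.Config fields. A minimal sketch, assuming the v3.4 go.etcd.io/etcd/embed package; the data directory and output paths are illustrative:

```go
package main

import (
	"log"

	"go.etcd.io/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"

	// Equivalent to: etcd --logger=zap --log-outputs=stderr,a.log --log-level=info
	cfg.Logger = "zap"
	cfg.LogOutputs = []string{"stderr", "a.log"}
	cfg.LogLevel = "info"

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
}
```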
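
On the client side, an over-large lease TTL now surfaces as a typed error instead of overflowing. A minimal sketch, assuming the v3.4 import paths go.etcd.io/etcd/clientv3 and go.etcd.io/etcd/etcdserver/api/v3rpc/rpctypes; the endpoint is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/etcdserver/api/v3rpc/rpctypes"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// TTL is in seconds; requests above 9,000,000,000 seconds are rejected.
	_, err = cli.Grant(context.TODO(), 9000000001)
	if rpctypes.Error(err) == rpctypes.ErrLeaseTTLTooLarge {
		fmt.Println("server rejected lease:", err)
	}
}
```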

API

  • Add isLearner field to etcdserverpb.Member, etcdserverpb.MemberAddRequest and etcdserverpb.StatusResponse as part of raft learner implementation.
  • Add MemberPromote RPC to the etcdserverpb.Cluster interface and the corresponding MemberPromoteRequest and MemberPromoteResponse as part of raft learner implementation.
  • Add snapshot package for snapshot restore/save operations (see godoc.org/go.etcd.io/etcd/clientv3/snapshot for more).
  • Add watch_id field to etcdserverpb.WatchCreateRequest to allow user-provided watch ID to mvcc.
    • Corresponding watch_id is returned via etcdserverpb.WatchResponse, if any.
  • Add fragment field to etcdserverpb.WatchCreateRequest to request etcd server to split watch events when the total size of events exceeds etcd --max-request-bytes flag value plus gRPC-overhead 512 bytes.
    • The default server-side request bytes limit is embed.DefaultMaxRequestBytes which is 1.5 MiB plus gRPC-overhead 512 bytes.
    • If watch response events exceed this server-side request limit and the watch request is created with the fragment field set to true, the server splits the watch events into a set of chunks, each of which is a subset of the watch events below the server-side request limit.
    • Useful when the client side has limited bandwidth.
    • For example, suppose a watch response contains 10 events, each event is 1 MiB, and the server's etcd --max-request-bytes flag value is 1 MiB. The server then sends 10 separate fragmented events to the client.
    • For example, suppose a watch response contains 5 events, each event is 2 MiB, the server's etcd --max-request-bytes flag value is 1 MiB, and clientv3.Config.MaxCallRecvMsgSize is 1 MiB. The server tries to send 5 separate fragmented events to the client, and the client errors with "code = ResourceExhausted desc = grpc: received message larger than max (...)".
    • Client must implement fragmented watch event merge (which clientv3 does in etcd v3.4).
  • Add raftAppliedIndex field to etcdserverpb.StatusResponse for current Raft applied index.
  • Add errors field to etcdserverpb.StatusResponse for server-side error.
    • e.g. "etcdserver: no leader", "NOSPACE", "CORRUPT"
  • Add dbSizeInUse field to etcdserverpb.StatusResponse for actual DB size after compaction.
  • Add WatchRequest.WatchProgressRequest (see the gRPC sketch after this list).
    • To manually trigger broadcasting a watch progress event (an empty watch response with the latest header) to all associated watch streams.
    • Think of it as WithProgressNotify that can be triggered manually.
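
At the gRPC level, both the user-provided watch ID and fragmentation are set on WatchCreateRequest, and a progress broadcast is requested by sending a WatchProgressRequest on the same stream. A minimal sketch, assuming the v3.4 go.etcd.io/etcd/etcdserver/etcdserverpb generated package; the endpoint, key, and watch ID are illustrative:

```go
package main

import (
	"context"
	"log"

	"go.etcd.io/etcd/etcdserver/etcdserverpb"
	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial("localhost:2379", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	stream, err := etcdserverpb.NewWatchClient(conn).Watch(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	// Create a watch with a client-chosen watch_id and fragmentation enabled.
	if err := stream.Send(&etcdserverpb.WatchRequest{
		RequestUnion: &etcdserverpb.WatchRequest_CreateRequest{
			CreateRequest: &etcdserverpb.WatchCreateRequest{
				Key:      []byte("foo"),
				WatchId:  42,   // user-provided watch ID
				Fragment: true, // split over-sized watch responses into chunks
			},
		},
	}); err != nil {
		log.Fatal(err)
	}

	// Ask the server to broadcast a progress event (an empty watch
	// response with the latest header) to all watch streams.
	if err := stream.Send(&etcdserverpb.WatchRequest{
		RequestUnion: &etcdserverpb.WatchRequest_ProgressRequest{
			ProgressRequest: &etcdserverpb.WatchProgressRequest{},
		},
	}); err != nil {
		log.Fatal(err)
	}

	resp, err := stream.Recv()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("created=%v watch_id=%d", resp.Created, resp.WatchId)
}
```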

Note: v3.5 will deprecate the etcd --log-package-levels flag for capnslog; etcd --logger=zap --log-outputs=stderr will be the default. v3.5 will deprecate the [CLIENT-URL]/config/local/log endpoint.

Package embed

Package pkg/adt

Package integration

client v3

  • Add MemberAddAsLearner to the Clientv3.Cluster interface. This API is used to add a learner member to the etcd cluster.
  • Add MemberPromote to the Clientv3.Cluster interface. This API is used to promote a learner member in the etcd cluster.
  • Client may receive rpctypes.ErrLeaderChanged from server.
    • Now, linearizable requests with read index fail fast when there is a leadership change, instead of waiting until the context times out.
  • Add WithFragment OpOption to support watch events fragmentation when the total size of events exceeds etcd --max-request-bytes flag value plus gRPC-overhead 512 bytes.
    • Watch fragmentation is disabled by default.
    • The default server-side request bytes limit is embed.DefaultMaxRequestBytes which is 1.5 MiB plus gRPC-overhead 512 bytes.
    • If watch response events exceed this server-side request limit and the watch request is created with the fragment field set to true, the server splits the watch events into a set of chunks, each of which is a subset of the watch events below the server-side request limit.
    • Useful when the client side has limited bandwidth.
    • For example, suppose a watch response contains 10 events, each event is 1 MiB, and the server's etcd --max-request-bytes flag value is 1 MiB. The server then sends 10 separate fragmented events to the client.
    • For example, suppose a watch response contains 5 events, each event is 2 MiB, the server's etcd --max-request-bytes flag value is 1 MiB, and clientv3.Config.MaxCallRecvMsgSize is 1 MiB. The server tries to send 5 separate fragmented events to the client, and the client errors with "code = ResourceExhausted desc = grpc: received message larger than max (...)".
  • Add Watcher.RequestProgress method (see the client sketch after this list).
    • To manually trigger broadcasting a watch progress event (an empty watch response with the latest header) to all associated watch streams.
    • Think of it as WithProgressNotify that can be triggered manually.
  • Fix lease keepalive interval updates when response queue is full.
    • If the <-chan *clientv3.LeaseKeepAliveResponse from clientv3.Lease.KeepAlive was never consumed or the channel was full, the client sent a keepalive request every 500ms instead of the expected rate of once per "TTL / 3" duration.
  • Change snapshot file permissions: On Linux, the snapshot file changes from readable by all (mode 0644) to readable by the user only (mode 0600).
  • Client may choose to send keepalive pings to server using PermitWithoutStream.
    • By setting PermitWithoutStream to true, the client can send keepalive pings to the server without any active streams (RPCs). In other words, it allows sending keepalive pings even with only unary or simple RPC calls.
    • PermitWithoutStream is set to false by default.
  • Fix the logic for releasing the lock key on cancellation in the clientv3/concurrency package.
  • Fix (*Client).Endpoints() method race condition.
  • Deprecated grpc.ErrClientConnClosing.
    • clientv3 and proxy/grpcproxy no longer return grpc.ErrClientConnClosing.
    • grpc.ErrClientConnClosing has been deprecated in gRPC >= 1.10.
    • Use clientv3.IsConnCanceled(error) or google.golang.org/grpc/status.FromError(error) instead.
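
A minimal client sketch combining several of these additions (keepalive pings without active streams, fragmented watches, and manual progress requests), assuming the v3.4 go.etcd.io/etcd/clientv3 import path; the endpoint and key prefix are illustrative:

```go
package main

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:            []string{"localhost:2379"},
		DialTimeout:          5 * time.Second,
		DialKeepAliveTime:    10 * time.Second,
		DialKeepAliveTimeout: 3 * time.Second,
		PermitWithoutStream:  true, // keepalive pings even with no active streams
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	// Watch with fragmentation enabled; clientv3 merges the
	// fragmented events back into whole responses.
	wch := cli.Watch(ctx, "prefix/", clientv3.WithPrefix(), clientv3.WithFragment())

	// Manually request a progress broadcast (an empty response
	// carrying the latest header) on this client's watch streams.
	if err := cli.RequestProgress(ctx); err != nil {
		log.Fatal(err)
	}

	for wresp := range wch {
		if wresp.IsProgressNotify() {
			log.Printf("progress at revision %d", wresp.Header.Revision)
			continue
		}
		for _, ev := range wresp.Events {
			log.Printf("%s %q : %q", ev.Type, ev.Kv.Key, ev.Kv.Value)
		}
	}
}
```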

etcdctl v3

gRPC proxy

  • Fix etcd server panic from restore operation.
    • Let's assume that a watcher was requested with a future revision X and sent to node A, which became network-partitioned thereafter. Meanwhile, the cluster makes progress. When the partition is removed, the leader sends a snapshot to node A. Previously, if the snapshot's latest revision was still lower than the watch revision X, the etcd server panicked during the snapshot restore operation.
    • The gRPC proxy was especially affected, since it detects leader loss with the key "proxy-namespace__lostleader" and a watch revision of "int64(math.MaxInt64 - 2)".
    • Now, this server-side panic has been fixed.
  • Fix memory leak in cache layer.
  • Change gRPC proxy to expose the etcd server /metrics endpoint.
    • Previously, the metrics exposed via the proxy were those of the proxy itself, not those of the etcd server members.

gRPC gateway

Package raft

Package wal

Tooling

Go

Dockerfile