Releases: admiraltyio/admiralty

v0.13.0

18 Nov 00:46
7a231d3

New Features

  • a1c88bc Alternative scheduling algorithm, enabled with the multicluster.admiralty.io/no-reservation pod annotation, to work with third-party schedulers in target clusters, e.g., AWS Fargate, instead of the candidate scheduler.
  • 7a231d3 Support cluster-level, i.e., virtual-node-level, scheduling constraints, either in addition to target-cluster-node-level scheduling constraints (with the multicluster.admiralty.io/proxy-pod-scheduling-constraints pod annotation) or instead of them (with the multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling pod annotation). To inform this new type of scheduling, target cluster node labels are aggregated onto virtual nodes: labels with unique values across all nodes of a target cluster, though not necessarily present on all nodes of that cluster, are added to the corresponding virtual node. See the sketch after this list.
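
Below is a minimal pod sketch combining these annotations, as an assumption-laden illustration rather than a reference: the empty-string annotation values, the pod itself, and the topology.kubernetes.io/region label are assumptions (these notes don't state the expected value formats); the nodeSelector stands in for a constraint evaluated against labels aggregated onto virtual nodes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo  # hypothetical name
  annotations:
    # Standard Admiralty opt-in annotation for multi-cluster scheduling.
    multicluster.admiralty.io/elect: ""
    # Assumed value format: enable the alternative, no-reservation algorithm
    # (for target clusters using third-party schedulers such as AWS Fargate).
    multicluster.admiralty.io/no-reservation: ""
    # Assumed value format: use the constraints from the pod spec below for
    # proxy-pod (virtual-node-level) scheduling instead of target-node scheduling.
    multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling: ""
spec:
  nodeSelector:
    # Example label assumed to be aggregated from target cluster nodes onto a virtual node.
    topology.kubernetes.io/region: us-west-2
  containers:
    - name: app
      image: nginx  # placeholder image
```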

Bugfixes

  • a04da55 Fix multi-cluster service deletion.

v0.12.0

08 Oct 00:40

New Features

  • 8865647 Ingresses follow services that follow matching cross-cluster pods, to integrate with global ingress controllers, e.g., Admiralty Cloud. See the sketch below.
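
For context, nothing Admiralty-specific is needed on the ingress itself: a plain Service selecting the cross-cluster pods, referenced by a plain Ingress, is what gets followed. The sketch below uses hypothetical names and hostnames, and the Ingress API version depends on your cluster version.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello  # hypothetical
spec:
  selector:
    app: hello  # matches the cross-cluster (multi-cluster) pods
  ports:
    - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello  # hypothetical
spec:
  rules:
    - host: hello.example.com  # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello  # the Service above; the Ingress follows it across clusters
                port:
                  number: 80
```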

Bugfixes

  • 3e4da1c Fix Helm chart post-delete hook. The patch was missing quotes. Uninstall would hang, with the job crash-looping.
  • ec86d72 Fix service reroute, which didn't stick after the service was updated or re-applied.

Internals

  • ec86d72 Refactor service controllers (reroute and global) into a new follow controller similar to the others. As a consequence, remove multicluster-controller dependency.

v0.11.0

25 Sep 23:53

New Features

  • 2b696b0 Support kubectl logs and kubectl exec.
  • d3832f6 Misconfigured Targets are now skipped, instead of crashing. A partially functioning system is better than failure in this case.
  • 21483dc Multi-arch Docker image manifests! In addition to amd64, we now release images for arm64, ppc64le, and s390x, per user requests. Binaries are cross-compiled and untested but should work. If not, please submit an issue. If you need other architectures, please submit an issue or PR.
  • 21483dc Smaller images, compressed with UPX.

Bugfixes

  • c81cbc5 Allow source clusters that have the HugePageStorageMediumSize feature gate disabled (Kubernetes pre-1.19) to work with target clusters that have multiple huge page sizes.
  • 2922775 Don't crash if user annotates pod but forgot to label namespace.

v0.10.0

17 Aug 06:02

New Features

  • The Source CRD and controller make it easy to create service accounts and role bindings for source clusters in a target cluster. (PR #48)
  • The Target CRD and controller allow defining targets of a source cluster at runtime, rather than as Helm release values. (PR #49) See the sketch after this list.
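
As a rough sketch of how these two CRDs fit together (the field names are assumptions based on this description, not verified against the v0.10.0 schema): a Source is created in the target cluster to grant a source cluster access, and a Target is created in the source cluster to point at a target cluster.

```yaml
# In the target cluster: grant a source cluster access
# (spec.serviceAccountName is an assumed field name).
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Source
metadata:
  name: source-cluster  # hypothetical
spec:
  serviceAccountName: source-cluster
---
# In the source cluster: define a target at runtime rather than as a Helm release value
# (spec.kubeconfigSecret is an assumed field name).
apiVersion: multicluster.admiralty.io/v1alpha1
kind: Target
metadata:
  name: target-cluster  # hypothetical
spec:
  kubeconfigSecret:
    name: target-cluster-kubeconfig  # secret holding a kubeconfig for the target cluster
```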

Bugfixes

  • Fix name collisions and length issues (PR #56)
  • Fix cross-cluster references when parent and child names differ and the parent name is longer than 63 characters, including proxy-delegate pods (PR #57)
  • Fix source cluster role references (PR #58)

See further changes already listed for release candidates below.

v0.10.0-rc.1

07 Jul 18:45
Pre-release

This release fixes cluster summary RBAC for namespaced targets. (ClusterSummary is a new CRD introduced in v0.10.0-rc.0.)

v0.10.0-rc.0

07 Jul 17:21
Pre-release

This release fixes a couple of bugs: one with GKE route-based clusters (vs. VPC-native), the other with DNS horizontal autoscaling. As a side benefit, virtual node capacities and allocatable resources are no longer dummy high values, but the sums of the corresponding values over the nodes of the target clusters that they represent. We also slipped in a small UX change: when you run kubectl get nodes, the role column now says "cluster" for virtual nodes, rather than "agent", to better reflect the concept. Last but not least, we upgraded internally from Kubernetes 1.17 to 1.18.

v0.9.3

25 Jun 17:37

Bugfixes

  • Fix #38. Cross-cluster garbage collection finalizers were added to all config maps and secrets, although only those that are copied across clusters actually need them. Finalizers are removed by the controller manager when config maps and secrets are terminating, so the bug wasn't major, but it did introduce unnecessary risk: if the controller manager went down, config maps and secrets couldn't be deleted. It could also conflict with third-party controllers of those config maps and secrets. The fix only applies finalizers to config maps and secrets that are referred to by multi-cluster pods, and removes extraneous finalizers (no manual cleanup needed).

v0.9.2

23 Jun 04:19

Bugfixes

The feature introduced in v0.9.0 (config maps and secrets follow pods) wasn't compatible with namespaced targets. This release fixes that.

v0.9.1

23 Jun 04:13

Bugfixes

  • f4b1936 had removed the proxy pod filter from the feedback controller, which crashed the controller manager when normal pods were scheduled on nodes whose names were shorter than 10 characters, and added finalizers to normal pods (manual cleanup necessary!).

v0.9.0

22 Jun 22:31

New Features

  • Fix #32. Config maps and secrets now follow pods. More specifically, if a proxy pod refers to config maps or secrets to be mounted as volumes or projected volumes, used as environment variables, or, for secrets, used as image pull secrets, Admiralty copies those config maps or secrets to the target cluster where the corresponding delegate pod runs. See the sketch below.
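
As a concrete illustration (all names below are hypothetical), the following pod references a config map as a volume, a secret as an environment variable, and a secret as an image pull secret; each of these would be copied to the target cluster alongside the delegate pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo  # hypothetical
  annotations:
    multicluster.admiralty.io/elect: ""  # multi-cluster pod
spec:
  imagePullSecrets:
    - name: registry-creds  # secret, followed to the target cluster
  containers:
    - name: app
      image: registry.example.com/app  # placeholder image
      env:
        - name: API_TOKEN
          valueFrom:
            secretKeyRef:
              name: app-secrets  # secret, followed
              key: token
      volumeMounts:
        - name: config
          mountPath: /etc/app
  volumes:
    - name: config
      configMap:
        name: app-config  # config map, followed
```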