
feat(helm): update chart cilium to 1.16.5 #15

Open. Wants to merge 1 commit into base branch main.

Conversation

renovate[bot] (Contributor) commented Oct 28, 2023

This PR contains the following updates:

Package: cilium (source)
Update type: minor
Change: 1.14.2 -> 1.16.5

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

cilium/cilium (cilium)

v1.16.5: 1.16.5

Compare Source

Summary of Changes

Minor Changes:

Bugfixes:

  • Address potential connectivity disruption when using either L7 / DNS Network policies in combination with per-endpoint routes and hostLegacyRouting, or L7 / DNS network policies in combination with IPsec network encryption. (Backport PR #​36540, Upstream PR #​36484, @​julianwiedmann)
  • bgp: fix race in bgp stores (Backport PR #​36066, Upstream PR #​35971, @​harsimran-pabla)
  • BGPv1: Fix race in reconciliation of services with externalTrafficPolicy=Local by populating locally available services after performing the service diff (Backport PR #​36286, Upstream PR #​36230, @​rastislavs)
  • BGPv2: Fix race in reconciliation of services with externalTrafficPolicy=Local by populating locally available services after performing the service diff (Backport PR #​36286, Upstream PR #​36165, @​rastislavs)
  • Cilium agent now waits until endpoints have been restored before accepting new xDS streams. (Backport PR #​36049, Upstream PR #​35984, @​jrajahalme)
  • Cilium no longer keeps old DNS-IP mappings alive while reaping newer ones, which led to spurious drops in connections to domains with many associated IPs. (Backport PR #​36462, Upstream PR #​36252, @​bimmlerd)
  • The cilium-health-ep controller is now more robust against successive failures. (Backport PR #​36066, Upstream PR #​35936, @​jrajahalme)
  • The DNS proxy port is no longer released when an endpoint with a DNS policy fails to regenerate successfully. A potential deadlock between the CEC/CCEC parser and endpoint policy update is removed. (Backport PR #​36468, Upstream PR #​36142, @​jrajahalme)
  • Envoy "initial fetch timeout" warnings are now demoted to info level, as they are expected to happen during Cilium Agent restart. (Backport PR #​36049, Upstream PR #​36060, @​jrajahalme)
  • Fix an issue where pod-to-world traffic went up the stack when BPF host routing was enabled with tunneling. (Backport PR #​35861, Upstream PR #​35098, @​jschwinger233)
  • Fix identity leak for kvstore identity mode (Backport PR #​36066, Upstream PR #​34893, @​odinuge)
  • Fix potential Cilium agent panic during endpoint restoration, occurring if the corresponding pod gets deleted while the agent is restarting. This regression only affects Cilium v1.16.4. (Backport PR #​36302, Upstream PR #​36292, @​giorio94)
  • gateway-api: Fix gateway checks for namespace (Backport PR #​36462, Upstream PR #​35452, @​sayboras)
  • gha: Remove hostLegacyRouting in clustermesh (Backport PR #​36357, Upstream PR #​35418, @​sayboras)
  • helm: Use an absolute FQDN for the Hubble peer-service endpoint to avoid incorrect DNS resolution outside the cluster (Backport PR #​36066, Upstream PR #​36005, @​devodev)
  • hubble: consistently use v as prefix for the Hubble version (Backport PR #​36286, Upstream PR #​35891, @​rolinh)
  • iptables: Fix data race in iptables manager (Backport PR #​36066, Upstream PR #​35902, @​pippolo84)
  • lrp: update LRP services with stale backends on agent restart (Backport PR #​36106, Upstream PR #​36036, @​ysksuzuki)
  • policy: Fix bug that allowed port ranges to be attached to L7 policies, which is not permitted. (#​36050, @​nathanjsweet)
  • Unbreak the cilium-dbg preflight migrate-identity command (Backport PR #​36286, Upstream PR #​36089, @​giorio94)
  • Use strconv.Itoa instead of string() for the correct behavior when converting kafka.ErrorCode from int32 to string. Add relevant unit tests for Kafka plugin and handler. (Backport PR #​36066, Upstream PR #​35856, @​nddq)

CI Changes:

Misc Changes:

Other Changes:

Docker Manifests

cilium

quay.io/cilium/cilium:v1.16.5@​sha256:758ca0793f5995bb938a2fa219dcce63dc0b3fa7fc4ce5cc851125281fb7361d
quay.io/cilium/cilium:stable@sha256:758ca0793f5995bb938a2fa219dcce63dc0b3fa7fc4ce5cc851125281fb7361d

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.16.5@​sha256:37a7fdbef806b78ef63df9f1a9828fdddbf548d1f0e43b8eb10a6bdc8fa03958
quay.io/cilium/clustermesh-apiserver:stable@sha256:37a7fdbef806b78ef63df9f1a9828fdddbf548d1f0e43b8eb10a6bdc8fa03958

docker-plugin

quay.io/cilium/docker-plugin:v1.16.5@​sha256:d6b4ed076ae921535c2a543d4b5b63af474288ee4501653a1f442c935beb5768
quay.io/cilium/docker-plugin:stable@sha256:d6b4ed076ae921535c2a543d4b5b63af474288ee4501653a1f442c935beb5768

hubble-relay

quay.io/cilium/hubble-relay:v1.16.5@​sha256:6cfae1d1afa566ba941f03d4d7e141feddd05260e5cd0a1509aba1890a45ef00
quay.io/cilium/hubble-relay:stable@sha256:6cfae1d1afa566ba941f03d4d7e141feddd05260e5cd0a1509aba1890a45ef00

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.16.5@​sha256:c0edf4c8d089e76d6565d3c57128b98bc6c73d14bb4590126ee746aeaedba5e0
quay.io/cilium/operator-alibabacloud:stable@sha256:c0edf4c8d089e76d6565d3c57128b98bc6c73d14bb4590126ee746aeaedba5e0

operator-aws

quay.io/cilium/operator-aws:v1.16.5@​sha256:97e1fe0c2b522583033138eb10c170919d8de49d2788ceefdcff229a92210476
quay.io/cilium/operator-aws:stable@sha256:97e1fe0c2b522583033138eb10c170919d8de49d2788ceefdcff229a92210476

operator-azure

quay.io/cilium/operator-azure:v1.16.5@​sha256:265e2b78f572c76b523f91757083ea5f0b9b73b82f2d9714e5a8fb848e4048f9
quay.io/cilium/operator-azure:stable@sha256:265e2b78f572c76b523f91757083ea5f0b9b73b82f2d9714e5a8fb848e4048f9

operator-generic

quay.io/cilium/operator-generic:v1.16.5@​sha256:f7884848483bbcd7b1e0ccfd34ba4546f258b460cb4b7e2f06a1bcc96ef88039
quay.io/cilium/operator-generic:stable@sha256:f7884848483bbcd7b1e0ccfd34ba4546f258b460cb4b7e2f06a1bcc96ef88039

operator

quay.io/cilium/operator:v1.16.5@​sha256:617896e1b23a2c4504ab2c84f17964e24dade3b5845f733b11847202230ca940
quay.io/cilium/operator:stable@sha256:617896e1b23a2c4504ab2c84f17964e24dade3b5845f733b11847202230ca940

v1.16.4: 1.16.4

Compare Source

Security Advisories

This release addresses GHSA-xg58-75qf-9r67.

Summary of Changes

Minor Changes:

  • Added Helm option 'envoy.initialFetchTimeoutSeconds' (default 30 seconds) to override the Envoy default (15 seconds). (Backport PR #​35908, Upstream PR #​35809, @​jrajahalme)
  • clustermesh: add guardrails for known broken ENI/aws-chaining + cluster ID combination (Backport PR #​35543, Upstream PR #​35349, @​giorio94)
  • helm: Lower default hubble.tls.auto.certValidityDuration to 365 days (Backport PR #​35781, Upstream PR #​35630, @​chancez)
  • helm: New socketLB.tracing flag (Backport PR #​35781, Upstream PR #​35747, @​pchaigno)
  • hubble-relay: Return underlying connection errors when connecting to peer manager (Backport PR #​35781, Upstream PR #​35632, @​chancez)
  • netkit: Fix issue where traffic originating from the host namespace fails to reach the pod when using endpoint routes and network policies. (Backport PR #​35543, Upstream PR #​35306, @​jrife)
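The new `envoy.initialFetchTimeoutSeconds` Helm option mentioned above is set through chart values; a minimal values.yaml sketch (only the option name and defaults come from the release note, the 45-second value is an arbitrary illustration):

```yaml
# Helm values fragment for the cilium chart (>= 1.16.4).
envoy:
  # Overrides Envoy's built-in initial fetch timeout (15s);
  # the chart default for this option is 30s.
  initialFetchTimeoutSeconds: 45
```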

Bugfixes:

CI Changes:

Misc Changes:

Other Changes:

Docker Manifests
cilium

quay.io/cilium/cilium:v1.16.4@​sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf
quay.io/cilium/cilium:stable@sha256:d55ec38938854133e06739b1af237932b9c4dd4e75e9b7b2ca3acc72540a44bf

clustermesh-apiserver

quay.io/cilium/clustermesh-apiserver:v1.16.4@​sha256:b41ba9c1b32e31308e17287a24a5b8e8ed0931f70d168087001c9679bc6c5dd2
quay.io/cilium/clustermesh-apiserver:stable@sha256:b41ba9c1b32e31308e17287a24a5b8e8ed0931f70d168087001c9679bc6c5dd2

docker-plugin

quay.io/cilium/docker-plugin:v1.16.4@​sha256:0e55f80fa875a1bcce87d87eae9a72b32c9db1fe9741c1f8d1bf308ef4b1193e
quay.io/cilium/docker-plugin:stable@sha256:0e55f80fa875a1bcce87d87eae9a72b32c9db1fe9741c1f8d1bf308ef4b1193e

hubble-relay

quay.io/cilium/hubble-relay:v1.16.4@​sha256:fb2c7d127a1c809f6ba23c05973f3dd00f6b6a48e4aee2da95db925a4f0351d2
quay.io/cilium/hubble-relay:stable@sha256:fb2c7d127a1c809f6ba23c05973f3dd00f6b6a48e4aee2da95db925a4f0351d2

operator-alibabacloud

quay.io/cilium/operator-alibabacloud:v1.16.4@​sha256:8d59d1c9043d0ccf40f3e16361e5c81e8044cb83695d32d750b0c352f690c686
quay.io/cilium/operator-alibabacloud:stable@sha256:8d59d1c9043d0ccf40f3e16361e5c81e8044cb83695d32d750b0c352f690c686

operator-aws

quay.io/cilium/operator-aws:v1.16.4@​sha256:355051bbebab73ea3067bb7f0c28cfd43b584d127570cb826f794f468e2d31be
quay.io/cilium/operator-aws:stable@sha256:355051bbebab73ea3067bb7f0c28cfd43b584d127570cb826f794f468e2d31be

operator-azure

quay.io/cilium/operator-azure:v1.16.4@​sha256:475594628af6d6a807d58fcb6b7d48f5a82e0289f54ae372972b1d0536c0b6de
quay.io/cilium/operator-azure:stable@sha256:475594628af6d6a807d58fcb6b7d48f5a82e0289f54ae372972b1d0536c0b6de

operator-generic

quay.io/cilium/operator-generic:v1.16.4@​sha256:c55a7cbe19fe0b6b28903a085334edb586a3201add9db56d2122c8485f7a51c5
quay.io/cilium/operator-generic:stable@sha256:c55a7cbe19fe0b6b28903a085334edb586a3201add9db56d2122c8485f7a51c5

operator

quay.io/cilium/operator:v1.16.4@​sha256:c77643984bc17e1a93d83b58fa976d7e72ad1485ce722257594f8596899fdfff
quay.io/cilium/operator:stable@sha256:c77643984bc17e1a93d83b58fa976d7e72ad1485ce722257594f8596899fdfff

v1.16.3: 1.16.3

Compare Source

Summary of Changes

Bugfixes:

CI Changes:

  • .github/lint-build-commits: fix workflow for push events (Backport PR #​35274, Upstream PR #​35264, @​aanm)
  • .github: create cache directories on cache miss (Backport PR #​35157, Upstream PR #​35088, @​aanm)
  • .github: do not push floating tag from PRs (Backport PR #​35230, Upstream PR [#​35227](h

Configuration

📅 Schedule: Branch creation - "on saturday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

github-actions bot commented Oct 28, 2023

--- kubernetes HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

+++ kubernetes HelmRelease: kube-system/cilium Deployment: kube-system/cilium-operator

@@ -19,22 +19,24 @@

     rollingUpdate:
       maxSurge: 25%
       maxUnavailable: 50%
     type: RollingUpdate
   template:
     metadata:
-      annotations: null
+      annotations:
+        prometheus.io/port: '9963'
+        prometheus.io/scrape: 'true'
       labels:
         io.cilium/app: operator
         name: cilium-operator
         app.kubernetes.io/part-of: cilium
         app.kubernetes.io/name: cilium-operator
     spec:
       containers:
       - name: cilium-operator
-        image: quay.io/cilium/operator-generic:v1.14.2@sha256:52f70250dea22e506959439a7c4ea31b10fe8375db62f5c27ab746e3a2af866d
+        image: quay.io/cilium/operator-generic:v1.16.3@sha256:6e2925ef47a1c76e183c48f95d4ce0d34a1e5e848252f910476c3e11ce1ec94b
         imagePullPolicy: IfNotPresent
         command:
         - cilium-operator-generic
         args:
         - --config-dir=/tmp/cilium/config-map
         - --debug=$(CILIUM_DEBUG)
@@ -52,12 +54,17 @@

         - name: CILIUM_DEBUG
           valueFrom:
             configMapKeyRef:
               key: debug
               name: cilium-config
               optional: true
+        ports:
+        - name: prometheus
+          containerPort: 9963
+          hostPort: 9963
+          protocol: TCP
         livenessProbe:
           httpGet:
             host: 127.0.0.1
             path: /healthz
             port: 9234
             scheme: HTTP
@@ -79,13 +86,12 @@

           mountPath: /tmp/cilium/config-map
           readOnly: true
         terminationMessagePolicy: FallbackToLogsOnError
       hostNetwork: true
       restartPolicy: Always
       priorityClassName: system-cluster-critical
-      serviceAccount: cilium-operator
       serviceAccountName: cilium-operator
       automountServiceAccountToken: true
       affinity:
         podAntiAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
           - labelSelector:
--- kubernetes HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-envoy-config

+++ kubernetes HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-envoy-config

@@ -0,0 +1,326 @@

+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: cilium-envoy-config
+  namespace: kube-system
+data:
+  bootstrap-config.json: |
+    {
+      "node": {
+        "id": "host~127.0.0.1~no-id~localdomain",
+        "cluster": "ingress-cluster"
+      },
+      "staticResources": {
+        "listeners": [
+          {
+            "name": "envoy-prometheus-metrics-listener",
+            "address": {
+              "socket_address": {
+                "address": "0.0.0.0",
+                "port_value": 9964
+              }
+            },
+            "filter_chains": [
+              {
+                "filters": [
+                  {
+                    "name": "envoy.filters.network.http_connection_manager",
+                    "typed_config": {
+                      "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
+                      "stat_prefix": "envoy-prometheus-metrics-listener",
+                      "route_config": {
+                        "virtual_hosts": [
+                          {
+                            "name": "prometheus_metrics_route",
+                            "domains": [
+                              "*"
+                            ],
+                            "routes": [
+                              {
+                                "name": "prometheus_metrics_route",
+                                "match": {
+                                  "prefix": "/metrics"
+                                },
+                                "route": {
+                                  "cluster": "/envoy-admin",
+                                  "prefix_rewrite": "/stats/prometheus"
+                                }
+                              }
+                            ]
+                          }
+                        ]
+                      },
+                      "http_filters": [
+                        {
+                          "name": "envoy.filters.http.router",
+                          "typed_config": {
+                            "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
+                          }
+                        }
+                      ],
+                      "stream_idle_timeout": "0s"
+                    }
+                  }
+                ]
+              }
+            ]
+          },
+          {
+            "name": "envoy-health-listener",
+            "address": {
+              "socket_address": {
+                "address": "127.0.0.1",
+                "port_value": 9878
+              }
+            },
+            "filter_chains": [
+              {
+                "filters": [
+                  {
+                    "name": "envoy.filters.network.http_connection_manager",
+                    "typed_config": {
+                      "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
+                      "stat_prefix": "envoy-health-listener",
+                      "route_config": {
+                        "virtual_hosts": [
+                          {
+                            "name": "health",
+                            "domains": [
+                              "*"
+                            ],
+                            "routes": [
+                              {
+                                "name": "health",
+                                "match": {
+                                  "prefix": "/healthz"
+                                },
+                                "route": {
+                                  "cluster": "/envoy-admin",
+                                  "prefix_rewrite": "/ready"
+                                }
+                              }
+                            ]
+                          }
+                        ]
+                      },
+                      "http_filters": [
+                        {
+                          "name": "envoy.filters.http.router",
+                          "typed_config": {
+                            "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
+                          }
+                        }
+                      ],
+                      "stream_idle_timeout": "0s"
+                    }
+                  }
+                ]
+              }
+            ]
+          }
+        ],
+        "clusters": [
+          {
+            "name": "ingress-cluster",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s"
+          },
+          {
+            "name": "egress-cluster-tls",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "upstreamHttpProtocolOptions": {},
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s",
+            "transportSocket": {
+              "name": "cilium.tls_wrapper",
+              "typedConfig": {
+                "@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
+              }
+            }
+          },
+          {
+            "name": "egress-cluster",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s"
+          },
+          {
+            "name": "ingress-cluster-tls",
+            "type": "ORIGINAL_DST",
+            "connectTimeout": "2s",
+            "lbPolicy": "CLUSTER_PROVIDED",
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "commonHttpProtocolOptions": {
+                  "idleTimeout": "60s",
+                  "maxConnectionDuration": "0s",
+                  "maxRequestsPerConnection": 0
+                },
+                "upstreamHttpProtocolOptions": {},
+                "useDownstreamProtocolConfig": {}
+              }
+            },
+            "cleanupInterval": "2.500s",
+            "transportSocket": {
+              "name": "cilium.tls_wrapper",
+              "typedConfig": {
+                "@type": "type.googleapis.com/cilium.UpstreamTlsWrapperContext"
+              }
+            }
+          },
+          {
+            "name": "xds-grpc-cilium",
+            "type": "STATIC",
+            "connectTimeout": "2s",
+            "loadAssignment": {
+              "clusterName": "xds-grpc-cilium",
+              "endpoints": [
+                {
+                  "lbEndpoints": [
+                    {
+                      "endpoint": {
+                        "address": {
+                          "pipe": {
+                            "path": "/var/run/cilium/envoy/sockets/xds.sock"
+                          }
+                        }
+                      }
+                    }
+                  ]
+                }
+              ]
+            },
+            "typedExtensionProtocolOptions": {
+              "envoy.extensions.upstreams.http.v3.HttpProtocolOptions": {
+                "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions",
+                "explicitHttpConfig": {
+                  "http2ProtocolOptions": {}
+                }
+              }
+            }
+          },
+          {
+            "name": "/envoy-admin",
+            "type": "STATIC",
+            "connectTimeout": "2s",
+            "loadAssignment": {
+              "clusterName": "/envoy-admin",
+              "endpoints": [
+                {
+                  "lbEndpoints": [
+                    {
+                      "endpoint": {
+                        "address": {
+                          "pipe": {
+                            "path": "/var/run/cilium/envoy/sockets/admin.sock"
+                          }
+                        }
+                      }
+                    }
+                  ]
+                }
+              ]
+            }
+          }
+        ]
+      },
+      "dynamicResources": {
+        "ldsConfig": {
+          "apiConfigSource": {
+            "apiType": "GRPC",
+            "transportApiVersion": "V3",
+            "grpcServices": [
+              {
+                "envoyGrpc": {
[Diff truncated by flux-local]
--- kubernetes HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

+++ kubernetes HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium-operator

@@ -15,12 +15,20 @@

   - list
   - watch
   - delete
 - apiGroups:
   - ''
   resources:
+  - configmaps
+  resourceNames:
+  - cilium-config
+  verbs:
+  - patch
+- apiGroups:
+  - ''
+  resources:
   - nodes
   verbs:
   - list
   - watch
 - apiGroups:
   - ''
@@ -116,12 +124,15 @@

   - update
 - apiGroups:
   - cilium.io
   resources:
   - ciliumendpointslices
   - ciliumenvoyconfigs
+  - ciliumbgppeerconfigs
+  - ciliumbgpadvertisements
+  - ciliumbgpnodeconfigs
   verbs:
   - create
   - update
   - get
   - list
   - watch
@@ -142,12 +153,17 @@

   - customresourcedefinitions
   verbs:
   - update
   resourceNames:
   - ciliumloadbalancerippools.cilium.io
   - ciliumbgppeeringpolicies.cilium.io
+  - ciliumbgpclusterconfigs.cilium.io
+  - ciliumbgppeerconfigs.cilium.io
+  - ciliumbgpadvertisements.cilium.io
+  - ciliumbgpnodeconfigs.cilium.io
+  - ciliumbgpnodeconfigoverrides.cilium.io
   - ciliumclusterwideenvoyconfigs.cilium.io
   - ciliumclusterwidenetworkpolicies.cilium.io
   - ciliumegressgatewaypolicies.cilium.io
   - ciliumendpoints.cilium.io
   - ciliumendpointslices.cilium.io
   - ciliumenvoyconfigs.cilium.io
@@ -162,12 +178,15 @@

   - ciliumpodippools.cilium.io
 - apiGroups:
   - cilium.io
   resources:
   - ciliumloadbalancerippools
   - ciliumpodippools
+  - ciliumbgppeeringpolicies
+  - ciliumbgpclusterconfigs
+  - ciliumbgpnodeconfigoverrides
   verbs:
   - get
   - list
   - watch
 - apiGroups:
   - cilium.io
--- kubernetes HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

+++ kubernetes HelmRelease: kube-system/cilium ConfigMap: kube-system/cilium-config

@@ -7,106 +7,132 @@

 data:
   identity-allocation-mode: crd
   identity-heartbeat-timeout: 30m0s
   identity-gc-interval: 15m0s
   cilium-endpoint-gc-interval: 5m0s
   nodes-gc-interval: 5m0s
-  skip-cnp-status-startup-clean: 'false'
   debug: 'false'
   debug-verbose: ''
   enable-policy: default
-  proxy-prometheus-port: '9964'
+  policy-cidr-match-mode: ''
+  operator-prometheus-serve-addr: :9963
+  enable-metrics: 'true'
   enable-ipv4: 'true'
   enable-ipv6: 'false'
   custom-cni-conf: 'false'
   enable-bpf-clock-probe: 'false'
   monitor-aggregation: medium
   monitor-aggregation-interval: 5s
   monitor-aggregation-flags: all
   bpf-map-dynamic-size-ratio: '0.0025'
   bpf-policy-map-max: '16384'
   bpf-lb-map-max: '65536'
   bpf-lb-external-clusterip: 'false'
+  bpf-events-drop-enabled: 'true'
+  bpf-events-policy-verdict-enabled: 'true'
+  bpf-events-trace-enabled: 'true'
   preallocate-bpf-maps: 'false'
-  sidecar-istio-proxy-image: cilium/istio_proxy
   cluster-name: default
   cluster-id: '0'
   routing-mode: tunnel
   tunnel-protocol: vxlan
+  service-no-backend-response: reject
   enable-l7-proxy: 'true'
   enable-ipv4-masquerade: 'true'
   enable-ipv4-big-tcp: 'false'
   enable-ipv6-big-tcp: 'false'
   enable-ipv6-masquerade: 'true'
+  enable-tcx: 'true'
+  datapath-mode: veth
+  enable-masquerade-to-route-source: 'false'
   enable-xt-socket-fallback: 'true'
   install-no-conntrack-iptables-rules: 'false'
   auto-direct-node-routes: 'false'
+  direct-routing-skip-unreachable: 'false'
   enable-local-redirect-policy: 'false'
+  enable-runtime-device-detection: 'true'
   kube-proxy-replacement: 'false'
   kube-proxy-replacement-healthz-bind-address: ''
   bpf-lb-sock: 'false'
+  bpf-lb-sock-terminate-pod-connections: 'false'
   enable-host-port: 'false'
   enable-external-ips: 'false'
   enable-node-port: 'false'
+  nodeport-addresses: ''
   enable-health-check-nodeport: 'true'
+  enable-health-check-loadbalancer-ip: 'false'
   node-port-bind-protection: 'true'
   enable-auto-protect-node-port-range: 'true'
+  bpf-lb-acceleration: disabled
   enable-svc-source-range-check: 'true'
   enable-l2-neigh-discovery: 'true'
   arping-refresh-period: 30s
+  k8s-require-ipv4-pod-cidr: 'false'
+  k8s-require-ipv6-pod-cidr: 'false'
   enable-k8s-networkpolicy: 'true'
   write-cni-conf-when-ready: /host/etc/cni/net.d/05-cilium.conflist
   cni-exclusive: 'true'
   cni-log-file: /var/run/cilium/cilium-cni.log
   enable-endpoint-health-checking: 'true'
   enable-health-checking: 'true'
   enable-well-known-identities: 'false'
-  enable-remote-node-identity: 'true'
+  enable-node-selector-labels: 'false'
   synchronize-k8s-nodes: 'true'
   operator-api-serve-addr: 127.0.0.1:9234
   enable-hubble: 'true'
   hubble-socket-path: /var/run/cilium/hubble.sock
+  hubble-export-file-max-size-mb: '10'
+  hubble-export-file-max-backups: '5'
   hubble-listen-address: :4244
   hubble-disable-tls: 'false'
   hubble-tls-cert-file: /var/lib/cilium/tls/hubble/server.crt
   hubble-tls-key-file: /var/lib/cilium/tls/hubble/server.key
   hubble-tls-client-ca-files: /var/lib/cilium/tls/hubble/client-ca.crt
   ipam: cluster-pool
   ipam-cilium-node-update-rate: 15s
   cluster-pool-ipv4-cidr: 10.0.0.0/8
   cluster-pool-ipv4-mask-size: '24'
-  disable-cnp-status-updates: 'true'
-  cnp-node-status-gc-interval: 0s
   egress-gateway-reconciliation-trigger-interval: 1s
   enable-vtep: 'false'
   vtep-endpoint: ''
   vtep-cidr: ''
   vtep-mask: ''
   vtep-mac: ''
-  enable-bgp-control-plane: 'false'
   procfs: /host/proc
   bpf-root: /sys/fs/bpf
   cgroup-root: /run/cilium/cgroupv2
   enable-k8s-terminating-endpoint: 'true'
   enable-sctp: 'false'
-  k8s-client-qps: '5'
-  k8s-client-burst: '10'
+  k8s-client-qps: '10'
+  k8s-client-burst: '20'
   remove-cilium-node-taints: 'true'
   set-cilium-node-taints: 'true'
   set-cilium-is-up-condition: 'true'
   unmanaged-pod-watcher-interval: '15'
+  dnsproxy-enable-transparent-mode: 'true'
+  dnsproxy-socket-linger-timeout: '10'
   tofqdns-dns-reject-response-code: refused
   tofqdns-enable-dns-compression: 'true'
   tofqdns-endpoint-max-ip-per-hostname: '50'
   tofqdns-idle-connection-grace-period: 0s
   tofqdns-max-deferred-connection-deletes: '10000'
   tofqdns-proxy-response-max-delay: 100ms
   agent-not-ready-taint-key: node.cilium.io/agent-not-ready
   mesh-auth-enabled: 'true'
   mesh-auth-queue-size: '1024'
   mesh-auth-rotated-identities-queue-size: '1024'
   mesh-auth-gc-interval: 5m0s
+  proxy-xff-num-trusted-hops-ingress: '0'
+  proxy-xff-num-trusted-hops-egress: '0'
   proxy-connect-timeout: '2'
   proxy-max-requests-per-connection: '0'
   proxy-max-connection-duration-seconds: '0'
-  external-envoy-proxy: 'false'
+  proxy-idle-timeout-seconds: '60'
+  external-envoy-proxy: 'true'
+  envoy-base-id: '0'
+  envoy-keep-cap-netbindservice: 'false'
+  max-connected-clusters: '255'
+  clustermesh-enable-endpoint-sync: 'false'
+  clustermesh-enable-mcs-api: 'false'
+  nat-map-stats-entries: '32'
+  nat-map-stats-interval: 30s
 
--- kubernetes HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium

+++ kubernetes HelmRelease: kube-system/cilium ClusterRole: kube-system/cilium

@@ -44,12 +44,15 @@

   - get
 - apiGroups:
   - cilium.io
   resources:
   - ciliumloadbalancerippools
   - ciliumbgppeeringpolicies
+  - ciliumbgpnodeconfigs
+  - ciliumbgpadvertisements
+  - ciliumbgppeerconfigs
   - ciliumclusterwideenvoyconfigs
   - ciliumclusterwidenetworkpolicies
   - ciliumegressgatewaypolicies
   - ciliumendpoints
   - ciliumendpointslices
   - ciliumenvoyconfigs
@@ -93,14 +96,13 @@

   verbs:
   - get
   - update
 - apiGroups:
   - cilium.io
   resources:
-  - ciliumnetworkpolicies/status
-  - ciliumclusterwidenetworkpolicies/status
   - ciliumendpoints/status
   - ciliumendpoints
   - ciliuml2announcementpolicies/status
+  - ciliumbgpnodeconfigs/status
   verbs:
   - patch
 
--- kubernetes HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium-envoy

+++ kubernetes HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium-envoy

@@ -0,0 +1,165 @@

+---
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: cilium-envoy
+  namespace: kube-system
+  labels:
+    k8s-app: cilium-envoy
+    app.kubernetes.io/part-of: cilium
+    app.kubernetes.io/name: cilium-envoy
+    name: cilium-envoy
+spec:
+  selector:
+    matchLabels:
+      k8s-app: cilium-envoy
+  updateStrategy:
+    rollingUpdate:
+      maxUnavailable: 2
+    type: RollingUpdate
+  template:
+    metadata:
+      annotations: null
+      labels:
+        k8s-app: cilium-envoy
+        name: cilium-envoy
+        app.kubernetes.io/name: cilium-envoy
+        app.kubernetes.io/part-of: cilium
+    spec:
+      securityContext:
+        appArmorProfile:
+          type: Unconfined
+      containers:
+      - name: cilium-envoy
+        image: quay.io/cilium/cilium-envoy:v1.29.9-1728346947-0d05e48bfbb8c4737ec40d5781d970a550ed2bbd@sha256:42614a44e508f70d03a04470df5f61e3cffd22462471a0be0544cf116f2c50ba
+        imagePullPolicy: IfNotPresent
+        command:
+        - /usr/bin/cilium-envoy-starter
+        args:
+        - --
+        - -c /var/run/cilium/envoy/bootstrap-config.json
+        - --base-id 0
+        - --log-level info
+        - --log-format [%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v
+        startupProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          failureThreshold: 105
+          periodSeconds: 2
+          successThreshold: 1
+          initialDelaySeconds: 5
+        livenessProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          periodSeconds: 30
+          successThreshold: 1
+          failureThreshold: 10
+          timeoutSeconds: 5
+        readinessProbe:
+          httpGet:
+            host: 127.0.0.1
+            path: /healthz
+            port: 9878
+            scheme: HTTP
+          periodSeconds: 30
+          successThreshold: 1
+          failureThreshold: 3
+          timeoutSeconds: 5
+        env:
+        - name: K8S_NODE_NAME
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: spec.nodeName
+        - name: CILIUM_K8S_NAMESPACE
+          valueFrom:
+            fieldRef:
+              apiVersion: v1
+              fieldPath: metadata.namespace
+        ports:
+        - name: envoy-metrics
+          containerPort: 9964
+          hostPort: 9964
+          protocol: TCP
+        securityContext:
+          seLinuxOptions:
+            level: s0
+            type: spc_t
+          capabilities:
+            add:
+            - NET_ADMIN
+            - SYS_ADMIN
+            drop:
+            - ALL
+        terminationMessagePolicy: FallbackToLogsOnError
+        volumeMounts:
+        - name: envoy-sockets
+          mountPath: /var/run/cilium/envoy/sockets
+          readOnly: false
+        - name: envoy-artifacts
+          mountPath: /var/run/cilium/envoy/artifacts
+          readOnly: true
+        - name: envoy-config
+          mountPath: /var/run/cilium/envoy/
+          readOnly: true
+        - name: bpf-maps
+          mountPath: /sys/fs/bpf
+          mountPropagation: HostToContainer
+      restartPolicy: Always
+      priorityClassName: system-node-critical
+      serviceAccountName: cilium-envoy
+      automountServiceAccountToken: true
+      terminationGracePeriodSeconds: 1
+      hostNetwork: true
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: cilium.io/no-schedule
+                operator: NotIn
+                values:
+                - 'true'
+        podAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchLabels:
+                k8s-app: cilium
+            topologyKey: kubernetes.io/hostname
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchLabels:
+                k8s-app: cilium-envoy
+            topologyKey: kubernetes.io/hostname
+      nodeSelector:
+        kubernetes.io/os: linux
+      tolerations:
+      - operator: Exists
+      volumes:
+      - name: envoy-sockets
+        hostPath:
+          path: /var/run/cilium/envoy/sockets
+          type: DirectoryOrCreate
+      - name: envoy-artifacts
+        hostPath:
+          path: /var/run/cilium/envoy/artifacts
+          type: DirectoryOrCreate
+      - name: envoy-config
+        configMap:
+          name: cilium-envoy-config
+          defaultMode: 256
+          items:
+          - key: bootstrap-config.json
+            path: bootstrap-config.json
+      - name: bpf-maps
+        hostPath:
+          path: /sys/fs/bpf
+          type: DirectoryOrCreate
+
--- kubernetes HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

+++ kubernetes HelmRelease: kube-system/cilium DaemonSet: kube-system/cilium

@@ -15,25 +15,24 @@

   updateStrategy:
     rollingUpdate:
       maxUnavailable: 2
     type: RollingUpdate
   template:
     metadata:
-      annotations:
-        container.apparmor.security.beta.kubernetes.io/cilium-agent: unconfined
-        container.apparmor.security.beta.kubernetes.io/clean-cilium-state: unconfined
-        container.apparmor.security.beta.kubernetes.io/mount-cgroup: unconfined
-        container.apparmor.security.beta.kubernetes.io/apply-sysctl-overwrites: unconfined
+      annotations: null
       labels:
         k8s-app: cilium
         app.kubernetes.io/name: cilium-agent
         app.kubernetes.io/part-of: cilium
     spec:
+      securityContext:
+        appArmorProfile:
+          type: Unconfined
       containers:
       - name: cilium-agent
-        image: quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35
+        image: quay.io/cilium/cilium:v1.16.3@sha256:62d2a09bbef840a46099ac4c69421c90f84f28d018d479749049011329aa7f28
         imagePullPolicy: IfNotPresent
         command:
         - cilium-agent
         args:
         - --config-dir=/tmp/cilium/config-map
         startupProbe:
@@ -45,12 +44,13 @@

             httpHeaders:
             - name: brief
               value: 'true'
           failureThreshold: 105
           periodSeconds: 2
           successThreshold: 1
+          initialDelaySeconds: 5
         livenessProbe:
           httpGet:
             host: 127.0.0.1
             path: /healthz
             port: 9879
             scheme: HTTP
@@ -84,13 +84,43 @@

           valueFrom:
             fieldRef:
               apiVersion: v1
               fieldPath: metadata.namespace
         - name: CILIUM_CLUSTERMESH_CONFIG
           value: /var/lib/cilium/clustermesh/
+        - name: GOMEMLIMIT
+          valueFrom:
+            resourceFieldRef:
+              resource: limits.memory
+              divisor: '1'
         lifecycle:
+          postStart:
+            exec:
+              command:
+              - bash
+              - -c
+              - |
+                set -o errexit
+                set -o pipefail
+                set -o nounset
+
+                # When running in AWS ENI mode, it's likely that 'aws-node' has
+                # had a chance to install SNAT iptables rules. These can result
+                # in dropped traffic, so we should attempt to remove them.
+                # We do it using a 'postStart' hook since this may need to run
+                # for nodes which might have already been init'ed but may still
+                # have dangling rules. This is safe because there are no
+                # dependencies on anything that is part of the startup script
+                # itself, and can be safely run multiple times per node (e.g. in
+                # case of a restart).
+                if [[ "$(iptables-save | grep -E -c 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN')" != "0" ]];
+                then
+                    echo 'Deleting iptables rules created by the AWS CNI VPC plugin'
+                    iptables-save | grep -E -v 'AWS-SNAT-CHAIN|AWS-CONNMARK-CHAIN' | iptables-restore
+                fi
+                echo 'Done!'
           preStop:
             exec:
               command:
               - /cni-uninstall.sh
         securityContext:
           seLinuxOptions:
@@ -111,12 +141,15 @@

             - SETGID
             - SETUID
             drop:
             - ALL
         terminationMessagePolicy: FallbackToLogsOnError
         volumeMounts:
+        - name: envoy-sockets
+          mountPath: /var/run/cilium/envoy/sockets
+          readOnly: false
         - mountPath: /host/proc/sys/net
           name: host-proc-sys-net
         - mountPath: /host/proc/sys/kernel
           name: host-proc-sys-kernel
         - name: bpf-maps
           mountPath: /sys/fs/bpf
@@ -137,16 +170,16 @@

           mountPath: /var/lib/cilium/tls/hubble
           readOnly: true
         - name: tmp
           mountPath: /tmp
       initContainers:
       - name: config
-        image: quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35
-        imagePullPolicy: IfNotPresent
-        command:
-        - cilium
+        image: quay.io/cilium/cilium:v1.16.3@sha256:62d2a09bbef840a46099ac4c69421c90f84f28d018d479749049011329aa7f28
+        imagePullPolicy: IfNotPresent
+        command:
+        - cilium-dbg
         - build-config
         env:
         - name: K8S_NODE_NAME
           valueFrom:
             fieldRef:
               apiVersion: v1
@@ -158,13 +191,13 @@

               fieldPath: metadata.namespace
         volumeMounts:
         - name: tmp
           mountPath: /tmp
         terminationMessagePolicy: FallbackToLogsOnError
       - name: mount-cgroup
-        image: quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35
+        image: quay.io/cilium/cilium:v1.16.3@sha256:62d2a09bbef840a46099ac4c69421c90f84f28d018d479749049011329aa7f28
         imagePullPolicy: IfNotPresent
         env:
         - name: CGROUP_ROOT
           value: /run/cilium/cgroupv2
         - name: BIN_PATH
           value: /opt/cni/bin
@@ -190,13 +223,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: apply-sysctl-overwrites
-        image: quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35
+        image: quay.io/cilium/cilium:v1.16.3@sha256:62d2a09bbef840a46099ac4c69421c90f84f28d018d479749049011329aa7f28
         imagePullPolicy: IfNotPresent
         env:
         - name: BIN_PATH
           value: /opt/cni/bin
         command:
         - sh
@@ -220,13 +253,13 @@

             - SYS_ADMIN
             - SYS_CHROOT
             - SYS_PTRACE
             drop:
             - ALL
       - name: mount-bpf-fs
-        image: quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35
+        image: quay.io/cilium/cilium:v1.16.3@sha256:62d2a09bbef840a46099ac4c69421c90f84f28d018d479749049011329aa7f28
         imagePullPolicy: IfNotPresent
         args:
         - mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
         command:
         - /bin/bash
         - -c
@@ -236,13 +269,13 @@

           privileged: true
         volumeMounts:
         - name: bpf-maps
           mountPath: /sys/fs/bpf
           mountPropagation: Bidirectional
       - name: clean-cilium-state
-        image: quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35
+        image: quay.io/cilium/cilium:v1.16.3@sha256:62d2a09bbef840a46099ac4c69421c90f84f28d018d479749049011329aa7f28
         imagePullPolicy: IfNotPresent
         command:
         - /init-container.sh
         env:
         - name: CILIUM_ALL_STATE
           valueFrom:
@@ -252,12 +285,18 @@

               optional: true
         - name: CILIUM_BPF_STATE
           valueFrom:
             configMapKeyRef:
               name: cilium-config
               key: clean-cilium-bpf-state
+              optional: true
+        - name: WRITE_CNI_CONF_WHEN_READY
+          valueFrom:
+            configMapKeyRef:
+              name: cilium-config
+              key: write-cni-conf-when-ready
               optional: true
         terminationMessagePolicy: FallbackToLogsOnError
         securityContext:
           seLinuxOptions:
             level: s0
             type: spc_t
@@ -274,18 +313,14 @@

           mountPath: /sys/fs/bpf
         - name: cilium-cgroup
           mountPath: /run/cilium/cgroupv2
           mountPropagation: HostToContainer
         - name: cilium-run
           mountPath: /var/run/cilium
-        resources:
-          requests:
-            cpu: 100m
-            memory: 100Mi
       - name: install-cni-binaries
-        image: quay.io/cilium/cilium:v1.14.2@sha256:6263f3a3d5d63b267b538298dbeb5ae87da3efacf09a2c620446c873ba807d35
+        image: quay.io/cilium/cilium:v1.16.3@sha256:62d2a09bbef840a46099ac4c69421c90f84f28d018d479749049011329aa7f28
         imagePullPolicy: IfNotPresent
         command:
         - /install-plugin.sh
         resources:
           requests:
             cpu: 100m
@@ -300,13 +335,12 @@

         terminationMessagePolicy: FallbackToLogsOnError
         volumeMounts:
         - name: cni-path
           mountPath: /host/opt/cni/bin
       restartPolicy: Always
       priorityClassName: system-node-critical
-      serviceAccount: cilium
       serviceAccountName: cilium
       automountServiceAccountToken: true
       terminationGracePeriodSeconds: 1
       hostNetwork: true
       affinity:
         podAntiAffinity:
@@ -350,12 +384,16 @@

         hostPath:
           path: /lib/modules
       - name: xtables-lock
         hostPath:
           path: /run/xtables.lock
           type: FileOrCreate
+      - name: envoy-sockets
+        hostPath:
+          path: /var/run/cilium/envoy/sockets
+          type: DirectoryOrCreate
       - name: clustermesh-secrets
         projected:
           defaultMode: 256
           sources:
           - secret:
               name: cilium-clustermesh
@@ -367,12 +405,22 @@

               - key: tls.key
                 path: common-etcd-client.key
               - key: tls.crt
                 path: common-etcd-client.crt
               - key: ca.crt
                 path: common-etcd-client-ca.crt
+          - secret:
+              name: clustermesh-apiserver-local-cert
+              optional: true
+              items:
+              - key: tls.key
+                path: local-etcd-client.key
+              - key: tls.crt
+                path: local-etcd-client.crt
+              - key: ca.crt
+                path: local-etcd-client-ca.crt
       - name: host-proc-sys-net
         hostPath:
           path: /proc/sys/net
           type: Directory
       - name: host-proc-sys-kernel
         hostPath:
--- kubernetes HelmRelease: kube-system/cilium ServiceAccount: kube-system/cilium-envoy

+++ kubernetes HelmRelease: kube-system/cilium ServiceAccount: kube-system/cilium-envoy

@@ -0,0 +1,7 @@

+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: cilium-envoy
+  namespace: kube-system
+
--- kubernetes HelmRelease: kube-system/cilium Service: kube-system/cilium-envoy

+++ kubernetes HelmRelease: kube-system/cilium Service: kube-system/cilium-envoy

@@ -0,0 +1,25 @@

+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: cilium-envoy
+  namespace: kube-system
+  annotations:
+    prometheus.io/scrape: 'true'
+    prometheus.io/port: '9964'
+  labels:
+    k8s-app: cilium-envoy
+    app.kubernetes.io/name: cilium-envoy
+    app.kubernetes.io/part-of: cilium
+    io.cilium/app: proxy
+spec:
+  clusterIP: None
+  type: ClusterIP
+  selector:
+    k8s-app: cilium-envoy
+  ports:
+  - name: envoy-metrics
+    port: 9964
+    protocol: TCP
+    targetPort: envoy-metrics
+

github-actions bot commented Oct 28, 2023

--- kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cluster-apps-cilium HelmRelease: kube-system/cilium

+++ kubernetes/apps/kube-system/cilium/app Kustomization: flux-system/cluster-apps-cilium HelmRelease: kube-system/cilium

@@ -9,13 +9,13 @@

     spec:
       chart: cilium
       sourceRef:
         kind: HelmRepository
         name: cilium
         namespace: flux-system
-      version: 1.14.2
+      version: 1.16.3
   install:
     remediation:
       retries: 3
   interval: 30m
   maxHistory: 2
   uninstall:

renovate bot changed the title from "fix(helm): update chart cilium to 1.14.3" to "fix(helm): update chart cilium to 1.14.4" on Nov 13, 2023
renovate bot force-pushed the renovate/cilium-1.x branch from 9dc14cb to 9130751 on November 13, 2023 21:41
renovate bot force-pushed the renovate/cilium-1.x branch from 9130751 to cad0251 on December 13, 2023 23:41
renovate bot changed the title from "fix(helm): update chart cilium to 1.14.4" to "fix(helm): update chart cilium to 1.14.5" on Dec 13, 2023
renovate bot force-pushed the renovate/cilium-1.x branch from cad0251 to 5ff2ef8 on January 18, 2024 16:27
renovate bot changed the title from "fix(helm): update chart cilium to 1.14.5" to "fix(helm): update chart cilium to 1.14.6" on Jan 18, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from 5ff2ef8 to f2897b2 on January 31, 2024 21:58
renovate bot changed the title from "fix(helm): update chart cilium to 1.14.6" to "feat(helm): update chart cilium to 1.15.0" on Jan 31, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from f2897b2 to 3e2c784 on February 15, 2024 01:28
renovate bot changed the title from "feat(helm): update chart cilium to 1.15.0" to "feat(helm): update chart cilium to 1.15.1" on Feb 15, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from 3e2c784 to f405217 on March 13, 2024 17:45
renovate bot changed the title from "feat(helm): update chart cilium to 1.15.1" to "feat(helm): update chart cilium to 1.15.2" on Mar 13, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from f405217 to 496ffe4 on March 26, 2024 20:28
renovate bot changed the title from "feat(helm): update chart cilium to 1.15.2" to "feat(helm): update chart cilium to 1.15.3" on Mar 26, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from 496ffe4 to 0d90405 on April 12, 2024 04:15
renovate bot changed the title from "feat(helm): update chart cilium to 1.15.3" to "feat(helm): update chart cilium to 1.15.4" on Apr 12, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from 0d90405 to 9adebe7 on May 15, 2024 20:30
renovate bot changed the title from "feat(helm): update chart cilium to 1.15.4" to "feat(helm): update chart cilium to 1.15.5" on May 15, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from 9adebe7 to c566aa7 on June 10, 2024 19:52
renovate bot changed the title from "feat(helm): update chart cilium to 1.15.5" to "feat(helm): update chart cilium to 1.15.6" on Jun 10, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from c566aa7 to 9fa2161 on July 11, 2024 20:26
renovate bot changed the title from "feat(helm): update chart cilium to 1.15.6" to "feat(helm): update chart cilium to 1.15.7" on Jul 11, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from 9fa2161 to d143ba6 on July 24, 2024 17:26
renovate bot changed the title from "feat(helm): update chart cilium to 1.15.7" to "feat(helm): update chart cilium to 1.16.0" on Jul 24, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from d143ba6 to ca825a8 on August 14, 2024 15:13
renovate bot changed the title from "feat(helm): update chart cilium to 1.16.0" to "feat(helm): update chart cilium to 1.16.1" on Aug 14, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from ca825a8 to 5871eb4 on September 26, 2024 14:43
renovate bot changed the title from "feat(helm): update chart cilium to 1.16.1" to "feat(helm): update chart cilium to 1.16.2" on Sep 26, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from 5871eb4 to b44c53e on October 15, 2024 14:04
renovate bot changed the title from "feat(helm): update chart cilium to 1.16.2" to "feat(helm): update chart cilium to 1.16.3" on Oct 15, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from b44c53e to 8a23a8a on November 20, 2024 10:58
renovate bot changed the title from "feat(helm): update chart cilium to 1.16.3" to "feat(helm): update chart cilium to 1.16.4" on Nov 20, 2024
renovate bot force-pushed the renovate/cilium-1.x branch from 8a23a8a to c0f5174 on December 18, 2024 01:23
renovate bot changed the title from "feat(helm): update chart cilium to 1.16.4" to "feat(helm): update chart cilium to 1.16.5" on Dec 18, 2024