
ArgoCD installation timing out when using helm chart with Istio sidecar injection enabled. #3053

Open
angelosanramon opened this issue Nov 27, 2024 · 0 comments
Labels
argo-cd, bug

angelosanramon commented Nov 27, 2024

Describe the bug

I am trying to install ArgoCD using the latest Helm chart in a namespace with the istio-injection=enabled label. The installation times out. The install works fine without the istio-injection=enabled label.
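
For context: this looks like the known interaction between Kubernetes Jobs and Istio sidecar injection. The Job's main container exits, but the injected istio-proxy sidecar keeps running, so the pod never reaches Completed and anything waiting on the Job (here, helm --wait on the chart's pre-install hook) times out. A minimal sketch of the usual per-pod opt-out, using Istio's documented sidecar.istio.io/inject switch on the pod template of a hypothetical Job (not one of the chart's own resources):

apiVersion: batch/v1
kind: Job
metadata:
  name: example-job          # hypothetical name, for illustration only
spec:
  template:
    metadata:
      labels:
        sidecar.istio.io/inject: "false"   # skip sidecar injection for this pod
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo done"]

With that label the pod runs only its own container, so the Job completes even in an injected namespace.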

Related helm chart

argo-cd

Helm chart version

7.7.5

To Reproduce

  1. helm repo add argo https://argoproj.github.io/argo-helm
  2. kubectl create ns argocd
  3. kubectl label ns argocd istio-injection=enabled
  4. helm upgrade argocd argo/argo-cd --install --namespace argocd --wait
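
A possible chart-level workaround for the steps above, assuming the chart exposes redisSecretInit.podLabels (verify against values.yaml for chart 7.7.5 before relying on it), would be to exclude only the hook Job's pod from injection:

# values.yaml (sketch)
redisSecretInit:
  podLabels:
    sidecar.istio.io/inject: "false"

Passing this with -f values.yaml on the helm upgrade command above would keep the long-running Argo CD components in the mesh while letting the one-shot Job complete.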

Expected behavior

ArgoCD and all its components should be installed successfully.

Screenshots

❯ helm upgrade argocd argo/argo-cd --install --namespace argocd --wait
Release "argocd" does not exist. Installing it now.
Error: failed pre-install: 1 error occurred:
	* timed out waiting for the condition

❯ kubectl get pod -n argocd
NAME                             READY   STATUS     RESTARTS   AGE
argocd-redis-secret-init-rk74l   1/2     NotReady   1          7m12s

❯ kubectl logs -n argocd argocd-redis-secret-init-rk74l
Checking for initial Redis password in secret argocd/argocd-redis at key auth.
Argo CD Redis secret state confirmed: secret name argocd-redis.
Password secret is configured properly.

❯ kubectl describe pods -n argocd argocd-redis-secret-init-rk74l
Name:             argocd-redis-secret-init-rk74l
Namespace:        argocd
Priority:         0
Service Account:  argocd-redis-secret-init
Node:             minikube/192.168.64.10
Start Time:       Tue, 26 Nov 2024 17:42:36 -0800
Labels:           app.kubernetes.io/component=redis-secret-init
                  app.kubernetes.io/instance=argocd
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=argocd-redis-secret-init
                  app.kubernetes.io/part-of=argocd
                  app.kubernetes.io/version=v2.13.1
                  batch.kubernetes.io/controller-uid=4688d49a-8a61-4acb-bf0a-db04504e2f31
                  batch.kubernetes.io/job-name=argocd-redis-secret-init
                  controller-uid=4688d49a-8a61-4acb-bf0a-db04504e2f31
                  helm.sh/chart=argo-cd-7.7.5
                  job-name=argocd-redis-secret-init
                  security.istio.io/tlsMode=istio
                  service.istio.io/canonical-name=argocd-redis-secret-init
                  service.istio.io/canonical-revision=v2.13.1
Annotations:      istio.io/rev: default
                  kubectl.kubernetes.io/default-container: secret-init
                  kubectl.kubernetes.io/default-logs-container: secret-init
                  prometheus.io/path: /stats/prometheus
                  prometheus.io/port: 15020
                  prometheus.io/scrape: true
                  sidecar.istio.io/status:
                    {"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","istio-env...
Status:           Running
IP:               10.244.0.59
IPs:
  IP:           10.244.0.59
Controlled By:  Job/argocd-redis-secret-init
Init Containers:
  istio-init:
    Container ID:  docker://21b80cb0f3b10af8abe100368f035b240c378e3e681b3c4bbc83b9b980fd65a8
    Image:         docker.io/istio/proxyv2:1.24.0
    Image ID:      docker-pullable://istio/proxyv2@sha256:ee6565e57319e01b5e45b929335eb9dc3d4b30d531b4652467e6939ae81b41f7
    Port:          <none>
    Host Port:     <none>
    Args:
      istio-iptables
      -p
      15001
      -z
      15006
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      *
      -d
      15090,15021,15020
      --log_output_level=default:info
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 26 Nov 2024 17:42:37 -0800
      Finished:     Tue, 26 Nov 2024 17:42:37 -0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8s74t (ro)
Containers:
  secret-init:
    Container ID:    docker://b9dfcb333b202728b016e5443e68c61f9a6ce7cee24266776c9a4b192f95bb27
    Image:           quay.io/argoproj/argocd:v2.13.1
    Image ID:        docker-pullable://quay.io/argoproj/argocd@sha256:19608c266cc41e4986d9b1c2b79ea4c42bb9430269eefc5005e9d65be4d22868
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Command:
      argocd
      admin
      redis-initial-password
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 26 Nov 2024 17:42:39 -0800
      Finished:     Tue, 26 Nov 2024 17:42:39 -0800
    Ready:          False
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8s74t (ro)
  istio-proxy:
    Container ID:  docker://22bea27c4446b11f6fb7f52ee79a94e60ceb63ee3f2d51b575ae93219bd0035f
    Image:         docker.io/istio/proxyv2:1.24.0
    Image ID:      docker-pullable://istio/proxyv2@sha256:ee6565e57319e01b5e45b929335eb9dc3d4b30d531b4652467e6939ae81b41f7
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
    State:          Running
      Started:      Tue, 26 Nov 2024 17:42:38 -0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:      100m
      memory:   128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=0s timeout=3s period=15s #success=1 #failure=4
    Startup:    http-get http://:15021/healthz/ready delay=0s timeout=3s period=1s #success=1 #failure=600
    Environment:
      PILOT_CERT_PROVIDER:           istiod
      CA_ADDR:                       istiod.istio-system.svc:15012
      POD_NAME:                      argocd-redis-secret-init-rk74l (v1:metadata.name)
      POD_NAMESPACE:                 argocd (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      SERVICE_ACCOUNT:                (v1:spec.serviceAccountName)
      HOST_IP:                        (v1:status.hostIP)
      ISTIO_CPU_LIMIT:               2 (limits.cpu)
      PROXY_CONFIG:                  {}

      ISTIO_META_POD_PORTS:          [
                                     ]
      ISTIO_META_APP_CONTAINERS:     secret-init
      GOMEMLIMIT:                    1073741824 (limits.memory)
      GOMAXPROCS:                    2 (limits.cpu)
      ISTIO_META_CLUSTER_ID:         Kubernetes
      ISTIO_META_NODE_NAME:           (v1:spec.nodeName)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_META_WORKLOAD_NAME:      argocd-redis-secret-init
      ISTIO_META_OWNER:              kubernetes://apis/batch/v1/namespaces/argocd/jobs/argocd-redis-secret-init
      ISTIO_META_MESH_ID:            cluster.local
      TRUST_DOMAIN:                  cluster.local
    Mounts:
      /etc/istio/pod from istio-podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/lib/istio/data from istio-data (rw)
      /var/run/secrets/credential-uds from credential-socket (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8s74t (ro)
      /var/run/secrets/tokens from istio-token (rw)
      /var/run/secrets/workload-spiffe-credentials from workload-certs (rw)
      /var/run/secrets/workload-spiffe-uds from workload-socket (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  workload-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  credential-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  workload-certs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  istio-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  istio-podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  43200
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  kube-api-access-8s74t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age                    From               Message
  ----    ------     ----                   ----               -------
  Normal  Scheduled  8m40s                  default-scheduler  Successfully assigned argocd/argocd-redis-secret-init-rk74l to minikube
  Normal  Pulled     8m40s                  kubelet            Container image "docker.io/istio/proxyv2:1.24.0" already present on machine
  Normal  Created    8m40s                  kubelet            Created container istio-init
  Normal  Started    8m40s                  kubelet            Started container istio-init
  Normal  Pulled     8m39s                  kubelet            Container image "docker.io/istio/proxyv2:1.24.0" already present on machine
  Normal  Created    8m39s                  kubelet            Created container istio-proxy
  Normal  Started    8m39s                  kubelet            Started container istio-proxy
  Normal  Pulled     8m38s (x2 over 8m39s)  kubelet            Container image "quay.io/argoproj/argocd:v2.13.1" already present on machine
  Normal  Created    8m38s (x2 over 8m39s)  kubelet            Created container secret-init
  Normal  Started    8m38s (x2 over 8m39s)  kubelet            Started container secret-init
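
The describe output confirms the diagnosis: secret-init is Terminated with Reason: Completed and exit code 0, yet the pod stays 1/2 NotReady because istio-proxy is still Running, so the Job never succeeds. Besides opting the Job out of injection, a mesh-wide sketch is Istio's native-sidecar mode, which runs the proxy as a restartable init container so Job pods can terminate cleanly. This assumes Kubernetes 1.28+ and an Istio release that supports the ENABLE_NATIVE_SIDECARS pilot feature flag; verify both before use:

# IstioOperator overlay (sketch; the feature flag is an assumption to verify)
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    pilot:
      env:
        ENABLE_NATIVE_SIDECARS: "true"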

❯ kubectl get pod -n argocd argocd-redis-secret-init-rk74l -oyaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    istio.io/rev: default
    kubectl.kubernetes.io/default-container: secret-init
    kubectl.kubernetes.io/default-logs-container: secret-init
    prometheus.io/path: /stats/prometheus
    prometheus.io/port: "15020"
    prometheus.io/scrape: "true"
    sidecar.istio.io/status: '{"initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["workload-socket","credential-socket","workload-certs","istio-envoy","istio-data","istio-podinfo","istio-token","istiod-ca-cert"],"imagePullSecrets":null,"revision":"default"}'
  creationTimestamp: "2024-11-27T01:42:36Z"
  finalizers:
  - batch.kubernetes.io/job-tracking
  generateName: argocd-redis-secret-init-
  labels:
    app.kubernetes.io/component: redis-secret-init
    app.kubernetes.io/instance: argocd
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: argocd-redis-secret-init
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/version: v2.13.1
    batch.kubernetes.io/controller-uid: 4688d49a-8a61-4acb-bf0a-db04504e2f31
    batch.kubernetes.io/job-name: argocd-redis-secret-init
    controller-uid: 4688d49a-8a61-4acb-bf0a-db04504e2f31
    helm.sh/chart: argo-cd-7.7.5
    job-name: argocd-redis-secret-init
    security.istio.io/tlsMode: istio
    service.istio.io/canonical-name: argocd-redis-secret-init
    service.istio.io/canonical-revision: v2.13.1
  name: argocd-redis-secret-init-rk74l
  namespace: argocd
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: argocd-redis-secret-init
    uid: 4688d49a-8a61-4acb-bf0a-db04504e2f31
  resourceVersion: "27122"
  uid: d7345271-9f76-42e5-90a0-d38594950954
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: argocd-redis-secret-init
          topologyKey: kubernetes.io/hostname
        weight: 100
  containers:
  - command:
    - argocd
    - admin
    - redis-initial-password
    image: quay.io/argoproj/argocd:v2.13.1
    imagePullPolicy: IfNotPresent
    name: secret-init
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8s74t
      readOnly: true
  - args:
    - proxy
    - sidecar
    - --domain
    - $(POD_NAMESPACE).svc.cluster.local
    - --proxyLogLevel=warning
    - --proxyComponentLogLevel=misc:error
    - --log_output_level=default:info
    env:
    - name: PILOT_CERT_PROVIDER
      value: istiod
    - name: CA_ADDR
      value: istiod.istio-system.svc:15012
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.serviceAccountName
    - name: HOST_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.hostIP
    - name: ISTIO_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          divisor: "0"
          resource: limits.cpu
    - name: PROXY_CONFIG
      value: |
        {}
    - name: ISTIO_META_POD_PORTS
      value: |-
        [
        ]
    - name: ISTIO_META_APP_CONTAINERS
      value: secret-init
    - name: GOMEMLIMIT
      valueFrom:
        resourceFieldRef:
          divisor: "0"
          resource: limits.memory
    - name: GOMAXPROCS
      valueFrom:
        resourceFieldRef:
          divisor: "0"
          resource: limits.cpu
    - name: ISTIO_META_CLUSTER_ID
      value: Kubernetes
    - name: ISTIO_META_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: ISTIO_META_INTERCEPTION_MODE
      value: REDIRECT
    - name: ISTIO_META_WORKLOAD_NAME
      value: argocd-redis-secret-init
    - name: ISTIO_META_OWNER
      value: kubernetes://apis/batch/v1/namespaces/argocd/jobs/argocd-redis-secret-init
    - name: ISTIO_META_MESH_ID
      value: cluster.local
    - name: TRUST_DOMAIN
      value: cluster.local
    image: docker.io/istio/proxyv2:1.24.0
    imagePullPolicy: IfNotPresent
    name: istio-proxy
    ports:
    - containerPort: 15090
      name: http-envoy-prom
      protocol: TCP
    readinessProbe:
      failureThreshold: 4
      httpGet:
        path: /healthz/ready
        port: 15021
        scheme: HTTP
      periodSeconds: 15
      successThreshold: 1
      timeoutSeconds: 3
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 1337
      runAsNonRoot: true
      runAsUser: 1337
    startupProbe:
      failureThreshold: 600
      httpGet:
        path: /healthz/ready
        port: 15021
        scheme: HTTP
      periodSeconds: 1
      successThreshold: 1
      timeoutSeconds: 3
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/workload-spiffe-uds
      name: workload-socket
    - mountPath: /var/run/secrets/credential-uds
      name: credential-socket
    - mountPath: /var/run/secrets/workload-spiffe-credentials
      name: workload-certs
    - mountPath: /var/run/secrets/istio
      name: istiod-ca-cert
    - mountPath: /var/lib/istio/data
      name: istio-data
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /var/run/secrets/tokens
      name: istio-token
    - mountPath: /etc/istio/pod
      name: istio-podinfo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8s74t
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers:
  - args:
    - istio-iptables
    - -p
    - "15001"
    - -z
    - "15006"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - '*'
    - -x
    - ""
    - -b
    - '*'
    - -d
    - 15090,15021,15020
    - --log_output_level=default:info
    image: docker.io/istio/proxyv2:1.24.0
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources:
      limits:
        cpu: "2"
        memory: 1Gi
      requests:
        cpu: 100m
        memory: 128Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: false
      runAsGroup: 0
      runAsNonRoot: false
      runAsUser: 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8s74t
      readOnly: true
  nodeName: minikube
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: OnFailure
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: argocd-redis-secret-init
  serviceAccountName: argocd-redis-secret-init
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: workload-socket
  - emptyDir: {}
    name: credential-socket
  - emptyDir: {}
    name: workload-certs
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - emptyDir: {}
    name: istio-data
  - downwardAPI:
      defaultMode: 420
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.labels
        path: labels
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations
        path: annotations
    name: istio-podinfo
  - name: istio-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: istio-ca
          expirationSeconds: 43200
          path: istio-token
  - configMap:
      defaultMode: 420
      name: istio-ca-root-cert
    name: istiod-ca-cert
  - name: kube-api-access-8s74t
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-11-27T01:42:38Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-11-27T01:42:38Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-11-27T01:42:36Z"
    message: 'containers with unready status: [secret-init]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-11-27T01:42:36Z"
    message: 'containers with unready status: [secret-init]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-11-27T01:42:36Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://22bea27c4446b11f6fb7f52ee79a94e60ceb63ee3f2d51b575ae93219bd0035f
    image: istio/proxyv2:1.24.0
    imageID: docker-pullable://istio/proxyv2@sha256:ee6565e57319e01b5e45b929335eb9dc3d4b30d531b4652467e6939ae81b41f7
    lastState: {}
    name: istio-proxy
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-11-27T01:42:38Z"
    volumeMounts:
    - mountPath: /var/run/secrets/workload-spiffe-uds
      name: workload-socket
    - mountPath: /var/run/secrets/credential-uds
      name: credential-socket
    - mountPath: /var/run/secrets/workload-spiffe-credentials
      name: workload-certs
    - mountPath: /var/run/secrets/istio
      name: istiod-ca-cert
    - mountPath: /var/lib/istio/data
      name: istio-data
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /var/run/secrets/tokens
      name: istio-token
    - mountPath: /etc/istio/pod
      name: istio-podinfo
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8s74t
      readOnly: true
      recursiveReadOnly: Disabled
  - containerID: docker://b9dfcb333b202728b016e5443e68c61f9a6ce7cee24266776c9a4b192f95bb27
    image: quay.io/argoproj/argocd:v2.13.1
    imageID: docker-pullable://quay.io/argoproj/argocd@sha256:19608c266cc41e4986d9b1c2b79ea4c42bb9430269eefc5005e9d65be4d22868
    lastState: {}
    name: secret-init
    ready: false
    restartCount: 1
    started: false
    state:
      terminated:
        containerID: docker://b9dfcb333b202728b016e5443e68c61f9a6ce7cee24266776c9a4b192f95bb27
        exitCode: 0
        finishedAt: "2024-11-27T01:42:39Z"
        reason: Completed
        startedAt: "2024-11-27T01:42:39Z"
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8s74t
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 192.168.64.10
  hostIPs:
  - ip: 192.168.64.10
  initContainerStatuses:
  - containerID: docker://21b80cb0f3b10af8abe100368f035b240c378e3e681b3c4bbc83b9b980fd65a8
    image: istio/proxyv2:1.24.0
    imageID: docker-pullable://istio/proxyv2@sha256:ee6565e57319e01b5e45b929335eb9dc3d4b30d531b4652467e6939ae81b41f7
    lastState: {}
    name: istio-init
    ready: true
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: docker://21b80cb0f3b10af8abe100368f035b240c378e3e681b3c4bbc83b9b980fd65a8
        exitCode: 0
        finishedAt: "2024-11-27T01:42:37Z"
        reason: Completed
        startedAt: "2024-11-27T01:42:37Z"
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-8s74t
      readOnly: true
      recursiveReadOnly: Disabled
  phase: Running
  podIP: 10.244.0.59
  podIPs:
  - ip: 10.244.0.59
  qosClass: Burstable
  startTime: "2024-11-27T01:42:36Z"

Additional context

Installing with the https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml manifest works fine:

❯ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
clusterrole.rbac.authorization.k8s.io/argocd-application-controller created
clusterrole.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrole.rbac.authorization.k8s.io/argocd-server created
rolebinding.rbac.authorization.k8s.io/argocd-application-controller created
rolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
rolebinding.rbac.authorization.k8s.io/argocd-dex-server created
rolebinding.rbac.authorization.k8s.io/argocd-notifications-controller created
rolebinding.rbac.authorization.k8s.io/argocd-redis created
rolebinding.rbac.authorization.k8s.io/argocd-server created
clusterrolebinding.rbac.authorization.k8s.io/argocd-application-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-applicationset-controller created
clusterrolebinding.rbac.authorization.k8s.io/argocd-server created
configmap/argocd-cm created
configmap/argocd-cmd-params-cm created
configmap/argocd-gpg-keys-cm created
configmap/argocd-notifications-cm created
configmap/argocd-rbac-cm created
configmap/argocd-ssh-known-hosts-cm created
configmap/argocd-tls-certs-cm created
secret/argocd-notifications-secret created
secret/argocd-secret created
service/argocd-applicationset-controller created
service/argocd-dex-server created
service/argocd-metrics created
service/argocd-notifications-controller-metrics created
service/argocd-redis created
service/argocd-repo-server created
service/argocd-server created
service/argocd-server-metrics created
deployment.apps/argocd-applicationset-controller created
deployment.apps/argocd-dex-server created
deployment.apps/argocd-notifications-controller created
deployment.apps/argocd-redis created
deployment.apps/argocd-repo-server created
deployment.apps/argocd-server created
statefulset.apps/argocd-application-controller created
networkpolicy.networking.k8s.io/argocd-application-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-applicationset-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-dex-server-network-policy created
networkpolicy.networking.k8s.io/argocd-notifications-controller-network-policy created
networkpolicy.networking.k8s.io/argocd-redis-network-policy created
networkpolicy.networking.k8s.io/argocd-repo-server-network-policy created
networkpolicy.networking.k8s.io/argocd-server-network-policy created

❯ kubectl get pods -n argocd
NAME                                                READY   STATUS    RESTARTS   AGE
argocd-application-controller-0                     2/2     Running   0          2m1s
argocd-applicationset-controller-7ff94fc879-8bznc   2/2     Running   0          2m1s
argocd-dex-server-84b879d87c-d8vnd                  2/2     Running   0          2m1s
argocd-notifications-controller-6c65b4b9f6-q9drx    2/2     Running   0          2m1s
argocd-redis-868dbb7cf4-hlmgr                       2/2     Running   0          2m1s
argocd-repo-server-6d47848766-8vx9h                 2/2     Running   0          2m1s
argocd-server-c9f58d8cf-2552g                       2/2     Running   0          2m1s
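
This is consistent with the hook theory: plain kubectl apply neither runs Helm hooks nor waits for the redis-secret-init Job, so the stuck Job only affects the chart install. An alternative sketch, assuming the chart offers a redisSecretInit.enabled toggle (check values.yaml for your chart version), is to disable the hook Job and pre-create the secret it would have generated; the secret name argocd-redis and key auth come from the job log above, and the password value is a placeholder:

# secret.yaml (sketch) — create before installing the chart
apiVersion: v1
kind: Secret
metadata:
  name: argocd-redis
  namespace: argocd
stringData:
  auth: "<choose-a-strong-password>"

# values.yaml (sketch) — key name is an assumption to verify
redisSecretInit:
  enabled: false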
angelosanramon added the bug label on Nov 27, 2024