The argocd-redis-ha-haproxy container within ArgoCD v2.14.2 on Kubernetes v1.32.2 running on Raspberry Pi 5 fails #21950

cookew opened this issue Feb 23, 2025
Labels: bug (Something isn't working)

Checklist:

  • I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
  • I've included steps to reproduce the bug.
  • I've pasted the output of argocd version.

Describe the bug

I run a vanilla Kubernetes v1.32.2 cluster on multiple Raspberry Pi 5s: 3 control-plane nodes and 4 worker nodes. ArgoCD is deployed using Kustomize. The argocd-redis-ha-haproxy container fails with no apparent reason given in the logs or in the kubectl describe output. Bumping the container image to a tag that supports linux/arm64/v8 appears to fix the issue.

The image used for the argocd-redis-ha-haproxy container is public.ecr.aws/docker/library/haproxy:2.6.17-alpine, which I suspect is mirrored from Docker Hub. Looking at the available images and the architectures they support, I changed the tag to one that supports arm64/v8; the container then started successfully and I can now log into ArgoCD.

I haven't done much more than log into ArgoCD at this point, so I don't know whether changing the haproxy image affects anything else.

Link to the container image used in the original manifest:
https://hub.docker.com/_/haproxy/tags?name=2.6.17-alpine

Link to the new container used:
https://hub.docker.com/_/haproxy/tags?name=2.6.21-alpine
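
For reference, the platforms published for each tag can be compared with docker manifest inspect (a quick check, assuming the Docker CLI is available; crane or skopeo would work just as well). The 2.6.21-alpine manifest list should include an arm64 entry:

$ docker manifest inspect public.ecr.aws/docker/library/haproxy:2.6.21-alpine | grep '"architecture"'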

Adding the following to my kustomization.yml file fixed the issue:

images:
- name: public.ecr.aws/docker/library/haproxy
  newTag: 2.6.21-alpine
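
To confirm the override is picked up before applying, the rendered manifests can be checked locally (assuming the kustomization.yml shown in the Logs section below is in the current directory):

$ kubectl kustomize . | grep 'haproxy:'

The haproxy image references should now show the 2.6.21-alpine tag instead of 2.6.17-alpine.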

To Reproduce

Deploy ArgoCD with kustomize.
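
For example (a minimal sketch, assuming the kustomization.yml from the Logs section below is in the current directory):

$ kubectl apply -k .
$ kubectl -n argocd get pods

On the arm64 nodes, the argocd-redis-ha-haproxy pod keeps restarting and never becomes Ready.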

Expected behavior

All of the containers should start and run without crashing.

Version

$ argocd version
argocd: v2.14.2+ad27246
  BuildDate: 2025-02-05T23:44:17Z
  GitCommit: ad2724661b66ede607db9b5bd4c3c26491f5be67
  GitTreeState: clean
  GoVersion: go1.23.3
  Compiler: gc
  Platform: linux/arm64
argocd-server: v2.14.2+ad27246
  BuildDate: 2025-02-05T23:44:17Z
  GitCommit: ad2724661b66ede607db9b5bd4c3c26491f5be67
  GitTreeState: clean
  GoVersion: go1.23.3
  Compiler: gc
  Platform: linux/arm64
  Kustomize Version: v5.4.3 2024-07-19T16:40:33Z
  Helm Version: v3.16.3+gcfd0749
  Kubectl Version: v0.31.0
  Jsonnet Version: v0.20.0


$ kubectl version
Client Version: v1.32.2
Kustomize Version: v5.5.0
Server Version: v1.32.2


kube-worker-4 ~ $ uname -a
Linux kube-worker-4 6.11.0-1004-raspi #4-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 22:25:34 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

Logs

$ cat kustomization.yml 
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
- base/namespace.yml
- base/argo-dex-server-tls.yml
- base/argo-repo-server-tls.yml
- base/argo-server-tls.yml
- base/ingress.yaml
- https://github.com/argoproj/argo-cd/manifests/ha/cluster-install?ref=v2.14.2
patches:
- path: patches/argocd-cm.yml
- path: patches/argocd-rbac-cm.yml
  target:
    kind: ConfigMap
    name: argocd-rbac-cm
- path: patches/argocd-secret.yml


$ kubectl -n argocd describe pod/argocd-redis-ha-haproxy-5c9877b76d-89hh7
Name:             argocd-redis-ha-haproxy-5c9877b76d-89hh7
Namespace:        argocd
Priority:         0
Service Account:  argocd-redis-ha-haproxy
Node:             kube-worker-4/10.42.16.47
Start Time:       Sun, 23 Feb 2025 08:47:25 -0700
Labels:           app.kubernetes.io/name=argocd-redis-ha-haproxy
                  pod-template-hash=5c9877b76d
Annotations:      checksum/config: e34e8124c38bcfd2f16e75620bbde30158686692b13bc449eecc44c51b207d54
                  prometheus.io/path: /metrics
                  prometheus.io/port: 9101
                  prometheus.io/scrape: true
Status:           Running
IP:               10.0.5.188
IPs:
  IP:           10.0.5.188
Controlled By:  ReplicaSet/argocd-redis-ha-haproxy-5c9877b76d
Init Containers:
  secret-init:
    Container ID:    containerd://f56dff58ebd30442404cee07b3dc4ad49a820a353622ef798cb472b51fcf12f7
    Image:           quay.io/argoproj/argocd:v2.14.2
    Image ID:        quay.io/argoproj/argocd@sha256:018f6444077deb39eac7c549a0ffe68d75da71751dd19899e05d3b60e1c2476f
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Command:
      argocd
      admin
      redis-initial-password
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 23 Feb 2025 08:47:26 -0700
      Finished:     Sun, 23 Feb 2025 08:47:27 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c48j4 (ro)
  config-init:
    Container ID:    containerd://872f452c6d55f5fe634b9bf19039770587866b9d740d4fde469278886e5420ba
    Image:           public.ecr.aws/docker/library/haproxy:2.6.17-alpine
    Image ID:        public.ecr.aws/docker/library/haproxy@sha256:14772bafca418146ade0acba20ebb17a06ac2ad720a93baa6d6792ce1e0f57d9
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Command:
      sh
    Args:
      /readonly/haproxy_init.sh
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 23 Feb 2025 08:47:28 -0700
      Finished:     Sun, 23 Feb 2025 08:47:28 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from data (rw)
      /readonly from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c48j4 (ro)
Containers:
  haproxy:
    Container ID:    containerd://5c79125a49384f671773cdb2e44936490e85e64fa447c4e5aa0de42069dca4de
    Image:           public.ecr.aws/docker/library/haproxy:2.6.17-alpine
    Image ID:        public.ecr.aws/docker/library/haproxy@sha256:14772bafca418146ade0acba20ebb17a06ac2ad720a93baa6d6792ce1e0f57d9
    Ports:           6379/TCP, 9101/TCP
    Host Ports:      0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Sun, 23 Feb 2025 08:47:57 -0700
    Last State:      Terminated
      Reason:        OOMKilled
      Exit Code:     137
      Started:       Sun, 23 Feb 2025 08:47:43 -0700
      Finished:      Sun, 23 Feb 2025 08:47:56 -0700
    Ready:           False
    Restart Count:   2
    Liveness:        http-get http://:8888/healthz delay=5s timeout=1s period=3s #success=1 #failure=3
    Readiness:       http-get http://:8888/healthz delay=5s timeout=1s period=3s #success=1 #failure=3
    Environment:
      AUTH:  <set to the key 'auth' in secret 'argocd-redis'>  Optional: false
    Mounts:
      /run/haproxy from shared-socket (rw)
      /usr/local/etc/haproxy from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c48j4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      argocd-redis-ha-configmap
    Optional:  false
  shared-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-c48j4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  43s                default-scheduler  Successfully assigned argocd/argocd-redis-ha-haproxy-5c9877b76d-89hh7 to kube-worker-4
  Normal   Pulling    43s                kubelet            Pulling image "quay.io/argoproj/argocd:v2.14.2"
  Normal   Pulled     42s                kubelet            Successfully pulled image "quay.io/argoproj/argocd:v2.14.2" in 970ms (970ms including waiting). Image size: 177991083 bytes.
  Normal   Created    42s                kubelet            Created container: secret-init
  Normal   Started    42s                kubelet            Started container secret-init
  Normal   Pulling    41s                kubelet            Pulling image "public.ecr.aws/docker/library/haproxy:2.6.17-alpine"
  Normal   Pulled     41s                kubelet            Successfully pulled image "public.ecr.aws/docker/library/haproxy:2.6.17-alpine" in 438ms (438ms including waiting). Image size: 12031587 bytes.
  Normal   Started    40s                kubelet            Started container config-init
  Normal   Created    40s                kubelet            Created container: config-init
  Normal   Pulled     40s                kubelet            Successfully pulled image "public.ecr.aws/docker/library/haproxy:2.6.17-alpine" in 281ms (281ms including waiting). Image size: 12031587 bytes.
  Normal   Pulled     25s                kubelet            Successfully pulled image "public.ecr.aws/docker/library/haproxy:2.6.17-alpine" in 304ms (304ms including waiting). Image size: 12031587 bytes.
  Normal   Killing    13s (x2 over 28s)  kubelet            Container haproxy failed liveness probe, will be restarted
  Normal   Pulling    12s (x3 over 40s)  kubelet            Pulling image "public.ecr.aws/docker/library/haproxy:2.6.17-alpine"
  Normal   Created    11s (x3 over 40s)  kubelet            Created container: haproxy
  Normal   Started    11s (x3 over 40s)  kubelet            Started container haproxy
  Normal   Pulled     11s                kubelet            Successfully pulled image "public.ecr.aws/docker/library/haproxy:2.6.17-alpine" in 520ms (520ms including waiting). Image size: 12031587 bytes.
  Warning  Unhealthy  1s (x10 over 35s)  kubelet            Readiness probe failed: Get "http://10.0.5.188:8888/healthz": dial tcp 10.0.5.188:8888: connect: connection refused
  Warning  Unhealthy  1s (x8 over 34s)   kubelet            Liveness probe failed: Get "http://10.0.5.188:8888/healthz": dial tcp 10.0.5.188:8888: connect: connection refused


$ kubectl -n argocd logs  pod/argocd-redis-ha-haproxy-5c9877b76d-89hh7 config-init 
10.96.186.83      argocd-redis-ha-announce-0.argocd.svc.cluster.local  argocd-redis-ha-announce-0.argocd.svc.cluster.local argocd-redis-ha-announce-0
10.96.51.166      argocd-redis-ha-announce-1.argocd.svc.cluster.local  argocd-redis-ha-announce-1.argocd.svc.cluster.local argocd-redis-ha-announce-1
10.96.112.185     argocd-redis-ha-announce-2.argocd.svc.cluster.local  argocd-redis-ha-announce-2.argocd.svc.cluster.local argocd-redis-ha-announce-2


$ kubectl -n argocd logs  pod/argocd-redis-ha-haproxy-5c9877b76d-89hh7 haproxy 


$ kubectl -n argocd logs  pod/argocd-redis-ha-haproxy-5c9877b76d-89hh7 secret-init 
Checking for initial Redis password in secret argocd/argocd-redis at key auth. 
Argo CD Redis secret state confirmed: secret name argocd-redis.
Password secret is configured properly.