What did you do to encounter the bug?
Steps to reproduce the behavior:
Start the MongoDB Community Kubernetes Operator. After deployment, running "oc get mdbc" shows:

❯ oc get mdbc
NAME              PHASE     VERSION
staging-mongodb   Pending
What did you expect?
The pod should be in the Running state. We are seeing this error with MongoDB v6.0.20 and Operator v0.12.0.
For reference, the same deployment with MongoDB v6.0.19 comes up without issues.
What happened instead?
The pod is in CrashLoopBackOff state:

❯ oc get po | grep mongo
mongodb-kubernetes-operator-66b978df7f-nd6x4   1/1     Running            0               3h7m
staging-mongodb-0                              0/2     CrashLoopBackOff   41 (107s ago)   3h7m
oc describe pod staging-mongodb-0
Normal   Pulled     161m (x3 over 161m)      kubelet   Container image "<my-registery>/mongo:6.0.20" already present on machine
Warning  Unhealthy  161m                     kubelet   Readiness probe failed: {"level":"info","ts":"2025-02-17T08:05:03.784Z","msg":"logging configuration: &{Filename:/var/log/mongodb-mms-automation/readiness.log MaxSize:5 MaxAge:0 MaxBackups:5 LocalTime:false Compress:false size:0 file:<nil> mu:{state:0 sema:0} millCh:<nil> startMill:{done:{_:{} v:0} m:{state:0 sema:0}}}"} {"level":"info","ts":"2025-02-17T08:05:03.822Z","msg":"Mongod is not ready"} {"level":"info","ts":"2025-02-17T08:05:03.822Z","msg":"Reached the end of the check. Returning not ready."}
Normal   Created    161m (x3 over 161m)      kubelet   Created container mongod
Normal   Started    161m (x3 over 161m)      kubelet   Started container mongod
Warning  BackOff    7m1s (x718 over 161m)    kubelet   Back-off restarting failed container mongod in pod staging-mongodb-0_staging(a1beb1b2-15cf-440a-a522-199bba4c7109)
Warning  Unhealthy  116s (x1724 over 161m)   kubelet   (combined from similar events): Readiness probe failed: {"level":"info","ts":"2025-02-17T10:44:46.580Z","msg":"logging configuration: &{Filename:/var/log/mongodb-mms-automation/readiness.log MaxSize:5 MaxAge:0 MaxBackups:5 LocalTime:false Compress:false size:0 file:<nil> mu:{state:0 sema:0} millCh:<nil> startMill:{done:{_:{} v:0} m:{state:0 sema:0}}}"} {"level":"info","ts":"2025-02-17T10:44:46.613Z","msg":"Mongod is not ready"} {"level":"info","ts":"2025-02-17T10:44:46.613Z","msg":"Reached the end of the check. Returning not ready."}
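The probe output above never says more than "Mongod is not ready". For more detail, the readiness log it points at can be tailed from the agent container (a diagnostic sketch, assuming the mongodb-agent container is up and using the log path from the probe output):

# Tail the readiness log referenced by the probe
oc exec staging-mongodb-0 -c mongodb-agent -- tail -n 50 /var/log/mongodb-mms-automation/readiness.log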
oc logs staging-mongodb-0
Defaulted container "mongod" out of: mongod, mongodb-agent, mongod-posthook (init), mongodb-agent-readinessprobe (init)
exec /bin/sh: exec format error
oc logs -p -c mongod-posthook staging-mongodb-0
Error from server (BadRequest): previous terminated container "mongod-posthook" in pod "staging-mongodb-0" not found
oc logs -p -c mongodb-agent-readinessprobe staging-mongodb-0
Error from server (BadRequest): previous terminated container "mongodb-agent-readinessprobe" in pod "staging-mongodb-0" not found
Screenshots

Operator Information
Operator Version: v0.12.0
MongoDB Image used: 6.0.20
Kubernetes Cluster Information
Red Hat OpenShift
Additional context
We have not changed anything on the application side; the same setup was working fine with Operator v0.12.0 and MongoDB image 6.0.19.
Does this mean that Mongo 6.0.20 cannot be deployed with v0.12.0 of the operator?
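One way to separate the operator from the image is to run the image outside Kubernetes, on a machine with the same CPU architecture as the cluster nodes (a sketch, assuming podman and pull access to <my-registery>):

# Smoke test: does the image's mongod binary execute at all?
podman run --rm <my-registery>/mongo:6.0.20 mongod --version

If this also fails with "exec format error", the 6.0.20 image in the mirror is broken for that architecture, independent of operator v0.12.0.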
YAML definition for the MongoDB deployment:
apiVersion: v1
items:
- kind: MongoDBCommunity
  metadata:
    annotations:
      productChargedContainers: All
    creationTimestamp: "2025-02-17T07:10:06Z"
    generation: 1
    labels:
      app.kubernetes.io/instance: staging
      app.kubernetes.io/name: staging-mongodb
    name: staging-mongodb
    namespace: staging
    resourceVersion: ""
    uid:
  spec:
    additionalMongodConfig:
      net.maxIncomingConnections: 900
    featureCompatibilityVersion: "6.0"
    members: 1
    security:
      authentication:
        ignoreUnknownUsers: true
        modes:
        - SCRAM
    statefulSet:
      spec:
        template:
          metadata:
            annotations:
              productChargedContainers: All
            labels:
              app.kubernetes.io/instance: staging
              app.kubernetes.io/name: staging-mongodb
          spec:
            containers:
            - image: <my-registery>/mongo:6.0.20
              name: mongod
              resources:
                limits:
                  cpu: "4"
                  ephemeral-storage: 5Gi
                  memory: 10Gi
                requests:
                  cpu: "1"
                  ephemeral-storage: 1Gi
                  memory: 2Gi
            imagePullSecrets:
            - name:
            initContainers:
            - name: mongodb-agent-readinessprobe
              resources:
                limits:
                  cpu: 100m
                  memory: 500Mi
                requests:
                  cpu: 6m
                  memory: 6Mi
            - name: mongod-posthook
              resources:
                limits:
                  cpu: 100m
                  memory: 500Mi
                requests:
                  cpu: 6m
                  memory: 6Mi
        volumeClaimTemplates:
        - apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            storageClassName: rook-ceph-block
            volumeMode: Filesystem
        - apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name:
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
            storageClassName: rook-ceph-block
            volumeMode: Filesystem
    type: ReplicaSet
    users:
    - name:
      passwordSecretRef:
        key:
        name:
      roles:
      - name: clusterAdmin
      - name: userAdminAnyDatabase
      - name: readWriteAnyDatabase
      scramCredentialsSecretName:
    - name:
      passwordSecretRef:
        key:
        name:
      roles:
      - name:
      scramCredentialsSecretName:
    version: 6.0.20
  status:
    currentMongoDBMembers: 0
    currentStatefulSetReplicas: 0
    message: ReplicaSet is not yet ready, retrying in 10 seconds
    mongoUri: ""
    phase: Pending
kind: List
metadata:
  resourceVersion: ""