Installing MongoDB on Kubernetes with Replica Sets and NO MongoDB Operator

Are you tired of searching for MongoDB on Kubernetes and immediately landing on a MongoDB site telling you to use their operator? Are you tired of finding nothing but Helm charts with no clue what is really going on inside them, or instructions made needlessly complex? Are you tired of having no choice but to be pushed to MongoDB's cloud or a Cloud Service Provider (AWS, Azure, or GCP) service? I was tired of looking online just to find some complex way of setting up MongoDB. So let's cut out the complexity and move on to making MongoDB simple.
[Image: the full stack deployed on ArgoCD within the cluster]

Step 1. Setting up the Role-Based Access Controls (RBAC)

The first thing we need to do is set up a Service Account, a ClusterRole, and connect the two with a ClusterRoleBinding. This will be used for our "headless" service that MongoDB will utilize when creating DNS associations for the replica set members.

Create a file called mongodb-rbac.yml and add the following:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo-account
  namespace: <your-namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mongo-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["*"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mongo-role-binding
subjects:
- kind: ServiceAccount
  name: mongo-account
  namespace: <your-namespace>
roleRef:
  kind: ClusterRole
  name: mongo-role
  apiGroup: rbac.authorization.k8s.io

The rules are pretty simple. They grant read access (get, list, watch) to deployments and pods, and full access to configmaps and services.
Apply the RBAC by running:

kubectl apply -f mongodb-rbac.yml

Step 2. Setting up the Headless Service

First of all, what in the world is a "headless" service? Well, in Kubernetes, if no Service type is specified, a ClusterIP is assigned by default. A headless service means that NO ClusterIP will be assigned. So how do we do this? Well, it's simple. Just add "clusterIP: None" into your specification for the service. Let's do just that.
Create a file called mongodb-headless.yml and add the following:

apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: <your-namespace>
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    app: mongo

Great! Now apply by using;

kubectl apply -f mongodb-headless.yml
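Why go headless? Because the headless service gives each pod of the StatefulSet (which we stand up in Step 3) a stable, individually addressable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, which is exactly what replica set members need to find each other. As a quick sketch, here are the names our two pods will get; the namespace value is a placeholder for your own:

```shell
# Sketch of the stable DNS names the headless service "mongo" gives each
# StatefulSet pod. NAMESPACE is a placeholder -- substitute your own.
NAMESPACE="your-namespace"
for i in 0 1; do
  echo "mongodb-replica-${i}.mongo.${NAMESPACE}.svc.cluster.local"
done
```

Within the same namespace, the short form mongodb-replica-0.mongo is enough, which is what we use later when configuring the replica set.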

Step 3. Setting up the StatefulSet Deployment with Persistence

MongoDB really is monolithic, but in order to run it on Kubernetes, a StatefulSet will be required. This is because we will NOT be using an Operator to handle the statefulness but will instead do it on our own. Persistence will be set up as well. This will be done with a volumeClaimTemplate.
Create a file called mongodb-stateful-deployment.yml and add the following:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-replica
  namespace: <your-namespace>
spec:
  serviceName: mongo
  replicas: 2
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
        selector: mongo
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: mongo-account
      containers:
      - name: mongodb
        image: docker.io/mongo:4.2
        command: ["/bin/sh"]
        args: ["-c", "mongod --replSet=rs0 --bind_ip_all"]
        resources:
          limits:
            cpu: 1
            memory: 1500Mi
          requests:
            cpu: 1
            memory: 1000Mi
        ports:
        - name: mongo-port
          containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi

This should be pretty simple to understand. Basically, the StatefulSet uses the Service Account that was stood up in Step 1, along with the docker.io/mongo:4.2 image. The key to the replica set is the command passed at runtime through the args field. This is what sets up the replica set on startup for each pod:

mongod --replSet=rs0 --bind_ip_all

Of course, the "/data/db" folder is persisted by assigning it to the VolumeClaimTemplate.
Great! Now apply the file by running:

kubectl apply -f mongodb-stateful-deployment.yml

Step 4. Setting up Replication Host

Some manual configuration will need to be done in order to set up replication. However, the steps are very simple. First, exec into Pod 0, which was created by the StatefulSet, by running:

kubectl exec -it mongodb-replica-0 -n <your-namespace> -- mongo

This opens an interactive terminal in the pod and runs the mongo command to get a shell inside MongoDB.
From here replication must be initialized. To do so run the following:

rs.initiate()

The expected output after running rs.initiate() is to see:
"no configuration specified. Using a default configuration for the set".
Now let's set up a variable called cfg. This variable will hold the output of rs.conf(). Run the following:

var cfg = rs.conf()

Now let's utilize the variable to set the Primary server's host in the ReplicaSet configuration.

cfg.members[0].host="mongodb-replica-0.mongo:27017"

So what in the world does this mean? The "mongodb-replica-0" part represents the Pod name, "mongo" represents the "headless service" that we stood up, and 27017 of course is the MongoDB port.
Now let's apply the configuration by running:

rs.reconfig(cfg)

Great! What we should now see is a response of:
"ok": 1
The ok of 1 represents that the configuration was successful.
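If you find yourself repeating this setup, the interactive Step 4 commands can be collected into a script file and piped into the pod's mongo shell in one shot. This is a sketch, not part of the original walkthrough; the file path is just a convenient choice:

```shell
# Hypothetical one-shot version of the Step 4 commands: write them to a
# script file that can later be piped into the mongo shell in pod 0.
cat <<'EOF' > /tmp/init-rs.js
rs.initiate()
var cfg = rs.conf()
cfg.members[0].host = "mongodb-replica-0.mongo:27017"
rs.reconfig(cfg)
EOF
# Show what will be sent to the mongo shell
cat /tmp/init-rs.js
```

You could then run it non-interactively with kubectl exec -i mongodb-replica-0 -n <your-namespace> -- mongo < /tmp/init-rs.js, using the same exec pattern as above.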

Step 5. Add the Second MongoDB Instance/Pod

Now the second instance/pod needs to be added to the replica set configuration. To do that, run the following (note the pod name is mongodb-replica-1, the second pod, not pod 0):

rs.add("mongodb-replica-1.mongo:27017")

Again the output should show an OK status of 1.

Step 6. Verify the ReplicaSet Status

This is a very easy command and should be used to reference the primary and secondary servers. Run the command:

rs.status()

This should now show the two servers added to the replica.
Updating Replicas (Optional)
Now let's say that you want to add another replica. All you have to do is run:

kubectl scale sts <name of statefulset> -n <name of namespace> --replicas <number of replicas>

Now to add the new replicas to the replica set, just exec back into the replica-0 pod by running:

kubectl exec -it mongodb-replica-0 -n <your-namespace> -- mongo

Then repeat what we did in Step 5, but for each new pod added by the scale-up.
Of course, if you want to remove a member, just run rs.remove("<host>:27017") with that member's host.
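Because the StatefulSet pods follow a fixed naming pattern, the rs.add() lines for any scale-up can be generated mechanically. A hypothetical helper, assuming you scaled from 2 replicas up to 4:

```shell
# Hypothetical helper: print the rs.add() lines for the pods gained when
# scaling the StatefulSet from 2 members up to REPLICAS members.
REPLICAS=4
for i in $(seq 2 $((REPLICAS - 1))); do
  echo "rs.add(\"mongodb-replica-${i}.mongo:27017\")"
done
```

This prints the rs.add() commands for pods 2 and 3, ready to paste into the mongo shell in pod 0.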
Awesome. I will be creating an article very soon that will show how to set up an External Connection within Kubernetes to connect to MongoDB using Compass.
