# Node.js Docker Kubernetes RabbitMQ Example

This README provides a detailed guide to the architecture and deployment of a Node.js microservice application, leveraging RabbitMQ for messaging and Kubernetes for orchestration. It aims to explain the "why" and "how" behind the choices of communication and deployment mechanisms.
- Why RabbitMQ?
- How Services Communicate
- Why Kubernetes?
- Kubernetes Architecture
- Deployment Process
- Useful Commands
## Why RabbitMQ?

RabbitMQ is chosen for this microservice architecture due to its robust messaging capabilities, which enable efficient communication between the different parts of the application. It supports complex routing and provides message delivery guarantees and fault tolerance, both crucial for the reliability of microservices.
## How Services Communicate

Services communicate using RabbitMQ as a message broker. Each service sends and receives messages via queues, enabling decoupled and scalable interactions. This asynchronous communication pattern helps handle varying loads and ensures that the failure of one service does not impact the others.
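As a concrete sketch of this pattern — assuming the popular `amqplib` client (`npm install amqplib`), a broker reachable via `AMQP_URL`, and a queue named `tasks`, all of which are illustrative rather than taken from this repo — a publisher and subscriber might look like:

```javascript
// messaging.js — minimal publish/subscribe sketch over RabbitMQ.
// Assumptions: `npm install amqplib`, a broker at AMQP_URL, a queue "tasks".
const AMQP_URL = process.env.AMQP_URL || 'amqp://localhost:5672';

// Messages cross the wire as bytes; JSON keeps the services decoupled
// from each other's in-memory types.
function serialize(payload) {
  return Buffer.from(JSON.stringify(payload));
}

async function publish(payload) {
  const amqp = require('amqplib');
  const conn = await amqp.connect(AMQP_URL);
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true }); // queue survives broker restarts
  ch.sendToQueue('tasks', serialize(payload), { persistent: true });
  await ch.close();
  await conn.close();
}

async function subscribe(handler) {
  const amqp = require('amqplib');
  const conn = await amqp.connect(AMQP_URL);
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true });
  await ch.consume('tasks', (msg) => {
    if (msg !== null) {
      handler(JSON.parse(msg.content.toString()));
      ch.ack(msg); // explicit ack: RabbitMQ redelivers unacknowledged messages
    }
  });
}

module.exports = { serialize, publish, subscribe };
```

Because the publisher only knows the queue name, not the subscriber's address, either side can be restarted or scaled independently; messages queued while a subscriber is down are delivered once it returns.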
## Why Kubernetes?

Kubernetes is utilized for its powerful container orchestration. It handles the deployment, scaling, and management of containerized applications, making it ideal for microservices. Kubernetes provides high availability, load balancing, and automated rollouts and rollbacks, enhancing the resilience and scalability of applications.
## Kubernetes Architecture

- Pods: The smallest deployable units created and managed by Kubernetes. Each pod represents a running process in your cluster.
- Clusters: A set of node machines for running containerized applications. A Kubernetes cluster has at least one worker node and a master node that coordinates the cluster.
- Nodes: Worker machines in Kubernetes, which can be either physical or virtual machines, depending on the cluster.
- Deployments: Kubernetes objects that manage the deployment of containerized applications, ensuring that a specified number of pods are running at any given time.
## Deployment Process

The deployment process involves several steps:
- Creating Docker images: Package the Node.js application and its dependencies into Docker containers.
- Pushing to a Registry: Push the Docker images to a container registry (e.g., Docker Hub).
- Deploying to Kubernetes: Use Kubernetes manifests to deploy your application. This includes setting up services, deployments, and necessary configurations.
- Managing with kubectl: Use kubectl commands to manage the Kubernetes resources.
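The steps above might look like the following — the registry user `myuser`, image name `publisher`, and manifest directory `k8s/` are illustrative, not taken from this repo:

```shell
# 1. Build the Docker image for a service.
docker build -t myuser/publisher:v1 ./publisher

# 2. Push the image to a registry (Docker Hub here; run `docker login` first).
docker push myuser/publisher:v1

# 3. Apply the manifests (deployments, services, ConfigMaps) to the cluster.
kubectl apply -f k8s/

# 4. Verify the rollout with kubectl.
kubectl rollout status deployment/publisher
kubectl get pods
```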
## Useful Commands

- `kubectl get nodes -o wide`: List all nodes in the Kubernetes cluster, providing details like status, roles, and IP addresses.
- `kubectl get pods`: List all pods in the current namespace, showing their status, restarts, and age.
- `kubectl get svc`: List all services in the current namespace, showing their types, cluster IPs, external IPs, and ports.
- `kubectl describe pod <pod-name>`: Provide detailed information about a specific pod, including events and resource usage.
- `kubectl describe svc <service-name>`: Provide detailed information about a specific service, including its endpoints and selectors.
- `kubectl apply -f <file.yaml>`: Apply a configuration to a resource by filename or stdin; used to create or update resources defined in YAML files.
- `kubectl port-forward svc/<service-name> <local-port>:<service-port>`: Forward one or more local ports to a service such as `my-service`, making it accessible on localhost. Useful for local testing.
- `kubectl logs <pod-name>`: Fetch the logs of a specific pod. Useful for debugging and monitoring applications.
- `kubectl delete <resource-type> <resource-name>`: Delete resources such as pods, services, or deployments by name.
- `kubectl get configmap`: List all ConfigMaps in the current namespace.
- `kubectl describe configmap <configmap-name>`: Provide detailed information about a specific ConfigMap.
- `kubectl get deployments`: List all deployments in the current namespace, showing their status and number of replicas.
- `kubectl describe deployment <deployment-name>`: Provide detailed information about a specific deployment.
## Prerequisites

To work with this application, you need:
- Docker installed on your machine.
- A Kubernetes cluster set up (e.g., Minikube).
- Skaffold installed for easy deployment and testing.
- RabbitMQ running either locally or in the cluster.
- Optionally, Rancher Desktop, which simplifies installing Docker and Kubernetes.
## Dockerfiles

Each microservice has its own `Dockerfile`, which defines how the Docker image for that service is built. The Dockerfiles include the application's dependencies and the command to run the application.
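As a sketch — the base image, paths, and `server.js` entry point are illustrative, not taken from this repo — a service's Dockerfile might look like:

```dockerfile
# Illustrative Dockerfile for one Node.js microservice.
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between code changes.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source and define the startup command.
COPY . .
CMD ["node", "server.js"]
```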
## ConfigMaps

ConfigMaps are used to store configuration data that can be accessed by the microservices.
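For example — the ConfigMap name, key, and broker hostname below are assumptions for illustration — the RabbitMQ connection settings could be stored and exposed to a pod as environment variables:

```yaml
# Illustrative ConfigMap; the key and value are assumptions, not from this repo.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-config
data:
  AMQP_URL: amqp://rabbitmq:5672
```

A container spec can then import every key at once with `envFrom: [{ configMapRef: { name: rabbitmq-config } }]`, so configuration changes do not require rebuilding the image.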
## Kubernetes Manifests

The Kubernetes `deployment.yaml` files define how the publisher and subscriber microservices are deployed. They include specifications like the number of replicas, the Docker image to use, and the necessary environment variables.
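A minimal Deployment sketch — the image name, labels, replica count, and ConfigMap reference are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: publisher
spec:
  replicas: 2                        # number of pods Kubernetes keeps running
  selector:
    matchLabels:
      app: publisher
  template:
    metadata:
      labels:
        app: publisher
    spec:
      containers:
        - name: publisher
          image: myuser/publisher:v1     # illustrative image name
          envFrom:
            - configMapRef:
                name: rabbitmq-config    # illustrative ConfigMap name
```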
The Kubernetes `service.yaml` files are used to expose the microservices within the cluster. This ensures that the publisher and subscriber can communicate with each other and with RabbitMQ.
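A matching Service sketch — the port numbers are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: publisher
spec:
  selector:
    app: publisher      # routes traffic to pods carrying this label
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 3000  # illustrative port the Node.js app listens on
```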
## Skaffold

The `skaffold.yaml` file is used to manage the development and deployment lifecycle of the microservices using Skaffold. This includes building the Docker images and deploying them to Kubernetes. Useful commands include:
- `skaffold dev`: Runs Skaffold in development mode, monitoring your source code for changes and performing builds and deployments automatically.
- `skaffold build`: Builds the Docker images using the configurations provided in `skaffold.yaml`.
- `skaffold deploy`: Deploys your application to Kubernetes according to the `skaffold.yaml` configuration.
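A minimal `skaffold.yaml` along these lines — the schema version, image names, build contexts, and manifest paths are illustrative, not taken from this repo:

```yaml
apiVersion: skaffold/v4beta6
kind: Config
build:
  artifacts:
    - image: myuser/publisher      # illustrative image names
      context: publisher
    - image: myuser/subscriber
      context: subscriber
manifests:
  rawYaml:
    - k8s/*.yaml                   # deployments, services, ConfigMaps
deploy:
  kubectl: {}
```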
## Port Forwarding

Port forwarding is configured to allow local access to the Kubernetes services. Commands for setting this up include:
- `kubectl port-forward svc/publisher 8080:80`: Forward traffic from your local machine's port 8080 to the Kubernetes service `publisher` on port 80.
- `kubectl port-forward svc/subscriber 9090:90`: Forward traffic from your local machine's port 9090 to the Kubernetes service `subscriber` on port 90.
## Accessing the Application

Once everything is deployed, you can access the application by navigating to the corresponding port on your local machine, as defined in the port-forwarding settings.