This is the second post in my series on managing clustered services with Nirmata. My previous post showed how to manage Zookeeper clusters. In this post, I will cover managing MongoDB clusters with Nirmata. You can skip this introduction if you have already read the previous post.
Cloud native applications may use several backing services for functions like messaging and data management. These backing services are typically deployed separately from the applications, so that multiple application instances (environments) can share them. These tools also typically run as a cluster. In this blog post, I will show you how you can use Docker and Nirmata to easily deploy and operate a production MongoDB cluster.
Nirmata has been designed and built from the ground up to deploy and operate Microservices applications. However, Nirmata can also orchestrate cluster services such as Zookeeper, Kafka, and MongoDB. Cluster services require a different style of orchestration compared to "regular" Microservices. They tend to be less elastic in nature than application-tier Microservices. For instance, adding or removing a node sometimes requires restarting the other nodes so that they can sync up their configuration. The setup often involves specifying the IP addresses and ports of the other nodes in a configuration file. This means that the placement of all the nodes must be calculated before the configuration files of the nodes can be derived.
Deploying MongoDB
With Nirmata you can deploy a MongoDB cluster in four easy steps:
- Create a Cloud Provider
- Create a Host Group with at least 3 hosts
- Import the MongoDB blueprint
- Deploy the MongoDB cluster in an Environment
Creating a Cloud Provider and a Host Group
You must first go through the initial setup to on-board the cloud resources you want to use to deploy your MongoDB cluster. You can deploy your cluster in one of the public clouds we support, or in your private cloud on OpenStack or vSphere.
Importing the MongoDB Blueprint
Next, you can import the Nirmata MongoDB blueprint into your account. Using the navigation panel, go to the Applications panel and import the MongoDB blueprint.
Click ‘Next’ and then select a container size of 2GB:
You can now click ‘Finish’ to complete the blueprint import.
You can expand the application definition to see the details of the blueprint. The details of the blueprint will be explained in the last part of this post.
A blueprint is only a logical definition of an application. At this point, the MongoDB cluster is not running in your cloud. The next step is deploying the cluster in an Environment. An environment is a runtime instantiation of an application.
Creating an Environment
To deploy the cluster, use the navigation panel on the left and select "Environments". Click on the Add button.
The only mandatory parameters are the name of your environment, the type of the environment (Production, Staging, or Sandbox), and the application blueprint you want to deploy, MongoDB-3.4.6 in this case. Now, just click the Finish button to trigger the deployment. At this point, Nirmata computes the placement of the 3 MongoDB containers required for this cluster and then creates the containers on the hosts you configured in your Host Group. When container creation and the health checks are complete, you will see the 3 MongoDB nodes in a running state.
Using the mongo CLI, you can check the state of your cluster:
And that is it, your MongoDB cluster is ready for use!
Operating a MongoDB Cluster
We have just seen that deploying a MongoDB cluster at scale on a public or private cloud can be done very quickly using Nirmata. We will now see how you can operate and maintain your cluster. We will cover 3 typical use cases:
- Scaling up the MongoDB cluster
- Scaling down the MongoDB cluster
- MongoDB node resiliency
Scaling-Up the Cluster
To scale up your cluster, edit the scaling rules in your environment. Before actually creating new nodes, make sure your host group has enough hosts. In our case, we are going to scale the cluster from 3 nodes to 5, so our host group has 5 hosts. Also make sure to check the ‘Auto-Recovery’ flag.
Once the 2 additional nodes are in a running state, you can restart the 3 other nodes to make sure all 5 nodes have the same configuration.
Scaling-Down the Cluster
The process of scaling down your cluster is very similar to the one used to scale up. If you want to scale down from 5 nodes to 3, edit the scaling policy in your environment and set the desired count to 3. The Nirmata orchestration will shut down two of the five running nodes. You can then restart the 3 remaining nodes to make sure their configuration is up to date.
You can also be specific about which of the two nodes you want to shut down. Instead of editing the scaling rule, directly delete the instance you want to remove from the cluster. Make sure to select the option "Decrements scaling rule".
MongoDB Node Resiliency
Nirmata provides out-of-the-box service instance resiliency. If a service instance is deleted or a container fails, Nirmata will restart your service instance automatically. For regular Microservices, the service instance can be restarted on the same host or on a different host, depending on the memory and ports available on each host at that time. With cluster services such as MongoDB, Nirmata will always try to restart the service instance on the same host. This is done to guarantee that the configuration of the other nodes and the configuration of the MongoDB clients are still valid after the node has recovered.
MongoDB Cluster Clients
Now that your cluster is up and running, you probably want to connect your application to it. If your application is not deployed using Nirmata, you need to provide the MongoDB connect string to your application. To format the connect string, you can look at the IP addresses of the hosts where the nodes are running. You will also need to know the MongoDB client port. This port is specified in the blueprint with a value of 27017.
Another option is to deploy your application using Nirmata in the same environment where the MongoDB cluster is running. You can execute the following steps to do this:
- Import the MongoDB blueprint and use it as a starting point for your own blueprint (rename it to the name of your application).
- Add the definition of your services to this blueprint.
- Deploy your application blueprint in an environment.
When adding the definition of your services to the blueprint, make sure to specify that your services depend on the MongoDB service:
This indicates to the Nirmata orchestration that MongoDB must be started first, followed by your services. An environment variable called NIRMATA_CLUSTER_INFO_mongodb will be injected into all the containers running your services. The value of this environment variable is JSON. Here is an example for a 3-node MongoDB cluster:
```json
[
  {
    "ipAddress": "10.10.130.24",
    "ports": [
      { "portType": "TCP", "containerPort": 27017, "hostPort": 27017, "portName": "SERVICE_PORT" }
    ],
    "nodeId": 1
  },
  {
    "ipAddress": "10.10.130.114",
    "ports": [
      { "portType": "TCP", "containerPort": 27017, "hostPort": 27017, "portName": "SERVICE_PORT" }
    ],
    "nodeId": 2
  },
  {
    "ipAddress": "10.10.128.176",
    "ports": [
      { "portType": "TCP", "containerPort": 27017, "hostPort": 27017, "portName": "SERVICE_PORT" }
    ],
    "nodeId": 3
  }
]
```
Your application can parse this environment variable in order to build the MongoDB connect string.
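As a minimal sketch, here is how an application might parse this variable into a connect string in Python. The replica set name `rs0` and the helper name are assumptions for illustration, not part of the blueprint; substitute whatever your cluster actually uses:

```python
import json
import os

def mongodb_connect_string(replica_set="rs0"):
    """Build a MongoDB connect string from Nirmata-injected cluster info.

    The replica set name "rs0" is an assumption for illustration; use the
    name configured for your MongoDB cluster.
    """
    nodes = json.loads(os.environ["NIRMATA_CLUSTER_INFO_mongodb"])
    members = []
    for node in sorted(nodes, key=lambda n: n["nodeId"]):
        # Pick the host port named SERVICE_PORT, matching the blueprint.
        port = next(p["hostPort"] for p in node["ports"]
                    if p["portName"] == "SERVICE_PORT")
        members.append(f'{node["ipAddress"]}:{port}')
    return f'mongodb://{",".join(members)}/?replicaSet={replica_set}'
```

With the 3-node example above, this yields a string listing all three members, such as mongodb://10.10.130.24:27017,10.10.130.114:27017,10.10.128.176:27017/?replicaSet=rs0.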
MongoDB Blueprint Explained
We are now going to take a look at the details of the MongoDB blueprint. You don't have to understand these details if you simply want to run a MongoDB cluster. However, understanding the blueprint is recommended if you want to scale a cluster up or down, run a cluster on a limited number of hosts, or run multiple clusters on the same set of hosts.
The first section defines the most basic parameters required to create a MongoDB container:
The field "Type" indicates the type of container to use to deploy the MongoDB node. The container type specifies the amount of memory reserved for this container. You can change the container type if you want to use more memory for your MongoDB nodes.
The Image Repository field specifies the Docker Image Repository to use in order to create the container. We have posted the MongoDB Image Repository on DockerHub. Keep in mind that this image is only intended to be deployed using the Nirmata solution; it won't work outside of Nirmata. We have also posted on GitHub all the files used to build this Image Repository: https://github.com/NirmataOSS/mongodb-2.6
The last parameter in this section of the blueprint is the Cluster flag. It indicates to the Nirmata orchestration that a special type of orchestration is required: the placement of all the nodes is computed up-front so that specific environment variables can be injected into dependent client services, restart and recovery of a node always happen on the same host, etc.
The next section of the blueprint is the networking section.
In this section we have specified the single port exposed by each MongoDB node. You should not change the name of this port, as it is used in the MongoDB startup script. However, you can change the host port if you wish to use a different value. You can also let Nirmata allocate the host port dynamically by setting its value to 0. This option allows you to run a multi-node cluster on a number of hosts smaller than the size of the cluster: letting Nirmata allocate the port values dynamically prevents port conflicts when more than one node runs on a single host. You can run an entire cluster of 3 nodes, 5 nodes, or more on a single host. You can even run multiple clusters on a single host.
The next section of the blueprint is the Volumes section:
This section specifies how the MongoDB data directory and the log directory are mounted on the host. The volume paths use Nirmata environment variables that are instantiated at runtime when the containers are created, such as NIRMATA_ENVIRONMENT_NAME, which is replaced by the name you gave to your environment.
What's Next?
We have seen that by "containerizing" a cluster like MongoDB and by using an advanced orchestration solution, we can transform what used to be long and complex operations into a fast and painless exercise. The current blueprint doesn't yet address some of the more advanced MongoDB features, such as sharding. This could be added in the future. We could also expose some of the configuration parameters directly in the blueprint.
In the next few weeks, we will publish similar blueprints for Kafka and Elasticsearch. Let us know if there are other cluster services that you would like us to add to the list or prioritize. You can contact us at customer-success@nirmata.com.
- Damien Toledo
Follow us: @NirmataCloud