January 6, 2018

Configuring MongoDb High Availability Replication Cluster in RHEL7

by 4hathacker  |  in Redhat Enterprise Linux at  6:58 PM

Hi everyone...

This is one off-the-beat post for MyAnsibleQuest.

MongoDB is an open-source, document-based database that provides high performance, high availability, and automatic scaling. We all know the power of NoSQL, where a strict schema is given less importance than a high rate of transactions and reliability, and MongoDB is a prominent example. Its document data model is based on key-value pairs, which makes it easy to understand and elegant to work with. Replication in MongoDB provides high availability and data redundancy, which simply means that we do not have to rely on a single mongo instance/server in a large cluster for reads and writes.

In this post, we will go through the steps for installing MongoDB 3.4 on RHEL7, and then configure a MongoDB HA cluster with four mongo instances. Let's start by installing MongoDB 3.4 on RHEL7.

Step 1 - Add the mongo repository: Checking the RHEL7/CentOS repositories, I found only mongo version 2.6, but this post covers mongo 3.4. To install MongoDB 3.4, create a repo file on your machine as described below.

vi  /etc/yum.repos.d/mongo.repo

In this repo file, add the following lines (the standard MongoDB 3.4 repository definition):

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.com/static/pgp/server-3.4.asc

Step 2 - Install mongodb: To install MongoDB, run the usual yum install command:

yum -y install mongodb-org

This will install the following packages:
mongodb-org-server – the server daemon (mongod) with init scripts and configuration.
mongodb-org-mongos – the MongoDB shard daemon (mongos).
mongodb-org-shell – the MongoDB shell (mongo), a command-line interface.
mongodb-org-tools – MongoDB tools for import, export, restore, dump, and other functions.
Step 3 - Check whether the installation is fine by running the version command for mongod (the server daemon) as well as for mongo (the client).
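The screenshots of this check are not reproduced here; the commands are simply:

```shell
# both should report version 3.4.x if the install succeeded
mongod --version
mongo --version
```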

Along with that, create admin and siteRootAdmin users with appropriate passwords, and create a mongo_ha database with a mongo_ha_admin user.
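The user-creation screenshots are not shown here, so below is a sketch of the equivalent mongo shell commands. The role choices (userAdminAnyDatabase, root, dbOwner) and the CHANGE_ME passwords are my assumptions; adjust them to your own policy.

```shell
# run against the local mongod; roles and passwords here are illustrative only
mongo admin <<'EOF'
db.createUser({ user: "admin",         pwd: "CHANGE_ME", roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] })
db.createUser({ user: "siteRootAdmin", pwd: "CHANGE_ME", roles: [ { role: "root", db: "admin" } ] })
EOF

# owner of the application database used in this post
mongo mongo_ha <<'EOF'
db.createUser({ user: "mongo_ha_admin", pwd: "CHANGE_ME", roles: [ { role: "dbOwner", db: "mongo_ha" } ] })
EOF
```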

Since we are defining a mongo HA cluster, the steps above need to be done on all the mongo instances/servers. But before that, it is time to learn about replication in MongoDB. I have a mongo cluster diagram which makes it easy to understand.
MongoDB replication is based on a replicaSet configured in the cluster. The replicaSet consists of several members, each with a definite role to play in the HA cluster.
According to the structure, there are four instances/servers of MongoDB, viz.,
1. Primary Instance (node218): The primary instance is the base/default access point for transactions. It is the only member that can accept write operations. The primary's operations log (oplog) is then replayed onto the secondaries' datasets.
2. Secondary Instances (node227 and node228): A cluster can have multiple secondary instances. They reproduce the changes from the primary's oplog. A secondary instance becomes primary if the primary crashes or appears unavailable; this decision is based on a failure of communication between primary and secondaries for more than 10-30 seconds.
3. Arbiter (node229): The arbiter is only needed when a failover occurs and a new primary must be elected. With an even number of data-bearing members, it plays the deciding role in the election of the new primary. No dedicated hardware is required for the arbiter: although it is part of the replicaSet, no data is ever replicated to it.
4. The blue arrows in the structure represent the replicaSet instances involved in data replication.
5. The black arrows represent the continuous communication (heartbeat) taking place between all the members while the cluster is running.
After understanding this, let's move on, install MongoDB on all four instances/servers following steps 1 to 3, and come back to our primary server.

Step 4 - Create a keyFile for authentication among the mongo instances present in the cluster. This is easily done with OpenSSL.

I created a long key by base64-encoding random bytes, gave it the necessary permissions, and then securely copied it to all members of the cluster.
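The exact commands are not reproduced in the images; a minimal sketch of the usual approach (openssl rand piped into a keyfile, locked down, then copied out) is:

```shell
# generate a base64-encoded random key; 756 bytes of entropy is the usual size for MongoDB keyfiles
openssl rand -base64 756 > mongo-key

# only the owner may read it; mongod refuses keyfiles with loose permissions
chmod 400 mongo-key

# then move it to the path used in this post and copy it to every member, e.g.:
#   mv mongo-key /etc/mongo-key
#   scp /etc/mongo-key node227:/etc/mongo-key   # repeat for node228 and node229
```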

Step 5 - Create a directory for the dbPath on every mongo instance.

mkdir /etc/mongodata

Step 6 - Now edit the configuration file at /etc/mongod.conf to enable replication, set up authentication, and provide the dbPath. (We will still pass the same information on the command line when starting mongod.)

vim /etc/mongod.conf

And the file will look like this:

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
  quiet: true

# Where and how to store data.
storage:
  dbPath: /etc/mongodata
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/  # location of pidfile

# network interfaces
net:
  port: 27017
  bindIp: [,,,]  # Listen to local interface only, comment to listen on all interfaces.

security:
  keyFile: /etc/mongo-key

replication:
  replSetName: mongo-HA

## Enterprise-Only Options

Step 7 - Start mongod on each mongo instance of the cluster with the command:

mongod --dbpath /etc/mongodata --port 27017 --replSet mongo-HA

Step 8 - Start mongo on the primary instance (node218) and authenticate as siteRootAdmin with its password. Use the command rs.initiate() to include the first member of the HA cluster. If you run rs.conf(), you can see in members that this host is given an "_id" of zero.
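The shell session from the screenshot can be sketched as follows; the CHANGE_ME password is a placeholder for whatever you set for siteRootAdmin:

```shell
# run on the primary (node218)
mongo admin <<'EOF'
db.auth("siteRootAdmin", "CHANGE_ME")  // authenticate first, since keyFile auth is enabled
rs.initiate()                          // this host becomes the member with "_id": 0
rs.conf()                              // shows the current replicaSet configuration
EOF
```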

Step 9 - Add the other members to the cluster and check the status.
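The commands behind the screenshots look roughly like this; hostnames are the ones used in this post, and the CHANGE_ME password is a placeholder:

```shell
# still on the primary (node218)
mongo admin <<'EOF'
db.auth("siteRootAdmin", "CHANGE_ME")
rs.add("node227:27017")      // secondary
rs.add("node228:27017")      // secondary
rs.addArb("node229:27017")   // arbiter: votes in elections, holds no data
rs.status()                  // every member should report a healthy state
EOF
```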


To confirm that replication is working fine, create an s3 database on the primary instance; you will see it appear on the secondary instances automatically.
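A quick way to sketch this check from the shell (the database name s3 comes from this post; rs.slaveOk() permits reads on a secondary, and with keyFile auth enabled you may need to authenticate first):

```shell
# on the primary (node218): insert a document into a new s3 database
mongo s3 --eval 'db.test.insert({ok: 1})'

# on a secondary (e.g. node227): allow secondary reads, then look for the data
mongo --host node227 s3 --eval 'rs.slaveOk(); db.test.find().toArray()'
```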

Important Notes:
1. In my case, I have DNS configured properly. If you do not have DNS, your member servers cannot resolve one another; add proper entries for all the member servers to the /etc/hosts file of each member.
2. If a replicaSet command errors out, check whether mongod was started with the --replSet option, and whether the replication entries are present in the configuration file.
3. If you are getting warning messages like "/sys/kernel/mm/transparent_hugepage/defrag is 'always'", see the MongoDB documentation on disabling Transparent Huge Pages.

Finally, I would like to say that this setup can easily be automated with Ansible or other configuration management tools. I will definitely cover that with Ansible in one of the upcoming blog posts.

