Configuring HDFS High Availability Using Cloudera Manager

One of the main advantages of Hadoop 2 is its high-availability capability, achieved by adding a standby NameNode and a standby ResourceManager.

A default Cloudera Hadoop installation is not highly available; HA has to be configured after the installation. I used CDH5 with Cloudera Manager, which automates most of the hard work and makes configuring HA very easy. In the next post we will discuss YARN high availability.

We start with HDFS. Go to the clusters menu and select the HDFS service.

At this point, the server that will host the standby NameNode must already be provisioned and added to the cluster.



Click the “Enable High Availability” button above the list.

When using HA, clients communicate with a new entity called a nameservice instead of communicating directly with a physical NameNode host. The nameservice always resolves to the active NameNode. In the next screen we pick a name for the nameservice (the default is nameservice1):
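Behind the scenes, the nameservice ends up as a set of properties in hdfs-site.xml, which Cloudera Manager generates for you. A minimal sketch of what that looks like (the NameNode IDs and hostnames here are illustrative; yours will differ):

```xml
<!-- Logical nameservice that clients use instead of a physical host -->
<property>
  <name>dfs.nameservices</name>
  <value>nameservice1</value>
</property>
<!-- The two NameNodes backing the nameservice -->
<property>
  <name>dfs.ha.namenodes.nameservice1</name>
  <value>namenode1,namenode2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode1</name>
  <value>cloudera5:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode2</name>
  <value>cloudera1:8020</value>
</property>
<!-- Client-side class that resolves the nameservice to the active NameNode -->
<property>
  <name>dfs.client.failover.proxy.provider.nameservice1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```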


Now we choose which machines will host the two NameNodes and which will host the JournalNodes (choose at least three; a majority of them must be running for the NameNode to stay up):



In the “Review Changes” screen you will have to specify the local directory in which every JournalNode will keep its copy of the edit log.
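These two choices map onto two hdfs-site.xml properties. A hedged sketch, assuming three JournalNode hosts and /data/jn as the edits directory (both illustrative):

```xml
<!-- Quorum of JournalNodes the active NameNode writes its edit log to -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://cloudera1:8485;cloudera3:8485;cloudera5:8485/nameservice1</value>
</property>
<!-- Local directory where each JournalNode stores its copy of the edits -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/jn</value>
</property>
```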

There are also some extra options at the bottom:



Cloudera Manager will now configure the cluster with the new services and settings, and restart HDFS. You can watch each step and the overall progress:



At the end, you may see a message about additional actions that must be performed manually to complete the HA configuration. In my case these were configuring HttpFS and Hue for high availability, and updating the Hive Metastore to work with the nameservice instead of the NameNode host. If you run Impala or HBase on your cluster they will have to be reconfigured manually as well, but I did not have them running in my cluster. We will look at how to do that next:


Do not worry too much about fencing; Cloudera Manager automatically configures the NameNodes with the shell fencing method.
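For reference, fencing is controlled by the dfs.ha.fencing.methods property. With quorum-based storage the JournalNodes already prevent a split-brain from corrupting the edit log, so a trivial shell fencer is commonly used; a sketch of what that configuration looks like (the exact value Cloudera Manager writes may differ):

```xml
<property>
  <name>dfs.ha.fencing.methods</name>
  <!-- shell(true) always reports success; this is acceptable here because
       the JournalNode quorum only accepts writes from one NameNode -->
  <value>shell(true)</value>
</property>
```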

Now we will configure HttpFS and Hue:

  1. Go to the HDFS service.
  2. Click the Instances tab.
  3. Click Add Role Instances.
  4. Click the text box below the HttpFS role. The Select Hosts dialog displays.
  5. Select the host on which to run the role and click OK.
  6. Click Continue.
  7. Back in the Instances tab, check the box next to the HttpFS role and select Actions for Selected > Start.
  8. Now go to the Hue service and click Configuration (it is probably shown in red; that is OK for now).
  9. Locate the HDFS Web Interface Role property, or search for it by typing its name in the Search box.
  10. Select the HttpFS role you just created instead of the NameNode role, and save your changes.
  11. Restart the Hue service.
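Step 10 effectively changes the WebHDFS URL that Hue talks to. If you were configuring this by hand in hue.ini rather than through Cloudera Manager, the equivalent setting would look roughly like this (the HttpFS hostname is illustrative; 14000 is the default HttpFS port):

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Point Hue at HttpFS instead of a single NameNode's WebHDFS,
      # so file browsing keeps working across failovers
      webhdfs_url=http://httpfs-host.example.com:14000/webhdfs/v1
```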



Configuring the Hive Metastore for High Availability:

This updates the Hive Metastore to point to the nameservice instead of a physical server name, so it always resolves to the active NameNode.

  1. Stop the Hue, Impala, and Hive services, in this order.
  2. Back up the Hive metastore database.
  3. Go to the Hive service.
  4. Select Actions > Update Hive Metastore NameNodes and confirm the command.
  5. Start the Hive, Impala, and Hue services, in this order.
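The Update Hive Metastore NameNodes action rewrites the HDFS URIs stored in the metastore database. Outside Cloudera Manager, the Hive metatool can do the same rewrite from the command line; a hedged sketch (the hostname and nameservice name are from my cluster and illustrative):

```shell
# Show the filesystem root currently recorded in the metastore
hive --service metatool -listFSRoot

# Rewrite metastore URIs from the physical NameNode to the nameservice.
# Syntax is: -updateLocation <new-location> <old-location>
hive --service metatool -updateLocation hdfs://nameservice1 hdfs://cloudera5:8020
```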

That’s it, HDFS is now highly available.

Now let’s test it:


If we go to the HDFS service, we can see that cloudera5 is the active NameNode. From the Actions menu we select Manual Failover. After a short while, the two NameNodes swap roles.
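The same check and failover can be done from the command line with hdfs haadmin. A sketch, assuming the NameNode IDs are namenode1 (on cloudera5) and namenode2 (on cloudera1); the IDs Cloudera Manager assigns on your cluster may differ:

```shell
# Ask each NameNode for its current HA state (prints "active" or "standby")
hdfs haadmin -getServiceState namenode1
hdfs haadmin -getServiceState namenode2

# Gracefully fail over from namenode1 to namenode2
# (the equivalent of the Manual Failover action in Cloudera Manager)
hdfs haadmin -failover namenode1 namenode2
```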

Another, more aggressive way to test this is simply to pull the plug on the active NameNode. If you do that to cloudera5, you can see that cloudera1 becomes the active NameNode and cloudera5 is shown as unavailable:


I tried running HDFS commands during the failover. Right after I killed cloudera5 I got exceptions, but a few seconds later, when cloudera1 took over, I was able to run HDFS commands again:
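This retry-until-recovery behavior is easy to reproduce with a small loop; a sketch that keeps issuing a listing until the new active NameNode starts answering (the path and sleep interval are arbitrary choices):

```shell
# Keep retrying an HDFS listing until it succeeds,
# printing a note on each failure during the failover window
until hdfs dfs -ls / ; do
  echo "NameNode not reachable yet, retrying in 2s..."
  sleep 2
done
echo "HDFS is back."
```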



So we now have a working, highly available HDFS. Next time we will cover configuring highly available YARN.
