ZooKeeper is a distributed, open-source coordination service for distributed applications.

It was started by developers at Yahoo! to overcome coordination problems in their distributed applications, and was later taken over and developed by the Apache Software Foundation. You can read about the making of ZooKeeper at


To supervise the ZooKeeper instances, take periodic backups, check node status, and automatically restart ZooKeeper when an instance fails, we use a project called Exhibitor that was open sourced by Netflix.

Features of Exhibitor

Instance Monitoring

Each Exhibitor instance monitors the ZooKeeper server running on the same machine. If ZooKeeper is not running, Exhibitor will write the zoo.cfg file (see the shared configuration sections below) and start it. If ZooKeeper crashes for some reason, Exhibitor will restart it.


Backup/Restore

Backups in a ZooKeeper ensemble are more complicated than for a traditional data store (e.g. an RDBMS). Most of the data in ZooKeeper is ephemeral, so it would be harmful to blindly restore an entire ZooKeeper data set. What is needed is selective restoration, to prevent accidental damage to a subset of the data set. Exhibitor enables this.

Exhibitor will periodically backup the ZooKeeper transaction files. Once backed up, you can index any of these transaction files. Once indexed, you can search for individual transactions and “replay” them to restore a given ZNode to ZooKeeper.

Log Cleanup

ZooKeeper servers do not, by default, purge their old transaction logs and snapshots; Exhibitor does this maintenance automatically.



Prerequisites

  • Exhibitor is a Java-based application; it requires Java 1.6 or above.
  • jps – CLI tool to check whether the Java processes/instances are running (see the quick check after this list).
  • maven or gradle – to build Exhibitor
    • maven builds the exhibitor.jar file using pom.xml
    • gradle builds the exhibitor.jar file using build.gradle
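For example, a quick check with jps on a node should list the ZooKeeper server process and, once it is started, the Exhibitor jar. The output below is only illustrative; process IDs and jar names will differ on your machines.

    jps -l
    # typical output (illustrative):
    # 4120 org.apache.zookeeper.server.quorum.QuorumPeerMain
    # 4388 exhibitor-1.5.6.jar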

Here I already have a ZooKeeper cluster installed with five nodes. They are as follows.

Building the Exhibitor jar

This is required only once, on any one of the nodes, as the same jar can be used on the other nodes.
There are two methods to build the jar file:

  • Maven
  • Gradle

Using Maven
Install the Maven package.
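For example, on a Debian/Ubuntu based node (the package manager and package name vary by distribution):

    # Debian/Ubuntu
    sudo apt-get install maven

    # verify the installation
    mvn -version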

Create a new directory and download the pom.xml file into it (this file lists the dependencies and repositories needed to build the jar).
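Something along these lines; the exact pom.xml download link is published on the Exhibitor wiki and is not reproduced here:

    mkdir exhibitor-maven && cd exhibitor-maven
    # replace the placeholder with the pom.xml link from the Exhibitor wiki
    wget -O pom.xml <pom.xml-URL-from-the-Exhibitor-wiki>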

Run the following command to download the necessary code and build it using pom.xml.
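The original command is not reproduced here; a typical Maven invocation for this kind of build would be the following (the exact goal is given in the Exhibitor wiki):

    # downloads the dependencies declared in pom.xml and builds the jar
    mvn clean package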

If you receive any warnings, you can safely ignore them.

Now the build is done and you will be able to see the following files.

Using Gradle

Install the Gradle package.
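For example, on a Debian/Ubuntu based node (the package name may differ by distribution):

    # Debian/Ubuntu
    sudo apt-get install gradle

    # verify the installation
    gradle --version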

Create a new directory and download the build.gradle file into it.
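Something along these lines; the build.gradle download link is published on the Exhibitor wiki and is not reproduced here:

    mkdir exhibitor-gradle && cd exhibitor-gradle
    # replace the placeholder with the build.gradle link from the Exhibitor wiki
    wget -O build.gradle <build.gradle-URL-from-the-Exhibitor-wiki>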

Run the following command to fetch the required code using build.gradle.
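The exact task is not reproduced here; with the standard Gradle Java plugin (assumed here), a plain build resolves and downloads the dependencies declared in build.gradle:

    # resolves/downloads dependencies and compiles the sources
    gradle build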

A common problem experienced while building, and its solution

The solution is to edit the build.gradle file and add the code shown below.

Once the build is successful, build the jar
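Assuming the standard Gradle Java plugin is applied in build.gradle, the jar task packages the build output (task name assumed):

    # packages the compiled classes into a jar under build/libs
    gradle jar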

Gradle creates a directory “build/libs” as shown below

Rename the gradle-1.5.1.jar to Exhibitor-1.5.1.jar.
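For example, using the file names mentioned above:

    # the jar is named after the project directory; rename it to something clearer
    mv build/libs/gradle-1.5.1.jar build/libs/Exhibitor-1.5.1.jar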

Monitoring with Exhibitor

Accessing Exhibitor

Open any of the ZooKeeper ensemble node IPs in a web browser, using a URL of the form http://zk1:8080/exhibitor/v1/ui/index.html (Exhibitor listens on port 8080 by default).

Integrate Exhibitor with single node ZooKeeper

Make sure your ZooKeeper node is up and running.

In the Exhibitor web console, switch on Editing and add the ZooKeeper install dir /usr/local/* (the parent directory of the ZooKeeper installation). If you put * at the end of the value, Exhibitor will search for the latest version of ZooKeeper in that directory. It does this by choosing the directory with the highest version number in its name, i.e. 'zookeeper-3.4.3' will be chosen over 'zookeeper-3.3.5'.


ZooKeeper snapshot dir: this is the data directory where ZooKeeper keeps snapshots of its data tree along with the transaction log files. All writes are recorded in transaction log files, which are preallocated at 64 MB by default.

The following options are the paths where we need to save our backups.


Commit the changes; this will cause Exhibitor to stop and start the ZooKeeper instance. If successful, you should see the following.

Single Node Exhibitor

Integrate Exhibitor with a ZooKeeper cluster

  • To monitor this cluster, we have to copy the Exhibitor-1.5.6.jar file built previously to all the nodes, and the jar has to be run on all of them.
  • First, run the basic Exhibitor command below with its default options.
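A minimal sketch of that basic command, assuming the jar built earlier is in the current directory; the -c (--configtype) flag tells Exhibitor where to keep its configuration, and file is the simplest choice:

    # start Exhibitor with a local file-based configuration (defaults for everything else)
    java -jar exhibitor-1.5.6.jar -c file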

This command only lets us monitor the node where it was run. It will not update the configuration even if you add all the host details under the Ensemble section of the Config tab in the Exhibitor GUI.

We want to be able to monitor all the nodes in the cluster. To do this, all the Exhibitor nodes have to share configuration. This sharing can be done in three ways: a shared file system, Amazon S3, or a ZooKeeper cluster itself (which may be different from the ensemble we are monitoring).

The -c zookeeper option in the command enables shared configuration. This option keeps the configuration entries in sync across all hosts in the cluster.

Keeping the Configuration in a Shared Filesystem

To learn about the syntax of the command, refer to the Exhibitor help output:
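The full list of options can be printed from the jar itself:

    # print Exhibitor's command-line help
    java -jar exhibitor-1.5.6.jar --help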

I ran my ZooKeeper instances in VirtualBox VMs. To create a shared folder, I used the VirtualBox shared folder feature across all guests with the Automount option, and then mounted it at /mnt/sf_ex-shared on all the nodes in the ZooKeeper cluster.
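If the folder is not auto-mounted where you want it, it can also be mounted manually; a sketch, assuming the VirtualBox shared folder is named ex-shared (the name here is an assumption based on the mount point):

    # mount the VirtualBox shared folder on each guest
    sudo mkdir -p /mnt/sf_ex-shared
    sudo mount -t vboxsf ex-shared /mnt/sf_ex-shared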

Now run the following command on all of the cluster nodes.
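A sketch of that command, using the shared mount point above; -c file selects file-based configuration and --fsconfigdir points Exhibitor at the shared directory (flag names taken from Exhibitor's standalone options):

    # run on every node, pointing all of them at the same shared config directory
    java -jar exhibitor-1.5.6.jar -c file --fsconfigdir /mnt/sf_ex-shared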

Once you run this, you won't see anything in the Exhibitor GUI Config section at first, and you won't yet be able to monitor the cluster. To monitor the cluster, edit the Config section in Exhibitor (on any one of the nodes), add the required entries, and commit.


It will create a shared configuration file in the shared location. Please remember that, by default, Exhibitor does not create this file until a configuration change is committed. This file looks like

Your Exhibitor GUI should now look like this –


Keeping the Configuration in ZooKeeper

We can also use a separate ZooKeeper ensemble to achieve the same thing as the shared file system. The command for that is –
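A sketch of that command, as it might look when run on zk1; the host names are placeholders for the ensemble that stores the shared configuration, --zkconfigconnect and --filesystembackup are the relevant standalone options, and the exact argument forms can be confirmed with --help:

    # run on every node; the shared configuration is kept in ZooKeeper itself
    java -jar exhibitor-1.5.6.jar -c zookeeper \
        --zkconfigconnect zk2:2181,zk3:2181,zk4:2181,zk5:2181 \
        --filesystembackup true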


Repeat the above command on all the nodes in the cluster. If you are using the same ensemble for storing the configuration, then exclude the current host's entry and add all the other hosts with their ports in zkconfigconnect. If it is a different ensemble, include all of its hosts. This initiates the connection between all the nodes in the cluster; once that is done, wait a short while for all the nodes to become available in the Exhibitor console.

The --filesystembackup option enables the Backup and Restore feature in the Exhibitor GUI.

Key Points

  • Paths should not point to /tmp.
  • Make sure Exhibitor is started on all the nodes.
  • ZooKeeper must be in a running state before Exhibitor starts.
  • Each Exhibitor instance monitors the ZooKeeper server running on the same machine. If ZooKeeper is not running, Exhibitor will write the zoo.cfg file and start it. If ZooKeeper crashes for some reason, Exhibitor will restart it.