This topic describes a basic procedure for setting up Storage Server with HDFS.
- Kerberos must be installed on the client and host machines. If it is not installed, install it on UNIX as follows (see https://help.ubuntu.com/10.04/serverguide/kerberos.html for more information).
- Open a terminal and type the following:
sudo apt-get install krb5-kdc krb5-admin-server
- Set the realm as WSO2.ORG.
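The realm is usually set during installation, or afterwards in /etc/krb5.conf. A minimal sketch of the relevant entries, assuming the WSO2.ORG realm from above and a KDC running on the same machine (the localhost values are placeholder assumptions):

```
[libdefaults]
    default_realm = WSO2.ORG

[realms]
    WSO2.ORG = {
        kdc = localhost
        admin_server = localhost
    }
```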
Starting the Storage Server node
- Follow the steps below to create a keytab with the following service principals.
The default carbon.keytab file that comes with the Storage Server pack already includes these principals. However, if a new datanode needs to be added, its service principal (which must be unique) is added to this keytab and a new carbon.keytab file must be created. By default, SS is configured to start one namenode and one datanode.
- admin/carbon.super - password: admin
- datanode/carbon.super - password: node0
- If you are starting a data node, add the data node principal as well.
- Once the principals are added, create the keytabs as follows.
Cache the principal key using the following command.
ktutil: addent -password -p <your principal> -k 1 -e <encryption algo>
The following is a sample for this:
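The original sample is not reproduced here. Assuming the admin principal and password listed above, the WSO2.ORG realm, and an AES-256 encryption type (the encryption type is an assumption), an addent invocation in a ktutil session might look like this:

```
ktutil: addent -password -p admin/carbon.super@WSO2.ORG -k 1 -e aes256-cts-hmac-sha1-96
Password for admin/carbon.super@WSO2.ORG:
```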
Write a keytab for the service principal using the following command:
ktutil: write_kt <keytab file name>
The following is a sample for this:
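The original sample is not reproduced here; assuming the keytab is to be written out under the default name used by Storage Server, the command might look like this:

```
ktutil: write_kt carbon.keytab
```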
Copy the created keytab file to [SS_HOME]/repository/conf/etc/hadoop/keytabs/ and rename it to carbon.keytab.
- Start the server with HDFS enabled.
When the namenode starts, go to the Carbon console on the namenode and create a service principal, with the relevant password, for each datanode. At minimum, datanode/carbon.super should be added, plus a principal for any other datanode you intend to start.
If your namenode is up and HDFS is set up properly, you will notice the following lines in your console.
To start a datanode, open another terminal and run the following command.
$ HADOOP_SECURE_DN_USER=<username> sudo -E bin/hadoop datanode
Starting multiple datanodes pointing to one namenode
Change the following property values in the hdfs-site.xml file to point to the namenode:
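The original property listing is not reproduced here. On a Hadoop 2.x-style configuration, the namenode addresses are typically declared in hdfs-site.xml with entries such as the following (the hostname and ports are placeholders, and the exact property names depend on your Hadoop version):

```xml
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>namenode.example.com:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address</name>
  <value>namenode.example.com:50070</value>
</property>
```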
Change the following properties in the core-site.xml file to point to the namenode:
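The original core-site.xml listing is not reproduced here. The filesystem URI that points clients and datanodes at the namenode is conventionally set as follows (the hostname and port are placeholders; on older Hadoop versions this property is named fs.default.name instead of fs.defaultFS):

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:9000</value>
</property>
```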
Change the following in the hdfs-site.xml file to start the datanode.
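The original listing is not reproduced here. The datanode's own listen addresses are set in hdfs-site.xml; when running several datanodes on one machine, each instance needs distinct ports. A sketch with Hadoop's conventional default ports (the values are placeholders):

```xml
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50010</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50075</value>
</property>
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:50020</value>
</property>
```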
- Add the datanode IP and port to the slaves file, one per line.
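Following the instruction above, a slaves file for two datanodes might look like this (the IPs and ports are placeholder assumptions; plain Hadoop setups often list only hostnames here):

```
192.168.1.10:50010
192.168.1.11:50010
```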
- When starting multiple datanodes on the same machine, make sure you change the PID_DIR and the IDENT_STRING for the datanode in the hadoop-env.sh file.
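A minimal sketch of such an override in hadoop-env.sh, assuming a second datanode instance identified as datanode2 (the identity string and path are placeholders):

```shell
# Give each datanode instance on this machine its own identity and PID
# directory so that their pid files do not collide (placeholder values).
export HADOOP_IDENT_STRING=datanode2
export HADOOP_PID_DIR=/var/run/hadoop-${HADOOP_IDENT_STRING}
```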
If you are starting a secure datanode, add the following line:
Alternatively add the following:
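The exact lines are not reproduced above. On standard Hadoop secure-datanode setups, the hadoop-env.sh variables involved are typically the following; the user name and path are placeholders, and this is an assumption about what the original listed:

```shell
# User the secure datanode runs as after binding its privileged ports
# (placeholder user name).
export HADOOP_SECURE_DN_USER=hdfs
# Keep the secure datanode's pid file separate from a non-secure one
# (placeholder path).
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop-secure-datanode
```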