Contributing a Specific Amount of Storage as a Slave to the Hadoop Cluster

Rohit Raut
Oct 26, 2020

--

In a Hadoop cluster, we sometimes want a data node (slave) to contribute only a limited amount of storage to the name node, and this can be achieved using a partition.

Here, I have already set up a Hadoop cluster with one name node and one data node on top of AWS.

In my previous article, I explained how you can set up a Hadoop cluster on AWS.

Let's get started

Step 1: First, we need a hard disk that will provide the storage.

Here we have to create an EBS volume and attach it to the data-node instance. EBS is a zonal service, so the instance and the volume must be in the same availability zone (AZ).

Volume and DataNode
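If you prefer the AWS CLI over the web console, a minimal sketch of these two actions looks like this (the availability zone, volume ID, instance ID, and device name below are placeholders for your own values):

# create a 5 GiB EBS volume in the same AZ as the data-node instance
aws ec2 create-volume --size 5 --volume-type gp2 --availability-zone ap-south-1a

# attach the new volume to the data-node instance as /dev/xvdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdf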

Step 2: Create a new folder on the data node, then create a partition on the attached storage and mount it there.

mkdir /dn2

Here I am creating a 5 GB partition and mounting it to /dn2.
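A rough sketch of the commands, assuming the new volume shows up on the data node as /dev/xvdf (confirm the device name with lsblk first):

lsblk                   # find the device name of the newly attached volume
fdisk /dev/xvdf         # interactively create a 5 GB primary partition (n, p, +5G, w)
mkfs.ext4 /dev/xvdf1    # format the new partition
mount /dev/xvdf1 /dn2   # mount it on the folder created above
df -h /dn2              # verify the mount point and its size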

Step 3: Stop the Hadoop DataNode service.

hadoop-daemon.sh stop datanode

Step 4: Update the HDFS configuration file.

Change the data-node directory from /dn to the newly created folder /dn2.
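On a typical setup this is the data directory property in hdfs-site.xml (dfs.data.dir on Hadoop 1.x, dfs.datanode.data.dir on Hadoop 2.x/3.x); after the change, the entry would look roughly like this:

<!-- hdfs-site.xml on the data node: point HDFS at the new 5 GB mount -->
<property>
    <name>dfs.data.dir</name>
    <value>/dn2</value>
</property>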

Step 5: Start the DataNode service.

hadoop-daemon.sh start datanode
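To confirm the DataNode process actually came back up, jps on the data node should list it:

jps    # the output should include a DataNode entry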

Step 6: Check the report of the Hadoop cluster.

hadoop dfsadmin -report

You can see that my Hadoop cluster is now only able to use 5 GB from my slave node.

Thank you for reading :)
