
Abid Zaidi

Download and set up Hadoop and Spark on Ubuntu


Published on December 29, 2021

Prerequisites

# Refresh the package index and install Java 8 (Hadoop runs on the JVM)
sudo apt update
sudo apt install openjdk-8-jdk -y
java -version; javac -version

# Install the OpenSSH server and client, then create a dedicated Hadoop user
sudo apt install openssh-server openssh-client -y
sudo adduser hdoop

Log in as a user with sudo privileges and make hdoop a sudoer, either interactively with visudo or with a drop-in file under /etc/sudoers.d:

sudo visudo
# add this line:
hdoop  ALL=(ALL) NOPASSWD:ALL

# or, equivalently:
echo "hdoop  ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/hdoop

Set up passwordless SSH for hdoop

su - hdoop

# Generate a passwordless RSA key pair, authorize it locally,
# and lock down the permissions on authorized_keys
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

# Verify that passwordless login works
ssh localhost

Install and Set Up Hadoop

wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
tar xzf hadoop-3.3.1.tar.gz
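
Optionally, verify the download against the .sha512 file Apache publishes next to the tarball (recent GNU coreutils accept its tagged checksum format):

wget https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz.sha512
sha512sum -c hadoop-3.3.1.tar.gz.sha512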

Add the Hadoop environment variables to the end of hdoop's ~/.bashrc (no sudo needed; the file belongs to hdoop):

vim ~/.bashrc

export HADOOP_HOME=/home/hdoop/hadoop-3.3.1
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"

source ~/.bashrc
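
A quick check that the environment is wired up; hadoop should now resolve from the PATH:

echo $HADOOP_HOME
hadoop version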

Configure Hadoop files

vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

vim $HADOOP_HOME/etc/hadoop/core-site.xml

<configuration>
	<property>
	  <name>hadoop.tmp.dir</name>
	  <value>/home/hdoop/tmpdata</value>
	</property>
	<property>
	  <name>fs.defaultFS</name>
	  <value>hdfs://127.0.0.1:9000</value>
	</property>
</configuration>
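
Create the directory that hadoop.tmp.dir points to before formatting HDFS (the config above references it but never creates it):

mkdir -p /home/hdoop/tmpdata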

vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml

<configuration>
	<property>
	  <name>dfs.namenode.name.dir</name>
	  <value>/home/hdoop/dfsdata/namenode</value>
	</property>
	<property>
	  <name>dfs.datanode.data.dir</name>
	  <value>/home/hdoop/dfsdata/datanode</value>
	</property>
	<property>
	  <name>dfs.replication</name>
	  <value>1</value>
	</property>
</configuration>

mkdir -p /home/hdoop/dfsdata/namenode /home/hdoop/dfsdata/datanode


vim $HADOOP_HOME/etc/hadoop/mapred-site.xml

<configuration> 
	<property> 
	  <name>mapreduce.framework.name</name> 
	  <value>yarn</value> 
	</property> 
</configuration>

vim $HADOOP_HOME/etc/hadoop/yarn-site.xml

<configuration>
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
   <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
   <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>127.0.0.1</value>
   </property>
   <property>
      <name>yarn.acl.enable</name>
      <value>0</value>
   </property>
   <property>
      <name>yarn.nodemanager.env-whitelist</name>
      <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
   </property>
</configuration>
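
Before moving on, it is worth validating the XML you just edited; Hadoop 3 ships a conftest subcommand that checks the configuration files and catches malformed XML before the daemons start:

hadoop conftest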

Hadoop running commands

# Format the NameNode (first run only)
hdfs namenode -format

cd $HADOOP_HOME/sbin

# Start the HDFS NameNode and DataNode daemons
./start-dfs.sh

# Start YARN once the HDFS daemons are up
./start-yarn.sh

# List the running HDFS and YARN daemons
jps
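
If everything started cleanly, jps should list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager (plus Jps itself). The Hadoop 3 web UIs are also worth a look:

# NameNode web UI
http://localhost:9870
# YARN ResourceManager web UI
http://localhost:8088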

Download and Install Apache Spark

wget https://dlcdn.apache.org/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
tar xvzf spark-3.1.2-bin-hadoop3.2.tgz

Add these lines to ~/.bashrc as well:

export SPARK_HOME=/home/hdoop/spark-3.1.2-bin-hadoop3.2
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

source ~/.bashrc
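
To confirm the PATH change, print the Spark version:

spark-shell --version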

Start Spark services

cd $SPARK_HOME/sbin
./start-master.sh

# To start a worker (replace "ubuntu" with this machine's hostname)
./start-slave.sh spark://ubuntu:7077
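
The master's web UI (port 8080 by default) should now show the registered worker, and a shell can attach to the cluster; the spark://ubuntu:7077 URL assumes the hostname used above:

# Spark master web UI
http://localhost:8080

# Attach an interactive PySpark shell to the standalone master
pyspark --master spark://ubuntu:7077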

Useful commands

hdfs dfsadmin -report

hdfs getconf -namenodes
hdfs getconf -secondaryNameNodes
hdfs getconf -confKey fs.defaultFS
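
A few day-to-day HDFS commands for moving data in and out (localfile.txt is just a placeholder name):

# Create a home directory in HDFS
hdfs dfs -mkdir -p /user/hdoop
# Copy a local file in, then list and read it back
hdfs dfs -put localfile.txt /user/hdoop/
hdfs dfs -ls /user/hdoop
hdfs dfs -cat /user/hdoop/localfile.txt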

Sample Spark Program

This example reads the 2015 flight-summary CSV from the Spark: The Definitive Guide repository.

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .config("spark.executor.memory", "500mb") \
    .config("spark.sql.shuffle.partitions", "5") \
    .appName("Retail") \
    .getOrCreate()

flightData2015 = spark.read \
    .option("inferSchema", "true") \
    .option("header", "true") \
    .csv("/home/hdoop/Spark-The-Definitive-Guide/data/flight-data/csv/2015-summary.csv")

flightData2015.take(3)
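
Saved as a script, the same program can be submitted to the standalone cluster (flights.py is a hypothetical filename):

spark-submit --master spark://ubuntu:7077 flights.py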
