Installing Apache Spark

2017/12/08

Spark Overview

Spark is a general computation engine that uses distributed memory to perform fault-tolerant computations across a cluster. Even though Spark is still under heavy development, it is one of the hottest open-source technologies at the moment and has begun to surpass Hadoop’s MapReduce model. This is partly because Spark’s Resilient Distributed Dataset (RDD) model can do everything the MapReduce paradigm can, and partly because Spark’s in-memory computation greatly accelerates the speed of computations.

In addition, Spark can perform iterative computations at scale, which opens up the possibility of executing machine learning algorithms (with Spark MLlib) much faster than with Hadoop alone.

Modes of Deployment in Spark

Single-Node Cluster for Development

Before going into the installation of Spark, we need to install its dependencies. Here we assume that you are installing Spark on an Ubuntu server. The prerequisites for installing Spark are listed below.

Downloading and Installing the JVM and Scala

Enter the following commands in the bash terminal to install Scala and the Java Virtual Machine:

sudo apt update && sudo apt upgrade
sudo apt install openjdk-8-jdk
sudo apt install scala
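
To confirm that both installations succeeded and are on the PATH, you can check the reported versions (the exact version strings depend on your distribution's packages):

```bash
# Verify the Java and Scala installations
java -version
scala -version
```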

If you want to install the Oracle JDK instead of OpenJDK for performance reasons, use the following commands:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
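
If you go this route, the installer from this PPA normally places the JDK under /usr/lib/jvm/java-8-oracle (the exact path may differ on your system); you may want to point JAVA_HOME at it so Spark picks up the intended JVM:

```bash
# Point JAVA_HOME at the Oracle JDK; adjust the path if your layout differs
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export PATH=$JAVA_HOME/bin:$PATH
```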

Download Apache Spark from the official downloads page (https://spark.apache.org/downloads.html) and extract it using an archive manager:

tar xvf spark-2.2.0-bin-hadoop2.7.tgz
cd spark-2.2.0-bin-hadoop2.7
export SPARK_HOME=/path_to_extracted_spark_folder
export PATH=$SPARK_HOME/bin:$PATH
spark-shell
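
The exports above only last for the current shell session. To keep SPARK_HOME and the updated PATH across sessions, you can append them to ~/.bashrc (using the same placeholder path as above):

```bash
# Persist the Spark environment variables across shell sessions
echo 'export SPARK_HOME=/path_to_extracted_spark_folder' >> ~/.bashrc
echo 'export PATH=$SPARK_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
```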

Here we give your user ownership of the extracted Spark folder so that Spark can be run without root privileges:

sudo chown -R your_username $SPARK_HOME 

This concludes the installation of Apache Spark on Ubuntu systems.

Launching Clusters Manually

However, most real-world use of Spark happens on a cluster, since Spark is all about the parallel execution of ETL workloads. This section covers the standalone deploy mode of Apache Spark. To install Spark in standalone mode, you simply place a compiled version of Spark on each node of the cluster; you can obtain pre-built versions of Spark with each release.

You can start a standalone master server by executing:

./sbin/start-master.sh

Once started, the master will print out a spark://HOST:PORT URL for itself, which you can use to connect workers to it, or pass as the “master” argument to SparkContext. You can also find this URL on the master’s web UI, which is http://localhost:8080 by default.
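
For example, to attach an interactive shell to this master, you can pass that URL via the --master option (your_master_host is a placeholder for the hostname printed by the master):

```bash
# Connect an interactive Spark shell to the standalone master
spark-shell --master spark://your_master_host:7077
```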

Similarly, you can start one or more workers and connect them to the master via: ./sbin/start-slave.sh spark://HOST:PORT

Once you have started a worker, look at the master’s web UI (http://localhost:8080 by default). You should see the new node listed there, along with its number of CPUs and memory (minus one gigabyte left for the OS).

Finally, the following configuration options can be passed to the master and worker:

| Argument | Function |
| --- | --- |
| -h HOST, --host HOST | Hostname to listen on |
| -i HOST, --ip HOST | Hostname to listen on (deprecated, use -h or --host) |
| -p PORT, --port PORT | Port for the service to listen on (default: 7077 for the master, random for workers) |
| --webui-port PORT | Port for the web UI (default: 8080 for the master, 8081 for workers) |
| -c CORES, --cores CORES | Total CPU cores to allow (default: all available); worker only |
| -m MEM, --memory MEM | Total amount of memory to allow Spark applications to use on the machine; worker only |
| -d DIR, --work-dir DIR | Directory to use for scratch space and job output logs (default: SPARK_HOME/work); worker only |
| --properties-file FILE | Path to a custom Spark properties file to load (default: conf/spark-defaults.conf) |
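
As a sketch, with placeholder hostnames and example resource limits, a master and a worker could be started with explicit options like this:

```bash
# Start the master on a specific host, service port and web UI port
./sbin/start-master.sh --host master_host --port 7077 --webui-port 8080

# Start a worker pointed at that master, capped at 4 cores and 8 GB of memory
./sbin/start-slave.sh spark://master_host:7077 --cores 4 --memory 8g
```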

Automatically Launching Clusters

To launch a Spark standalone cluster you should create a file called conf/slaves in your Spark directory, which must contain the hostnames of all the machines where you intend to start Spark workers, one per line.

Once you’ve set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop’s deploy scripts, available in SPARK_HOME/sbin:
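
For reference, the standard scripts shipped in $SPARK_HOME/sbin for the standalone mode (as of Spark 2.x) are:

```bash
sbin/start-master.sh   # start a master instance on the machine the script is run on
sbin/start-slaves.sh   # start a worker on every machine listed in conf/slaves
sbin/start-all.sh      # start a master and all workers as described above
sbin/stop-master.sh    # stop the master started by start-master.sh
sbin/stop-slaves.sh    # stop all workers listed in conf/slaves
sbin/stop-all.sh       # stop the master and all workers
```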

Note that these scripts must be executed on the machine you want to run the Spark master on, not your local machine.
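
Also note that start-slaves.sh and stop-all.sh reach the worker machines over SSH, so the master needs password-less SSH access to every host in conf/slaves. A minimal sketch, assuming the same username on all nodes:

```bash
# On the master: generate a key pair if you do not already have one
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to each worker listed in conf/slaves
ssh-copy-id your_username@spark_worker1_public_dns
ssh-copy-id your_username@spark_worker2_public_dns
```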

The only file you need to focus on is $SPARK_HOME/conf/spark-env.sh. First make a copy of the template under the new name by executing the following on all nodes:

cp $SPARK_HOME/conf/spark-env.sh.template $SPARK_HOME/conf/spark-env.sh

The following configuration is an example spark-env.sh:


```bash
#!/usr/bin/env bash

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

export JAVA_HOME=/usr 
export SPARK_PUBLIC_DNS="current_node_public_dns" 
export SPARK_WORKER_CORES=6
```

We have chosen SPARK_WORKER_CORES to be 6 based on the instances that we are using for this example cluster setup. In general, this variable defines the amount of parallelism each Spark Worker node has. The variable SPARK_WORKER_CORES can be misleading, since it does not represent the number of physical cores on your Spark Worker machine. Instead it represents the number of Spark tasks (or threads) a Spark Worker can give to its Spark Executors.
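
Other variables commonly set in spark-env.sh include the memory each worker may hand out to executors and the address the master binds to; the values below are only examples and the hostname is a placeholder:

```bash
# Total memory each worker can allocate to executors (leave headroom for the OS)
export SPARK_WORKER_MEMORY=24g

# Hostname or IP the master binds to (defaults to the machine's hostname)
export SPARK_MASTER_HOST=spark_master_private_ip
```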

On the Spark Master, create a slaves file in the Spark configuration folder containing the public DNS names of all the Spark Worker nodes:

$ touch $SPARK_HOME/conf/slaves

$SPARK_HOME/conf/slaves
spark_worker1_public_dns
spark_worker2_public_dns
spark_worker3_public_dns
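
Every node needs the same Spark build and the same spark-env.sh. Assuming Spark is extracted at the same path everywhere and the slaves file contains only hostnames (no comments), one way to push the configuration out from the master is a simple loop:

```bash
# Copy the environment file to every worker listed in conf/slaves
for host in $(cat $SPARK_HOME/conf/slaves); do
  scp $SPARK_HOME/conf/spark-env.sh "$host:$SPARK_HOME/conf/"
done
```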

We can now start up the Spark cluster from the Spark Master node:

spark_master_node$ $SPARK_HOME/sbin/start-all.sh

There are simple ways to configure a cluster on AWS using pre-built scripts, which will be covered in the next section. Alternatively, you can use managed services such as Google Cloud Dataproc or, best of them all, the Databricks Spark platform.
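
Once start-all.sh has run, you can verify the standalone cluster by checking the master's web UI for the registered workers and by submitting the bundled SparkPi example (the jar name below matches the Spark 2.2.0 / Scala 2.11 distribution used earlier, and spark_master_public_dns is a placeholder):

```bash
# Run the SparkPi example on the standalone cluster to confirm everything works
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://spark_master_public_dns:7077 \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.2.0.jar 100
```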

Please comment your suggestions or email them to [email protected]