
PySpark Project Creation



Java Installation

  • sudo add-apt-repository ppa:webupd8team/java
  • sudo apt update; sudo apt install oracle-java8-installer
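  • java -version (to verify the installation succeeded)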

Add to ~/.bashrc

  • export SPARK_HOME=/home/awantik/packages/spark-2.4.0-bin-hadoop2.7
  • export PATH=$SPARK_HOME/bin:$PATH
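Reload the shell and confirm that Spark's bin directory (and the Java bin directories) are now on the PATH; your output will reflect your own SPARK_HOME, but on the machine used for this page it looked like this:

```bash
source ~/.bashrc
echo $PATH
# /home/sm/Downloads/spark-2.4.0-bin-hadoop2.7/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin
```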
  1. Create the project directory.
  2. Copy the launch_spark_submit script shown below into it (required if a Jupyter notebook is also running against the same Spark install, because PYSPARK_DRIVER_PYTHON will be pointing at jupyter):

```bash
#!/bin/bash
# Clear the Jupyter driver setting so spark-submit runs with plain Python,
# then restore it once the job finishes.
unset PYSPARK_DRIVER_PYTHON
spark-submit "$@"
export PYSPARK_DRIVER_PYTHON=jupyter
```
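Make the script executable so the run step at the end of this page works:

  • chmod +x launch_spark_submit.sh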

  3. Now create the entry program entry.py with a 'main':

```python
from pyspark.sql import SparkSession
import argparse

if __name__ == '__main__':
    spark = SparkSession.builder.appName('PySpark-App').getOrCreate()
    print('Session created')
    emp_data = spark.read.csv('~/Downloads/HR_comma_sep.csv',
                              inferSchema=True, header=True)
    print(emp_data.count())
```
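entry.py imports argparse but never uses it; as a minimal sketch, the hard-coded CSV path could instead come from the command line. The --input flag below is an assumption for illustration, not part of the original page, and the snippet would replace the spark.read.csv lines inside the __main__ block:

```python
# Hypothetical: parse the input path instead of hard-coding it.
# Goes inside the `if __name__ == '__main__':` block above.
parser = argparse.ArgumentParser(description='PySpark-App')
parser.add_argument('--input', default='~/Downloads/HR_comma_sep.csv',
                    help='path to the input CSV')
args = parser.parse_args()

emp_data = spark.read.csv(args.input, inferSchema=True, header=True)
print(emp_data.count())
```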

  4. Create another directory, 'additionalCode'.

  5. cd additionalCode

  6. Create setup.py:

```python
from setuptools import setup

setup(
    name='PySparkUtilities',
    version='0.1dev',
    packages=['utilities'],
    license='''Creative Commons Attribution-Noncommercial-Share Alike license''',
    long_description='''An example of how to package code for PySpark'''
)
```
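Note that the version string in setup.py determines the egg filename: depending on your setuptools version, '0.1dev' may be normalized to 0.1.dev0, and the run command at the end of this page references 0.2.dev0 (a later version bump), so adjust the filename there to match whatever setup.py actually produces.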

  7. mkdir utilities

  8. Copy your modules inside it (utilities also needs an __init__.py so setuptools treats it as a package; see the sketch after this step). If pip and setuptools are missing, install them first:

     a. sudo apt-get install python-pip

     b. pip install setuptools
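As an illustration of what might go into utilities, here is a hypothetical module; the file name converter.py and the function are placeholders, not from the original page:

```python
# utilities/__init__.py can be empty; it marks the directory as a package.

# utilities/converter.py -- hypothetical example module
def clean_column_names(df):
    """Return a DataFrame whose column names are lower-case with underscores."""
    for col in df.columns:
        df = df.withColumnRenamed(col, col.strip().lower().replace(' ', '_'))
    return df
```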

  9. In additionalCode, execute: python setup.py bdist_egg

  10. This will create a dist directory.

  11. dist will contain the egg file.

  12. To run the job, pass the egg through --py-files so the packaged code ships to the executors:

      ./launch_spark_submit.sh --master local[4] --py-files additionalCode/dist/PySparkUtilities-0.2.dev0-py2.7.egg entry.py
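Once the egg is supplied via --py-files, entry.py can import from it directly. A minimal sketch, assuming the hypothetical utilities.converter module from step 8:

```python
# Inside entry.py -- this import works because spark-submit distributed
# the egg via --py-files. converter and clean_column_names are the
# placeholder names used earlier, not part of the original page.
from utilities.converter import clean_column_names

emp_data = clean_column_names(emp_data)
print(emp_data.columns)
```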
