Apache Spark is a fast and general-purpose cluster computing system. Designed for large-scale data processing, it runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. Spark can run on Hadoop, Mesos, standalone, or in the cloud, and it can access diverse data sources including HDFS, Cassandra, HBase, and S3. Spark has an advanced DAG execution engine that supports acyclic data flow and in-memory computing, and it provides high-level APIs in Java, Scala, Python, and R along with an optimized engine that supports general execution graphs. You can learn more about Spark in its quick start guide here.
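As a quick taste of the Python API, here is a minimal PySpark sketch (it assumes Spark and PySpark are installed locally; this is not code from this repository):

```python
from pyspark.sql import SparkSession

# Start a local Spark session
spark = SparkSession.builder.appName("QuickTaste").getOrCreate()

# Distribute a small collection and square each element in parallel
rdd = spark.sparkContext.parallelize(range(10))
print(rdd.map(lambda x: x * x).collect())  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

spark.stop()
```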
This repository contains the projects and exercises I completed using Spark with Python (PySpark). Each folder contains the code along with the data for that project. All of the code should be executable as long as your machine meets the requirements listed in the dependencies section.
To run these projects you will need a system that satisfies the dependencies below (a quick environment check follows the list). The projects were developed on a Linux machine, so you can use Ubuntu, AWS EC2, AWS EMR (Elastic MapReduce), or any distributed computing environment with Spark installed.
- Python 3.5
- Spark 2.1
- Scala
- Java
- Linux (Ubuntu, AWS EC2, AWS EMR, Databricks Notebook)
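As a quick sanity check of the environment, the sketch below prints the Python and Spark versions; it assumes `pyspark` is importable on your system:

```python
import sys
from pyspark.sql import SparkSession

# Print the versions this repository expects (Python 3.5, Spark 2.1)
spark = SparkSession.builder.appName("VersionCheck").getOrCreate()
print("Python:", sys.version.split()[0])
print("Spark:", spark.version)
spark.stop()
```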
- The code for the Spark DataFrame API can be found here (a minimal illustrative sketch of the API appears after this list).
- The code and project for the Walmart data analysis can be found here (a hedged example of that kind of analysis follows below).
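For context, here is a minimal DataFrame API sketch. It is illustrative only, not the repository's code; the rows and column names (`name`, `age`) are made up for the example:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg

spark = SparkSession.builder.appName("DataFrameSketch").getOrCreate()

# Build a small DataFrame in memory; the rows and columns are made up
df = spark.createDataFrame(
    [("Alice", 34), ("Bob", 45), ("Cathy", 29)],
    ["name", "age"],
)

df.filter(df["age"] > 30).show()  # row-level filtering
df.agg(avg("age")).show()         # simple aggregation
spark.stop()
```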
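Likewise, here is a hedged sketch of the kind of analysis the Walmart project performs; the file name `walmart_stock.csv` and the column names (`Close`, `Volume`) are assumptions for illustration, not taken from the repository:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import mean, max as spark_max

spark = SparkSession.builder.appName("WalmartSketch").getOrCreate()

# The file name and column names here are assumptions for illustration
df = spark.read.csv("walmart_stock.csv", header=True, inferSchema=True)

# Average closing price and the single largest trading volume
df.select(mean("Close"), spark_max("Volume")).show()
spark.stop()
```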