Create an ETL Pipeline for a Data Modeling Task with Apache Cassandra
A startup called Sparkify wants to analyze the data they've been collecting on songs and user activity on their new music streaming app. The analysis team is particularly interested in understanding what songs users are listening to. Currently, there is no easy way to query the data to generate results, since the data reside in a directory of CSV files of user activity on the app.
They'd like a data engineer to create an Apache Cassandra database that can run queries on song play data to answer these questions, and they wish to bring you onto the project. Your role is to create a database for this analysis. You'll be able to test your database by running queries given to you by Sparkify's analytics team and verifying the results.
Example file paths
event_data/2018-11-08-events.csv
event_data/2018-11-09-events.csv
Below is an example of what the data in a log file, 2018-11-12-events.csv, looks like.
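The ETL step needs to walk this directory and gather every daily log before loading it into Cassandra. Here is a minimal sketch of that collection step; it assumes the layout shown above, and the variable names and empty-row filter are illustrative rather than taken from this repository's etl.py.

```python
import csv
import glob

# Collect every daily event file under event_data/ (assumed layout).
file_paths = sorted(glob.glob('event_data/*.csv'))

rows = []
for path in file_paths:
    with open(path, encoding='utf8', newline='') as f:
        reader = csv.reader(f)
        next(reader)  # each daily file is assumed to repeat the header row
        rows.extend(line for line in reader if any(line))  # drop empty rows

print(f'Read {len(rows)} event rows from {len(file_paths)} files')
```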
This is my database Star Schema.
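Because Cassandra tables are modeled around the queries they serve, a table behind this schema might look roughly like the sketch below. It is only an assumed example: the keyspace name sparkify, the table name song_plays_by_session, and its columns are hypothetical, not copied from the actual schema.

```python
from cassandra.cluster import Cluster

# Connect to a local Cassandra node (assumed to be running on 127.0.0.1).
cluster = Cluster(['127.0.0.1'])
session = cluster.connect('sparkify')  # hypothetical keyspace name

# Hypothetical query-driven table: fetch song plays by session and item in session.
session.execute("""
    CREATE TABLE IF NOT EXISTS song_plays_by_session (
        session_id int,
        item_in_session int,
        artist text,
        song text,
        length float,
        PRIMARY KEY (session_id, item_in_session)
    )
""")
```

The partition key (session_id) and clustering column (item_in_session) would be chosen to match the exact query the analytics team wants to run against the table.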
Install packages with requirements.txt
pip install -r requirements.txt
- run python create_keyspace.py to create the keyspace (a minimal sketch of this step follows this list)
- run python etl.py to run the ETL
- view the results in test.ipynb
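As a rough idea of what the keyspace-creation step involves, here is a minimal sketch assuming a single local Cassandra node and a keyspace named sparkify; the actual names and replication settings in create_keyspace.py may differ.

```python
from cassandra.cluster import Cluster

# Connect to a local Cassandra instance (assumed at 127.0.0.1).
cluster = Cluster(['127.0.0.1'])
session = cluster.connect()

# SimpleStrategy with a replication factor of 1 is enough for a single local node.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS sparkify
    WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

session.shutdown()
cluster.shutdown()
```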
Facebook - @Natchapol Patamawisut
Project Link: https://github.com/BankNatchapol/ETL-Apache-Cassandra