7. Install KaspaCoreSystem (Docker version)
- Make sure Docker is already installed in the Defense Center environment; see https://docs.docker.com/install/ for the Docker installation tutorial.
- Edit the `docker-compose.yml` file, replace the host IP with your own, and save it.
- Pull the Docker images:

  ```shell
  $ docker-compose pull
  ```
- Clone the KaspaCoreSystem repository, then navigate to that directory:

  ```shell
  $ git clone https://github.com/mata-elang-pens/KaspaCoreSystem.git && cd KaspaCoreSystem
  ```
- Change the values in `src/main/resources/application.conf` to match your environment.
- Also change the MongoDB IP from 127.0.0.1 to your server IP in `src/main/scala/me/mamotis/kaspacore/jobs/DataStream.scala`.
- Build the application by running this command from the KaspaCoreSystem directory:

  ```shell
  $ sbt assembly
  ```
- Copy the `target/scala-2.11/KaspaCore-assembly-0.1.jar` file to the Defense Center and place it in the same directory as `docker-compose.yml` and the other files.
- Start the Docker services in detached (daemon) mode:

  ```shell
  $ docker-compose up -d
  ```
- Make sure that all services are running:

  ```shell
  $ docker-compose ps
  ```

- Open a web browser and navigate to http://your-server-ip:8080 to check whether the application is still running. If it is not, restart the spark-submit service and check again:

  ```shell
  $ docker-compose start spark-submit
  ```
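If you prefer to script this reachability check, the sketch below is one illustrative way to do it (not part of KaspaCoreSystem). It only verifies that the UI port accepts TCP connections, not that a specific Spark application is running; the host string is a placeholder for your server IP.

```python
import socket

def is_ui_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Example (replace the placeholder host with your server IP):
# is_ui_up("your-server-ip", 8080)
```

A `False` result here means the port is closed, which matches the case where the spark-submit service needs to be restarted.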
- Next, we need to set up the scheduled batch jobs. First, create a directory to hold the files required by the batch jobs:

  ```shell
  $ sudo mkdir -p /etc/mataelang-spark
  ```
- Create a new file called `spark.env`:

  ```shell
  $ sudo nano /etc/mataelang-spark/spark.env
  ```

  Add the following lines to the `spark.env` file (just change the SPARK_MASTER_HOST value `yourip` to your server IP). Note that `docker run --env-file` expects `KEY=VALUE` lines:

  ```
  SPARK_MASTER_HOST=yourip
  SPARK_MASTER_PORT=7077
  SPARK_TOTAL_EXECUTOR_CORES=1
  SPARK_CONF_FILE_PATH=/opt/spark.conf
  SPARK_SUBMIT_JAR=file:///opt/KaspaCore-assembly-0.1.jar
  ```
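As a sanity check on the env-file format before the cron jobs consume it, here is a small, hypothetical Python sketch (not part of KaspaCoreSystem) that parses docker-style `--env-file` content, one `KEY=VALUE` pair per line:

```python
def parse_env_file(text: str) -> dict:
    """Parse docker-style --env-file content: KEY=VALUE lines,
    skipping blank lines and '#' comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if not sep:
            # Lines without '=' would not set a value for the container.
            raise ValueError(f"not a KEY=VALUE line: {line!r}")
        env[key.strip()] = value.strip()
    return env

sample = "SPARK_MASTER_HOST=yourip\nSPARK_MASTER_PORT=7077\n"
print(parse_env_file(sample)["SPARK_MASTER_PORT"])  # prints 7077
```

This is a simplification of Docker's actual parser, but it is enough to catch a malformed line (for example, a `KEY: VALUE` colon style) before the scheduled jobs silently run without their configuration.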
- Next, create a new file called `spark.conf`:

  ```shell
  $ sudo nano /etc/mataelang-spark/spark.conf
  ```

  Then add the following lines to the `spark.conf` file:

  ```
  spark.submit.deployMode=client
  spark.executor.cores=1
  spark.executor.memory=2g
  ```
- Copy the application file (`KaspaCore-assembly-0.1.jar`) to `/etc/mataelang-spark/`:

  ```shell
  $ sudo cp /path/to/KaspaCore-assembly-0.1.jar /etc/mataelang-spark/
  ```
- Add cron jobs for the KaspaCoreSystem batch jobs. Run the following command to open the crontab file:

  ```shell
  $ sudo crontab -e
  ```

  After the text editor opens, add the following lines, which run the daily, monthly, and yearly count jobs at midnight on the matching day:

  ```
  0 0 * * * docker run --rm --name spark-submit-daily --network host -v /etc/localtime:/etc/localtime -v /etc/timezone:/etc/timezone -v /etc/mataelang-spark/spark.conf:/opt/spark.conf -v /etc/mataelang-spark/KaspaCore-assembly-0.1.jar:/opt/KaspaCore-assembly-0.1.jar --env-file /etc/mataelang-spark/spark.env -e SPARK_SUBMIT_CLASS=me.mamotis.kaspacore.jobs.DailyCount mfscy/me-spark-submit:latest
  0 0 1 * * docker run --rm --name spark-submit-monthly --network host -v /etc/localtime:/etc/localtime -v /etc/timezone:/etc/timezone -v /etc/mataelang-spark/spark.conf:/opt/spark.conf -v /etc/mataelang-spark/KaspaCore-assembly-0.1.jar:/opt/KaspaCore-assembly-0.1.jar --env-file /etc/mataelang-spark/spark.env -e SPARK_SUBMIT_CLASS=me.mamotis.kaspacore.jobs.MonthlyCount mfscy/me-spark-submit:latest
  0 0 1 1 * docker run --rm --name spark-submit-yearly --network host -v /etc/localtime:/etc/localtime -v /etc/timezone:/etc/timezone -v /etc/mataelang-spark/spark.conf:/opt/spark.conf -v /etc/mataelang-spark/KaspaCore-assembly-0.1.jar:/opt/KaspaCore-assembly-0.1.jar --env-file /etc/mataelang-spark/spark.env -e SPARK_SUBMIT_CLASS=me.mamotis.kaspacore.jobs.AnnuallyCount mfscy/me-spark-submit:latest
  ```
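To illustrate when these three schedules fire, here is a simplified, illustrative Python matcher for the five cron fields (minute, hour, day-of-month, month, day-of-week). It supports only `*` and plain numbers, which is all the entries above use; it is not a full cron parser:

```python
from datetime import datetime

def matches(cron_expr: str, when: datetime) -> bool:
    """Check a datetime against a simplified 5-field cron expression.
    Supports only '*' and single numeric values per field."""
    minute, hour, dom, month, dow = cron_expr.split()
    fields = [
        (minute, when.minute),
        (hour, when.hour),
        (dom, when.day),
        (month, when.month),
        (dow, when.isoweekday() % 7),  # cron convention: 0 = Sunday
    ]
    return all(f == "*" or int(f) == actual for f, actual in fields)

# The yearly job (0 0 1 1 *) fires only at midnight on January 1st:
print(matches("0 0 1 1 *", datetime(2024, 1, 1, 0, 0)))  # True
print(matches("0 0 1 1 *", datetime(2024, 6, 1, 0, 0)))  # False
```

So `0 0 * * *` runs DailyCount every midnight, `0 0 1 * *` runs MonthlyCount at midnight on the first of each month, and `0 0 1 1 *` runs AnnuallyCount at midnight on January 1st.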