Update oshinko-cli version to v0.5.6
Modify start.sh to account for output changes in yaml/json for get
tmckayus committed Sep 24, 2018
1 parent b79c2f6 commit c5f0f22
Showing 26 changed files with 48 additions and 50 deletions.
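The start.sh change referenced in the commit message comes down to two details visible in the hunks below: the key lookup in get_cluster_value becomes case-insensitive (grep -i), and the worker count is read from a workersCount field instead of workerCount. As orientation only, here is a minimal, self-contained sketch of that parsing pattern; the sample cluster output is invented for illustration, and only the grep -i and workersCount details mirror the actual diff.

#!/bin/bash
# Sketch only -- not the repository's start.sh. Extracts one field from the
# text that `oshinko get <cluster>` prints, matching the key case-insensitively
# so a renamed/recased field such as workersCount is still found.
get_cluster_value() {
    # strip leading whitespace, keep the matching key line, take the 2nd field
    echo "$1" | sed -e 's/^[ \t]*//' | grep -i "^$2" | cut -d' ' -f2
}

# Hypothetical output; real field names and values come from the oshinko CLI.
sample='Name: mycluster
WorkersCount: 3
MasterUrl: spark://mycluster:7077'

get_cluster_value "$sample" workersCount    # prints 3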
4 changes: 2 additions & 2 deletions image.java.yaml
@@ -42,8 +42,8 @@ packages:
 artifacts:
   - url: https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     md5: af45eeb06dc1beee6d4c70b92c0e0237
-  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.4/oshinko_v0.5.4_linux_amd64.tar.gz
-    md5: e4b6b46f86bef72cc0b99a2d96273472
+  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.6/oshinko_v0.5.6_linux_amd64.tar.gz
+    md5: a839a8942ec08650c28f789570f5c85e
 run:
   user: 185
   cmd:
4 changes: 2 additions & 2 deletions image.pyspark-py36.yaml
@@ -41,8 +41,8 @@ packages:
 artifacts:
   - url: https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     md5: af45eeb06dc1beee6d4c70b92c0e0237
-  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.4/oshinko_v0.5.4_linux_amd64.tar.gz
-    md5: e4b6b46f86bef72cc0b99a2d96273472
+  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.6/oshinko_v0.5.6_linux_amd64.tar.gz
+    md5: a839a8942ec08650c28f789570f5c85e
 run:
   user: 185
   cmd:
4 changes: 2 additions & 2 deletions image.pyspark.yaml
@@ -41,8 +41,8 @@ packages:
 artifacts:
   - url: https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     md5: af45eeb06dc1beee6d4c70b92c0e0237
-  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.4/oshinko_v0.5.4_linux_amd64.tar.gz
-    md5: e4b6b46f86bef72cc0b99a2d96273472
+  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.6/oshinko_v0.5.6_linux_amd64.tar.gz
+    md5: a839a8942ec08650c28f789570f5c85e
 run:
   user: 185
   cmd:
4 changes: 2 additions & 2 deletions image.scala.yaml
@@ -39,8 +39,8 @@ packages:
 artifacts:
   - url: https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     md5: af45eeb06dc1beee6d4c70b92c0e0237
-  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.4/oshinko_v0.5.4_linux_amd64.tar.gz
-    md5: e4b6b46f86bef72cc0b99a2d96273472
+  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.6/oshinko_v0.5.6_linux_amd64.tar.gz
+    md5: a839a8942ec08650c28f789570f5c85e
 run:
   user: 185
   cmd:
4 changes: 2 additions & 2 deletions image.sparklyr.yaml
@@ -37,8 +37,8 @@ packages:
 artifacts:
   - url: https://archive.apache.org/dist/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
     md5: af45eeb06dc1beee6d4c70b92c0e0237
-  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.4/oshinko_v0.5.4_linux_amd64.tar.gz
-    md5: e4b6b46f86bef72cc0b99a2d96273472
+  - url: https://github.com/radanalyticsio/oshinko-cli/releases/download/v0.5.6/oshinko_v0.5.6_linux_amd64.tar.gz
+    md5: a839a8942ec08650c28f789570f5c85e
 run:
   user: 185
   cmd:
2 changes: 1 addition & 1 deletion java-build/Dockerfile
@@ -54,7 +54,7 @@ RUN yum install -y tar wget \
 # directory
 COPY \
     spark-2.3.0-bin-hadoop2.7.tgz \
-    oshinko_v0.5.4_linux_amd64.tar.gz \
+    oshinko_v0.5.6_linux_amd64.tar.gz \
     /tmp/artifacts/

 # Add scripts used to configure the image
6 changes: 3 additions & 3 deletions java-build/modules/common/added/utils/start.sh
@@ -8,7 +8,7 @@ function get_cluster_value {
     # get the value
     echo "$1" \
         | sed -e 's/^[ \t]*//' \
-        | grep ^$2 \
+        | grep -i ^$2 \
         | cut -d\ -f2
 }

@@ -211,7 +211,7 @@ function wait_for_workers_alive {
         # If someone scales down the cluster while we're still waiting
         # then we need to know what the real target is so check again
         line=$($CLI get $OSHINKO_CLUSTER_NAME $GET_FLAGS $CLI_ARGS)
-        desired=$(get_cluster_value "$line" workerCount)
+        desired=$(get_cluster_value "$line" workersCount)
     done
     echo "All spark workers alive"
 }
@@ -287,7 +287,7 @@ function use_spark_standalone {

     else
         # Build the spark-submit command and execute
-        desired=$(get_cluster_value "$CLI_LINE" workerCount)
+        desired=$(get_cluster_value "$CLI_LINE" workersCount)
         master=$(get_cluster_value "$CLI_LINE" masterUrl)
         masterweb=$(get_cluster_value "$CLI_LINE" masterWebUrl)
         ephemeral=$(get_cluster_value "$CLI_LINE" ephemeral)
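The wait_for_workers_alive hunk above only shows the changed line; its enclosing loop is collapsed in this view. For orientation, a hedged sketch of the polling pattern the in-line comment describes (re-read the cluster on every pass so a mid-wait scale-down updates the target): the loop skeleton and the count_alive_workers helper are assumptions for illustration, not code from the repository.

# Sketch of the re-check pattern; $CLI, $OSHINKO_CLUSTER_NAME, $GET_FLAGS and
# $CLI_ARGS are the variables used by start.sh, while count_alive_workers is a
# hypothetical stand-in for however the script counts live workers.
wait_for_workers_alive() {
    local line desired alive
    line=$($CLI get $OSHINKO_CLUSTER_NAME $GET_FLAGS $CLI_ARGS)
    desired=$(get_cluster_value "$line" workersCount)
    alive=$(count_alive_workers)
    while [ "$alive" -lt "$desired" ]; do
        sleep 5
        alive=$(count_alive_workers)
        # If someone scales down the cluster while we're still waiting
        # then we need to know what the real target is so check again
        line=$($CLI get $OSHINKO_CLUSTER_NAME $GET_FLAGS $CLI_ARGS)
        desired=$(get_cluster_value "$line" workersCount)
    done
    echo "All spark workers alive"
}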
5 changes: 2 additions & 3 deletions java-build/modules/java/added/s2i/usage
@@ -10,8 +10,8 @@ command:
 $ mvn package dependency:copy-dependencies -Popenshift -DskipTests -e
-See https://docs.openshift.org/latest/using_images/s2i_images/java.html for more
-information on maven options.
+See https://docs.okd.io/latest/using_images/s2i_images/index.html for more
+information on s2i images.
 To use it, first install S2I: https://github.com/openshift/source-to-image
@@ -25,7 +25,6 @@ The resulting application image contains a startup script that expects the following required and
 optional environment variables which can be set when the application is launched:
 OSHINKO_CLUSTER_NAME -- name of the spark cluster to use or create, required
-OSHINKO_REST -- ip of the oshinko-rest controller, required
 SPARK_OPTIONS -- options to spark-submit, e.g. "--conf spark.executor.memory=2G --conf spark.driver.memory=2G"
 APP_MAIN_CLASS -- name of the main class, required
 APP_ARGS -- arguments to the python application (if the app takes arguments)
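The environment variables above are read by the image's startup script when the application container is launched. Purely as an illustration of how they might be supplied (the image name and values are invented, and this command is not part of the commit), something like:

# Hypothetical launch of a built java application image; the variable names
# match the usage text above, everything else is made up for the example.
docker run \
    -e OSHINKO_CLUSTER_NAME=mycluster \
    -e APP_MAIN_CLASS=org.example.WordCount \
    -e SPARK_OPTIONS="--conf spark.executor.memory=2G" \
    -e APP_ARGS="input.txt" \
    my-registry/my-java-spark-app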
6 changes: 3 additions & 3 deletions modules/common/added/utils/start.sh
@@ -8,7 +8,7 @@ function get_cluster_value {
     # get the value
     echo "$1" \
         | sed -e 's/^[ \t]*//' \
-        | grep ^$2 \
+        | grep -i ^$2 \
         | cut -d\ -f2
 }

@@ -211,7 +211,7 @@ function wait_for_workers_alive {
         # If someone scales down the cluster while we're still waiting
         # then we need to know what the real target is so check again
         line=$($CLI get $OSHINKO_CLUSTER_NAME $GET_FLAGS $CLI_ARGS)
-        desired=$(get_cluster_value "$line" workerCount)
+        desired=$(get_cluster_value "$line" workersCount)
     done
     echo "All spark workers alive"
 }
@@ -287,7 +287,7 @@ function use_spark_standalone {

     else
         # Build the spark-submit command and execute
-        desired=$(get_cluster_value "$CLI_LINE" workerCount)
+        desired=$(get_cluster_value "$CLI_LINE" workersCount)
         master=$(get_cluster_value "$CLI_LINE" masterUrl)
         masterweb=$(get_cluster_value "$CLI_LINE" masterWebUrl)
         ephemeral=$(get_cluster_value "$CLI_LINE" ephemeral)
2 changes: 1 addition & 1 deletion pyspark-build/Dockerfile
@@ -54,7 +54,7 @@ RUN yum install -y java-1.8.0-openjdk \
 # directory
 COPY \
     spark-2.3.0-bin-hadoop2.7.tgz \
-    oshinko_v0.5.4_linux_amd64.tar.gz \
+    oshinko_v0.5.6_linux_amd64.tar.gz \
     /tmp/artifacts/

 # Add scripts used to configure the image
6 changes: 3 additions & 3 deletions pyspark-build/modules/common/added/utils/start.sh
@@ -8,7 +8,7 @@ function get_cluster_value {
     # get the value
     echo "$1" \
         | sed -e 's/^[ \t]*//' \
-        | grep ^$2 \
+        | grep -i ^$2 \
         | cut -d\ -f2
 }

@@ -211,7 +211,7 @@ function wait_for_workers_alive {
         # If someone scales down the cluster while we're still waiting
         # then we need to know what the real target is so check again
         line=$($CLI get $OSHINKO_CLUSTER_NAME $GET_FLAGS $CLI_ARGS)
-        desired=$(get_cluster_value "$line" workerCount)
+        desired=$(get_cluster_value "$line" workersCount)
     done
     echo "All spark workers alive"
 }
@@ -287,7 +287,7 @@ function use_spark_standalone {

     else
         # Build the spark-submit command and execute
-        desired=$(get_cluster_value "$CLI_LINE" workerCount)
+        desired=$(get_cluster_value "$CLI_LINE" workersCount)
         master=$(get_cluster_value "$CLI_LINE" masterUrl)
         masterweb=$(get_cluster_value "$CLI_LINE" masterWebUrl)
         ephemeral=$(get_cluster_value "$CLI_LINE" ephemeral)
4 changes: 2 additions & 2 deletions pyspark-build/modules/pyspark/added/s2i/usage
@@ -11,7 +11,8 @@ Usage: s2i build [-e APP_FILE=filename] <github repo> radanalytics-pyspark <application image name>
 ** Important! ** The APP_FILE environment variable specifies the name of the python file that
 contains the main routine for your pyspark application. The default value is
-"app.py". If your main routine is in a different file, you must use the APP_FILE
+"app.py" or the first .py file it finds in the directory.
+If your main routine is in a different file, you must use the APP_FILE
 option to s2i build to specify it.
 Example:
@@ -22,7 +23,6 @@ The resulting application image contains a startup script that expects the following required and
 optional environment variables which can be set when the application is launched:
 OSHINKO_CLUSTER_NAME -- name of the spark cluster to use or create, required
-OSHINKO_REST -- ip of the oshinko-rest controller, required
 SPARK_OPTIONS -- options to spark-submit, e.g. "--conf spark.executor.memory=2G --conf spark.driver.memory=2G"
 APP_ARGS -- arguments to the python application (if the app takes arguments)
 OSHINKO_DEL_CLUSTER -- if a new cluster is created and this flag is set to "true" then the
2 changes: 1 addition & 1 deletion pyspark-py36-build/Dockerfile
@@ -54,7 +54,7 @@ RUN yum install -y java-1.8.0-openjdk \
 # directory
 COPY \
     spark-2.3.0-bin-hadoop2.7.tgz \
-    oshinko_v0.5.4_linux_amd64.tar.gz \
+    oshinko_v0.5.6_linux_amd64.tar.gz \
     /tmp/artifacts/

 # Add scripts used to configure the image
6 changes: 3 additions & 3 deletions pyspark-py36-build/modules/common/added/utils/start.sh
@@ -8,7 +8,7 @@ function get_cluster_value {
     # get the value
     echo "$1" \
         | sed -e 's/^[ \t]*//' \
-        | grep ^$2 \
+        | grep -i ^$2 \
         | cut -d\ -f2
 }

@@ -211,7 +211,7 @@ function wait_for_workers_alive {
         # If someone scales down the cluster while we're still waiting
         # then we need to know what the real target is so check again
         line=$($CLI get $OSHINKO_CLUSTER_NAME $GET_FLAGS $CLI_ARGS)
-        desired=$(get_cluster_value "$line" workerCount)
+        desired=$(get_cluster_value "$line" workersCount)
     done
     echo "All spark workers alive"
 }
@@ -287,7 +287,7 @@ function use_spark_standalone {

     else
         # Build the spark-submit command and execute
-        desired=$(get_cluster_value "$CLI_LINE" workerCount)
+        desired=$(get_cluster_value "$CLI_LINE" workersCount)
         master=$(get_cluster_value "$CLI_LINE" masterUrl)
         masterweb=$(get_cluster_value "$CLI_LINE" masterWebUrl)
         ephemeral=$(get_cluster_value "$CLI_LINE" ephemeral)
4 changes: 2 additions & 2 deletions pyspark-py36-build/modules/pyspark/added/s2i/usage
@@ -11,7 +11,8 @@ Usage: s2i build [-e APP_FILE=filename] <github repo> radanalytics-pyspark <application image name>
 ** Important! ** The APP_FILE environment variable specifies the name of the python file that
 contains the main routine for your pyspark application. The default value is
-"app.py". If your main routine is in a different file, you must use the APP_FILE
+"app.py" or the first .py file it finds in the directory.
+If your main routine is in a different file, you must use the APP_FILE
 option to s2i build to specify it.
 Example:
@@ -22,7 +23,6 @@ The resulting application image contains a startup script that expects the following required and
 optional environment variables which can be set when the application is launched:
 OSHINKO_CLUSTER_NAME -- name of the spark cluster to use or create, required
-OSHINKO_REST -- ip of the oshinko-rest controller, required
 SPARK_OPTIONS -- options to spark-submit, e.g. "--conf spark.executor.memory=2G --conf spark.driver.memory=2G"
 APP_ARGS -- arguments to the python application (if the app takes arguments)
 OSHINKO_DEL_CLUSTER -- if a new cluster is created and this flag is set to "true" then the
2 changes: 1 addition & 1 deletion scala-build/Dockerfile
@@ -53,7 +53,7 @@ RUN yum install -y git \
 # directory
 COPY \
     spark-2.3.0-bin-hadoop2.7.tgz \
-    oshinko_v0.5.4_linux_amd64.tar.gz \
+    oshinko_v0.5.6_linux_amd64.tar.gz \
     /tmp/artifacts/

 # Add scripts used to configure the image
6 changes: 3 additions & 3 deletions scala-build/modules/common/added/utils/start.sh
@@ -8,7 +8,7 @@ function get_cluster_value {
     # get the value
     echo "$1" \
         | sed -e 's/^[ \t]*//' \
-        | grep ^$2 \
+        | grep -i ^$2 \
         | cut -d\ -f2
 }

@@ -211,7 +211,7 @@ function wait_for_workers_alive {
         # If someone scales down the cluster while we're still waiting
         # then we need to know what the real target is so check again
         line=$($CLI get $OSHINKO_CLUSTER_NAME $GET_FLAGS $CLI_ARGS)
-        desired=$(get_cluster_value "$line" workerCount)
+        desired=$(get_cluster_value "$line" workersCount)
     done
     echo "All spark workers alive"
 }
@@ -287,7 +287,7 @@ function use_spark_standalone {

     else
         # Build the spark-submit command and execute
-        desired=$(get_cluster_value "$CLI_LINE" workerCount)
+        desired=$(get_cluster_value "$CLI_LINE" workersCount)
         master=$(get_cluster_value "$CLI_LINE" masterUrl)
         masterweb=$(get_cluster_value "$CLI_LINE" masterWebUrl)
         ephemeral=$(get_cluster_value "$CLI_LINE" ephemeral)
1 change: 0 additions & 1 deletion scala-build/modules/scala/added/s2i/usage
@@ -22,7 +22,6 @@ The resulting application image contains a startup script that expects the following required and
 optional environment variables which can be set when the application is launched:
 OSHINKO_CLUSTER_NAME -- name of the spark cluster to use or create, required
-OSHINKO_REST -- ip of the oshinko-rest controller, required
 SPARK_OPTIONS -- options to spark-submit, e.g. "--conf spark.executor.memory=2G --conf spark.driver.memory=2G"
 APP_MAIN_CLASS -- name of the main class, required
 APP_ARGS -- arguments to the python application (if the app takes arguments)
2 changes: 1 addition & 1 deletion sparklyr-build/Dockerfile
@@ -52,7 +52,7 @@ RUN yum install -y java-1.8.0-openjdk \
 # directory
 COPY \
     spark-2.3.0-bin-hadoop2.7.tgz \
-    oshinko_v0.5.4_linux_amd64.tar.gz \
+    oshinko_v0.5.6_linux_amd64.tar.gz \
     /tmp/artifacts/

 # Add scripts used to configure the image
6 changes: 3 additions & 3 deletions sparklyr-build/modules/common/added/utils/start.sh
@@ -8,7 +8,7 @@ function get_cluster_value {
     # get the value
     echo "$1" \
         | sed -e 's/^[ \t]*//' \
-        | grep ^$2 \
+        | grep -i ^$2 \
         | cut -d\ -f2
 }

@@ -211,7 +211,7 @@ function wait_for_workers_alive {
         # If someone scales down the cluster while we're still waiting
         # then we need to know what the real target is so check again
         line=$($CLI get $OSHINKO_CLUSTER_NAME $GET_FLAGS $CLI_ARGS)
-        desired=$(get_cluster_value "$line" workerCount)
+        desired=$(get_cluster_value "$line" workersCount)
     done
     echo "All spark workers alive"
 }
@@ -287,7 +287,7 @@ function use_spark_standalone {

     else
         # Build the spark-submit command and execute
-        desired=$(get_cluster_value "$CLI_LINE" workerCount)
+        desired=$(get_cluster_value "$CLI_LINE" workersCount)
         master=$(get_cluster_value "$CLI_LINE" masterUrl)
         masterweb=$(get_cluster_value "$CLI_LINE" masterWebUrl)
         ephemeral=$(get_cluster_value "$CLI_LINE" ephemeral)
18 changes: 9 additions & 9 deletions sparklyr-build/modules/sparklyr/added/s2i/usage
@@ -1,28 +1,28 @@
 #!/bin/bash -e
 cat <<EOF
-This is the radanalytics-pyspark S2I builder image. It will build
-a radanalytics pyspark application image based on a github repository
+This is the radanalytics-r-spark S2I builder image. It will build
+a radanalytics R application image based on a github repository
 that you specify and the resulting image may be used to run
-your pyspark application.
+your sparklyr application.
 To use it, first install S2I: https://github.com/openshift/source-to-image
-Usage: s2i build [-e APP_FILE=filename] <github repo> radanalytics-pyspark <application image name>
+Usage: s2i build [-e APP_FILE=filename] <github repo> radanalytics-r-spark <application image name>
-** Important! ** The APP_FILE environment variable specifies the name of the python file that
-contains the main routine for your pyspark application. The default value is
-"app.py". If your main routine is in a different file, you must use the APP_FILE
+** Important! ** The APP_FILE environment variable specifies the name of the R file that
+contains the main routine for your R application. The default value is
+"app.R" or the first .R file it finds.
+If your main routine is in a different file, you must use the APP_FILE
 option to s2i build to specify it.
 Example:
-$s2i build -e APP_FILE=my_main.py git://github.com/mygithub/wordcount.git radanalytics-pyspark mynewapp
+$s2i build -e APP_FILE=my_main.R git://github.com/mygithub/wordcount.git radanalytics-r-spark mynewapp
 The resulting application image contains a startup script that expects the following required and
 optional environment variables which can be set when the application is launched:
 OSHINKO_CLUSTER_NAME -- name of the spark cluster to use or create, required
-OSHINKO_REST -- ip of the oshinko-rest controller, required
 SPARK_OPTIONS -- options to spark-submit, e.g. "--conf spark.executor.memory=2G --conf spark.driver.memory=2G"
 APP_ARGS -- arguments to the python application (if the app takes arguments)
 OSHINKO_DEL_CLUSTER -- if a new cluster is created and this flag is set to "true" then the
