The Blaze accelerator for Apache Spark leverages native vectorized execution to accelerate query processing. It combines the power of the Apache Arrow-DataFusion library and the scale of the Spark distributed computing framework.
Blaze takes a fully optimized physical plan from Spark, maps it into a DataFusion execution plan, and performs native computation in Spark executors.
Blaze is composed of the following high-level components:
- Spark Extension: hooks the whole accelerator into the Spark execution lifecycle.
- Spark Shims: specialized code for different Spark versions.
- Native Engine: the native engine implemented in Rust, including:
  - the ExecutionPlan protobuf specification
  - the JNI gateway
  - customized operators, expressions, and functions
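To make the plan-translation idea concrete, here is a heavily simplified, self-contained Rust sketch. Everything in it is invented for illustration: the real engine deserializes a protobuf plan into DataFusion `ExecutionPlan` nodes operating on Arrow batches, whereas this toy `NativePlan` enum operates on plain integers.

```rust
// Illustrative only: a toy "physical plan" tree of the kind a JNI gateway
// could hand to a native engine. All names here are invented; real Blaze
// uses DataFusion operators over Arrow record batches.
#[derive(Debug)]
enum NativePlan {
    // Produces a fixed set of rows (stands in for a file scan).
    Scan(Vec<i64>),
    // Keeps rows satisfying a predicate (stands in for a filter operator).
    Filter { child: Box<NativePlan>, min: i64 },
}

// Recursively execute the plan tree bottom-up, like a real engine would.
fn execute(plan: &NativePlan) -> Vec<i64> {
    match plan {
        NativePlan::Scan(rows) => rows.clone(),
        NativePlan::Filter { child, min } => {
            execute(child).into_iter().filter(|v| v >= min).collect()
        }
    }
}

fn main() {
    let plan = NativePlan::Filter {
        child: Box::new(NativePlan::Scan(vec![1, 5, 10, 3])),
        min: 4,
    };
    println!("{:?}", execute(&plan)); // prints [5, 10]
}
```

The point of the sketch is the shape of the pipeline: Spark owns planning and scheduling, while the native side only receives a fully-resolved tree and evaluates it.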
Thanks to DataFusion's well-defined extensibility, Blaze can easily be extended to support:
- Various object stores.
- Operators.
- Simple and aggregate functions.
- File formats.
We encourage you to extend DataFusion's capabilities directly and add the corresponding support in Blaze with simple modifications to plan-serde and extension translation.
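As a rough illustration of what "adding a function" amounts to, the sketch below implements a minimal scalar-function registry in plain Rust. The types are invented for clarity; DataFusion's real API registers typed UDFs over Arrow arrays, and Blaze additionally needs a matching plan-serde case so Spark can refer to the function by name.

```rust
use std::collections::HashMap;

// Illustrative sketch only: real engines register vectorized UDFs over
// Arrow arrays, not i64 -> i64 function pointers.
type ScalarFn = fn(i64) -> i64;

struct FunctionRegistry {
    funcs: HashMap<String, ScalarFn>,
}

impl FunctionRegistry {
    fn new() -> Self {
        Self { funcs: HashMap::new() }
    }

    // Registering a function is one insert; the serialized plan then only
    // needs to carry the function's name, not its implementation.
    fn register(&mut self, name: &str, f: ScalarFn) {
        self.funcs.insert(name.to_string(), f);
    }

    // Look the function up by name at execution time.
    fn call(&self, name: &str, arg: i64) -> Option<i64> {
        self.funcs.get(name).map(|f| f(arg))
    }
}

fn main() {
    let mut registry = FunctionRegistry::new();
    registry.register("double", |x| x * 2);
    assert_eq!(registry.call("double", 21), Some(42));
    assert_eq!(registry.call("missing", 1), None);
}
```

The name-based lookup is what keeps the extension surface small: Spark-side translation and the native registry only have to agree on a string.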
To build Blaze, please follow the steps below:
- Install Rust
The native execution library is written in Rust, so you need to install Rust (nightly) first to compile it. We recommend using rustup.
- Install JDK+Maven
Blaze has been well tested on JDK 8 and Maven 3.5 and should work fine with higher versions.
- Check out the source code.
```shell
git clone [email protected]:blaze-init/blaze.git
cd blaze
```
- Build the project.
Specify the shims package matching the Spark version you want to run on. You can build Blaze either in dev mode for debugging or in release mode to unlock its full potential.
```shell
SHIM=spark333 # or spark303
MODE=release # or dev
mvn package -P"${SHIM}" -P"${MODE}"
```
After the build finishes, a fat JAR containing all the dependencies is generated in the `target` directory.
You can use the following command to build a CentOS 7 compatible release:

```shell
SHIM=spark333 MODE=release ./release-docker.sh
```
This section describes how to submit and configure a Spark Job with Blaze support.
- Move the Blaze JAR package to the Spark client classpath (normally `spark-xx.xx.xx/jars/`).
- Add the following configuration to `spark-xx.xx.xx/conf/spark-defaults.conf`:

```
spark.sql.extensions org.apache.spark.sql.blaze.BlazeSparkSessionExtension
spark.shuffle.manager org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleManager
# other Blaze confs are defined in spark-extension/src/main/java/org/apache/spark/sql/blaze/BlazeConf.java
```
- Submit a query with spark-sql, or other tools like spark-thriftserver:

```shell
spark-sql -f tpcds/q01.sql
```
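For ad-hoc testing, the same settings can also be passed on the command line instead of editing the Spark defaults file. This is an untested sketch: the JAR path is a placeholder, so substitute the fat JAR produced by your build.

```shell
# The jar path below is a placeholder; point it at your built fat JAR.
spark-sql \
  --jars /path/to/blaze-fat.jar \
  --conf spark.sql.extensions=org.apache.spark.sql.blaze.BlazeSparkSessionExtension \
  --conf spark.shuffle.manager=org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleManager \
  -f tpcds/q01.sql
```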
Check the Benchmark Results with the latest date for a performance comparison against vanilla Spark on a 1 TB TPC-DS dataset. The results show that Blaze saves ~55% of query time and ~60% of cluster resources on average, with up to ~6x speedup in the best case (q06). Stay tuned and join us for more upcoming thrilling numbers.
We also encourage you to benchmark Blaze and share the results with us. 🤗
We're using Discussions to connect with other members of our community. We hope that you:
- Ask questions you're wondering about.
- Share ideas.
- Engage with other community members.
- Welcome others who are open-minded. Remember that this is a community we build together 💪 .
Blaze is licensed under the Apache 2.0 License. A copy of the license can be found here.