parent 6ab796e
author shirly121 <[email protected]> 1694167237 +0800
committer xiaolei.zl <[email protected]> 1695348300 +0800


[GIE Compiler] fix bugs of columnId in schema

refactor(flex): Replace the Adhoc csv reader with Arrow CSV reader (#3154)

1. Use the Arrow CSV reader to replace the current ad-hoc CSV reader, to support
more configurable options in `bulk_load.yaml`.
2. Introduce `CSVFragmentLoader` and `BasicFragmentLoader` for
`MutablePropertyFragment`.

With this PR merged, `MutablePropertyFragment` supports loading
fragments from CSV with the following options:
- delimiter: default '|'
- header_row: default true
- quoting: default false
- quoting_char: default '"'
- escaping: default false
- escaping_char: default '\\'
- batch_size: the batch size used when reading the file into memory, default
1MB
- batch_reader: default false. If set to true,
`arrow::csv::StreamingReader` is used to parse the input file;
otherwise, `arrow::csv::TableReader` is used.
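
The options above could be expressed in `bulk_load.yaml` roughly as follows. This is a hypothetical sketch based only on the option list in this PR description; the exact key names and nesting of the real schema may differ.

```yaml
# Hypothetical sketch of the CSV loading options; the actual
# bulk_load.yaml schema may nest these keys differently.
loading_config:
  format:
    type: csv
    metadata:
      delimiter: '|'
      header_row: true
      quoting: false
      quoting_char: '"'
      escaping: false
      escaping_char: '\'
      batch_size: 1MB
      batch_reader: false   # true -> arrow::csv::StreamingReader
```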

With this PR merged, graph-loading performance also improves. In the
tables below, "Adhoc Reader" denotes the previously implemented CSV
parser, and 1, 2, 4, 8 denote the parallelism of graph loading, i.e.,
how many vertex/edge labels are processed concurrently.

Note that TableReader reads files around 10x faster than
StreamingReader, most likely because it reads with multiple threads.
See the [Arrow CSV documentation](https://arrow.apache.org/docs/cpp/csv.html) for
details.

| Reader | Phase | 1 | 2 | 4 | 8 |
| --------- | -------------- | ---------- | ---------- | ---------- | ---------- |
| Adhoc Reader | ReadFile\+LoadGraph | 805s | 468s | 349s | 313s |
| Adhoc Reader | Serialization | 126s | 126s | 126s | 126s |
| Adhoc Reader | **Total** | 931s | 594s | 475s | 439s |
| Table Reader | ReadFile | 9s | 9s | 9s | 9s |
| Table Reader | LoadGraph | 455s | 280s | 211s | 182s |
| Table Reader | Serialization | 126s | 126s | 126s | 126s |
| Table Reader | **Total** | 600s | 415s | 346s | 317s |
| Streaming Reader | ReadFile | 91s | 91s | 91s | 91s |
| Streaming Reader | LoadGraph | 555s | 289s | 196s | 149s |
| Streaming Reader | Serialization | 126s | 126s | 126s | 126s |
| Streaming Reader | **Total** | 772s | 506s | 413s | 366s |

| Reader | Phase | 1 | 2 | 4 | 8 |
| --------- | -------------- | ---------- | ---------- | ---------- | ---------- |
| Adhoc Reader | ReadFile\+LoadGraph | 2720s | 1548s | 1176s | 948s |
| Adhoc Reader | Serialization | 409s | 409s | 409s | 409s |
| Adhoc Reader | **Total** | 3129s | 1957s | 1585s | 1357s |
| Table Reader | ReadFile | 24s | 24s | 24s | 24s |
| Table Reader | LoadGraph | 1576s | 949s | 728s | 602s |
| Table Reader | Serialization | 409s | 409s | 409s | 409s |
| Table Reader | **Total** | 2009s | 1382s | 1161s | 1035s |
| Streaming Reader | ReadFile | 300s | 300s | 300s | 300s |
| Streaming Reader | LoadGraph | 1740s | 965s | 669s | 497s |
| Streaming Reader | Serialization | 409s | 409s | 409s | 409s |
| Streaming Reader | **Total** | 2539s | 1674s | 1378s | 1206s |

| Reader | Phase | 1 | 2 | 4 | 8 |
| --------- | -------------- | ---------- | ---------- | ---------- | ---------- |
| Adhoc Reader | ReadFile\+LoadGraph | 8260s | 4900s | 3603s | 2999s |
| Adhoc Reader | Serialization | 1201s | 1201s | 1201s | 1201s |
| Adhoc Reader | **Total** | 9461s | 6101s | 4804s | 4200s |
| Table Reader | ReadFile | 73s | 73s | 96s | 96s |
| Table Reader | LoadGraph | 4650s | 2768s | 2155s | 1778s |
| Table Reader | Serialization | 1201s | 1201s | 1201s | 1201s |
| Table Reader | **Total** | 5924s | 4042s | 3452s | 3075s |
| Streaming Reader | ReadFile | 889s | 889s | 889s | 889s |
| Streaming Reader | LoadGraph | 5589s | 3005s | 2200s | 1712s |
| Streaming Reader | Serialization | 1201s | 1201s | 1201s | 1201s |
| Streaming Reader | **Total** | 7679s | 5095s | 4290s | 3802s |
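
As a quick sanity check on the "around 10x" claim, the ReadFile speedups can be computed directly from the numbers reported above:

```python
# ReadFile times (seconds) from the three tables above, at parallelism 1.
streaming_readfile = [91, 300, 889]
table_readfile = [9, 24, 73]

# Each ratio lands between roughly 10x and 12.5x.
speedups = [s / t for s, t in zip(streaming_readfile, table_readfile)]
print([round(x, 1) for x in speedups])
```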

Fix #3116

Additional squashed fixup commits include:

- minor fix and move modern graph
- fix grin test
- make compiler planner rules unique
- remove plugin-dir
- fix bulk_load.yaml
- support default src/dst primary-key mapping in bulk load
- fix java and add get_person_name.cypher
- [GIE Compiler] minor fix
- use graphscope gstest
- add movie queries and movie test
- fix test scripts and sort query results
- add license
- assorted formatting, CI, and debugging fixes
shirly121 authored and zhanglei1949 committed Sep 26, 2023
1 parent 6e40478 commit f129398
Showing 24 changed files with 658 additions and 86 deletions.
40 changes: 36 additions & 4 deletions .github/workflows/hqps-db-ci.yml
@@ -78,8 +78,8 @@ jobs:
which cargo
# build compiler
cd ${GIE_HOME}/compiler
make build
cd ${GIE_HOME}/
mvn clean install -Pexperimental -DskipTests
- name: Prepare dataset and workspace
env:
@@ -91,6 +91,8 @@
mkdir -p ${INTERACTIVE_WORKSPACE}/data/ldbc
GRAPH_SCHEMA_YAML=${GS_TEST_DIR}/flex/ldbc-sf01-long-date/audit_graph_schema.yaml
cp ${GRAPH_SCHEMA_YAML} ${INTERACTIVE_WORKSPACE}/data/ldbc/graph.yaml
mkdir -p ${INTERACTIVE_WORKSPACE}/data/movies
cp ${GS_TEST_DIR}/flex/movies/movies_schema.yaml ${INTERACTIVE_WORKSPACE}/data/movies/graph.yaml
- name: Sample Query test
env:
@@ -129,7 +131,19 @@ jobs:
eval ${cmd}
done
- name: Run End-to-End cypher adhoc query test
# test movie graph, 8,9,10 are not supported now
# change the default_graph config in ${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml to movies
sed -i 's/default_graph: ldbc/default_graph: movies/g' ${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml
for i in 1 2 3 4 5 6 7 11 12 13 14 15;
do
cmd="./load_plan_and_gen.sh -e=hqps -i=../tests/hqps/queries/movie/query${i}.cypher -w=/tmp/codgen/"
cmd=${cmd}" -o=/tmp/plugin --ir_conf=${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml "
cmd=${cmd}" --graph_schema_path=${INTERACTIVE_WORKSPACE}/data/movies/graph.yaml"
echo $cmd
eval ${cmd}
done
- name: Run End-to-End cypher adhoc ldbc query test
env:
GS_TEST_DIR: ${{ github.workspace }}/gstest
HOME : /home/graphscope/
@@ -138,5 +152,23 @@
cd ${GITHUB_WORKSPACE}/flex/tests/hqps/
export FLEX_DATA_DIR=${GS_TEST_DIR}/flex/ldbc-sf01-long-date
export ENGINE_TYPE=hiactor
bash hqps_cypher_test.sh ${GS_TEST_DIR} ${INTERACTIVE_WORKSPACE}
# change the default_graph config in ${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml to ldbc
sed -i 's/default_graph: movies/default_graph: ldbc/g' ${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml
bash hqps_cypher_test.sh ${INTERACTIVE_WORKSPACE} ldbc ${GS_TEST_DIR}/flex/ldbc-sf01-long-date/audit_bulk_load.yaml \
${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml
- name: Run End-to-End cypher adhoc movie query test
env:
GS_TEST_DIR: ${{ github.workspace }}/gstest
HOME : /home/graphscope/
INTERACTIVE_WORKSPACE: /tmp/interactive_workspace
run: |
cd ${GITHUB_WORKSPACE}/flex/tests/hqps/
export FLEX_DATA_DIR=../../interactive/examples/movies/
export ENGINE_TYPE=hiactor
# change the default_graph config in ${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml to movies
sed -i 's/default_graph: ldbc/default_graph: movies/g' ${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml
bash hqps_cypher_test.sh ${INTERACTIVE_WORKSPACE} movies ${GS_TEST_DIR}/flex/movies/movies_import.yaml \
${GS_TEST_DIR}/flex/ldbc-sf01-long-date/engine_config.yaml
142 changes: 94 additions & 48 deletions flex/interactive/bin/gs_interactive
@@ -370,6 +370,10 @@ function update_engine_config_from_yaml(){
if [[ -n "${default_graph}" ]]; then
DATABASE_CURRENT_GRAPH_NAME="${default_graph}"
fi
# update hiactor shard num
if [[ -n "${compute_engine_shard_num}" ]]; then
DATABASE_COMPUTE_ENGINE_SHARD_NUM="${compute_engine_shard_num}"
fi
# compiler
if [[ -n ${compiler_planner_is_on} ]]; then
DATABASE_COMPILER_PLANNER_IS_ON="${compiler_planner_is_on}"
@@ -380,18 +384,25 @@
fi
# append the founded compiler planner rules to DATABASE_COMPILER_PLANNER_RULES
x=1
CURRENT_DATABASE_COMPILER_PLANNER_RULES=""
while true; do
compiler_planner_rules_x_key="compiler_planner_rules_${x}"
compiler_planner_rules_x=$(eval echo "\$${compiler_planner_rules_x_key}")
if [ -z "${compiler_planner_rules_x}" ]; then
break
fi
# check compiler_planner_rules_x present in DATABASE_COMPILER_PLANNER_RULES, if not, append
if [[ ! "${DATABASE_COMPILER_PLANNER_RULES}" =~ "${compiler_planner_rules_x}" ]]; then
DATABASE_COMPILER_PLANNER_RULES="${DATABASE_COMPILER_PLANNER_RULES},${compiler_planner_rules_x}"
if [[ ! "${CURRENT_DATABASE_COMPILER_PLANNER_RULES}" =~ "${compiler_planner_rules_x}" ]]; then
CURRENT_DATABASE_COMPILER_PLANNER_RULES="${CURRENT_DATABASE_COMPILER_PLANNER_RULES},${compiler_planner_rules_x}"
fi
x=$((x + 1))
done
# if CURRENT_DATABASE_COMPILER_PLANNER_RULES is not empty,override DATABASE_COMPILER_PLANNER_RULES
if [[ -n "${CURRENT_DATABASE_COMPILER_PLANNER_RULES}" ]]; then
# remove the first ','
CURRENT_DATABASE_COMPILER_PLANNER_RULES=$(echo "${CURRENT_DATABASE_COMPILER_PLANNER_RULES}" | sed 's/^,//g')
DATABASE_COMPILER_PLANNER_RULES="${CURRENT_DATABASE_COMPILER_PLANNER_RULES}"
fi
if [[ -n "${compiler_endpoint_address}" ]]; then
DATABASE_COMPILER_ENDPOINT_ADDRESS="${compiler_endpoint_address}"
fi
@@ -1106,7 +1117,7 @@ function do_import(){
info "Graph Schema exists"
# copy the bulk_load_file to container
bulk_load_file_name=$(basename "${bulk_load_file}")
docker_bulk_load_file="/tmp/${bulk_load_file_name}"
docker_bulk_load_file="${HOST_DB_TMP_DIR}/${bulk_load_file_name}"
docker cp "${bulk_load_file}" "${GIE_DB_CONTAINER_NAME}:${docker_bulk_load_file}"

docker_graph_data_dir="${DATABASE_WORKSPACE}/data//${graph_name}/indices"
@@ -1186,6 +1197,26 @@ function do_destroy() {
if [ -f "${HOST_DB_ENV_FILE}" ]; then
rm ${HOST_DB_ENV_FILE}
fi
# rm the temp files used
if [ -f "${HOST_DB_TMP_DIR}/graph0.yaml" ]; then
rm ${HOST_DB_TMP_DIR}/graph0.yaml
fi
if [ -f "${HOST_DB_TMP_DIR}/.enable" ]; then
rm ${HOST_DB_TMP_DIR}/.enable
fi
# rm ${HOST_DB_TMP_DIR}/engine_config.yaml ${HOST_DB_TMP_DIR}/real_engine_config.yaml, ${HOST_DB_TMP_DIR}/graph_exists ${HOST_DB_TMP_DIR}/graph_loaded
if [ -f "${HOST_DB_TMP_DIR}/engine_config.yaml" ]; then
rm ${HOST_DB_TMP_DIR}/engine_config.yaml
fi
if [ -f "${HOST_DB_TMP_DIR}/real_engine_config.yaml" ]; then
rm ${HOST_DB_TMP_DIR}/real_engine_config.yaml
fi
if [ -f "${HOST_DB_TMP_DIR}/graph_exists" ]; then
rm ${HOST_DB_TMP_DIR}/graph_exists
fi
if [ -f "${HOST_DB_TMP_DIR}/graph_loaded" ]; then
rm ${HOST_DB_TMP_DIR}/graph_loaded
fi

info "Finish destroy database"
}
@@ -1242,7 +1273,7 @@ function do_start(){
esac
done
# try parse default_graph from engine_config_file
# generate real engine config file, put it at /tmp/real_engine_config.yaml
# generate real engine config file, put it at ${HOST_DB_TMP_DIR}/real_engine_config.yaml
if [ -z "${graph_name}" ]; then
graph_name=${DATABASE_CURRENT_GRAPH_NAME}
else
@@ -1255,7 +1286,7 @@
exit 1
fi

real_engine_config_file="/tmp/real_engine_config.yaml"
real_engine_config_file="${HOST_DB_TMP_DIR}/real_engine_config.yaml"
if [ -z "${engine_config_file}" ]; then
if ! generate_real_engine_conf "${real_engine_config_file}"; then
err "generate engine config file failed"
@@ -1276,22 +1307,22 @@
# check if modern_graph exists in container, get the result as bool
docker_graph_schema_file="${DATABASE_WORKSPACE}/data/${graph_name}/graph.yaml"
wal_file="${DATABASE_WORKSPACE}/data/${graph_name}/indices/init_snapshot.bin"
if [ -f "/tmp/graph_exists" ]; then
if ! rm /tmp/graph_exists; then
err "fail to remove /tmp/graph_exists, please remove it manually"
if [ -f "${HOST_DB_TMP_DIR}/graph_exists" ]; then
if ! rm "${HOST_DB_TMP_DIR}/graph_exists"; then
err "fail to remove ${HOST_DB_TMP_DIR}/graph_exists, please remove it manually, maybe sudo rm -rf ${HOST_DB_TMP_DIR}/graph_exists"
exit 1
fi
fi
if [ -f "/tmp/graph_loaded" ]; then
if ! rm /tmp/graph_loaded; then
err "fail to remove /tmp/graph_loaded, please remove it manually"
if [ -f "${HOST_DB_TMP_DIR}/graph_loaded" ]; then
if ! rm "${HOST_DB_TMP_DIR}/graph_loaded"; then
err "fail to remove ${HOST_DB_TMP_DIR}/graph_loaded, please remove it manually, maybe sudo rm -rf ${HOST_DB_TMP_DIR}/graph_loaded"
exit 1
fi
fi
docker exec "${GIE_DB_CONTAINER_NAME}" bash -c "( [ -f ${docker_graph_schema_file} ] && echo \"true\" e) || echo \"false\"" > /tmp/graph_exists
docker exec "${GIE_DB_CONTAINER_NAME}" bash -c "( [ -f ${wal_file} ] && echo \"true\" e) || echo \"false\"" > /tmp/graph_loaded
graph_exists=$(cat /tmp/graph_exists)
graph_loaded=$(cat /tmp/graph_loaded)
docker exec "${GIE_DB_CONTAINER_NAME}" bash -c "( [ -f ${docker_graph_schema_file} ] && echo \"true\" e) || echo \"false\"" > ${HOST_DB_TMP_DIR}/graph_exists
docker exec "${GIE_DB_CONTAINER_NAME}" bash -c "( [ -f ${wal_file} ] && echo \"true\" e) || echo \"false\"" > ${HOST_DB_TMP_DIR}/graph_loaded
graph_exists=$(cat ${HOST_DB_TMP_DIR}/graph_exists)
graph_loaded=$(cat ${HOST_DB_TMP_DIR}/graph_loaded)
if [ "${graph_exists}" = "false" ]; then
# if graph_name is default_graph, we should create it first
# otherwise, we should tell user to create it first
@@ -1584,7 +1615,7 @@ function do_compile() {
. ${HOST_DB_ENV_FILE}
fi

real_engine_config_file="/tmp/real_engine_config.yaml"
real_engine_config_file="${HOST_DB_TMP_DIR}/real_engine_config.yaml"
# update default graph name
DATABASE_CURRENT_GRAPH_NAME=${graph_name}
if ! generate_real_engine_conf "${real_engine_config_file}"; then
@@ -1598,9 +1629,30 @@
docker_graph_dir="${DATABASE_WORKSPACE}/data/${graph_name}"
docker_graph_schema="${docker_graph_dir}/graph.yaml"
docker exec "${GIE_DB_CONTAINER_NAME}" bash -c "[ -d ${docker_graph_dir} ] || (echo -e \"${RED} Graph [${graph_name}] not exists, please create it first.${NC}\" && exit 1)"
# Fetch current installed procedures, and check if the procedure is already installed
# if not compile_only, we should add the stored_procedure_name to .enable
docker_graph_enable_file="${docker_graph_dir}/plugins/.enable"
# copy container to host
if [ -f "${HOST_DB_TMP_DIR}/.enable" ]; then
if ! rm -f ${HOST_DB_TMP_DIR}/.enable; then
err "fail to remove ${HOST_DB_TMP_DIR}/.enable, please remove it manually, maybe sudo rm -rf ${HOST_DB_TMP_DIR}/.enable"
exit 1
fi
fi
# if docker_graph_enable_file exists. copy it to host
docker exec "${GIE_DB_CONTAINER_NAME}" test -e "${docker_graph_enable_file}" && (docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" "${HOST_DB_TMP_DIR}/.enable")

if [ ! -f "${HOST_DB_TMP_DIR}/.enable" ]; then
touch "${HOST_DB_TMP_DIR}/.enable"
fi
# check if the stored_procedure_name is already in .enable
if grep -q "${stored_procedure_name}" "${HOST_DB_TMP_DIR}/.enable"; then
err "stored_procedure_name [${stored_procedure_name}] already exists, please use another name"
exit 1
fi

container_output_dir="${DATABASE_WORKSPACE}/data/${graph_name}/plugins"
cotainer_input_path="/tmp/${file_name}"
cotainer_input_path="${HOST_DB_TMP_DIR}/${file_name}"
# docker cp file to container
cmd="docker cp ${real_file_path} ${GIE_DB_CONTAINER_NAME}:${cotainer_input_path}"
eval ${cmd} || exit 1
@@ -1631,22 +1683,12 @@
fi
info "success generate dynamic lib ${output_file}."

# if not compile_only, we should add the stored_procedure_name to .enable
docker_graph_enable_file="${docker_graph_dir}/plugins/.enable"
# copy container to host
rm -f /tmp/.enable
# if docker_graph_enable_file exists. copy it to host
docker exec "${GIE_DB_CONTAINER_NAME}" test -e "${docker_graph_enable_file}" && (docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" "/tmp/.enable")

if [ ! -f "/tmp/.enable" ]; then
touch "/tmp/.enable"
fi
# if compile_only equal to false
if [ "${compile_only}" = false ]; then
echo "${stored_procedure_name}" >> /tmp/.enable
echo "${stored_procedure_name}" >> ${HOST_DB_TMP_DIR}/.enable
fi
# copy back
docker cp "/tmp/.enable" "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" || exit 1
docker cp "${HOST_DB_TMP_DIR}/.enable" "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" || exit 1
}

function do_enable(){
@@ -1716,26 +1758,31 @@ function do_enable(){
docker exec "${GIE_DB_CONTAINER_NAME}" bash -c "[ -d ${docker_graph_dir} ] || (echo -e \"${RED} Graph ${graph_name} not exists, please create it first.${NC}\" && exit 1)"
docker_graph_plugin_dir="${docker_graph_dir}/plugins"
docker_graph_enable_file="${docker_graph_plugin_dir}/.enable"
rm -f /tmp/.enable
if [ -f "${HOST_DB_TMP_DIR}/.enable" ]; then
if ! rm -f ${HOST_DB_TMP_DIR}/.enable; then
err "fail to remove ${HOST_DB_TMP_DIR}/.enable, please remove it manually, maybe sudo rm -rf ${HOST_DB_TMP_DIR}/.enable"
exit 1
fi
fi
# copy the .enable file to host, and append the stored_procedure_names to it; if the stored_procedure_names already exists, do nothing
docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" "/tmp/.enable" || true
if [ ! -f "/tmp/.enable" ]; then
touch "/tmp/.enable"
docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" "${HOST_DB_TMP_DIR}/.enable" || true
if [ ! -f "${HOST_DB_TMP_DIR}/.enable" ]; then
touch "${HOST_DB_TMP_DIR}/.enable"
fi
old_line_num=$(wc -l < /tmp/.enable)
old_line_num=$(wc -l < ${HOST_DB_TMP_DIR}/.enable)
# split the stored_procedure_names by ',' and append them to .enable file
IFS=',' read -ra stored_procedure_names_array <<< "${stored_procedure_names}"
for stored_procedure_name in "${stored_procedure_names_array[@]}"; do
# check if the stored_procedure_name already exists in .enable file
if grep -q "${stored_procedure_name}" "/tmp/.enable"; then
if grep -q "${stored_procedure_name}" "${HOST_DB_TMP_DIR}/.enable"; then
info "stored_procedure_name ${stored_procedure_name} already exists in .enable file, skip"
else
echo "${stored_procedure_name}" >> /tmp/.enable
echo "${stored_procedure_name}" >> ${HOST_DB_TMP_DIR}/.enable
fi
done
# copy the .enable file back to container
docker cp "/tmp/.enable" "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" || exit 1
new_line_num=$(wc -l < /tmp/.enable)
docker cp "${HOST_DB_TMP_DIR}/.enable" "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" || exit 1
new_line_num=$(wc -l < ${HOST_DB_TMP_DIR}/.enable)
info "Successfully enable stored_procedures ${stored_procedure_names} for graph [${graph_name}], ${old_line_num} -> ${new_line_num}"
}

@@ -1823,16 +1870,16 @@ function do_disable(){
# add the names to .enable file for graph_name

# copy the .enable file to host, and remove the stored_procedure_names from it
docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" "/tmp/.enable" || exit 1
old_line_num=$(wc -l < /tmp/.enable)
docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" "${HOST_DB_TMP_DIR}/.enable" || exit 1
old_line_num=$(wc -l < ${HOST_DB_TMP_DIR}/.enable)
# split the stored_procedure_names by ',' and remove them from .enable file
IFS=',' read -ra stored_procedure_names_array <<< "${stored_procedure_names}"
for stored_procedure_name in "${stored_procedure_names_array[@]}"; do
sed -i "/${stored_procedure_name}/d" /tmp/.enable
sed -i "/${stored_procedure_name}/d" ${HOST_DB_TMP_DIR}/.enable
done
# copy the .enable file back to container
docker cp "/tmp/.enable" "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" || exit 1
new_line_num=$(wc -l < /tmp/.enable)
docker cp "${HOST_DB_TMP_DIR}/.enable" "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" || exit 1
new_line_num=$(wc -l < ${HOST_DB_TMP_DIR}/.enable)
info "Successfully disable stored_procedures ${stored_procedure_names} for graph [${graph_name}], ${old_line_num} -> ${new_line_num}"
}

@@ -1880,10 +1927,9 @@ function do_show(){
# check if docker_graph_yaml exists, if not ,exit
docker exec "${GIE_DB_CONTAINER_NAME}" bash -c "[ -f ${docker_graph_yaml} ] || (echo -e \"${RED}Graph [${graph_name}] not exists, please create it first. ${NC}\" && exit 1)" || exit 1
# copy to host
docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_yaml}" "/tmp/graph.yaml" || (echo "fail to copy" && exit 1)
# read /tmp/graph.yaml find the enabled_list list, and print the following lines
docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_yaml}" "${HOST_DB_TMP_DIR}/graph.yaml" || (echo "fail to copy" && exit 1)
# parse /tmp/graph.yaml and get stored_procedures_enable_lists array
eval $(parse_yaml "/tmp/graph.yaml")
eval $(parse_yaml "${HOST_DB_TMP_DIR}/graph.yaml")
x=1
stored_procedures_enable_lists_array=()
while true; do
@@ -1903,14 +1949,14 @@
docker_graph_enable_file="${docker_graph_plugin_dir}/.enable"
# check if docker_graph_enable_file exists, if not ,exit
docker exec "${GIE_DB_CONTAINER_NAME}" bash -c "[ -f ${docker_graph_enable_file} ] || (echo -e \"${RED}Graph [${graph_name}] has no procedures registered. ${NC}\" && exit 1)" || exit 1
docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" "/tmp/.enable" || exit 1
docker cp "${GIE_DB_CONTAINER_NAME}:${docker_graph_enable_file}" "${HOST_DB_TMP_DIR}/.enable" || exit 1
disabled_list=()
# iterate the .enable file, for each line, check if it is in stored_procedures_enable_lists_array, if not, add it to disabled_list
while read line; do
if [[ ! " ${stored_procedures_enable_lists_array[@]} " =~ " ${line} " ]]; then
disabled_list+=("${line}")
fi
done < /tmp/.enable
done < ${HOST_DB_TMP_DIR}/.enable
info "Disabled Size: ${#disabled_list[@]}"

# print the enabled_list and disabled_list
1 change: 0 additions & 1 deletion flex/interactive/conf/engine_config.yaml
@@ -1,5 +1,4 @@
log_level: INFO # default INFO
default_graph: modern # configure the graph to be loaded while starting the service, if graph name not specified
compute_engine:
shard_num: 1 # the number of shared workers, default 1
compiler:
@@ -1 +1 @@
MATCH(p : person {id: $personId}) RETURN p.name;
MATCH(p : person {id: $personId}) RETURN p.name;