Run tests for different connectors in different builds #10440
Conversation
Looks good to me, but I'll leave it to someone more familiar with product tests to accept
Can you also make Presto use INFO for logging?
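For reference, a minimal sketch of the kind of change being asked for, assuming the standard Presto log.properties mechanism; the exact config path used by the product-test Docker images is an assumption, not taken from this PR:

# Sketch only: set the Presto logger to INFO via log.properties
# (the file location inside the product-test images is assumed here).
cat > etc/log.properties <<'EOF'
com.facebook.presto=INFO
EOF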
@@ -73,7 +73,7 @@ script:
   - |
     if [[ -v PRODUCT_TESTS_BASIC_ENVIRONMENT ]]; then
       presto-product-tests/bin/run_on_docker.sh \
-        multinode -x quarantine,big_query,storage_formats,profile_specific_tests,tpcds
+        multinode -x quarantine,big_query,storage_formats,profile_specific_tests,tpcds,cassandra,mysql_connector,postgresql_connector,mysql
Why do we have mysql and mysql_connector? It would be nice to merge them. Maybe something for a follow-up PR.
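For illustration only, if the two groups were merged into a single mysql group, the exclusion list in the basic build above could shrink accordingly (a hypothetical sketch, not part of this PR):

# Hypothetical exclusion list after merging mysql and mysql_connector:
multinode -x quarantine,big_query,storage_formats,profile_specific_tests,tpcds,cassandra,mysql,postgresql_connector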
Yeah, I noticed that. We should definitely do that as part of the follow-up.
Please file a ticket for that, so we don't forget about it.
EXTERNAL_SERVICES="hadoop-master sqlserver" | ||
elif [[ "$ENVIRONMENT" == "singlenode-ldap" ]]; then | ||
EXTERNAL_SERVICES="hadoop-master ldapserver" | ||
elif [[ "$ENVIRONMENT" == "singlenode-mysql" ]]; then | ||
EXTERNAL_SERVICES="hadoop-master mysql" |
do we need hadoop when testing mysql?
We don't. But due to a limitation in Tempto related to convention-based tests, no matter what test suite you select it will always try to provision the TPCH dataset in Hive. We should fix that as part of the follow-up.
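Once that Tempto limitation is fixed, the connector environments could presumably drop hadoop-master entirely; a hypothetical sketch (not part of this PR) of what the selection in run_on_docker.sh might then look like:

# Hypothetical follow-up: no Hive/TPCH provisioning needed, so only the
# connector's own backend has to be started for these environments.
elif [[ "$ENVIRONMENT" == "singlenode-mysql" ]]; then
    EXTERNAL_SERVICES="mysql"
elif [[ "$ENVIRONMENT" == "singlenode-cassandra" ]]; then
    EXTERNAL_SERVICES="cassandra"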
Can you file a ticket for tempto for that? I will try to fix that.
elif [[ "$ENVIRONMENT" == "singlenode-mysql" ]]; then | ||
EXTERNAL_SERVICES="hadoop-master mysql" | ||
elif [[ "$ENVIRONMENT" == "singlenode-postgresql" ]]; then | ||
EXTERNAL_SERVICES="hadoop-master postgresql" |
do we really need hadoop when testing psql?
ditto
elif [[ "$ENVIRONMENT" == "singlenode-postgresql" ]]; then | ||
EXTERNAL_SERVICES="hadoop-master postgresql" | ||
elif [[ "$ENVIRONMENT" == "singlenode-cassandra" ]]; then | ||
EXTERNAL_SERVICES="hadoop-master cassandra" |
And here, do we need hadoop when testing cassandra? I would prefer to use tpch if any connector other than cassandra has to be used.
same problem
@@ -7,6 +7,5 @@ source "${BASH_SOURCE%/*}/../common/compose-commons.sh"
 docker-compose \
     -f ${BASH_SOURCE%/*}/../common/standard.yml \
     -f ${BASH_SOURCE%/*}/../common/kerberos.yml \
-    -f ${BASH_SOURCE%/*}/../common/jdbc_db.yml \
You haven't changed the test suites for this environment; are you sure it was not using the jdbc connectors?
We used to start the jdbc services only for the multinode and singlenode environments. This include was simply an extra copy-paste artifact.
https://github.com/prestodb/presto/blob/master/presto-product-tests/bin/run_on_docker.sh#L145
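In other words, the guard at the linked line works roughly like the sketch below (simplified; the exact condition and service names are in the linked source), which is why removing the include from the other environments changes nothing:

# Simplified sketch of the linked run_on_docker.sh logic, not the exact code:
# the jdbc databases are only ever started for these two environments.
if [[ "$ENVIRONMENT" == "multinode" || "$ENVIRONMENT" == "singlenode" ]]; then
    EXTERNAL_SERVICES="$EXTERNAL_SERVICES mysql postgresql"
fi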
@@ -6,6 +6,5 @@ source "${BASH_SOURCE%/*}/../common/compose-commons.sh"
 docker-compose \
     -f ${BASH_SOURCE%/*}/../common/standard.yml \
-    -f ${BASH_SOURCE%/*}/../common/jdbc_db.yml \
You haven't changed the test suites for this environment; are you sure it was not using the jdbc connectors?
Although jdbc_db was included, it was never provisioned:
https://github.com/prestodb/presto/blob/master/presto-product-tests/bin/run_on_docker.sh#L150
@@ -7,6 +7,5 @@ source "${BASH_SOURCE%/*}/../common/compose-commons.sh"
 docker-compose \
     -f ${BASH_SOURCE%/*}/../common/standard.yml \
     -f ${BASH_SOURCE%/*}/../common/kerberos.yml \
-    -f ${BASH_SOURCE%/*}/../common/jdbc_db.yml \
You haven't changed the test suites for this environment; are you sure it was not using the jdbc connectors?
ditto
@@ -6,6 +6,5 @@ source "${BASH_SOURCE%/*}/../common/compose-commons.sh"
 docker-compose \
     -f ${BASH_SOURCE%/*}/../common/standard.yml \
-    -f ${BASH_SOURCE%/*}/../common/jdbc_db.yml \
You haven't changed the test suites for this environment; are you sure it was not using the jdbc connectors?
ditto
version: '2'
services:
  postgresql:
automation fails with: Caused by: java.net.UnknownHostException: postgres
It works fine locally. On Travis, for some reason, the hostname comes after the service name and the hostname field is ignored. This is weird. For now I'm just going to revert the service name.
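For context, a sketch of what the reverted definition presumably looks like (image, ports, and volumes omitted; the names are inferred from the error message above, not copied from this PR): the tests resolve the host postgres, so the service name goes back to postgres instead of relying on the hostname field that Travis appears to ignore.

# Sketch only; real postgresql.yml settings omitted, names inferred.
version: '2'
services:
  postgres:
    hostname: postgres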
This allows us to avoid spinning up all backends (hadoop, mysql, cassandra, postgres) altogether, to save some memory
(branch updated from f93d6c8 to e5c5a79)
You mean setting all here to
@arhimondr you asked me for review, but I think @electrum's and @kokosing's reviews are more than enough. Let me know if I'm still needed here.
Thanks for fixing this @arhimondr, merged.
We were observing a lot of memory-related intermittent product test failures on master. In order to test the statistics API we are required to run ANALYZE on Hive, which takes some additional memory. The idea is to move the mysql, cassandra, and postgresql tests to a separate build, so there is no need to provision both the memory-demanding cassandra and hive for the same build.

This allows us to avoid spinning up all backends (hadoop, mysql, cassandra, postgres) altogether, to save some memory.

This is just a temporary solution to fix the intermittent failures. Eventually we would like to have something like this: #10439
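Concretely, the connector groups excluded from the basic build would then run in dedicated builds; a hypothetical sketch of such an invocation (the PRODUCT_TESTS_CONNECTOR_ENVIRONMENT variable and the -g group-selection flag are illustrative assumptions, not necessarily what this PR adds to .travis.yml):

# Hypothetical sketch mirroring the basic-environment block in .travis.yml;
# the variable name and the -g flag are assumed for illustration.
if [[ -v PRODUCT_TESTS_CONNECTOR_ENVIRONMENT ]]; then
    presto-product-tests/bin/run_on_docker.sh \
        singlenode-mysql -g mysql,mysql_connector
    presto-product-tests/bin/run_on_docker.sh \
        singlenode-postgresql -g postgresql_connector
    presto-product-tests/bin/run_on_docker.sh \
        singlenode-cassandra -g cassandra
fi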