fix(trino): add delay time to avoid Trino issue #735
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@ Coverage Diff @@
##           main     #735   +/-   ##
=======================================
  Coverage      ?   85.58%
=======================================
  Files         ?       12
  Lines         ?      666
  Branches      ?      104
=======================================
  Hits          ?      570
  Misses        ?       74
  Partials      ?       22
```
What exception does it throw? Can't that exception be added to `wait_container_is_ready` or one of the other decorators we have, so it would catch it and rerun the waiter method? This module seems to be abusing the decorators a bit here...
The exception is raised when we use the Trino client to connect to the container; we can't catch it.
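For context, the decorator approach the reviewer describes could be sketched roughly like this. This is a self-contained illustration of the retry-decorator pattern, not the actual testcontainers API: `wait_ready`, `TrinoQueryError`, and the timing values are all assumptions made up for the example.

```python
import functools
import time


class TrinoQueryError(Exception):
    """Stand-in for the client-side exception raised before the node is ready."""


def wait_ready(transient_exc, max_tries=5, sleep_time=0.01):
    """Retry the wrapped waiter while it raises the given transient exception."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last = None
            for _ in range(max_tries):
                try:
                    return fn(*args, **kwargs)
                except transient_exc as exc:
                    last = exc
                    time.sleep(sleep_time)
            raise TimeoutError("waiter did not succeed") from last
        return wrapper
    return decorator


attempts = {"n": 0}


@wait_ready(TrinoQueryError)
def connect():
    # Simulate the first two connection attempts failing before success.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TrinoQueryError("nodes is empty")
    return "connected"
```

With this pattern, `connect()` would succeed on the third attempt without the caller seeing the transient failures.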
The Trino logs in the container show the server as started, but the first query can still fail. The retry loop in this PR:

```python
deadline = time.time() + c.max_tries
while time.time() < deadline:
    try:
        cur = conn.cursor()
        cur.execute("SELECT * FROM tpch.tiny.nation LIMIT 1")
        cur.fetchall()
        return
    except Exception:
        time.sleep(c.sleep_time)
raise TimeoutError(f"Trino did not start within {c.max_tries:.3f} seconds")
```
Hi @alexanderankin, could you recheck this PR?
I found that queries against the Trino container usually fail with a `nodes is empty` error. I tried executing `SELECT * FROM system.runtime.nodes` and `SHOW CATALOGS LIKE 'tpch'` before querying, to ensure the node was active and the `tpch` catalog was ready, but it still hit the problem. This may be related to a Trino discovery node problem; a similar issue was reported previously (see trinodb/trino#13388). The final solution is to sleep for a few seconds.
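The final approach described above (a fixed delay, followed by bounded retries of the first query) can be sketched as follows. The `run_query` stub and all timing values here are placeholders for illustration, not the real Trino client or the PR's actual constants:

```python
import time


def wait_for_first_query(run_query, startup_delay=0.05, max_wait=1.0,
                         sleep_time=0.01):
    """Sleep past the discovery-node window, then retry the first query."""
    time.sleep(startup_delay)  # unconditional delay to let discovery settle
    deadline = time.time() + max_wait
    last = None
    while time.time() < deadline:
        try:
            return run_query("SELECT * FROM tpch.tiny.nation LIMIT 1")
        except Exception as exc:
            last = exc
            time.sleep(sleep_time)
    raise TimeoutError(
        f"Trino did not start within {max_wait:.3f} seconds") from last


calls = {"n": 0}


def flaky_query(sql):
    # Simulate "nodes is empty" on the first attempt only.
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("nodes is empty")
    return [("ALGERIA",)]
```

Combining an unconditional delay with a retry loop covers both the common case (discovery just needs a moment) and the slower case (the first query still fails once or twice after the delay).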