Hi, we are using the following command to unload some query results from Redshift to S3. But every time Redshift aborts our query, the Spark job keeps running and never receives any exception or signal. Do you have any suggestions on exception handling?
That is most likely because the Spark session is not being closed.
I am assuming you create or use a Spark session in the code above.
Could you try adding sc.stop() in your exception handling?
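For reference, here is a minimal PySpark sketch of what that could look like. The connector format string, JDBC URL, query, and S3 paths are placeholders (the original command isn't shown in this thread), so adapt them to your setup:

```python
# Minimal sketch, assuming a PySpark job that unloads Redshift query
# results to S3 via the spark-redshift connector. All connection
# details below are hypothetical placeholders, not from the issue.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-unload").getOrCreate()

try:
    df = (
        spark.read.format("com.databricks.spark.redshift")  # or your connector
        .option("url", "jdbc:redshift://host:5439/db?user=u&password=p")  # placeholder
        .option("query", "SELECT ...")                                    # placeholder
        .option("tempdir", "s3a://bucket/tmp/")                           # placeholder
        .load()
    )
    df.write.parquet("s3a://bucket/output/")  # placeholder output path
except Exception as e:
    # An aborted Redshift query should surface as an exception from
    # the read or write action; handle or log it here.
    print(f"Unload failed: {e}")
    raise
finally:
    # Stop the underlying SparkContext (the sc.stop() suggested above)
    # so the job terminates instead of hanging.
    spark.sparkContext.stop()
```

Putting the stop in a `finally` block ensures the context is torn down whether the unload succeeds or fails.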