We are using Spark 2.3.0 with Hadoop 3 to fetch records from a Hive table. While using the Hive Warehouse Connector library, we are hitting an issue where only 1,000 records are fetched, even though more than a million records match the query we pass. Is there any way to override this limit so we can retrieve more records?
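For context, here is a minimal sketch of the kind of call that exhibits this cap, assuming the Hortonworks Hive Warehouse Connector API (`HiveWarehouseSession`); the database and table names are placeholders:

```scala
import com.hortonworks.hwc.HiveWarehouseSession
import org.apache.spark.sql.SparkSession

object HwcExecuteDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hwc-execute-cap")
      .getOrCreate()

    // Build an HWC session on top of the existing SparkSession.
    val hive = HiveWarehouseSession.session(spark).build()

    // execute() collects the result through the driver and, per the
    // behaviour reported above, caps the returned rows at 1,000.
    val df = hive.execute("SELECT * FROM some_db.some_table")
    println(df.count()) // prints at most the capped row count
  }
}
```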
I saw this issue a while ago. execute() appears to run only through the driver (and increasing that limit can cause an OOM), so it should be used primarily for catalog operations. The solution was to use executeQuery() instead.
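A sketch of the suggested fix, again assuming the Hortonworks HWC API. Note that the config key shown for raising the execute() row cap is an assumption and may differ between connector versions, so check the docs for your HWC release:

```scala
import com.hortonworks.hwc.HiveWarehouseSession
import org.apache.spark.sql.SparkSession

object HwcExecuteQueryDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hwc-executequery")
      // Hypothetical override of the execute() row cap; the exact key may
      // vary by HWC version. Raising it still routes rows through the
      // driver and risks OOM, which is why executeQuery() is preferred.
      .config("spark.datasource.hive.warehouse.exec.results.max", "10000")
      .getOrCreate()

    val hive = HiveWarehouseSession.session(spark).build()

    // executeQuery() runs on the executors and returns a distributed
    // DataFrame, so there is no driver-side row cap.
    val df = hive.executeQuery("SELECT * FROM some_db.some_table")
    println(df.count()) // full matching row count
  }
}
```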
Yes, but a LIMIT clause used with executeQuery() does not return the requested number of rows (off by up to 260x in my tests).