According to AWS, unloading in Parquet format is up to 2x faster and consumes up to 6x less storage in Amazon S3 than text formats. See https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html
Currently, CSV is used for unloading, which is inefficient. I can't see a valid reason to keep using CSV and maintaining custom code to transform it into an RDD / DataFrame. See https://github.com/spark-redshift-community/spark-redshift/blob/master/src/main/scala/io/github/spark_redshift_community/spark/redshift/RedshiftRelation.scala#L198
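For illustration, here is a minimal Scala sketch of the idea, not the connector's actual code: issue `UNLOAD ... FORMAT AS PARQUET` over JDBC, then hand the S3 output to Spark's built-in Parquet reader instead of the custom CSV-to-Row conversion. The `unloadAsParquet` helper, connection details, and paths are hypothetical placeholders.

```scala
import java.sql.DriverManager
import org.apache.spark.sql.{DataFrame, SparkSession}

// Hypothetical helper sketching the proposal: unload a Redshift query to S3
// as Parquet, then read it back with Spark's native Parquet source.
def unloadAsParquet(spark: SparkSession,
                    jdbcUrl: String,
                    query: String,
                    s3Prefix: String,
                    iamRole: String): DataFrame = {
  // Redshift requires single quotes inside the unloaded query to be doubled.
  val unloadSql =
    s"""UNLOAD ('${query.replace("'", "''")}')
       |TO '$s3Prefix'
       |IAM_ROLE '$iamRole'
       |FORMAT AS PARQUET""".stripMargin

  val conn = DriverManager.getConnection(jdbcUrl)
  try {
    conn.createStatement().execute(unloadSql) // Redshift writes Parquet files under s3Prefix
  } finally {
    conn.close()
  }

  // No hand-rolled CSV parsing: Spark's Parquet reader carries the schema
  // and types. (In practice the path scheme may differ: Redshift expects
  // s3://, while Spark typically reads via s3a://.)
  spark.read.parquet(s3Prefix)
}
```

Beyond the speed and storage gains, this would let the connector drop the CSV conversion code entirely, since Parquet files already carry a typed schema that Spark reads natively.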