Data archiving for dbt-generated model tables #113
Comments
I am analyzing this requirement and will update you accordingly.
Hi Ajay,
Hi Santosh, I have analyzed this requirement; it is a new feature for the dbt-vertica connector. I plan to release it with the dbt-vertica 1.7.0 release. If development and QA are completed in time it will ship with dbt-vertica 1.7.0; otherwise it will go out in a later release of the connector. Thanks & Regards,
Hi Ajay, Thanks & Best Regards
Hi Ajay,
This functionality will be part of release 1.7.0 or later and is not compatible with previous versions of the connector.
Requirements
We are using dbt-vertica in one of our products. We need to archive data to S3 as Parquet files and retain the archives for 3 years before purging them permanently from the database.
We need to archive records older than one year so that the model tables hold only the last year's data for our reporting purposes.
Solution
In most cases we use incremental materialization, and the data volume is growing rapidly. We are investigating an elegant way to archive this data instead of simply deleting and purging it.
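For context, here is a minimal sketch of the kind of incremental model we run; the model, source, and column names are hypothetical:

```sql
-- models/fact_events.sql (hypothetical model, source, and column names)
{{
    config(
        materialized = 'incremental',
        unique_key = 'event_id'
    )
}}

select
    event_id,
    event_date,
    payload
from {{ source('raw', 'events') }}

{% if is_incremental() %}
  -- On incremental runs, only pull rows newer than what is already in the table.
  where event_date > (select max(event_date) from {{ this }})
{% endif %}
```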
Our Approach
We would like to archive the model table data using the following Vertica features, and have two questions:
• Is it a good idea to alter the dbt model table from a separate script to add partitioning?
• Is it a good idea to use a separate script that moves the older records from the model table into a staging table with the Vertica function MOVE_PARTITIONS_TO_TABLE, and then archives them from the staging table to S3 with EXPORT TO PARQUET? (A sketch of this flow follows the list.)
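A minimal sketch of that second option in Vertica SQL, assuming the model table is partitioned by month on a date column; all table names, partition key values, and the S3 path are hypothetical:

```sql
-- 1. Retrofit monthly partitioning onto the dbt model table
--    (hypothetical schema/table/column names).
ALTER TABLE analytics.fact_events
    PARTITION BY EXTRACT(YEAR FROM event_date) * 100 + EXTRACT(MONTH FROM event_date)
    REORGANIZE;

-- 2. Create a staging table with the same definition and move the
--    partitions older than one year into it.
CREATE TABLE analytics.fact_events_archive LIKE analytics.fact_events INCLUDING PROJECTIONS;
SELECT MOVE_PARTITIONS_TO_TABLE(
    'analytics.fact_events',          -- source table
    '202201',                         -- minimum partition key to move (hypothetical)
    '202212',                         -- maximum partition key to move (hypothetical)
    'analytics.fact_events_archive'   -- target staging table
);

-- 3. Export the staged records to S3 as Parquet, then clear the staging table.
EXPORT TO PARQUET(directory = 's3://my-bucket/archive/fact_events/2022')
    AS SELECT * FROM analytics.fact_events_archive;
TRUNCATE TABLE analytics.fact_events_archive;
```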
It would be a great help if you could suggest a better solution to us.