[ETL-644] Update drop_table_duplicates function to use new method #113

Merged

philerooski merged 1 commit into main from etl-644 on May 8, 2024

Conversation

philerooski
Contributor

The primary changes are on L192-208, where we swap the old (and faulty) sort + drop-duplicates method of removing duplicates for the new window + sort + rank + filter method.
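
To make the difference concrete, here is a minimal PySpark sketch of the two approaches; `table`, `record_id`, and the key columns are illustrative stand-ins, not the actual job code:

from pyspark.sql import Window
from pyspark.sql.functions import col, row_number

# Old approach: a global sort followed by dropDuplicates. dropDuplicates
# shuffles by the key columns and does not guarantee that the row kept per
# key respects the earlier sort, so the "latest" record can be lost.
deduped_old = (
    table.sort(col("export_end_date").desc())
    .dropDuplicates(["record_id"])
)

# New approach: rank rows within each key's window under an explicit
# ordering and keep only rank 1, so the most recent export always wins.
window = Window.partitionBy("record_id").orderBy(col("export_end_date").desc())
deduped_new = (
    table.withColumn("rank", row_number().over(window))
    .filter(col("rank") == 1)
    .drop("rank")
)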

I also did some refactoring to make things a little more intuitive:

  • drop_table_duplicates and drop_deleted_healthkit_data now return Spark DataFrames rather than Glue DynamicFrames. These functions already produced DataFrames internally, so I removed the final cast to a DynamicFrame to simplify things. We cast back to a DynamicFrame once, on L662, since both branches of the conditional (relationalize + write, else write) take a DynamicFrame as input (see the sketch after this list).
  • Mostly as a consequence of the above refactor, rather than deriving the data type from the table name in non-main functions, I now derive the data type once in main and pass it as an argument to the functions that reference it. This reduces the number of places where a data type is derived from its table name (or from the job arguments, where the table name is derived), making things more consistent overall.
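
For reference, a minimal sketch of the cast-back described in the first bullet, assuming a glue_context variable is in scope (the actual job code may differ):

from awsglue.dynamicframe import DynamicFrame

# drop_table_duplicates / drop_deleted_healthkit_data now return a plain
# Spark DataFrame; we cast back to a DynamicFrame once, right before the
# relationalize/write branches, which both expect a DynamicFrame.
table_dynamic = DynamicFrame.fromDF(
    table_no_duplicates,   # Spark DataFrame returned upstream
    glue_context,          # assumed GlueContext for this job
    "table_no_duplicates", # name for the resulting DynamicFrame
)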

@philerooski philerooski requested a review from a team as a code owner May 8, 2024 00:05

sonarqubecloud bot commented May 8, 2024

Quality Gate passed

Issues
13 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
0.0% Duplication on New Code

See analysis details on SonarCloud

Contributor

@BryanFauble left a comment

Excellent work!

window_ordered = window_unordered.orderBy(
    col("export_end_date").desc()
)
# continuation reconstructed from the PR description (rank, then keep rank 1):
table_no_duplicates = (
    table.withColumn("rank", row_number().over(window_ordered))
    .filter(col("rank") == 1)
    .drop("rank")
)

@philerooski (Contributor Author)

I think this way may be more common in databases, versus sorting and dropping as you would an in-memory dataframe. Perhaps it has something to do with the distributed (across partitions) nature of the data.
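
A toy illustration of why the sorted-then-dropDuplicates approach is unreliable on partitioned data (assumes an active spark session; the data and column names are made up):

from pyspark.sql.functions import col

df = spark.createDataFrame(
    [("a", 1), ("a", 2), ("b", 1)],
    ["record_id", "export_end_date"],
)
# After the global sort, dropDuplicates re-shuffles by the key columns, and
# which row survives per key is not tied to the earlier sort order. This is
# NOT guaranteed to keep export_end_date == 2 for record "a":
df.sort(col("export_end_date").desc()).dropDuplicates(["record_id"]).show()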

Member

@thomasyu888 left a comment

🔥 LGTM!

Contributor

@rxu17 left a comment

LGTM!

@philerooski
Contributor Author

As a sanity check, I compared the Fitbit Intraday Combined Parquet data produced by the main branch (which uses the older method of dropping duplicates) to this feature branch. Both were produced from the same set of JSON pilot data in the dev environment. While each contained the same number of unique records, I saw differences in which exports the records were sourced from, confirming that these two methods were dropping duplicates differently:

>>> fitbit_branch.export_end_date.value_counts() 
export_end_date
2023-01-12T00:00:00    1996228
2023-01-03T00:00:00     173251
2023-01-14T00:00:00      59005
2023-06-27T00:00:00      10050
Name: count, dtype: int64
>>> fitbit_main.export_end_date.value_counts()
export_end_date
2023-01-12T00:00:00    1918724
2023-01-03T00:00:00     251167
2023-01-14T00:00:00      58593
2023-06-27T00:00:00      10050
Name: count, dtype: int64

I then read the JSON data as a pandas dataframe and dropped duplicates. I understand and trust how duplicates are dropped in an in-memory dataframe more than in a Spark dataframe, so this is just a way to independently verify that we are dropping duplicates correctly. I saw the same distribution of records as the feature branch:

>>> fitbit_json_df_no_dups.export_end_date.value_counts()
2023-01-12T00:00:00    1996228
2023-01-03T00:00:00     173251
2023-01-14T00:00:00      59005
2023-06-27T00:00:00      10050
Name: export_end_date, dtype: int64
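
The pandas check was along these lines (a sketch; the loading step, fitbit_json_df, and the record_id key column are assumptions, not the exact commands used):

import pandas as pd

# assumed: the pilot JSON was already loaded into a dataframe, e.g. via
# something like pd.read_json("path/to/pilot_data.json", lines=True)

# Sort so the most recent export comes first, then keep the first row per
# record; pandas drop_duplicates(keep="first") respects the current row
# order, which is what makes this an easy-to-trust reference check.
fitbit_json_df_no_dups = (
    fitbit_json_df.sort_values("export_end_date", ascending=False)
    .drop_duplicates(subset=["record_id"], keep="first")
)
print(fitbit_json_df_no_dups.export_end_date.value_counts())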

@philerooski philerooski merged commit 5216f97 into main May 8, 2024
15 checks passed
@philerooski philerooski deleted the etl-644 branch May 8, 2024 18:45