
feat(incremental): optimize 'insert_overwrite' strategy (#1409) #1410

Open
wants to merge 1 commit into base: main

Conversation


@AxelThevenot AxelThevenot commented Nov 21, 2024

resolves #1409
docs "N/A"

Problem

The MERGE statement is suboptimal in BigQuery when the 'insert_overwrite' strategy for incremental models only needs to replace whole partitions.

Solution

For the insert_overwrite strategy, where we are looking to replace rows at the partition level, there is a better solution, and here is why:

  • a DELETE or INSERT statement is cheaper than a MERGE statement.
  • incremental tables are the most expensive tables in real-world projects.
  • the DELETE statement in BigQuery is free at the partition level.

This has been tested at Carrefour, the company I work for.

  • Replacing the MERGE statement alone reduces the cost by 50.4% and the elapsed time by 35.2% (slot-based pricing, not on-demand).
  • Across the overall procedure, it reduces the cost by 26.1% and the elapsed time by 23.1%.

The DELETE and INSERT are wrapped in a transaction so that no rows are lost if an error occurs.
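For illustration, the generated SQL would look roughly like the following. This is a minimal sketch only, assuming a hypothetical model my_project.my_dataset.my_model partitioned by day on a date_day column, a temp relation my_model__dbt_tmp, and an illustrative partitions_for_replacement variable; the exact SQL emitted by the macro may differ.

-- compute which partitions are present in the new data
declare partitions_for_replacement array<date>;
set partitions_for_replacement = (
  select array_agg(distinct date(date_day))
  from my_project.my_dataset.my_model__dbt_tmp
);

begin transaction;

-- drop only the affected partitions; as noted above, a partition-level
-- DELETE is free, unlike a MERGE which has to scan the target table
delete from my_project.my_dataset.my_model
where date(date_day) in unnest(partitions_for_replacement);

-- re-insert the freshly computed rows for those partitions
insert into my_project.my_dataset.my_model
select * from my_project.my_dataset.my_model__dbt_tmp;

commit transaction;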

Checklist

  • I have read the contributing guide and understand what's expected of me
  • I have run this code in development and it appears to resolve the stated issue
  • This PR includes tests, or tests are not required/relevant for this PR --> (using the same existing tests for 'insert_overwrite')
  • This PR has no interface changes (e.g. macros, cli, logs, json artifacts, config files, adapter interface, etc) or this PR has already received feedback and approval from Product or DX

@AxelThevenot AxelThevenot requested a review from a team as a code owner November 21, 2024 18:18
@AxelThevenot AxelThevenot changed the title refactor(incremental): optimize 'insert_overwrite' strategy (#1409) feat(incremental): optimize 'insert_overwrite' strategy (#1409) Nov 21, 2024
{{ sql_header if sql_header is not none and include_sql_header }}

begin
begin transaction;


We had a problem with transactions, where other jobs can conflict with them.

For example, if this transactional statement is running and another (non-transactional) statement mutates the same table, the transactional one fails:

https://cloud.google.com/bigquery/docs/transactions#transaction_concurrency

This is different from non-transactional queries, which can run concurrently.

At my company it's relatively common to delete rows as part of GDPR compliance, or to update late-arriving columns in post-hooks.

I'm not saying this reduction in slot time isn't worth the cost of conflicting jobs, I just want to point it out as a past learning! And if there is a non-transactional version of this logic, that would sidestep the transaction concurrency issue.
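To make that concrete, here is a minimal two-session sketch of the conflict (hypothetical table, column, and filter values; the exact behaviour and error are described in the documentation linked above):

-- session 1: the incremental run generated by this PR
begin transaction;
delete from my_project.my_dataset.my_model
where date(date_day) = '2024-11-21';
-- ... transaction still open, insert not yet committed ...

-- session 2: an unrelated job running at the same time,
-- e.g. a GDPR deletion or a post-hook update on the same table
delete from my_project.my_dataset.my_model
where user_id = 'abc123';

-- session 1: committing can now fail with a transaction concurrency error,
-- because another statement mutated the table while the transaction was open
commit transaction;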


We also had a problem where we tried a separate DELETE + INSERT without a transaction: jobs that ran in between saw no data (especially when the DELETE + INSERT was catching up to the context date in Airflow).

Development

Successfully merging this pull request may close these issues.

[Feature] Optimize incremental 'insert_overwrite' strategy