Move tests from scheduled queries / business queries to DBT #83
Changes from commits: 88d0fed, a05afc4, 8dc3b54, acfc893, c62a947, a86ea70, 228f0dc, 6bd6abf, 74dad5a
Modified file:

@@ -14,7 +14,6 @@ sources:
      - incremental_unique_combination_of_columns:
          combination_of_columns:
            - account_id
            - sequence_number
            - ledger_entry_change
            - last_modified_ledger
        date_column_name: "batch_run_date"

Review comment: I think […]

Reply: Got it, I will keep it then. It was not present in the scheduled query, so I treated that as the source of truth.

Reply: Addressed in 6bd6abf.
New file (17 lines added):

@@ -0,0 +1,17 @@
{{ config(
    severity="error"
    , tags=["singular_test"]
    )
}}

Review comment (on the singular_test tag): FYI, this tag runs every 30 minutes in Airflow. That is a higher frequency than the cloud function and scheduled query tests used to run at, which is good. Just mentioning it in case we get noisy alerts, where we might want to adjust the query and/or the frequency the tests are run at, possibly with a separate dbt tag, as sketched below.
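A minimal sketch of the separate-tag idea, assuming a hypothetical singular_test_daily tag that is not part of this PR; a second, less frequent Airflow task would select it:

{{ config(
    severity="error"
    , tags=["singular_test_daily"]  -- hypothetical tag for tests run on a slower cadence
    )
}}

-- the slower Airflow task would then run: dbt test --select tag:singular_test_daily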
with bucketlist_db_size as (
    select sequence,
        closed_at,
        total_byte_size_of_bucket_list / 1000000000 as bl_db_gb
    from {{ source('crypto_stellar', 'history_ledgers') }}
Review comment: I think this should be a ref('stg_history_ledgers') instead of a source.

Reply: Won't the test still use […]? Example: stellar-dbt-public/models/staging/stg_history_ledgers.sql, lines 6 to 12 in 2e63a8a.

Reply: It will use the test project/dataset, because the source for staging tables is overwritten by the dbt_project.yml in the private dbt repo. I don't think generic tests have such an override defined. So technically you could add a generic test source override to dbt_project.yml, but my preference would be to just change the generic test.

Reply: Got it. Yes, agreed; in that case we should just use ref, as sketched below.

Reply: Addressed in 74dad5a.
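A minimal sketch of the change the thread converges on; stg_history_ledgers is the staging model named in the linked example:

-- before: hardcoded to the prod source
from {{ source('crypto_stellar', 'history_ledgers') }}

-- after: resolves through the staging model, so the private repo's
-- dbt_project.yml can point test runs at the test project/dataset
from {{ ref('stg_history_ledgers') }}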
    where closed_at >= TIMESTAMP_SUB('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL 1 HOUR)
    -- alert when the bucket list has grown larger than 12 GB
    and total_byte_size_of_bucket_list / 1000000000 >= 12
)

select * from bucketlist_db_size
New file (36 lines added):

@@ -0,0 +1,36 @@
{{ config(
    severity="error"
    , tags=["singular_test"]
    )
}}
-- The enriched_history_operations table is dependent on the
-- history_operations table to load. It is assumed that any id
-- present in the upstream table should be loaded in the
-- downstream. If records are not present, alert the team.
WITH find_missing AS (
    SELECT op.id,
        op.batch_run_date,
        op.batch_id
    FROM {{ source('crypto_stellar', 'history_operations') }} op
    LEFT OUTER JOIN {{ ref('enriched_history_operations') }} eho
Review comment: Same comment here; I think this should be a ref('stg_history_operations') instead of a source (sketched below). Otherwise these tests would be hardcoded to just prod, right? Edit: also, in this case there would be a mismatch between data if run in test, because there is a […]

Reply: Addressed in 74dad5a.
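The same source-to-ref swap sketched for this test; stg_history_operations follows the staging naming convention from the thread above and is an assumption here:

-- before: hardcoded to the prod source
FROM {{ source('crypto_stellar', 'history_operations') }} op
-- after: staging model, overridable per environment
FROM {{ ref('stg_history_operations') }} op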
    ON op.id = eho.op_id
    WHERE eho.op_id IS NULL
    -- Scan only the last 24 hours of data. Alert runs intraday so failures
    -- are caught and resolved quickly.
    AND TIMESTAMP(op.batch_run_date) >= TIMESTAMP_SUB('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL 1 DAY)
),
find_max_batch AS (
    SELECT MAX(batch_run_date) AS max_batch
    FROM {{ source('crypto_stellar', 'history_operations') }}
    WHERE TIMESTAMP(batch_run_date) >= TIMESTAMP_SUB('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL 1 DAY)
)
SELECT batch_run_date,
    batch_id,
    count(*)
FROM find_missing
-- Account for delay in loading the history_operations table prior to
-- the enriched_history_operations table being loaded.
WHERE batch_run_date != (SELECT max_batch FROM find_max_batch)
GROUP BY 1, 2
ORDER BY 1
New file (65 lines added):

@@ -0,0 +1,65 @@
{{ config(
    severity="error"
    , tags=["singular_test"]
    )
}}

-- The query compares the number of transactions and operations
-- reported as committed per ledger in history_ledgers with the
-- actual transaction count and operation count in the ledger.
-- If the counts mismatch, there was a batch processing error
-- and transactions or operations were dropped from the dataset.
-- Get the actual count of transactions per ledger
WITH txn_count AS (
    SELECT ledger_sequence, COUNT(id) AS txn_transaction_count
    FROM {{ source('crypto_stellar', 'history_transactions') }}
    -- Take all ledgers committed in the last 36 hours to validate newly written data.
    -- Alert runs at 12pm UTC in GCP, which creates the 36 hour interval.
    WHERE TIMESTAMP(batch_run_date) >= TIMESTAMP_SUB('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL 1 DAY)
    GROUP BY ledger_sequence
),
-- Get the actual count of operations per ledger
operation_count AS (
    SELECT A.ledger_sequence, COUNT(B.id) AS op_operation_count
    FROM {{ source('crypto_stellar', 'history_transactions') }} A
    JOIN {{ source('crypto_stellar', 'history_operations') }} B
        ON A.id = B.transaction_id
    WHERE TIMESTAMP(A.batch_run_date) >= TIMESTAMP_SUB('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL 1 DAY)
        AND TIMESTAMP(B.batch_run_date) >= TIMESTAMP_SUB('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL 1 DAY)
    GROUP BY A.ledger_sequence
),
-- Compare actual counts with the counts reported in the ledgers table
final_counts AS (
    SELECT A.sequence, A.closed_at, A.batch_id,
        A.tx_set_operation_count AS expected_operation_count,
        A.operation_count,
        (A.failed_transaction_count + A.successful_transaction_count) AS expected_transaction_count,
        COALESCE(B.txn_transaction_count, 0) AS actual_transaction_count,
        COALESCE(C.op_operation_count, 0) AS actual_operation_count
    FROM {{ source('crypto_stellar', 'history_ledgers') }} A
    LEFT OUTER JOIN txn_count B
        ON A.sequence = B.ledger_sequence
    LEFT OUTER JOIN operation_count C
        ON A.sequence = C.ledger_sequence
    WHERE TIMESTAMP(A.batch_run_date) >= TIMESTAMP_SUB('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL 1 DAY)
),
raw_values AS (
    SELECT sequence, closed_at, batch_id,
        expected_transaction_count, actual_transaction_count,
        expected_operation_count, actual_operation_count
    FROM final_counts
    WHERE (expected_transaction_count <> actual_transaction_count)
        OR (expected_operation_count <> actual_operation_count)
)
SELECT batch_id,
    SUM(expected_transaction_count) AS exp_txn_count,
    SUM(actual_transaction_count) AS actual_txn_count,
    SUM(expected_operation_count) AS exp_op_count,
    SUM(actual_operation_count) AS actual_op_count
FROM raw_values
-- @TODO: figure out a more precise delay for ledgers. Since tables are loaded on a 15-30 min delay,
-- we do not want premature alerts on row count mismatches when it could be loading latency.
WHERE closed_at <= TIMESTAMP_ADD('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL -180 MINUTE)
GROUP BY batch_id
ORDER BY batch_id
New file (17 lines added):

@@ -0,0 +1,17 @@
{{ config(
    severity="warn"
    , tags=["singular_test"]
    )
}}

with surge_pricing_check as (
    select inclusion_fee_charged,
        ledger_sequence,
        closed_at
    from {{ ref('enriched_history_operations_soroban') }}
    where closed_at >= TIMESTAMP_SUB('{{ dbt_airflow_macros.ts(timezone=none) }}', INTERVAL 1 HOUR)
    -- inclusion fees over 100 stroops indicate surge pricing on the network
    and inclusion_fee_charged > 100
)

select * from surge_pricing_check
Review comment: I'm not sure batch_run_date should be included. My assumption was that the history_assets table is unique on asset_code, asset_issuer, and asset_type; otherwise we would have "duplicate assets", where each asset would have multiple batch_run_dates. A sketch of that constraint follows.

Reply: Addressed in 6bd6abf.
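A minimal sketch of the uniqueness constraint described above, in the same style as the combination-of-columns test at the top of this diff; the table placement and the use of date_column_name as the incremental filter are illustrative assumptions, not part of this PR:

tables:
  - name: history_assets
    tests:
      - incremental_unique_combination_of_columns:
          combination_of_columns:
            - asset_code
            - asset_issuer
            - asset_type
          date_column_name: "batch_run_date"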