Merge pull request #525 from RedHatInsights/fix_irregular_test
Remove log checks, rely on process already finished
joselsegura authored Nov 3, 2023
2 parents 180c0d5 + e715d39 commit 52f164b
Showing 4 changed files with 47 additions and 10 deletions.
36 changes: 36 additions & 0 deletions docs/scenarios_list.md
@@ -1004,3 +1004,39 @@ nav_order: 3
* Check if CCX Upgrade Risk Data Engineering Service application is available
* Check if CCX Upgrade Risk Data Engineering Service can be run

## [`parquet-factory/indexes.feature`](https://github.com/RedHatInsights/insights-behavioral-spec/blob/main/features/parquet-factory/indexes.feature)

* If Parquet file already exists, the index of the new one should be 1

## [`parquet-factory/kafka_messages.feature`](https://github.com/RedHatInsights/insights-behavioral-spec/blob/main/features/parquet-factory/kafka_messages.feature)

* Parquet Factory should fail if it cannot read from Kafka
* Parquet Factory shouldn't finish if only messages from the previous hour arrived
* Parquet Factory shouldn't finish if not all the topics and partitions are filled with current hour messages
* Parquet Factory should finish if all the topics and partitions are filled with current hour messages
* After aggregating messages from previous hour, the first messages from current hour have to be processed first
* Parquet Factory should finish if the limit of kafka messages is exceeded even if no messages from current hour arrived
* Parquet Factory should not commit the messages from current hour if there are no prior messages
* Parquet Factory shouldn't send duplicate rows

## [`parquet-factory/metrics.feature`](https://github.com/RedHatInsights/insights-behavioral-spec/blob/main/features/parquet-factory/metrics.feature)

* If the Pushgateway is not accessible, Parquet Factory should run successfully
* If the Pushgateway is accessible, Parquet Factory should run successfully and send the metrics to the Pushgateway
* If the Pushgateway is accessible and I run Parquet Factory with messages from the previous hour, the "files_generated" and "inserted_rows" metrics should be 1 for all the tables
* If the Pushgateway is accessible and Parquet Factory errors, the "error_count" metric should increase

## [`parquet-factory/parquet_files.feature`](https://github.com/RedHatInsights/insights-behavioral-spec/blob/main/features/parquet-factory/parquet_files.feature)

* Table generation: cluster_info
* Table generation: available_updates
* Table generation: conditional_update_conditions
* Table generation: conditional_update_risks
* Table generation: cluster_thanos_info

## [`parquet-factory/s3.feature`](https://github.com/RedHatInsights/insights-behavioral-spec/blob/main/features/parquet-factory/s3.feature)

* Parquet Factory should fail if it cannot connect with S3. When I rerun it, it should re-process the messages from the beginning
* Parquet Factory should fail if it cannot find the bucket
* Parquet Factory shouldn't fail if it cannot find the folder/prefix where the files are stored

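The metrics scenarios above name three Pushgateway metrics: "files_generated", "inserted_rows" and "error_count". Below is a minimal sketch of how a test could check that they were pushed, assuming a local Pushgateway on its default port 9091; the endpoint and the check itself are illustrative, not taken from the test suite.

```bash
#!/usr/bin/env bash
# Hypothetical check: scrape the Pushgateway's /metrics endpoint and assert
# that the metrics referenced by the scenarios above were pushed.
PUSHGATEWAY_URL="${PUSHGATEWAY_URL:-http://localhost:9091}"   # assumed default port

metrics=$(curl -sf "${PUSHGATEWAY_URL}/metrics") || {
    echo "Pushgateway not reachable at ${PUSHGATEWAY_URL}" >&2
    exit 1
}

for metric in files_generated inserted_rows error_count; do
    if ! grep -q "^${metric}" <<< "${metrics}"; then
        echo "metric ${metric} was not pushed" >&2
        exit 1
    fi
done
echo "all expected metrics present"
```
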
10 changes: 0 additions & 10 deletions features/parquet-factory/kafka_messages.feature
@@ -156,16 +156,6 @@ Feature: Ability to process the Kafka messages correctly
And I set the environment variable "PARQUET_FACTORY__KAFKA_FEATURES__MAX_CONSUMED_RECORDS" to "1"
And I run Parquet Factory with a timeout of "10" seconds
Then Parquet Factory should have finish
And The logs should contain
| topic | partition | offset | message |
| incoming_features_topic | 0 | 0 | message processed |
| incoming_features_topic | 1 | 0 | message processed |
| incoming_rules_topic | 0 | 0 | message processed |
| incoming_rules_topic | 1 | 0 | message processed |
| incoming_features_topic | 0 | 1 | FINISH |
| incoming_features_topic | 1 | 1 | FINISH |
| incoming_rules_topic | 0 | 1 | FINISH |
| incoming_rules_topic | 0 | 1 | FINISH |
Then The S3 bucket is not empty

Scenario: Parquet Factory should not commit the messages from current hour if there are no prior messages
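The removed table above asserted on specific log lines; per the commit message, the test now relies on the process having finished within the timeout instead. Here is a minimal shell sketch of that idea, assuming GNU coreutils `timeout` and a hypothetical `./parquet-factory` binary; the real suite drives this through its behave steps, not through this script.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: run the service under a timeout and decide the test
# outcome from the exit status instead of from log contents.
export PARQUET_FACTORY__KAFKA_FEATURES__MAX_CONSUMED_RECORDS=1   # as in the scenario above

timeout 10 ./parquet-factory   # binary name is an assumption
status=$?

if [[ ${status} -eq 124 ]]; then
    # GNU timeout reports 124 when it had to kill the process.
    echo "Parquet Factory did not finish within 10 seconds" >&2
    exit 1
fi
echo "Parquet Factory finished on its own (exit code ${status})"
```
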
10 changes: 10 additions & 0 deletions parquet_factory_tests.sh
@@ -75,6 +75,16 @@ function code_coverage_report() {
EOF
}

function add_exit_trap {
local to_add=$1
if [[ -z "$exit_trap_command" ]]
then
exit_trap_command="$to_add"
else
exit_trap_command="$exit_trap_command; $to_add"
fi
}

flag=${1:-""}

if [[ "${flag}" = "coverage" ]]
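The new `add_exit_trap` function accumulates cleanup commands in the `exit_trap_command` variable. A minimal usage sketch follows (the function body is copied from the diff above); the final `trap` registration is an assumption about how the accumulated string is used, as that step is not shown in this hunk.

```bash
#!/usr/bin/env bash
exit_trap_command=""

function add_exit_trap {
    local to_add=$1
    if [[ -z "$exit_trap_command" ]]; then
        exit_trap_command="$to_add"
    else
        exit_trap_command="$exit_trap_command; $to_add"
    fi
}

# Queue cleanup actions; they run in the order they were added.
work_dir=$(mktemp -d)
add_exit_trap "echo 'cleaning up'"
add_exit_trap "rm -rf '$work_dir'"
trap "$exit_trap_command" EXIT   # assumed registration step, not part of the diff
```
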
1 change: 1 addition & 0 deletions tools/gen_scenario_list.py
@@ -50,6 +50,7 @@
"ccx-notification-writer",
"ccx-upgrades-inference",
"ccx-upgrades-data-eng",
"parquet-factory",
)

# generate page header
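This one-line change registers the `parquet-factory` directory with the scenario-list generator, which is how the new section of docs/scenarios_list.md above is produced. A rough shell equivalent of that extraction is sketched below, assuming scenario titles are the `Scenario:` / `Scenario Outline:` lines in the feature files; the real generator is tools/gen_scenario_list.py, not this snippet.

```bash
#!/usr/bin/env bash
# Hypothetical sketch: emit a Markdown section per feature file in the newly
# registered directory, one bullet per scenario title.
for feature in features/parquet-factory/*.feature; do
    echo "## [\`${feature#features/}\`](https://github.com/RedHatInsights/insights-behavioral-spec/blob/main/${feature})"
    echo
    grep -E '^[[:space:]]*Scenario( Outline)?:' "${feature}" \
        | sed -E 's/^[[:space:]]*Scenario( Outline)?:[[:space:]]*/* /'
    echo
done
```
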
