
2023-10-16 main -> prod #2516

Merged — 5 commits merged into prod from main on Oct 16, 2023

Conversation

@danswick (Contributor) commented Oct 16, 2023

Deploying to staging:

jadudm and others added 5 commits October 16, 2023 16:48
* This... validated two workbooks...

Huh. That worked.

* Fixes an extraction bug

When a cell has a None value, it should become "" in the JSON.
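A minimal sketch of that extraction fix, assuming openpyxl-style cell values; `normalize_cell` is a hypothetical helper name, not necessarily the one in the FAC codebase:

```python
# Empty spreadsheet cells come back as None; the JSON should carry ""
# instead, so downstream schema checks see an empty string, not null.
def normalize_cell(value):
    """Return "" for a None cell value; pass everything else through."""
    return "" if value is None else value

row = [normalize_cell(v) for v in ("ABC123", None, 42)]
# row == ["ABC123", "", 42]
```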

* Broke things apart.

This made no changes other than to take the previous work and break it
into semantically meaningful files/pieces.

Now, each workbook has a file.

The IR work has its own file.

* Small improvements, merging main.

* Passes regression tests.

* Passes all regressions, demonstrates new checks

This commit now passes all workbooks through the IR.

The README is started, but not complete.

It demonstrates the authoring of checks in keeping with the
cross-validations. The errors generated pass through to the frontend
correctly, and visualize like all other workbook upload errors.

A no-op check is provided. It always passes.

A check for a missing UEI is provided. It correctly stops empty UEIs. If
a UEI is present, but does not match the regex, it passes through the
IR check, and is caught by the schema validation.
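A hedged sketch of that division of labor (the function name and message text are illustrative): the IR check fails only on a missing UEI and deliberately lets malformed ones through to the schema.

```python
def check_uei_exists(uei):
    """Fail only when the UEI is absent or empty.

    A present-but-malformed UEI passes through here on purpose; the
    JSON Schema regex validation reports the format error instead."""
    if uei is None or str(uei).strip() == "":
        return "The UEI is missing from the Coversheet."
    return None  # present; format is the schema's job
```

So `check_uei_exists("")` produces an error, while `check_uei_exists("not-a-uei")` returns None and the schema catches the bad format downstream.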

* Fixes state/other cluster errors

This takes a recent helpdesk ticket and demonstrates how to improve the
error messages.

In this case, a user had

N/A N/A N/A

for cluster name, state cluster name, and other cluster name.

This causes horrible JSON Schema validation errors.

Now, we tell them exactly what they need to do in order to fix the
workbook.

Once the instructions are followed, the user's workbook is fixed, and it
passes JSON Schema validation.
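A hypothetical reconstruction of that check; the trigger is the literal "N/A" in all three columns, and the message wording here is illustrative, not the FAC's actual text:

```python
def check_cluster_names(cluster, state_cluster, other_cluster):
    # The helpdesk workbook had "N/A" in all three cluster columns,
    # which produced opaque JSON Schema errors downstream.
    if (cluster, state_cluster, other_cluster) == ("N/A", "N/A", "N/A"):
        return ("Select N/A for CLUSTER NAME only, and leave the state "
                "cluster and other cluster name columns blank.")
    return None
```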

* is_direct and passthrough names

This adds two checks:

1. The user failed to include the is_direct value(s)
2. The user failed to include a passthrough name when is_direct is `N`.
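The pair of checks might look like this sketch (function names and messages are assumptions):

```python
def check_is_direct(is_direct):
    """Check 1: the is_direct value cannot be blank."""
    if is_direct is None or str(is_direct).strip() == "":
        return "DIRECT AWARD must be Y or N; it cannot be blank."
    return None

def check_passthrough_name(is_direct, passthrough_name):
    """Check 2: a passthrough name is required when is_direct is N."""
    if is_direct == "N" and not (passthrough_name or "").strip():
        return "A passthrough name is required when DIRECT AWARD is N."
    return None
```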

* Adds a loan balance check

The JSON Schema validation for this was confusing.

When N, we expect nothing in the balance column.

When Y, we expect a balance.

This adds a check that provides a clear error message.
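A minimal sketch of the clarified rule, with assumed column names and message text:

```python
def check_loan_balance(guarantee, balance):
    """Y requires a balance; N requires the balance column be empty."""
    has_balance = str(balance or "").strip() != ""
    if guarantee == "Y" and not has_balance:
        return "A loan balance is required when LOAN GUARANTEE is Y."
    if guarantee == "N" and has_balance:
        return "Leave the loan balance blank when LOAN GUARANTEE is N."
    return None  # consistent pair
```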

* This checks that the workbook is right

This adds up-front checks for:

1. Making sure we are getting a FAC workbook (it checks for a
   Coversheet)
2. Making sure it is the right workbook for the given validation
   section

Now runs notes-to-sefa-specific checks.

* Ran black.

However, I might have run the wrong version.

* Fixing import, removing excel.

Not needed anymore, and confuses the linter.

* Handles the is_major and audit_report_type

This Y/N pair breaks in JSON Schemas.

Better error reporting on this pair of columns.

* Adds an award finding check

Disambiguates the Y/N for prior reference numbers.

* Checks for empty rows in the middle of the data.

How do people do these things?
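One plausible way to detect those holes (a sketch; the real check works over the IR rather than raw lists): a row is a problem if it is completely empty but sits above the last non-empty row.

```python
def interior_empty_rows(rows):
    """Return indices of fully empty rows that appear before the last
    non-empty row, i.e. blank rows in the middle of the data."""
    nonempty = [i for i, row in enumerate(rows)
                if any(c not in (None, "") for c in row)]
    if not nonempty:
        return []  # nothing but blanks: no *interior* holes
    return [i for i in range(nonempty[-1]) if i not in nonempty]

interior_empty_rows([["a"], [None], [""], ["b"]])  # → [1, 2]
```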

* Checks award numbers

1. Makes sure they are all unique
2. Makes sure they're in the right sequence

Removed some logging from no_major_program_type
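A sketch of both award-number checks; the `AWARD-####` shape matches the reference format visible elsewhere in this PR (e.g. AWARD-0001), but the function name and messages are assumptions:

```python
def check_award_references(refs):
    """Award references must be unique and sequential from AWARD-0001."""
    errors = []
    if len(set(refs)) != len(refs):
        errors.append("Award references must be unique.")
    expected = [f"AWARD-{n:04}" for n in range(1, len(refs) + 1)]
    if refs != expected:
        errors.append("Award references must run AWARD-0001, "
                      "AWARD-0002, ... in order.")
    return errors
```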

* Checking for blank fields

1. Cannot have blank findings counts
2. Cannot have blank cluster names

When these are blank, they sometimes get through to the schema, and then
really bad errors come back.

* Moving import to a better place w.r.t. the list.

All of these probably have to go to the top of the file for the linter.
It made sense to me to put them near the lists I was building for now.

* Added other half of passthrough names

Handles more workbook cases now.

* Replaced some print statements

Couldn't chase down an error because of a weird... Excel issue where
fonts are involved.

* Ready to fix tests...

Changing to the IR broke a lot of tests.

* Fixing more missing columns

ZD 114 had many fields missing; so many that it would make it through
the improved checks and still fail the schema.

Now, 114 would be guided through the submission by these errors. Their
workbook can be started in the errored state, and the error messages
will guide them to a valid workbook.

* Adds transforms

The first transform is on Notes to SEFA.

We have an invisible column called seq_number. Users somehow manage to
delete things in a way that the cell ends up with a `REF!` instead of a
number. The solution is to replace the column with values that are
generated computationally on the server side.

It is not clear whether we actually use this value.
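The transform itself could be as simple as this sketch (`regenerate_seq_numbers` is a hypothetical name): whatever is in the hidden column, including `#REF!`, is discarded and replaced with server-generated values.

```python
def regenerate_seq_numbers(rows):
    """Overwrite the hidden seq_number column with 1..N, computed
    server-side, ignoring whatever the user's edits left behind."""
    return [dict(row, seq_number=i) for i, row in enumerate(rows, start=1)]

rows = [{"note": "a", "seq_number": "#REF!"}, {"note": "b", "seq_number": 7}]
regenerate_seq_numbers(rows)
# → [{"note": "a", "seq_number": 1}, {"note": "b", "seq_number": 2}]
```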

* Using section names, adding prechecks

Broke the general checks out, so they can run before transforms.

Some transforms may have to run before. (I found one in notes-to-sefa.)

So, we need to make sure we're in the right workbook. Then we can clean
up the data. Then we can begin checking it for semantics. Then we can
rewrite it into a JSON doc.

So.

Those changes are in this commit, and some tightening on the passthrough
names. All inspired by the same workbook...

* Adding the Audit Finding grid check

As a backstop to the schemas, adding in the allowed grid from the UG.
Makes sure the Y/N combos are allowed.
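A sketch of the mechanism only: the real allowed rows come from the Uniform Guidance grid; the two combinations below are placeholders, not actual UG entries.

```python
# Placeholder grid: each tuple is one allowed combination of the
# findings Y/N columns. The real set is transcribed from the UG.
ALLOWED_FINDINGS_COMBOS = {
    ("Y", "N", "N", "N", "N"),  # placeholder row
    ("N", "Y", "N", "N", "N"),  # placeholder row
}

def combo_is_allowed(combo):
    """True if this row's Y/N combination appears in the allowed grid."""
    return combo in ALLOWED_FINDINGS_COMBOS
```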

* Minor cleanups/removals

Comments and printlns that made it through.

* Bringing in workbook E2E code.

* Adds workbooks to validate with, tests

This now tests many workbooks.

It runs them through the new generic importer and the JSON validator.

If they validate, it is good. If not, it fails.

Next is to add explicit failing tests.

* Adding a failure test

This walks workbooks that are constructed to fail.

The test runs all of them, and counts the failures.

It should be the case that every workbook in the directory fails.
Therefore, we count the workbooks and count the failures.

If they come out different, clearly, we did not fail everywhere we
expected.
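The counting logic, sketched here decoupled from the filesystem (`validate` stands in for the real workbook validator): the test passes only when the failure count equals the workbook count.

```python
def count_failures(workbooks, validate):
    """Run every workbook through `validate`; return (total, failed).

    In the should-fail directory, total must equal failed — any gap
    means a workbook we expected to fail actually passed."""
    failures = sum(1 for wb in workbooks if not validate(wb))
    return len(workbooks), failures

total, failed = count_failures(["bad-1.xlsx", "bad-2.xlsx"], lambda wb: False)
assert total == failed  # every constructed-to-fail workbook failed
```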

* Confirmed fails correctly with... correct wb

Placing a correct workbook in the directory does the job.

* Adding a breadcrumb

The naming in the fail directory structure matters.

* Adding another breadcrumb.

* Removing print statements

* Adds full check paths everywhere

This adds the full check path everywhere.

It also adds a new failure check for CAP, to make sure the general
validations are running at the start.

* Linting.

* More linting.

* Linting.

* These are needed for tests in the tree.

The linter says they have to go.

* Removing more prints

There's one I can't find.

Also, I'm not going to be able to satisfy the linter. I give up for now.

* Some unit tests, removing an unused function

From some tests off to the side earlier.

* Linting.

My version of black does not match the team's.

Our docs do not make clear to me how I should be running the linter so
as to match the online environment.

* This runs end-to-end with cross-val

I didn't realize cross-val was baked into the `sac` object.

This now runs cross-validation on the workbooks when the E2E code is
run.

* Trying to work into unit tests

Can't quite, because it wants to write to the real DB (vs. a mocked DB).

For now, this will have to be a future thing.

* Updates from initial review.

Expanding variable names, adding comments.

TBD some more unit tests.

* Fixing test books.

* Fixing error introduced through simplification

Forgot that the *range* is needed for error message construction, not
just the *values* from the range.

* Fixed.

* Removing a period from an error message.

* Updated workbook versions and updated federalAwards template formula in column J to prevent endless non-zero values

* Linting

* Necessary change to prevent an award reference like AWARD-0001-EXTRA from passing
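A plausible shape of that fix (the exact pattern is an assumption): anchoring the match rejects trailing text that a prefix match would accept.

```python
import re

AWARD_REF = re.compile(r"AWARD-[0-9]{4}")

# fullmatch requires the whole string to match the pattern...
assert AWARD_REF.fullmatch("AWARD-0001") is not None
assert AWARD_REF.fullmatch("AWARD-0001-EXTRA") is None   # now rejected
# ...whereas match only anchors the start, which is how the
# "-EXTRA" suffix slipped through before.
assert AWARD_REF.match("AWARD-0001-EXTRA") is not None
```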

* Necessary change to prevent check_sequential_award_numbers from crashing

* More linting

* Linting ...

* Code update to make mypy happy

* Adding a new transform for the EINs

They all need to be strings.
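A hedged sketch of the EIN transform: spreadsheet cells often come back as numbers, which drops a leading zero from a 9-digit EIN. The zero-padding to 9 digits is an assumption about the intended behavior.

```python
def ein_to_string(value):
    """Coerce a spreadsheet EIN cell to a 9-digit string."""
    if value is None:
        return ""
    if isinstance(value, float) and value.is_integer():
        value = int(value)  # 12345678.0 → 12345678
    return str(value).zfill(9)

ein_to_string(12345678)  # → "012345678", leading zero restored
```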

* Adding more Notes to SEFA checks

* Passing tests.

I have met the enemy and they is me.

* Linting.

* Formatting.

---------

Co-authored-by: Hassan D. M. Sambo <[email protected]>
Co-authored-by: Tadhg O'Higgins <[email protected]>
* Restore prior strictness in dissemination step.

* Remove unnecessary function.
* First draft of BR SOP

* Small edits

* Update BackupRestore.md
* Add item on migration files to PR template.

* Add item on migration files to PR template.

* Add item on migration files to PR template.

* Add item on migration files to PR template.
* First working version of staff admin screens

* Include cog_over filter in SAC

* Implement tests

* Fix linting issue with static passwords
@danswick danswick requested a review from jadudm October 16, 2023 20:58
@danswick danswick temporarily deployed to production October 16, 2023 20:58 — with GitHub Actions Inactive
@danswick danswick temporarily deployed to staging October 16, 2023 20:58 — with GitHub Actions Inactive

github-actions bot commented Oct 16, 2023

Terraform plan for staging

Plan: 0 to add, 1 to change, 0 to destroy.
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.staging.module.clamav.cloudfoundry_app.clamav_api will be updated in-place
  ~ resource "cloudfoundry_app" "clamav_api" {
      ~ docker_image                    = "ghcr.io/gsa-tts/fac/clamav@sha256:080f8d91a5abe5e9069aac951bd02173394833bf763b2ed03eb31420f5c55db8" -> "ghcr.io/gsa-tts/fac/clamav@sha256:979d85192e53e377f9b740baa566a4ee2ad5bd609a5b0fb27df7bf1e222663dd"
        id                              = "d1bea029-d2d3-4b68-b16d-b216bcaea573"
        name                            = "fac-av-staging"
        # (15 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Warning: Argument is deprecated

  with module.staging.module.database.cloudfoundry_service_instance.rds,
  on /tmp/terraform-data-dir/modules/staging.database/database/main.tf line 14, in resource "cloudfoundry_service_instance" "rds":
  14:   recursive_delete = var.recursive_delete

Since CF API v3, recursive delete is always done on the cloudcontroller side.
This will be removed in future releases

(and 2 more similar warnings elsewhere)

✅ Plan applied in Deploy to Staging Environment #66


github-actions bot commented Oct 16, 2023

Terraform plan for production

Plan: 0 to add, 1 to change, 0 to destroy.
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.production.module.clamav.cloudfoundry_app.clamav_api will be updated in-place
  ~ resource "cloudfoundry_app" "clamav_api" {
      ~ docker_image                    = "ghcr.io/gsa-tts/fac/clamav@sha256:080f8d91a5abe5e9069aac951bd02173394833bf763b2ed03eb31420f5c55db8" -> "ghcr.io/gsa-tts/fac/clamav@sha256:979d85192e53e377f9b740baa566a4ee2ad5bd609a5b0fb27df7bf1e222663dd"
        id                              = "5d0afa4f-527b-472a-8671-79a60335417f"
        name                            = "fac-av-production"
        # (15 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Warning: Argument is deprecated

  with module.domain.cloudfoundry_service_instance.external_domain_instance,
  on /tmp/terraform-data-dir/modules/domain/domain/main.tf line 45, in resource "cloudfoundry_service_instance" "external_domain_instance":
  45:   recursive_delete = var.recursive_delete

Since CF API v3, recursive delete is always done on the cloudcontroller side.
This will be removed in future releases

(and 3 more similar warnings elsewhere)

✅ Plan applied in Deploy to Production Environment #21

@jadudm (Contributor) left a comment

Reviews of the PRs that make up this merge have been covered by multiple members of the team. I've been part of a number of those reviews.

@github-merge-queue github-merge-queue bot temporarily deployed to dev October 16, 2023 21:15 Inactive
@github-merge-queue github-merge-queue bot temporarily deployed to meta October 16, 2023 21:15 Inactive
@github-actions

File Coverage Missing
All files 86%
api/serializers.py 88% 177-178 183 188
api/test_views.py 96% 105
api/uei.py 87% 17-18 87 119-120 164 168-169
api/views.py 97% 196-197 204-205 226 362-363
audit/forms.py 47% 22-29 142-149
audit/intake_to_dissemination.py 92% 67-68 201-207 257
audit/models.py 86% 57 59 64 66 213 246 419 437-438 446 468 544-545 549 557 566 572
audit/test_commands.py 87%
audit/test_mixins.py 90% 112-113 117-119 184-185 189-191
audit/test_validators.py 95% 436 440 608-609 848 855 862 869
audit/test_workbooks_should_fail.py 85% 56 83-84 88
audit/test_workbooks_should_pass.py 90% 56 81
audit/utils.py 70% 13 21 33-35 38
audit/validators.py 94% 137 189 288-289 304-305 486-490 495-499 515-524
audit/views.py 42% 87-108 131-132 206-207 252-253 264-265 267-271 318-331 334-348 353-366 383-389 394-414 441-445 450-479 522-526 531-551 578-582 587-616 659-663 668-680 683-693 698-710 737-738 743-792 795-835 838-855
audit/cross_validation/additional_ueis.py 93% 33
audit/cross_validation/check_award_ref_declaration.py 90%
audit/cross_validation/check_award_reference_uniqueness.py 93%
audit/cross_validation/check_certifying_contacts.py 87%
audit/cross_validation/check_findings_count_consistency.py 91%
audit/cross_validation/check_ref_number_in_cap.py 90%
audit/cross_validation/check_ref_number_in_findings_text.py 90%
audit/cross_validation/errors.py 78% 30 69
audit/cross_validation/naming.py 68% 178-182
audit/cross_validation/submission_progress_check.py 92% 62 79
audit/cross_validation/tribal_data_sharing_consent.py 81% 33 36 40
audit/cross_validation/validate_general_information.py 93% 28-29
audit/fixtures/single_audit_checklist.py 79% 156 231-240
audit/intakelib/exceptions.py 71% 7-9 12
audit/intakelib/intermediate_representation.py 94% 23-24 87 125 158 182-185
audit/intakelib/mapping_audit_findings.py 97% 51
audit/intakelib/mapping_audit_findings_text.py 97% 51
audit/intakelib/mapping_federal_awards.py 93% 87
audit/intakelib/mapping_util.py 40% 28 32 36 70-99 104 124-135 142-158 162-167 171-185 190 195-215 220-237 258 263-264 273-279 289 304 309
audit/intakelib/checks/check_all_unique_award_numbers.py 79% 24
audit/intakelib/checks/check_cluster_name_always_present.py 82% 21
audit/intakelib/checks/check_federal_award_passed_always_present.py 82% 18
audit/intakelib/checks/check_findings_grid_validation.py 84% 57
audit/intakelib/checks/check_is_a_workbook.py 68% 16
audit/intakelib/checks/check_loan_guarantee.py 81% 42 51
audit/intakelib/checks/check_look_for_empty_rows.py 91% 18
audit/intakelib/checks/check_missing_award_numbers.py 72% 16 22-23
audit/intakelib/checks/check_no_major_program_no_type.py 72% 22 31 40
audit/intakelib/checks/check_no_repeat_findings.py 76% 17 26
audit/intakelib/checks/check_other_cluster_names.py 81% 24 34
audit/intakelib/checks/check_passthrough_name_when_no_direct.py 88% 9 47
audit/intakelib/checks/check_sequential_award_numbers.py 76% 14 22
audit/intakelib/checks/check_start_and_end_rows_of_all_columns_are_same.py 89% 14
audit/intakelib/checks/check_state_cluster_names.py 65% 23-24 34
audit/intakelib/checks/check_uei_exists.py 65% 17-18
audit/intakelib/checks/runners.py 90% 98 105
audit/intakelib/checks/util.py 84% 16 33 38
audit/management/commands/load_fixtures.py 46% 39-45
audit/viewlib/submission_progress_view.py 89% 111 171-172
audit/viewlib/tribal_data_consent.py 34% 23-41 44-79
audit/viewlib/upload_report_view.py 26% 32-35 44 91-117 120-170 178-209
cms/views.py 57% 11-16 29-30
config/urls.py 71% 87
dissemination/models.py 99% 458
dissemination/migrations/0002_general_fac_accepted_date.py 47% 10-12
djangooidc/backends.py 78% 32 57-63
djangooidc/exceptions.py 66% 19 21 23 28
djangooidc/oidc.py 16% 32-35 45-51 64-70 92-149 153-199 203-226 230-275 280-281 286
djangooidc/views.py 80% 22 43 114
djangooidc/tests/common.py 96%
report_submission/forms.py 92% 35
report_submission/views.py 76% 83 215-216 218 240-241 260-261 287-396 399-409
report_submission/templatetags/get_attr.py 76% 8 11-14 18
support/admin.py 88% 76 79 84 91-97 100-102
support/cog_over.py 90% 30-33 86 93 145
support/signals.py 66% 23-24 33-34
support/test_cog_over.py 98% 134-135 224
support/management/commands/seed_cog_baseline.py 98% 20-21
tools/update_program_data.py 89% 96
users/auth.py 95% 40-41
users/models.py 97% 51-52
users/fixtures/user_fixtures.py 91%

Minimum allowed coverage is 90%

Generated by 🐒 cobertura-action against 405b184

@github-merge-queue github-merge-queue bot temporarily deployed to dev October 16, 2023 21:16 Inactive
@danswick danswick merged commit f822e12 into prod Oct 16, 2023
29 checks passed
@asteel-gsa asteel-gsa temporarily deployed to staging October 17, 2023 09:01 — with GitHub Actions Inactive
5 participants