2023-10-16 main -> prod #2516
Conversation
* This... validated two workbooks... Huh. That worked.
* Fixes an extraction bug. When a cell has a None value, it should become "" in the JSON. (A minimal sketch appears after this list.)
* Broke things apart. This made no changes other than to take the previous work and break it into semantically meaningful files/pieces. Now, each workbook has a file. The IR work has its own file.
* Small improvements, merging main.
* Passes regression tests.
* Passes all regressions, demonstrates new checks. This commit now passes all workbooks through the IR. The README is started, but not complete. It demonstrates the authoring of checks in keeping with the cross-validations. The errors generated pass through to the frontend correctly, and visualize like all other workbook upload errors. A no-op check is provided; it always passes. A check for a missing UEI is provided; it correctly stops empty UEIs. If a UEI is present but does not match the regex, it passes through the IR check and is caught by the schema validation. (See the UEI sketch after this list.)
* Fixes state/other cluster errors. This takes a recent helpdesk ticket and demonstrates how to improve the error messages. In this case, a user had N/A N/A N/A for cluster name, state cluster name, and other cluster name. This causes horrible JSON Schema validation errors. Now, we tell them exactly what they need to do in order to fix the workbook. Once the instructions are followed, the user's workbook is fixed, and it passes JSON Schema validation.
* is_direct and passthrough names. This adds two checks: (1) the user failed to include the is_direct value(s); (2) the user failed to include a passthrough name when is_direct is `N`.
* Adds a loan balance check. The JSON Schema validation for this was confusing. When N, we expect nothing in the balance column. When Y, we expect a balance. This adds a check that provides a clear error message. (Sketched after this list.)
* This checks that the workbook is right. This adds up-front checks for: (1) making sure we are getting a FAC workbook (it checks for a Coversheet); (2) checking that it is the right workbook for a given validation section. Now runs notes-to-sefa-specific checks.
* Ran black. However, I might have run the wrong version.
* Fixing import, removing excel. Not needed anymore, and it confuses the linter.
* Handles the is_major and audit_report_type pair. This Y/N pair breaks in JSON Schemas. Better error reporting on this pair of columns.
* Adds an award finding check. Disambiguates the Y/N for prior reference numbers.
* Checks for empty rows in the middle of the data. How do people do these things?
* Checks award numbers: (1) makes sure they are all unique; (2) makes sure they're in the right sequence. Removed some logging from no_major_program_type. (Sketched after this list.)
* Checking for blank fields: (1) cannot have blank findings counts; (2) cannot have blank cluster names. When these are blank, they sometimes get through to the schema, and then really bad errors come back.
* Moving import to a better place w.r.t. the list. All of these probably have to go to the top of the file for the linter. It made sense to me to put them near the lists I was building for now.
* Added other half of passthrough names. Handles more workbook cases now.
* Replaced some print statements. Can't chase down an error with a weird... Excel issue where fonts are involved.
* Ready to fix tests... Changing to the IR broke a lot of tests.
* Fixing more missing columns. ZD 114 had many fields missing; so many that it would make it through the improved checks and still fail the schema. Now, 114 would be guided through the submission by these errors.
Their workbook can be started in the errored state, and the error messages will guide them to a valid workbook.
* Adds transforms. The first transform is on Notes to SEFA. We have an invisible column called seq_number. Users somehow manage to delete things in a way that the cell ends up with a `REF!` instead of a number. The solution is to replace the column with values that are generated computationally on the server side. It is not clear that we even use this value. (Sketched after this list.)
* Using section names, adding prechecks. Broke the general checks out so they can run before transforms. Some transforms may have to run before the checks (I found one in notes-to-sefa). So, we need to make sure we're in the right workbook; then we can clean up the data; then we can begin checking it for semantics; then we can rewrite it into a JSON doc. Those changes are in this commit, plus some tightening on the passthrough names. All inspired by the same workbook...
* Adding the Audit Finding grid check. As a backstop to the schemas, adding in the allowed grid from the UG. Makes sure the Y/N combos are allowed. (Sketched after this list.)
* Minor cleanups/removals. Comments and printlns that made it through.
* Bringing in workbook E2E code.
* Adds workbooks to validate with, plus tests. This now tests many workbooks. It runs them through the new generic importer and the JSON validator. If they validate, good; if not, the test fails. Next is to add explicit failing tests.
* Adding a failure test. This walks workbooks that are constructed to fail. The test runs all of them and counts the failures. It should be the case that every workbook in the directory fails, so we count the workbooks and count the failures; if they come out different, clearly we did not fail everywhere we expected. (Sketched after this list.)
* Confirmed fails correctly with... a correct wb. Placing a correct workbook in the directory does the job.
* Adding a breadcrumb. The naming in the fail directory structure matters.
* Adding another breadcrumb.
* Removing print statements.
* Adds full check paths everywhere. It also adds a new failure check for CAP, to make sure the general validations are running at the start.
* Linting.
* More linting.
* Linting.
* These are needed for tests in the tree. The linter says they have to go.
* Removing more prints. There's one I can't find. Also, I'm not going to be able to satisfy the linter. I give up for now.
* Some unit tests, removing an unused function. From some tests off to the side earlier.
* Linting. My version of black does not match the team's. Our docs do not make clear to me how I should be running the linter so as to match the online environment.
* This runs end-to-end with cross-val. I didn't realize cross-val was baked into the `sac` object. This now runs cross-validation on the workbooks when the E2E code is run.
* Trying to work this into unit tests. Can't quite, because it wants to write to the real DB (vs. a mocked DB). For now, this will have to be a future thing.
* Updates from initial review. Expanding variable names, adding comments. TBD some more unit tests.
* Fixing test books.
* Fixing an error introduced through simplification. Forgot that the *range* is needed for error message construction, not just the *values* from the range.
* Fixed.
* Removing a period from an error message.
* Updated workbook versions and updated the federalAwards template formula in column J to prevent endless non-zero values.
* Linting.
* Necessary change to prevent an award reference like AWARD-0001-EXTRA from passing. (Sketched after this list.)
* Necessary change to prevent check_sequential_award_numbers from crashing.
* More linting.
* Linting ...
* Code update to make mypy happy.
* Adding a new transform for the EINs. They all need to be strings. (Sketched after this list.)
* Adding more Notes to SEFA checks.
* Passing tests. I have met the enemy and they is me.
* Linting.
* Formatting.

---------

Co-authored-by: Hassan D. M. Sambo <[email protected]>
Co-authored-by: Tadhg O'Higgins <[email protected]>
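To make a few of the checks above concrete, here are some minimal sketches. First, the extraction fix: a None cell should land in the JSON as an empty string. The function name and value handling here are illustrative assumptions, not the actual extraction code.

```python
def cell_to_json_value(value):
    """Map an extracted cell value into the JSON IR; None becomes ""."""
    # Hypothetical helper; the real extraction path is more involved.
    return "" if value is None else value

assert cell_to_json_value(None) == ""
assert cell_to_json_value("10.123") == "10.123"
```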
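The two-layer UEI behavior, sketched under assumptions: the IR-level check stops only empty UEIs and deliberately lets a present-but-malformed UEI through, so the downstream JSON Schema can reject it. The regex and the error shape are placeholders, not the real UEI rules.

```python
import re

# Simplified stand-in for the real UEI pattern; illustration only.
UEI_REGEX = re.compile(r"^[A-Z0-9]{12}$")

def check_uei(uei):
    """IR-level check: stop empty UEIs with a clear message."""
    if uei is None or str(uei).strip() == "":
        return ("error", "The UEI is missing. Enter it on the Coversheet.")
    # A present-but-malformed UEI (one that fails UEI_REGEX) passes here on
    # purpose; the JSON Schema validation downstream rejects it.
    return ("ok", None)

assert check_uei("")[0] == "error"         # stopped by the IR check
assert check_uei("not12chars")[0] == "ok"  # left for the schema to reject
assert not UEI_REGEX.match("not12chars")   # ...which it will
```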
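The loan balance check, sketched with assumed parameter names: when the guarantee flag is N the balance column must be empty, and when it is Y a balance is required.

```python
def check_loan_balance(is_guaranteed, balance):
    """is_guaranteed is "Y" or "N"; balance is the raw cell (may be None)."""
    has_balance = balance not in (None, "")
    if is_guaranteed == "N" and has_balance:
        return ("error", "Loan guarantee is N, so the balance column must be empty.")
    if is_guaranteed == "Y" and not has_balance:
        return ("error", "Loan guarantee is Y, so an outstanding balance is required.")
    return ("ok", None)

assert check_loan_balance("N", None)[0] == "ok"
assert check_loan_balance("Y", None)[0] == "error"
```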
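The award number check, assuming references take the AWARD-0001 form that appears later in this list: they must be unique and must run in order with no gaps.

```python
def check_award_numbers(refs):
    errors = []
    if len(set(refs)) != len(refs):
        errors.append("Award references must be unique.")
    # Assumes the sequence starts at AWARD-0001 and is zero-padded to four digits.
    expected = [f"AWARD-{n:04}" for n in range(1, len(refs) + 1)]
    if refs != expected:
        errors.append("Award references must be sequential, starting at AWARD-0001.")
    return errors

assert check_award_numbers(["AWARD-0001", "AWARD-0002"]) == []
assert check_award_numbers(["AWARD-0002", "AWARD-0001"]) != []
```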
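The Notes to SEFA transform, sketched: rather than trusting the hidden seq_number column, which users can corrupt into `REF!`, the server simply regenerates it. The row representation is an assumption.

```python
def regenerate_seq_numbers(rows):
    """Overwrite the hidden seq_number column server-side, so a REF! left
    behind by user edits can never reach validation."""
    for i, row in enumerate(rows, start=1):
        row["seq_number"] = i
    return rows

rows = [{"note": "a", "seq_number": "REF!"}, {"note": "b", "seq_number": 7}]
assert [r["seq_number"] for r in regenerate_seq_numbers(rows)] == [1, 2]
```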
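The Audit Finding grid check, sketched. The column names and allowed combinations below are placeholders; the actual grid comes from the Uniform Guidance and covers more columns.

```python
# Placeholder grid; NOT the real Uniform Guidance table.
ALLOWED_COMBOS = {
    ("Y", "N", "N"),
    ("N", "Y", "N"),
    ("N", "N", "Y"),
}

def check_findings_grid(row):
    combo = (
        row["modified_opinion"],  # column names are assumptions
        row["other_matters"],
        row["material_weakness"],
    )
    if combo not in ALLOWED_COMBOS:
        return ("error", f"The Y/N combination {combo} is not allowed.")
    return ("ok", None)

assert check_findings_grid(
    {"modified_opinion": "Y", "other_matters": "N", "material_weakness": "N"}
)[0] == "ok"
```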
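The failure test, sketched: every workbook in the fail directory must fail validation, so the workbook count and the failure count must match. `validate` here is an assumed callable returning a list of errors, where an empty list means the workbook passed.

```python
from pathlib import Path

def assert_all_workbooks_fail(validate, fail_dir):
    """Walk the fail directory; every workbook found there must fail."""
    workbooks = sorted(Path(fail_dir).glob("**/*.xlsx"))
    failures = sum(1 for wb in workbooks if validate(wb))
    assert failures == len(workbooks), (
        f"{len(workbooks) - failures} workbook(s) passed but were expected to fail"
    )
```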
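The AWARD-0001-EXTRA fix, sketched as one plausible shape of the bug: an unanchored regex match accepts trailing garbage, while `re.fullmatch` does not.

```python
import re

AWARD_REF = r"AWARD-[0-9]{4}"

# Unanchored matching accepts trailing garbage:
assert re.match(AWARD_REF, "AWARD-0001-EXTRA") is not None

# fullmatch (or an explicitly anchored pattern) rejects it:
assert re.fullmatch(AWARD_REF, "AWARD-0001-EXTRA") is None
assert re.fullmatch(AWARD_REF, "AWARD-0001") is not None
```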
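Finally, the EIN transform, sketched: Excel often hands numeric cells back as floats, so forcing EINs to strings means stripping the trailing `.0` first. The zero-padding is an assumption, since Excel's numeric coercion drops leading zeros from nine-digit EINs.

```python
def ein_to_string(value):
    """EINs must land in the JSON as strings, not numbers."""
    if isinstance(value, float) and value.is_integer():
        value = int(value)  # e.g. 521234567.0 -> 521234567
    # zfill(9) is an assumption: EINs are nine digits, and numeric
    # coercion in Excel drops leading zeros.
    return str(value).zfill(9)

assert ein_to_string(521234567.0) == "521234567"
assert ein_to_string(12345678) == "012345678"
```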
* Restore prior strictness in dissemination step.
* Remove unnecessary function.
* First draft of BR SOP
* Small edits
* Update BackupRestore.md
* Add item on migration files to PR template.
* First working version of staff admin screens
* Include cog_over filter in SAC
* Implement tests
* Fix linting issue with static passwords
Terraform plan for staging
Plan: 0 to add, 1 to change, 0 to destroy.
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# module.staging.module.clamav.cloudfoundry_app.clamav_api will be updated in-place
~ resource "cloudfoundry_app" "clamav_api" {
~ docker_image = "ghcr.io/gsa-tts/fac/clamav@sha256:080f8d91a5abe5e9069aac951bd02173394833bf763b2ed03eb31420f5c55db8" -> "ghcr.io/gsa-tts/fac/clamav@sha256:979d85192e53e377f9b740baa566a4ee2ad5bd609a5b0fb27df7bf1e222663dd"
id = "d1bea029-d2d3-4b68-b16d-b216bcaea573"
name = "fac-av-staging"
# (15 unchanged attributes hidden)
# (1 unchanged block hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Warning: Argument is deprecated
with module.staging.module.database.cloudfoundry_service_instance.rds,
on /tmp/terraform-data-dir/modules/staging.database/database/main.tf line 14, in resource "cloudfoundry_service_instance" "rds":
14: recursive_delete = var.recursive_delete
Since CF API v3, recursive delete is always done on the cloudcontroller side.
This will be removed in future releases
(and 2 more similar warnings elsewhere)
✅ Plan applied in Deploy to Staging Environment #66
Terraform plan for production
Plan: 0 to add, 1 to change, 0 to destroy.
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# module.production.module.clamav.cloudfoundry_app.clamav_api will be updated in-place
~ resource "cloudfoundry_app" "clamav_api" {
~ docker_image = "ghcr.io/gsa-tts/fac/clamav@sha256:080f8d91a5abe5e9069aac951bd02173394833bf763b2ed03eb31420f5c55db8" -> "ghcr.io/gsa-tts/fac/clamav@sha256:979d85192e53e377f9b740baa566a4ee2ad5bd609a5b0fb27df7bf1e222663dd"
id = "5d0afa4f-527b-472a-8671-79a60335417f"
name = "fac-av-production"
# (15 unchanged attributes hidden)
# (1 unchanged block hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
Warning: Argument is deprecated
with module.domain.cloudfoundry_service_instance.external_domain_instance,
on /tmp/terraform-data-dir/modules/domain/domain/main.tf line 45, in resource "cloudfoundry_service_instance" "external_domain_instance":
45: recursive_delete = var.recursive_delete
Since CF API v3, recursive delete is always done on the cloudcontroller side.
This will be removed in future releases
(and 3 more similar warnings elsewhere)
✅ Plan applied in Deploy to Production Environment #21
The PRs that make up this merge have already been reviewed by multiple members of the team, and I've been part of a number of those reviews.
Minimum allowed coverage is …
Generated by 🐒 cobertura-action against 405b184
Deploying to staging: