DRAFT Read replication and moving the API to a second database #4427

Draft: wants to merge 96 commits into base: main

Changes from 1 commit (96 commits total)
f1e4a70
Updating with new layout
jadudm Sep 27, 2024
40158bb
Updating/cleaning up for testing
jadudm Sep 27, 2024
660b98e
Merge branch 'main' into jadudm/api-perf
jadudm Sep 27, 2024
b2263e7
In-progress.
jadudm Sep 29, 2024
d54e8fa
Updating ignores...
jadudm Sep 29, 2024
7e54577
Runs a ffull sequence
jadudm Sep 30, 2024
6e4470e
Cleaining up
jadudm Sep 30, 2024
70e2a13
Updating .profile to match run.sh
jadudm Sep 30, 2024
a31eea0
Bringup for the Admin API
jadudm Oct 2, 2024
b3e8523
Cleanup, missed files.
jadudm Oct 2, 2024
e4fa18f
These are important...
jadudm Oct 2, 2024
de6c7ea
Adding GH action, script
jadudm Oct 2, 2024
b30ba03
Expand the admin API to increase visibility
jadudm Oct 2, 2024
1337097
Full-up, with performance testing
jadudm Oct 3, 2024
65f7365
Updates for live test.
jadudm Oct 4, 2024
1be0b83
Improving/simplifying
jadudm Oct 5, 2024
a809bea
Ready for testing in preview
jadudm Oct 9, 2024
88bc8a5
Source the script...
jadudm Oct 9, 2024
0e4f240
Fix path to sling
jadudm Oct 9, 2024
a94598c
Making the checks noisy.
jadudm Oct 9, 2024
2006d8a
Noisyer.
jadudm Oct 9, 2024
b95cb61
Don't exit so quickly...
jadudm Oct 9, 2024
6f1d153
Try to catch the error and continue if found
asteel-gsa Oct 9, 2024
8c2a047
Before further renaming.
jadudm Oct 10, 2024
e3cc950
Simplified.
jadudm Oct 10, 2024
eb12bad
Forgot to delete a bunch of things...
jadudm Oct 10, 2024
d5d4915
Set +/-e around check...
jadudm Oct 10, 2024
6b45c49
Trying to get past error...
jadudm Oct 10, 2024
328e9b9
Fixing SQL checks
jadudm Oct 10, 2024
3539ffc
Still... fixing return values...
jadudm Oct 10, 2024
1093b0f
Make the sources the same
jadudm Oct 10, 2024
cd525cb
Fix echo statements
jadudm Oct 10, 2024
cbadf2c
Try set -e?
jadudm Oct 10, 2024
2b33286
What about set +e...
jadudm Oct 10, 2024
cf6b5c9
Working around bash...
jadudm Oct 10, 2024
a511d9c
Incremental on removal of Tribal/admin API.
jadudm Oct 22, 2024
a58ebb2
Big update...
jadudm Oct 24, 2024
d1238ec
Merge branch 'main' into jadudm/api-perf
jadudm Oct 24, 2024
2ea604a
Simulating pre-deploy backup
jadudm Oct 24, 2024
8b5d43a
Fixes Minio, copies data
jadudm Oct 25, 2024
1656bad
Using a VCAP_SERVICES locally
jadudm Oct 25, 2024
6f5b205
Minor change, move to /tmp
jadudm Oct 25, 2024
9ea4a29
Move curation tracking init
jadudm Oct 25, 2024
cb0fa77
Removing unnecessary CREATE SCHEMA
jadudm Oct 25, 2024
03a4e6d
Fixed my partial rename
jadudm Oct 25, 2024
1747897
This points advanced search at pd.combined
jadudm Oct 25, 2024
7315cf9
Admin Panel and touch-ups
rnovak338 Oct 25, 2024
0ff0629
Remove Sling README
rnovak338 Oct 25, 2024
551f85e
Moves general back into the public_100 tables.
jadudm Oct 25, 2024
5f300a2
Merge branch 'jadudm/api-perf' of github.com:GSA-TTS/FAC into jadudm/…
jadudm Oct 25, 2024
affe2b7
Simplified run.sh
jadudm Oct 25, 2024
3aa6266
Updated the sling script for bulk data
jadudm Oct 25, 2024
d329f27
Simplified local data load
jadudm Oct 25, 2024
c2149f1
Updating to reflect new startup sequence
jadudm Oct 28, 2024
e1a3bb8
And, forgot the sourcing...
jadudm Oct 28, 2024
edf2c1f
Fake audit, test if migrations will go through
asteel-gsa Oct 28, 2024
5dd7e92
Add replaces to Django
asteel-gsa Oct 28, 2024
db1927b
Undo
asteel-gsa Oct 28, 2024
3ec3e81
Should fix local standup/build
jadudm Oct 28, 2024
ffad20f
Removing orderby on combined
jadudm Oct 29, 2024
409792b
Splits things away from startup
jadudm Oct 29, 2024
7abe12a
Prep API Standup Test
asteel-gsa Oct 29, 2024
decc89a
Fixes run.sh
jadudm Oct 29, 2024
dcb3581
Merge branch 'jadudm/api-perf' of github.com:GSA-TTS/FAC into jadudm/…
jadudm Oct 29, 2024
337ea0d
Update Cypress E2E API checks
rnovak338 Oct 29, 2024
f2ae9de
Changing apparent error to warning
jadudm Oct 29, 2024
df8236d
Fixing email, whitespace
jadudm Oct 29, 2024
f2055c5
Test api refresh
asteel-gsa Oct 29, 2024
825a3e0
Api chmod +x
asteel-gsa Oct 29, 2024
814a5a7
Remove historical data load
asteel-gsa Oct 29, 2024
72a86a0
Push to run api_refresh
asteel-gsa Oct 29, 2024
04970aa
Use pushd & popd to put us where we need to be for source & other utils
asteel-gsa Oct 29, 2024
d2d910d
API Refresh test
asteel-gsa Oct 29, 2024
53ede5a
Overthinking bash pathing
asteel-gsa Oct 29, 2024
5f8a7ec
Debugging typo removal
asteel-gsa Oct 29, 2024
9b361e9
Version bump backup-util to v0.1.9
asteel-gsa Oct 29, 2024
bdbbe31
TF, sequencing
jadudm Oct 29, 2024
cf769b9
API Refresh Test - Final?
asteel-gsa Oct 29, 2024
8977b6d
disable api refresh on push
asteel-gsa Oct 29, 2024
209c26a
Testing with data
jadudm Oct 30, 2024
38ac254
Allows for configuration of the DB
jadudm Oct 31, 2024
8033d3d
Merge branch 'main' into jadudm/api-perf
jadudm Oct 31, 2024
ec602d5
Adding documentation
jadudm Oct 31, 2024
2d009d6
Update api refresh workflow
asteel-gsa Nov 1, 2024
e569dc6
Fix workflows
asteel-gsa Nov 1, 2024
fc614dd
Fixing two missing tables...
jadudm Nov 1, 2024
ab68fa0
Updating tests.
jadudm Nov 1, 2024
24c2c5c
Linting
rnovak338 Nov 7, 2024
8815dd2
Document django admin access
rnovak338 Nov 7, 2024
04a85b8
Merge branch 'main' into jadudm/api-perf
rnovak338 Nov 7, 2024
f2084e9
Linting
rnovak338 Nov 7, 2024
b04e944
More linting - bandit
rnovak338 Nov 8, 2024
0256047
Reformat local python API tests to rest of the testing suite
rnovak338 Nov 8, 2024
015af34
FE Linting
rnovak338 Nov 8, 2024
d1a17ae
Merge branch 'main' into jadudm/api-perf
rnovak338 Nov 14, 2024
04e19e4
Linting
rnovak338 Nov 14, 2024
Fixing two missing tables...
jadudm committed Nov 1, 2024
commit fc614dd4b6555b33e2ca2991131fd4f8c413d462
82 changes: 40 additions & 42 deletions backend/dissemination/sql/SQL_README.md
@@ -12,57 +12,48 @@ This document describes

## the database layout of the FAC

The FAC has two databases.
*This is high-level background for reference.*

**DB1** is the **production** database. The app talks to this database for all live operations.
The FAC has two databases.

1. When a user updates their submission, they are updating a `singleauditchecklist` record in DB1.
2. When a user does a *basic* search, they are searching `dissemination_general` in DB1.
3. When you update user roles in `/admin`, you are editing a table in DB1.
**DB1** is `fac-db`. The app talks to this database for all live operations.

**DB2** began life as a place to do a database snapshot before deploy. We are now using this as a *read replica* for DB1. It hosts a *data pipeline* that is implemented entirely as a sequence of actions in SQL.
When a user updates their submission, they are updating a `singleauditchecklist` record in DB1. When a user does a *basic* search, they are searching `dissemination_general` in DB1. And, finally, when you update user roles in `/admin`, you are editing a table in DB1.

**DB2 updates nightly.** It is *completely* destroyed every night, and completely rebuilt. No data is persisted. In this regard, DB2 serves as a *stateless data pipeline*. More on this later.
**DB2** is `fac-snapshot-db`. It began life as a place to do a database snapshot before deploy, and it still serves this purpose. However, we are now using it as a place to build a *data pipeline* that is implemented entirely as a sequence of actions in SQL. In this regard, it becomes a *read replica* of sorts from which we can serve both *advanced search* and the API.

1. The PostgREST API uses DB2 to resolve *all* API queries.
2. When a user does an *advanced search*, they are using DB2.
**DB2 updates nightly.** The tables described below are *completely* destroyed and rebuilt every night. No data is persisted: DB2 serves as a *stateless data pipeline*.

## what is in this folder

The SQL folder contains one folder for each database: `fac-db` and `fac-snapshot-db`. These names align with our Terraform configuration.
The SQL folder contains one folder for each database: `fac-db` and `fac-snapshot-db`. These names align with the "friendly names" of our database services in our system configuration.

Inside of each folder are two sub-folders: `pre` and `post`.

1. `pre` contains SQL we run against the databases *before* migrations.
2. `post` contains SQL we run against the databases *after* migrations.

In the case of `fac-db` (DB1), we run all of the scripts in the `pre` folder, we run migrations, and then we run everything in the `post` folder.

In the case of `fac-snapshot-db` (DB2), it is slightly different. We tear things down, then run everything in the `pre` folder, and then we run everything in the `post` folder. There are no migrations in DB2, because it is a stateless copy of DB1. The structure is parallel/preserved/kept-the-same-as DB1 for consistency, but it is worth noting that DB2 does not have any migrations.
In the case of `fac-db` (DB1), we run all of the scripts in the `pre` folder when we deploy, we run migrations, and then we run everything in the `post` folder. This is consistent with what took place previously.

There is one other folder, `sling`. More on this later.
In the case of `fac-snapshot-db` (DB2), it is slightly different. We run everything in the `pre` folder, and then we run everything in the `post` folder. There are no migrations in DB2, because it is a stateless copy of DB1.

## pre/post

The `pre` and `post` folders contain SQL files in execution order. That means that the way the files are named matters.
The `pre` and `post` folders contain SQL files in execution order. That means that the ordering of the files matters.

If the following files are in the `pre` folder:

`000_first.sql`
`010_nope.SKIP`
`020_second.sql`
1. `000_first.sql`
2. `010_nope.SKIP`
3. `020_second.sql`

then they will execute in the lexicographical order shown. *However*, only files ending in `.sql` will be executed. This means that `000_first.sql` will run first, `010_nope.SKIP` will be skipped, and `020_second.sql` will run second. (Although it encourages a dirty tree, we *might* want to keep a file in the tree but not have it execute.)

### the pre/post sequence, generally

On each DB, in broad strokes (at time of this being written):

#### DB1 (fac-db)
### what happens on DB1 (fac-db)

On DB1, we do not do much.
On DB1, we remove old schemas and tables (if they exist). If they don't exist, we essentially do nothing.

##### pre
#### pre

1. Drop the API schemas.
2. Initialize audit curation code
@@ -71,66 +62,73 @@ The first step is because we will no longer serve the API from DB1. Therefore, a

The second is because we now have SQL triggers to support *data curation*. These triggers are defined here. Finally, we *disable* audit curation as a "just-in-case" measure. Because curation is a piece of state in the DB, the app could crash and leave us recording every change to the SAC table. This would be *bad*. So, we do a "disable" as part of startup.
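Very roughly, and with every object name below standing in as a placeholder rather than the real `fac-db` `pre` definitions, those two steps look something like this:

```sql
-- (1) Drop the API schemas: the API is no longer served from DB1.
DROP SCHEMA IF EXISTS api_v1_1_0 CASCADE;
DROP SCHEMA IF EXISTS api_v1_1_0_functions CASCADE;
DROP SCHEMA IF EXISTS api_v2_0_0 CASCADE;

-- (2) Curation tracking, reduced to a placeholder; every name below is
--     hypothetical and stands in for the real fac-db/pre definitions.
CREATE TABLE IF NOT EXISTS public.sac_curation_log (
    id         BIGSERIAL PRIMARY KEY,
    record     JSONB,
    changed_at TIMESTAMPTZ DEFAULT now()
);

CREATE OR REPLACE FUNCTION public.sac_curation_audit() RETURNS TRIGGER AS
$body$
BEGIN
    INSERT INTO public.sac_curation_log (record) VALUES (to_jsonb(NEW));
    RETURN NEW;
END
$body$ LANGUAGE plpgsql;

-- "Just-in-case": make sure curation recording is OFF at startup.
-- (audit_singleauditchecklist is a placeholder name for the SAC table.)
ALTER TABLE IF EXISTS public.audit_singleauditchecklist DISABLE TRIGGER USER;
```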

##### post
#### post

We tear out the old, old, OLD, Census data (used for the cog/over work in early days).

In the case of DB1, all of the actions could *probably* be `pre` actions. It does not particularly matter.

#### DB2 (fac-snapshot-db)
### what happens on DB2 (fac-snapshot-db)

We do a lot on DB2.
Every night, on DB2, we first back up DB1. Then, we tear down our data pipeline and API, and rebuild it all from the backup we just made. This means that the data pipeline---including the backup---is essentially stateless.

##### pre
#### pre

1. Set up roles (for PostgREST). Without these, PostgREST cannot authenticate/operate.
2. Tear down *all* schemas associated with the data pipeline.
3. Tear down and rebuild sequences used in constructing the new `public_data` tables. (All three steps are sketched below.)
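A compressed sketch of those three steps. The `api_fac_gov` role and the schema names match names used elsewhere in this PR; the sequence name is made up:

```sql
-- (1) The role PostgREST uses; without it, the API cannot authenticate.
DO
$do$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'api_fac_gov') THEN
        CREATE ROLE api_fac_gov NOLOGIN;
    END IF;
END
$do$;

-- (2) Tear down every schema the pipeline owns; they are rebuilt tonight.
DROP SCHEMA IF EXISTS dissem_copy     CASCADE;
DROP SCHEMA IF EXISTS public_data     CASCADE;
DROP SCHEMA IF EXISTS suppressed_data CASCADE;
DROP SCHEMA IF EXISTS api_v1_1_0      CASCADE;
DROP SCHEMA IF EXISTS api_v2_0_0      CASCADE;

-- (3) Rebuild the sequences used when numbering rows in the new
--     public_data tables (placeholder sequence name).
DROP SEQUENCE IF EXISTS public.public_data_row_seq;
CREATE SEQUENCE public.public_data_row_seq START 1;
```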

##### post
#### post

1. Copy the `dissemination_*` tables to a `dissem_copy` schema.
##### Copy the `dissemination_*` tables to a `dissem_copy` schema.

We do this because the API is going to attach to `dissem_copy.dissemination_*` tables. We do this instead of using `public.dissemination_*` for the simple reason that those tables are overwritten with each deploy. If we attached the API `VIEW`s to the `public` tables, it would interrupt/disrupt/break the pre-deploy backups. So, the first thing we do is make a copy.
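The copy itself is the simplest possible pattern, mirroring the `create_dissemination_*()` functions further down in this diff:

```sql
CREATE SCHEMA IF NOT EXISTS dissem_copy;

-- One statement per table; the API views attach to these copies,
-- not to public.*, so pre-deploy backups are never disturbed.
CREATE TABLE dissem_copy.dissemination_general
    AS SELECT * FROM public.dissemination_general;

-- ...repeated for federal awards, findings, passthrough, notes to SEFA,
-- corrective action plans, finding text, and the rest.
```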

2. Create `public_data` tables.
##### Create `public_data` tables.

These tables are a copy of the `dissem_copy` tables, with some changes.

1. We create a `combined` table that does a 4x `JOIN` across `general`, `federal_awards`, `passthrough`, and `findings`. This is all 100% public data.
1. We create a `combined` table that does a 4x `JOIN` across `general`, `federal_awards`, `passthrough`, and `findings`. This is all 100% public data. (It was previously our `MATERIALIZED VIEW`.)
2. We apply a `general.is_public=true` filter to all tables containing suppressed data, therefore guaranteeing that `notes_to_sefa`, `corrective_action_plans`, and `finding_text` (for example) contain *only* public data.
3. Sequences are inserted in all tables, along with a `batch_number`, which is indexed for fast downloading of bulk data.

This is the "data pipeline." It is copying and modifying data to put it in the "right" shape for our API. This way, our API becomes a simple `SELECT *` in a `VIEW`.

1. Create `suppressed_data` tables.
As new data needs are discovered, it is assumed that the `post` operations on DB2 will implement additional copies/table creations/etc. to extend our data pipeline in order to address customer/user needs.

##### Create `suppressed_data` tables.

These are "the same" as the above, but they are filtered to contain only suppressed/Tribal data.

4. Create `metadata` table.
These tables are only accessible via API if you have gone through our Tribal API attestation/access process. Only Federal agencies are able to obtain API access to this data in order to support their oversight operations. Non-privileged keys will find empty result sets (`[]`) if they attempt to query these tables.
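The sketch is the same as for `public_data`, with the filter inverted (table names again illustrative):

```sql
CREATE SCHEMA IF NOT EXISTS suppressed_data;

-- Keep only non-public (suppressed/Tribal) rows.
CREATE TABLE suppressed_data.finding_text AS
    SELECT ft.*
    FROM dissem_copy.dissemination_findingtext ft
    JOIN dissem_copy.dissemination_general gen USING (report_id)
    WHERE gen.is_public = false;
```

Access control for these tables is enforced in the API layer, via the Tribal API key checks that appear later in this diff.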

##### Create `metadata` table.

A `metadata` table containing counts of rows in all tables created above.

A `metadata` table containing counts of rows in all tables is created.
This table is also exposed via `api_v2_0_0`. It allows users to quickly find 1) which tables are present, and 2) how much data is in those tables. This meets customer needs in an important way: when they are downloading data, they want to know "did I get everything?" The metadata lets them do a bulk download via the API and then answer that question programmatically.

5. Create the `api_v1_1_0`.
It also serves as a demonstration for one kind of data manipulation that can be used to create new tables and, therefore, new functionality for users via the API.
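One plausible shape for that table (the real script may differ):

```sql
-- One row per exposed table, with an exact count.
CREATE TABLE public_data.metadata AS
    SELECT 'public_data'    AS schema_name,
           'combined'       AS table_name,
           (SELECT count(*) FROM public_data.combined)          AS row_count
    UNION ALL
    SELECT 'suppressed_data',
           'finding_text',
           (SELECT count(*) FROM suppressed_data.finding_text);
```

Assuming the table is exposed as a view in `api_v2_0_0`, a consumer can fetch it after a bulk download (e.g. `GET /metadata`) and compare the counts against what they actually received.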

##### Create the `api_v1_1_0`.

This is the same code as previously existed, flaws and all. It points at `dissem_copy` tables, because they are 1:1 with what used to be in DB1. Hence, it "just works" "as-was."

A good refactoring would be to point these views at the `public_data` tables instead. The views would no longer require `JOIN` statements, and access control could be handled more gracefully.


6. Create `api_v2_0_0`.
##### Create `api_v2_0_0`.

This points at the `public_data` and `suppressed_data` tables.
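A sketch of what `api_v2_0_0` looks like under this design. The `has_tribal_data_access()` helper is a stand-in name for the key check implemented in `api_v2_0_0_functions` elsewhere in this diff:

```sql
CREATE SCHEMA IF NOT EXISTS api_v2_0_0;
CREATE SCHEMA IF NOT EXISTS api_v2_0_0_functions;

-- Stand-in for the real key check; placeholder body denies by default.
CREATE OR REPLACE FUNCTION api_v2_0_0_functions.has_tribal_data_access()
RETURNS boolean AS
$$ SELECT false $$ LANGUAGE sql;

-- Public data: the pipeline already did the work, so a view is SELECT *.
CREATE VIEW api_v2_0_0.combined AS
    SELECT * FROM public_data.combined;

-- Suppressed data: same idea, gated by the access-check helper.
CREATE VIEW api_v2_0_0.finding_text AS
    SELECT * FROM suppressed_data.finding_text
    WHERE api_v2_0_0_functions.has_tribal_data_access();
```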

7. Setup permissions
##### Setup permissions

All of the API access permissions are set in one place after the tables/views are created.
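This mirrors the grants file shown later in this diff, applied to the v2 schemas: read-only access for the PostgREST role, granted in one place after everything is built.

```sql
GRANT USAGE  ON SCHEMA api_v2_0_0           TO api_fac_gov;
GRANT USAGE  ON SCHEMA api_v2_0_0_functions TO api_fac_gov;
GRANT SELECT ON ALL TABLES IN SCHEMA api_v2_0_0 TO api_fac_gov;
```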

8. Bring up the API
##### Bring up the API

We issue a `NOTIFY` to PostgREST which tells it to re-read the schemas and provide an API.
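PostgREST listens on the `pgrst` channel, so the "bring up" step is a single statement:

```sql
-- Ask PostgREST to re-read its schema cache so the freshly built
-- views become visible immediately.
NOTIFY pgrst, 'reload schema';
```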

9. Indexing
##### Indexing

Now, we index *everything*. If something is not performant, *add more indexes*.
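Illustrative only; the column names are assumptions, and the real list of indexes is whatever keeps queries fast:

```sql
CREATE INDEX ON public_data.combined         (report_id);
CREATE INDEX ON public_data.combined         (audit_year);
CREATE INDEX ON public_data.notes_to_sefa    (report_id);
CREATE INDEX ON suppressed_data.finding_text (report_id);
```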

---
@@ -149,6 +149,29 @@ CREATE OR REPLACE FUNCTION dissem_copy.create_dissemination_secondaryauditor()
$ct$
LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION dissem_copy.create_dissemination_tribalapiaccesskeyids()
RETURNS VOID
AS
$ct$
BEGIN
CREATE TABLE dissem_copy.dissemination_tribalapiaccesskeyids
AS SELECT * FROM public.dissemination_tribalapiaccesskeyids;
END
$ct$
LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION dissem_copy.create_dissemination_onetimeaccess()
RETURNS VOID
AS
$ct$
BEGIN
CREATE TABLE dissem_copy.dissemination_onetimeaccess
AS SELECT * FROM public.dissemination_onetimeaccess;
END
$ct$
LANGUAGE plpgsql;


DO LANGUAGE plpgsql
$go$
BEGIN
@@ -176,6 +199,10 @@ $go$
PERFORM dissem_copy.create_dissemination_passthrough();
RAISE info 'create_dissemination_secondaryauditor';
PERFORM dissem_copy.create_dissemination_secondaryauditor();
RAISE INFO 'dissemination_tribalapiaccesskeyids';
PERFORM dissem_copy.create_dissemination_tribalapiaccesskeyids();
RAISE info 'create_dissemination_onetimeaccess';
PERFORM dissem_copy.create_dissemination_onetimeaccess();
END
$go$;

---
@@ -41,7 +41,7 @@ BEGIN
SELECT
CASE WHEN EXISTS (
SELECT key_id
FROM copy.dissemination_tribalapiaccesskeyids taaki
FROM dissem_copy.dissemination_tribalapiaccesskeyids taaki
WHERE taaki.key_id = uuid_header::TEXT)
THEN 1::BOOLEAN
ELSE 0::BOOLEAN
---
@@ -11,19 +11,19 @@ BEGIN

SELECT api_v1_1_0_functions.get_api_key_uuid() INTO v_uuid_header;

-- Check if the provided API key exists in copy.dissemination_TribalApiAccessKeyIds
-- Check if the provided API key exists in dissem_copy.dissemination_TribalApiAccessKeyIds
SELECT
EXISTS(
SELECT 1
FROM copy.dissemination_tribalapiaccesskeyids
FROM dissem_copy.dissemination_tribalapiaccesskeyids
WHERE key_id = v_uuid_header
) INTO v_key_exists;


-- Get the added date of the key from copy.dissemination_TribalApiAccessKeyIds
-- Get the added date of the key from dissem_copy.dissemination_TribalApiAccessKeyIds
SELECT date_added
INTO v_key_added_date
FROM copy.dissemination_tribalapiaccesskeyids
FROM dissem_copy.dissemination_tribalapiaccesskeyids
WHERE key_id = v_uuid_header;


@@ -33,7 +33,7 @@ BEGIN
SELECT gen_random_uuid() INTO v_access_uuid;

-- Inserting data into the one_time_access table
INSERT INTO copy.dissemination_onetimeaccess (uuid, api_key_id, timestamp, report_id)
INSERT INTO dissem_copy.dissemination_onetimeaccess (uuid, api_key_id, timestamp, report_id)
VALUES (v_access_uuid::UUID, v_uuid_header, CURRENT_TIMESTAMP, report_id);

-- Return the UUID to the user
---
@@ -41,7 +41,7 @@ BEGIN
SELECT
CASE WHEN EXISTS (
SELECT key_id
FROM copy.dissemination_tribalapiaccesskeyids taaki
FROM dissem_copy.dissemination_tribalapiaccesskeyids taaki
WHERE taaki.key_id = uuid_header::TEXT)
THEN 1::BOOLEAN
ELSE 0::BOOLEAN
---
@@ -11,19 +11,19 @@ BEGIN

SELECT api_v2_0_0_functions.get_api_key_uuid() INTO v_uuid_header;

-- Check if the provided API key exists in copy.dissemination_TribalApiAccessKeyIds
-- Check if the provided API key exists in dissem_copy.dissemination_TribalApiAccessKeyIds
SELECT
EXISTS(
SELECT 1
FROM copy.dissemination_tribalapiaccesskeyids
FROM dissem_copy.dissemination_tribalapiaccesskeyids
WHERE key_id = v_uuid_header
) INTO v_key_exists;


-- Get the added date of the key from copy.dissemination_TribalApiAccessKeyIds
-- Get the added date of the key from dissem_copy.dissemination_TribalApiAccessKeyIds
SELECT date_added
INTO v_key_added_date
FROM copy.dissemination_tribalapiaccesskeyids
FROM dissem_copy.dissemination_tribalapiaccesskeyids
WHERE key_id = v_uuid_header;


@@ -33,7 +33,7 @@ BEGIN
SELECT gen_random_uuid() INTO v_access_uuid;

-- Inserting data into the one_time_access table
INSERT INTO copy.dissemination_onetimeaccess (uuid, api_key_id, timestamp, report_id)
INSERT INTO dissem_copy.dissemination_onetimeaccess (uuid, api_key_id, timestamp, report_id)
VALUES (v_access_uuid::UUID, v_uuid_header, CURRENT_TIMESTAMP, report_id);

-- Return the UUID to the user
---
@@ -10,6 +10,8 @@
GRANT USAGE ON SCHEMA api_v1_1_0_functions TO api_fac_gov;
GRANT USAGE ON SCHEMA api_v1_1_0 TO api_fac_gov;
GRANT SELECT ON ALL TABLES IN SCHEMA api_v1_1_0 TO api_fac_gov;
-- GRANT SELECT ON ALL TABLES IN SCHEMA dissem_copy to api_fac_gov;

-- There are no sequences currently on api_v1_1_0
-- GRANT SELECT, USAGE ON ALL SEQUENCES IN SCHEMA api_v1_1_0 TO api_fac_gov;

17 changes: 12 additions & 5 deletions backend/tools/cgov_util_local_only.sh
@@ -1,19 +1,26 @@
source tools/util_startup.sh

function cgov_util_local_only() {
startup_log "CGOV_LOCAL_ONLY" "Making an initial 'backup'"

# Really, really only run this locally. Or in a GH runner.

if [[ "${ENV}" == "LOCAL" || "${ENV}" == "TESTING" ]]; then
startup_log "CGOV_LOCAL_ONLY" "Making an initial 'backup'"

$PSQL_EXE $FAC_SNAPSHOT_URI -c "DROP SCHEMA IF EXISTS public CASCADE"
gonogo "DROP PUBLIC in fac-snapshot-db"
$PSQL_EXE $FAC_SNAPSHOT_URI -c "CREATE SCHEMA public"
gonogo "CREATE PUBLIC fac-snapshot-db"

check_table_exists $FAC_SNAPSHOT_URI 'public' 'dissemination_general'
local is_general_table=$FUNCTION_RESULT
if [ $is_general_table -ne 0 ]; then
# This is the first run.
startup_log "CGOV_LOCAL_ONLY" "Running cgov-util INITIAL."
$CGOV_UTIL_EXE db_to_db \
--src_db fac-db \
--dest_db fac-snapshot-db \
--operation initial

startup_log "CGOV_LOCAL_ONLY" "Done"
fi

startup_log "CGOV_LOCAL_ONLY" "Done"
return 0
}
79 changes: 0 additions & 79 deletions backend/tools/setup_cgov_env.py

This file was deleted.

6 changes: 6 additions & 0 deletions backend/util/nightly_api_refresh.sh
@@ -17,3 +17,9 @@ gonogo "sql_pre_fac_snapshot_db"

sql_post_fac_snapshot_db
gonogo "sql_post_fac_snapshot_db"

# We might, at some point,
# consider running a vacuum on DB1
# as part of a nightly or weekly job.
# Below is *representative* code.
# run_sql $FAC_DB_URI -c "VACUUM(FULL, ANALYZE)"