
1-org

This repo is part of a multi-part guide that shows how to configure and deploy the example.com reference architecture described in Google Cloud security foundations guide (PDF). The following table lists the parts of the guide.

0-bootstrap Bootstraps a Google Cloud organization, creating all the required resources and permissions to start using the Cloud Foundation Toolkit (CFT). This step also configures a CI/CD pipeline for foundations code in subsequent stages.
1-org (this file) Sets up top-level shared folders, monitoring and networking projects, organization-level logging, and baseline security settings through organizational policy.
2-environments Sets up development, non-production, and production environments within the Google Cloud organization that you've created.
3-networks Sets up base and restricted shared VPCs with default DNS, NAT (optional), Private Service networking, VPC service controls, on-premises Dedicated Interconnect, and baseline firewall rules for each environment. It also sets up the global DNS hub.
4-projects Sets up a folder structure, projects, and application infrastructure pipeline for applications, which are connected as service projects to the shared VPC created in the previous stage.
5-app-infra Deploys a simple Compute Engine instance in one of the business unit projects using the infra pipeline set up in 4-projects.

For an overview of the architecture and the parts, see the terraform-example-foundation README.

Purpose

The purpose of this step is to set up top-level shared folders, monitoring and networking projects, organization-level logging, and baseline security settings through organizational policies.

Prerequisites

  1. 0-bootstrap executed successfully.
  2. Cloud Identity / Google Workspace group for security admins.
  3. Membership in the security admins group for the user running Terraform.
  4. Security Command Center notifications require that you choose a Security Command Center tier and create and grant permissions for the Security Command Center service account, as outlined in Setting up Security Command Center.
  5. Ensure that you have requested sufficient projects quota, because the Terraform scripts create multiple projects from this point onwards. For more information, see the FAQ.

Note: Make sure that you use the same version of Terraform throughout this series, otherwise you might experience Terraform state snapshot lock errors.
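To make the version check above repeatable, here is a minimal sketch that extracts the version number from `terraform version` output so you can compare your workstation against whatever version your pipeline images use (the pinned version string in the usage comment is only an example):

```shell
# Sketch: parse the version number out of the first line of `terraform version`.
tf_version() {
  # $1 is the first line of `terraform version`, e.g. "Terraform v0.13.7"
  printf '%s\n' "$1" | sed 's/^Terraform v//'
}

# Usage (0.13.7 is only an example of a pinned version):
# LOCAL="$(tf_version "$(terraform version | head -n1)")"
# [ "$LOCAL" = "0.13.7" ] || echo "WARNING: Terraform version mismatch" >&2
```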

Troubleshooting

Please refer to troubleshooting if you run into issues during this step.

Usage

Disclaimer: This step enables Data Access logs for all services in your organization. Enabling Data Access logs might result in your project being charged for the additional logs usage. For details on costs you might incur, go to Pricing. You can choose not to enable the Data Access logs by setting variable data_access_logs_enabled to false.

Note: This module creates a sink to export all logs to Google Storage. It also creates sinks to export a subset of security related logs to BigQuery and Pub/Sub. This will result in additional charges for those copies of logs. You can change the filters & sinks by modifying the configuration in envs/shared/log_sinks.tf.

Note: Currently, this module does not enable a bucket retention policy for organization logs. Enable one if needed.

Note: It is possible to enable an organization policy for OS Login with this module. OS Login has some limitations. If those limitations do not apply to your workload/environment, you can choose to enable the OS Login policy by setting variable enable_os_login_policy to true.

Note: You need to set the variable enable_hub_and_spoke to true to use the hub-and-spoke architecture detailed in the Networking section of the Google Cloud security foundations guide.

Note: If you are using macOS, replace cp -RT with cp -R in the relevant commands. The -T flag is needed for Linux but causes problems for macOS.
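If you script these steps, the OS difference can be handled with a small sketch like the following (GNU cp on Linux needs -T to copy into the current directory without nesting; BSD cp on macOS does not support -T):

```shell
# Sketch: choose cp flags based on the OS name reported by uname.
cp_flags() {
  if [ "$1" = "Darwin" ]; then
    echo "-R"    # macOS / BSD cp
  else
    echo "-RT"   # Linux / GNU cp
  fi
}

# Usage: cp $(cp_flags "$(uname)") ../terraform-example-foundation/1-org/ .
```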

Note: This module creates a Security Command Center Notification. The notification name must be unique in the organization. The suggested name in the terraform.tfvars file is scc-notify. To check if it already exists run:

gcloud scc notifications describe <scc_notification_name> --organization=<org_id>

💬 Cloud Storage Retention Policy for Logs

FIs should configure the log_export_storage_retention_policy variable in terraform.tfvars to set a minimum retention period for org-level logs for compliance use cases.
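For example, a sketch of the corresponding entry in terraform.tfvars. The attribute names is_locked and retention_period_days are assumptions; verify the exact object shape against the variable's definition in the module before using it:

```hcl
# terraform.tfvars — sketch only; confirm attribute names in variables.tf.
log_export_storage_retention_policy = {
  is_locked             = true   # lock the policy so retention cannot be shortened
  retention_period_days = 365    # minimum retention for org-level logs
}
```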

💬 Organization Policy Constraints

In addition to the other Organization Policies, FIs should review org_policy_mas_abs.tf to evaluate if the following constraints are needed:

  • constraints/compute.restrictNonConfidentialComputing to restrict non-Confidential Computing resources from being created (by default, this is disabled, although FIs can enable it and add specific folders or projects to be excluded from this constraint)
  • constraints/storage.retentionPolicySeconds to enforce Cloud Storage retention policy from a list of retention periods (by default, 1 hour and 30 days are configured)

Deploying with Cloud Build

  1. Clone the policy repo based on the Terraform output from the previous section. Clone the repo at the same level as the terraform-example-foundation folder; the following instructions assume that layout. Run terraform output cloudbuild_project_id in the 0-bootstrap folder to see the project ID again.

    gcloud source repos clone gcp-policies --project=YOUR_CLOUD_BUILD_PROJECT_ID
    
  2. Navigate into the repo. All subsequent steps assume you are running them from the gcp-policies directory. If you run them from another directory, adjust your copy paths accordingly.

    cd gcp-policies
    
  3. Copy contents of policy-library to new repo.

    cp -RT ../terraform-example-foundation/policy-library/ .
    
  4. Commit changes.

    git add .
    git commit -m 'Your message'
    
  5. Push your master branch to the new repo.

    git push --set-upstream origin master
    
  6. Navigate out of the repo.

    cd ..
    
  7. Clone the repo.

    gcloud source repos clone gcp-org --project=YOUR_CLOUD_BUILD_PROJECT_ID
    
  8. Navigate into the repo and change to a non-production branch. All subsequent steps assume you are running them from the gcp-org directory. If you run them from another directory, adjust your copy paths accordingly.

    cd gcp-org
    git checkout -b plan
    
  9. Copy contents of foundation to the new repo (Terraform variables will be updated in a later step).

    cp -RT ../terraform-example-foundation/1-org/ .
    
  10. Copy Cloud Build configuration files for Terraform. You may need to modify the command to reflect your current directory.

    cp ../terraform-example-foundation/build/cloudbuild-tf-* .
    
  11. Copy the Terraform wrapper script to the root of your new repository (modify accordingly based on your current directory).

    cp ../terraform-example-foundation/build/tf-wrapper.sh .
    
  12. Ensure wrapper script can be executed.

    chmod 755 ./tf-wrapper.sh
    
  13. Check if your organization already has an Access Context Manager Policy.

    gcloud access-context-manager policies list --organization YOUR_ORGANIZATION_ID --format="value(name)"
    
  14. Rename ./envs/shared/terraform.example.tfvars to ./envs/shared/terraform.tfvars and update the file with values from your environment and bootstrap step (you can re-run terraform output in the 0-bootstrap directory to find these values). Make sure that default_region is set to a valid BigQuery dataset region. Also, if the previous step showed a numeric value, make sure to un-comment the variable create_access_context_manager_access_policy = false. See the shared folder README.md for additional information on the values in the terraform.tfvars file.
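The "numeric value" decision in the step above can be automated with a small sketch. The helper only inspects the output of the gcloud command; YOUR_ORGANIZATION_ID is a placeholder:

```shell
# Sketch: a non-empty listing means an Access Context Manager policy
# (a numeric name) already exists in the organization.
policy_exists() {
  [ -n "$1" ]
}

# Usage:
# POLICY_ID="$(gcloud access-context-manager policies list \
#   --organization YOUR_ORGANIZATION_ID --format='value(name)')"
# if policy_exists "$POLICY_ID"; then
#   echo "Un-comment create_access_context_manager_access_policy = false in terraform.tfvars"
# fi
```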

  15. Commit changes.

    git add .
    git commit -m 'Your message'
    
  16. Push your plan branch to trigger a plan. For this command, the branch plan is not special; any branch whose name differs from development, non-production, or production will trigger a Terraform plan.

    git push --set-upstream origin plan
    
  17. Review the plan output in your Cloud Build project. https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID

  18. Merge changes to production branch.

    git checkout -b production
    git push origin production
    
  19. Review the apply output in your Cloud Build project. https://console.cloud.google.com/cloud-build/builds?project=YOUR_CLOUD_BUILD_PROJECT_ID

  20. You can now move to the instructions in the 2-environments step.

Troubleshooting: If you receive a PERMISSION_DENIED error when running the gcloud access-context-manager or gcloud scc notifications commands, you can append

--impersonate-service-account=org-terraform@<SEED_PROJECT_ID>.iam.gserviceaccount.com

to run the command as the Terraform service account.
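If you append this flag often, a small helper that builds it from the seed project ID keeps the commands readable (the service-account naming follows the placeholder above; YOUR_SEED_PROJECT_ID in the usage comment is a placeholder):

```shell
# Sketch: build the impersonation flag for the Terraform service account.
impersonation_flag() {
  echo "--impersonate-service-account=org-terraform@$1.iam.gserviceaccount.com"
}

# Usage:
# gcloud scc notifications describe scc-notify --organization=<org_id> \
#   "$(impersonation_flag "YOUR_SEED_PROJECT_ID")"
```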

Deploying with Jenkins

  1. Clone the repo you created manually in 0-bootstrap.

    git clone <YOUR_NEW_REPO-1-org>
    
  2. Navigate into the repo and change to a non-production branch. All subsequent steps assume you are running them from the YOUR_NEW_REPO_CLONE-1-org directory. If you run them from another directory, adjust your copy paths accordingly.

    cd YOUR_NEW_REPO_CLONE-1-org
    git checkout -b plan
    
  3. Copy contents of foundation to new repo.

    cp -RT ../terraform-example-foundation/1-org/ .
    
  4. Copy contents of policy-library to new repo.

    cp -RT ../terraform-example-foundation/policy-library/ ./policy-library
    
  5. Copy the Jenkinsfile script to the root of your new repository.

    cp ../terraform-example-foundation/build/Jenkinsfile .
    
  6. Update the variables located in the environment {} section of the Jenkinsfile with values from your environment:

    _TF_SA_EMAIL
    _STATE_BUCKET_NAME
    _PROJECT_ID (the cicd project id)
    
  7. Copy Terraform wrapper script to the root of your new repository.

    cp ../terraform-example-foundation/build/tf-wrapper.sh .
    
  8. Ensure wrapper script can be executed.

    chmod 755 ./tf-wrapper.sh
    
  9. Check if your organization already has an Access Context Manager Policy.

    gcloud access-context-manager policies list --organization YOUR_ORGANIZATION_ID --format="value(name)"
    
  10. Rename ./envs/shared/terraform.example.tfvars to ./envs/shared/terraform.tfvars and update the file with values from your environment and bootstrap. You can re-run terraform output in the 0-bootstrap directory to find these values. Make sure that default_region is set to a valid BigQuery dataset region. Also, if the previous step showed a numeric value, make sure to un-comment the variable create_access_context_manager_access_policy = false. See the shared folder README.md for additional information on the values in the terraform.tfvars file.

  11. Commit changes.

    git add .
    git commit -m 'Your message'
    
  12. Push your plan branch. The branch plan is not special; any branch whose name differs from development, non-production, or production will trigger a Terraform plan.

    • Assuming you configured an automatic trigger in your Jenkins Master (see the Jenkins sub-module README), this push will trigger a plan. You can also trigger a Jenkins job manually; given the many ways to do this in Jenkins, that is out of the scope of this document. See the Jenkins website for more details.
    git push --set-upstream origin plan
    
  13. Review the plan output in your Master's web UI.

  14. Merge changes to production branch.

    git checkout -b production
    git push origin production
    
  15. Review the apply output in your Master's web UI. (You might want to use the "Scan Multibranch Pipeline Now" option in your Jenkins Master UI.)

Running Terraform locally

  1. Change into 1-org folder.
  2. Run cp ../build/tf-wrapper.sh .
  3. Run chmod 755 ./tf-wrapper.sh
  4. Change into 1-org/envs/shared/ folder.
  5. Rename terraform.example.tfvars to terraform.tfvars and update the file with values from your environment and bootstrap.
  6. Obtain your bucket name by running the following command in the 0-bootstrap folder.
    terraform output gcs_bucket_tfstate
    
  7. Update backend.tf with your bucket from bootstrap.
    for i in $(find . -name 'backend.tf'); do sed -i 's/UPDATE_ME/<YOUR-BUCKET-NAME>/' $i; done
    
    

We will now deploy our environment (production) using this script. When using Cloud Build or Jenkins as your CI/CD tool, each environment corresponds to a branch in the repository for the 1-org step, and only the corresponding environment is applied.

To use the validate option of the tf-wrapper.sh script, follow the instructions in the Install Terraform Validator section to install version v0.4.0 on your system. You will also need to rename the binary from terraform-validator-<your-platform> to terraform-validator, and the terraform-validator binary must be in your PATH.
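The rename-and-PATH step can be sketched as a small helper. The source filename in the usage comment assumes a Linux amd64 download, and /usr/local/bin is only one possible PATH directory; adjust both for your platform:

```shell
# Sketch: rename the platform-specific binary and place it in a PATH directory.
install_validator() {
  src="$1"
  dest_dir="$2"
  mv "$src" "$dest_dir/terraform-validator"
  chmod 755 "$dest_dir/terraform-validator"
}

# Usage: install_validator ./terraform-validator-linux-amd64 /usr/local/bin
```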

  1. Run ./tf-wrapper.sh init production.
  2. Run ./tf-wrapper.sh plan production and review output.
  3. Run ./tf-wrapper.sh validate production $(pwd)/../policy-library <YOUR_CLOUD_BUILD_PROJECT_ID> and check for violations.
  4. Run ./tf-wrapper.sh apply production.

If you received any errors or made any changes to the Terraform config or terraform.tfvars, you must re-run ./tf-wrapper.sh plan production before running ./tf-wrapper.sh apply production.