## Contents
- Overview
- Prerequisites
- Deploy the Lab
- Troubleshooting
- Outputs
- Testing
- Cleanup
- Requirements
- Inputs
- Outputs
## Overview

In this lab:

- A hub and spoke VPC peering architecture uses network virtual appliances (NVA) to inspect traffic to the spokes.
- The NVAs are simulated using iptables on Linux instances (see the sketch after this list).
- All north-south and east-west traffic is allowed through the NVA instances in this lab.
- Hybrid connectivity to simulated on-premises sites is achieved using HA VPN.
- Network Connectivity Center (NCC) is used to connect the on-premises sites together via the external hub VPC.
- Other networking features such as Cloud DNS, Private Service Connect (PSC) for Google APIs, and load balancers are also deployed in this lab.
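A minimal sketch of what such an iptables NVA involves is shown below. This is illustrative only: the actual configuration is generated into the `_output/hub-eu-nva.sh` and `_output/hub-us-nva.sh` startup scripts listed under Outputs, and the NIC name `ens4` is an assumption.

```sh
# Illustrative only: core of an iptables-based NVA that routes transit traffic.
sudo sysctl -w net.ipv4.ip_forward=1   # let the instance forward packets between networks
sudo iptables -P FORWARD ACCEPT        # permit all north-south and east-west traffic
sudo iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE   # SNAT egress (ens4 is an assumed NIC name)
```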
## Prerequisites

Ensure you meet all requirements in the prerequisites before proceeding.
## Deploy the Lab

1. Clone the Git repository for the labs:

   ```sh
   git clone https://github.com/kaysalawu/gcp-network-terraform.git
   ```

2. Navigate to the lab directory:

   ```sh
   cd gcp-network-terraform/1-blueprints-d-nva-peering
   ```
3. (Optional) To enable additional features such as IPv6, VPC flow logs, and logging, set the following variables to `true` in the `01-main.tf` file.

   Variable | Description | Default | Link |
   ---|---|---|---|
   enable_ipv6 | Enable IPv6 on all supported resources | false | main.tf |
4. Run the following terraform commands and type `yes` at the prompt (see the note after this list for supplying the required project IDs):

   ```sh
   terraform init
   terraform plan
   terraform apply -parallelism=50
   ```
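The lab requires five project IDs (see Inputs below). The test commands later in this guide assume they are exported as `TF_VAR_*` environment variables before running terraform; a sketch with placeholder values:

```sh
# Placeholder values - replace with your own project IDs before running terraform.
export TF_VAR_project_id_host="<host-project-id>"
export TF_VAR_project_id_hub="<hub-project-id>"
export TF_VAR_project_id_onprem="<onprem-project-id>"
export TF_VAR_project_id_spoke1="<spoke1-project-id>"
export TF_VAR_project_id_spoke2="<spoke2-project-id>"
```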
## Troubleshooting

See the troubleshooting section for tips on how to resolve common issues that may occur during the deployment of the lab.
## Outputs

The table below shows the auto-generated output files from the lab. They are located in the `_output` directory.
Item | Description | Location |
---|---|---|
Hub EU NVA | Linux iptables, web server and test scripts | _output/hub-eu-nva.sh |
Hub US NVA | Linux iptables, web server and test scripts | _output/hub-us-nva.sh |
Hub Unbound DNS | Unbound DNS configuration | _output/hub-unbound.sh |
Site1 Unbound DNS | Unbound DNS configuration | _output/site1-unbound.sh |
Site2 Unbound DNS | Unbound DNS configuration | _output/site2-unbound.sh |
Web server | Python Flask web server, test scripts | _output/vm-startup.sh |
## Testing

Each virtual machine (VM) is pre-configured with shell scripts for running various types of network reachability tests. Serial console access has been configured for all virtual machines. On each VM instance, the pre-configured test script `/usr/local/bin/playz` can be run from the SSH terminal to test network reachability.

The full list of scripts on each VM instance is shown below:
```sh
$ ls -l /usr/local/bin/
-rwxr-xr-x 1 root root 98 Aug 17 14:58 aiz
-rwxr-xr-x 1 root root 203 Aug 17 14:58 bucketz
-rw-r--r-- 1 root root 1383 Aug 17 14:58 discoverz.py
-rwxr-xr-x 1 root root 1692 Aug 17 14:58 pingz
-rwxr-xr-x 1 root root 5986 Aug 17 14:58 playz
-rwxr-xr-x 1 root root 1957 Aug 17 14:58 probez
```
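For illustration, the kind of HTTP check `playz` performs can be approximated with `curl`, which prints the status code, latency, and resolved IP in the same shape as the sample outputs below. The target list here is abbreviated and the real script's logic may differ:

```sh
# Approximate one playz HTTP check: status code, total time, remote IP, target.
for target in app1.site1.onprem:8080 ilb4.eu.hub.gcp:8080; do
  curl --max-time 2 -s -o /dev/null \
    -w "%{http_code} (%{time_total}s) - %{remote_ip} - ${target}/\n" \
    "http://${target}/"
done
```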
1.1 Log in to the instance `d-site1-vm` using SSH-in-Browser from the Google Cloud console.

1.2 Run the `playz` script to test network reachability to all VM instances:

```sh
playz
```
Sample output:

```sh
admin_cloudtuple_com@d-site1-vm:~$ playz
apps ...
200 (0.006318s) - 10.10.1.9 - app1.site1.onprem:8080/
200 (0.289888s) - 10.20.1.9 - app1.site2.onprem:8080/
200 (0.011885s) - 10.1.11.70 - ilb4.eu.hub.gcp:8080/
200 (0.288506s) - 10.1.21.70 - ilb4.us.hub.gcp:8080/
200 (0.032953s) - 10.1.11.80 - ilb7.eu.hub.gcp/
000 (2.002089s) - - ilb7.us.hub.gcp/
200 (0.011126s) - 10.11.11.30 - ilb4.eu.spoke1.gcp:8080/
200 (0.299660s) - 10.22.21.30 - ilb4.us.spoke2.gcp:8080/
000 (2.002707s) - - ilb7.eu.spoke1.gcp/
200 (0.707030s) - 10.22.21.40 - ilb7.us.spoke2.gcp/
200 (0.011300s) - 10.1.11.60 - nva.eu.hub.gcp:8001/
200 (0.009798s) - 10.1.11.60 - nva.eu.hub.gcp:8002/
200 (0.290487s) - 10.1.21.60 - nva.us.hub.gcp:8001/
200 (0.288560s) - 10.1.21.60 - nva.us.hub.gcp:8002/
200 (0.017635s) - 10.2.11.30 - app1.eu.mgt.hub.gcp:8080/
200 (0.293350s) - 10.2.21.30 - app1.us.mgt.hub.gcp:8080/
psc4 ...
000 (0.015615s) - - psc4.consumer.spoke2-us-svc.psc.hub.gcp:8080
000 (0.015597s) - - psc4.consumer.spoke2-us-svc.psc.spoke1.gcp:8080
apis ...
204 (0.002937s) - 142.250.179.234 - www.googleapis.com/generate_204
204 (0.005726s) - 10.1.0.1 - storage.googleapis.com/generate_204
204 (1.071028s) - 10.1.11.80 - europe-west2-run.googleapis.com/generate_204
204 (1.577869s) - 10.22.21.40 - us-west2-run.googleapis.com/generate_204
204 (0.037042s) - 10.1.11.80 - europe-west2-run.googleapis.com/generate_204
204 (1.739673s) - 10.22.21.40 - us-west2-run.googleapis.com/generate_204
200 (0.037060s) - 10.1.0.1 - https://d-hub-us-run-httpbin-i6ankopyoa-nw.a.run.app/
200 (0.041870s) - 10.1.0.1 - https://d-spoke1-eu-run-httpbin-2zcsnlaqcq-nw.a.run.app/
200 (0.858281s) - 10.1.0.1 - https://d-spoke2-us-run-httpbin-bttbo6m6za-wl.a.run.app/
204 (0.007558s) - 10.1.0.1 - dhuball.p.googleapis.com/generate_204
204 (0.003048s) - 142.250.179.234 - dspoke1sec.p.googleapis.com/generate_204
204 (0.003289s) - 142.250.178.10 - dspoke2sec.p.googleapis.com/generate_204
```
1.3 Run the `pingz` script to test ICMP reachability to all VM instances:

```sh
pingz
```
Sample output:

```sh
admin_cloudtuple_com@d-site1-vm:~$ pingz
ping ...
app1.site1.onprem - OK 0.032 ms
app1.site2.onprem - OK 137.847 ms
ilb4.eu.hub.gcp - NA
ilb4.us.hub.gcp - NA
ilb7.eu.hub.gcp - NA
ilb7.us.hub.gcp - NA
ilb4.eu.spoke1.gcp - NA
ilb4.us.spoke2.gcp - NA
ilb7.eu.spoke1.gcp - NA
ilb7.us.spoke2.gcp - NA
nva.eu.hub.gcp - NA
nva.us.hub.gcp - NA
app1.eu.mgt.hub.gcp - OK 2.414 ms
app1.us.mgt.hub.gcp - OK 138.039 ms
```
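In the sample above, the load balancer frontends report `NA`, most likely because forwarding-rule VIPs do not answer ICMP; only the VM targets reply. A rough bash equivalent of a single probe is sketched below (hostnames taken from the sample output; the real script's logic may differ):

```sh
# Rough equivalent of one pingz probe: single ICMP echo, report RTT or NA.
for host in app1.site1.onprem ilb4.eu.hub.gcp; do
  if rtt=$(ping -c1 -W1 "$host" 2>/dev/null | grep -oP 'time=\K[0-9.]+'); then
    echo "$host - OK ${rtt} ms"
  else
    echo "$host - NA"
  fi
done
```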
1.4 Run the `bucketz` script to test access to selected Google Cloud Storage buckets:

```sh
bucketz
```
Sample output:

```sh
admin_cloudtuple_com@d-site1-vm:~$ bucketz
hub : <--- HUB EU --->
spoke1 : <--- SPOKE 1 --->
spoke2 : <--- SPOKE 2 --->
```
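The sample above shows each environment's bucket returning a marker string. A rough equivalent using `gcloud storage` is sketched below; the bucket and object names are hypothetical, not the lab's actual names:

```sh
# Read a marker object from each environment's bucket (names are placeholders).
for bucket in d-hub-eu-files d-spoke1-eu-files d-spoke2-us-files; do
  gcloud storage cat "gs://${bucket}/object.txt"
done
```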
1.5 On your local terminal or Cloud Shell, run the `discoverz.py` script to test access to all Google API endpoints:

```sh
gcloud compute ssh d-site1-vm \
  --project $TF_VAR_project_id_onprem \
  --zone europe-west2-b \
  -- 'python3 /usr/local/bin/discoverz.py' | tee _output/site1-api-discovery.txt
```

The script saves the output to the file `_output/site1-api-discovery.txt`.
2.1 Log in to the instance `d-spoke1-eu-ilb4-vm` using SSH-in-Browser from the Google Cloud console.

2.2 Run the `playz` script to test network reachability to all VM instances:

```sh
playz
```
Sample output:

```sh
admin_cloudtuple_com@d-spoke1-eu-ilb4-vm:~$ playz
apps ...
200 (0.011195s) - 10.10.1.9 - app1.site1.onprem:8080/
200 (0.293233s) - 10.20.1.9 - app1.site2.onprem:8080/
200 (0.009018s) - 10.1.11.70 - ilb4.eu.hub.gcp:8080/
200 (0.296125s) - 10.1.21.70 - ilb4.us.hub.gcp:8080/
200 (0.035121s) - 10.1.11.80 - ilb7.eu.hub.gcp/
000 (2.002006s) - - ilb7.us.hub.gcp/
200 (0.006833s) - 10.11.11.30 - ilb4.eu.spoke1.gcp:8080/
200 (0.290049s) - 10.22.21.30 - ilb4.us.spoke2.gcp:8080/
200 (0.031387s) - 10.11.11.40 - ilb7.eu.spoke1.gcp/
200 (0.720946s) - 10.22.21.40 - ilb7.us.spoke2.gcp/
000 (2.002498s) - - nva.eu.hub.gcp:8001/
000 (2.002360s) - - nva.eu.hub.gcp:8002/
000 (2.001437s) - - nva.us.hub.gcp:8001/
000 (2.002094s) - - nva.us.hub.gcp:8002/
200 (0.008975s) - 10.2.11.30 - app1.eu.mgt.hub.gcp:8080/
200 (0.288935s) - 10.2.21.30 - app1.us.mgt.hub.gcp:8080/
psc4 ...
000 (0.007546s) - - psc4.consumer.spoke2-us-svc.psc.hub.gcp:8080
000 (0.007480s) - - psc4.consumer.spoke2-us-svc.psc.spoke1.gcp:8080
apis ...
204 (0.002770s) - 216.58.204.74 - www.googleapis.com/generate_204
204 (0.003043s) - 10.11.0.2 - storage.googleapis.com/generate_204
204 (0.033513s) - 10.11.11.40 - europe-west2-run.googleapis.com/generate_204
000 (2.002162s) - - us-west2-run.googleapis.com/generate_204
204 (0.012686s) - 10.11.11.40 - europe-west2-run.googleapis.com/generate_204
000 (2.002355s) - - us-west2-run.googleapis.com/generate_204
200 (0.032460s) - 10.11.0.2 - https://d-hub-us-run-httpbin-i6ankopyoa-nw.a.run.app/
200 (0.036764s) - 10.11.0.2 - https://d-spoke1-eu-run-httpbin-2zcsnlaqcq-nw.a.run.app/
200 (0.184121s) - 10.11.0.2 - https://d-spoke2-us-run-httpbin-bttbo6m6za-wl.a.run.app/
000 (0.015626s) - - dhuball.p.googleapis.com/generate_204
204 (0.005460s) - 10.11.0.2 - dspoke1sec.p.googleapis.com/generate_204
000 (2.002927s) - - dspoke2sec.p.googleapis.com/generate_204
```
2.3 Run the `pingz` script to test ICMP reachability to all VM instances:

```sh
pingz
```
Sample output:

```sh
admin_cloudtuple_com@d-spoke1-eu-ilb4-vm:~$ pingz
ping ...
app1.site1.onprem - OK 1.826 ms
app1.site2.onprem - OK 137.088 ms
ilb4.eu.hub.gcp - NA
ilb4.us.hub.gcp - NA
ilb7.eu.hub.gcp - NA
ilb7.us.hub.gcp - NA
ilb4.eu.spoke1.gcp - OK 0.031 ms
ilb4.us.spoke2.gcp - NA
ilb7.eu.spoke1.gcp - NA
ilb7.us.spoke2.gcp - NA
nva.eu.hub.gcp - OK 0.452 ms
nva.us.hub.gcp - NA
app1.eu.mgt.hub.gcp - OK 0.984 ms
app1.us.mgt.hub.gcp - OK 135.195 ms
```
2.4 Run the `bucketz` script to test access to selected Google Cloud Storage buckets:

```sh
bucketz
```
Sample output:

```sh
admin_cloudtuple_com@d-spoke1-eu-ilb4-vm:~$ bucketz
hub : <--- HUB EU --->
spoke1 : <--- SPOKE 1 --->
spoke2 : <--- SPOKE 2 --->
```
2.5 On your local terminal or Cloud Shell, run the `discoverz.py` script to test access to all Google API endpoints:

```sh
gcloud compute ssh d-spoke1-eu-ilb4-vm \
  --project $TF_VAR_project_id_spoke1 \
  --zone europe-west2-b \
  -- 'python3 /usr/local/bin/discoverz.py' | tee _output/spoke1-api-discovery.txt
```

The script saves the output to the file `_output/spoke1-api-discovery.txt`.
## Cleanup

1. (Optional) Navigate back to the lab directory (if you are not already there):

   ```sh
   cd gcp-network-terraform/1-blueprints-d-nva-peering
   ```

2. Run `terraform destroy` twice. The second run is required to delete the `null_resource` that could not be deleted on the first run due to race conditions.

   ```sh
   terraform destroy -auto-approve
   terraform destroy -auto-approve
   ```
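If you script the teardown, the double run can be expressed as a retry loop instead. This is a sketch; it assumes the only failure is the transient `null_resource` race described above:

```sh
# Re-run destroy until it completes cleanly.
until terraform destroy -auto-approve; do
  echo "destroy did not complete, retrying..."
  sleep 10
done
```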
## Requirements

No requirements.
## Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
bgp_range | bgp interface ip cidr ranges | `map(string)` | `{…}` | no |
disk_size | disk size | `string` | `"20"` | no |
disk_type | disk type | `string` | `"pd-ssd"` | no |
gre_range | gre interface ip cidr ranges | `map(string)` | `{…}` | no |
image_cos | container optimized image | `string` | `"cos-cloud/cos-stable"` | no |
image_debian | vm instance image | `string` | `"debian-cloud/debian-12"` | no |
image_panos | palo alto image from gcp marketplace | `string` | `"https://www.googleapis.com/compute/v1/projects/paloaltonetworksgcp-public/global/images/vmseries-bundle1-810"` | no |
image_ubuntu | vm instance image | `string` | `"ubuntu-os-cloud/ubuntu-2404-lts-amd64"` | no |
image_vyos | vyos image from gcp marketplace | `string` | `"https://www.googleapis.com/compute/v1/projects/sentrium-public/global/images/vyos-1-3-0"` | no |
machine_type | vm instance size | `string` | `"e2-micro"` | no |
organization_id | organization id | `any` | `null` | no |
project_id_host | host project id | `any` | n/a | yes |
project_id_hub | hub project id | `any` | n/a | yes |
project_id_onprem | onprem project id (for onprem site1 and site2) | `any` | n/a | yes |
project_id_spoke1 | spoke1 project id (service project attached to the host project) | `any` | n/a | yes |
project_id_spoke2 | spoke2 project id (standalone project) | `any` | n/a | yes |
shielded_config | Shielded VM configuration of the instances | `map` | `{…}` | no |
## Outputs

No outputs.