Doing Infrastructure-as-Code (IaC) with Ansible has given me a headache – so I’ve recently been playing around with Terraform as an alternative to Ansible for certain tasks that require cloud IaaS interactions.
The goal of this blog post is to build an HA-VPN solution between GCP and an on-premises Cisco IOS-XE device (CSR) using Terraform. BGP will be established over the VPN in order to exchange routes dynamically. GCE compute instances will be deployed in GCP for testing connectivity over the VPN.
Let’s get started.
Table of contents:
- GCP HA-VPN Overview
- The Topology
- GCP Preparation
- Terraform Installation
- Terraform Overview
- Terraform Configuration for Deploying HA-VPN on GCP
- Terraform In Action
- Configuring the Cisco IOS-XE Device
- Verification
- Terraform State
- Making Live Changes with Terraform
- Terraform Destroy
Assumptions
This article assumes that you have already created a GCP account and have access to the GCP console. You should also have an understanding of GCP fundamentals and routing fundamentals.
GCP HA-VPN Overview
Google Cloud offers numerous types of hybrid connectivity, and two (2) types of VPN connectivity – Classic or HA. The HA Cloud VPN is the recommended approach. When deploying the HA VPN solution you’ll achieve an SLA of 99.99% service availability out-of-the-box. This is because the HA Cloud VPN solution is multi-zonal and can handle failure within a region.
In a production environment, you’d likely have at least two VPN gateways on-premises to provide maximum redundancy. However, I only have a single IP address at my house, and a single router to play with. I’ll be choosing a redundancy type of “single IP, internally redundant”, meaning that GCP will allocate two separate public IP addresses attached to two separate interfaces on the HA VPN Gateway and will build tunnels from each to a single remote VPN gateway. My on-prem router will in turn build a tunnel to each of these.
A GCP Cloud Router will handle the BGP adjacencies and routing across the tunnel and internally within the GCP VPC.
For more detailed information about Google’s solution, please reference the documentation here:
Cloud VPN overview | Google Cloud
The Topology
Here is the topology I’ll be following for the remainder of this blog post:

GCP Preparation
There are a few things we must do manually in GCP to prepare for deploying Infrastructure-as-Code. First, we must create a project in GCP. Projects are at the highest level of organization in GCP and this is where billing occurs. Every resource in GCP must belong to a project. If you already have a project you want to use – great! If not, please create one now – I called mine “terraform-testing”. Go ahead and copy/paste the project ID somewhere for future use with the Terraform configuration script.

After choosing or creating a project, go over to IAM & Admin > Service accounts
You should see a single default service account

Click “Create Service Account” and give your new service account a name, then click “Create”

Next, add the following roles to the service account:
- Project Editor
- Compute Admin
- Compute Network Admin
You can be more prescriptive here if you like but these roles work for me.

Click “Continue” then click “Create Key”

Choose “JSON” and click “Create”

This will download the JSON key file to your local machine. Keep this secure since it contains your private key. We’ll be referencing this file when running the Terraform script.

Terraform Installation
If you haven’t done so already, you need to install Terraform. You’ll need to update your $PATH as described in the install documentation, or you can move the binary to a directory that’s already in your path, which is what I did on my Mac:
```
[13:28:57] dvarnum:~ $ sudo mv /Users/dvarnum/terraform/terraform /usr/local/bin
Password:
[13:30:23] dvarnum:~ $ which terraform
/usr/local/bin/terraform
[13:30:28] dvarnum:~ $ terraform --version
Terraform v0.12.24
```
Terraform Overview
Straight from Terraform’s website:
“Terraform is used to create, manage, and update infrastructure resources such as physical machines, VMs, network switches, containers, and more. Almost any infrastructure type can be represented as a resource in Terraform.”
You create a Terraform configuration, which can either be a single file or set of files that describe the infrastructure to deploy and manage. The Terraform documentation is really good so I won’t repeat everything here but I’d like to talk about a few basics so you understand the script.
Terraform Providers
Providers are used for creating and managing resources. They handle the API interactions with IaaS providers. Since some of the services we’re deploying in GCP are considered “beta” we’ll need to use two (2) providers: google and google-beta. It is here that we specify top-level parameters such as the project, default region, default zone, and credentials.
```hcl
provider "google" {
  project     = var.gcp_project
  region      = var.gcp_region
  zone        = var.gcp_zone
  credentials = var.gcp_credentials_file
}

provider "google-beta" {
  project     = var.gcp_project
  region      = var.gcp_region
  zone        = var.gcp_zone
  credentials = var.gcp_credentials_file
}
```
More information on the Google provider: Provider: Google Cloud Platform – Terraform by HashiCorp
Terraform Variables
The `var.` references seen in the provider blocks above point to static variables. Static variables can be stored in the same file as the main Terraform configuration or in a separate file. A sample variable looks like this:
```hcl
variable gcp_region {
  description = "Default to US Central."
  default     = "us-central1"
}
```
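Variable defaults can also be overridden without editing the configuration, either on the command line or via a `terraform.tfvars` file that Terraform loads automatically. A minimal sketch (the values here are placeholders, not from this deployment):

```hcl
# terraform.tfvars -- loaded automatically by terraform plan/apply
gcp_region = "us-east1"
gcp_zone   = "us-east1-b"
```

Equivalently, a single value can be passed inline: `terraform apply -var 'gcp_region=us-east1'`.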
Terraform Resources
Resources are where everything happens in Terraform. A resource defines a piece of infrastructure to be deployed or configured. When declaring a resource block, you specify the resource type, such as `google_compute_network`, and a local resource name, such as `tf_vpc_net1`, followed by the block of code. The name has no significance outside the scope of the configuration script.
Example resource:
```hcl
resource "google_compute_network" "tf_vpc_net1" {
  name                    = "tf-vpc-net-1"
  routing_mode            = "GLOBAL"
  auto_create_subnetworks = false
}
```
This same resource can be referenced later using the given name. For example, having created the VPC network above, I could then create a subnet inside of that VPC by referencing the resource name:
```hcl
resource "google_compute_subnetwork" "tf_vpc_net1_subnet1" {
  name          = "ha-vpn-subnet-1"
  ip_cidr_range = "10.0.1.0/24"
  region        = "us-central1"
  network       = google_compute_network.tf_vpc_net1.self_link
}
```
Terraform uses interpolation to handle complex references and dependencies between objects. I’m blown away at how much simpler it is to handle complexity in Terraform versus Ansible when it comes to dynamic variables. Not only that, it makes writing the script much easier since the order doesn’t really matter. Terraform knows what is dependent upon each other and executes in an intelligent way, abstracting that painful logic away from you.
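To illustrate the dependency handling: because a resource attribute reference creates an implicit dependency, Terraform knows to create the network before anything that references it. When no attribute reference exists, ordering can be declared explicitly with `depends_on`. A sketch using a hypothetical resource name (`example_subnet` is not part of this deployment):

```hcl
resource "google_compute_subnetwork" "example_subnet" {
  # Hypothetical subnet; the self_link reference below creates an
  # implicit dependency on the tf_vpc_net1 network
  name          = "example-subnet"
  ip_cidr_range = "10.0.9.0/24"
  region        = "us-central1"
  network       = google_compute_network.tf_vpc_net1.self_link

  # Explicit ordering -- only needed when no attribute reference exists
  depends_on = [google_compute_network.tf_vpc_net1]
}
```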
Terraform Data
Although I’m not using `data` objects in the configuration script outlined in this blog post, it’s important to know that data sources can be used to retrieve information from a remote system.
Terraform Output Values
You can output various values pulled during configuration. You’ll see an example of this during the configuration execution below. More on outputs here: Output Values – Configuration Language – Terraform by HashiCorp
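As a quick illustration, an output block takes a `value` and an optional `description`. This sketch reuses the VPC resource from this configuration, but the output name itself is hypothetical:

```hcl
output "example_vpc_self_link" {
  description = "Self link of the VPC created by this configuration"
  value       = google_compute_network.tf_vpc_net1.self_link
}
```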
Terraform Configuration Overview for Deploying HA VPN on GCP
If you’d like to follow along, you can pull the code from the GitHub repo here:
git clone https://github.com/varnumd/terraform-gcp-cisco-vpn
In this directory you’ll see two (2) terraform files:
```
[17:11:35] dvarnum:gcp-cisco-vpn $ tree
.
├── gcp_variables.tf
└── main.tf
```
The `gcp_variables.tf` file needs to be updated with the values of your choice. The only two that you absolutely must update are `gcp_project` and `on_prem_ip1`. Update the others as you deem necessary.
Note that the first variable here is a prompt that will ask you for your credentials file when you run `terraform apply`. Alternatively, you could skip the prompt by loading the key file directly in the provider block with `credentials = file("account.json")`, where `account.json` is the name and location of the key file you downloaded earlier. (In Terraform 0.12, function calls such as `file()` aren’t allowed in variable defaults, so the provider block is the place to do this.)
```hcl
variable "gcp_credentials_file" {
  description = "Locate the GCP credentials file."
  type        = string
}

variable gcp_project {
  description = "GCP Project"
  default     = "[UPDATE-WITH-YOUR-PROJECT-ID]"
}

variable gcp_region {
  description = "Default to US Central."
  default     = "us-central1"
}

variable gcp_zone {
  description = "Default to US Central1c."
  default     = "us-central1-c"
}

variable on_prem_ip1 {
  description = "The IP of the on-prem VPN gateway"
  default     = "[UPDATE-WITH-YOUR-VPN-PUB-IP]"
}

variable gcp_asn {
  description = "BGP ASN of GCP Cloud Router"
  default     = 64997
}

variable on_prem_asn {
  description = "BGP ASN of On-Prem Router"
  default     = 65000
}

variable gcp_shared_secret {
  description = "VPN shared secret"
  default     = "d0v3r1a1d"
}
```
The next file, `main.tf`, is where all of the Terraform configuration happens. Let’s take a look.
First are the providers:
```hcl
provider "google" {
  project     = var.gcp_project
  region      = var.gcp_region
  zone        = var.gcp_zone
  credentials = var.gcp_credentials_file
}

provider "google-beta" {
  project     = var.gcp_project
  region      = var.gcp_region
  zone        = var.gcp_zone
  credentials = var.gcp_credentials_file
}
```
Next we’ll create the HA VPN Gateway by calling the `google_compute_ha_vpn_gateway` resource. This is a beta feature at the time of this writing, so I must specify the google-beta provider. Documentation for this resource can be found here: Google: google_compute_ha_vpn_gateway – Terraform by HashiCorp
```hcl
resource "google_compute_ha_vpn_gateway" "ha_gateway1" {
  provider = google-beta
  region   = "us-central1"
  name     = "ha-vpn-1"
  network  = google_compute_network.tf_vpc_net1.self_link
}
```
Don’t worry about the network statement just yet – we’ll get to that.
Next we’ll specify the external gateway that GCP will be connecting to (my on-prem router). Note the redundancy type I chose: I only have a single on-prem router but still want the 99.99% SLA on the GCP side. Other options can be found here: Creating an HA VPN gateway to a Peer VPN gateway | Cloud VPN
Documentation for this Terraform resource, `google_compute_external_vpn_gateway`, can be found here: Google: google_compute_external_vpn_gateway – Terraform by HashiCorp
```hcl
resource "google_compute_external_vpn_gateway" "external_gateway" {
  provider        = google-beta
  name            = "hq-cisco"
  redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
  description     = "An externally managed VPN gateway"
  interface {
    id         = 0
    ip_address = var.on_prem_ip1
  }
}
```
Configure a custom VPC network. Unlike AWS and Azure, VPCs in Google are global. We still need to set the routing mode to GLOBAL to make sure that routes are propagated to all regions.
```hcl
resource "google_compute_network" "tf_vpc_net1" {
  name                    = "tf-vpc-net-1"
  routing_mode            = "GLOBAL"
  auto_create_subnetworks = false
}
```
Configure subnets in the VPC. Notice the network value is `google_compute_network.tf_vpc_net1.self_link`. This is a special pointer to the VPC that is being created as part of this configuration; the same pointer was used when configuring the HA VPN Gateway resource.
```hcl
resource "google_compute_subnetwork" "tf_vpc_net1_subnet1" {
  name          = "ha-vpn-subnet-1"
  ip_cidr_range = "10.0.1.0/24"
  region        = "us-central1"
  network       = google_compute_network.tf_vpc_net1.self_link
}

resource "google_compute_subnetwork" "tf_vpc_net1_subnet2" {
  name          = "ha-vpn-subnet-2"
  ip_cidr_range = "10.0.2.0/24"
  region        = "us-west1"
  network       = google_compute_network.tf_vpc_net1.self_link
}
```
Configure a Cloud Router:
```hcl
resource "google_compute_router" "router1" {
  name    = "ha-vpn-router-1"
  network = google_compute_network.tf_vpc_net1.name
  bgp {
    asn = var.gcp_asn
  }
}
```
Configure two (2) VPN tunnels. The first tunnel maps to interface 0 on the VPN gateway, and the second maps to interface 1. More information about this resource can be found here: Google: google_compute_vpn_tunnel – Terraform by HashiCorp
```hcl
resource "google_compute_vpn_tunnel" "tunnel1" {
  provider                        = google-beta
  name                            = "ha-vpn-tunnel1"
  region                          = "us-central1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.ha_gateway1.self_link
  peer_external_gateway           = google_compute_external_vpn_gateway.external_gateway.self_link
  peer_external_gateway_interface = 0
  shared_secret                   = var.gcp_shared_secret
  router                          = google_compute_router.router1.self_link
  vpn_gateway_interface           = 0
}

resource "google_compute_vpn_tunnel" "tunnel2" {
  provider                        = google-beta
  name                            = "ha-vpn-tunnel2"
  region                          = "us-central1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.ha_gateway1.self_link
  peer_external_gateway           = google_compute_external_vpn_gateway.external_gateway.self_link
  peer_external_gateway_interface = 0
  shared_secret                   = var.gcp_shared_secret
  router                          = google_compute_router.router1.self_link
  vpn_gateway_interface           = 1
}
```
Configure two (2) router interfaces, with one mapped to each tunnel. You must use /30 prefixes in the 169.254.0.0/16 CIDR block for peering.
```hcl
# Router1 Interface 1 - Tunnel 1
resource "google_compute_router_interface" "router1_interface1" {
  name       = "router1-interface1"
  router     = google_compute_router.router1.name
  region     = "us-central1"
  ip_range   = "169.254.0.1/30"
  vpn_tunnel = google_compute_vpn_tunnel.tunnel1.name
}

resource "google_compute_router_peer" "router1_peer1" {
  name                      = "router1-peer1"
  router                    = google_compute_router.router1.name
  region                    = "us-central1"
  peer_ip_address           = "169.254.0.2"
  peer_asn                  = var.on_prem_asn
  advertised_route_priority = 100
  interface                 = google_compute_router_interface.router1_interface1.name
}

# Router1 Interface 2 - Tunnel 2
resource "google_compute_router_interface" "router1_interface2" {
  name       = "router1-interface2"
  router     = google_compute_router.router1.name
  region     = "us-central1"
  ip_range   = "169.254.1.1/30"
  vpn_tunnel = google_compute_vpn_tunnel.tunnel2.name
}

resource "google_compute_router_peer" "router1_peer2" {
  name                      = "router1-peer2"
  router                    = google_compute_router.router1.name
  region                    = "us-central1"
  peer_ip_address           = "169.254.1.2"
  peer_asn                  = var.on_prem_asn
  advertised_route_priority = 100
  interface                 = google_compute_router_interface.router1_interface2.name
}
```
Configure a GCE VM instance in each subnet which will be used for testing connectivity across the tunnel.
```hcl
resource "google_compute_instance" "vm_instance1" {
  name         = "terraform-intance-1"
  machine_type = "f1-micro"
  zone         = "us-central1-a"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    # Using Terraform interpolation, reference the self_link of the
    # subnet created earlier in this configuration
    subnetwork = google_compute_subnetwork.tf_vpc_net1_subnet1.self_link
    access_config {
    }
  }
}

resource "google_compute_instance" "vm_instance2" {
  name         = "terraform-intance-2"
  machine_type = "f1-micro"
  zone         = "us-west1-b"
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }
  network_interface {
    # Using Terraform interpolation, reference the self_link of the
    # subnet created earlier in this configuration
    subnetwork = google_compute_subnetwork.tf_vpc_net1_subnet2.self_link
    access_config {
    }
  }
}
```
By default, GCP custom networks do not have any firewall rules attached to them. Here’s a sample of generic firewall rules that will allow access to the VM instances via some common protocols. Firewall rules are defined at the VPC level but enforced for each instance.
```hcl
resource "google_compute_firewall" "tf_firewall" {
  name    = "terraform-firewall-base"
  network = google_compute_network.tf_vpc_net1.self_link

  allow {
    protocol = "icmp"
  }

  allow {
    protocol = "tcp"
    ports    = ["22", "80", "443"]
  }

  source_ranges = ["0.0.0.0/0"]
}
```
Lastly, we have some output values to display pertinent information in the terminal after the Terraform configuration is applied.
```hcl
output "gcp_external_vpn_address_1" {
  value = google_compute_ha_vpn_gateway.ha_gateway1.vpn_interfaces.0.ip_address
}

output "gcp_external_vpn_address_2" {
  value = google_compute_ha_vpn_gateway.ha_gateway1.vpn_interfaces.1.ip_address
}

output "gcp_instance1_external_ip" {
  value = google_compute_instance.vm_instance1.network_interface.0.access_config.0.nat_ip
}

output "gcp_instance1_internal_ip" {
  value = google_compute_instance.vm_instance1.network_interface.0.network_ip
}

output "gcp_instance2_external_ip" {
  value = google_compute_instance.vm_instance2.network_interface.0.access_config.0.nat_ip
}

output "gcp_instance2_internal_ip" {
  value = google_compute_instance.vm_instance2.network_interface.0.network_ip
}
```
Terraform in Action
A Terraform workflow looks like this:
- Initialize Terraform by executing `terraform init` in the directory where the Terraform configuration exists. This will initialize the provider plugins.
- (Optional) Validate your configuration syntax by running `terraform validate`. This will make sure everything checks out from a syntax perspective.
- (Optional) Execute `terraform plan` to get a view of everything that will change in the environment. This is a preview and does not actually make any changes. Although it’s optional, it’s recommended.
- Apply the configuration by executing `terraform apply`. This command will apply the configuration that was detailed in the plan.
Once in the directory, run `terraform init` to initialize the plugins.
```
[14:29:04] dvarnum:gcp-cisco-vpn $ terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "google" (hashicorp/google) 3.15.0...
- Downloading plugin for provider "google-beta" (terraform-providers/google-beta) 3.15.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.google: version = "~> 3.15"
* provider.google-beta: version = "~> 3.15"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
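As the init output suggests, it’s worth pinning provider versions so a future `terraform init` can’t silently pull a breaking major release. A sketch using the constraint strings from the output above (Terraform 0.12 syntax):

```hcl
# Pin the provider versions suggested by terraform init
provider "google" {
  version     = "~> 3.15"
  project     = var.gcp_project
  region      = var.gcp_region
  zone        = var.gcp_zone
  credentials = var.gcp_credentials_file
}

provider "google-beta" {
  version     = "~> 3.15"
  project     = var.gcp_project
  region      = var.gcp_region
  zone        = var.gcp_zone
  credentials = var.gcp_credentials_file
}
```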
Validate the syntax:
```
[20:23:21] dvarnum:terraform-gcp-cisco-vpn git:(master*) $ terraform validate
Success! The configuration is valid.
```
Now when we run `terraform plan` we’ll see a preview of the changes. This step logs in to GCP via the API using the service account created earlier, so you’ll need to pass in the key when running this command. There are several ways to avoid entering the credentials key each time: you can export a variable like `export GOOGLE_CLOUD_KEYFILE_JSON="/opt/terraform/account.json"` or explicitly call it out in the gcp_variables.tf file. For more information see this document: Google Provider Configuration Reference – Terraform by HashiCorp
```
[20:29:21] dvarnum:terraform-gcp-cisco-vpn git:(master*) $ terraform plan
var.gcp_credentials_file
  Locate the GCP credentials file.

  Enter a value: /Users/dvarnum/Downloads/terraform-testing-272920-df6f321fa2bf.json

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_compute_external_vpn_gateway.external_gateway will be created
  + resource "google_compute_external_vpn_gateway" "external_gateway" {
      + description     = "An externally managed VPN gateway"
      + id              = (known after apply)
      + name            = "hq-cisco"
      + project         = (known after apply)
      + redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
      + self_link       = (known after apply)

      + interface {
          + id = 0
...truncated

Plan: 15 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

[20:30:45] dvarnum:terraform-gcp-cisco-vpn git:(master*) $
```
If you get an error similar to the following when applying, your permissions haven’t been set properly. Revisit the role assignments earlier in this post for more information.
```
Error: Error creating Network: googleapi: Error 403: Required
'compute.networks.create' permission for
'projects/terraform-testing-272920/global/networks/tf-vpc-net-1', forbidden

  on main.tf line 35, in resource "google_compute_network" "tf_vpc_net1":
  35: resource "google_compute_network" "tf_vpc_net1" {
```
If everything looks good – time to run `terraform apply`:
```
[20:36:11] dvarnum:terraform-gcp-cisco-vpn git:(master*) $ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

…truncated

Plan: 15 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

google_compute_external_vpn_gateway.external_gateway: Creating...
google_compute_network.tf_vpc_net1: Creating...
google_compute_external_vpn_gateway.external_gateway: Still creating... [10s elapsed]
google_compute_network.tf_vpc_net1: Still creating... [10s elapsed]
google_compute_external_vpn_gateway.external_gateway: Creation complete after 12s [id=projects/terraform-testing-272920/global/externalVpnGateways/hq-cisco]
google_compute_network.tf_vpc_net1: Still creating... [20s elapsed]
google_compute_network.tf_vpc_net1: Creation complete after 23s [id=projects/terraform-testing-272920/global/networks/tf-vpc-net-1]
google_compute_ha_vpn_gateway.ha_gateway1: Creating...
google_compute_subnetwork.tf_vpc_net1_subnet2: Creating...
google_compute_router.router1: Creating...
google_compute_subnetwork.tf_vpc_net1_subnet1: Creating...
google_compute_firewall.tf_firewall: Creating...
google_compute_ha_vpn_gateway.ha_gateway1: Still creating... [10s elapsed]
google_compute_subnetwork.tf_vpc_net1_subnet2: Still creating... [10s elapsed]
google_compute_router.router1: Still creating... [10s elapsed]
google_compute_subnetwork.tf_vpc_net1_subnet1: Still creating... [10s elapsed]
google_compute_firewall.tf_firewall: Still creating... [10s elapsed]
google_compute_subnetwork.tf_vpc_net1_subnet2: Creation complete after 12s [id=projects/terraform-testing-272920/regions/us-west1/subnetworks/ha-vpn-subnet-2]
google_compute_instance.vm_instance2: Creating...
google_compute_firewall.tf_firewall: Creation complete after 13s [id=projects/terraform-testing-272920/global/firewalls/terraform-firewall-base]
google_compute_router.router1: Creation complete after 13s [id=projects/terraform-testing-272920/regions/us-central1/routers/ha-vpn-router-1]
google_compute_ha_vpn_gateway.ha_gateway1: Creation complete after 13s [id=projects/terraform-testing-272920/regions/us-central1/vpnGateways/ha-vpn-1]
google_compute_vpn_tunnel.tunnel2: Creating...
google_compute_vpn_tunnel.tunnel1: Creating...
google_compute_subnetwork.tf_vpc_net1_subnet1: Still creating... [20s elapsed]
google_compute_instance.vm_instance2: Still creating... [10s elapsed]
google_compute_vpn_tunnel.tunnel1: Still creating... [10s elapsed]
google_compute_vpn_tunnel.tunnel2: Still creating... [10s elapsed]
google_compute_subnetwork.tf_vpc_net1_subnet1: Creation complete after 23s [id=projects/terraform-testing-272920/regions/us-central1/subnetworks/ha-vpn-subnet-1]
google_compute_instance.vm_instance1: Creating...
google_compute_instance.vm_instance2: Still creating... [20s elapsed]
google_compute_vpn_tunnel.tunnel2: Still creating... [20s elapsed]
google_compute_vpn_tunnel.tunnel1: Still creating... [20s elapsed]
google_compute_instance.vm_instance1: Still creating... [10s elapsed]
google_compute_vpn_tunnel.tunnel1: Creation complete after 23s [id=projects/terraform-testing-272920/regions/us-central1/vpnTunnels/ha-vpn-tunnel1]
google_compute_router_interface.router1_interface1: Creating...
google_compute_vpn_tunnel.tunnel2: Creation complete after 24s [id=projects/terraform-testing-272920/regions/us-central1/vpnTunnels/ha-vpn-tunnel2]
google_compute_router_interface.router1_interface2: Creating...
google_compute_instance.vm_instance2: Creation complete after 25s [id=projects/terraform-testing-272920/zones/us-west1-b/instances/terraform-intance-2]
google_compute_instance.vm_instance1: Creation complete after 14s [id=projects/terraform-testing-272920/zones/us-central1-a/instances/terraform-intance-1]
google_compute_router_interface.router1_interface1: Still creating... [10s elapsed]
google_compute_router_interface.router1_interface2: Still creating... [10s elapsed]
google_compute_router_interface.router1_interface1: Creation complete after 13s [id=us-central1/ha-vpn-router-1/router1-interface1]
google_compute_router_peer.router1_peer1: Creating...
google_compute_router_interface.router1_interface2: Still creating... [20s elapsed]
google_compute_router_peer.router1_peer1: Still creating... [10s elapsed]
google_compute_router_interface.router1_interface2: Creation complete after 26s [id=us-central1/ha-vpn-router-1/router1-interface2]
google_compute_router_peer.router1_peer2: Creating...
google_compute_router_peer.router1_peer1: Still creating... [20s elapsed]
google_compute_router_peer.router1_peer2: Still creating... [10s elapsed]
google_compute_router_peer.router1_peer1: Still creating... [30s elapsed]
google_compute_router_peer.router1_peer2: Still creating... [20s elapsed]
google_compute_router_peer.router1_peer1: Creation complete after 38s [id=projects/terraform-testing-272920/regions/us-central1/routers/ha-vpn-router-1/router1-peer1]
google_compute_router_peer.router1_peer2: Still creating... [30s elapsed]
google_compute_router_peer.router1_peer2: Still creating... [40s elapsed]
google_compute_router_peer.router1_peer2: Creation complete after 47s [id=projects/terraform-testing-272920/regions/us-central1/routers/ha-vpn-router-1/router1-peer2]

Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

Outputs:

gcp_external_vpn_address_1 = 35.242.126.28
gcp_external_vpn_address_2 = 35.220.87.218
gcp_instance1_external_ip = 34.71.134.246
gcp_instance1_internal_ip = 10.0.1.2
gcp_instance2_external_ip = 34.83.193.162
gcp_instance2_internal_ip = 10.0.2.2
```
Configuring the Cisco IOS-XE device
In my case I’m just using a CSR. The output values at the end of the `terraform apply` execution help us build the IKEv2 configuration on our on-prem router. You can see the output at any time by running `terraform output`:

```
[20:38:58] dvarnum:terraform-gcp-cisco-vpn git:(master*) $ terraform output
gcp_external_vpn_address_1 = 35.242.126.28
gcp_external_vpn_address_2 = 35.220.87.218
gcp_instance1_external_ip = 34.71.134.246
gcp_instance1_internal_ip = 10.0.1.2
gcp_instance2_external_ip = 34.83.193.162
gcp_instance2_internal_ip = 10.0.2.2
```
Here is the sample Cisco IOS-XE configuration I’m using. Replace the values marked with "<<<<<<<<" with the appropriate values from the terraform output.
```
crypto ikev2 proposal VPN_IKEV2_PROPOSAL
 encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
 integrity sha256
 group 16
!
crypto ikev2 policy VPN_IKEV2_POLICY
 proposal VPN_IKEV2_PROPOSAL
!
crypto ikev2 keyring VPN_KEYRING
 peer GCP1
  address 35.242.126.28     <<<<<<<< ! Update this with gcp_external_vpn_address_1
  pre-shared-key d0v3r1a1d  ! Update this with your PSK
 !
 peer GCP2
  address 35.220.87.218     <<<<<<<< ! Update this with gcp_external_vpn_address_2
  pre-shared-key d0v3r1a1d  ! Update this with your PSK
!
crypto ikev2 profile VPN_IKEV2_PROFILE
 match address local interface GigabitEthernet3
 match identity remote any
 identity local address X.X.X.X  ! I'm using NAT-T so I needed to specify my public NAT here
 authentication local pre-share
 authentication remote pre-share
 keyring local VPN_KEYRING
 lifetime 36000
 dpd 60 5 periodic
!
crypto ipsec security-association lifetime seconds 3600
crypto ipsec security-association replay window-size 1024
!
crypto ipsec transform-set VPN_TS esp-aes 256 esp-sha-hmac
 mode tunnel
!
crypto ipsec profile VPN_VTI
 set security-association lifetime seconds 3600
 set transform-set VPN_TS
 set pfs group16
 set ikev2-profile VPN_IKEV2_PROFILE
!
interface Tunnel1
 ip address 169.254.0.2 255.255.255.252
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet3
 tunnel mode ipsec ipv4
 tunnel destination 35.242.126.28  <<<<<<<< ! Update this with gcp_external_vpn_address_1
 tunnel protection ipsec profile VPN_VTI
!
interface Tunnel2
 ip address 169.254.1.2 255.255.255.252
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet3
 tunnel mode ipsec ipv4
 tunnel destination 35.220.87.218  <<<<<<<< ! Update this with gcp_external_vpn_address_2
 tunnel protection ipsec profile VPN_VTI
!
```
```
router bgp 65000
 bgp log-neighbor-changes
 neighbor 169.254.0.1 remote-as 64997
 neighbor 169.254.0.1 timers 20 60 60
 neighbor 169.254.0.1 description BGP session over Tunnel1
 neighbor 169.254.1.1 remote-as 64997
 neighbor 169.254.1.1 timers 20 60 60
 neighbor 169.254.1.1 description BGP session over Tunnel2
 !
 address-family ipv4
  network 10.200.0.1 mask 255.255.255.255
  neighbor 169.254.0.1 activate
  neighbor 169.254.1.1 activate
 exit-address-family
```
Once applied, we can start verifying.
Verification
VPC and subnets are created:

VPN Tunnels are up and BGP sessions established:

Cloud Router shows both peers up:

We’re seeing the 10.200.0.1/32 network advertised from the CSR:

Both VM instances have been deployed.

From the CSR, both crypto sessions are up:
```
R1#show crypto session
Crypto session current status

Interface: Tunnel1
Profile: VPN_IKEV2_PROFILE
Session status: UP-ACTIVE
Peer: 35.242.126.28 port 4500
  Session ID: 3295
  IKEv2 SA: local X.X.X.X/4500 remote 35.242.126.28/4500 Active
  IPSEC FLOW: permit ip 0.0.0.0/0.0.0.0 0.0.0.0/0.0.0.0
        Active SAs: 2, origin: crypto map

Interface: Tunnel2
Profile: VPN_IKEV2_PROFILE
Session status: UP-ACTIVE
Peer: 35.220.87.218 port 4500
  Session ID: 3293
  IKEv2 SA: local X.X.X.X/4500 remote 35.220.87.218/4500 Active
  IPSEC FLOW: permit ip 0.0.0.0/0.0.0.0 0.0.0.0/0.0.0.0
        Active SAs: 2, origin: crypto map
```
Both Tunnel interfaces are up:
```
R1#sh ip int b | i T
Tunnel1        169.254.0.2   YES manual up    up
Tunnel2        169.254.1.2   YES manual up    up
```
BGP is up and learning the GCP prefixes of both subnets – one in the central1 region and one from the west1 region:
```
R1#show ip bgp summ
Neighbor        V    AS  MsgRcvd  MsgSent  TblVer  InQ  OutQ  Up/Down   State/PfxRcd
169.254.0.1     4 64997       67       73      20    0     0  00:21:02             2
169.254.1.1     4 64997       67       75      20    0     0  00:21:05             2

R1#sh ip route bgp
      10.0.0.0/8 is variably subnetted, 5 subnets, 2 masks
B        10.0.1.0/24 [20/100] via 169.254.1.1, 00:22:16
B        10.0.2.0/24 [20/336] via 169.254.1.1, 00:22:16
```
Validate with some pings:
```
R1#show ip int brie loopback 200
Interface     IP-Address   OK? Method Status  Protocol
Loopback200   10.200.0.1   YES manual up      up

R1#ping 10.0.1.2 source loopback 200
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.1.2, timeout is 2 seconds:
Packet sent with a source address of 10.200.0.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 61/70/105 ms

R1#ping 10.0.2.2 source loopback 200
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.2.2, timeout is 2 seconds:
Packet sent with a source address of 10.200.0.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 91/93/99 ms
```
Terraform State
Terraform created a terraform.tfstate file after applying the configuration. This state file is critical; it keeps track of the IDs of created resources so that Terraform knows what it is managing. This is unlike some other automation platforms such as Ansible, where there really isn’t state tracking. For production environments, this state file should live in a shared location accessible to anyone else who might need to manage or change the environment managed by Terraform. You can inspect the current state using `terraform show`.
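For shared use, Terraform supports remote state backends instead of a local terraform.tfstate. A sketch using a Google Cloud Storage backend (the bucket name is a placeholder; the bucket must exist and be accessible before `terraform init`):

```hcl
terraform {
  backend "gcs" {
    bucket = "my-terraform-state-bucket" # placeholder; create this bucket first
    prefix = "ha-vpn"                    # path prefix for state objects
  }
}
```

After adding a backend block, re-run `terraform init` so Terraform can migrate the local state to the bucket.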
Making live changes with Terraform
Say I wanted to change something I deployed or add something new. All I’d need to do is make the changes in the Terraform configuration files and then execute another `terraform apply`.
I’ll update the firewall policy to remove icmp and I’ll create a new vm instance.
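Concretely, the firewall change is just deleting the icmp `allow` block from the resource in main.tf, leaving something like this sketch of the edited resource:

```hcl
resource "google_compute_firewall" "tf_firewall" {
  name    = "terraform-firewall-base"
  network = google_compute_network.tf_vpc_net1.self_link

  # icmp allow block removed; only TCP 22/80/443 remains
  allow {
    protocol = "tcp"
    ports    = ["22", "80", "443"]
  }

  source_ranges = ["0.0.0.0/0"]
}
```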
```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

Terraform will perform the following actions:

  # google_compute_firewall.tf_firewall will be updated in-place
  ~ resource "google_compute_firewall" "tf_firewall" {
        creation_timestamp      = "2020-04-02T20:37:08.949-07:00"
        destination_ranges      = []
        direction               = "INGRESS"
        disabled                = false
        enable_logging          = false
        id                      = "projects/terraform-testing-272920/global/firewalls/terraform-firewall-base"
        name                    = "terraform-firewall-base"
        network                 = "https://www.googleapis.com/compute/v1/projects/terraform-testing-272920/global/networks/tf-vpc-net-1"
        priority                = 1000
        project                 = "terraform-testing-272920"
        self_link               = "https://www.googleapis.com/compute/v1/projects/terraform-testing-272920/global/firewalls/terraform-firewall-base"
        source_ranges           = [
            "0.0.0.0/0",
        ]
        source_service_accounts = []
        source_tags             = []
        target_service_accounts = []
        target_tags             = []

        allow {
            ports    = [
                "22",
                "80",
                "443",
            ]
            protocol = "tcp"
        }
      - allow {
          - ports    = [] -> null
          - protocol = "icmp" -> null
        }
    }

  # google_compute_instance.vm_instance3 will be created
...truncated

Plan: 1 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

google_compute_firewall.tf_firewall: Modifying... [id=projects/terraform-testing-272920/global/firewalls/terraform-firewall-base]
google_compute_instance.vm_instance3: Creating...
google_compute_firewall.tf_firewall: Still modifying... [id=projects/terraform-testing-272920/global/firewalls/terraform-firewall-base, 10s elapsed]
google_compute_instance.vm_instance3: Still creating... [10s elapsed]
google_compute_firewall.tf_firewall: Modifications complete after 12s [id=projects/terraform-testing-272920/global/firewalls/terraform-firewall-base]
google_compute_instance.vm_instance3: Creation complete after 14s [id=projects/terraform-testing-272920/zones/us-west1-b/instances/terraform-intance-3]

Apply complete! Resources: 1 added, 1 changed, 0 destroyed.
```
We now have the new instance:

And I can no longer ping:
```
R1#ping 10.0.2.2 source loopback 200
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.2.2, timeout is 2 seconds:
Packet sent with a source address of 10.200.0.1
.....
Success rate is 0 percent (0/5)
```
Terraform Destroy
When you’re done playing around (if that is all you’re doing), you can destroy everything that the Terraform configuration is managing by using `terraform destroy`.
```
[21:30:50] dvarnum:terraform-gcp-cisco-vpn git:(master*) $ terraform destroy

Plan: 0 to add, 0 to change, 16 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

..truncated

Destroy complete! Resources: 16 destroyed.
```