Terraform an HA-VPN between GCP and Cisco

Doing Infrastructure-as-Code (IaC) with Ansible has given me a headache – so I’ve recently been playing around with Terraform as an alternative to Ansible for tasks that require cloud IaaS interactions.

The goal of this blog post is to build an HA-VPN solution between GCP and an on-premises Cisco IOS-XE device (CSR) using Terraform. BGP will be established over the VPN to exchange routes dynamically, and Compute Engine instances will be deployed in GCP to test connectivity over the VPN.
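The GCP side of that design can be expressed in Terraform with the Google provider's HA VPN resources. The sketch below is illustrative, not the post's actual code: the names, region, ASNs, shared secret, and the CSR's peer IP (203.0.113.10) are all placeholder assumptions, and only one of the two tunnels an HA VPN normally uses is shown.

```hcl
# Hedged sketch: all names, IPs, and ASNs are placeholders.
resource "google_compute_ha_vpn_gateway" "ha_gw" {
  name    = "ha-vpn-gw"
  region  = "us-central1"
  network = "default"
}

# Represents the on-prem CSR as a peer gateway.
resource "google_compute_external_vpn_gateway" "onprem_csr" {
  name            = "onprem-csr"
  redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
  interface {
    id         = 0
    ip_address = "203.0.113.10" # placeholder CSR public IP
  }
}

# Cloud Router terminates the BGP sessions for dynamic routing.
resource "google_compute_router" "router" {
  name    = "ha-vpn-router"
  region  = "us-central1"
  network = "default"
  bgp {
    asn = 65001 # placeholder GCP-side ASN
  }
}

resource "google_compute_vpn_tunnel" "tunnel0" {
  name                            = "ha-vpn-tunnel0"
  region                          = "us-central1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.ha_gw.id
  vpn_gateway_interface           = 0
  peer_external_gateway           = google_compute_external_vpn_gateway.onprem_csr.id
  peer_external_gateway_interface = 0
  shared_secret                   = "replace-me" # placeholder pre-shared key
  router                          = google_compute_router.router.id
}

resource "google_compute_router_interface" "if0" {
  name       = "if-tunnel0"
  region     = "us-central1"
  router     = google_compute_router.router.name
  ip_range   = "169.254.0.1/30" # BGP session addressing (link-local range)
  vpn_tunnel = google_compute_vpn_tunnel.tunnel0.name
}

resource "google_compute_router_peer" "peer0" {
  name            = "bgp-peer0"
  region          = "us-central1"
  router          = google_compute_router.router.name
  interface       = google_compute_router_interface.if0.name
  peer_ip_address = "169.254.0.2"
  peer_asn        = 65002 # placeholder on-prem ASN
}
```

A production HA VPN would add a second tunnel on `vpn_gateway_interface = 1` (with its own router interface and BGP peer) to get the 99.99% availability SLA, and the CSR would carry the matching crypto and BGP configuration.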

Let’s get started.


Using Ansible and NetBox to deploy EVPN on Arista

Ansible, Nornir, and other automation frameworks are excellent for generating and deploying configurations in an automated fashion. In Ansible, you can run a playbook, loop through hosts in your inventory file, and deploy configurations with host-specific information by leveraging host_vars and group_vars. Unfortunately, as your automation environment grows and becomes more critical, you’ll find that managing inventory files and host variables across multiple tools becomes cumbersome and error-prone. Is ServiceNow correct, or Ansible? Is SolarWinds correct, or Cloud Vision Portal? Does my Ansible inventory include all of my data center switches, or have new ones been added since I last ran this playbook? Was this spreadsheet ever merged with our IPAM, and which is accurate? Why is my production switch configured for a VLAN not documented anywhere else? Inconsistencies in critical data like these are a thorn in an engineer’s side – causing issues, wasting time and resources, and forcing duplicate, manually entered data across disparate systems. Enter NetBox.


Arista BGP EVPN – Ansible Lab

In the previous two blog posts, I covered the concepts of EVPN and shared a detailed configuration example on Arista EOS. In this blog post, I’ll be covering how to automate the deployment of EVPN in a lab environment. After deployment, I want to run validations to make sure my intent is being met. Lastly, I’ll play around with a few scripts to deploy L2 and L3 VXLAN services. 

When studying this technology and demonstrating it to clients, I chose to use GNS3 because it’s nice to visualize the topology, easily perform packet captures, and share the project file with co-workers using a GNS3 Server. I could have chosen Vagrant for this, but since my topology has 10 vEOS devices, I found the boot time to be too long (although I hear that with KVM the nodes can boot in parallel). I chose 8 leafs because it gave me the most flexibility to demonstrate VXLAN Bridging, VLAN Routing, Border Services (such as segmentation or traffic engineering), and so on. You could probably get away with fewer leafs depending on your preference. That said, my topology in GNS3 looks like this:
