Arista BGP EVPN – Ansible Lab

In the previous two blog posts, I covered the concepts of EVPN and shared a detailed configuration example on Arista EOS. In this blog post, I’ll be covering how to automate the deployment of EVPN in a lab environment. After deployment, I want to run validations to make sure my intent is being met. Lastly, I’ll play around with a few scripts to deploy L2 and L3 VXLAN services. 

When studying this technology and demonstrating it to clients, I chose to use GNS3 because it’s nice to visualize the topology, easily perform packet captures, and I can share the project file with fellow co-workers using a GNS3 Server. I could have chosen Vagrant for this, but since my topology has 10 vEOS devices, I found the boot time to be too long (although I hear if I use KVM I can boot the nodes in parallel). I chose 8 leafs because it gave me the most flexibility to demonstrate VXLAN Bridging, VLAN Routing, Border Services (such as segmentation or traffic engineering), and so on. You could probably get away with fewer leafs depending on your preference. That said, my topology in GNS3 looks like this:

TL;DR

Clone this GitHub repo and use Ansible to automate an EVPN lab:

https://github.com/varnumd/ansible-arista-evpn-lab

Notes on GNS3

The Lab interfaces are connected as below:
  • spine1:et1 <> et11:leaf1
  • spine1:et2 <> et11:leaf2
  • spine1:et3 <> et11:leaf3
  • spine1:et4 <> et11:leaf4
  • spine1:et5 <> et11:leaf5
  • spine1:et6 <> et11:leaf6
  • spine1:et7 <> et11:leaf7
  • spine1:et8 <> et11:leaf8
  • spine2:et1 <> et12:leaf1
  • spine2:et2 <> et12:leaf2
  • spine2:et3 <> et12:leaf3
  • spine2:et4 <> et12:leaf4
  • spine2:et5 <> et12:leaf5
  • spine2:et6 <> et12:leaf6
  • spine2:et7 <> et12:leaf7
  • spine2:et8 <> et12:leaf8
  • leaf1:et10 <> et10:leaf2
  • leaf3:et10 <> et10:leaf4
  • leaf5:et10 <> et10:leaf6
  • leaf7:et10 <> et10:leaf8
Configure each vEOS with a base configuration that adds an IP address and user credentials so we can remotely manage the host. This could technically be handled with a ZTP server, but that is beyond the scope of this blog. Here is the base configuration loaded on each node; just replace the IPs with whichever you are using on your network. For simplicity, use the same IPs I did!
  • spine1: 10.0.0.140
  • spine2: 10.0.0.141
  • leaf1: 10.0.0.151
  • leaf2: 10.0.0.152
  • leaf3: 10.0.0.153
  • leaf4: 10.0.0.154
  • leaf5: 10.0.0.155
  • leaf6: 10.0.0.156
  • leaf7: 10.0.0.157
  • leaf8: 10.0.0.158
hostname <hostname>
ip routing
!
vrf definition mgmt
!
interface Management1
vrf forwarding mgmt
ip address <ip_address>
!
username ansible privilege 15 secret automation
aaa authorization exec default local
!
management api http-commands
no shutdown
!
vrf mgmt
no shutdown

Now, make sure your “Network Automation” container is configured with an IP address that can reach each of the vEOS boxes. Once we have that in place, we should be able to reach each of the nodes from the container.
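
Something like this quick loop from the container will confirm reachability (a bash one-liner, assuming the same IPs as above; adjust if yours differ):

for ip in 10.0.0.140 10.0.0.141 10.0.0.15{1..8}; do
  ping -c 1 -W 1 $ip > /dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done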

Ansible - Getting Started

I won’t be covering all of the specifics of Ansible in this blog post, but I will provide a brief breakdown of the Ansible playbook, tasks, and templates I used for this project. I’m still learning Ansible, so keep in mind that the notes below are not “best practice” – I’m just a dude burning the midnight oil to make my work day easier!

On the container, let’s upgrade Ansible and install git:

# upgrade Ansible
apt-add-repository ppa:ansible/ansible
apt update
apt upgrade
ansible --version
# install tree and git
apt install tree git
Next, clone my Ansible repo onto the Network Automation container:
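git clone https://github.com/varnumd/ansible-arista-evpn-lab.git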

Let’s take a look at the repo:

root@NetworkAutomation-1:~# cd ansible-arista-evpn-lab/
root@NetworkAutomation-1:~/ansible-arista-evpn-lab# tree
.
|-- README.md
|-- ansible.cfg
|-- deploy_evpn.yml
|-- deploy_l2vxlan.yml
|-- deploy_vrf.yml
|-- filter_plugins
|   |-- custom_plugins.py
|   `-- custom_plugins.pyc
|-- gen_config.py
|-- generate_host_vars.yml
|-- group_vars
|   `-- eos.yml
|-- host_vars
|   |-- leaf1.yaml
|   |-- leaf2.yaml
|   |-- leaf3.yaml
|   |-- leaf4.yaml
|   |-- leaf5.yaml
|   |-- leaf6.yaml
|   |-- leaf7.yaml
|   |-- leaf8.yaml
|   |-- spine1.yaml
|   `-- spine2.yaml
|-- hosts
|-- roles
|   `-- evpn
|       |-- README.md
|       |-- defaults
|       |   `-- main.yml
|       |-- handlers
|       |   `-- main.yml
|       |-- meta
|       |   `-- main.yml
|       |-- tasks
|       |   |-- bgp.yml
|       |   |-- evpn.yml
|       |   |-- interfaces.yml
|       |   |-- main.yml
|       |   `-- mlag.yml
|       |-- templates
|       |   |-- bgp.j2
|       |   |-- evpn.j2
|       |   |-- interfaces.j2
|       |   `-- mlag.j2
|       |-- tests
|       |   |-- inventory
|       |   `-- test.yml
|       `-- vars
|           `-- main.yml
|-- templates
|   |-- l2vxlan.j2
|   |-- single_vrf.j2
|   `-- vrf.j2
`-- validate_lab.yml
If you’re using different IP addresses, you’ll want to update the hosts file:
# cat hosts
---
all:
  children:
    eos:
      children:
        leafs:
          children:
            vtep1:
              hosts:
                leaf1:
                  ansible_host: 10.0.0.151
                leaf2:
                  ansible_host: 10.0.0.152
            vtep2:
              hosts:
                leaf3:
                  ansible_host: 10.0.0.153
                leaf4:
                  ansible_host: 10.0.0.154
            vtep3:
              hosts:
                leaf5:
                  ansible_host: 10.0.0.155
                leaf6:
                  ansible_host: 10.0.0.156
            vtep4:
              hosts:
                leaf7:
                  ansible_host: 10.0.0.157
                leaf8:
                  ansible_host: 10.0.0.158
          vars:
            ansible_network_os: eos
        spines:
          hosts:
            spine1:
              ansible_host: 10.0.0.140
            spine2:
              ansible_host: 10.0.0.141
          vars:
            ansible_network_os: eos
And in my lab I’m using simple password-based authentication, with the credentials sitting in plaintext. In the real world I’d protect these with something like Ansible Vault, but hey, it’s a lab!
# cat group_vars/eos.yml
---
ansible_network_os: eos
ansible_user: ansible
ansible_ssh_pass: automation
ansible_connection: network_cli
The host_vars were generated using a Python3 script which you can download here. I’ve also included the script in the GitHub repo for this lab, and it can be run using “ansible-playbook generate_host_vars.yml”. The script has a few variables which are already set at the top of the script. Feel free to change these if you want to use different values. Note, I am NOT a developer, so while my code may get the job done, it’s not the cleanest, haha!

The variables I used are below, with a rough sketch of the expansion logic after the list:
  • Number of Spines = 2
  • Number of Leafs = 8
  • P2P IP Range = ‘10.0.0.0/16’
  • Loopback0 Range = ‘10.0.250.0/24’
  • Loopback1 Range = ‘10.0.255.0/24’
  • iBGP Range = P2P Range, 3rd octet is 1+# Spines
  • iBGP VLAN = ‘4091’
  • ASN Range = 65000-65535
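
To make those rules concrete, here’s a stripped-down sketch of the expansion logic using Python’s ipaddress module. This is NOT the actual gen_config.py (that’s in the repo); it just illustrates how the values above turn into the addressing you’ll see in the host_vars:

import ipaddress

NUM_SPINES = 2
NUM_LEAFS = 8
LOOPBACK0_RANGE = ipaddress.ip_network('10.0.250.0/24')

# P2P links: each spine owns a /24 out of 10.0.0.0/16 (10.0.1.0/24 for
# spine1, 10.0.2.0/24 for spine2) and hands out consecutive /31s, one per
# leaf. Even IP = spine side, odd IP = leaf side.
for spine in range(1, NUM_SPINES + 1):
    for leaf in range(1, NUM_LEAFS + 1):
        spine_ip = ipaddress.ip_address('10.0.%d.%d' % (spine, (leaf - 1) * 2))
        leaf_ip = spine_ip + 1
        print('spine%d et%d %s/31 <-> leaf%d et%d %s/31'
              % (spine, leaf, spine_ip, leaf, 10 + spine, leaf_ip))

# iBGP peering (Vlan4091): 3rd octet = 1 + number of spines, so the MLAG
# pairs peer over 10.0.3.x /31s (e.g. 10.0.3.0 <-> 10.0.3.1 on leaf1/leaf2).

# Loopback0: spines take .1 and .2; leafs start at .11 (leaf1 = 10.0.250.11).
spine_lo0 = [LOOPBACK0_RANGE[n] for n in range(1, NUM_SPINES + 1)]
leaf_lo0 = [LOOPBACK0_RANGE[10 + n] for n in range(1, NUM_LEAFS + 1)]

# ASNs: spines share 65000; each MLAG leaf pair gets 65001, 65002, ...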
 
All host_vars are created from only the values above. Here are example host_vars for a leaf and a spine:
# cat host_vars/leaf1.yaml
asn: 65001
bgp_neighbors:
- neighbor: 10.0.3.1
  remote_as: 65001
  state: present
- neighbor: 10.0.1.0
  remote_as: 65000
  state: present
- neighbor: 10.0.2.0
  remote_as: 65000
  state: present
evpn_neighbors:
- neighbor: 10.0.250.1
  remote_as: 65000
  state: present
- neighbor: 10.0.250.2
  remote_as: 65000
  state: present
hostname: leaf1
interfaces:
- address: 10.0.1.1
  description: spine1
  interface: Ethernet11
  mask: /31
- address: 10.0.2.1
  description: spine2
  interface: Ethernet12
  mask: /31
- address: 10.0.3.0
  interface: Vlan4091
  mask: /31
loopback0_ip: 10.0.250.11
loopback1_ip: 10.0.255.11
side: left

# cat host_vars/spine1.yaml

asn: 65000
bgp_neighbors:
- neighbor: 10.0.1.1
  remote_as: 65001
  state: present
- neighbor: 10.0.1.3
  remote_as: 65001
  state: present
- neighbor: 10.0.1.5
  remote_as: 65002
  state: present
- neighbor: 10.0.1.7
  remote_as: 65002
  state: present
- neighbor: 10.0.1.9
  remote_as: 65003
  state: present
- neighbor: 10.0.1.11
  remote_as: 65003
  state: present
- neighbor: 10.0.1.13
  remote_as: 65004
  state: present
- neighbor: 10.0.1.15
  remote_as: 65004
  state: present
evpn_neighbors:
- neighbor: 10.0.250.11
  remote_as: 65001
  state: present
- neighbor: 10.0.250.12
  remote_as: 65001
  state: present
- neighbor: 10.0.250.13
  remote_as: 65002
  state: present
- neighbor: 10.0.250.14
  remote_as: 65002
  state: present
- neighbor: 10.0.250.15
  remote_as: 65003
  state: present
- neighbor: 10.0.250.16
  remote_as: 65003
  state: present
- neighbor: 10.0.250.17
  remote_as: 65004
  state: present
- neighbor: 10.0.250.18
  remote_as: 65004
  state: present
hostname: spine1
interfaces:
- address: 10.0.1.0
  description: leaf1
  interface: Ethernet1
  mask: /31
- address: 10.0.1.2
  description: leaf2
  interface: Ethernet2
  mask: /31
- address: 10.0.1.4
  description: leaf3
  interface: Ethernet3
  mask: /31
- address: 10.0.1.6
  description: leaf4
  interface: Ethernet4
  mask: /31
- address: 10.0.1.8
  description: leaf5
  interface: Ethernet5
  mask: /31
- address: 10.0.1.10
  description: leaf6
  interface: Ethernet6
  mask: /31
- address: 10.0.1.12
  description: leaf7
  interface: Ethernet7
  mask: /31
- address: 10.0.1.14
  description: leaf8
  interface: Ethernet8
  mask: /31
loopback0_ip: 10.0.250.1
Let’s make sure we can ping each of the nodes from Ansible:
# ansible all -m ping
leaf7 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
leaf8 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
leaf6 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
leaf5 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
leaf4 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
spine1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
leaf3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
leaf1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
spine2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
leaf2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
Cool – now let’s check to see which version of code these are running.  I’ll limit this to just the spines:
# ansible spines -m eos_command -a "commands='show version'"     
spine2 | SUCCESS => {
...
    "stdout_lines": [
        [
            "Arista vEOS",
            "Hardware version:    ",
            "Serial number:       ",
            "System MAC address:  0c56.3380.5843",
            "",
            "Software image version: 4.21.1.1F",
            "Architecture:           i386",
            "Internal build version: 4.21.1.1F-10146868.42111F",
            "Internal build ID:      ed3973a9-79db-4acc-b9ac-19b9622d23e2",
            "",
            "Uptime:                 0 weeks, 1 days, 22 hours and 31 minutes",
            "Total memory:           2016548 kB",
            "Free memory:            1206552 kB"
        ]
    ]
}
spine1 | SUCCESS => {
...
    "stdout_lines": [
        [
            "Arista vEOS",
            "Hardware version:    ",
            "Serial number:       ",
            "System MAC address:  0c56.33aa.b444",
            "",
            "Software image version: 4.21.1.1F",
            "Architecture:           i386",
            "Internal build version: 4.21.1.1F-10146868.42111F",
            "Internal build ID:      ed3973a9-79db-4acc-b9ac-19b9622d23e2",
            "",
            "Uptime:                 0 weeks, 1 days, 22 hours and 38 minutes",
            "Total memory:           2016548 kB",
            "Free memory:            1200324 kB"
        ]
    ]
}

Ansible Role - EVPN

I created a role in Ansible called “evpn”. This role will deploy the configuration of the interfaces, MLAG, BGP, and EVPN. In the roles directory, we can see a list of tasks:
# ll roles/evpn/tasks/
total 28
drwxr-xr-x 2 root root 4096 Feb 18 22:48 ./
drwxr-xr-x 9 root root 4096 Feb 18 22:48 ../
-rw-r--r-- 1 root root 73 Feb 18 22:48 bgp.yml
-rw-r--r-- 1 root root 57 Feb 18 22:48 evpn.yml
-rw-r--r-- 1 root root 94 Feb 18 22:48 interfaces.yml
-rw-r--r-- 1 root root 163 Feb 18 22:48 main.yml
-rw-r--r-- 1 root root 77 Feb 18 22:48 mlag.yml
The “main.yml” task file is what gets executed when the role is called.
# cat roles/evpn/tasks/main.yml
---
# tasks file for evpn
- import_tasks: interfaces.yml
- import_tasks: bgp.yml
- import_tasks: mlag.yml
  when: "'leafs' in group_names"
- import_tasks: evpn.yml
This role will first run interfaces.yml, followed by bgp.yml, then mlag.yml (if the device is in the ‘leafs’ group), and finally evpn.yml.

Interfaces

This task calls the interfaces.j2 template:
# cat roles/evpn/tasks/interfaces.yml
---
- name: Configure Interfaces
  eos_config:
    src: interfaces.j2
  tags:
    - interfaces
The template iterates over the interfaces in the host’s host_vars file and generates the configuration:
# cat roles/evpn/templates/interfaces.j2 
{% for intf in interfaces %}
interface {{ intf.interface }}
{% if intf.description is defined %}
description {{ intf.description }}
{% endif %}
{% if 'Ethernet' in intf.interface %}
no switchport
{% endif %}
ip address {{ intf.address }}{{ intf.mask }}
mtu 9214
no shutdown
exit
!
{% endfor %}
interface Loopback0
ip address {{ loopback0_ip }}/32

BGP

Likewise, the bgp.yml task runs on each node using the bgp.j2 Jinja2 template:
# cat roles/evpn/tasks/bgp.yml              
---
- name: Configure BGP
  eos_config:
    src: bgp.j2
  tags:
    - bgp
# cat roles/evpn/templates/bgp.j2           
router bgp {{ asn }}
router-id {{ loopback0_ip }}
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
maximum-paths 4 ecmp 64
{% for item in bgp_neighbors %}
neighbor {{ item.neighbor }} remote-as {{ item.remote_as }}
{% if item.remote_as == asn %}
neighbor {{ item.neighbor }} next-hop-self
{% endif %}
{% endfor %}
!
address-family ipv4
network {{ loopback0_ip }}/32
{% for item in bgp_neighbors %}
neighbor {{ item.neighbor }} activate
{% endfor %}
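
To see what that produces, here is the rendered output of bgp.j2 for leaf1, using the host_vars shown earlier. Note the next-hop-self line appears only for the iBGP neighbor (10.0.3.1), whose remote AS matches leaf1’s own ASN:

router bgp 65001
router-id 10.0.250.11
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
maximum-paths 4 ecmp 64
neighbor 10.0.3.1 remote-as 65001
neighbor 10.0.3.1 next-hop-self
neighbor 10.0.1.0 remote-as 65000
neighbor 10.0.2.0 remote-as 65000
!
address-family ipv4
network 10.0.250.11/32
neighbor 10.0.3.1 activate
neighbor 10.0.1.0 activate
neighbor 10.0.2.0 activate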

MLAG

Same for MLAG

# cat roles/evpn/tasks/mlag.yml       
---
- name: Configure MLAG
  eos_config:
    src: mlag.j2
  tags:
    - mlag
I could have abstracted more of this, but haven’t gotten to it yet. Next time…
# cat roles/evpn/templates/mlag.j2    
{% if 'left' in side %}
{% set mlag_octet = '254' %}
{% elif 'right' in side %}
{% set mlag_octet = '255' %}
{% endif %}
vlan 4090
name mlag-peer
trunk group mlag-peer
!
interface vlan 4090
ip address 10.0.99.{{ mlag_octet }}/31
no autostate
no shutdown
!
interface Ethernet 10
description mlag peer link
channel-group 999 mode active
!
interface port-channel 999
description MLAG Peer
switchport mode trunk
spanning-tree link-type point-to-point
switchport trunk group mlag-peer
exit
!
no spanning-tree vlan 4090
!
mlag configuration
domain-id leafs
peer-link port-channel 999
local-interface vlan 4090
{% if '254' in mlag_octet %}
peer-address 10.0.99.255
{% elif '255' in mlag_octet %}
peer-address 10.0.99.254
{% endif %}
no shutdown
!
vlan 4091
name mlag-ibgp
trunk group mlag-peer
no spanning-tree vlan 4091

EVPN

# cat roles/evpn/tasks/evpn.yml        
---
- name: Configure EVPN
  eos_config:
    src: evpn.j2
Here’s the Jinja2 template. Note that I’m only deploying a VTEP if the device is a leaf.
# cat roles/evpn/templates/evpn.j2     
service routing protocols model multi-agent
router bgp {{ asn }}
neighbor evpn peer-group
neighbor evpn next-hop-unchanged
neighbor evpn update-source Loopback0
neighbor evpn ebgp-multihop 3
neighbor evpn send-community extended
neighbor evpn maximum-routes 12000
{% for item in evpn_neighbors %}
neighbor {{ item.neighbor }} peer-group evpn
neighbor {{ item.neighbor }} remote-as {{ item.remote_as }}
{% endfor %}
!
address-family evpn
neighbor evpn activate
{% if 'leaf' in hostname %}
network {{ loopback1_ip }}/32
interface Vxlan1
vxlan source-interface Loopback1
vxlan udp-port 4789
vxlan learn-restrict any
interface Loopback1
ip address {{ loopback1_ip }}/32
{% endif %}

Running the Playbook

All you need to do is run the following command to fully deploy the environment.
ansible-playbook deploy_evpn.yml
Let’s see it in action. I’m going to increase the parallelization to 10 forks to speed up the deployment.
# ansible-playbook deploy_evpn.yml -f 10
PLAY [all] ******************************************************************************************************************************************************************************
TASK [evpn : Configure Interfaces] ******************************************************************************************************************************************************
changed: [leaf1]
changed: [leaf7]
changed: [leaf8]
changed: [spine1]
changed: [spine2]
changed: [leaf2]
changed: [leaf6]
changed: [leaf5]
changed: [leaf3]
changed: [leaf4]
TASK [evpn : Configure BGP] *************************************************************************************************************************************************************
changed: [leaf1]
changed: [spine1]
changed: [leaf7]
changed: [leaf8]
changed: [spine2]
changed: [leaf2]
changed: [leaf5]
changed: [leaf4]
changed: [leaf6]
changed: [leaf3]
TASK [evpn : Configure MLAG] ************************************************************************************************************************************************************
skipping: [spine1]
skipping: [spine2]
changed: [leaf2]
changed: [leaf1]
changed: [leaf8]
changed: [leaf7]
changed: [leaf5]
changed: [leaf3]
changed: [leaf6]
changed: [leaf4]
TASK [evpn : Configure EVPN] ************************************************************************************************************************************************************
ok: [spine1]
changed: [leaf1]
changed: [leaf8]
changed: [leaf7]
changed: [spine2]
changed: [leaf2]
changed: [leaf5]
changed: [leaf4]
changed: [leaf3]
changed: [leaf6]
PLAY RECAP ******************************************************************************************************************************************************************************
leaf1 : ok=4 changed=4 unreachable=0 failed=0
leaf2 : ok=4 changed=4 unreachable=0 failed=0
leaf3 : ok=4 changed=4 unreachable=0 failed=0
leaf4 : ok=4 changed=4 unreachable=0 failed=0
leaf5 : ok=4 changed=4 unreachable=0 failed=0
leaf6 : ok=4 changed=4 unreachable=0 failed=0
leaf7 : ok=4 changed=4 unreachable=0 failed=0
leaf8 : ok=4 changed=4 unreachable=0 failed=0
spine1 : ok=3 changed=2 unreachable=0 failed=0
spine2 : ok=3 changed=3 unreachable=0 failed=0
root@NetworkAutomation-1:~/ansible-arista-evpn-lab#
I can quickly see that EVPN is deployed by running “show bgp evpn summary” on the spines:
# ansible spines -m eos_command -a "commands='show bgp evpn summary'"       
spine2 | SUCCESS => {
...
            "  Neighbor       V  AS     MsgRcvd  MsgSent  InQ  OutQ  Up/Down   State  PfxRcd  PfxAcc",
            "  10.0.250.11    4  65001        4        4    0     0  00:00:29  Estab  0       0",
            "  10.0.250.12    4  65001        4        4    0     0  00:00:19  Estab  0       0",
            "  10.0.250.13    4  65002        4        4    0     0  00:00:10  Estab  0       0",
            "  10.0.250.14    4  65002        4        4    0     0  00:00:11  Estab  0       0",
            "  10.0.250.15    4  65003        4        4    0     0  00:00:24  Estab  0       0",
            "  10.0.250.16    4  65003        4        4    0     0  00:00:28  Estab  0       0",
            "  10.0.250.17    4  65004        4        4    0     0  00:00:20  Estab  0       0",
            "  10.0.250.18    4  65004        4        4    0     0  00:00:19  Estab  0       0"
        ]
    ]
}
spine1 | SUCCESS => {
...
            "  Neighbor       V  AS     MsgRcvd  MsgSent  InQ  OutQ  Up/Down   State  PfxRcd  PfxAcc",
            "  10.0.250.11    4  65001        4        4    0     0  00:00:33  Estab  0       0",
            "  10.0.250.12    4  65001        4        4    0     0  00:00:22  Estab  0       0",
            "  10.0.250.13    4  65002        4        6    0     0  00:00:20  Estab  0       0",
            "  10.0.250.14    4  65002        4        4    0     0  00:00:20  Estab  0       0",
            "  10.0.250.15    4  65003        4        4    0     0  00:00:25  Estab  0       0",
            "  10.0.250.16    4  65003        4        4    0     0  00:00:29  Estab  0       0",
            "  10.0.250.17    4  65004        4        4    0     0  00:00:20  Estab  0       0",
            "  10.0.250.18    4  65004        4        4    0     0  00:00:20  Estab  0       0"
        ]
    ]
}
Looks great!
 

Validation Playbook

If you want to validate more than this, such as LLDP, MLAG, underlay BGP, and EVPN, feel free to execute the “validate_lab.yml” playbook. Here I’ll run it but limit it to just the spines (-l spines) and only EVPN checks (-t evpn):

# ansible-playbook validate_lab.yml -l spines -t evpn
PLAY [leafs] ***********************************************************************************************************************************************************************************
skipping: no hosts matched
PLAY [leafs, spines] ***************************************************************************************************************************************************************************
TASK [Gather EVPN Summary] *********************************************************************************************************************************************************************
ok: [spine1]
ok: [spine2]
TASK [Get EVPN peer list] **********************************************************************************************************************************************************************
ok: [spine2]
ok: [spine1]
TASK [Validate each leaf has 2 EVPN peers] *****************************************************************************************************************************************************
skipping: [spine1]
skipping: [spine2]
TASK [Validate each spine has 6 EVPN peers] ****************************************************************************************************************************************************
ok: [spine2] => {
    "changed": false,
    "msg": "All assertions passed"
}
ok: [spine1] => {
    "changed": false,
    "msg": "All assertions passed"
}
TASK [Validate all BGP EVPN sessions are established] ******************************************************************************************************************************
ok: [spine2] => (item=10.0.250.11) => {
    "changed": false,
    "item": "10.0.250.11",
    "msg": "All assertions passed"
}
ok: [spine2] => (item=10.0.250.12) => {
    "changed": false,
    "item": "10.0.250.12",
    "msg": "All assertions passed"
}
ok: [spine2] => (item=10.0.250.13) => {
    "changed": false,
    "item": "10.0.250.13",
    "msg": "All assertions passed"
}
ok: [spine2] => (item=10.0.250.14) => {
    "changed": false,
    "item": "10.0.250.14",
    "msg": "All assertions passed"
}
ok: [spine2] => (item=10.0.250.15) => {
    "changed": false,
    "item": "10.0.250.15",
    "msg": "All assertions passed"
}
ok: [spine2] => (item=10.0.250.16) => {
    "changed": false,
    "item": "10.0.250.16",
    "msg": "All assertions passed"
}
ok: [spine2] => (item=10.0.250.17) => {
    "changed": false,
    "item": "10.0.250.17",
    "msg": "All assertions passed"
}
ok: [spine2] => (item=10.0.250.18) => {
    "changed": false,
    "item": "10.0.250.18",
    "msg": "All assertions passed"
}
ok: [spine1] => (item=10.0.250.11) => {
    "changed": false,
    "item": "10.0.250.11",
    "msg": "All assertions passed"
}
ok: [spine1] => (item=10.0.250.12) => {
    "changed": false,
    "item": "10.0.250.12",
    "msg": "All assertions passed"
}
ok: [spine1] => (item=10.0.250.13) => {
    "changed": false,
    "item": "10.0.250.13",
    "msg": "All assertions passed"
}
ok: [spine1] => (item=10.0.250.14) => {
    "changed": false,
    "item": "10.0.250.14",
    "msg": "All assertions passed"
}
ok: [spine1] => (item=10.0.250.15) => {
    "changed": false,
    "item": "10.0.250.15",
    "msg": "All assertions passed"
}
ok: [spine1] => (item=10.0.250.16) => {
    "changed": false,
    "item": "10.0.250.16",
    "msg": "All assertions passed"
}
ok: [spine1] => (item=10.0.250.17) => {
    "changed": false,
    "item": "10.0.250.17",
    "msg": "All assertions passed"
}
ok: [spine1] => (item=10.0.250.18) => {
    "changed": false,
    "item": "10.0.250.18",
    "msg": "All assertions passed"
}
PLAY RECAP *************************************************************************************************************************************************************************************
spine1 : ok=4 changed=0 unreachable=0 failed=0
spine2 : ok=4 changed=0 unreachable=0 failed=0
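
The validation playbook itself lives in the repo, so I won’t paste it all here. If you ever want the same spot-check outside of Ansible, a rough standalone sketch using Arista’s eAPI (which we enabled with “management api http-commands” in the base config) could look like the following. Fair warning: the response key layout (“vrfs” -> “default” -> “peers”) is my assumption from EOS 4.2x JSON output, so verify it against your image:

import ssl
from jsonrpclib import Server  # pip install jsonrpclib-pelix

# Lab only: vEOS serves a self-signed certificate
ssl._create_default_https_context = ssl._create_unverified_context

SPINES = {'spine1': '10.0.0.140', 'spine2': '10.0.0.141'}

for name, ip in SPINES.items():
    switch = Server('https://ansible:automation@%s/command-api' % ip)
    summary = switch.runCmds(1, ['show bgp evpn summary'])[0]
    peers = summary['vrfs']['default']['peers']
    up = [p for p, d in peers.items() if d['peerState'] == 'Established']
    # Each spine should have an established EVPN session to all 8 leafs
    assert len(up) == 8, '%s: only %d of %d peers established' % (name, len(up), len(peers))
    print('%s: all %d EVPN peers established' % (name, len(up)))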

Deploy L2 and L3 VXLAN Services

I’m still playing around with these, but what I’d ultimately like to do is automate deployments of L2 or L3 VXLAN services. Below are a couple of basic examples, included in the repo:

Deploying an L2 VXLAN Service

Here I’m deploying an L2 VXLAN service for VLAN 40 on VTEPs 1 and 2.

# ansible-playbook deploy_l2vxlan.yml -l 'vtep1,vtep2' -e "vlan_name=ansible_test vlan_id=40"
PLAY [leafs] ******************************************************************************************************************************************************************
TASK [Configure L2 VXLAN] *****************************************************************************************************************************************************
ok: [leaf1]
ok: [leaf4]
ok: [leaf2]
ok: [leaf3]
PLAY RECAP ********************************************************************************************************************************************************************
leaf1 : ok=1 changed=1 unreachable=0 failed=0
leaf2 : ok=1 changed=1 unreachable=0 failed=0
leaf3 : ok=1 changed=1 unreachable=0 failed=0
leaf4 : ok=1 changed=1 unreachable=0 failed=0

Just checking one of the Leafs, we can see it was successfully deployed:

leaf1#show vlan 40
VLAN  Name                             Status    Ports
----- -------------------------------- --------- -------------------------------
40    ansible_test                     active    Po999, Vx1

leaf1#show vxlan vni
VNI to VLAN Mapping for Vxlan1
VNI          VLAN       Source       Interface       802.1Q Tag
------------ ---------- ------------ --------------- ----------
110040       40         static       Vxlan1          40
Note: * indicates a Dynamic VLAN

Deploy a tenant VRF to all Leafs

# ansible-playbook deploy_vrf.yml -l leafs -e "vrf_name=ansible vrf_id=1"
PLAY [leafs] ******************************************************************************************************************************************************************
TASK [Configure L3 VXLAN VRF] *************************************************************************************************************************************************
changed: [leaf1]
changed: [leaf8]
changed: [leaf7]
changed: [leaf6]
changed: [leaf5]
changed: [leaf4]
changed: [leaf2]
changed: [leaf3]
PLAY RECAP ********************************************************************************************************************************************************************
leaf1 : ok=1 changed=1 unreachable=0 failed=0
leaf2 : ok=1 changed=1 unreachable=0 failed=0
leaf3 : ok=1 changed=1 unreachable=0 failed=0
leaf4 : ok=1 changed=1 unreachable=0 failed=0
leaf5 : ok=1 changed=1 unreachable=0 failed=0
leaf6 : ok=1 changed=1 unreachable=0 failed=0
leaf7 : ok=1 changed=1 unreachable=0 failed=0
leaf8 : ok=1 changed=1 unreachable=0 failed=0

Checking a random leaf, we can see it was deployed:

leaf1#show vrf
Maximum number of vrfs allowed: 256
   Vrf          RD                 Protocols      State                Interfaces
------------ ------------------ -------------- -------------------- -----------
   mgmt        <not set>          ipv4,ipv6      v4:no routing,       Management1
                                                 v6:no routing
   ansible     10.0.250.11:1      ipv4,ipv6      v4:routing,          Vlan1008
                                                 v6:no routing
leaf1#
leaf1#show run sec bgp | b vrf
vrf ansible
rd 10.0.250.11:1
route-target import evpn 1:100001
route-target export evpn 1:100001
redistribute connected
leaf1#sh run int vxlan1
interface Vxlan1
vxlan source-interface Loopback1
vxlan udp-port 4789
vxlan vlan 40 vni 110040
vxlan vrf ansible vni 100001
vxlan learn-restrict any
leaf1#

Closing Thoughts

I spent a lot of time copy/pasting, so I decided to try to automate my EVPN lab using Ansible. Although there is much room for improvement, and many features I’d like to add in the future, I feel accomplished with this initial build. Numerous hours were spent learning Ansible, troubleshooting YAML formats, debugging Jinja2 templates, and constantly blowing up my own lab. Like anything, it’s a learning experience. I’m no expert at network automation (one day), and these are mostly notes-to-self. In other words, please be careful using any of this code on your own equipment. I’m hopeful my next version will be much more abstracted with integrated application services, ZTP, and a lot more error handling. Stay tuned…

David Varnum



7 Responses

  1. Can you post the next updates on the more abstracted version with integrated application services and ZTP? That will be great. Thanks for a great post.

  2. Chris Stamm says:

    What purpose is…

    state: present

    …serving in the BGP section of the host_vars file? I don’t see it referenced when you build the configuration through the template.

    • David Varnum says:

      Good question – that is an artifact from some other playbooks outside of this project where I check the state to determine whether a configuration parameter should exist. Apologies for the confusion.

  3. Damien says:

    Hi David,

    Great lab to discover VxLAN & EVPN!
    I downloaded vEOS 4.23.0.1F for this lab from the Arista website.
    I successfully configured the spines and leafs thanks to your playbooks.
    The validation playbook failed, but the L2VXLAN lab works fine anyway:
    ** I plugged 2 Ubuntu Docker Guests into leaf1 and leaf3 and configured the associated interfaces on those 2 boxes (VLAN 40)
    ** the 2 Linux machines can ping each other through VXLAN

    Anyway, I’m stuck with the L3 configuration:
    ** the deploy_vrf playbook works fine
    ** I plugged 2 other Ubuntu Docker Guests into leaf5 (eth0: 10.34.34.34/24) and leaf7 (eth0: 10.78.78.78/24)
    ** then I configured Vlan34 on leaf5 and Vlan78 on leaf7:
    interface Vlan34
    vrf ansible
    ip address 10.34.34.2/24
    ip virtual-router address 10.34.34.1
    interface Vlan78
    vrf ansible
    ip address 10.78.78.2/24
    ip virtual-router address 10.78.78.1

    Ping 10.78.78.78 from 10.34.34.34 does not work.

    At first the MAC address of the gateway was not resolved on the Ubuntu Docker Guests.
    I set up the virtual router MAC address on both leafs:
    ip virtual-router mac-address 34:34:34:34:34:34 (leaf5)
    ip virtual-router mac-address 78:78:78:78:78:78 (leaf7)

    Now the ARP gets resolved but the ping does not work:
    ? (10.34.34.1) at 34:34:34:34:34:34 [ether] on eth0

    I went through various checks, but so far I have not found out what is wrong (example on leaf5):

    leaf5#sh vrf
    Maximum number of vrfs allowed: 1024
       VRF         RD                Protocols    State              Interfaces
    ----------- ----------------- ------------ ------------------ ------------------------
       ansible    10.0.250.15:1     ipv4,ipv6    v4:routing,        Vlan34, Vlan4094
                                                 v6:no routing

       default                      ipv4,ipv6    v4:routing,        Ethernet11, Ethernet12,
                                                 v6:no routing      Loopback0, Loopback1,
                                                                    Vlan4090, Vlan4091
       mgmt                         ipv4,ipv6    v4:no routing,     Management1
                                                 v6:no routing

    leaf5#sh ip rou vrf ansible

    VRF: ansible
    Codes: C - connected, S - static, K - kernel,
           O - OSPF, IA - OSPF inter area, E1 - OSPF external type 1,
           E2 - OSPF external type 2, N1 - OSPF NSSA external type 1,
           N2 - OSPF NSSA external type 2, B - BGP, B I - iBGP, B E - eBGP,
           R - RIP, I L1 - IS-IS level 1, I L2 - IS-IS level 2,
           O3 - OSPFv3, A B - BGP Aggregate, A O - OSPF Summary,
           NG - Nexthop Group Static Route, V - VXLAN Control Service,
           DH - DHCP client installed default route, M - Martian,
           DP - Dynamic Policy Route, L - VRF Leaked

    Gateway of last resort is not set

     C      10.34.34.0/24 is directly connected, Vlan34
     B E    10.78.78.0/24 [200/0] via VTEP 10.0.255.17 VNI 100001 router-mac 0c:34:bc:10:b0:50

    leaf5#sh run int vxlan1
    interface Vxlan1
    vxlan source-interface Loopback1
    vxlan udp-port 4789
    vxlan vrf ansible vni 100001
    vxlan learn-restrict any
    leaf5#sh run sec bgp | b vrf
    vrf ansible
    rd 10.0.250.15:1
    route-target import evpn 1:100001
    route-target export evpn 1:100001
    redistribute connected
    leaf5#

    Any advice to troubleshoot my configuration?

    BR,

    Damien.

    • David Varnum says:

      The good thing is you are receiving the route via EVPN in the ‘ansible’ VRF. Are you receiving the 10.34.34.0/24 prefix on leaf7? How are you initiating your pings? Is it from the containers or from the SVI? If from the container, are you able to ping the gateway? If from the SVI, are you sourcing from the ‘ansible’ VRF?

      Is leaf5 in an MLAG with leaf6, and is leaf7 in an MLAG with leaf8? If so, are you sharing a VTEP IP between leaf5/6 and between leaf7/8? If so, have you tried configuring the SVI on the other leaf in the pair?

  4. Dennis says:

    Have you thought about idempotency here?
    If you need to remove some config it’s not possible… unless you write something very specific to remove it.
    Correct me if I’m wrong?

    • David Varnum says:

      Hey Dennis – that is a good point and something that is actually tough to do with Ansible. This playbook is idempotent to the extent that configurations are not pushed if they already exist. However, this playbook is not intent-based. For this blog I kept it fairly simple with default uni-directional idempotency built into Ansible but you could absolutely extend the logic to make it more intent-based. The best way I’ve found to do this would be to completely replace the configuration or portions of the configuration with each play execution. Would love to hear your thoughts if you have any other ideas. Cheers!
