
Arista BGP EVPN – Configuration Example

This is a follow-up to my previous article, Arista BGP EVPN Overview and Concepts. In that article, I discussed some of the terminology and behavior of EVPN and why EVPN is valuable in Data Center and Campus networks. Since then, I’ve learned how valuable it is in Service Provider networks as well, but I’ll save that for another day. In this article, I want to walk through a configuration example.

In this topology, we have 2 Spines and 8 Leafs. Each pair of Leafs will form a VXLAN Tunnel Endpoint (VTEP). We will start with the initial configuration of underlay components, such as MLAG and underlay BGP. Next, we’ll configure the EVPN overlay and VTEPs. Lastly, I’ll give example configurations of L2VXLAN (EVPN Type-2) and L3VXLAN (EVPN Type-5). While most of this configuration will function in production networks, I highly advise building it out virtually first for testing (GNS3, Vagrant, what-have-you). I won’t be covering special use cases or every possible configuration parameter, but hopefully this is a good start that gets you going toward deeper dives.

I’ll have a complete configuration workbook attached at the end of this blog.

UPDATE October 18, 2019:
Arista has changed some of the syntax in EOS in newer versions of code. I’ve updated this post to include those changes.

  1. ‘vrf definition’ is now ‘vrf instance’
  2. ‘vrf forwarding’ is now just ‘vrf’
  3. ‘peer-group’ is now ‘peer group’
  4. ‘route-target [import/export]’ is now ‘route-target [import/export] evpn’ when referenced for evpn

Configure Multi-Chassis Link Aggregation (MLAG)

  1. Configure MLAG Peering VLAN and place it in a trunk group (VLANs in trunk groups are only allowed on trunks that have the trunk group configured)
  2. Configure an SVI for the peering VLAN using a /30 or /31 peer-to-peer network that will only exist between the two switches
  3. Configure the physical interface(s) connected to the peer switch, placing them in a LAG
    Configure the Port-channel LAG as a trunk, add the trunk group, and set the spanning-tree link-type to point-to-point for fast transitioning
  4. Disable Spanning-Tree on the peering VLAN
  5. Configure MLAG:
    Set the domain ID (used to represent the MLAG pair)
    Specify the MLAG peer-link (the port-channel interface)
    Specify the point-to-point SVI used to communicate with the peer
    Specify the peer’s IP address
    Configure a virtual MAC to represent the MLAG pair (used later when deploying SVIs)
# leaf1
vlan 4090
 name mlag-peer
 trunk group mlag-peer
!
int vlan 4090
 ip add 10.0.199.254/31
 no autostate
 no shut
!
int et10
 desc mlag peer link
 channel-group 999 mode active
int po999
 desc MLAG Peer
 switchport mode trunk
 spanning-tree link-type point-to-point
 switchport trunk group mlag-peer
!
no spanning-tree vlan 4090
!
mlag configuration
 domain-id leafs
 peer-link port-channel 999
 local-interface vlan 4090
 peer-address 10.0.199.255
 no shut
!
ip virtual-router mac-address c001.cafe.babe
# leaf2
vlan 4090
 name mlag-peer
 trunk group mlag-peer
!
int vlan 4090
 ip add 10.0.199.255/31
 no autostate
 no shut
!
int et10
 desc mlag peer link
 channel-group 999 mode active
int po999
 desc MLAG Peer
 switchport mode trunk
 spanning-tree link-type point-to-point
 switchport trunk group mlag-peer
!
no spanning-tree vlan 4090
!
mlag configuration
 domain-id leafs
 peer-link port-channel 999
 local-interface vlan 4090
 peer-address 10.0.199.254
 no shut
!
ip virtual-router mac-address c001.cafe.babe
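
With both switches configured, it’s worth confirming the peering formed before moving on. A couple of quick checks (output omitted here, and exact fields vary by EOS version; show mlag config-sanity is handy for catching mismatched settings between peers):

leaf1#show mlag
leaf1#show mlag config-sanity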

Configure MLAG – Dual-Active Detection (aka Peer Keepalive)

By default, the MLAG will enter into a dual-active state when the peer-link is severed. Everything will function just fine for existing flows, but new flows may cause issues as the MAC tables are no longer synced.

To overcome this, you can configure dual-active detection. Dual-active detection requires a separate peer-address heartbeat over some other interface (e.g., the management interface).

#leaf1


int management1
vrf mgmt
ip add 172.16.0.25/24
!
mlag configuration
peer-address heartbeat 172.16.0.50 vrf mgmt
dual-primary detection delay 10 action errdisable all-interfaces
!
errdisable recovery cause mlagdualprimary

#leaf2


int management1
vrf mgmt
ip add 172.16.0.50/24
!
mlag configuration
peer-address heartbeat 172.16.0.25 vrf mgmt
dual-primary detection delay 10 action errdisable all-interfaces
!
errdisable recovery cause mlagdualprimary

Verify:

leaf1(config-mlag)#sh mlag det | i ual-p
dual-primary detection : Configured
Dual-primary detection delay : 10
Dual-primary action : errdisable-all

If the MLAG peer-link fails, a 60-second timer starts. After 60 seconds, the MLAG peer times out and both switches enter the primary state:

leaf1(config)#sh mlag det | in State
State                           :             primary
State changes                   :                   2

leaf2(config)#sh mlag det | i State
State                           :             primary
State changes :                   2

After an additional 10 seconds, dual-active detection kicks in and disables the ports on the “secondary” switch. Without that extra heartbeat, dual-active detection wouldn’t work.

This is best used in environments where everything is dual-connected to both switches. Once the peer link is restored, the interfaces will be restored as long as we have configured errdisable recovery.
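
One optional knob here: the errdisable recovery interval controls how long ports stay down before being restored. The default is fine for most environments, but it can be tuned; a minimal sketch (check the supported range on your EOS version):

errdisable recovery interval 300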

Configure Underlay Point-to-Point Interfaces

Every leaf connects to every spine. Each link will be set up as a /31 point-to-point.

# spine1

int Et1
desc leaf1
ip add 10.0.1.0/31
mtu 9214
int Et2
desc leaf2
ip add 10.0.1.2/31
mtu 9214
int Et3
desc leaf3
ip add 10.0.1.4/31
mtu 9214
int Et4
desc leaf4
ip add 10.0.1.6/31
mtu 9214
int Et5
desc leaf5
ip add 10.0.1.8/31
mtu 9214
int Et6
desc leaf6
ip add 10.0.1.10/31
mtu 9214
int Et7
desc leaf7
ip add 10.0.1.12/31
mtu 9214
int Et8
desc leaf8
ip add 10.0.1.14/31
mtu 9214
# spine2

int Et1
desc leaf1
ip add 10.0.2.0/31
mtu 9214
int Et2
desc leaf2
ip add 10.0.2.2/31
mtu 9214
int Et3
desc leaf3
ip add 10.0.2.4/31
mtu 9214
int Et4
desc leaf4
ip add 10.0.2.6/31
mtu 9214
int Et5
desc leaf5
ip add 10.0.2.8/31
mtu 9214
int Et6
desc leaf6
ip add 10.0.2.10/31
mtu 9214
int Et7
desc leaf7
ip add 10.0.2.12/31
mtu 9214
int Et8
desc leaf8
ip add 10.0.2.14/31
mtu 9214
# leaf1
int Et11
description spine1
no switchport
ip add 10.0.1.1/31
mtu 9214

int Et12
description spine2
no switchport
ip add 10.0.2.1/31
mtu 9214
# leaf2
int Et11
description spine1
no switchport
ip add 10.0.1.3/31
mtu 9214

int Et12
description spine2
no switchport
ip add 10.0.2.3/31
mtu 9214
# leaf3
int Et11
description spine1
no switchport
ip add 10.0.1.5/31
mtu 9214
!
int Et12
description spine2
no switchport
ip add 10.0.2.5/31
mtu 9214
# leaf4
int Et11
description spine1
no switchport
ip add 10.0.1.7/31
mtu 9214
!
int Et12
description spine2
no switchport
ip add 10.0.2.7/31
mtu 9214
# leaf5
int Et11
description spine1
no switchport
ip add 10.0.1.9/31
mtu 9214
!
int Et12
description spine2
no switchport
ip add 10.0.2.9/31
mtu 9214
# leaf6
int Et11
description spine1
no switchport
ip add 10.0.1.11/31
mtu 9214
!
int Et12
description spine2
no switchport
ip add 10.0.2.11/31
mtu 9214
# leaf7
int Et11
description spine1
no switchport
ip add 10.0.1.13/31
mtu 9214
!
int Et12
description spine2
no switchport
ip add 10.0.2.13/31
mtu 9214
# leaf8
int Et11
description spine1
no switchport
ip add 10.0.1.15/31
mtu 9214
!
int Et12
description spine2
no switchport
ip add 10.0.2.15/31
mtu 9214

Configure Underlay Point-to-Point Interfaces - Leaf-to-Leaf

Each Leaf will establish an iBGP relationship with its peer Leaf (e.g., leaf1 <> leaf2, leaf3 <> leaf4, etc.). This will ultimately allow EVPN to stay up in the event that the links between a single Leaf and both Spines are severed.

# leaf1
vlan 4091
name mlag-ibgp
trunk group mlag-peer
!
int vlan 4091
ip add 10.0.3.0/31
mtu 9214
!
no spanning-tree vlan 4091
# leaf2
vlan 4091
name mlag-ibgp
trunk group mlag-peer
!
int vlan 4091
ip add 10.0.3.1/31
mtu 9214
!
no spanning-tree vlan 4091
# leaf3
vlan 4091
name mlag-ibgp
trunk group mlag-peer
!
int vlan 4091
ip add 10.0.3.2/31
mtu 9214
!
no spanning-tree vlan 4091
# leaf4
vlan 4091
name mlag-ibgp
trunk group mlag-peer
!
int vlan 4091
ip add 10.0.3.3/31
mtu 9214
!
no spanning-tree vlan 4091
# leaf5
vlan 4091
name mlag-ibgp
trunk group mlag-peer
!
int vlan 4091
ip add 10.0.3.4/31
mtu 9214
!
no spanning-tree vlan 4091
# leaf6
vlan 4091
name mlag-ibgp
trunk group mlag-peer
!
int vlan 4091
ip add 10.0.3.5/31
mtu 9214
!
no spanning-tree vlan 4091
# leaf7
vlan 4091
name mlag-ibgp
trunk group mlag-peer
!
int vlan 4091
ip add 10.0.3.6/31
mtu 9214
!
no spanning-tree vlan 4091
# leaf8
vlan 4091
name mlag-ibgp
trunk group mlag-peer
!
int vlan 4091
ip add 10.0.3.7/31
mtu 9214
!
no spanning-tree vlan 4091

Configure Loopbacks for BGP Peering

A /32 Loopback interface will be configured on each leaf and spine. These Loopback IP addresses will be used as the router-id in the BGP process on each switch.

# spine1
interface loopback0
ip add 10.0.250.1/32
# spine2
interface loopback0
ip add 10.0.250.2/32
# leaf1
interface loopback0
ip add 10.0.250.11/32
# leaf2
interface loopback0
ip add 10.0.250.12/32
# leaf3
interface loopback0
ip add 10.0.250.13/32
# leaf4
interface loopback0
ip add 10.0.250.14/32
# leaf5
interface loopback0
ip add 10.0.250.15/32
# leaf6
interface loopback0
ip add 10.0.250.16/32
# leaf7
interface loopback0
ip add 10.0.250.17/32
# leaf8
interface loopback0
ip add 10.0.250.18/32

Configure BGP Process

  • Configure the BGP Process, assigning an AS number for each pair of devices
  • Set the router-id to the IP address of the Loopback0 interface configured
  • On Arista switches, the default BGP administrative distance is 200, regardless of whether the route is eBGP or iBGP. As a best practice, we will set the eBGP distance to 20 and keep the iBGP distance at 200, giving preference to eBGP routes
# spine1
router bgp 65000
router-id 10.0.250.1
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# spine2
router bgp 65000
router-id 10.0.250.2
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# leaf1
router bgp 65001
router-id 10.0.250.11
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# leaf2
router bgp 65001
router-id 10.0.250.12
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# leaf3
router bgp 65002
router-id 10.0.250.13
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# leaf4
router bgp 65002
router-id 10.0.250.14
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# leaf5
router bgp 65003
router-id 10.0.250.15
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# leaf6
router bgp 65003
router-id 10.0.250.16
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# leaf7
router bgp 65004
router-id 10.0.250.17
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200
# leaf8
router bgp 65004
router-id 10.0.250.18
no bgp default ipv4-unicast
bgp log-neighbor-changes
distance bgp 20 200 200

Configure Underlay EBGP Neighbors

  • Each Spine will peer with each Leaf over each L3 point-to-point interface
  • On the Leafs, we’re setting the maximum routes to 12,000, which is more than sufficient for this environment. We can always scale this up as needed
  • On the Leafs, we’re using a peer group called “underlay” for repeated configuration parameters that apply to both Spine adjacencies
# spine1
router bgp 65000
neighbor 10.0.1.1 remote-as 65001
neighbor 10.0.1.3 remote-as 65001
neighbor 10.0.1.5 remote-as 65002
neighbor 10.0.1.7 remote-as 65002
neighbor 10.0.1.9 remote-as 65003
neighbor 10.0.1.11 remote-as 65003
neighbor 10.0.1.13 remote-as 65004
neighbor 10.0.1.15 remote-as 65004
# spine2
router bgp 65000
neighbor 10.0.2.1 remote-as 65001
neighbor 10.0.2.3 remote-as 65001
neighbor 10.0.2.5 remote-as 65002
neighbor 10.0.2.7 remote-as 65002
neighbor 10.0.2.9 remote-as 65003
neighbor 10.0.2.11 remote-as 65003
neighbor 10.0.2.13 remote-as 65004
neighbor 10.0.2.15 remote-as 65004
# leaf1
router bgp 65001
 neighbor underlay peer group
 neighbor underlay remote-as 65000
 neighbor underlay maximum-routes 12000 warning-only
 neighbor 10.0.1.0 peer group underlay
 neighbor 10.0.2.0 peer group underlay
# leaf2
router bgp 65001
 neighbor underlay peer group
 neighbor underlay remote-as 65000
 neighbor underlay maximum-routes 12000 warning-only
 neighbor 10.0.1.2 peer group underlay
 neighbor 10.0.2.2 peer group underlay
# leaf3
router bgp 65002
 neighbor underlay peer group
 neighbor underlay remote-as 65000
 neighbor underlay maximum-routes 12000 warning-only
 neighbor 10.0.1.4 peer group underlay
 neighbor 10.0.2.4 peer group underlay
# leaf4
router bgp 65002
 neighbor underlay peer group
 neighbor underlay remote-as 65000
 neighbor underlay maximum-routes 12000 warning-only
 neighbor 10.0.1.6 peer group underlay
 neighbor 10.0.2.6 peer group underlay
# leaf5
router bgp 65003
 neighbor underlay peer group
 neighbor underlay remote-as 65000
 neighbor underlay maximum-routes 12000 warning-only
 neighbor 10.0.1.8 peer group underlay
 neighbor 10.0.2.8 peer group underlay
# leaf6
router bgp 65003
 neighbor underlay peer group
 neighbor underlay remote-as 65000
 neighbor underlay maximum-routes 12000 warning-only
 neighbor 10.0.1.10 peer group underlay
 neighbor 10.0.2.10 peer group underlay
# leaf7
router bgp 65004
 neighbor underlay peer group
 neighbor underlay remote-as 65000
 neighbor underlay maximum-routes 12000 warning-only
 neighbor 10.0.1.12 peer group underlay
 neighbor 10.0.2.12 peer group underlay
# leaf8
router bgp 65004
 neighbor underlay peer group
 neighbor underlay remote-as 65000
 neighbor underlay maximum-routes 12000 warning-only
 neighbor 10.0.1.14 peer group underlay
 neighbor 10.0.2.14 peer group underlay

Configure Underlay IBGP Neighbors

  • iBGP sessions are configured between each MLAG Leaf pair to handle certain failure scenarios. The iBGP session allows traffic to flow over the peer-link between the MLAG peers if a Leaf switch loses its links to the Spines
  • To ensure that routes are properly advertised and installed, the next-hop-self command is configured on the iBGP neighbor so that the switch sets its own local address as the next hop on routes it advertises to its iBGP peer. This is required because routes learned via eBGP carry a next hop the iBGP neighbor may not know how to reach, and a route with an unreachable next hop cannot be used to forward packets. Setting next-hop-self ensures that the routes the iBGP neighbor receives always have a reachable next hop
# leaf1
router bgp 65001
 neighbor underlay_ibgp peer group
 neighbor underlay_ibgp remote-as 65001
 neighbor underlay_ibgp maximum-routes 12000 warning-only
 neighbor underlay_ibgp next-hop-self
 neighbor 10.0.3.1 peer group underlay_ibgp
# leaf2
router bgp 65001
 neighbor underlay_ibgp peer group
 neighbor underlay_ibgp remote-as 65001
 neighbor underlay_ibgp maximum-routes 12000 warning-only
 neighbor underlay_ibgp next-hop-self
 neighbor 10.0.3.0 peer group underlay_ibgp
# leaf3
router bgp 65002
 neighbor underlay_ibgp peer group
 neighbor underlay_ibgp remote-as 65002
 neighbor underlay_ibgp maximum-routes 12000 warning-only
 neighbor underlay_ibgp next-hop-self
 neighbor 10.0.3.3 peer group underlay_ibgp
# leaf4
router bgp 65002
 neighbor underlay_ibgp peer group
 neighbor underlay_ibgp remote-as 65002
 neighbor underlay_ibgp maximum-routes 12000 warning-only
 neighbor underlay_ibgp next-hop-self
 neighbor 10.0.3.2 peer group underlay_ibgp
# leaf5
router bgp 65003
 neighbor underlay_ibgp peer group
 neighbor underlay_ibgp remote-as 65003
 neighbor underlay_ibgp maximum-routes 12000 warning-only
 neighbor underlay_ibgp next-hop-self
 neighbor 10.0.3.5 peer group underlay_ibgp
# leaf6
router bgp 65003
 neighbor underlay_ibgp peer group
 neighbor underlay_ibgp remote-as 65003
 neighbor underlay_ibgp maximum-routes 12000 warning-only
 neighbor underlay_ibgp next-hop-self
 neighbor 10.0.3.4 peer group underlay_ibgp
# leaf7
router bgp 65004
 neighbor underlay_ibgp peer group
 neighbor underlay_ibgp remote-as 65004
 neighbor underlay_ibgp maximum-routes 12000 warning-only
 neighbor underlay_ibgp next-hop-self
 neighbor 10.0.3.7 peer group underlay_ibgp
# leaf8
router bgp 65004
 neighbor underlay_ibgp peer group
 neighbor underlay_ibgp remote-as 65004
 neighbor underlay_ibgp maximum-routes 12000 warning-only
 neighbor underlay_ibgp next-hop-self
 neighbor 10.0.3.6 peer group underlay_ibgp

Activating BGP

Under the address family (AFI) for IPv4, we now:

  • Activate each of the neighbors configured
  • Advertise the Loopback0 subnet into BGP
  • Set maximum-paths to 4 with ECMP of 64. The BGP parameter “maximum-paths” controls the maximum number of parallel eBGP routes that a switch supports. The default is one route, which is insufficient for our topology, which relies on multiple equal-cost routes. Setting this parameter to 4 paths provides adequate paths between each Leaf and each Spine, and between each Spine and each Leaf pair
# spine1
router bgp 65000
address-family ipv4
neighbor 10.0.1.1 activate
neighbor 10.0.1.3 activate
neighbor 10.0.1.5 activate
neighbor 10.0.1.7 activate
neighbor 10.0.1.9 activate
neighbor 10.0.1.11 activate
neighbor 10.0.1.13 activate
neighbor 10.0.1.15 activate
network 10.0.250.1/32
maximum-paths 4 ecmp 64
# spine2
router bgp 65000
address-family ipv4
neighbor 10.0.2.1 activate
neighbor 10.0.2.3 activate
neighbor 10.0.2.5 activate
neighbor 10.0.2.7 activate
neighbor 10.0.2.9 activate
neighbor 10.0.2.11 activate
neighbor 10.0.2.13 activate
neighbor 10.0.2.15 activate
network 10.0.250.2/32
maximum-paths 4 ecmp 64
# leaf1
router bgp 65001
address-family ipv4
neighbor underlay activate
neighbor underlay_ibgp activate
network 10.0.250.11/32
maximum-paths 4 ecmp 64
# leaf2
router bgp 65001
address-family ipv4
neighbor underlay activate
neighbor underlay_ibgp activate
network 10.0.250.12/32
maximum-paths 4 ecmp 64
# leaf3
router bgp 65002
address-family ipv4
neighbor underlay activate
neighbor underlay_ibgp activate
network 10.0.250.13/32
maximum-paths 4 ecmp 64
# leaf4
router bgp 65002
address-family ipv4
neighbor underlay activate
neighbor underlay_ibgp activate
network 10.0.250.14/32
maximum-paths 4 ecmp 64
# leaf5
router bgp 65003
address-family ipv4
neighbor underlay activate
neighbor underlay_ibgp activate
network 10.0.250.15/32
maximum-paths 4 ecmp 64
# leaf6
router bgp 65003
address-family ipv4
neighbor underlay activate
neighbor underlay_ibgp activate
network 10.0.250.16/32
maximum-paths 4 ecmp 64
# leaf7
router bgp 65004
address-family ipv4
neighbor underlay activate
neighbor underlay_ibgp activate
network 10.0.250.17/32
maximum-paths 4 ecmp 64
# leaf8
router bgp 65004
address-family ipv4
neighbor underlay activate
neighbor underlay_ibgp activate
network 10.0.250.18/32
maximum-paths 4 ecmp 64
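
Before layering the overlay on top, sanity-check the underlay. On any leaf, both spine sessions and the iBGP session to the MLAG peer should be Established, and the other switches’ loopbacks should show up as ECMP routes. Commands only here (your output will vary):

leaf1#show ip bgp summary
leaf1#show ip route bgp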

Enable EVPN Capability

On each switch, run the following command to enable the EVPN capability (note that changing the routing protocols model requires a reload of the switch to take effect):

service routing protocols model multi-agent

Since we will be performing VXLAN Routing, certain switch models require VXLAN Routing to be enabled in the TCAM profile. Expect this on the 7280s and on other platforms with the Jericho chipset:

hardware tcam profile vxlan-routing 

Certain switch models require recirculation for L3 VXLAN to function. The recirculation creates an array of internal connections so packets can do multiple passes through the forwarding pipeline at high speeds. Expect to enable this on single-chip T2 platforms such as 7050QX, 7050TX, and 7050SX. (https://eos.arista.com/eos-4-15-2f/recirculation-channel/)

int x/x
 channel-group recirculation 1
 traffic-loopback source system device mac
int recirc1
 switchport recirculation features vxlan

Configure BGP EVPN Overlay - Leaf-to-Spine

On each Leaf, configure a peer group with:

  • Neighbor to the Loopback IP address of each Spine using the Loopback0 interface as the source
  • Configure ebgp-multihop 3 to account for the possibility of a Leaf needing to establish an EVPN BGP adjacency with a Spine through its peer link
  • The send-community extended command is required for attributes to be sent between EVPN peers
  • Activate the evpn peer group
# leaf1
router bgp 65001
 neighbor evpn peer group
 neighbor evpn remote-as 65000
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.1 peer group evpn
 neighbor 10.0.250.2 peer group evpn
 !
 address-family evpn
  neighbor evpn activate
# leaf2
router bgp 65001
 neighbor evpn peer group
 neighbor evpn remote-as 65000
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.1 peer group evpn
 neighbor 10.0.250.2 peer group evpn
 !
 address-family evpn
  neighbor evpn activate
# leaf3
router bgp 65002
 neighbor evpn peer group
 neighbor evpn remote-as 65000
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.1 peer group evpn
 neighbor 10.0.250.2 peer group evpn
 !
 address-family evpn
  neighbor evpn activate
# leaf4
router bgp 65002
 neighbor evpn peer group
 neighbor evpn remote-as 65000
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.1 peer group evpn
 neighbor 10.0.250.2 peer group evpn
 !
 address-family evpn
  neighbor evpn activate
# leaf5
router bgp 65003
 neighbor evpn peer group
 neighbor evpn remote-as 65000
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.1 peer group evpn
 neighbor 10.0.250.2 peer group evpn
 !
 address-family evpn
  neighbor evpn activate
# leaf6
router bgp 65003
 neighbor evpn peer group
 neighbor evpn remote-as 65000
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.1 peer group evpn
 neighbor 10.0.250.2 peer group evpn
 !
 address-family evpn
  neighbor evpn activate
# leaf7
router bgp 65004
 neighbor evpn peer group
 neighbor evpn remote-as 65000
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.1 peer group evpn
 neighbor 10.0.250.2 peer group evpn
 !
 address-family evpn
  neighbor evpn activate
# leaf8
router bgp 65004
 neighbor evpn peer group
 neighbor evpn remote-as 65000
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.1 peer group evpn
 neighbor 10.0.250.2 peer group evpn
 !
 address-family evpn
  neighbor evpn activate

Configure BGP EVPN Overlay - Spines-to-Leafs

On each Spine, configure a peer group with:

  • Neighbor to the Loopback IP address of each Leaf using the Loopback0 interface as the source
  • Configure ebgp-multihop 3 to account for the possibility of a Leaf needing to establish an EVPN BGP adjacency with a Spine through its peer link
  • The send-community extended command is required for attributes to be sent between EVPN peers
  • By default, an eBGP speaker changes the next hop to itself when advertising learned routes to eBGP neighbors. This is normal behavior in most networks and is exactly what we want in the Underlay fabric. In the EVPN Overlay, however, route recursion works without changing the next hop (the Leafs already know how to reach each other’s VTEP addresses via the Underlay). Setting next-hop-unchanged on the Spines’ Leaf-facing peerings preserves the original VTEP next hop and yields optimal routing tables
  • Activate the evpn peer group
# spine1

router bgp 65000
 neighbor evpn peer group
 neighbor evpn next-hop-unchanged
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.11 peer group evpn
 neighbor 10.0.250.11 remote-as 65001
 neighbor 10.0.250.12 peer group evpn
 neighbor 10.0.250.12 remote-as 65001
 neighbor 10.0.250.13 peer group evpn
 neighbor 10.0.250.13 remote-as 65002
 neighbor 10.0.250.14 peer group evpn
 neighbor 10.0.250.14 remote-as 65002
 neighbor 10.0.250.15 peer group evpn
 neighbor 10.0.250.15 remote-as 65003
 neighbor 10.0.250.16 peer group evpn
 neighbor 10.0.250.16 remote-as 65003
 neighbor 10.0.250.17 peer group evpn
 neighbor 10.0.250.17 remote-as 65004
 neighbor 10.0.250.18 peer group evpn
 neighbor 10.0.250.18 remote-as 65004
 !
 address-family evpn
  neighbor evpn activate
# spine2

router bgp 65000
 neighbor evpn peer group
 neighbor evpn next-hop-unchanged
 neighbor evpn update-source Loopback0
 neighbor evpn ebgp-multihop 3
 neighbor evpn send-community extended
 neighbor evpn maximum-routes 12000 warning-only
 neighbor 10.0.250.11 peer group evpn
 neighbor 10.0.250.11 remote-as 65001
 neighbor 10.0.250.12 peer group evpn
 neighbor 10.0.250.12 remote-as 65001
 neighbor 10.0.250.13 peer group evpn
 neighbor 10.0.250.13 remote-as 65002
 neighbor 10.0.250.14 peer group evpn
 neighbor 10.0.250.14 remote-as 65002
 neighbor 10.0.250.15 peer group evpn
 neighbor 10.0.250.15 remote-as 65003
 neighbor 10.0.250.16 peer group evpn
 neighbor 10.0.250.16 remote-as 65003
 neighbor 10.0.250.17 peer group evpn
 neighbor 10.0.250.17 remote-as 65004
 neighbor 10.0.250.18 peer group evpn
 neighbor 10.0.250.18 remote-as 65004
 !
 address-family evpn
  neighbor evpn activate

Validating EVPN Neighbors

At this point, you should have an EVPN neighbor relationship between Leafs and Spines. The EVPN Network Virtualization Overlay (NVO) is up and ready to transport VXLAN traffic.

EVPN Summary - Spine perspective

spine1#show bgp evpn summary 
BGP summary information for VRF default
Router identifier 10.0.250.1, local AS number 65000
Neighbor Status Codes: m - Under maintenance
Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State PfxRcd PfxAcc
10.0.250.11 4 65001 8 8 0 0 00:04:09 Estab 0 0
10.0.250.12 4 65001 8 8 0 0 00:03:54 Estab 0 0
10.0.250.13 4 65002 8 8 0 0 00:04:08 Estab 0 0
10.0.250.14 4 65002 8 8 0 0 00:04:09 Estab 0 0
10.0.250.15 4 65003 8 8 0 0 00:04:07 Estab 0 0
10.0.250.16 4 65003 9 8 0 0 00:04:12 Estab 0 0
10.0.250.17 4 65004 8 8 0 0 00:03:57 Estab 0 0
10.0.250.18 4 65004 8 8 0 0 00:03:57 Estab 0 0
spine2#show bgp evpn summary
BGP summary information for VRF default
Router identifier 10.0.250.2, local AS number 65000
Neighbor Status Codes: m - Under maintenance
Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State PfxRcd PfxAcc
10.0.250.11 4 65001 8 8 0 0 00:04:15 Estab 0 0
10.0.250.12 4 65001 8 8 0 0 00:04:04 Estab 0 0
10.0.250.13 4 65002 8 8 0 0 00:04:15 Estab 0 0
10.0.250.14 4 65002 9 8 0 0 00:04:15 Estab 0 0
10.0.250.15 4 65003 8 8 0 0 00:04:12 Estab 0 0
10.0.250.16 4 65003 9 8 0 0 00:04:16 Estab 0 0
10.0.250.17 4 65004 8 8 0 0 00:04:01 Estab 0 0
10.0.250.18 4 65004 8 8 0 0 00:04:01 Estab 0 0

EVPN Summary - Leaf Perspective

I’m showing just a couple of examples here, but you will see the same output on each leaf – an EVPN neighbor relationship to each Spine

leaf1#show bgp evpn summary
BGP summary information for VRF default
Router identifier 10.0.250.11, local AS number 65001
Neighbor Status Codes: m - Under maintenance
Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State PfxRcd PfxAcc
10.0.250.1 4 65000 8 9 0 0 00:04:20 Estab 0 0
10.0.250.2 4 65000 8 8 0 0 00:04:20 Estab 0 0
leaf2#show bgp evpn summar
BGP summary information for VRF default
Router identifier 10.0.250.12, local AS number 65001
Neighbor Status Codes: m - Under maintenance
Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State PfxRcd PfxAcc
10.0.250.1 4 65000 9 8 0 0 00:04:13 Estab 0 0
10.0.250.2 4 65000 9 8 0 0 00:04:16 Estab 0 0

Configure VXLAN Tunnel Endpoints (VTEP)

Each Leaf-Pair will form a single VTEP.

  • Configure a Loopback interface and IP that will be shared among the VTEP Leaf-pair
  • Advertise the Loopback into BGP
  • Configure the VTEP interface which will be used to communicate between Leafs in the EVPN fabric
# leaf1
int loop1
ip add 10.0.255.11/32
!
router bgp 65001
address-family ipv4
network 10.0.255.11/32
!
int vxlan1
vxlan source-int lo1
vxlan udp-port 4789
vxlan learn-restrict any
# leaf2
int loop1
ip add 10.0.255.11/32
!
router bgp 65001
address-family ipv4
network 10.0.255.11/32
!
int vxlan1
vxlan source-int lo1
vxlan udp-port 4789
vxlan learn-restrict any
# leaf3
int loop1
ip add 10.0.255.12/32
!
router bgp 65002
address-family ipv4
network 10.0.255.12/32
!
int vxlan1
vxlan source-int lo1
vxlan udp-port 4789
vxlan learn-restrict any
# leaf4
int loop1
ip add 10.0.255.12/32
!
router bgp 65002
address-family ipv4
network 10.0.255.12/32
!
int vxlan1
vxlan source-int lo1
vxlan udp-port 4789
vxlan learn-restrict any
# leaf5
int loop1
ip add 10.0.255.13/32
!
router bgp 65003
address-family ipv4
network 10.0.255.13/32
!
int vxlan1
vxlan source-int lo1
vxlan udp-port 4789
vxlan learn-restrict any
# leaf6
int loop1
ip add 10.0.255.13/32
!
router bgp 65003
address-family ipv4
network 10.0.255.13/32
!
int vxlan1
vxlan source-int lo1
vxlan udp-port 4789
vxlan learn-restrict any
# leaf7
int loop1
ip add 10.0.255.14/32
!
router bgp 65004
address-family ipv4
network 10.0.255.14/32
!
int vxlan1
vxlan source-int lo1
vxlan udp-port 4789
vxlan learn-restrict any
# leaf8
int loop1
ip add 10.0.255.14/32
!
router bgp 65004
address-family ipv4
network 10.0.255.14/32
!
int vxlan1
vxlan source-int lo1
vxlan udp-port 4789
vxlan learn-restrict any

Example - Transporting L2VXLAN with EVPN

The use case here is extending Layer 2 across a data center or campus network via EVPN’s network virtualization overlay. For example,

  • We can configure VLAN 40
  • Map it to some arbitrary VXLAN Network Identifier (VNI) 110040
  • Apply BGP Route Distinguishers and route targets (NOTE: I’m statically configuring the RD below, but you can simply specify “auto” in the latest versions of EOS – yay!)
  • Redistribute learned MAC addresses into the overlay
# leaf1
vlan 40
 name test-l2-vxlan
!
int vxlan1
 vxlan vlan 40 vni 110040
!
router bgp 65001
 !
 vlan 40
  rd 65001:110040
  route-target both evpn 40:110040
  redistribute learned
# leaf2
vlan 40
 name test-l2-vxlan
!
int vxlan1
 vxlan vlan 40 vni 110040
!
router bgp 65001
 !
 vlan 40
  rd 65001:110040
  route-target both evpn 40:110040
  redistribute learned
# leaf5
vlan 40
 name test-l2-vxlan
!
int vxlan1
 vxlan vlan 40 vni 110040
!
router bgp 65003
 !
 vlan 40
  rd 65003:110040
  route-target both evpn 40:110040
  redistribute learned
# leaf6
vlan 40
 name test-l2-vxlan
!
int vxlan1
 vxlan vlan 40 vni 110040
!
router bgp 65003
 !
 vlan 40
  rd 65003:110040
  route-target both evpn 40:110040
  redistribute learned

Now let’s attach a host to VLAN 40 on VTEP1 and VTEP3. A sketch of the host-facing configuration follows.
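
The host-facing ports aren’t the focus of this post, so treat the following as a sketch with assumed interface and MLAG numbers: a host dual-homed to leaf1 and leaf2 through an MLAG port-channel (the show outputs below reference Po1 on leaf1). The same idea applies on leaf5 and leaf6 for VTEP3.

# leaf1 and leaf2 (hypothetical host-facing port)
int et1
 desc host1
 channel-group 1 mode active
int po1
 desc host1
 switchport access vlan 40
 mlag 1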

Validation

A few helpful commands to validate operations

  • “show interface vxlan1” for a quick glance at the VTEP
  • “show vxlan vtep” will show remote vteps
  • “show vxlan address-table” will show MACs learned via VXLAN
  • “show bgp evpn route-type mac-ip” will show the Type-2 EVPN routes, which are the MAC addresses transported over the IP fabric as L2 VXLAN packets
leaf1#show interface vxlan1
Vxlan1 is up, line protocol is up (connected)
Hardware is Vxlan
Source interface is Loopback1 and is active with 10.0.255.11
Replication/Flood Mode is headend with Flood List Source: EVPN
Remote MAC learning via EVPN
VNI mapping to VLANs
Static VLAN to VNI mapping is
[40, 110040]
Note: All Dynamic VLANs used by VCS are internal VLANs.
Use 'show vxlan vni' for details.
Static VRF to VNI mapping is not configured
Headend replication flood vtep list is:
40 10.0.255.13
VTEP address mask is None

10.0.255.13 is the Loopback1 address that represents remote VTEP3 (leaf5 and leaf6)

leaf1#show vxlan vtep
Remote VTEPS for Vxlan1:
10.0.255.13
Total number of remote VTEPS: 1

Notice we are learning the MAC address of our locally attached host via interface Po1 and the MAC address of the remote host via interface Vxlan1.

leaf1#show vxlan address-table 
Vxlan Mac Address Table
----------------------------------------------------------------------
VLAN Mac Address Type Prt VTEP Moves Last Move
---- ----------- ---- --- ---- ----- ---------
40 4e54.1df2.d1ef EVPN Vx1 10.0.255.13 1 0:00:27 ago
Total Remote Mac Addresses for this criterion: 1
!
leaf1#show mac address-table vlan 40
Mac Address Table
------------------------------------------------------------------
Vlan Mac Address Type Ports Moves Last Move
---- ----------- ---- ----- ----- ---------
40 4e08.b3a1.09d4 DYNAMIC Po1 1 0:00:38 ago
40 4e54.1df2.d1ef DYNAMIC Vx1 1 0:00:37 ago
Total Mac Addresses for this criterion: 2

Below we can check out the EVPN routing table. Notice that we have two routes for the MAC address ending in d1ef. These are ECMP paths via each Spine to destination 10.0.255.13, which is VTEP3 in our topology (leaf5 and leaf6). The Route Distinguisher is attached to each route to signify its uniqueness in the fabric.

The first entry, MAC 4e08.b3a1.09d4, is the locally attached host.

leaf1#show bgp evpn route-type mac-ip 
BGP routing table information for VRF default
Router identifier 10.0.250.11, local AS number 65001
Route status codes: s - suppressed, * - valid, > - active, # - not installed, E - ECMP head, e - ECMP
S - Stale, c - Contributing to ECMP, b - backup
% - Pending BGP convergence
Origin codes: i - IGP, e - EGP, ? - incomplete
AS Path Attributes: Or-ID - Originator ID, C-LST - Cluster List, LL Nexthop - Link Local Nexthop
Network Next Hop Metric LocPref Weight Path
* > RD: 65001:110040 mac-ip 4e08.b3a1.09d4
- - - 0 i
* >Ec RD: 65003:110040 mac-ip 4e54.1df2.d1ef
10.0.255.13 - 100 0 65000 65003 i
* ec RD: 65003:110040 mac-ip 4e54.1df2.d1ef
10.0.255.13 - 100 0 65000 65003 i

Example - Transporting L3VXLAN with EVPN

In this example, we will isolate traffic into a VRF and transport it over the EVPN network virtualization overlay using EVPN Type-5 routes. Rather than configuring per-VRF peerings all over the place, we only need to configure the VRFs, and the fabric handles the isolation for us without countless BGP peerings.

  • Configure a VRF (I named it “gold”)
  • Map the VRF to a VNI (I used 100001)
  • Configure the VRF under BGP
# leaf3
vrf instance gold
!
ip routing vrf gold
!
int vxlan1
  vxlan vrf gold vni 100001
!
router bgp 65002
 vrf gold
    rd 10.0.250.13:1
    route-target both evpn 1:100001
    redistribute connected
# leaf4
vrf instance gold
!
ip routing vrf gold
!
int vxlan1
  vxlan vrf gold vni 100001
!
router bgp 65002
 vrf gold
    rd 10.0.250.14:1
    route-target both evpn 1:100001
    redistribute connected
# leaf7
vrf instance gold
!
ip routing vrf gold
!
int vxlan1
  vxlan vrf gold vni 100001
!
router bgp 65004
 vrf gold
    rd 10.0.250.17:1
    route-target both evpn 1:100001
    redistribute connected
# leaf8
vrf instance gold
!
ip routing vrf gold
!
int vxlan1
  vxlan vrf gold vni 100001
!
router bgp 65004
 vrf gold
    rd 10.0.250.18:1
    route-target both evpn 1:100001
    redistribute connected

I’ll configure a couple of test networks:

  • VTEP2 will get VLAN 34 (10.34.34.0/24)
  • VTEP4 will get VLAN 78 (10.78.78.0/24)
  • The networks will reside inside VRF “gold”
# leaf3
vlan 34
int vlan 34
 vrf gold
 ip address 10.34.34.2/24
 ip virtual-router address 10.34.34.1
# leaf4
vlan 34
int vlan 34
 vrf gold
 ip address 10.34.34.3/24
 ip virtual-router address 10.34.34.1
# leaf7
vlan 78
int vlan 78
 vrf gold
 ip address 10.78.78.2/24
 ip virtual-router address 10.78.78.1
# leaf8
vlan 78
int vlan 78
 vrf gold
 ip address 10.78.78.3/24
 ip virtual-router address 10.78.78.1

Validation

A few helpful commands to validate operations

  • “show vxlan vtep” will show remote vteps
  • “show bgp evpn route-type ip-prefix ipv4” will show the Type-5 EVPN routes, which are the VRFs we’re transporting across the EVPN fabric
  • “show ip route vrf gold” should show us the routes learned from the remote VTEP

Notice that from VTEP2 (leaf4) we are learning 4 possible ECMP paths to prefix 10.78.78.0/24, via each Spine and to each leaf in VTEP4 (leaf7 and leaf8):
leaf4#show bgp evpn route-type ip-prefix ipv4
BGP routing table information for VRF default
Router identifier 10.0.250.14, local AS number 65002
Route status codes: s - suppressed, * - valid, > - active, # - not installed, E - ECMP head, e - ECMP
S - Stale, c - Contributing to ECMP, b - backup
% - Pending BGP convergence
Origin codes: i - IGP, e - EGP, ? - incomplete
AS Path Attributes: Or-ID - Originator ID, C-LST - Cluster List, LL Nexthop - Link Local Nexthop
Network Next Hop Metric LocPref Weight Path
* > RD: 10.0.250.14:1 ip-prefix 10.34.34.0/24
- - - 0 i
* >Ec RD: 10.0.250.17:1 ip-prefix 10.78.78.0/24
10.0.255.14 - 100 0 65000 65004 i
* ec RD: 10.0.250.17:1 ip-prefix 10.78.78.0/24
10.0.255.14 - 100 0 65000 65004 i
* >Ec RD: 10.0.250.18:1 ip-prefix 10.78.78.0/24
10.0.255.14 - 100 0 65000 65004 i
* ec RD: 10.0.250.18:1 ip-prefix 10.78.78.0/24
10.0.255.14 - 100 0 65000 65004 i

10.0.255.14 is the Loopback1 address that represents remote VTEP4 (leaf7 and leaf8)

leaf4#show vxlan vtep
Remote VTEPS for Vxlan1:
10.0.255.14
Total number of remote VTEPS: 1

The routing table for VRF gold shows the prefix learned from the remote VTEP. You will see multiple entries depending on how many Spines you have and how ECMP is configured.

The “router-mac” address you see attached to the prefix is the advertising leaf’s burned-in (system) MAC address, which is unique per leaf; you can cross-reference it with the system MAC address shown in show version on each leaf.

leaf4#show ip route vrf gold
VRF: gold
Gateway of last resort is not set
C 10.34.34.0/24 is directly connected, Vlan34
B E 10.78.78.0/24 [200/0] via VTEP 10.0.255.14 VNI 100001 router-mac 0c:56:33:db:cd:63
via VTEP 10.0.255.14 VNI 100001 router-mac 0c:56:33:50:8b:4a

We have reachability:

leaf4#ping vrf gold 10.78.78.78
PING 10.78.78.78 (10.78.78.78) 72(100) bytes of data.
80 bytes from 10.78.78.78: icmp_seq=1 ttl=62 time=183 ms
80 bytes from 10.78.78.78: icmp_seq=2 ttl=62 time=372 ms
80 bytes from 10.78.78.78: icmp_seq=3 ttl=62 time=531 ms
80 bytes from 10.78.78.78: icmp_seq=4 ttl=62 time=535 ms
80 bytes from 10.78.78.78: icmp_seq=5 ttl=62 time=536 ms
--- 10.78.78.78 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 48ms
rtt min/avg/max/mdev = 183.905/431.890/536.475/139.034 ms, pipe 5, ipg/ewma 12.117/315.449 ms

It’s important to recall that the Spines are simply routing the encapsulated traffic. The Spines are unaware of any VRFs or MACs being learned via VXLAN.

spine1#show vrf
Maximum number of vrfs allowed: 256
Vrf RD Protocols State Interfaces
---------- --------------- --------------- -------------------- ----------
mgmt ipv4,ipv6 v4:no routing, Ethernet12
v6:no routing
spine1#
spine1#show ip route vrf gold
% IP Routing table for VRF gold does not exist.

Example - Border BGP Peering into EVPN-enabled VRF

In this example we’ll attach a router to VTEP4. The router will peer via BGP with VRF gold. The router will inject a default route which will be transported throughout the EVPN fabric in VRF gold.

# leaf7
vlan 900
interface Vlan900
   vrf gold
   ip address 10.90.90.2/29
!
router bgp 65004
   vrf gold
      rd 10.0.250.17:1
      route-target import evpn 1:100001
      route-target export evpn 1:100001
      neighbor 10.90.90.1 remote-as 64999 
      redistribute connected
      !
      address-family ipv4
         neighbor 10.90.90.1 activate
# leaf8
vlan 900
interface Vlan900
   vrf gold
   ip address 10.90.90.3/29
!
router bgp 65004
   vrf gold
      rd 10.0.250.18:1
      route-target import evpn 1:100001
      route-target export evpn 1:100001
      neighbor 10.90.90.1 remote-as 64999 
      redistribute connected
      !
      address-family ipv4
         neighbor 10.90.90.1 activate

VTEP4 has a neighbor established in VRF “gold” at 10.90.90.1 and is learning a single prefix, 0.0.0.0/0:

leaf7#show ip bgp summary vrf gold
BGP summary information for VRF gold
Router identifier 10.78.78.2, local AS number 65004
Neighbor Status Codes: m - Under maintenance
Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State PfxRcd PfxAcc
10.90.90.1 4 64999 24 25 0 0 00:16:55 Estab 1 1
leaf7#
leaf7#show ip route vrf gold
VRF: gold
Gateway of last resort:
B E 0.0.0.0/0 [20/0] via 10.90.90.1, Vlan900

B E 10.34.34.0/24 [200/0] via VTEP 10.0.255.12 VNI 100001 router-mac 0c:56:33:2e:d4:f0
via VTEP 10.0.255.12 VNI 100001 router-mac 0c:56:33:57:2e:bd
C 10.78.78.0/24 is directly connected, Vlan78
C 10.90.90.0/29 is directly connected, Vlan900

The 0.0.0.0/0 route is in turn advertised to any other VTEP configured for VNI 100001, which maps to VRF “gold”:

leaf3#show ip route vrf gold 0.0.0.0
VRF: gold
Gateway of last resort:
B E 0.0.0.0/0 [200/0] via VTEP 10.0.255.14 VNI 100001 router-mac 0c:56:33:db:cd:63
via VTEP 10.0.255.14 VNI 100001 router-mac 0c:56:33:50:8b:4a

Conclusion

This blog covers an example configuration of EVPN on Arista EOS. Some topics I didn’t get to discuss include BFD, IPv6, general troubleshooting, and more. Maybe another day. I hope you enjoyed it and visit my blog again!

If interested, here is a config book you can use that includes the same configurations we covered in the blog: evpn-demo-config-book. If you download it, please reference the updates listed at the top of this post from October 18, 2019 and find/replace the syntax that changed in newer versions of EOS.
