Ansible, Nornir, and other automation frameworks are excellent for generating and deploying configurations in an automated fashion. In Ansible, you can run a playbook, loop through hosts in your inventory file, and deploy configurations with host-specific information by leveraging host_vars and group_vars. Unfortunately, as your automation environment grows and becomes more critical, you'll find that managing inventory files and host variables across multiple tools becomes cumbersome and error-prone. Is ServiceNow correct, or Ansible? Is SolarWinds correct, or CloudVision Portal? Does my Ansible inventory include all of my Data Center switches, or have new ones been added since I last ran this playbook? Was this spreadsheet ever merged with our IPAM, and which is accurate? Why is my production switch configured for a VLAN not documented anywhere else? Inconsistencies in critical data like this are a thorn in an engineer's side: they cause outages, waste time and resources, and force duplicate, manually entered data across disparate systems. Enter NetBox.
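The usual fix is to make NetBox the single source of truth and have Ansible build its inventory from it dynamically, rather than maintaining a static hosts file. As a minimal sketch, assuming the netbox.netbox collection is installed and that the endpoint URL and token are placeholders for your own environment, an inventory source file might look like:

```yaml
# netbox_inventory.yml -- dynamic inventory sourced from NetBox
plugin: netbox.netbox.nb_inventory
api_endpoint: https://netbox.example.com      # placeholder URL
token: "{{ lookup('env', 'NETBOX_TOKEN') }}"  # read the API token from the environment
validate_certs: true
group_by:
  - device_roles
  - sites
```

Pointing a playbook at this file (`ansible-playbook -i netbox_inventory.yml site.yml`) means newly added Data Center switches show up in the inventory automatically, with no spreadsheet merge required.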
In the previous two blog posts, I covered the concepts of EVPN and shared a detailed configuration example on Arista EOS. In this blog post, I’ll be covering how to automate the deployment of EVPN in a lab environment. After deployment, I want to run validations to make sure my intent is being met. Lastly, I’ll play around with a few scripts to deploy L2 and L3 VXLAN services.
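The deployment step in a workflow like this typically starts from templated configuration. As a self-contained sketch (the hostname, ASN, loopback, and template text below are all hypothetical; in an Ansible workflow this would be a Jinja2 template rendered from host_vars rather than a Python format string):

```python
# Sketch of template-driven EVPN config generation. A plain format string
# stands in for a Jinja2 template so the example runs on its own; the
# per-leaf values are hypothetical and would normally come from host_vars.

EVPN_TEMPLATE = """\
hostname {hostname}
router bgp {asn}
   router-id {loopback}
   neighbor SPINES peer group
   address-family evpn
      neighbor SPINES activate
"""

leaf = {"hostname": "leaf1", "asn": 65101, "loopback": "10.0.250.1"}
config = EVPN_TEMPLATE.format(**leaf)
print(config)
```

Generating per-leaf configs this way keeps the intent (ASNs, loopbacks, peer groups) in data, which is exactly what the later validation scripts can check against.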
When studying this technology and demonstrating it to clients, I chose GNS3 because it's easy to visualize the topology, perform packet captures, and share the project file with co-workers using a GNS3 Server. I could have chosen Vagrant for this, but since my topology has 10 vEOS devices, I found the boot time too long (although I hear that with KVM the nodes can boot in parallel). I chose 8 leafs because it gave me the most flexibility to demonstrate VXLAN Bridging, VLAN Routing, Border Services (such as segmentation or traffic engineering), and so on. You could probably get away with fewer leafs depending on your preference. That said, my topology in GNS3 looks like this:
Traditionally, Data Centers used lots of Layer 2 links that spanned entire racks, rows, cages, floors, for as far as the eye could see. These large L2 domains were not ideal for a data center, due to slow convergence, unnecessary broadcast traffic, and administrative difficulty. To optimize the data center network, we needed to reduce the use of and reliance on Layer 2 protocols such as Spanning Tree. The challenge, however, is that Data Centers need Layer 2 stretched from rack to rack, row to row, and sometimes from data center to data center, not only for application requirements but also for fault tolerance and workload mobility. Numerous technologies have come forth to battle this limitation, such as TRILL, FabricPath, and VXLAN. Of these three, it is Virtual Extensible LAN (VXLAN) that has seen rapid adoption in modern data centers.
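Part of what made VXLAN win out is its 24-bit VXLAN Network Identifier (VNI), which lifts the ~4094-segment ceiling of 802.1Q VLANs to roughly 16 million. As a quick sketch of the 8-byte VXLAN header defined in RFC 7348 (the VNI value here is arbitrary):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348: an 8-bit flags field
    (0x08 = VNI present), 24 reserved bits, the 24-bit VNI, and a final
    reserved byte. The outer UDP destination port is 4789."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # First 32-bit word: flags byte in the top 8 bits, rest reserved.
    # Second 32-bit word: VNI in the upper 24 bits, low byte reserved.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(10100)
print(hdr.hex())  # flags byte 08, VNI 0x002774, reserved byte 00
```

This header rides inside a normal UDP/IP packet, which is why VXLAN tunnels cross any routed underlay without the underlay needing to participate in the overlay at all.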
When you hear a specific term held up as a discussion point by a wide variety of industry leaders, it's probably something of importance, and something you should know about. During the recent Networking Field Day (NFD10), I noticed a handful of terms and technologies that recurred throughout the event.
Terms like agility and elasticity make me cringe, yet they're entirely relevant in today's conversations around the design and delivery of information services, networking included.