Using Ansible and NetBox to deploy EVPN on Arista

Ansible, Nornir, and other automation frameworks are excellent for generating and deploying configurations in an automated fashion. In Ansible, you can run a playbook, loop through hosts in your inventory file, and deploy configurations with host-specific information by leveraging host_vars and group_vars. Unfortunately, as your automation environment starts to grow and become more critical, you’ll find that managing inventory files and host variables in multiple tools becomes cumbersome and error-prone. Is ServiceNow correct or Ansible? Is SolarWinds correct or Cloud Vision Portal? Does my Ansible inventory include all of my Data Center switches or did we add any new ones since I last executed this playbook? Was this spreadsheet ever merged with our IPAM, and which is accurate? Why is my production switch configured for a VLAN not documented anywhere else? Inconsistencies in critical data like this are a thorn in an engineer’s side – causing issues, wasting time and resources, and forcing the same data to be entered manually across disparate systems. Enter NetBox.

NetBox is a “Source of Truth” application for network information that has an API for interaction. In NetBox I can store device information, IP addresses, VLANs, interfaces, custom attributes, and much more, and access all of this data via the API from anywhere. NetBox is essentially a fancy database with a GUI and API, made by Network Engineers with Automation in mind. Bingo!

What is a Source of Truth?

A Source of Truth (SoT) is an authoritative source for a particular set of data that always wins – meaning that if conflicting data exists in two or more systems, the SoT is the only one that is accurate. Given this level of authority, it is up to the users of this data to make sure that the SoT is updated and accurate. It’s important that other systems are able to trust the SoT. Numerous SoTs can exist in an environment, but there should be only one per domain. For example, you may have a SoT for your compute environment separate from the one for your network environment. From a NetBox perspective, it can act as your SoT for all types of data, including:

  • Sites
  • Racks
  • Devices
  • Interfaces
  • IP Addresses
  • Virtualization
  • Circuits
  • Power
  • And more…

Refer to the fantastic NetBox documentation for details about data models, installation, and much more. It wouldn’t be worth repeating here since it’s so well documented by the creators and maintainers.

Getting Data into NetBox

Ok – NetBox sounds great – and I want to try it out with my Arista EVPN topology that I’ve detailed in the past. Here is the sample topology:

Check out those blog posts here if interested:

  1. Arista BGP EVPN – Overview and Concepts
  2. Arista BGP EVPN – Configuration Example
  3. Arista BGP EVPN – Ansible Lab

Picking up from the last blog, my intent is to eliminate reliance on static host_vars files in Ansible and remove the need to maintain a hosts file. Instead, this data should be pulled from and/or added to NetBox programmatically.

All variables in the Jinja template will be pulled from NetBox – nothing stored in Ansible. With the data no longer chained to Ansible, I can access the NetBox data from other tools such as ServiceNow, InfoBlox, and SolarWinds, and write Python scripts that pull data dynamically from NetBox rather than having to populate that data manually.

After installation, the first thing I needed to do was get data into NetBox. The NetBox documentation does a great job demonstrating the traditional Web UI-based way of adding information into NetBox – I suggest you go through that the first time to make sure you understand NetBox and how objects are stored. After my bare-bones installation was complete, I performed the following actions manually using the GUI.

  • Add Manufacturers
  • Add Platforms
  • Add Device Types
  • Add RIRs
  • Add Aggregates

Create a manufacturer called ‘Arista’

Create a platform called ‘eos’ and place under the ‘Arista’ manufacturer. This will be used by Ansible and will be mapped to the ansible_network_os.

Create a device-type specific to the type of device that will be configured in NetBox. Here I’m using a vEOS template with 12 Ethernet interfaces and 1 Management interface.

Create a generic regional internet registry for RFC-1918 addresses. You’ll want to add any other public RIRs for your specific environment here.

Create Aggregate prefixes and map to respective RIR

Overview of NetBox URL and API Structure

NetBox’s URL structure is: http://<netbox-url>/<app>/<model>
NetBox’s API structure is: http://<netbox-url>/api/<app>/<model>

<app> refers to the application
<model> refers to the model within that application

For example, to access the sites configured within NetBox, you’d go here: http://<netbox-url>/dcim/sites/

A helpful way to see which applications are available is to go to the API root: http://<netbox-url>/api/

Here you can see the following options:

  • circuits
  • dcim
  • extras
  • ipam
  • secrets
  • tenancy
  • virtualization

You can then click into one of these to see what the options are for the data models within that application. After clicking into “dcim” I’m presented with the following models:

Clicking further into an endpoint gives you details about that endpoint object. For example, clicking into ‘manufacturers’ returns the following JSON response, which tells me the number of entries (count) along with each entry’s field names and values.

Keep this in mind when building and troubleshooting your API integrations as you’ll need to pay close attention to available fields and how those fields are formatted (e.g., are they dictionaries, nested dictionaries, lists, etc.).

Another very helpful source for API information is the Swagger API navigator which can be accessed by clicking the “API” button at the bottom-right of the NetBox Web UI:

In here you can view every API endpoint, valid request types (GET, PUT, POST, PATCH, DELETE) for those endpoints, and the required and optional fields within those requests. This is super helpful when building your requests as you’ll need to know what fields are required. For example, I can click on the POST for /dcim/devices/ and this will tell me what fields are required in order for me to create an object in NetBox using this API call.
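For illustration, a minimal request body for that POST might look like the following, where the integer values are the NetBox ids of existing device-type, role, site, and status objects (the values here are placeholders, and the exact required fields can vary by NetBox version):

```json
{
    "name": "leaf1",
    "device_type": 1,
    "device_role": 2,
    "site": 1,
    "status": 1
}
```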

You can even test right from here if you want by clicking the “Try it out” button.

All REST API operations are performed over HTTP, meaning you can make standard raw HTTP calls with curl or the Python requests library. Alternatively, you can use wrappers for the API such as pynetbox (for Python) and go-netbox (for Go).
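As a quick sketch of a raw call (the host and token are placeholders for your own environment), the /api/<app>/<model>/ scheme translates directly into code. The helper names here are mine:

```python
import requests

def netbox_list_url(netbox_url, app, model):
    """Compose a list-endpoint URL following the /api/<app>/<model>/ scheme."""
    return f"{netbox_url.rstrip('/')}/api/{app}/{model}/"

def netbox_get(netbox_url, token, app, model):
    """GET a list endpoint. NetBox paginates, so the body carries
    'count' (total entries) and 'results' (the objects themselves)."""
    resp = requests.get(
        netbox_list_url(netbox_url, app, model),
        headers={"Authorization": f"Token {token}", "Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()

# e.g. data = netbox_get("http://netbox.example.local", "<token>", "dcim", "sites")
#      print(data["count"], [s["name"] for s in data["results"]])
```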

Loading NetBox with data using pynetbox

At this stage we don’t have any data at all in NetBox aside from the basic data mentioned earlier such as Manufacturer and Device Type. Next I’m going to use the NetBox API to load NetBox up with data necessary for my EVPN demo environment. To use the API, I first need to get a token from NetBox. You do that by going to the top-right of NetBox and clicking ‘Admin’ then going down to ‘API Tokens’ and clicking ‘+Add a token’. After generating one, you get a token string that can be used for authenticating against the API.

Leveraging pynetbox, a Python wrapper for the NetBox API, I wrote a simple Python script that takes a 2-Spine, 8-Leaf topology and auto-populates NetBox with the data I need for testing. You’ll need to install pynetbox on your system, preferably in a virtual environment, using pip install pynetbox.

Refer to the pynetbox usage documentation here: https://pynetbox.readthedocs.io/en/latest/
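Before diving into the full script, the /31 point-to-point numbering it generates is worth seeing in isolation. As a sketch (the helper name is mine): each spine owns 10.0.<spine>.0/24, each leaf/spine pair consumes one /31 out of that block, and the spine takes the even address while the leaf takes the odd one:

```python
def p2p_pair(spine_id, leaf_id, p2p_range="10.0."):
    """Return (spine_ip, leaf_ip) for the /31 linking a spine to a leaf.

    Leaf i consumes host addresses 2*(i-1) and 2*(i-1)+1
    within the spine's 10.0.<spine_id>.0/24 block.
    """
    n = (leaf_id - 1) * 2
    return f"{p2p_range}{spine_id}.{n}", f"{p2p_range}{spine_id}.{n + 1}"

# spine2 <-> leaf3 lands in the third /31 of 10.0.2.0/24:
print(p2p_pair(2, 3))  # ('10.0.2.4', '10.0.2.5')
```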

Here is the script I used to get my initial data into NetBox from a bare-bones installation aside from the data I added manually mentioned earlier. NetBox tasks start happening around line 80.

import pynetbox
from pprint import pprint

def search_dict_list(a_val, b_val, c_list):
    for line in c_list:
        if line['hostname'] == a_val:
            return line[b_val]

# Define Parent Variables that will be used to auto-generate and assign values
num_spines = 2
num_leafs = 8
p2p_range = '10.0.'
lo0_range = '10.0.250.'
lo1_range = '10.0.255.'
ibgp_range = '10.0.3.'
mgmt_range = '10.4.2.'
ibgp_vlan = '4091'
asn_start = 65000
leaf_int_mlag_peer = 'Ethernet10'

# Generate key/val for Leafs
leaf_list = []
asn = asn_start+1
n = 0
for i in range(num_leafs):
    id = i+1
    device = {}
    device['device_role'] = {'name': 'Leaf'}
    device['interfaces'] = []
    device['bgp_neighbors'] = []
    device['evpn_neighbors'] = []
    device['hostname'] = f'leaf{id}'
    if id%2 == 1:
        device['side'] = 'left'
        device['asn'] = asn
        asn +=1
        device['bgp_neighbors'].append({'neighbor':f'{ibgp_range}{id}', 'remote_as': device['asn'], 'state': 'present'})
        device['interfaces'].append({'interface': 'Loopback1', 'address':f'{lo1_range}{id+10}', 'mask':'/32', 'description':'Loopback1 Underlay'})
    else:
        device['side'] = 'right'
        device['asn'] = asn-1
        device['bgp_neighbors'].append({'neighbor':f'{ibgp_range}{id-2}', 'remote_as': device['asn'], 'state': 'present'})
        device['interfaces'].append({'interface': 'Loopback1', 'address':f'{lo1_range}{id+9}', 'mask':'/32', 'description':'Loopback1 Underlay'})
    for j in range(num_spines):
        device['interfaces'].append({'interface':f'Ethernet{j+11}', 'address':f'{p2p_range}{j+1}.{n+1}', 'mask':'/31', 'description':f'spine{j+1}'})
        device['bgp_neighbors'].append({'neighbor':f'{p2p_range}{j+1}.{n}', 'remote_as': asn_start, 'state': 'present'})
        device['evpn_neighbors'].append({'neighbor':f'{lo0_range}{j+1}', 'remote_as': asn_start, 'state': 'present'})
    device['interfaces'].append({'interface': f'Vlan{ibgp_vlan}', 'address':f'{ibgp_range}{i}', 'mask':'/31', 'description':'IBGP Underlay SVI'})
    device['interfaces'].append({'interface': 'Loopback0', 'address':f'{lo0_range}{id+10}', 'mask':'/32', 'description':'Loopback0 Underlay'})
    device['interfaces'].append({'interface': 'Management1', 'address':f'{mgmt_range}{id+22}', 'mask':'/24', 'description':'OOB Management'})
    leaf_list.append(device)
    n+=2

# Generate key/val for Spines
spine_list = []
asn = asn_start
for i in range(num_spines):
    id = i+1
    device = {}
    device['device_role'] = {'name': 'Spine'}
    device['interfaces'] = []
    device['bgp_neighbors'] = []
    device['evpn_neighbors'] = []
    device['hostname'] = f'spine{id}'
    device['asn'] = asn
    n = 0
    for j in range(num_leafs):
        leaf_asn = search_dict_list(f'leaf{j+1}','asn',leaf_list)
        device['interfaces'].append({'interface':f'Ethernet{j+1}', 'address':f'{p2p_range}{id}.{n}', 'mask':'/31', 'description':f'leaf{j+1}'})
        device['bgp_neighbors'].append({'neighbor':f'{p2p_range}{id}.{n+1}', 'remote_as': leaf_asn, 'state': 'present'})
        device['evpn_neighbors'].append({'neighbor':f'{lo0_range}{j+11}', 'remote_as': leaf_asn, 'state': 'present'})
        n += 2
    device['interfaces'].append({'interface': 'Loopback0', 'address':f'{lo0_range}{id}', 'mask':'/32', 'description':'Loopback0 Underlay'})
    device['interfaces'].append({'interface': 'Management1', 'address':f'{mgmt_range}{id+20}', 'mask':'/24', 'description':'OOB Management'})
    spine_list.append(device)

# Combine both Leaf and Spine lists into a single list
all_devices = leaf_list + spine_list

############################################################################
# NetBox Tasks
############################################################################

# Connect to NetBox
nb = pynetbox.api(
    'http://netbox.dnet.local',
    token='d15fcf828dbd8bbcaef6da4c41b6e33ded1a7065'
)

# Create new Site
new_site = nb.dcim.sites.get(slug='gns3-evpn')
if not new_site:
    new_site = nb.dcim.sites.create(
        name='GNS3 EVPN Topology',
        slug='gns3-evpn',
    )

# Create Device Role 'Leaf'
nb.dcim.device_roles.create(
    name='Leaf',
    slug='leaf',
    color='2196f3'
)

# Create Device Role 'Spine'
nb.dcim.device_roles.create(
    name='Spine',
    slug='spine',
    color='3f51b5'
)

# NetBox Prefixes - Underlay P2P
for j in range(num_spines):
    nb.ipam.prefixes.create(
        prefix=f'{p2p_range}{j+1}.0/24',
        description=f'spine-{j+1} P2P'
    )
    n = 0
    for h in range(num_leafs):
        nb.ipam.prefixes.create(
            prefix=f'{p2p_range}{j+1}.{n}/31',
            description=f'spine-{j+1}-leaf-{h+1} P2P'
        )  
        n+=2

# NetBox Prefixes - Loopback0
nb.ipam.prefixes.create(
    prefix=f'{lo0_range}0/24',
    description='Underlay Loopbacks'
)

# NetBox Prefixes - Loopback1
nb.ipam.prefixes.create(
    prefix=f'{lo1_range}0/24',
    description='Overlay Loopbacks'
)

# NetBox Prefixes - IBGP Underlay
nb.ipam.prefixes.create(
    prefix=f'{ibgp_range}0/24',
    description='Underlay IBGP'
)

# Iterate over each Device
for dev in all_devices:

    # Create Device in NetBox
    new_device = nb.dcim.devices.create(
        name=dev['hostname'],
        site=new_site.id,
        device_type={
            'model': 'vEOS-Lab'
        },
        platform={
            'name': 'eos'
        },
        device_role=dev['device_role'],
        custom_fields={
            'bgp_asn': dev['asn']
        },
        local_context_data = {}
    )

    # Assign IP Addresses to Interfaces
    for intf in dev['interfaces']:

        if 'Ethernet' not in intf['interface'] and 'Management1' not in intf['interface']:
            nbintf = nb.dcim.interfaces.create(
                name=intf['interface'],
                form_factor=0,
                description=intf['description'],
                device=new_device.id
            )

        else:
            # Get interface id from NetBox
            nbintf = nb.dcim.interfaces.get(device=dev['hostname'], name=intf['interface'])
            nbintf.description = intf['description']
            nbintf.save()


        # Add IP to interface to NetBox
        intip = nb.ipam.ip_addresses.create(
            address=f"{intf['address']}{intf['mask']}",
            status=1,
            interface=nbintf.id,
            )
        
        # Assign Primary IP to device
        if intf['interface'] == 'Management1':
            new_device.primary_ip4 = {'address': intip.address }
            new_device.save()

    # Assign local config context data
    for k,v in dev.items():
        if 'side' in k:
            new_device.local_context_data.update({k:v})
    new_device.save()

    # Build MLAG Interfaces on Leaf switches
    if 'leaf' in dev['hostname']:

        # Create MLAG Port-channel interface
        mlag_po_intf = nb.dcim.interfaces.create(
            name='Port-Channel999',
            form_factor=200,
            description='mlag_peer_link',
            device=new_device.id
        )

        mlag_intf = nb.dcim.interfaces.get(
                name=leaf_int_mlag_peer,
                device=new_device.name,
        )
        # Add interface to port-channel
        try:
            mlag_intf.lag = {'id':mlag_po_intf.id}
            mlag_intf.description = 'mlag_peer_link'
            mlag_intf.save()

        except pynetbox.RequestError as e:
            print(e.error)
        
# Iterate over each Leaf
for dev in leaf_list:
    # Iterate over spine-facing interfaces
    for s in range(num_spines):
        # Attach cables
        intf_leaf_spine = nb.dcim.interfaces.get(device=dev['hostname'], name=f"Ethernet{s+11}")
        intf_spine_leaf = nb.dcim.interfaces.get(device=f'spine{s+1}', name=f"Ethernet{dev['hostname'].split('leaf')[1]}")
        new_cable = nb.dcim.cables.create(
            termination_a_type="dcim.interface",
            termination_a_id=intf_leaf_spine.id,
            termination_b_type="dcim.interface",
            termination_b_id=intf_spine_leaf.id,
        )
    if dev['side'] == 'left':
        # Attach cables
        leafr = 'leaf' + str(int(dev['hostname'].split('leaf')[1]) + 1)
        intf_leafl = nb.dcim.interfaces.get(device=dev['hostname'], name=leaf_int_mlag_peer)
        intf_leafr = nb.dcim.interfaces.get(device=leafr, name=leaf_int_mlag_peer)
        new_cable = nb.dcim.cables.create(
            termination_a_type="dcim.interface",
            termination_a_id=intf_leafl.id,
            termination_b_type="dcim.interface",
            termination_b_id=intf_leafr.id,
        )

After running the script, all of the data is now populated in NetBox according to our spec – ready for use by Ansible!

Here we have our devices added via pynetbox:

Here we have our prefixes added via pynetbox:

Clicking into a device, we can see details about the device.

Scroll down and we can see the interfaces of the device, which IP addresses they were assigned, and how they’re connected.

From the cables view, we can see every cable that we added between the devices.

Note: I added a custom field in NetBox for my specific environment. Since I use BGP ASN quite extensively, I felt compelled to add a custom field to the /dcim/devices/ model. To do so, I went to the top right of NetBox and clicked ‘Admin’. I then clicked +Add next to Custom fields. I want the parameter to exist on device objects, so I chose dcim>devices, gave it a name and a label, then saved it.

Now, under the Device view, I can see my custom field:

From a JSON perspective via the API, it looks like this:

At this stage you start getting a picture of what your Source of Truth can look like and how valuable this data can be when building an automated, intent-driven network.

Using Ansible to pull dynamic inventory from NetBox

So we have our inventory in NetBox, and now we need to use NetBox as our inventory source in Ansible. I’m using an example provided by the folks over at networktocode – creating a YAML file that will be called by Ansible as our inventory source.

plugin: netbox
api_endpoint: http://<netbox_host>
token: <netbox_token>
validate_certs: False
config_context: True
group_by:
  - device_roles
compose:
  ansible_network_os: platform.slug

Make sure you update this with your NetBox host and API token. In this example, I’m grouping by the role of the device (e.g., Leaf, Spine), but you can choose any option that works best for you. The compose portion is important here because this is what maps the device’s platform (e.g., ios, eos, junos) to the ansible_network_os used by Ansible to interact appropriately with the remote devices based on their OS.

We can now run a few tests to see what happens. First, let’s see what we get back when polling our inventory:

(ansible29) [14:38:12] dvarnum:netbox $ ansible-inventory -v --list -i netbox_inventory.yml                       
No config file found; using defaults
Fetching: http://netbox.dnet.local/api/dcim/sites/?limit=0
Fetching: http://netbox.dnet.local/api/dcim/regions/?limit=0
Fetching: http://netbox.dnet.local/api/tenancy/tenants/?limit=0
Fetching: http://netbox.dnet.local/api/dcim/racks/?limit=0
Fetching: http://netbox.dnet.local/api/dcim/device-roles/?limit=0
Fetching: http://netbox.dnet.local/api/dcim/platforms/?limit=0
Fetching: http://netbox.dnet.local/api/dcim/device-types/?limit=0
Fetching: http://netbox.dnet.local/api/dcim/manufacturers/?limit=0
Fetching: http://netbox.dnet.local/api/dcim/devices/?limit=0
Fetching: http://netbox.dnet.local/api/virtualization/virtual-machines/?limit=0
{
    "_meta": {
        "hostvars": {
            "leaf1": {
                "ansible_host": "10.4.2.23",
                "ansible_network_os": "eos",
                "config_context": [
                    {
                        "evpn_neighbors": [
                            {
                                "neighbor": "10.0.250.1",
                                "remote_as": 65000,
                                "state": "present"
                            },
                            {
                                "neighbor": "10.0.250.2",
                                "remote_as": 65000,
                                "state": "present"
                            }
                        ],
                        "side": "left"
                    }
                ],
                "device_roles": [
                    "Leaf"
                ],
                "device_types": [
                    "vEOS-Lab"
                ],
                "manufacturers": [
                    "Arista"
                ],
                "platforms": [
                    "eos"
                ],
                "primary_ip4": "10.4.2.23",
                "sites": [
                    "GNS3 EVPN Topology"
                ]
            },
            "leaf2": {
                "ansible_host": "10.4.2.24",
                "ansible_network_os": "eos",
                "config_context": [
                    {
                        "evpn_neighbors": [
                            {
                                "neighbor": "10.0.250.1",
                                "remote_as": 65000,
                                "state": "present"
                            },
                            {
                                "neighbor": "10.0.250.2",
                                "remote_as": 65000,
                                "state": "present"
                            }
                        ],
                        "side": "right"
                    }
                ],
                "device_roles": [
                    "Leaf"
                ],
                "device_types": [
                    "vEOS-Lab"
                ],
                "manufacturers": [
                    "Arista"
                ],
                "platforms": [
                    "eos"
                ],
                "primary_ip4": "10.4.2.24",
                "sites": [
                    "GNS3 EVPN Topology"
                ]
            },
            ...truncated

    "all": {
        "children": [
            "device_roles_Leaf",
            "device_roles_Spine",
            "ungrouped"
        ]
    },
    "device_roles_Leaf": {
        "hosts": [
            "leaf1",
            "leaf2",
            "leaf3",
            "leaf4",
            "leaf5",
            "leaf6",
            "leaf7",
            "leaf8"
        ]
    },
    "device_roles_Spine": {
        "hosts": [
            "spine1",
            "spine2"
        ]
    }
}

Notice each device listed in hostvars, and each device grouped under its respective role.

Next, let’s run a simple playbook that performs a show version on the remote devices.

---
- hosts: all
  connection: network_cli
  become: no
  gather_facts: no

  tasks:
    - name: run show version
      eos_command:
        commands: show version
      register: version

    - debug: msg={{version}}

(ansible29) [12:23:36] dvarnum:netbox $ ansible-playbook -i netbox_inventory.yml -l spine1 show_ver.yaml -u ansible -k

SSH password: 

PLAY [all] *******************************************************************************************************************************************

TASK [run show version] *********************************************************************************************************************************************

ok: [spine1]

TASK [debug] ********************************************************************************************************************************************************
ok: [spine1] => {
    "msg": {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": false,
        "failed": false,
        "stdout": [
            "vEOS\nHardware version:    \nSerial number:       \nSystem MAC address:  0cb3.1830.e62b\n\nSoftware image version: 4.22.3M\nArchitecture:           i686\nInternal build version: 4.22.3M-14418192.4223M\nInternal build ID:      413991dd-4451-4406-a16c-f1c6ac19d1f3\n\nUptime:                 0 weeks, 0 days, 15 hours and 58 minutes\nTotal memory:           2014520 kB\nFree memory:            1280452 kB"
        ],
        "stdout_lines": [
            [
                "vEOS",
                "Hardware version:    ",
                "Serial number:       ",
                "System MAC address:  0cb3.1830.e62b",
                "",
                "Software image version: 4.22.3M",
                "Architecture:           i686",
                "Internal build version: 4.22.3M-14418192.4223M",
                "Internal build ID:      413991dd-4451-4406-a16c-f1c6ac19d1f3",
                "",
                "Uptime:                 0 weeks, 0 days, 15 hours and 58 minutes",
                "Total memory:           2014520 kB",
                "Free memory:            1280452 kB"
            ]
        ],
        "warnings": [
            "Platform darwin on host spine1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information."
        ]
    }
}

PLAY RECAP **********************************************************************************************************************************************************
spine1                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Looking good!

Using Ansible and NetBox to build configurations

You may have noticed in the previous example, where we listed out the inventory, that some critical pieces of data were missing, such as the interfaces and IP addresses. This is because NetBox is split into various applications and data models. Annoyingly, when you pull in the inventory, none of the interface or IP address assignments are included – you have to perform separate API calls to get this information. It’s important to note that each object in NetBox is assigned an ‘id‘ which is unique throughout NetBox. The ‘id’ is a required field in many of the calls – for example, when creating interfaces on a device or attaching IP addresses to an interface. A result of this is an awkwardly painful (at least for me) Jinja template for building configurations, as you’ll see below.
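Concretely, much of the Jinja2 template further down is just a join on interface name between the /dcim/interfaces/ results and the /ipam/ip-addresses/ results. As a Python sketch (the function name is mine; the field shapes mirror the NetBox responses used in this post, where each IP address carries a nested interface dict):

```python
def ips_by_interface(interfaces, ip_addresses):
    """Group IP addresses (with masks) under the interface each is assigned to,
    matching on the interface name exactly as the Jinja2 template does."""
    grouped = {intf["name"]: [] for intf in interfaces}
    for addr in ip_addresses:
        name = addr["interface"]["name"]
        if name in grouped:
            grouped[name].append(addr["address"])
    return grouped

# Shapes mirror nb_interfaces.json.results and nb_ips.json.results:
intfs = [{"name": "Loopback0"}, {"name": "Ethernet11"}]
ips = [{"address": "10.0.250.11/32", "interface": {"name": "Loopback0"}}]
print(ips_by_interface(intfs, ips))
# {'Loopback0': ['10.0.250.11/32'], 'Ethernet11': []}
```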

Here is a sample playbook where we perform the following tasks:

  1. Get all devices from NetBox and register as variable ‘nb_all_devices‘
  2. Get device data about the specific device in the play and map to ‘nb_device‘ for each device the play loops through
  3. Get interface data about the specific device in the play and map to ‘nb_interfaces‘ for each device the play loops through
  4. Get IP address data about the specific device in the play and map to ‘nb_ips‘ for each device the play loops through
  5. Create a temporary folder for each device on the local system where this playbook is executed
  6. Generate configuration files for each device using the supplied Jinja2 template

---
# Run with ansible-playbook -i netbox_inventory.yml deploy_fabric.yaml -u ansible -k
- hosts: "all"
  connection: network_cli
  become: no
  gather_facts: no
  vars:
    netbox_url: http://netbox.dnet.local
    netbox_token: d15fcf828dbd8bbcaef6da4c41b6e33ded1a7065
    working_folder: results

  tasks:
    - name: Get all devices from NetBox
      uri:
        url: "{{ netbox_url }}/api/dcim/devices/"
        method: GET
        return_content: yes
        headers:
          accept: "application/json"
          Authorization: "Token {{ netbox_token }}"
      register: nb_all_devices

    - name: Get device from NetBox
      uri:
        url: "{{ netbox_url }}/api/dcim/devices/?name={{ inventory_hostname }}"
        method: GET
        return_content: yes
        headers:
          accept: "application/json"
          Authorization: "Token {{ netbox_token }}"
      register: nb_device

    - name: Get device interfaces from NetBox
      uri:
        url: "{{ netbox_url }}/api/dcim/interfaces/?device={{ inventory_hostname }}"
        method: GET
        return_content: yes
        headers:
          accept: "application/json"
          Authorization: "Token {{ netbox_token }}"
      register: nb_interfaces

    - name: Get device IP addresses from NetBox
      uri:
        url: "{{ netbox_url }}/api/ipam/ip-addresses/?device={{ inventory_hostname }}"
        method: GET
        return_content: yes
        headers:
          accept: "application/json"
          Authorization: "Token {{ netbox_token }}"
      register: nb_ips

    - name: Create temporary folder for {{ inventory_hostname }}
      file:
        dest: "{{ working_folder }}/{{ inventory_hostname }}"
        state: directory

    - name: Create configuration file for {{ inventory_hostname }}
      template:
          src: "templates/{{ platforms[0] }}.j2"
          dest: "{{ working_folder }}/{{ inventory_hostname }}/{{ inventory_hostname }}.conf"

#    - name: DEPLOY CONFIGURATION
#      eos_config:
#        src: "{{ working_folder }}/{{ inventory_hostname }}/{{ inventory_hostname }}.conf"

Our Jinja2 template:

{% for intf in nb_interfaces.json.results %}
interface {{ intf.name }}
{%   if intf.description is defined %}
   description {{ intf.description }}
{%   endif %}
{%   for address in nb_ips.json.results %}
{%     if address.interface.name == intf.name %}
   ip address {{ address.address }}
{%       if 'Ethernet' in intf.name %}
   no switchport
{%       endif %}
{%     endif %}
{%   endfor %}
{%   if 'mlag_peer_link' in intf.description %}
{%     if 'Ethernet' in intf.name %}
{%       set channel_id = intf.lag.name.split('Channel') %}
   channel-group {{ channel_id[1] }} mode active
{%     endif %}
{%     if 'Port-channel' in intf.name %}
   switchport mode trunk
   spanning-tree link-type point-to-point
   switchport trunk group mlag-peer   
{%     endif %}
{%   endif %}
   mtu 9214
   no shutdown
   exit
!
{% endfor %}
{#########################################################}
{% if device_roles[0] == 'Leaf' %}
{%   if 'left' in config_context[0].side %}
{%     set mlag_octet = '254' %}
{%   elif 'right' in config_context[0].side %}
{%     set mlag_octet = '255' %}
{%   endif %}
{% endif %}

{% if device_roles[0] == 'Leaf' %}
vlan 4090
   name mlag-peer
   trunk group mlag-peer
!
interface vlan 4090
   ip address 10.0.99.{{ mlag_octet }}/31
   no autostate
   no shutdown
!
no spanning-tree vlan 4090
!
mlag configuration
   domain-id leafs
   peer-link port-channel 999
   local-interface vlan 4090
{% if '254' in mlag_octet %}
   peer-address 10.0.99.255
{% elif '255' in mlag_octet %}
   peer-address 10.0.99.254
{% endif %}
   no shutdown
!
vlan 4091
   name mlag-ibgp
   trunk group mlag-peer
!
no spanning-tree vlan 4091
!
interface Ethernet1
   channel-group 1 mode active
!
interface Port-Channel1
   switchport mode trunk
   mlag 1
!
ip virtual-router mac-address 0000.cafe.babe
ip virtual-router mac-address mlag-peer
{% endif %}

{#########################################################}
ip routing
{% set bgp_neighbors = [] %}
router bgp {{ nb_device.json.results[0].custom_fields.bgp_asn }}
{% for intf in nb_interfaces.json.results %}
{%   if 'Loopback0' in intf.name %}
{%     for address in nb_ips.json.results %}
{%       if address.interface.name == intf.name %}
   router-id {{ address.address.split('/') | first }}
{%       endif %}
{%     endfor %}
{%   endif %}
{% endfor %}
   no bgp default ipv4-unicast
   bgp log-neighbor-changes
   distance bgp 20 200 200
   maximum-paths 4 ecmp 64
{% for intf in nb_interfaces.json.results %}
{%   if 'Ethernet' in intf.name %}
{%     for address in nb_ips.json.results %}
{%       if address.interface.name == intf.name and 'spine' in intf.description %}
   neighbor {{ address.address | ipaddr('network') }} remote-as 65000 {{ bgp_neighbors.append(address.address | ipaddr('network')) }}
{%       elif address.interface.name == intf.name and 'leaf' in intf.description %}
{%         if device_roles[0] == 'Spine' %}
{%           for d in ansible_play_hosts %}
{%             if hostvars[d].nb_device.json.results[0].name == intf.description %}
   neighbor {{ address.address | ipaddr('next_usable') }} remote-as {{ hostvars[d].nb_device.json.results[0].custom_fields.bgp_asn }} {{ bgp_neighbors.append(address.address | ipaddr('next_usable')) }}
{%             endif %}
{%           endfor %}
{%         endif %}
{%       endif %}
{%     endfor %}
{%   endif %}
{% endfor %}
{% for intf in nb_interfaces.json.results %}
{%   if 'Vlan4091' in intf.name %}
{%     for address in nb_ips.json.results %}
{%       if address.interface.name == intf.name %}
{%         if 'left' in config_context[0].side %}
   neighbor {{ address.address | ipaddr('next_usable') }} remote-as {{ nb_device.json.results[0].custom_fields.bgp_asn }}
   neighbor {{ address.address | ipaddr('next_usable') }} next-hop-self {{ bgp_neighbors.append(address.address | ipaddr('next_usable')) }}
{%         elif 'right' in config_context[0].side %}
   neighbor {{ address.address | ipaddr('network') }} remote-as {{ nb_device.json.results[0].custom_fields.bgp_asn }}
   neighbor {{ address.address | ipaddr('network') }} next-hop-self {{ bgp_neighbors.append(address.address | ipaddr('network')) }}
{%         endif %}
{%       endif %}
{%     endfor %}
{%   endif %}
{% endfor %}
   !
   address-family ipv4
{% for intf in nb_interfaces.json.results %}
{%   if 'Loopback0' in intf.name %}
{%     for address in nb_ips.json.results %}
{%       if address.interface.name == intf.name %}
      network {{ address.address }}
{%       endif %}
{%     endfor %}
{%   endif %}
{% endfor %}
{% for item in bgp_neighbors %}      
      neighbor {{ item }} activate
{% endfor %}

{#########################################################}

service routing protocols model multi-agent
!
{% set evpn_neighbors = [] %}
router bgp {{ nb_device.json.results[0].custom_fields.bgp_asn }}
   neighbor evpn peer-group
{% if 'spine' in inventory_hostname %}
   neighbor evpn next-hop-unchanged
{% endif %}
   neighbor evpn update-source Loopback0
   neighbor evpn ebgp-multihop 3
   neighbor evpn send-community extended
   neighbor evpn maximum-routes 12000
{% for d in ansible_play_hosts %}
{%   if hostvars[d].device_roles[0] == 'Leaf' %}
{%     for address in hostvars[d].nb_ips.json.results %}
{%       if address.interface.name == 'Loopback0' %} {{ evpn_neighbors.append({'address':address.address, 'asn':hostvars[d].nb_device.json.results[0].custom_fields.bgp_asn})}} 
{%       endif %}
{%     endfor %}
{%   endif %}
{% endfor %}
{% if device_roles[0] == 'Leaf' %}
{%   for item in config_context[0].evpn_neighbors %}
   neighbor {{ item.neighbor }} peer group evpn
   neighbor {{ item.neighbor }} remote-as {{ item.remote_as }}
{%   endfor %}
{% elif device_roles[0] == 'Spine' %}
{%   for item in evpn_neighbors %}
   neighbor {{ item.address | ipaddr('address') }} peer group evpn
   neighbor {{ item.address | ipaddr('address') }} remote-as {{ item.asn }}
{%   endfor %}
{% endif %}
   !
   address-family evpn
      neighbor evpn activate
{% if device_roles[0] == 'Leaf' %}
{%   for address in nb_ips.json.results %}
{%     if address.interface.name == 'Loopback0' %}
      network {{ address.address }}
{%     endif %}
{%   endfor %}
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan learn-restrict any
!
{% endif %}

{#########################################################}
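Since Jinja2 is an ordinary Python library, the side-based MLAG logic above can be sanity-checked outside Ansible before running the playbook. The sketch below is a trimmed-down stand-in for the real template (which also relies on Ansible-only filters like `ipaddr`), rendered with a mock `side` value standing in for the NetBox config context:

```python
# Minimal sketch: exercise the side-based MLAG addressing with plain Jinja2.
# TPL is an abbreviated stand-in for the template above, not the real file.
from jinja2 import Template

TPL = (
    "{% if side == 'left' %}{% set o = '254' %}"
    "{% else %}{% set o = '255' %}{% endif %}"
    "interface vlan 4090\n"
    "   ip address 10.0.99.{{ o }}/31\n"
    "mlag configuration\n"
    "   peer-address 10.0.99.{{ '255' if o == '254' else '254' }}\n"
)

left = Template(TPL).render(side='left')    # left switch gets .254, peers with .255
right = Template(TPL).render(side='right')  # right switch gets .255, peers with .254
print(left)
```

Rendering both sides this way makes it easy to eyeball that the pair never ends up with the same address on the /31.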

Now run the playbook:

(ansible29) [16:43:10] dvarnum:netbox $ ansible-playbook -i netbox_inventory.yml  deploy_fabric.yaml

PLAY [all] **********************************************************************************************************************************************************

TASK [Get all devices from NetBox] **********************************************************************************************************************************
ok: [leaf5]
ok: [leaf6]
ok: [leaf3]
ok: [leaf4]
ok: [leaf1]
ok: [leaf2]
ok: [leaf7]
ok: [leaf8]
ok: [spine1]
ok: [spine2]

TASK [Get device from NetBox] ***************************************************************************************************************************************
ok: [leaf2]
ok: [leaf6]
ok: [leaf1]
ok: [leaf7]
ok: [leaf4]
ok: [leaf5]
ok: [leaf3]
ok: [leaf8]
ok: [spine1]
ok: [spine2]

TASK [Get device interfaces from NetBox] ****************************************************************************************************************************
ok: [leaf1]
ok: [leaf2]
ok: [leaf5]
ok: [leaf4]
ok: [leaf3]
ok: [leaf6]
ok: [leaf7]
ok: [leaf8]
ok: [spine1]
ok: [spine2]

TASK [Get device IP addresses from NetBox] **************************************************************************************************************************
ok: [leaf2]
ok: [leaf5]
ok: [leaf3]
ok: [leaf1]
ok: [leaf6]
ok: [leaf7]
ok: [leaf4]
ok: [leaf8]
ok: [spine1]
ok: [spine2]

TASK [Create temporary folder for leaf1] ****************************************************************************************************************************
changed: [leaf3]
changed: [leaf6]
changed: [leaf7]
changed: [leaf5]
changed: [leaf4]
changed: [leaf1]
changed: [leaf2]
changed: [leaf8]
changed: [spine1]
changed: [spine2]

TASK [Create configuration file for leaf1] **************************************************************************************************************************
changed: [leaf3]
changed: [leaf5]
changed: [leaf6]
changed: [leaf1]
changed: [leaf8]
changed: [leaf4]
changed: [leaf2]
changed: [spine1]
changed: [spine2]
changed: [leaf7]

PLAY RECAP **********************************************************************************************************************************************************
leaf1                      : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
leaf2                      : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
leaf3                      : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
leaf4                      : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
leaf5                      : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
leaf6                      : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
leaf7                      : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
leaf8                      : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
spine1                     : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
spine2                     : ok=6    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

After running the playbook, all configurations are successfully created:

(ansible29) [16:43:06] dvarnum:netbox $ ls -lR results | grep .conf
-rw-r--r--  1 dvarnum  224169620  3311 Feb  5 13:34 leaf1.conf
-rw-r--r--  1 dvarnum  224169620  3311 Feb  5 13:34 leaf2.conf
-rw-r--r--  1 dvarnum  224169620  3311 Feb  5 13:34 leaf3.conf
-rw-r--r--  1 dvarnum  224169620  3311 Feb  5 13:34 leaf4.conf
-rw-r--r--  1 dvarnum  224169620  3311 Feb  5 13:34 leaf5.conf
-rw-r--r--  1 dvarnum  224169620  3317 Feb  5 13:34 leaf6.conf
-rw-r--r--  1 dvarnum  224169620  3317 Feb  5 13:34 leaf7.conf
-rw-r--r--  1 dvarnum  224169620  3317 Feb  5 13:34 leaf8.conf
-rw-r--r--  1 dvarnum  224169620  3375 Feb  5 13:34 spine1.conf
-rw-r--r--  1 dvarnum  224169620  3375 Feb  5 13:34 spine2.conf

Crack open one of these config files and take a look:

interface Ethernet1
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet2
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet3
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet4
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet5
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet6
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet7
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet8
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet9
   description 
   mtu 9214
   no shutdown
   exit
!
interface Ethernet10
   description mlag_peer_link
   channel-group 999 mode active
   mtu 9214
   no shutdown
   exit
!
interface Ethernet11
   description spine1
   ip address 10.0.1.1/31
   no switchport
   mtu 9214
   no shutdown
   exit
!
interface Ethernet12
   description spine2
   ip address 10.0.2.1/31
   no switchport
   mtu 9214
   no shutdown
   exit
!
interface Loopback0
   description Loopback0 Underlay
   ip address 10.0.250.11/32
   mtu 9214
   no shutdown
   exit
!
interface Loopback1
   description Loopback1 Underlay
   ip address 10.0.255.11/32
   mtu 9214
   no shutdown
   exit
!
interface Management1
   description OOB Management
   ip address 10.40.2.23/24
   mtu 9214
   no shutdown
   exit
!
interface Port-Channel999
   description mlag_peer_link
   mtu 9214
   no shutdown
   exit
!
interface Vlan4091
   description IBGP Underlay SVI
   ip address 10.0.3.0/31
   mtu 9214
   no shutdown
   exit
!

vlan 4090
   name mlag-peer
   trunk group mlag-peer
!
interface vlan 4090
   ip address 10.0.99.254/31
   no autostate
   no shutdown
!
no spanning-tree vlan 4090
!
mlag configuration
   domain-id leafs
   peer-link port-channel 999
   local-interface vlan 4090
   peer-address 10.0.99.255
   no shutdown
!
vlan 4091
   name mlag-ibgp
   trunk group mlag-peer
!
no spanning-tree vlan 4091
!
interface Ethernet1
   channel-group 1 mode active
!
interface Port-Channel1
   switchport mode trunk
   mlag 1
!
ip virtual-router mac-address 0000.cafe.babe
ip virtual-router mac-address mlag-peer

#########################################################
ip routing
router bgp 65001
   router-id 10.0.250.11
   no bgp default ipv4-unicast
   bgp log-neighbor-changes
   distance bgp 20 200 200
   maximum-paths 4 ecmp 64
   neighbor 10.0.1.0 remote-as 65000 
   neighbor 10.0.2.0 remote-as 65000 
   neighbor 10.0.3.1 remote-as 65001
   neighbor 10.0.3.1 next-hop-self 
   !
   address-family ipv4
      network 10.0.250.11/32
      neighbor 10.0.1.0 activate
      neighbor 10.0.2.0 activate
      neighbor 10.0.3.1 activate


service routing protocols model multi-agent
!
router bgp 65001
   neighbor evpn peer-group
   neighbor evpn update-source Loopback0
   neighbor evpn ebgp-multihop 3
   neighbor evpn send-community extended
   neighbor evpn maximum-routes 12000
   neighbor 10.0.250.1 peer group evpn
   neighbor 10.0.250.1 remote-as 65000
   neighbor 10.0.250.2 peer group evpn
   neighbor 10.0.250.2 remote-as 65000
   !
   address-family evpn
      neighbor evpn activate
      network 10.0.250.11/32
!
interface Vxlan1
   vxlan source-interface Loopback1
   vxlan udp-port 4789
   vxlan learn-restrict any
!

To recap, we just ran an Ansible playbook, pulling all variables in from our Source of Truth – NetBox – and generated full configurations for our Arista EVPN build. NICE!
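Before pushing anything to real switches, a lightweight post-render check can confirm each generated file contains what you'd expect, e.g. that every BGP neighbor given a `remote-as` is also activated in an address-family. A hypothetical check, run against a snippet abbreviated from the leaf1.conf output above:

```python
# Hypothetical sanity check on a rendered config: every neighbor with a
# remote-as should also have an "activate" line. The config string is
# abbreviated from the leaf1.conf shown above.
import re

config = """\
router bgp 65001
   neighbor 10.0.1.0 remote-as 65000
   neighbor 10.0.2.0 remote-as 65000
   neighbor 10.0.3.1 remote-as 65001
   address-family ipv4
      neighbor 10.0.1.0 activate
      neighbor 10.0.2.0 activate
      neighbor 10.0.3.1 activate
"""

defined = set(re.findall(r"neighbor (\S+) remote-as", config))
activated = set(re.findall(r"neighbor (\S+) activate", config))
missing = defined - activated
assert not missing, f"neighbors never activated: {missing}"
print("all", len(defined), "underlay neighbors activated")
```

In practice you would loop this over every file in the `results` directory rather than a hard-coded string.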

If I wanted to deploy this, all I'd need to do is uncomment the last task of the playbook and we're good to go.

Using Ansible to add data into NetBox

Say I want to add a new leaf-pair to my data center – a fairly standard procedure. However, I don't want to worry about picking the next available management IP address, loopback IP addresses, or point-to-point prefixes, and then assigning IPs from those prefixes. What a pain! NetBox shines in automation workflows like this, letting you retrieve and assign the next available prefix in a given range, or the next available IP address in a given subnet. NetBox, where have you been all my life?
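Under the hood, "next available" is just subnet math. The sketch below models it with Python's stdlib `ipaddress` module – an illustration of the idea only, not NetBox's actual implementation, which handles this server-side:

```python
# Illustration of "next available /31 from a /24" using the stdlib.
# NetBox does this server-side; this is not its actual code.
import ipaddress

def next_available_31(parent, allocated):
    """Return the first /31 inside `parent` that isn't already allocated."""
    parent = ipaddress.ip_network(parent)
    allocated = {ipaddress.ip_network(p) for p in allocated}
    for candidate in parent.subnets(new_prefix=31):
        if candidate not in allocated:
            return candidate
    raise RuntimeError("parent prefix exhausted")

# With eight /31s already carved out of 10.0.1.0/24 (leaf1-leaf8 uplinks),
# the new leaf-pair gets the next free one:
used = [f"10.0.1.{i}/31" for i in range(0, 16, 2)]
print(next_available_31("10.0.1.0/24", used))  # → 10.0.1.16/31
```

This is exactly the behavior the `first_available` option exercises in the playbook below.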

Take a look at the playbook below where I run through a series of tasks to add a new pair of leaf switches into NetBox. Here is an overview of the steps:

  1. Provide the minimum necessary fields for each device, such as a name, type, BGP ASN, and which spine interfaces the leaf will connect to. I’ve also added a custom attribute called “side” to identify each member within the leaf-pair.
  2. Create a Site in NetBox
  3. Create a Rack in NetBox
  4. Create Devices in NetBox
  5. Create Loopback0 interfaces and auto-assign IP addresses
  6. Create Loopback1 interfaces and auto-assign IP addresses
  7. Assign Management IP Address
  8. Associate Management IP Address as the “Primary” IP address of the device
  9. Create MLAG Peer-Link LAG and attach interface to LAG
  10. Allocate P2P segment with Spine1 and auto-assign IPs
  11. Allocate P2P segment with Spine2 and auto-assign IPs
  12. Create IBGP Peering VLAN between leaf-pair and auto-assign IPs
  13. Add custom config context called “side” which I use in Ansible/Jinja to build configs

Here is the full playbook:

- name: "Deploy New Leaf Switch Pair in NetBox"
  connection: local
  hosts: localhost
  gather_facts: False
  collections:
    - netbox_community.ansible_modules

  vars:
    netbox_url: http://netbox.dnet.local
    netbox_token: d15fcf828dbd8bbcaef6da4c41b6e33ded1a7065

#############################################################
# Devices to be added to Netbox
#############################################################

    devices:
      - name: "Ansible_Test_Leaf9"
        position: 38
        device_type: "vEOS-Lab"
        spine1_intf: "Ethernet9"
        spine2_intf: "Ethernet9"
        side: left
        bgp_asn: 65100
        custom:
          local_context_data:
            side: left

      - name: "Ansible_Test_Leaf10"
        position: 37
        device_type: "vEOS-Lab"
        spine1_intf: "Ethernet10"
        spine2_intf: "Ethernet10"
        side: right
        bgp_asn: 65100
        custom:
          local_context_data:
            side: right
  tasks:

#############################################################
# Create Site in NetBox (if applicable)
#############################################################

    - name: "Create new site"
      netbox_site:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "Ansible Test Site"
        state: present
      register: "site"

#############################################################
# Create Rack in Netbox (if applicable)
#############################################################

    - name: "Create new rack"
      netbox_rack:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "Ansible - Rack-One"
          site: "{{ site.site.slug }}"
      register: "rack_one"

#############################################################
# Create Device in NetBox
#############################################################

    - name: Create device within Netbox
      netbox_device:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "{{ item.name }}"
          device_type: "{{ item.device_type }}"
          device_role: "Leaf"
          platform: eos
          custom_fields:
            bgp_asn: "{{ item.bgp_asn }}"
          site: "{{ site.site.slug }}"
          rack: "{{ rack_one.rack.name }}"
          position: "{{ item.position }}"
          face: front
          status: Staged
        state: present
      loop: "{{ devices }}"
      register: nb_device

    #- debug: var=vars

#############################################################
# Create Underlay Loopback0 interface and assign IP
#############################################################

    - name: "Create Loopback0 interfaces"
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "Loopback0"
          device: "{{ item.name }}"
          form_factor: Virtual
      loop: "{{ devices }}"
    
    - name: "Add IP to Loopback0"
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: "10.0.250.0/24"
          interface:
            device: "{{ item.name }}"
            name: "Loopback0"
      loop: "{{ devices }}"
      
#############################################################
# Create Overlay Loopback1 interface and assign IP
#############################################################

    - name: "Create Loopback1 interfaces"
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "Loopback1"
          device: "{{ item.name }}"
          form_factor: Virtual
      loop: "{{ devices }}"

    - name: Get a new available IP inside Loopback1 Pool
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: 10.0.255.0/24
        state: new
      register: loopback1_address

    - name: Delete the temporarily reserved IP address
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          address: "{{ loopback1_address.ip_address.address }}"
        state: absent

    - name: "Add IP to Loopback1"
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          address: "{{ loopback1_address.ip_address.address | ipaddr('address') | ipaddr('host')}}"
          interface:
            device: "{{ item.name }}"
            name: "Loopback1"
        state: new
      loop: "{{ devices }}"

#############################################################
# Assign Management IP Address
#############################################################
    
    - name: "Assign IP to Management1"
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: "10.4.2.0/24"
          interface:
            device: "{{ item.name }}"
            name: "Management1"
      loop: "{{ devices }}"
      register: mgmt_addresses

#############################################################
# Associate Primary IP address to device
#############################################################

    - name: "Set device primary IP"
      netbox_device:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "{{ item.item.name }}"
          primary_ip4: "{{ item.ip_address.address }}"
      loop: "{{ mgmt_addresses.results }}"

#############################################################
# Create MLAG Peer-Link
#############################################################

    - name: Create LAG
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          device: "{{ item.name }}"
          name: Port-channel999
          description: mlag_peer_link
          form_factor: Link Aggregation Group (LAG)
          mtu: 9000
          mgmt_only: false
          mode: Tagged
        state: present
      loop: "{{ devices }}"

    - name: Assign interface to parent LAG
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          device: "{{ item.name }}"
          name: Ethernet10
          enabled: false
          lag:
            name: Port-channel999
          mtu: 9000
          mgmt_only: false
          mode: Tagged
        state: present
      loop: "{{ devices }}"

#############################################################
# Allocate new P2P segment with Spine1 and Assign IPs to interfaces
#############################################################

    - name: "Allocate new P2P segment with spine1"
      netbox_prefix:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          parent: "10.0.1.0/24"
          prefix_length: 31
          description: test
        state: present
        first_available: yes
      loop: "{{ devices }}"
      register: spine1_p2p

    - name: "Add IP to Spine1 Interface"
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: "{{ item.prefix.prefix }}"
          interface:
            device: "spine1"
            name: "{{ item.item.spine1_intf }}"
      loop: "{{ spine1_p2p.results }}"

    - name: Add description to Spine1 Interface
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          device: "spine1"
          name: "{{ item.spine1_intf }}"
          description: "{{ item.name }}"
        state: present
      loop: "{{ devices }}"

    - name: "Add IP to Leaf Interface"
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: "{{ item.prefix.prefix }}"
          interface:
            device: "{{ item.item.name }}"
            name: "Ethernet11"
      loop: "{{ spine1_p2p.results }}"

    - name: Add description to Leaf Interface (spine1 uplink)
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          device: "{{ item.name }}"
          name: "Ethernet11"
          description: "spine1"
        state: present
      loop: "{{ devices }}"

#############################################################
# Allocate new P2P segment with Spine2 and Assign IPs to interfaces
#############################################################

    - name: "Allocate new P2P segment with spine2"
      netbox_prefix:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          parent: "10.0.2.0/24"
          prefix_length: 31
          description: test
        state: present
        first_available: yes
      loop: "{{ devices }}"
      register: spine2_p2p
 
    - name: "Add IP to Spine2 Interface"
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: "{{ item.prefix.prefix }}"
          interface:
            device: "spine2"
            name: "{{ item.item.spine2_intf }}"
      loop: "{{ spine2_p2p.results }}"  

    - name: Add description to Spine2 Interface
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          device: "spine2"
          name: "{{ item.spine2_intf }}"
          description: "{{ item.name }}"
        state: present
      loop: "{{ devices }}"

    - name: "Add IP to Leaf Interface"
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: "{{ item.prefix.prefix }}"
          interface:
            device: "{{ item.item.name }}"
            name: "Ethernet12"
      loop: "{{ spine2_p2p.results }}"

    - name: Add description to Leaf Interface (spine2 uplink)
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          device: "{{ item.name }}"
          name: "Ethernet12"
          description: "spine2"
        state: present
      loop: "{{ devices }}"

#############################################################
# Create IBGP Peering VLAN between MLAG Peers
#############################################################

    - name: "Create Vlan4091 IBGP peering interfaces"
      netbox_device_interface:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          name: "Vlan4091"
          device: "{{ item.name }}"
          form_factor: Virtual
      loop: "{{ devices }}"

    - name: "Allocate new P2P segment for IBGP between Leaf-Pairs"
      netbox_prefix:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          parent: "10.0.3.0/24"
          prefix_length: 31
          description: test
        state: present
        first_available: yes
      loop: "{{ devices }}"
      when: item.side == "left"
      register: u_ibgp_p2p
    
    - name: "Add IP to Leaf Interface"
      netbox_ip_address:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        data:
          prefix: "{{ u_ibgp_p2p.results[0].prefix.prefix }}"
          interface:
            device: "{{ item.name }}"
            name: "Vlan4091"
      loop: "{{ devices }}"

######################################################

    - name: "PATCH: Add Config Context into NetBox"
      uri:
        url: "{{ netbox_url }}/api/dcim/devices/{{ item.1.device.id }}/"
        method: PATCH
        return_content: "no"
        headers:
          accept: "application/json"
          Authorization: "Token {{ netbox_token }}"
        status_code: [200, 201]
        body_format: json
        body: |
          {{ item.0.custom }}
      with_nested: 
        - "{{ devices }}"
        - "{{ nb_device.results }}"
      when: item.0.name == item.1.device.name
            
######################################################
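That final `uri` task pairs every entry in `devices` with every registered NetBox result via `with_nested`, and the `when:` condition keeps only the pairs whose names match, so each device gets exactly one PATCH. The same pairing logic in plain Python (with illustrative data mirroring the playbook vars):

```python
# Pure-Python equivalent of the with_nested + when pairing in the last task:
# take the cross product of (devices, netbox results), keep only pairs whose
# names match; each kept pair corresponds to one PATCH request.
# Device IDs below are illustrative, not from a real NetBox instance.
from itertools import product

devices = [
    {"name": "Ansible_Test_Leaf9",  "custom": {"local_context_data": {"side": "left"}}},
    {"name": "Ansible_Test_Leaf10", "custom": {"local_context_data": {"side": "right"}}},
]
nb_results = [
    {"device": {"id": 101, "name": "Ansible_Test_Leaf9"}},
    {"device": {"id": 102, "name": "Ansible_Test_Leaf10"}},
]

patches = [
    (r["device"]["id"], d["custom"])           # (device id, PATCH body)
    for d, r in product(devices, nb_results)   # with_nested cross product
    if d["name"] == r["device"]["name"]        # the `when:` condition
]
print(patches)
```

Each `(id, body)` tuple maps to one `PATCH /api/dcim/devices/<id>/` call carrying that device's `local_context_data`.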

After running the playbook, we can see the change log in NetBox:

The new devices show up in NetBox:

with interface settings and IP assignments:

Prefixes are automatically created and IPs automatically assigned from them. For example, 10.0.1.16/31 did not exist in NetBox prior to executing this playbook. All I did was ask NetBox for the next available /31 prefix from the 10.0.1.0/24 block, and voilà.

For fun, I also added a new rack and placed the new devices in rack units 37 and 38.

Summary

I’m really just starting to scratch the surface here but have already seen the light that is NetBox. I’ve played around with significantly larger NetBox builds – thousands of devices, thousands of prefixes, and hundreds of sites – and it performs flawlessly. It also eliminates the painful static inventory and variables files I used to compile from Excel spreadsheets, which were difficult to share and difficult to integrate with other systems.

References

Big kudos to the following folks who shared fantastic, much more in-depth content on NetBox, with examples that helped me out tremendously. Check out their content and resources below:

Detailed pynetbox article series by ttl255

Documenting your network infrastructure in NetBox, integrating with Ansible over REST API

Netbox Community GitHub – Ansible Modules

NetworktoCode GitHub – NetBox Examples

Lastly, you can find all of the code from this blog post in the GitHub repo below:

https://github.com/varnumd/netbox-blog

Happy automating 🙂
