Configuring iSLB for CCIE DC

I’ll be going through iSLB, explaining what it is, and showing how to configure it.  A full template is at the bottom of this post.

Part 1 of the series, “Configuring iSCSI for CCIE DC” can be found here.

What is iSLB?

iSLB is iSCSI Server Load-Balancing. It is still iSCSI, so don't confuse it with some other protocol; think of it as iSCSIv2. iSLB introduces a few new features to iSCSI:

– Load-balancing between MDSs (or between ports on the same MDS)
– Cisco Fabric Services (CFS) Distribution
– iSLB Initiators (with Automatic Zone creation)

Load-balancing

iSLB uses VRRP between two MDS switches for high availability and load-balancing. With VRRP you have a master and a backup virtual gateway, and typically all traffic is sent to the master gateway. So how does load-balancing work? A pair of MDSs runs CFS to keep an iSLB VRRP table in sync. This table records the current load for each Initiator-to-MDS pair. When an initiator request comes into the VRRP master switch, the table is checked to see the current load on each MDS. The master will take the initiator and create a session if its load is lower than the backup switch's. If its current load is higher, the master switch sends an ICMP redirect back to the initiator and a new session is built directly to the physical IP of the backup MDS switch. The default weight (load) for each initiator is 1000. This, of course, can be changed to influence path selection.

Although it's not visible initially, the VRRP master automatically starts with a higher load, since it has more responsibility. This means that the first session is always going to be redirected and load-balanced to the backup MDS switch. All sessions afterwards will be load-balanced based on the load reported in the table.

As an example, say we have 3 initiators. Initiator 1 has a default metric of 1000, Initiator 2 has a configured metric of 900, and Initiator 3 has a default metric of 1000.
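For reference, a non-default metric like Initiator 2's 900 is set under the iSLB initiator definition itself, along these lines (a minimal sketch; the IQN is a made-up example, and the change still needs to be committed):

islb initiator name iqn.example:initiator-2
 metric 900
 exit
islb commit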

[Figure: iscsi2-top1]

The first initiator will get load-balanced from the Master (MDS1) to the Backup (MDS2) VRRP gateway. (The "0+M" label in the diagram means a load of 0, with the M marking MDS1 as the Master.)

[Figure: iscsi2-top2]

Now MDS2 has a load of 1000. Initiator 2 will get balanced to MDS1 since it has a lower load:

[Figure: iscsi2-top3]

Now MDS1 has a load of 900 and MDS2 has a load of 1000. Initiator 3 sends a discovery and gets load-balanced to MDS1 again because it has the lower load.

[Figure: iscsi2-top4]

Our current load looks like this now, with Initiator1 on MDS2 and Initiators 2 and 3 on MDS1:

[Figure: iscsi2-top5]

iSLB Initiators (AKA Initiator-Targets)

You can now specify targets when creating initiators. Previously, if doing a static configuration, we had to create the initiator and virtual target separately, and then add the initiator to the virtual target.  Gone are those days with iSLB.

Added bonus – You can now automatically create zones! This is on by default, so be careful to specify the no-zone option if you do not want the zone created automatically. One caveat: you must already have an active zoneset configured! I beat myself up for a few minutes on that one.

References

Cisco.com Documentation:
http://www.cisco.com/c/en/us/td/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/ipsvc/nxos/ipsvc/ciscsi.html#wp3000775

General Setup

Our topology for this post will look similar to the above, except I actually only have 1 server, and it’s a UCS blade.  For simplicity, it logically looks like this:

[Figure: topology]

Configure VSAN

vsan database
 vsan 101
 vsan 101 interface fc1/13

Configure FC interfaces

int fc1/1
 switchport mode e
 switchport trunk allowed vsan 101
 no shut
int fc1/13
 switchport mode fl
 no shut

Verify FCNS:

MDS1(config-if)# show fcns database 

VSAN 101:
--------------------------------------------------------------------------
FCID        TYPE  PWWN                    (VENDOR)        FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x0100da    NL    21:00:00:1d:38:1c:79:0a                 scsi-fcp:target 
0x0100dc    NL    21:00:00:1d:38:1c:6f:24                 scsi-fcp:target 
0x0100e0    NL    21:00:00:1d:38:1c:78:fa                 scsi-fcp:target 
0x0100e1    NL    21:00:00:1d:38:1c:78:d9                 scsi-fcp:target 
0x0100e2    NL    21:00:00:1d:38:0e:d9:5e                 scsi-fcp:target 
0x0100e4    NL    21:00:00:1d:38:1c:76:af                 scsi-fcp:target 
0x0100e8    NL    21:00:00:1d:38:1c:77:04                 scsi-fcp:target 
0x0100ef    NL    21:00:00:1d:38:1c:76:db                 scsi-fcp:target 
0x0200da    NL    22:00:00:1d:38:1c:79:0a                 scsi-fcp:target 
0x0200dc    NL    22:00:00:1d:38:1c:6f:24                 scsi-fcp:target 
0x0200e0    NL    22:00:00:1d:38:1c:78:fa                 scsi-fcp:target 
0x0200e1    NL    22:00:00:1d:38:1c:78:d9                 scsi-fcp:target 
0x0200e2    NL    22:00:00:1d:38:0e:d9:5e                 scsi-fcp:target 
0x0200e4    NL    22:00:00:1d:38:1c:76:af                 scsi-fcp:target 
0x0200e8    NL    22:00:00:1d:38:1c:77:04                 scsi-fcp:target 
0x0200ef    NL    22:00:00:1d:38:1c:76:db                 scsi-fcp:target 

Total number of entries = 16

Configure an Active Zoneset

An active zoneset is required for auto-zoning to function. Configure an enhanced zoneset with a fake zone just so we can utilize it with auto-zoning later on (the zone commit below assumes enhanced zoning is already enabled on the VSAN):

zoneset name VSAN101 vsan 101
 zone name FAKE
  member pwwn 33:33:33:33:33:33:33:33
  member pwwn 33:33:33:33:33:33:33:34
zoneset activate name VSAN101 vsan 101
zone commit vsan 101

Verify:

MDS1# show zoneset active vsan 101
zoneset name VSAN101 vsan 101
  zone name FAKE vsan 101
    pwwn 33:33:33:33:33:33:33:33
    pwwn 33:33:33:33:33:33:33:34
MDS2# show zoneset active vsan 101
zoneset name VSAN101 vsan 101
  zone name FAKE vsan 101
    pwwn 33:33:33:33:33:33:33:33
    pwwn 33:33:33:33:33:33:33:34

Enable iSCSI on both MDSs

feature iscsi
iscsi enable module 1

Configure iSLB Distribution on both MDSs

islb distribute
islb commit

Kind of an awkward way to verify islb distribution is up, but helpful nonetheless:

MDS1(config)# show cfs peers name islb

Scope      : Physical-fc
-------------------------------------------------------------------------
 Switch WWN              IP Address
-------------------------------------------------------------------------
 20:00:00:0d:ec:54:63:80 10.60.0.53                              [Local]
                         MDS1                                    
 20:00:00:0d:ec:27:4f:40 10.60.0.54                             

Total number of entries = 2

Check our current status: we can see that iSLB Distribution is enabled and that we have no active CFS session (which only means we haven't locked the configuration).

MDS1(config)# show islb status
  iSLB Distribute: Enabled
  iSLB CFS Session: Does not exist
  Number of load balanced VRRP groups: 0
  Number of load-balanced initiators: 0

No shut the iSCSI interfaces

MDS1:

int iscsi 1/2
 no shut
 
MDS2:

int iscsi 1/2
 no shut

Configure the corresponding physical interfaces in a VRRP group

Note, you could also run local-only VRRP between two physical interfaces on the same MDS. This is useful if you just want link redundancy rather than MDS redundancy, and the configuration is identical (see the sketch after the MDS2 configuration below). In the example here, we are using a single physical interface on two separate MDSs.

MDS1:

interface g1/2
 ip add 10.150.150.5 255.255.255.0
 no shut
  vrrp 150
   ip 10.150.150.254
   no shut
   
MDS2:

interface g1/2
 ip add 10.150.150.6 255.255.255.0
 no shut
  vrrp 150
   ip 10.150.150.254
   no shut
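
For comparison, the local-only variant mentioned above would simply put two physical ports on the same MDS into the same VRRP group, something like this (a sketch; g1/1 is used purely for illustration, and the IPs are reused from the example):

int g1/1
 ip add 10.150.150.5 255.255.255.0
 no shut
  vrrp 150
   ip 10.150.150.254
   no shut
int g1/2
 ip add 10.150.150.6 255.255.255.0
 no shut
  vrrp 150
   ip 10.150.150.254
   no shut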

Verify VRRP

MDS1# show vrrp
      Interface  VR IpVersion Pri   Time Pre State   VR IP addr
---------------------------------------------------------------------------
        GigE1/2 150   IPv4    100    1 s     master  10.150.150.254
		
MDS2# show vrrp
      Interface  VR IpVersion Pri   Time Pre State   VR IP addr
---------------------------------------------------------------------------
        GigE1/2 150   IPv4    100    1 s     backup  10.150.150.254

Enable iSLB load-balancing

We will run these commands on MDS1:

islb vrrp 150 load-balance
islb commit

Notice that after enabling iSLB load-balancing, MDS1 locked the CFS session:

MDS1(config)# islb vrrp 150 load-balance 
MDS1(config)# 
MDS1(config)# show islb status
  iSLB Distribute: Enabled
  iSLB CFS Session: Exists (Initiated locally)
  Number of load balanced VRRP groups: 0
  Number of load-balanced initiators: 0

Commit the change:

MDS1(config)# islb commit

Notice the change was distributed via CFS to MDS2:

MDS2# sh run | i islb
islb distribute
islb commit
islb vrrp 150 load-balance

We can run a new command now to get further details on iSLB load-balancing. Notice the table for load-balance interfaces, Initiator assignments (currently empty), and the Initiator Load.

MDS1# show islb vrrp summary 

                         -- Groups For Load Balance --
--------------------------------------------------------------------------------
               VR Id             VRRP Address Type             Configured Status
--------------------------------------------------------------------------------
                 150                          IPv4                       Enabled

                       -- Interfaces For Load Balance --
--------------------------------------------------------------------------------
                                                             Initiator  Redirect
 VR Id         VRRP IP              Switch WWN     Interface      Load   Enabled
--------------------------------------------------------------------------------
   150  10.150.150.254 20:00:00:0d:ec:27:4f:40       GigE1/2         0       Yes
M  150  10.150.150.254 20:00:00:0d:ec:54:63:80       GigE1/2         0       Yes

                    -- Initiator To Interface Assignment --
--------------------------------------------------------------------------------
Initiator  VR Id         VRRP IP              Switch WWN               Interface
--------------------------------------------------------------------------------

Configure iSLB Initiator

This is configured almost the same way as an iSCSI initiator, but now we have a few more options. Let's configure the basic initiator, place it in VSAN 101, and make sure the system assigns static WWNs (not dynamic). I'm going to configure the IQN name because I know it, but you could also configure the IP address instead (islb initiator ip-address 10.150.150.10).

islb initiator name iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7
 vsan 101
 static nwwn system-assign
 static pwwn system-assign 1

So far everything looks familiar except we’re using the word islb instead of iscsi.
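For completeness, the IP-address form mentioned above would look roughly like this (a sketch using the lab initiator's IP):

islb initiator ip-address 10.150.150.10
 vsan 101
 static nwwn system-assign
 static pwwn system-assign 1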

Configure Target under the Initiator

Notice we can now configure target pWWN or device-alias:

MDS1(config-islb-init)# target ?
  device-alias  Device-alias of the fc-target
  pwwn          PWWN of the fc-target

Let’s choose pWWN, and we have a few more options now:

MDS1(config-islb-init)# target pwwn 22:00:00:1d:38:1c:76:db ?
                   
  fc-lun               Fc-lun of the fc-target
  iqn-name             Name of the target
  no-zone              No automatic zoning
  revert-primary-port  Revert back to primary port when it comes back up
  sec-device-alias     Device-alias of the secondary fc-target
  sec-pwwn             PWWN of the secondary fc-target
  trespass             Enable trespass support
  vsan                 Assign VSAN membership for the initiator target

Let’s give this a common IQN name

MDS1(config-islb-init)# target pwwn 22:00:00:1d:38:1c:76:db iqn-name iqn.2014-08.lab.mds.jbod1-d8-b

Metric

We can set the metric of the initiator if necessary. Default is 1000.

MDS1(config-islb-init)# metric 1000

Auto-Zoning

By default this will automatically create a zone. If we don’t want this, we need to specify “no-zone” when creating the target.
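If we did not want the zone for the target configured earlier, the no-zone option would simply be appended to the target line, roughly like this (a sketch; we are not doing this here, since we want the zone auto-created):

MDS1(config-islb-init)# target pwwn 22:00:00:1d:38:1c:76:db no-zone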

MDS1(config-islb-init)# exit
MDS1(config)# islb commit
2014 Aug 31 08:12:10 MDS1 %IPS-3-ISLB_ZONE_NO_ACTIVE_ZONESET: iSLB zoneset activation returned 0x40200018 for VSAN 0001
2014 Aug 31 08:12:10 MDS1 %IPS-3-ISLB_ZONE_ACTIVATION_RETRY: iSLB zoneset activation returned 0x40200015 for VSAN 0101

MDS1(config)# show zoneset active vsan 101
zoneset name VSAN101 vsan 101
  zone name FAKE vsan 101
    pwwn 33:33:33:33:33:33:33:33
    pwwn 33:33:33:33:33:33:33:34
  
  zone name ips_zone_407edf359961771eff35d24e1254d26e vsan 101
    symbolic-nodename iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7
  * fcid 0x0200ef [pwwn 22:00:00:1d:38:1c:76:db]

MDS2# show zoneset active vsan 101
zoneset name VSAN101 vsan 101
  zone name FAKE vsan 101
    pwwn 33:33:33:33:33:33:33:33
    pwwn 33:33:33:33:33:33:33:34
  
  zone name ips_zone_407edf359961771eff35d24e1254d26e vsan 101
    symbolic-nodename iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7
  * fcid 0x0200ef [pwwn 22:00:00:1d:38:1c:76:db]

Cool, we just automatically created a zone!

Note that if we decided not to create the zone automatically, we would need to configure one manually. Also, if we want to get more granular with access control, we can configure islb virtual-targets, just like iscsi virtual-targets. Keep that in mind.
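
A rough sketch of an islb virtual-target, assuming the syntax mirrors its iscsi virtual-target counterpart (the virtual-target IQN here is made up):

islb virtual-target name iqn.2014-08.lab.mds.example-vt
 pwwn 22:00:00:1d:38:1c:76:db
 initiator iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7 permit
islb commit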

Point iSCSI Initiator to VRRP IP address

On the initiator (vSphere in my lab), configure a dynamic discovery to 10.150.150.254
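If you'd rather script it than click through the GUI, ESXi's CLI can add the same send-targets entry, roughly like this (a sketch; vmhba33 is a placeholder for your software iSCSI adapter):

esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.150.150.254
esxcli storage core adapter rescan --adapter=vmhba33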

[Screenshot: target]

Click OK, and then rescan.

Observe

Check this out: MDS1 (the VRRP master) received the iSCSI session request first, then redirected (load-balanced) it to the sWWN of MDS2 (the VRRP backup).

MDS1:

MDS1(config)# 2014 Aug 31 08:16:18 MDS1 %IPS-5-ISCSI_SESSION_CREATE_REDIRECT: GigabitEthernet1/2: iSCSI Session initiator iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7 target  redirected to 20:00:00:0d:ec:27:4f:40/GigabitEthernet1/2

MDS2:
MDS2# 2014 Aug 31 08:16:44 MDS2 %IPS-SLOT1-5-ISCSI_CONN_UP: GigabitEthernet1/2: iSCSI session up from initiator iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7 alias  ip 10.150.150.10 to target Discovery

MDS2# show wwn switch 
Switch WWN is 20:00:00:0d:ec:27:4f:40

We now have an iSLB initiator on MDS2, with FCID assigned.

MDS2# show islb initiator 
iSCSI Node name is iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7 
    Initiator ip addr (s): 10.150.150.10 
    iSCSI alias name:  
    Configured node (iSLB)
    Node WWN is 21:0b:00:0d:ec:54:63:82 (configured) 
    Member of vsans: 101
    Number of Initiator Targets: 1

    Initiator Target: iqn.2014-08.lab.mds.jbod1-d8-b 
      Port WWN 22:00:00:1d:38:1c:76:db 
      Primary PWWN VSAN 101
      Zoning support is enabled
      Trespass support is disabled
      Revert to primary support is disabled

    Number of Virtual n_ports: 1
    Virtual Port WWN is 21:0c:00:0d:ec:54:63:82 (configured)
      Interface iSCSI 1/2, Portal group tag: 0x3001 
      VSAN ID 101, FCID 0x020102

We can now see in the iSLB VRRP table that MDS2 has a current load of 1000 based on the Initiator To Interface Assignment.

MDS2# show islb vrrp summary 

                         -- Groups For Load Balance --
--------------------------------------------------------------------------------
               VR Id             VRRP Address Type             Configured Status
--------------------------------------------------------------------------------
                 150                          IPv4                       Enabled

                       -- Interfaces For Load Balance --
--------------------------------------------------------------------------------
                                                             Initiator  Redirect
 VR Id         VRRP IP              Switch WWN     Interface      Load   Enabled
--------------------------------------------------------------------------------
   150  10.150.150.254 20:00:00:0d:ec:27:4f:40       GigE1/2      1000       Yes
M  150  10.150.150.254 20:00:00:0d:ec:54:63:80       GigE1/2         0       Yes

                    -- Initiator To Interface Assignment --
--------------------------------------------------------------------------------
Initiator  VR Id         VRRP IP              Switch WWN               Interface
--------------------------------------------------------------------------------
iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7
             150  10.150.150.254 20:00:00:0d:ec:27:4f:40      GigabitEthernet1/2

The iSLB session has been created between Initiator and Target

MDS2# show islb session
Initiator iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7
  Initiator ip addr (s): 10.150.150.10 
  Session #1
    Target iqn.2014-08.lab.mds.jbod1-d8-b
    VSAN 101, ISID 00023d000001, Status active, no reservation

MDS2# show islb session detail
Initiator iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7 
  Initiator ip addr (s): 10.150.150.10 
  Session #1 (index 3)
    Target iqn.2014-08.lab.mds.jbod1-d8-b
    VSAN 101, ISID 00023d000001, TSIH 12289, Status active, no reservation
    Type Normal, ExpCmdSN 53, MaxCmdSN 180, Barrier 0
    MaxBurstSize 262144, MaxConn 1, DataPDUInOrder Yes
    DataSeqInOrder Yes, InitialR2T No, ImmediateData Yes
    Registered LUN 0, Mapped LUN 0
    Stats:
      PDU: Command: 53, Response: 53
      Bytes: TX: 7925, RX: 0
    Number of connection: 1
    Connection #1 (index 1)
      Local IP address: 10.150.150.6, Peer IP address: 10.150.150.10
      CID 0, State: Full-Feature
      StatSN 56, ExpStatSN 0
      MaxRecvDSLength 131072, our_MaxRecvDSLength 262144
      CSG 3, NSG 3, min_pdu_size 48 (w/ data 48)
      AuthMethod none, HeaderDigest None (len 0), DataDigest None (len 0)
      Version Min: 0, Max: 0
      FC target: Up, Reorder PDU: No, Marker send: No (int 0)
      Received MaxRecvDSLen key: Yes
      Stats:
        Bytes: TX: 7925, RX: 0

As expected, we have a FLOGI entry from our iSCSI initiator with its system-assigned pWWN:

MDS2# show flogi database interface iscsi 1/2
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID           PORT NAME               NODE NAME       
--------------------------------------------------------------------------------
iscsi1/2         101   0x020102  21:0c:00:0d:ec:54:63:82 21:0b:00:0d:ec:54:63:82

Total number of flogi = 1.


MDS1# show fcns database fcid 0x020102 vsan 101

VSAN 101:
--------------------------------------------------------------------------
FCID        TYPE  PWWN                    (VENDOR)        FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0x020102    N     21:0c:00:0d:ec:54:63:82 (Cisco)         scsi-fcp:init isc..w 

Total number of entries = 1

The disk is also visible in vSphere:

[Screenshot: discovered]

Unfortunately, I only have one server, so I can't show adding another. But since the first iSLB session was successfully load-balanced to MDS2, I can assume the next session would stay on MDS1.

Another highly informative command to be aware of:

MDS2# show islb vrrp
-- Groups For Load Balance --

    VRRP group id 150
        Address type: IPv4
        Configured status: Enabled

-- Interfaces For Load Balance --

    Interface GigabitEthernet1/2
        Switch wwn: 20:00:00:0d:ec:27:4f:40
        VRRP group id: 150, VRRP IP address: 10.150.150.254
            Interface VRRP state: backup
            Interface load: 1000
            Interface redirection: enabled
            Group redirection: enabled
        Number of physical IP address: 1
            (1) 10.150.150.6
        Port vsan: 1
        Forwarding mode: store-and-forward
        Proxy initiator mode: disabled
        iSCSI authentication: CHAP or None

    Interface GigabitEthernet1/2
        Switch wwn: 20:00:00:0d:ec:54:63:80
        VRRP group id: 150, VRRP IP address: 10.150.150.254
            Interface VRRP state: master
            Interface load: 0
            Interface redirection: enabled
            Group redirection: enabled
        Number of physical IP address: 1
            (1) 10.150.150.5
        Port vsan: 1
        Forwarding mode: store-and-forward
        Proxy initiator mode: disabled
        iSCSI authentication: CHAP or None

-- Initiator To Interface Assignment --

    Initiator iqn.1998-01.com.vmware:53de1d20-106c-8c14-070d-0025b500010d-612838b7
        VRRP group id: 150, VRRP IP address: 10.150.150.254
        Assigned to switch wwn: 20:00:00:0d:ec:27:4f:40
            ifindex: GigabitEthernet1/2
        Waiting for the redirected session request: False
        Initiator weighted load: 1000

Quick Template

MDS1		                               MDS2
                                           
feature iscsi                              feature iscsi
iscsi enable module 1                      iscsi enable module 1
int iscsi1/2                               int iscsi1/2
 no shut                                    no shut
                                           
int g1/2                                   int g1/2
 ip add 10.150.150.5 255.255.255.0          ip add 10.150.150.6 255.255.255.0
 no shut                                    no shut
 vrrp 1                                     vrrp 1
  address 10.150.150.254                     address 10.150.150.254
  no shut                                    no shut
                                           
islb distribute                            islb distribute
islb commit                                islb commit

# Changes are now distributed via CFS

islb vrrp 1 load-balance
islb commit

islb initiator name iqn.this-is-the-initiator
 vsan 101
 static nwwn system-assign
 static pwwn system-assign 1
 target [pwwn|device-alias] XXXX vsan 101
 (Optional) metric [1-65535]
 (Optional) username iscsiuser
 (Optional) target pwwn XXXX no-zone
 (Optional) target pwwn XXXX fc-lun X iscsi-lun X
 exit

islb commit

Helpful Show Commands

show zoneset active vsan 101
show islb status
show cfs peers name islb
show islb initiator
show islb session [detail]
show islb pending
show vrrp
show islb vrrp [summary]

In part 3, I'll be configuring Boot from iSCSI in UCS.

David Varnum
