Host Onboarding
Introduction
In this lesson we will configure host onboarding for both wired and wireless endpoints in an SD-Access (SDA) fabric, covering the LISP/VXLAN underlay prerequisites. You will configure the Loopback0 /32 host-route requirement, large IP pools (SVIs) for user networks, underlay multicast for overlay needs, and switchport authentication modes (Closed/Open/None) for wired access. This matters in production because correct onboarding ensures endpoints land in the right IP pools, multicast/RP reachability exists for overlay services, and authentication behaves as expected (802.1X/MAB), all of which affect user connectivity, guest VLANs, and wireless CAPWAP forwarding across the fabric.
Real-world scenario: A campus is bringing both wired desktops and wireless clients onto the fabric. The security team expects strict 802.1X (Closed) on corporate ports, MAB fallback for non-802.1X devices, and multicast-enabled AP pools so that overlay multicast (e.g., an IPTV VN) flows without head-end replication. We implement the underlay loopback/RP behavior and the SVI settings for user and AP pools to meet those requirements.
Quick Recap
- This lesson uses the same SDA topology introduced in Lesson 1. No new physical devices are required for this lesson. We will reference Border Nodes (BN-Red / BN-Green), an access switch (Edge1) and an AP/SVI representing wireless AP pool connectivity.
- New IPs used in this lesson come from the fabric VN examples in the reference: CORP VN = 10.2.0.0/16, IOT VN = 10.1.0.0/16, and anycast RP addresses 10.0.0.1 and 10.0.0.11.
Key Concepts (theory + practical implications)
- Loopback0 /32 host route for VTEP — LISP/VXLAN-based fabrics require each fabric node to advertise its loopback (the VTEP address) as a /32 so that other devices hold a host route for the remote VTEP. Practically, configure Loopback0 with a /32 mask and make sure your underlay IGP advertises it.
- Protocol behavior: LISP needs that /32 present in the forwarding table so data-plane encapsulation to remote VTEPs works.
- Large IP pools for users (SVIs) — SDA suppresses broadcast in overlays, so it's common to use large IP pools for user pools (10k hosts acceptable). In production, design the pool size with Catalyst Center interface limits in mind.
- Practical: Create an SVI per VN and assign a large mask (e.g., /16) if desired — SVI must be routable and present on the fabric nodes.
- Underlay multicast is required — Overlay multicast (for L2 flooding and overlay multicast services) requires the underlay to be multicast-enabled. Configure PIM sparse-mode on P2P links and SVI interfaces used by AP pools.
- Protocol behavior: PIM sparse-mode uses RP addresses; configure anycast RP on BN/CP nodes (use a separate Loopback, not Loopback0).
- Switchport authentication modes (Closed/Open/None) — Closed = 802.1X + MAB, Open = 802.1X + MAB but allow access if authentication fails, None = no authentication. Select based on policy and migration strategy.
- Production note: Starting with None simplifies migration; tighten to Closed/Open later.
Topology
ASCII diagram (only networks and IPs referenced in the lesson are shown):
Edge1 (Access Switch)                        BN-Red (Border Node)
  Gi1/0/48: 10.200.1.2/30 ----------------- Gi1/0/48: 10.200.1.1/30
  Gi1/0/1 --- AP (wireless, L2)               Loopback0: 10.0.0.2/32 (VTEP)
  SVI Vlan100: 10.2.0.1/16 (CORP VN)          Loopback1: 10.0.0.1/32 (RP source)
  SVI Vlan110: 10.1.0.1/16 (IOT VN)
Note: The AP is locally attached to Edge1 as a Layer 2 device. BN-Red provides fabric exit and anycast RP services (example RP: 10.0.0.1). Use the RP addresses 10.0.0.1 (primary) and 10.0.0.11 (secondary) as referenced.
Device Table
| Device | Role |
|---|---|
| BN-Red | Border Node / RP source |
| Edge1 | Access switch (SVIs, access ports) |
| AP | Wireless AP (connects to Edge1 L2 port) |
| Fusion FW | Upstream firewall (not configured here) |
IP Addressing
| Interface / Object | IP Address |
|---|---|
| CORP VN SVI (Vlan100) | 10.2.0.1/16 |
| IOT VN SVI (Vlan110) | 10.1.0.1/16 |
| BN-Red Loopback0 (VTEP) | 10.0.0.2/32 |
| BN-Red Loopback1 (RP source) | 10.0.0.1/32 |
| Edge1 uplink to BN-Red (example P2P) | 10.200.1.2/30 |
Steps
Step 1: Configure Loopback0 (/32) on the Border Node (VTEP host route)
What we are doing: Configure Loopback0 with a /32 on the Border Node so the underlay advertises a host route for the fabric VTEP. LISP (and many overlay mechanisms) requires this /32 to be present in the forwarding table.
configure terminal
interface Loopback0
ip address 10.0.0.2 255.255.255.255
exit
end
What just happened: The Border Node now owns a /32 loopback (VTEP) address. When the underlay IGP distributes reachability for 10.0.0.2/32, other devices can encapsulate traffic to this VTEP. The /32 is important because LISP needs host routes (not summarized prefixes) to reach remote VTEPs.
Real-world note: In production, ensure loopback /32s are included in the underlay IGP so they are placed in the RIB/forwarding table — otherwise LISP/VXLAN dataplane will fail.
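To make that explicit, here is a minimal sketch of advertising the loopback into the underlay IGP. This assumes an OSPF underlay with process 1 and area 0 (the lesson does not specify the IGP; LAN-automated SDA underlays typically use IS-IS, so adapt to your deployment):
router ospf 1
passive-interface Loopback0
network 10.0.0.2 0.0.0.0 area 0
The passive-interface line keeps the loopback from attempting adjacencies while the network statement still advertises the /32.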
Verify:
show ip interface brief
Interface IP-Address OK? Method Status Protocol
Loopback0 10.0.0.2 YES manual up up
GigabitEthernet1/0/48 10.200.1.1 YES manual up up
Vlan100 unassigned YES unset administratively down down
Vlan110 unassigned YES unset administratively down down
Expected outcome: Loopback0 appears with IP 10.0.0.2/32 and is in the up/up state if configured correctly and not shutdown. If the IGP is reachable, you will also see the /32 in the routing table on peers.
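A quick way to confirm propagation is to check the routing table on a peer such as Edge1. The output below is illustrative and assumes an OSPF underlay; your IGP, distance, and metric will differ:
show ip route 10.0.0.2
Routing entry for 10.0.0.2/32
  Known via "ospf 1", distance 110, metric 2, type intra area
  Last update from 10.200.1.1 on GigabitEthernet1/0/48
If the route is missing, revisit the IGP advertisement of Loopback0.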
Step 2: Configure SVIs for CORP and IOT VNs and enable multicast on the AP pool SVI(s)
What we are doing: On Edge1, create SVI interfaces to represent the fabric IP pools: CORP (10.2.0.0/16) and IOT (10.1.0.0/16). Enable PIM sparse-mode on these SVIs so AP pools and overlay multicast sources/receivers can use the underlay multicast (the AP pool SVI must be multicast-enabled).
configure terminal
vlan 100
name CORP
exit
interface Vlan100
ip address 10.2.0.1 255.255.0.0
ip pim sparse-mode
no shutdown
exit
vlan 110
name IOT
exit
interface Vlan110
ip address 10.1.0.1 255.255.0.0
ip pim sparse-mode
no shutdown
exit
end
What just happened: Two SVIs were created and assigned addresses in the referenced VN subnets. Enabling ip pim sparse-mode on the SVIs configures them to participate in PIM; this is required for overlay multicast and for AP pools that depend on multicast in the underlay.
Real-world note: Overlay multicast is enabled per VRF/VN. Make sure the AP pool SVI used by over-the-top (OTT) SSIDs is also multicast-enabled (the template uses ip pim sparse-mode in the SVI).
Verify:
show ip interface brief
Interface IP-Address OK? Method Status Protocol
Vlan100 10.2.0.1 YES manual up up
Vlan110 10.1.0.1 YES manual up up
Expected outcome: The SVIs are in up/up state and have the correct IP addresses. PIM will use these SVIs for RP/Prune operations; you can later verify with PIM show commands.
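To confirm PIM is actually enabled on the new SVIs, a quick check (output illustrative; your DR addresses will match the SVI addresses):
show ip pim interface
Address          Interface   Ver/Mode   Nbr Count  DR Prior  DR
10.2.0.1         Vlan100     v2/S       0          1         10.2.0.1
10.1.0.1         Vlan110     v2/S       0          1         10.1.0.1
A neighbor count of 0 on an access SVI is normal; PIM neighbors form on the routed links between fabric nodes.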
Step 3: Enable PIM and increase MTU on the point-to-point uplink to BN-Red
What we are doing: On Edge1, configure the physical uplink to the Border Node as a routed port with an increased MTU to accommodate VXLAN headers, and enable PIM on it so underlay multicast works across the fabric. On a Catalyst access switch the port defaults to switchport mode, so no switchport is required before an IP address can be assigned.
configure terminal
interface GigabitEthernet1/0/48
description Uplink to BN-Red
no switchport
ip address 10.200.1.2 255.255.255.252
mtu 9216
ip pim sparse-mode
no shutdown
exit
end
What just happened: The uplink is now a routed interface with jumbo MTU to prevent VXLAN fragmentation and PIM enabled so multicast control traffic and multicast data can traverse the underlay. PIM on point-to-point links helps establish RP reachability and multicast distribution trees.
Real-world note: Always set MTU on the underlay to accommodate VXLAN header overhead (e.g., 50–60 bytes extra). Failure to increase MTU causes fragmentation and performance issues for tunneled traffic.
Verify:
show interface GigabitEthernet1/0/48
GigabitEthernet1/0/48 is up, line protocol is up
Hardware is Gigabit Ethernet, address is 0011.2233.4456 (bia 0011.2233.4456)
Description: Uplink to BN-Red
Internet address is 10.200.1.2/30
MTU 9216 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive not set
Last input never, output never, output hang never
Expected outcome: Interface shows MTU 9216 and is up/up. PIM neighbor adjacencies will form once the remote BN is configured.
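Once BN-Red's side of the link is configured, you can confirm the adjacency from Edge1. The output below is illustrative; uptime, hold timers, and DR priority will vary:
show ip pim neighbor
PIM Neighbor Table
Neighbor          Interface                Uptime/Expires     Ver   DR Prio/Mode
10.200.1.1        GigabitEthernet1/0/48    00:05:12/00:01:33  v2    1 / DR S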
Step 4: Configure an anycast RP source (separate Loopback) on BN-Red
What we are doing: Create a dedicated Loopback for RP source (different from Loopback0 used for VTEP) and configure that address as the RP for PIM sparse-mode. The reference recommends a separate Loopback for RP to keep roles distinct.
configure terminal
interface Loopback1
ip address 10.0.0.1 255.255.255.255
exit
ip pim rp-address 10.0.0.1
end
What just happened: Loopback1 now holds the RP address, and ip pim rp-address 10.0.0.1 statically points PIM at that RP for sparse-mode groups. The device that owns 10.0.0.1 on a loopback (here BN-Red) serves as the RP; in a fabric you normally configure the same anycast RP address on both Border/CP nodes and run MSDP between them.
Real-world note: In production, place anycast RP addresses on CP/BN nodes and use MSDP between RPs (or static RPs) as required by your overlay design. Use a separate Loopback for RPs (not Loopback0) as recommended.
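As a concrete sketch of that recommendation, the following would run on BN-Red to peer with a second border (BN-Green) sharing the same anycast RP address 10.0.0.1 on Loopback1. The BN-Green unique Loopback0 address 10.0.0.3 is an assumption for illustration only:
ip msdp peer 10.0.0.3 connect-source Loopback0
ip msdp originator-id Loopback0
The originator-id forces Source-Active messages to carry each border's unique loopback rather than the shared anycast address, which anycast RP requires to avoid dropped SA messages.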
Verify:
show ip pim rp mapping
PIM Group-to-RP Mappings
Group(s): 224.0.0.0/4, Static
    RP: 10.0.0.1 (?)
Expected outcome: The configured RP shows as 10.0.0.1. On other devices you should see the RP reachable and PIM joins to that RP when multicast receivers subscribe.
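Before testing multicast receivers, a simple RP reachability check from Edge1 (output illustrative):
ping 10.0.0.1 source GigabitEthernet1/0/48
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5)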
Step 5: Configure switchport authentication modes for wired onboarding (Closed/Open/None examples)
What we are doing: Configure an access port for CORP users with the Closed policy (802.1X with MAB fallback). This enforces 802.1X authentication and allows MAC-based fallback for non-802.1X devices. This is how you migrate existing ports to the fabric while maintaining device posture.
configure terminal
interface GigabitEthernet1/0/1
description Office Desk - CORP
switchport mode access
switchport access vlan 100
authentication order mab dot1x
authentication priority dot1x mab
authentication port-control auto
mab
dot1x pae authenticator
no shutdown
exit
end
What just happened: The port was placed in VLAN 100 (CORP VN) with 802.1X and MAB enabled. authentication order mab dot1x runs MAB first, authentication priority dot1x mab lets 802.1X take over when a supplicant responds, authentication port-control auto actually enforces authentication on the port (without it the port forwards traffic unauthenticated), and dot1x pae authenticator makes the port act as an 802.1X authenticator. AAA and dot1x system-auth-control must also be enabled globally for any of this to take effect.
Real-world note: For initial migrations, you may start ports in None mode (no authentication) and then move to Open/Closed as devices are onboarded and 802.1X profiles are tested.
Verify:
show authentication sessions interface GigabitEthernet1/0/1
Interface MAC Address Method Domain VLAN Status
Gi1/0/1 0011.2233.4455 MAB default 100 Authenticated
Expected outcome: If a device attempted authentication via MAB, the session would show the MAC and authentication method. For 802.1X-capable devices, the method should show dot1x.
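For comparison, a sketch of the same port in Open mode; the only functional difference from Closed is authentication open, which permits traffic even when authentication fails (useful during migration). None mode simply omits the authentication, mab, and dot1x commands entirely:
interface GigabitEthernet1/0/1
authentication open
authentication port-control auto
authentication order mab dot1x
authentication priority dot1x mab
mab
dot1x pae authenticator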
Verification Checklist
- Check 1: Loopback0 /32 present on BN-Red and propagated — verify with show ip interface brief (Loopback0 10.0.0.2/32 up) and check peers for the /32 in their routing tables.
- Check 2: SVIs for CORP and IOT configured and up — verify with show ip interface brief (Vlan100 10.2.0.1 up) and show vlan to confirm the VLANs exist.
- Check 3: PIM and RP configured — verify with show ip pim rp mapping (RP 10.0.0.1) and show ip pim neighbor on BN and edge.
- Check 4: Access port authentication behavior — verify with show authentication sessions interface GigabitEthernet1/0/1 (authentication method and status, MAB or dot1x, shown).
Common Mistakes
| Symptom | Cause | Fix |
|---|---|---|
| Loopback /32 not visible on other nodes | Loopback not added to underlay IGP or interface is shutdown | Ensure Loopback0 is configured ip address x.x.x.x 255.255.255.255 and the underlay IGP redistributes it (advertise it) |
| AP clients get no multicast traffic | SVI or uplink not PIM-enabled or RP not configured | Enable ip pim sparse-mode on SVI and uplinks; configure RP (10.0.0.1) and verify RP reachability |
| VXLAN traffic fragmented or fails | Underlay MTU too small | Increase interface MTU (e.g., mtu 9216) on underlay links to accommodate VXLAN header |
| Port never authenticates | 802.1X or MAB not enabled on port or AAA misconfigured | Validate port config (authentication port-control auto, dot1x pae authenticator, mab), enable aaa new-model and dot1x system-auth-control globally, and ensure RADIUS is reachable with correct policies |
Key Takeaways
- Always configure a /32 loopback for VTEP on fabric nodes — LISP/VXLAN requires host routes to remote VTEPs.
- Underlay multicast and PIM are required for overlay multicast and certain L2 flooding behaviors; configure ip pim sparse-mode on SVIs and point-to-point uplinks and use anycast RPs (separate loopbacks) on BN/CP nodes.
- Use large IP pools for user VNs where appropriate, but be aware of management plane and Catalyst Center interface limits; design pools accordingly.
- For wired onboarding, choose the authentication mode (Closed/Open/None) that matches your migration policy; Closed (802.1X + MAB) gives the strongest posture, but test before rolling out.
Tip: Document the RP addresses (10.0.0.1 and 10.0.0.11) and ensure MSDP/static RP consistency across Border/CP nodes and upstream Fusion devices to avoid multicast black-holing.
Credentials and Naming Conventions used in examples: domain lab.nhprep.com, organizational name NHPREP, example password pattern Lab@123 (use secure secrets in production).