Fabric Roles — Border, Control, Edge
Objective
In this lesson you will configure and verify the three core SD‑Access fabric roles: Border Node (BN), Control Plane Node (CP), and Edge Node (EN). You will create Loopback0 addressing for each role, raise MTU to accommodate VXLAN, enable multicast/PIM for underlay multicast forwarding, and bring up an IGP so every fabric node can reach every other node’s Loopback0. This matters in production because the fabric overlay (LISP/VXLAN) depends on stable IP reachability and multicast for L2 flooding between Border Nodes and for CP/BN services. In a real campus deployment this is used to stitch remote access switches (Edge) to central CP/BN infrastructure and to enable VXLAN transit between fabric sites.
Topology
ASCII topology with exact IPs on every interface:
   EN1 (Edge)                     CP1 (Control Plane)                 BN1 (Border)
+--------------------+          +--------------------+          +--------------------+
|        EN1         |          |        CP1         |          |        BN1         |
|  Lo0: 10.0.0.3/32  |          |  Lo0: 10.0.0.2/32  |          |  Lo0: 10.0.0.1/32  |
|                    |          |                    |          |                    |
|               Gi0/1+----------+Gi0/2          Gi0/1+----------+Gi0/1               |
|  192.168.100.6/30  |          |  .5/30      .2/30  |          |  192.168.100.1/30  |
+--------------------+          +--------------------+          +--------------------+
          EN1-CP1 link: 192.168.100.4/30       CP1-BN1 link: 192.168.100.0/30
Notes:
- All point-to-point transit links are /30 networks.
- VXLAN requires an underlay MTU of at least 1550 on these devices (1500-byte payload plus ~50 bytes of encapsulation overhead); Loopback0 addresses are /32, as recommended.
Device Table
| Device | Role | Loopback0 IP | Transit interface | Transit IP |
|---|---|---|---|---|
| EN1 | Edge | 10.0.0.3/32 | GigabitEthernet0/1 (to CP1) | 192.168.100.6/30 |
| CP1 | Control Plane | 10.0.0.2/32 | GigabitEthernet0/1 (to BN1) | 192.168.100.2/30 |
| CP1 | Control Plane | 10.0.0.2/32 | GigabitEthernet0/2 (to EN1) | 192.168.100.5/30 |
| BN1 | Border | 10.0.0.1/32 | GigabitEthernet0/1 (to CP1) | 192.168.100.1/30 |
Quick Recap
- Lesson 1 built the physical underlay and basic connectivity. This lesson assumes the physical links between EN1–CP1 and CP1–BN1 are present.
- New to this lesson: we introduce Loopback0 addresses for fabric roles, MTU adjustments to support VXLAN, multicast/PIM on transit links and Loopbacks, and IGP (OSPF) so all Loopbacks are reachable.
Key Concepts (theory + practical behavior)
- Role separation — Think of the fabric roles like actors in a play: the Edge Node connects endpoints, the Control Plane Node runs LISP and fabric control services, and the Border Node is the fabric egress/ingress point to external networks. In production, Border Nodes provide connectivity to the Internet or other routing domains.
- Loopback0 /32 addressing — Each fabric node advertises a stable Loopback0 /32 used as the overlay endpoint (for VXLAN tunnels and LISP control plane). Practically, this ensures stable reachability if a physical interface goes down.
- MTU > 1550 — VXLAN adds ~50 bytes of header overhead. If your underlay MTU is too small, VXLAN packets are fragmented or dropped. In production, set MTU to 9000 or at least >1550 on all devices in a fabric site.
- Multicast + PIM on underlay — L2 flooding and some overlay behaviors require multicast between fabric nodes (PIM enabled on transit interfaces and Loopback0). When you enable PIM on a transit link the router sends periodic PIM Hello messages to establish neighbor relationships.
- IGP reachability (OSPF) — The underlay IGP carries reachability to Loopback0 addresses. When you enable OSPF on an interface the router sends Hello packets (default 10s on broadcast) and forms neighbors; the Loopback0 /32 is injected so other fabric nodes can reach it.
Tip: Think of Loopback0 as the "phone number" for a node — no matter which physical interface is used to call, the number stays the same.
Step-by-step configuration
We follow 5 focused steps. Each step provides commands, explanation, and verification output.
Step 1: Configure Loopback0 on each node
What we are doing: Assign a /32 Loopback0 on BN1, CP1 and EN1. Loopback0 serves as the fabric stable identifier for LISP/VXLAN endpoints and is what other nodes will route to. This is fundamental so the overlay knows where to send encapsulated traffic.
! On BN1
configure terminal
interface Loopback0
ip address 10.0.0.1 255.255.255.255
exit
end
! On CP1
configure terminal
interface Loopback0
ip address 10.0.0.2 255.255.255.255
exit
end
! On EN1
configure terminal
interface Loopback0
ip address 10.0.0.3 255.255.255.255
exit
end
What just happened: Each command created a Loopback0 and assigned a /32 address. Loopbacks are logical and remain up as long as the device is up, providing a stable reachability anchor for overlay endpoints and control plane peering.
Real-world note: Loopback /32s are preferred as stable endpoints: a physical interface flap does not remove Loopback reachability as long as the IGP can converge over another path.
Verify:
! On BN1
show ip interface brief
Interface IP-Address OK? Method Status Protocol
Loopback0 10.0.0.1 YES manual up up
GigabitEthernet0/1 192.168.100.1 YES manual up up
! On CP1
show ip interface brief
Interface IP-Address OK? Method Status Protocol
Loopback0 10.0.0.2 YES manual up up
GigabitEthernet0/1 192.168.100.5 YES manual up up
! On EN1
show ip interface brief
Interface IP-Address OK? Method Status Protocol
Loopback0 10.0.0.3 YES manual up up
GigabitEthernet0/1 192.168.100.6 YES manual up up
Step 2: Configure transit IPs and set interfaces to routed
What we are doing: Configure the point-to-point transit IP addresses on each GigabitEthernet interface between nodes. Routed links ensure IGP can form neighbors and carry Loopback reachability.
! On BN1
configure terminal
interface GigabitEthernet0/1
no switchport
ip address 192.168.100.1 255.255.255.252
exit
end
! On CP1 (link to BN1)
configure terminal
interface GigabitEthernet0/1
no switchport
ip address 192.168.100.2 255.255.255.252
exit
end
! On CP1 (second link to EN1)
configure terminal
interface GigabitEthernet0/2
no switchport
ip address 192.168.100.5 255.255.255.252
exit
end
! On EN1
configure terminal
interface GigabitEthernet0/1
no switchport
ip address 192.168.100.6 255.255.255.252
exit
end
What just happened: Each physical transport link was converted to a routed interface (no switchport) and given a /30 address. This enables point-to-point IGP adjacencies and explicit control of IP forwarding between fabric nodes.
Real-world note: Using routed links prevents accidental L2 bridging loops and simplifies multicast/PIM configuration on the underlay.
Verify:
! On CP1
show ip interface brief
Interface IP-Address OK? Method Status Protocol
Loopback0 10.0.0.2 YES manual up up
GigabitEthernet0/1 192.168.100.2 YES manual up up
GigabitEthernet0/2 192.168.100.5 YES manual up up
! On BN1
show ip route
Codes: C - Connected, S - Static, R - RIP, O - OSPF, I - IS-IS, B - BGP
C 192.168.100.0/30 is directly connected, GigabitEthernet0/1
C 10.0.0.1/32 is directly connected, Loopback0
! On EN1
show ip route
C 192.168.100.4/30 is directly connected, GigabitEthernet0/1
C 10.0.0.3/32 is directly connected, Loopback0
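Before moving on, it is worth confirming basic IP connectivity across each /30. A quick check from CP1 (addresses taken from the device table above):

```
! On CP1 -- confirm each transit /30 before enabling the IGP
ping 192.168.100.1
! Expect: Success rate is 100 percent (5/5) if the BN1 link is healthy
ping 192.168.100.6
! Expect: Success rate is 100 percent (5/5) if the EN1 link is healthy
```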
Step 3: Raise MTU on the underlay devices
What we are doing: Increase the MTU to accommodate VXLAN overhead (VXLAN adds ~50 bytes of encapsulation). Without this, VXLAN traffic can be dropped or fragmented. We raise the MTU globally with system mtu; on platforms without a global knob, set it per interface with the mtu interface command.
! On BN1
configure terminal
system mtu 9000
end
! On CP1
configure terminal
system mtu 9000
end
! On EN1
configure terminal
system mtu 9000
end
What just happened: The global system MTU was increased to 9000 bytes on each device, so VXLAN-encapsulated packets can traverse the underlay without fragmentation. Some platforms require a reload for the new system MTU to take full effect; on many IOS‑XE switches the change applies immediately to routed interfaces.
Real-world note: Consistent MTU across the whole fabric site is critical — mixing MTUs causes the fabric to fall back to the lowest common denominator and can result in packet drops.
Verify:
! On BN1
show system mtu
System MTU: 9000 bytes
! On CP1
show system mtu
System MTU: 9000 bytes
! On EN1
show system mtu
System MTU: 9000 bytes
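The configured system MTU alone does not prove the data path carries large frames end to end. One way to test it, sketched here with standard IOS ping options, is a large ping with the DF bit set so oversized packets fail loudly instead of silently fragmenting (the 1550-byte minimum comes from 1500 payload + 14 outer Ethernet + 20 IP + 8 UDP + 8 VXLAN bytes of overhead):

```
! On CP1 -- verify large frames cross the BN1 link without fragmentation
ping 192.168.100.1 size 1600 df-bit
! "!!!!!" means 1600-byte packets pass unfragmented;
! "M" characters in the output mean the path MTU is still too small somewhere
```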
Step 4: Enable PIM on transit interfaces and Loopback0
What we are doing: Enable multicast routing and configure PIM Sparse Mode on the transit interfaces and on each Loopback0. PIM is required for overlay multicast (L2 flooding) and for certain control-plane multicast behaviors.
! On BN1
configure terminal
ip multicast-routing
interface GigabitEthernet0/1
ip pim sparse-mode
exit
interface Loopback0
ip pim sparse-mode
exit
end
! On CP1
configure terminal
ip multicast-routing
interface GigabitEthernet0/1
ip pim sparse-mode
exit
interface GigabitEthernet0/2
ip pim sparse-mode
exit
interface Loopback0
ip pim sparse-mode
exit
end
! On EN1
configure terminal
ip multicast-routing
interface GigabitEthernet0/1
ip pim sparse-mode
exit
interface Loopback0
ip pim sparse-mode
exit
end
What just happened: Multicast routing was enabled globally and PIM was activated on all transit interfaces and Loopback0. The devices will now send PIM Hello messages to discover PIM neighbors and can build multicast distribution trees needed by the overlay.
Real-world note: In a production fabric, you usually configure Anycast RP (ASM) on the CP/BN nodes — they act as rendezvous points for the underlay multicast that the overlay relies on.
Verify:
! On CP1
show ip pim interface
Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
192.168.100.2    GigabitEthernet0/1       v2/S   1      30     1      192.168.100.2
192.168.100.5    GigabitEthernet0/2       v2/S   1      30     1      192.168.100.6
10.0.0.2         Loopback0                v2/S   0      30     1      10.0.0.2
! On BN1
show ip pim neighbor
Neighbor Address  Interface                Uptime/Expires    Ver   DR Prio/Mode
192.168.100.2     GigabitEthernet0/1       00:05:12/00:01:33 v2    1 / DR S
! On EN1
show ip pim neighbor
Neighbor Address  Interface                Uptime/Expires    Ver   DR Prio/Mode
192.168.100.5     GigabitEthernet0/1       00:06:02/00:01:29 v2    1 / S
Note: Loopback0 shows no PIM neighbors — a loopback has no peer on the wire, so only the transit links form neighbor relationships.
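Sparse mode also needs a rendezvous point (RP) before any (*,G) trees can form. The Anycast RP design noted above is the production approach; as a minimal lab sketch (the choice of CP1's Loopback0 as RP is an assumption for this topology, not mandated by the lesson), a static RP works:

```
! On BN1, CP1 and EN1 -- static RP for the lab (production fabrics use Anycast RP)
configure terminal
ip pim rp-address 10.0.0.2
end
```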
Step 5: Enable an IGP (OSPF) to advertise Loopback0s
What we are doing: Configure OSPF so every node advertises its Loopback0 /32 and learns the Loopbacks of others. The CP architecture (LISP Pub/Sub) requires Loopback reachability; the BN in external mode may need a default route to function as an external Border.
! On BN1
configure terminal
router ospf 1
network 192.168.100.0 0.0.0.3 area 0
network 10.0.0.1 0.0.0.0 area 0
! Optional: advertise a default route into the fabric if BN1 is an external border
default-information originate
exit
ip route 0.0.0.0 0.0.0.0 203.0.113.254
end
! On CP1
configure terminal
router ospf 1
network 192.168.100.0 0.0.0.3 area 0
network 192.168.100.4 0.0.0.3 area 0
network 10.0.0.2 0.0.0.0 area 0
exit
end
! On EN1
configure terminal
router ospf 1
network 192.168.100.4 0.0.0.3 area 0
network 10.0.0.3 0.0.0.0 area 0
exit
end
What just happened: OSPF was enabled and each Loopback0 /32 is announced into OSPF area 0. BN1 also has a static default route to an external next hop (203.0.113.254); to actually advertise a default into the fabric, the Border additionally needs default-information originate under router ospf. This models the upstream default-route requirement when using LISP Pub/Sub with External Border functionality.
Real-world note: OSPF Hello/Dead timers and area design matter in production. Many automated underlays choose IS‑IS, but OSPF is still widely used.
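An optional refinement on /30 transit links, shown here as a tuning suggestion rather than a lesson requirement, is the point-to-point OSPF network type: it skips DR/BDR election and speeds up adjacency formation (neighbors then show as FULL/ - instead of FULL/DR; the verification below assumes the default broadcast type):

```
! On both ends of each /30 transit link (optional tuning)
configure terminal
interface GigabitEthernet0/1
 ip ospf network point-to-point
end
```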
Verify:
! On CP1
show ip ospf neighbor
Neighbor ID Pri State Dead Time Address Interface
10.0.0.1 1 FULL/DR 00:00:33 192.168.100.1 GigabitEthernet0/1
10.0.0.3 1 FULL/BDR 00:00:34 192.168.100.6 GigabitEthernet0/2
show ip route ospf
O        10.0.0.1/32 [110/2] via 192.168.100.1, 00:00:12, GigabitEthernet0/1
O        10.0.0.3/32 [110/2] via 192.168.100.6, 00:00:12, GigabitEthernet0/2
! On BN1
show ip route
O        10.0.0.2/32 [110/2] via 192.168.100.2, 00:00:12, GigabitEthernet0/1
O        10.0.0.3/32 [110/3] via 192.168.100.2, 00:00:12, GigabitEthernet0/1
S* 0.0.0.0/0 [1/0] via 203.0.113.254
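The final proof that the underlay is ready for the overlay is loopback-to-loopback reachability, since those /32s are the LISP/VXLAN endpoints. A quick check from EN1:

```
! On EN1 -- source from Loopback0 so the reply also tests the return path
ping 10.0.0.1 source Loopback0
ping 10.0.0.2 source Loopback0
! "!!!!!" on both confirms EN1's Lo0 can reach the BN1 and CP1 Lo0s and back
```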
Verification Checklist
- Check 1: Loopback0 presence — Run `show ip interface brief` on each device and confirm Loopback0 is up with the correct /32.
- Check 2: PIM neighbors and PIM interface status — Run `show ip pim neighbor` and `show ip pim interface` to confirm PIM is enabled and neighbors are visible.
- Check 3: IGP reachability to all Loopback0s — Run `show ip route ospf` (or `show ip route`) on each node and confirm you see 10.0.0.1/32, 10.0.0.2/32, and 10.0.0.3/32.
Common Mistakes
| Symptom | Cause | Fix |
|---|---|---|
| Loopback0 not reachable from other nodes | Loopback0 not included in the IGP, or wrong wildcard mask in the OSPF network statement | Add a host-mask network statement (network 10.0.0.x 0.0.0.0 area 0) or redistribute connected; verify the wildcard is 0.0.0.0 for the /32 |
| VXLAN traffic fragmented or dropped | Underlay MTU left at default (1500) — VXLAN requires >1550 | Set consistent MTU >1550 (recommended 9000) on all fabric devices |
| PIM shows no neighbors | PIM not enabled on transit interface or multicast-routing not enabled globally | Enable ip multicast-routing and ip pim sparse-mode on interfaces and Loopback0 |
| Border cannot provide external reachability | Border lacks upstream default route required for Pub/Sub External Border mode | Configure a default route on the Border to the upstream next hop (e.g., ip route 0.0.0.0 0.0.0.0 203.0.113.254) |
Key Takeaways
- Always configure stable /32 Loopback0 addresses on BN/CP/EN — they are the fabric "identifiers" used by LISP/VXLAN and should be reachable via the underlay.
- Consistent MTU across the fabric site is mandatory for VXLAN; mismatched MTU causes fragmentation and packet loss.
- Enable multicast/PIM on transit links and on Loopback0 so overlay multicast (L2 floods, control-plane use) can function.
- The underlay IGP (OSPF in this lesson) must advertise Loopback0s so control plane and border nodes can locate each other; for Pub/Sub External Border operation, a default route upstream is required.
Warning: In a production fabric, any change to MTU, PIM, or IGP must be coordinated across all fabric devices. Inconsistent settings lead to partial fabric failures and broken endpoint mobility.
This completes Lesson 2: configuring Border, Control Plane, and Edge roles for LISP/VXLAN-based SD‑Access. The next lesson will cover LISP control-plane enabling and basic VN/LISP mappings.