Lesson 6 of 7

Fabric Transit and L3 Handoff

Objective (Introduction)

In this lesson you will configure fabric transit between SD-Access fabric sites and implement a Layer‑3 handoff to an external fusion router (the egress for the WAN / data center). We focus on the underlay requirements for VXLAN transit (MTU and MSS), establishing IP reachability for Border Node loopbacks, and exchanging routes with an external router using BGP. In production, these steps allow multiple fabric sites to communicate over an IP transit network (MPLS or IP WAN) while carrying overlay encapsulations (VXLAN) between Border Nodes; this is how a stretched fabric or multiple fabrics interconnect with central egress points.

Real-world scenario: An enterprise has two remote Border Nodes (BN1 and BN2) that must build VXLAN tunnels to carry VRFs and SGTs across an MPLS transit to a central fusion router. The fusion router is the external egress (firewall / WAN) and participates in BGP with the Border Nodes.


Topology (Quick Recap)

This lesson uses the same base topology introduced earlier. Only the transit/wide-area pieces are shown here.

ASCII topology (exact IPs shown for transit links and loopbacks used in this lesson):

      [BN1]                               [Fusion-Router]                               [BN2]
Loopback0: 10.10.10.1/32                                                   Loopback0: 10.10.10.2/32
Gi0/0: 192.0.2.1/30 --- Gi0/0: 192.0.2.2/30   Gi0/1: 192.0.2.6/30 --- Gi0/0: 192.0.2.5/30

Device table

Device Name   | Role                     | Relevant Interfaces / IPs
BN1           | Border Node (fabric)     | Loopback0: 10.10.10.1/32, Gi0/0: 192.0.2.1/30
Fusion-Router | External egress / fusion | Gi0/0: 192.0.2.2/30 (to BN1), Gi0/1: 192.0.2.6/30 (to BN2)
BN2           | Border Node (fabric)     | Loopback0: 10.10.10.2/32, Gi0/0: 192.0.2.5/30

Note: The loopbacks are the Border Nodes' VTEP identities for VXLAN — the transit network must provide IP connectivity between those loopbacks.


Key Concepts (theory before CLI)

  • VXLAN transit and MTU — VXLAN adds roughly 50 bytes of encapsulation (outer Ethernet, outer IP, UDP, and VXLAN headers), so a 1500-byte inner frame becomes a 1550-byte encapsulated frame. The underlay path must support an MTU of at least 1550 bytes (1600 is a common choice). Per RFC 7348, VTEPs do not fragment VXLAN packets, so an undersized hop results in dropped frames.
    • Packet behavior: when VXLAN is in use, traffic from endpoints is encapsulated with an outer IP/UDP/VXLAN header. If the underlay MTU is too small, the encapsulated packet is fragmented or dropped — causing performance and connectivity problems.
  • MSS adjustment — For TCP sessions traversing the VXLAN encapsulation, reduce the TCP MSS on the edge so endpoints avoid sending segments larger than path MTU. This prevents fragmentation of the inner packet.
  • Border Node loopbacks as VTEP addresses — Border Nodes advertise their loopback IPs into the control plane (LISP Pub/Sub or LISP/BGP) and VXLAN tunnels are built between these loopbacks. The transit network only needs to provide IP reachability between those loopbacks.
    • Packet flow: When an endpoint in one fabric needs to reach another, the ingress BN encapsulates the frame and sends it to the remote BN's loopback address via the underlay.
  • Underlay route exchange (BGP) — In many deployments, BGP is used between Border Nodes and the fusion router to exchange routes (VRF routes are carried in the overlay, but the underlay still needs reachability for loopbacks and any external prefixes). LISP Pub/Sub (control plane) requires a default route from upstream when operating as an External Border.
  • Multicast requirement for overlay flooding — If overlay L2 flooding is required (Layer 2 Border or stretched VLANs), multicast must be available in the underlay between BNs to carry link‑local multicast and broadcast traffic.
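
The MTU and MSS values used throughout this lesson follow from the encapsulation overhead. A worked breakdown (assuming an IPv4 underlay with no 802.1Q tag on the outer frame):

! VXLAN encapsulation overhead (IPv4 underlay):
!   outer Ethernet 14 B + outer IPv4 20 B + UDP 8 B + VXLAN 8 B = 50 B
! Largest encapsulated frame for a 1500-byte inner frame:
!   1500 B + 50 B = 1550 B  -> underlay MTU must be >= 1550; 1600 gives headroom
! TCP MSS that keeps the encapsulated packet within a standard 1500-byte hop:
!   1500 - 50 (VXLAN) - 20 (IP) - 20 (TCP) = 1410  -> 1400 is a safe round value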

Steps (Hands-on configuration)

Each step below follows the pattern: what we are doing, the exact commands, explanation, real-world note, and verification with expected output.

Step 1: Configure Loopback0 on Border Nodes

What we are doing: Configure the loopback interfaces that will serve as the Border Nodes’ VTEP/identities. These IPs are what the transit network must route between sites.

! BN1
configure terminal
interface Loopback0
 ip address 10.10.10.1 255.255.255.255
exit
end

! BN2
configure terminal
interface Loopback0
 ip address 10.10.10.2 255.255.255.255
exit
end

What just happened: Each Border Node now has a stable /32 loopback used as the VTEP source address for VXLAN and for BGP/LISP peering. A /32 loopback keeps the VTEP identity stable even when physical interfaces flap. The control plane and VXLAN will reference these loopbacks when building tunnels.

Real-world note: Loopback addresses are preferred as tunnel endpoints because they remain up so long as the device is up; physical interface outages do not change the VTEP identity.

Verify:

! On BN1
show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
Loopback0              10.10.10.1      YES manual up                    up
GigabitEthernet0/0     192.0.2.1       YES manual up                    up
! On BN2
show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
Loopback0              10.10.10.2      YES manual up                    up
GigabitEthernet0/0     192.0.2.5       YES manual up                    up

Step 2: Configure transit physical interfaces and increase MTU

What we are doing: Configure the transit-facing physical interfaces with IP addresses and increase the MTU to accommodate VXLAN encapsulation (>1550). This prevents fragmentation on the underlay.

! BN1 transit interface
configure terminal
interface GigabitEthernet0/0
 ip address 192.0.2.1 255.255.255.252
 mtu 1600
exit
end

! BN2 transit interface
configure terminal
interface GigabitEthernet0/0
 ip address 192.0.2.5 255.255.255.252
 mtu 1600
exit
end

! Fusion Router interfaces
configure terminal
interface GigabitEthernet0/0
 ip address 192.0.2.2 255.255.255.252
 mtu 1600
exit

interface GigabitEthernet0/1
 ip address 192.0.2.6 255.255.255.252
 mtu 1600
exit
end

What just happened: The transit interfaces were assigned IP addresses and the MTU on each was set to 1600, comfortably above the 1550-byte minimum, so VXLAN-encapsulated frames traverse without being fragmented or dropped. Per RFC 7348, VTEPs do not fragment VXLAN packets, so an undersized hop results in drops — raising the MTU prevents that.

Real-world note: On some WAN links (MPLS) you cannot increase MTU; in that case you must implement MSS clamping or path MTU solutions. Always confirm all intermediate devices support the larger MTU.

Verify:

! On BN1
show interface GigabitEthernet0/0
GigabitEthernet0/0 is up, line protocol is up
  Hardware is GigabitEthernet, address is 00bd.e8ff.f001
  Internet address is 192.0.2.1/30
  MTU 1600 bytes, BW 1000000 Kbit/sec, DLY 10 usec
  Encapsulation ARPA, loopback not set

! On Fusion-Router (Gi0/0)
show interface GigabitEthernet0/0
GigabitEthernet0/0 is up, line protocol is up
  Internet address is 192.0.2.2/30
  MTU 1600 bytes
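
The show output above only proves the local interface MTU. To confirm the path itself carries VXLAN-sized packets, send an oversized ping with the Don't Fragment bit set (a sketch using this lesson's addresses; at this point only the directly connected fusion interface is reachable — repeat toward 10.10.10.2 once Step 4's routing is in place):

! From BN1 - 1550-byte ping with DF set; all replies (!!!!!) mean the link MTU is sufficient
ping 192.0.2.2 size 1550 df-bit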

Step 3: Adjust TCP MSS (if endpoints are TCP-heavy)

What we are doing: Apply TCP MSS adjustment on transit-facing interface to avoid endpoints sending segments too large for the encapsulated path, which prevents fragmentation within the underlay.

! BN1
configure terminal
interface GigabitEthernet0/0
 ip tcp adjust-mss 1400
exit
end

! BN2
configure terminal
interface GigabitEthernet0/0
 ip tcp adjust-mss 1400
exit
end

What just happened: The router rewrites TCP SYN packets to advertise an MSS of 1400 bytes. With 1400 bytes of TCP payload plus 20 bytes of TCP header and 20 bytes of IP header, the inner packet is at most 1440 bytes; after roughly 50 bytes of VXLAN overhead the encapsulated packet is about 1490 bytes, which fits even a standard 1500-byte hop. This is a practical mitigation when you cannot guarantee a larger MTU across the entire WAN.

Real-world note: MSS clamping is useful on Internet-facing/remote links where you cannot control every hop. For VXLAN, a typical MSS value is around 1400 — adjust based on measured path MTU.

Verify:

! Show running config snippet
show running-config interface GigabitEthernet0/0
interface GigabitEthernet0/0
 ip address 192.0.2.1 255.255.255.252
 mtu 1600
 ip tcp adjust-mss 1400

Step 4: Establish BGP peering to the fusion router and advertise loopbacks

What we are doing: Configure BGP on each Border Node and the fusion router to exchange loopback reachability. BGP provides stable underlay route exchange and allows the fusion router to reach Border Node loopbacks (and vice versa).

! BN1 BGP
configure terminal
router bgp 65001
 bgp log-neighbor-changes
 neighbor 192.0.2.2 remote-as 65000
 neighbor 192.0.2.2 update-source GigabitEthernet0/0
 network 10.10.10.1 mask 255.255.255.255
exit
end

! BN2 BGP (peer to Fusion Router)
configure terminal
router bgp 65001
 bgp log-neighbor-changes
 neighbor 192.0.2.6 remote-as 65000
 neighbor 192.0.2.6 update-source GigabitEthernet0/0
 network 10.10.10.2 mask 255.255.255.255
exit
end

! Fusion Router BGP
configure terminal
router bgp 65000
 bgp log-neighbor-changes
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.5 remote-as 65001
exit
end

What just happened: Each Border Node formed a BGP session with the fusion router and advertised its loopback /32. BGP ensures the fusion router can route towards the Border Nodes’ loopbacks (necessary so the fusion router is able to send traffic back to endpoints behind the BNs). Using BGP also allows for scalable underlay advertising when many BNs exist.

Real-world note: In LISP Pub/Sub deployments, External Border Nodes often require a default route (0.0.0.0/0) from upstream; verify routing requirements for your control plane mode.
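
Where the control-plane mode does require a default route from upstream, a minimal sketch on the fusion router (assuming the BGP sessions configured in this step; adapt to your design and verify the control-plane requirements first) is:

! Fusion-Router - originate a default route toward both Border Nodes
configure terminal
router bgp 65000
 neighbor 192.0.2.1 default-originate
 neighbor 192.0.2.5 default-originate
end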

Verify:

! On BN1
show ip bgp summary
BGP router identifier 10.10.10.1, local AS number 65001
Neighbor        V    AS MsgRcvd MsgSent   TblVer  State/PfxRcd
192.0.2.2       4 65000     123     121        1        1

! On Fusion Router
show ip bgp summary
BGP router identifier 192.0.2.2, local AS number 65000
Neighbor        V    AS MsgRcvd MsgSent   TblVer  State/PfxRcd
192.0.2.1       4 65001     120     122        1        1
192.0.2.5       4 65001     110     111        1        1

! Check routing table on Fusion Router to see BN loopbacks
show ip route 10.10.10.1
Routing entry for 10.10.10.1/32
  Known via "bgp", distance 20, metric 0
  * 192.0.2.1 via GigabitEthernet0/0

Step 5: Verify end-to-end reachability between Border Node loopbacks

What we are doing: Validate the underlay path between BN1 and BN2 loopbacks. If the underlay cannot reach loopbacks, VXLAN tunnels cannot form and overlay traffic will fail.

! From BN1
ping 10.10.10.2 source 10.10.10.1 repeat 5

! Expected output:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.10.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms

What just happened: The pings demonstrate the transit network successfully routes between the Border Nodes’ loopbacks. This confirms the underlay is healthy and VXLAN encapsulation endpoints can communicate.

Real-world note: Always source the test from the loopback so verification uses the same addresses the control plane and overlay use when forming tunnels.

Verify (additional):

! On BN1 - check BGP routes to BN2
show ip route 10.10.10.2
Routing entry for 10.10.10.2/32
  Known via "bgp", distance 20, metric 0
  * 192.0.2.2 via GigabitEthernet0/0
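
Beyond ping, a traceroute confirms the traffic actually transits the fusion router rather than some backdoor path (using this lesson's addresses):

! From BN1 - the first hop should be the fusion router (192.0.2.2)
traceroute 10.10.10.2 source 10.10.10.1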

Verification Checklist

  • Check 1: Loopback interfaces are configured and up — verify with show ip interface brief and expect Loopback0 up/up.
  • Check 2: Transit interfaces support MTU >= 1600 — verify with show interface GigabitEthernet0/0 and expect "MTU 1600 bytes".
  • Check 3: BGP sessions with the fusion router are Established and loopbacks are advertised — verify with show ip bgp summary and show ip route 10.10.10.1.
  • Check 4: End-to-end reachability between Border Node loopbacks — verify with ping sourced from Loopback0 and expect 100% success.

Common Mistakes

Symptom | Cause | Fix
VXLAN traffic dropped or high packet loss | Underlay MTU too small; encapsulated frames are fragmented/dropped | Increase MTU on every transit hop to at least 1550 (e.g., 1600) or implement MSS clamping
BNs cannot form tunnels or control-plane adjacency | Fusion router cannot reach Border Node loopback addresses | Ensure the BGP/underlay has routes to the loopbacks (advertise loopbacks via BGP or static routes)
TCP sessions experience stalls after path change | No MSS adjustment; inner packets exceed MTU after encapsulation | Configure ip tcp adjust-mss on transit-facing interfaces or ensure consistent MTU end-to-end
Overlay multicast flooding not working for stretched VLANs | No multicast in underlay | Enable PIM in the underlay for segments requiring Layer 2 Flooding; restrict L2F to dedicated VLANs only
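
For the last row, a minimal underlay multicast sketch looks like the following. This assumes PIM sparse mode with a static RP; the RP address 10.10.10.100 is hypothetical — place the RP per your own multicast design, and note that some platforms require the distributed keyword on ip multicast-routing:

! On each underlay router (sketch) - enable PIM sparse mode for Layer 2 Flooding support
configure terminal
ip multicast-routing
ip pim rp-address 10.10.10.100
interface GigabitEthernet0/0
 ip pim sparse-mode
exit
end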

Key Takeaways

  • The transit network must provide stable IP reachability between Border Node loopbacks — these loopbacks are the VTEP addresses used by VXLAN between fabric sites.
  • VXLAN adds encapsulation overhead; ensure underlay MTU > 1550 and use MSS clamping where you cannot control MTU to avoid fragmentation and packet loss.
  • Use BGP in the underlay to advertise Border Node loopbacks to the fusion/external routers. In some control-plane modes (LISP Pub/Sub), a default route from upstream is required for External Border operation.
  • Only enable Layer 2 Flooding (L2F) on subnets that truly require broadcast/link-local multicast across the overlay — and ensure the underlay offers multicast support for those VLANs.

Tip: Treat loopbacks as the single source of truth for VTEP identities and for control-plane peerings. In production, plan MTU and multicast behavior across the entire transit path before enabling overlay features — retrofitting MTU changes is disruptive.


This completes Lesson 6: Fabric Transit and L3 Handoff. In the next lesson we will configure control-plane specifics (LISP Pub/Sub vs. LISP/BGP choices) and show how VN/VRF route exchange maps into the overlay control plane.