Lesson 4 of 5

VXLAN EVPN Fabric Fundamentals

Introduction

Modern enterprise networks face a growing challenge: how do you scale Layer 2 and Layer 3 connectivity across large campus and data center environments without running into the limitations of traditional VLANs? The answer lies in VXLAN BGP EVPN fabric technology, which has become a cornerstone topic on the CCNP ENCOR exam and a critical skill for any enterprise network engineer.

In this lesson, you will learn the foundational concepts behind VXLAN and BGP EVPN, understand how they work together to build scalable overlay networks, explore the system roles within a fabric, and see how VXLAN constructs map Layer 2 and Layer 3 segments into a unified architecture. By the end, you will be able to explain what a VXLAN EVPN fabric is, identify the roles of each fabric component, and describe how traffic flows through the overlay.

Key Concepts

What Is VXLAN?

VXLAN (Virtual Extensible LAN) is a standards-based overlay technology that encapsulates original Layer 2 Ethernet frames inside UDP packets for transport across an IP underlay network. This is often described as MAC-in-UDP encapsulation. The key advantage is that VXLAN decouples the overlay network from the physical topology, allowing Layer 2 segments to extend across Layer 3 boundaries.

A classical Ethernet frame identifies its segment with a 12-bit 802.1Q VLAN ID, yielding 4,096 possible values (4,094 usable VLANs). VXLAN replaces this with the VXLAN Network Identifier (VNI), a 24-bit field in the VXLAN header. This expands the address space to more than 16 million logical segments, enabling true multi-tenancy at scale.
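The scale difference follows directly from the field widths; a quick check in Python:

```python
# 12-bit 802.1Q VLAN ID versus 24-bit VXLAN Network Identifier (VNI)
VLAN_ID_BITS = 12
VNI_BITS = 24

vlan_count = 2 ** VLAN_ID_BITS   # 4,096 possible VLAN IDs
vni_count = 2 ** VNI_BITS        # 16,777,216 possible VNIs

print(f"VLANs: {vlan_count}, VNIs: {vni_count}")
```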

VXLAN Frame Structure

The VXLAN frame wraps the original Layer 2 frame with additional headers:

  • Outer MAC: New destination and source MAC addresses for the underlay hop
  • Outer IP: New IP header for routing across the underlay
  • UDP: UDP encapsulation carrying the VXLAN payload
  • VXLAN Header: Contains the 24-bit VNI field
  • Original L2 Frame: The inner Ethernet frame (DMAC, SMAC, Ethertype, Payload)
  • New CRC: Recalculated frame check sequence
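The 8-byte VXLAN header itself has a simple layout defined in RFC 7348: a flags byte (with the "I" bit set to mark the VNI as valid), 3 reserved bytes, the 24-bit VNI, and a final reserved byte. A minimal sketch of packing and parsing it with Python's struct module (illustrative only, not a production encapsulation path):

```python
import struct

VXLAN_FLAG_I = 0x08  # "I" bit set: the VNI field is valid (RFC 7348)

def build_vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags byte in the top 8 bits; word 2: VNI in the top 24 bits.
    return struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from an 8-byte VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = build_vxlan_header(10100)
print(len(hdr), parse_vni(hdr))  # 8 10100
```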

The total overhead added by VXLAN encapsulation is 50 bytes (20 bytes outer IP + 8 bytes UDP + 8 bytes VXLAN header + 14 bytes outer MAC).

Important: Because of the 50-byte overhead, the underlay network MTU must be increased to accommodate VXLAN-encapsulated frames. A common practice is to set the underlay MTU to 9216 bytes (jumbo frames).
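The 50-byte figure and the resulting MTU requirement are simple arithmetic:

```python
# VXLAN encapsulation overhead, per header component
OUTER_MAC = 14   # outer Ethernet header
OUTER_IP = 20    # outer IPv4 header
UDP = 8
VXLAN = 8

overhead = OUTER_MAC + OUTER_IP + UDP + VXLAN
print(overhead)  # 50

# Minimum underlay MTU to carry a standard 1500-byte inner payload:
print(1500 + overhead)  # 1550 (a 9216-byte jumbo MTU leaves ample headroom)
```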

What Is BGP EVPN?

BGP EVPN (Ethernet VPN) is the standards-based control plane used alongside VXLAN. Defined in RFC 7432, EVPN uses MP-BGP (Multi-Protocol BGP) to distribute both Layer 2 MAC and Layer 3 IP reachability information. This is a significant improvement over traditional flood-and-learn behavior: forwarding decisions are based on control-plane advertisements rather than data-plane discovery, which minimizes flooding in the network.
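The effect of control-plane learning can be shown with a toy model (illustrative only, not a real BGP implementation; names and structures are assumptions): once a MAC is advertised, the ingress leaf can unicast directly to the owning VTEP instead of flooding to discover it.

```python
# Toy EVPN control-plane MAC table: reachability is distributed via BGP
# (modeled here as a shared dict), not learned by data-plane flooding.
evpn_mac_table: dict = {}  # (vni, mac) -> remote VTEP loopback IP

def advertise_mac(vni: int, mac: str, vtep_ip: str) -> None:
    """A leaf advertises a locally learned MAC (conceptually, an EVPN Route Type 2)."""
    evpn_mac_table[(vni, mac)] = vtep_ip

def lookup(vni: int, mac: str):
    """Ingress leaf lookup: a known destination means unicast to one VTEP, no flood."""
    return evpn_mac_table.get((vni, mac))  # None would fall back to BUM handling

advertise_mac(10100, "aa:bb:cc:00:00:01", "10.0.0.1")
print(lookup(10100, "aa:bb:cc:00:00:01"))  # 10.0.0.1
```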

EVPN can operate over multiple data-plane technologies:

  • MPLS: Traditional service-provider backbone
  • PBB (Provider Backbone Bridges): 802.1ah-based transport
  • NVO (Network Virtualization Overlay): VXLAN-based overlay, the most common choice in enterprise campus and data center networks

In enterprise campus deployments on Catalyst 9000 platforms, the NVO data plane with VXLAN is the standard approach.

BGP EVPN System Roles

Every device in a VXLAN BGP EVPN fabric serves a specific role. Understanding these roles is essential for designing and troubleshooting fabric networks.

  • Leaf (VTEP): The origination and termination point of the VXLAN overlay. Leaf switches connect endpoints (hosts, servers, APs) and perform VXLAN encapsulation and decapsulation.
  • Spine: A BGP EVPN route reflector that reflects L2/L3 VPN prefixes, providing hierarchical neighbor peering, learning, and distribution. Spines do not typically connect endpoints directly.
  • Border: A gateway between the EVPN fabric and an external network domain. Handles handoff to traditional Layer 2 (802.1Q), Layer 3 (VRF), or MPLS networks.
  • Border Gateway: A gateway between two or more BGP EVPN administrative domains. This is the recommended role for multi-fabric interconnection.
  • Intermediate: A Layer 2 or Layer 3 (IP/MPLS) underlay system providing basic transport and forwarding. Not VXLAN-aware.

Key Point: On Catalyst 9000 platforms running IOS-XE 17.3.1 and later, a single switch can serve a hybrid role combining Spine, Leaf, and Border functions, giving smaller deployments flexibility without requiring dedicated hardware for each role.

Supported Platforms

The following Catalyst 9000 series platforms support BGP EVPN VXLAN fabric roles:

  • Catalyst 9300L / 9300 / 9300X: Standalone, StackWise
  • Catalyst 9400 / 9400X: Standalone, StackWise Virtual
  • Catalyst 9500 / 9500X: Standalone, StackWise Virtual
  • Catalyst 9600 / 9600X: Standalone, StackWise Virtual

Border roles are additionally supported on Catalyst 8000 Edge, ASR 1000 (physical and virtual), Nexus 9000, and ASR 9000 platforms.

How It Works

Underlay and Overlay Networks

A VXLAN BGP EVPN fabric separates the network into two logical layers:

  • Underlay Network: The physical IP network that provides reachability between all VTEP (Leaf) switches. The underlay runs a traditional IGP (such as OSPF or IS-IS) to establish IP connectivity between loopback addresses used as VTEP endpoints. The underlay does not need to be aware of tenant traffic or VLANs.

  • Overlay Network: The virtual network built on top of the underlay using VXLAN tunnels. The overlay carries tenant traffic encapsulated in VXLAN frames. BGP EVPN serves as the control plane for the overlay, distributing MAC and IP routes so that leaf switches know exactly where to send encapsulated traffic.

VXLAN Constructs: L2VNI and L3VNI

VXLAN uses two types of VNIs to handle bridged and routed traffic:

Layer 2 VNI (L2VNI):

  • There is one L2VNI per Layer 2 segment (VLAN)
  • L2VNIs carry bridged (switched) traffic between hosts in the same subnet across the fabric
  • Each L2VNI maps to a specific VLAN and its corresponding SVI on the leaf switch

Layer 3 VNI (L3VNI):

  • There is one L3VNI per tenant (VRF) for routing
  • The L3VNI carries routed packets between different subnets within the same VRF
  • Traffic between L2VNIs is routed through the L3VNI

Consider an example with VRF-X containing multiple subnets:

  • VLAN A maps to L2VNI-A with SVI A
  • VLAN B maps to L2VNI-B with SVI B
  • VLAN C maps to L2VNI-C with SVI C
  • All three belong to VRF-X, which uses a single L3VNI (mapped to a transit VLAN X with SVI X)

When a host on VLAN A needs to communicate with a host on VLAN B, the traffic is routed through the L3VNI. When two hosts on the same VLAN communicate across different leaf switches, the traffic is bridged through the corresponding L2VNI.
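The bridging-versus-routing decision above can be sketched as a small lookup (a conceptual model only; the VLAN names, subnets, and VNI numbers are assumptions for the VRF-X example, not values from the source):

```python
import ipaddress

# Illustrative VRF-X mapping (names, subnets, and VNIs are hypothetical)
L2VNI = {"VLAN-A": 10100, "VLAN-B": 10200, "VLAN-C": 10300}
SUBNET = {
    "VLAN-A": ipaddress.ip_network("10.1.1.0/24"),
    "VLAN-B": ipaddress.ip_network("10.1.2.0/24"),
    "VLAN-C": ipaddress.ip_network("10.1.3.0/24"),
}
L3VNI_VRF_X = 50000  # one L3VNI for the entire tenant VRF

def select_vni(src_vlan: str, dst_ip: str) -> int:
    """Same subnet -> bridge in the source VLAN's L2VNI; else route via the L3VNI."""
    if ipaddress.ip_address(dst_ip) in SUBNET[src_vlan]:
        return L2VNI[src_vlan]
    return L3VNI_VRF_X

print(select_vni("VLAN-A", "10.1.1.20"))  # 10100 (bridged in the L2VNI)
print(select_vni("VLAN-A", "10.1.2.20"))  # 50000 (routed via the L3VNI)
```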

This architecture supports multi-tenancy by assigning separate VRFs with their own L3VNIs. For example, VRF-X and VRF-Y can each contain their own set of L2VNIs, completely isolated from one another.

Integrated Routing and Bridging (IRB)

BGP EVPN supports Integrated Routing and Bridging (IRB), which provides optimized forwarding in the overlay. With IRB, a leaf switch can perform both Layer 2 bridging (within an L2VNI) and Layer 3 routing (between L2VNIs via the L3VNI) locally, without needing to send traffic to a centralized gateway. This is the basis of the Distributed Anycast Gateway model, where every leaf switch shares the same gateway IP and MAC address for each SVI.

BUM Traffic Handling

BUM (Broadcast, Unknown Unicast, Multicast) traffic in VXLAN fabrics can be handled through two methods:

  • Ingress Replication: The ingress VTEP replicates BUM frames and sends a unicast copy to every remote VTEP. This was supported from the earliest Catalyst 9000 EVPN release (IOS-XE 16.9.1).
  • Multicast Replication: BUM traffic is mapped to multicast groups in the underlay, reducing replication load on the ingress VTEP. More advanced multicast features including Tenant Routed Multicast (TRM) with Distributed Anycast RP were introduced in IOS-XE 17.3.1.
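The replication trade-off between the two methods can be sketched as follows (a toy model; the VTEP addresses and multicast group are assumptions): ingress replication sends one copy per remote VTEP, while multicast replication hands a single copy to the underlay.

```python
# Toy sketch of the two BUM replication methods (addresses are hypothetical).
REMOTE_VTEPS = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]

def ingress_replicate(frame: bytes):
    """Ingress VTEP generates one unicast copy per remote VTEP."""
    return [(vtep, frame) for vtep in REMOTE_VTEPS]

def multicast_replicate(frame: bytes, group: str = "239.1.1.1"):
    """One copy sent to an underlay multicast group; the underlay fans it out."""
    return [(group, frame)]

print(len(ingress_replicate(b"broadcast")))    # 3 copies leave the ingress VTEP
print(len(multicast_replicate(b"broadcast")))  # 1 copy
```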

Configuration Example

While the reference material focuses on architecture and constructs rather than step-by-step CLI configuration, the key features and their minimum IOS-XE releases are documented. The following summary shows which capabilities each release introduces:

  • 16.9.1: Layer 3 overlay, Distributed Anycast Gateway, Ingress Replication for BUM, DHCPv4 relay in EVPN VRF, Multi-VRF IPv4 handoff, L2 VLAN handoff
  • 16.12.1: Layer 2 overlay, Centralized Gateway, ARP/ND suppression, L2 multi-homing (StackWise Virtual), wireless support, firewall integration, IPv6 host overlay, VXLAN-aware Flexible NetFlow
  • 17.3.1: Hybrid role support, ESI-based L2 multi-homing, Distributed Anycast RP for TRM, per-VNI multicast BUM rate limiter, mDNS service routing over EVPN, PVLAN-based segmentation
  • 17.6.1: RT-2 to RT-5 re-originate support, doubled VNI scale (512), up to 500 leaf scale per fabric domain, optimized L2 multicast with IGMP/MLD snooping, Data MDT support
  • 17.9.1: NAT44 support, per-VLAN peer-to-peer protected mode, VXLANv6 control plane and underlay
  • 17.12.1: Route-map support for RT-2/RT-5, per-VLAN ESI L2 multi-homing, micro-segmentation, auto RD/RT, 1024 VNI scale, DHCP snooping and ARP inspection in L2 overlay
  • 17.15.1: TRM multicast IPv6 for DAG, per-VLAN ESI on C9500X/C9600X, Centralized Gateway on C9500X/C9600X, SSO high availability, L2VPN profile CLI, ND proxy, OpenConfig models

Best Practice: Always verify your target IOS-XE release supports the specific EVPN features your design requires before beginning deployment. Refer to the Catalyst EVPN Scale and Performance Matrix for platform-specific limits.
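A release-gate check of this kind is straightforward to automate. The sketch below encodes a few minimum releases from the capability summary above (the feature keys are illustrative shorthand, not IOS-XE CLI names):

```python
# Minimal release-gate check; minimums taken from the capability summary,
# feature keys are illustrative shorthand rather than real CLI names.
FEATURE_MIN_RELEASE = {
    "ingress_replication": (16, 9, 1),
    "arp_nd_suppression": (16, 12, 1),
    "hybrid_role": (17, 3, 1),
    "vxlanv6_underlay": (17, 9, 1),
    "auto_rd_rt": (17, 12, 1),
}

def supports(release: tuple, feature: str) -> bool:
    """True if the target IOS-XE release meets the feature's minimum release."""
    return release >= FEATURE_MIN_RELEASE[feature]

print(supports((17, 6, 1), "hybrid_role"))        # True
print(supports((16, 12, 1), "vxlanv6_underlay"))  # False
```

Tuple comparison in Python is lexicographic, which matches how dotted release numbers are ordered, so no custom parsing is needed for this simple case.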

Real-World Application

Enterprise Campus Deployments

BGP EVPN VXLAN fabric is increasingly adopted in enterprise campus networks for several compelling reasons:

  • Industry Standard: BGP EVPN is a standards-based fabric that supports multi-vendor interoperability, aligning with enterprise IT strategies that avoid vendor lock-in.
  • Unified Fabric: A single fabric architecture can span across campus, data center, and WAN, simplifying operations and providing a consistent policy framework.
  • Flexible Overlay: Overlay networks can be customized per use case with different types and topologies, including full-mesh, partial-mesh, hub-and-spoke, and point-to-point L3/L3 overlay topologies (supported from IOS-XE 17.3.1).
  • Hierarchical Design: The spine-leaf architecture provides a non-blocking, structured, and scalable fabric with support for hybrid system roles.
  • Proven Protocol: BGP has decades of operational history and multi-protocol capabilities, minimizing the learning curve for network teams already familiar with BGP.

Common Design Scenarios

Typical enterprise deployments connect the EVPN fabric to external domains through border nodes:

  • Layer 2 handoff via 802.1Q trunking to legacy networks
  • Multi-VRF IPv4/IPv6 handoff to external routed domains
  • EVPN to MPLS VPNv4/VPNv6 integration for WAN connectivity
  • EVPN to VPLS bridge interworking for legacy data center interconnect
  • SD-Access integration with BGP EVPN for unified campus policy

Scaling Considerations

As of IOS-XE 17.12.1, the fabric supports up to 1,024 VNIs and up to 500 leaf switches per fabric domain (supported from 17.6.1). The Catalyst 9500-H supports a custom SDM template for large-scale MAC/IP route tables. These numbers should be validated against the platform-specific scale and performance documentation for your hardware.

Summary

  • VXLAN is a MAC-in-UDP overlay technology using a 24-bit VNI field, adding 50 bytes of encapsulation overhead and supporting over 16 million logical segments.
  • BGP EVPN (RFC 7432) serves as the standards-based control plane, distributing Layer 2 and Layer 3 reachability via MP-BGP and minimizing flood-and-learn behavior.
  • Fabric roles include Leaf (VTEP), Spine (route reflector), Border, and Border Gateway, with hybrid role support available from IOS-XE 17.3.1 on Catalyst 9000 platforms.
  • L2VNIs handle bridged traffic per VLAN segment, while a single L3VNI per VRF handles inter-subnet routing, enabling multi-tenancy at scale.
  • Enterprise adoption is driven by standards-based interoperability, unified cross-domain fabric, flexible overlay topologies, and the proven reliability of BGP as a control-plane protocol.

In the next lesson, we will explore underlay network design and configuration, covering how to build the IP transport foundation that supports your VXLAN EVPN overlay.