CCNP Enterprise · 23 min read

Advanced LISP and SD-Access Forwarding Architecture

Admin
March 26, 2026
Tags: LISP SD-Access · SDA forwarding · LISP architecture · CCIE Enterprise · SD-Access fabric


Introduction

Imagine you are troubleshooting an SD-Access fabric where endpoints in one virtual network cannot reach resources in another, yet all the underlay routing looks perfectly healthy. The problem almost certainly lives in the overlay — specifically in the LISP SD-Access forwarding plane that ties every Edge Node, Border Node, and Control Plane Node together. Without a solid grasp of how the Locator/ID Separation Protocol operates beneath the surface of SD-Access, diagnosing issues like these can feel like searching for a needle in a haystack.

LISP is the control-plane protocol that makes SD-Access possible. It separates the identity of an endpoint (its MAC or IP address) from the location where that endpoint is reachable in the network underlay. This separation is what allows SD-Access to deliver host mobility, macro-segmentation through virtual networks, and micro-segmentation through Scalable Group Tags — all without requiring any changes to the underlying routed infrastructure.

In this article we will take a deep, technical look at the LISP architecture that powers SD-Access forwarding. We will walk through LISP fundamentals, examine how virtualisation maps to SD-Access constructs, explore the service mappings on Border Nodes, Control Plane Nodes, and Edge Nodes, and review real configuration examples pulled from production-representative topologies. Whether you are preparing for a CCIE Enterprise exam or designing a campus fabric, this guide will give you the depth you need.

What Is LISP and Why Does SD-Access Use It?

LISP stands for Locator/ID Separation Protocol. At its core, LISP solves a fundamental problem in traditional networking: IP addresses serve a dual purpose as both the identity of a device and the locator that tells the network where that device sits. LISP breaks this coupling by introducing two distinct namespaces:

  • EID (Endpoint ID) — the identity of the endpoint. An EID can be a MAC address, an IPv4 host route, an IPv6 host route, an IPv4 or IPv6 summary route, or even a network service such as a default ETR (which represents a default route).
  • RLOC (Routing Locator) — the routable address in the underlay that tells the fabric where the endpoint currently resides.

By mapping an EID to an RLOC, LISP allows endpoints to move freely across the fabric while the underlay routing table remains stable. The underlay only needs to know how to reach RLOCs (typically loopback addresses on fabric nodes); it never carries host routes for individual endpoints.
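This separation shows up directly in LISP configuration: an ETR binds an EID prefix to its own RLOC through a locator set. The sketch below is purely illustrative — the locator-set name rloc_set and the prefix are placeholders, not taken from the production examples later in this article:

! Illustrative ETR sketch: EID prefix 10.4.3.0/24 bound to the RLOC on Loopback0
router lisp
 locator-set rloc_set
  IPv4-interface Loopback0 priority 10 weight 10
 !
 instance-id 4099
  service ipv4
   eid-table vrf CORP_VN
   database-mapping 10.4.3.0/24 locator-set rloc_set

Here 10.4.3.0/24 lives in the EID namespace and Loopback0 supplies the RLOC; the underlay only ever routes toward the loopback.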

In the SD-Access context, LISP provides the control plane that registers, resolves, and distributes endpoint reachability information across the fabric. When a host connects to an Edge Node, that node registers the host's EID (MAC and IP) with the Control Plane Node. When another node needs to reach that host, it queries the Control Plane Node to resolve the EID to the appropriate RLOC. This publish-subscribe model eliminates the need for flood-and-learn behaviour and keeps the fabric efficient at scale.

LISP SD-Access Terminology: Mapping LISP Roles to Fabric Nodes

One of the first hurdles in understanding LISP SD-Access is mapping traditional LISP terminology to the SD-Access node roles you see in design guides and in the Catalyst Center UI. The list below provides the complete mapping.

  • ITR (Ingress Tunnel Router) — receives packets from attached endpoints destined for remote EIDs, looks up the corresponding RLOC, and encapsulates the packets before forwarding them to the appropriate ETR or PETR. SD-Access equivalent: combined into the xTR.
  • ETR (Egress Tunnel Router) — receives encapsulated packets, decapsulates them, and forwards the unencapsulated packets to their intended destinations within the ETR's local EID space. SD-Access equivalent: combined into the xTR.
  • xTR — ITR and ETR capabilities co-located on the same device. SD-Access equivalent: Edge Node.
  • MS (Map Server) — authenticates and accepts LISP registrations of EID-to-RLOC mappings, and publishes registrations to subscribers. SD-Access equivalent: combined into the MSMR.
  • MR (Map Resolver) — accepts Map Request (lookup) messages and forwards them to the appropriate Map Server for EID-to-RLOC resolution; sends Negative Map Replies for unknown EIDs. SD-Access equivalent: combined into the MSMR.
  • MSMR — MS and MR capabilities co-located on the same device. SD-Access equivalent: Control Plane Node.
  • PITR (Proxy Ingress Tunnel Router) — connects non-LISP and LISP sites; advertises aggregated EID prefixes to attract traffic and encapsulates it for forwarding to LISP sites. SD-Access equivalent: combined into the PxTR.
  • PETR (Proxy Egress Tunnel Router) — connects non-LISP and LISP sites; registers external EID-to-RLOC mappings, decapsulates tunnelled data, and passes it to external networks. SD-Access equivalent: combined into the PxTR.
  • PxTR — PITR and PETR capabilities co-located on the same device. SD-Access equivalent: Border Node.

Pro Tip: When reading LISP debug output or show commands on SD-Access devices, you will see the traditional LISP terms (ITR, ETR, MS, MR, etc.) rather than the SD-Access names. Knowing both sets of terminology is essential for effective troubleshooting.

The key takeaway is that SD-Access consolidates LISP roles onto three main node types: the Edge Node acts as an xTR (combined ITR/ETR), the Control Plane Node acts as an MSMR (combined Map Server and Map Resolver), and the Border Node acts as a PxTR (combined Proxy ITR and Proxy ETR).

How Does LISP Virtualisation Work in SD-Access?

LISP achieves virtualisation through Instance IDs (IIDs). Each Instance ID represents an isolated forwarding domain, and traffic in one IID cannot cross into another IID without explicit policy (such as Extranet). This is the mechanism that underpins macro-segmentation in SD-Access — what the fabric calls Virtual Networks or VNs.

Within each Instance ID, LISP tracks and resolves three types of EID:

  1. MAC EIDs — Layer 2 endpoint identifiers used by the Ethernet service
  2. IPv4 EIDs — Layer 3 IPv4 endpoint identifiers
  3. IPv6 EIDs — Layer 3 IPv6 endpoint identifiers

The LISP services on a fabric node are organised by service type. On a Layer 3 switch running LISP, you will typically see:

  • Service IPv4 — handles IPv4 EID registrations and lookups
  • Service IPv6 — handles IPv6 EID registrations and lookups
  • Service Ethernet — handles MAC EID registrations and lookups

Each of these services operates within a specific Instance ID. For example, in a typical SD-Access deployment you might see:

  • IID 4097 — maps to the Global Routing Table (INFRA_VN) for Service IPv4
  • IID 4099 — maps to a user-defined L3 virtual network (e.g., CORP_VN) for Service IPv4 / IPv6
  • IID 4100 — maps to another user-defined L3 virtual network (e.g., IOT_VN)
  • IID 8188 — maps to a Layer 2 segment (Service Ethernet)

The numbering convention is significant: Layer 3 Instance IDs typically start in the 4096 range, while Layer 2 Instance IDs start in the 8188 range. Each Layer 2 VLAN that is extended across the fabric gets its own L2 IID under the Ethernet service.

Pro Tip: The relationship between L2 IIDs and L3 IIDs is what connects a VLAN (Layer 2 segment) to its parent VRF (Layer 3 virtual network). Understanding this mapping is critical when troubleshooting endpoints that can ping their gateway but cannot reach remote subnets — it often indicates a mismatch between the L2 and L3 IID bindings.
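The binding is visible in an Edge Node's LISP configuration, where the L3 IID points at the VRF and each L2 IID points at a VLAN. A minimal sketch follows — the IID and VLAN numbers track this article's examples, and the exact eid-table vlan syntax can vary slightly by platform and release:

! Edge Node sketch: tying a VLAN's L2 IID to its parent VRF's L3 IID
router lisp
 instance-id 4099
  service ipv4
   eid-table vrf CORP_VN       ! L3 IID 4099 = CORP_VN
 !
 instance-id 8189
  service ethernet
   eid-table vlan access 21    ! L2 IID 8189 = VLAN 21, a subnet inside CORP_VN

If an L2 IID references a VLAN whose SVI sits in a different VRF than the associated L3 IID, endpoints typically reach their gateway but nothing beyond it — the symptom described in the Pro Tip above.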

LISP SD-Access Publish and Subscribe Model

SD-Access uses a publish/subscribe model for distributing reachability information across the fabric. This is a significant departure from the traditional LISP map-request/map-reply pull model and is one of the reasons SD-Access can converge rapidly when endpoints move or when new hosts come online.

Here is how the model works for each node role:

Border Nodes

  • Register EID-to-RLOC mappings with the Control Plane Node. These registrations represent the external prefixes that the Border Node has learned from the external routing domain (via BGP, OSPF, or static routes).
  • Subscribe to the Control Plane Node for IPv4 and IPv6 reachability information. This ensures the Border Node receives updates whenever new internal EIDs are registered by Edge Nodes.
  • May send Map Requests (lookups) if there is an Extranet policy configured. Extranet allows controlled communication between different Instance IDs, and the Border Node may need to resolve EIDs across IID boundaries.

Control Plane Nodes

  • Receive and store EID-to-RLOC registrations from both Edge Nodes and Border Nodes.
  • Publish reachability information to all subscribers whenever a new registration arrives or an existing registration changes.
  • May resolve Border Node Map Requests if there is an Extranet policy in place, performing cross-IID lookups on behalf of the requesting node.

Edge Nodes

  • Register their locally attached endpoint EIDs (MAC, IPv4, IPv6) with the Control Plane Node.
  • Subscribe to reachability information so they can maintain up-to-date forwarding tables without flooding.

This publish/subscribe architecture means that when a new endpoint connects to Edge Node A, the following sequence occurs: Edge Node A registers the EID with the Control Plane Node; the Control Plane Node publishes the new mapping to all subscribed nodes (other Edge Nodes and Border Nodes); those nodes update their local map caches. The result is near-instantaneous convergence across the entire fabric.
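On the Edge and Border Nodes, the relationship with the Control Plane Node is anchored by map-server and map-resolver statements under each LISP service. A hedged sketch, using the Control Plane Node RLOC that appears later in this article (the authentication key is a placeholder):

! xTR sketch: where registrations go and where lookups are resolved
router lisp
 service ipv4
  etr map-server 172.31.136.1 key FABRIC-KEY   ! register local EIDs with the MSMR
  itr map-resolver 172.31.136.1                ! resolve unknown EIDs via the MSMR

The same pair of statements appears under service ethernet for MAC EID registration and resolution.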

Border Node Service Mapping and Configuration

The Border Node is the gateway between the SD-Access fabric and external routing domains. Its LISP configuration must map each virtual network (VRF) to a corresponding LISP Instance ID while also establishing external routing adjacencies — typically using EBGP — to exchange prefixes with fusion routers or other external devices.

VRF and Interface Structure

A typical Border Node has the following Layer 3 structure:

  • Global Routing Table (INFRA_VN) — contains the underlay interfaces (physical uplinks, management, loopback). This maps to a base LISP Instance ID (e.g., IID 4097 for IPv4 service in the infrastructure VN).
  • CORP_VN — a user-defined VRF for corporate traffic, with its own VLAN interfaces and a dedicated loopback.
  • IOT_VN — a user-defined VRF for IoT traffic, similarly structured.

Each VRF gets its own set of VLAN interfaces for internal fabric connectivity and a set of VLAN interfaces facing the external routing domain. External BGP peerings are established per-VRF to exchange routes with fusion devices.

Border Node Configuration Example

The following configuration shows how a Border Node maps VRFs, interfaces, BGP peerings, and LISP Instance IDs together:

! Border Node (BN1)
!
vrf definition CORP_VN
!
vrf definition IOT_VN
!
interface Loopback0
 ip address 192.168.8.1 255.255.255.255
!
interface Vlan222
 ip address 172.29.0.114 255.255.255.252
!
interface Vlan236
 vrf forwarding CORP_VN
 ip address 172.29.4.51 255.255.255.248
!
interface Vlan237
 vrf forwarding IOT_VN
 ip address 172.29.4.59 255.255.255.248
!
router bgp 65106
 address-family ipv4
  neighbor 172.29.0.113 activate
 address-family ipv4 vrf CORP_VN
  neighbor 172.29.4.49 activate
 address-family ipv4 vrf IOT_VN
  neighbor 172.29.4.57 activate
!
router lisp
 service ipv4
  etr
  proxy-etr
  proxy-itr 192.168.8.1
 service ethernet
  itr
  etr
 !
 instance-id 4097
  service ipv4
   eid-table default
 !
 instance-id 4099
  service ipv4
   eid-table vrf CORP_VN
 !
 instance-id 4100
  service ipv4
   eid-table vrf IOT_VN

Let us break down the key elements of this configuration:

  1. VRF Definitions — CORP_VN and IOT_VN are defined as separate VRFs, providing Layer 3 isolation between virtual networks.

  2. Loopback0 (192.168.8.1) — serves as the RLOC address for this Border Node. This is the address that will appear in EID-to-RLOC mappings and is used as the proxy-itr address under the LISP IPv4 service.

  3. VLAN Interfaces — Vlan222 sits in the global routing table and provides connectivity to the external routing domain for INFRA_VN. Vlan236 and Vlan237 are placed in their respective VRFs and face the external fusion devices for CORP_VN and IOT_VN.

  4. BGP Configuration — EBGP peerings are established per address-family per VRF. The neighbor in the global table (172.29.0.113) handles infrastructure routes, while VRF-specific neighbors (172.29.4.49 for CORP_VN, 172.29.4.57 for IOT_VN) handle user traffic routes. These external BGP peers are the fusion devices that bridge the SD-Access fabric to the rest of the enterprise network.

  5. LISP Service Configuration — Under router lisp, the Border Node enables:

    • etr and proxy-etr under service IPv4, because as a PxTR it must decapsulate fabric traffic destined for external networks.
    • proxy-itr 192.168.8.1 to encapsulate traffic coming from external networks into the fabric.
    • itr and etr under service Ethernet for Layer 2 operations.
  6. Instance ID Mappings:

    • IID 4097 with eid-table default — maps IPv4 service to the global routing table (INFRA_VN).
    • IID 4099 with eid-table vrf CORP_VN — maps IPv4 service to the CORP_VN virtual network.
    • IID 4100 with eid-table vrf IOT_VN — maps IPv4 service to the IOT_VN virtual network.

Pro Tip: The proxy-itr address must match the Loopback0 address used as the RLOC. If these do not match, traffic returning from external networks will be sourced from an unexpected address, causing LISP encapsulation failures.

Control Plane Node Service Mapping and Configuration

The Control Plane Node (MSMR) is the brain of the SD-Access fabric. It does not forward user data traffic — instead, it maintains the authoritative database of all EID-to-RLOC mappings and manages the publish/subscribe relationships with Edge Nodes and Border Nodes.

Control Plane Node Structure

Unlike Border Nodes and Edge Nodes, the Control Plane Node has a simpler Layer 3 structure:

  • Global Routing Table (INFRA_VN) — contains the management and underlay interfaces (GigabitEthernet1, GigabitEthernet2) and a Loopback interface.
  • No user VRFs — the Control Plane Node does not participate in user data forwarding, so it does not need VRF definitions for CORP_VN, IOT_VN, or any other user virtual network.

However, the Control Plane Node must be aware of every Instance ID in the fabric because it needs to accept and store registrations across all IIDs. Its LISP configuration includes both Layer 3 and Layer 2 IIDs:

  • L3 LISP IIDs: 4097, 4099, 4100 — corresponding to the IPv4 service for each virtual network.
  • L2 LISP IIDs: 8188, 8189, 8190 — corresponding to the Ethernet service for each Layer 2 segment extended across the fabric.

Control Plane Node Configuration Example

! Control Plane Node (CP1)
!
interface Loopback1023
 ip address 172.31.136.1 255.255.255.255
!
router lisp
 !
 service ipv4
  map-server
  map-resolver
 exit-service-ipv4
 !
 service ethernet
  map-server
  map-resolver
 exit-service-ethernet
 !
 site site_uci
  eid-record instance-id 4097 0.0.0.0/0 accept-more-specifics
  eid-record instance-id 4097 172.31.136.0/24 accept-more-specifics
  eid-record instance-id 4099 0.0.0.0/0 accept-more-specifics
  eid-record instance-id 4099 10.4.3.0/24 accept-more-specifics
  eid-record instance-id 4100 0.0.0.0/0 accept-more-specifics
  eid-record instance-id 4100 10.3.3.0/24 accept-more-specifics
  eid-record instance-id 8188 any-mac
  eid-record instance-id 8189 any-mac
  eid-record instance-id 8190 any-mac
  allow-locator-default-etr instance-id 4097 ipv4
  allow-locator-default-etr instance-id 4099 ipv4
  allow-locator-default-etr instance-id 4100 ipv4

Let us examine each section:

  1. Loopback1023 (172.31.136.1) — this is the RLOC address for the Control Plane Node. Edge Nodes and Border Nodes point to this address when registering their EIDs or subscribing to updates.

  2. Service IPv4 — map-server and map-resolver are enabled, making this device the authoritative server for IPv4 EID registrations and the resolver for IPv4 EID lookups.

  3. Service Ethernet — similarly, map-server and map-resolver are enabled for Layer 2 MAC EID registrations and lookups.

  4. Site Definition (site_uci) — this is where the Control Plane Node defines which EID registrations it will accept. Each eid-record line specifies an Instance ID and a prefix (or any-mac for Layer 2) along with the accept-more-specifics keyword, which tells the Map Server to accept any host route that falls within the specified range.

  5. EID Records per IID:

    • IID 4097: Accepts 0.0.0.0/0 (default route registrations) and 172.31.136.0/24 with more specifics — this covers the INFRA_VN.
    • IID 4099: Accepts 0.0.0.0/0 and 10.4.3.0/24 with more specifics — this covers CORP_VN.
    • IID 4100: Accepts 0.0.0.0/0 and 10.3.3.0/24 with more specifics — this covers IOT_VN.
    • IID 8188, 8189, 8190: Accepts any-mac — these are the Layer 2 Ethernet IIDs, and the Control Plane Node will accept MAC registrations from any endpoint in these segments.
  6. allow-locator-default-etr — this command, applied per L3 IID, permits Border Nodes to register a default route (0.0.0.0/0) as a "default ETR" for each virtual network. This is how the fabric knows where to send traffic destined for prefixes that are not locally registered — the Border Node advertises itself as the default exit point.

Pro Tip: The accept-more-specifics keyword is critical. Without it, the Map Server would only accept exact prefix matches. Since SD-Access registers individual host routes (/32 for IPv4, /128 for IPv6), the Map Server must be configured to accept more specific entries under the defined aggregate.
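On IOS-XE you can confirm that host-route registrations are actually landing under the configured aggregates with the site-table show commands (output fields vary by release):

! On the Control Plane Node
show lisp site                       ! all registered EIDs, with the registering RLOC per entry
show lisp site instance-id 4099      ! CORP_VN registrations only — expect /32 host routes here
show lisp site detail                ! per-registration detail, including who last registered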

Edge Node Service Mapping in LISP SD-Access

The Edge Node is where endpoints connect to the SD-Access fabric. In LISP terms, it functions as an xTR — combining both the Ingress Tunnel Router (ITR) and Egress Tunnel Router (ETR) roles on a single device.

Edge Node Structure

The Edge Node's Layer 3 structure mirrors that of the Border Node in many ways:

  • Global Routing Table (INFRA_VN) — contains the underlay interfaces (physical uplinks like FourHundredGigabitEthernet1 and FourHundredGigabitEthernet2), infrastructure VLANs (VLAN 10, VLAN 11), and Loopback0.
  • CORP_VN — a VRF containing user-facing VLANs (VLAN 20, VLAN 21) and a dedicated Loopback (Loopback 4097).
  • IOT_VN — a VRF containing IoT VLANs (VLAN 30, VLAN 31) and a dedicated Loopback (Loopback 4098).

The critical difference from the Border Node is in the Layer 2 LISP configuration. Because the Edge Node is where endpoints physically attach, it must run the Ethernet service for every VLAN that is extended across the fabric. Each VLAN maps to its own Layer 2 Instance ID:

  • IID 8188 — Service Ethernet (e.g., VLAN 20)
  • IID 8189 — Service Ethernet (e.g., VLAN 21)
  • IID 8190 — Service Ethernet (e.g., VLAN 30)
  • IID 8191 — Service Ethernet (e.g., VLAN 31)
  • IID 8192 — Service Ethernet (e.g., another extended VLAN)

Each of these L2 IIDs carries MAC address registrations for all endpoints connected to that VLAN on this Edge Node. When an endpoint sends its first frame, the Edge Node learns the MAC address, registers it as an EID with the Control Plane Node under the appropriate L2 IID, and the Control Plane Node publishes that mapping to all subscribers.

How Edge Nodes Handle Endpoint Registration

When a new endpoint connects to an Edge Node, the following sequence occurs:

  1. The endpoint sends traffic (ARP, DHCP, or data) on its access VLAN.
  2. The Edge Node learns the source MAC address and, once an IP is assigned, the source IP address.
  3. The Edge Node registers the MAC EID under the corresponding L2 IID (e.g., IID 8189) with the Control Plane Node.
  4. The Edge Node registers the IPv4 EID (and/or IPv6 EID) under the corresponding L3 IID (e.g., IID 4099 for CORP_VN) with the Control Plane Node.
  5. The Control Plane Node stores these mappings and publishes them to all subscribed nodes.

This dual registration — both MAC and IP — is what enables the fabric to perform both Layer 2 and Layer 3 lookups, supporting both bridged (intra-subnet) and routed (inter-subnet) forwarding within the overlay.
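Both halves of the dual registration can be verified on the Edge Node itself (IOS-XE show commands; the IIDs follow this article's examples):

! On the Edge Node
show lisp instance-id 8189 ethernet database   ! locally registered MAC EIDs for that VLAN
show lisp instance-id 4099 ipv4 database       ! locally registered IPv4 host routes for CORP_VN

An endpoint that appears in the ethernet database but not the ipv4 database usually has not yet completed DHCP or ARP, so no IP EID has been learned.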

How Do VRFs Map to LISP Instance IDs in SD-Access?

Understanding the mapping between VRFs and LISP Instance IDs is fundamental to both designing and troubleshooting SD-Access fabrics. Here is a summary of how this mapping works based on the configuration examples we have examined:

  • Global Routing Table (INFRA_VN) — L3 LISP IID 4097; L2 LISP IIDs: none; purpose: underlay infrastructure and management.
  • CORP_VN — L3 LISP IID 4099; L2 LISP IIDs 8188 and 8189; purpose: corporate user traffic.
  • IOT_VN — L3 LISP IID 4100; L2 LISP IIDs 8190, 8191, and 8192; purpose: IoT device traffic.

Each VRF is assigned exactly one L3 IID that carries all IPv4 (and IPv6) host routes for endpoints in that virtual network. However, a single VRF may contain multiple VLANs (subnets), and each VLAN gets its own L2 IID. This means a VRF with five VLANs will have one L3 IID and five L2 IIDs.

The dedicated Loopback per VRF (e.g., Loopback 4097 for CORP_VN, Loopback 4098 for IOT_VN) serves as the anycast gateway source for that virtual network. This loopback address is used as the source for LISP registrations within that VRF's IID scope.

Pro Tip: When troubleshooting reachability issues within a specific virtual network, always start by verifying that the L3 IID on the Edge Node maps to the correct VRF using show running-config | section router lisp. A misconfigured eid-table statement that points an IID to the wrong VRF is a common cause of silent forwarding failures.

Understanding the Default ETR Concept in LISP SD-Access

One of the more nuanced aspects of LISP SD-Access forwarding is the concept of the default ETR. In the Control Plane Node configuration, you saw the command:

allow-locator-default-etr instance-id 4097 ipv4
allow-locator-default-etr instance-id 4099 ipv4
allow-locator-default-etr instance-id 4100 ipv4

This command permits a Border Node to register a default route (0.0.0.0/0) as an EID within each Layer 3 Instance ID. When a Border Node registers a default route, it is effectively telling the fabric: "If you have traffic for a destination that no Edge Node has registered, send it to me — I will forward it to the external routing domain."

This is how traffic exits the SD-Access fabric. When an Edge Node receives a packet from a local endpoint destined for an IP address that is not registered in the Control Plane Node's database, the Map Resolver returns the default ETR mapping, which points to the Border Node's RLOC. The Edge Node then LISP-encapsulates the packet and sends it to the Border Node, which decapsulates it and forwards it via the appropriate VRF and BGP peering to the external network.

The default ETR is configured on the Border Node through the proxy-etr command under service ipv4:

router lisp
 service ipv4
  etr
  proxy-etr
  proxy-itr 192.168.8.1

The proxy-etr keyword enables the Border Node to act as a Proxy Egress Tunnel Router, accepting encapsulated traffic on behalf of external (non-LISP) destinations. Combined with the allow-locator-default-etr on the Control Plane Node, this creates the complete path for fabric-to-external forwarding.

LISP SD-Access and External Routing: The Role of Fusion Devices

The Border Node does not connect directly to the WAN or data centre in most SD-Access designs. Instead, it peers with fusion devices — routers or Layer 3 switches that sit between the fabric border and the rest of the enterprise network.

From the Border Node configuration, we can see that EBGP is used to exchange routes with fusion devices:

router bgp 65106
 address-family ipv4
  neighbor 172.29.0.113 activate
 address-family ipv4 vrf CORP_VN
  neighbor 172.29.4.49 activate
 address-family ipv4 vrf IOT_VN
  neighbor 172.29.4.57 activate

Each VRF has its own EBGP peering to the fusion device. This per-VRF peering ensures that route leaking between virtual networks does not happen at the border — each virtual network's routes are exchanged independently.

The VLAN interfaces carrying these peerings sit on small, dedicated subnets:

  • Vlan222 (172.29.0.114/30) — INFRA_VN peering to fusion device at 172.29.0.113
  • Vlan236 (172.29.4.51/29) — CORP_VN peering to fusion device at 172.29.4.49
  • Vlan237 (172.29.4.59/29) — IOT_VN peering to fusion device at 172.29.4.57

The fusion device is responsible for maintaining the separation between virtual networks on the external side. If inter-VN communication is required (for example, allowing CORP_VN users to reach a shared services subnet in IOT_VN), the fusion device can be configured with route leaking policies. Alternatively, SD-Access Extranet policies can handle this within the fabric itself, with the Control Plane Node resolving cross-IID map requests.
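As an illustration of the fusion-device option, inter-VRF leaking is commonly expressed with route targets under the VRF definitions. The sketch below is illustrative only — the RD and route-target values are placeholders, and BGP must be running on the fusion device for the import/export to take effect:

! Fusion device sketch: leaking shared-services routes between VNs
vrf definition CORP_VN
 rd 65106:4099
 address-family ipv4
  route-target export 65106:4099
  route-target import 65106:4099
  route-target import 65106:4100   ! pull IOT_VN shared-services routes into CORP_VN
!
vrf definition IOT_VN
 rd 65106:4100
 address-family ipv4
  route-target export 65106:4100
  route-target import 65106:4100
  route-target import 65106:4099   ! pull CORP_VN routes into IOT_VN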

How Does EID Registration and Resolution Work End-to-End?

Let us walk through a complete end-to-end scenario to tie together all the concepts covered so far. Consider the following topology:

  • Edge Node EN1 (RLOC: 192.168.1.1) has an endpoint with MAC 1111.2222.3333, IPv4 10.10.10.10, IPv6 fc00::10 in CORP_VN.
  • Edge Node EN2 (RLOC: 192.168.1.2) has an endpoint with MAC 4444.5555.6666, IPv4 10.10.10.11, IPv6 fc00::11 in CORP_VN.
  • Control Plane Node CP1 (RLOC: 192.168.1.3) serves as the MSMR.
  • Border Node BN1 (RLOC: 192.168.8.1) connects to the external network.

Step 1: EID Registration

When endpoint 10.10.10.10 connects to EN1:

  • EN1 registers MAC EID 1111.2222.3333 with CP1 under L2 IID 8189, mapping it to RLOC 192.168.1.1.
  • EN1 registers IPv4 EID 10.10.10.10/32 with CP1 under L3 IID 4099 (CORP_VN), mapping it to RLOC 192.168.1.1.
  • CP1 stores these mappings and publishes them to all subscribers.

Similarly, when endpoint 10.10.10.11 connects to EN2:

  • EN2 registers MAC 4444.5555.6666 and IPv4 10.10.10.11 with CP1, both mapped to RLOC 192.168.1.2.

Step 2: Intra-Fabric Forwarding (EN1 to EN2)

When endpoint 10.10.10.10 on EN1 wants to reach 10.10.10.11 on EN2:

  1. EN1 (acting as ITR) looks up the destination EID 10.10.10.11 in its local map cache.
  2. If the mapping exists (received via publish/subscribe), EN1 knows the destination RLOC is 192.168.1.2.
  3. EN1 LISP-encapsulates the original packet with a new outer IP header: source RLOC 192.168.1.1, destination RLOC 192.168.1.2.
  4. The encapsulated packet is routed through the underlay to EN2.
  5. EN2 (acting as ETR) receives the encapsulated packet, decapsulates it, and delivers the original packet to endpoint 10.10.10.11.

Step 3: Fabric-to-External Forwarding

When endpoint 10.10.10.10 on EN1 wants to reach an external destination (e.g., 8.8.8.8):

  1. EN1 looks up EID 8.8.8.8 in its map cache.
  2. No specific registration exists, but the default ETR mapping points to BN1 (RLOC 192.168.8.1).
  3. EN1 encapsulates the packet and sends it to BN1.
  4. BN1 (acting as Proxy ETR) decapsulates the packet and performs a VRF lookup in CORP_VN.
  5. BN1 forwards the packet to its EBGP peer (fusion device at 172.29.4.49) for routing to the external destination.

This end-to-end flow demonstrates the elegance of the LISP SD-Access architecture: the underlay handles simple RLOC-to-RLOC routing, while the overlay (LISP) handles all endpoint identity resolution and policy enforcement.
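Each step of this flow can be checked with IOS-XE show commands on the node in question (addresses taken from the example topology above):

show lisp instance-id 4099 ipv4 database               ! on EN1 — local host routes, incl. 10.10.10.10/32
show lisp site instance-id 4099                        ! on CP1 — did the registration arrive?
show lisp instance-id 4099 ipv4 map-cache              ! on EN1 — resolved mappings, incl. the default ETR entry
show lisp instance-id 4099 ipv4 map-cache 10.10.10.11  ! on EN1 — the specific entry for the remote endpoint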

Frequently Asked Questions

What is the difference between an EID and an RLOC in LISP SD-Access?

An EID (Endpoint ID) is the identity of an endpoint — its MAC address, IPv4 address, or IPv6 address. An RLOC (Routing Locator) is the routable address of the fabric node where that endpoint is currently attached (typically the Loopback0 address of an Edge Node or Border Node). LISP maps EIDs to RLOCs so the underlay only needs to carry RLOC routes, not individual host routes for every endpoint. This separation is what enables host mobility and scalability in SD-Access.

Why does SD-Access use separate Instance IDs for Layer 2 and Layer 3?

SD-Access needs to track both MAC addresses (for Layer 2 switching within a VLAN) and IP addresses (for Layer 3 routing between VLANs). Layer 2 Instance IDs (in the 8188+ range) handle MAC EID registrations per VLAN, while Layer 3 Instance IDs (in the 4096+ range) handle IP EID registrations per VRF. A single VRF may span multiple VLANs, so there is a one-to-many relationship between L3 IIDs and L2 IIDs. This dual-IID approach allows the fabric to support both bridged and routed forwarding scenarios within the same overlay.

What role does the Control Plane Node play in SD-Access forwarding?

The Control Plane Node functions as the MSMR (Map Server / Map Resolver) in LISP. It does not forward any user data traffic. Instead, it maintains the authoritative database of all EID-to-RLOC mappings for every Instance ID in the fabric. Edge Nodes and Border Nodes register their local EIDs with the Control Plane Node and subscribe to updates. When a new endpoint is registered, the Control Plane Node publishes the mapping to all subscribers. It also handles Map Requests for EID resolution, including cross-IID lookups when Extranet policies are configured.

How does traffic leave the SD-Access fabric through the Border Node?

The Border Node acts as a PxTR (Proxy xTR) in LISP. It registers a default route (0.0.0.0/0) as a "default ETR" in each Layer 3 Instance ID. When an Edge Node has traffic for a destination not registered in the fabric, the Control Plane Node resolves the lookup to the Border Node's RLOC. The Edge Node encapsulates the packet and sends it to the Border Node, which decapsulates it, performs a VRF routing lookup, and forwards it to the external network via EBGP peering with a fusion device.

What is the purpose of the accept-more-specifics keyword in the Control Plane Node configuration?

The accept-more-specifics keyword tells the Map Server to accept EID registrations that are more specific (longer prefix) than the configured eid-record prefix. Since SD-Access registers individual host routes (/32 for IPv4), the Map Server needs this keyword to accept those specific entries under a broader aggregate. Without it, only exact prefix matches would be accepted, and host route registrations would fail.

Can endpoints in different virtual networks communicate in SD-Access?

By default, virtual networks (Instance IDs) in SD-Access are completely isolated from each other. Traffic in IID 4099 (CORP_VN) cannot reach endpoints in IID 4100 (IOT_VN). However, if inter-VN communication is required, it can be achieved through Extranet policies configured on the Control Plane Node, which allow cross-IID map resolution. Alternatively, route leaking can be performed on the external fusion devices that peer with the Border Node via EBGP.

Conclusion

The LISP SD-Access forwarding architecture is a sophisticated yet elegant system that separates endpoint identity from network location, enabling scalable, secure, and agile campus networks. In this article, we have explored the full depth of this architecture — from the fundamental EID-to-RLOC mapping concept, through LISP virtualisation with Instance IDs, to the detailed service mappings on Border Nodes, Control Plane Nodes, and Edge Nodes.

The key takeaways are:

  • LISP provides the control plane for SD-Access, mapping endpoint identities (EIDs) to fabric node locations (RLOCs).
  • Instance IDs deliver virtualisation, with separate L3 IIDs for each VRF and L2 IIDs for each extended VLAN.
  • The publish/subscribe model ensures rapid convergence when endpoints move or new hosts connect.
  • Border Nodes bridge the fabric to external networks using per-VRF EBGP peerings and the default ETR mechanism.
  • Control Plane Nodes maintain the authoritative EID-to-RLOC database and manage registrations across all Instance IDs.
  • Edge Nodes register local endpoints and perform LISP encapsulation/decapsulation for all overlay traffic.

Mastering these concepts is essential for anyone pursuing the CCIE Enterprise Infrastructure certification or deploying SD-Access in production environments. The interplay between LISP roles, Instance IDs, VRFs, and the publish/subscribe mechanism is tested heavily in both the written and lab exams.

To deepen your understanding of SD-Access design and deployment, explore the training courses available at NHPREP. Hands-on practice with real configurations — like the Border Node, Control Plane Node, and Edge Node examples covered in this article — is the fastest path to mastery.