Lesson 2 of 5

DMVPN Phase 1, 2, and 3 Configuration

Introduction

Dynamic Multipoint VPN, commonly known as DMVPN, is one of the most important tunnel technologies used in modern enterprise networks. When you need to connect dozens or even hundreds of remote sites back to a central headquarters, building individual point-to-point tunnels becomes unmanageable. DMVPN solves this by allowing tunnels to be created dynamically and on demand, reducing configuration complexity while maintaining security through IPsec encryption.

DMVPN sits alongside other tunnel technologies such as manually configured tunnels, IPv6 over GRE, LISP, and standalone IPsec tunnels. What sets DMVPN apart is its ability to combine Generic Routing Encapsulation (GRE) with Next Hop Resolution Protocol (NHRP) to create a scalable multipoint tunnel architecture. When paired with IPsec, DMVPN delivers both flexibility and confidentiality across untrusted transport networks.

In this lesson, you will learn:

  • The fundamental differences between DMVPN Phase 1, Phase 2, and Phase 3
  • How NHRP resolution drives spoke-to-spoke communication
  • How GRE and IPsec work together inside the DMVPN framework
  • The role of tunnel interfaces and multipoint GRE in each phase
  • How tunnel stability mechanisms protect against path flapping

Key Concepts

Before diving into configuration, it is essential to understand the building blocks that make DMVPN work. Each component plays a distinct role in establishing and maintaining the overlay network.

Tunnel Technologies Overview

DMVPN is part of a broader family of tunnel technologies used to connect sites across service provider or internet infrastructure. The following table compares the primary tunnel options available on Cisco IOS platforms:

Tunnel Technology           | Type                          | Key Characteristic
Manually Configured Tunnels | Point-to-point                | Static tunnel endpoints; simple but does not scale
IPv6 over GRE               | Point-to-point or multipoint  | Carries IPv6 traffic inside GRE across an IPv4 backbone
IPsec Tunnels               | Point-to-point                | Provides encryption and authentication; commonly paired with GRE
LISP                        | Overlay                       | Separates locator and identifier for host mobility
DMVPN                       | Multipoint                    | Dynamic tunnel creation using NHRP over multipoint GRE with optional IPsec

DMVPN Phases Compared

DMVPN operates in three distinct phases. Each phase builds on the previous one, adding capabilities for more efficient traffic forwarding.

Attribute                   | Phase 1           | Phase 2                                | Phase 3
Hub-to-Spoke Tunnels        | Yes               | Yes                                    | Yes
Spoke-to-Spoke Tunnels      | No                | Yes (direct)                           | Yes (dynamic, on-demand)
Traffic Path Between Spokes | Through hub       | Direct after NHRP resolution           | Direct after NHRP redirect
NHRP Role                   | Registration only | Resolution requests between spokes     | Hub sends NHRP redirect to trigger direct tunnels
Routing Requirement         | Hub is next hop   | Spokes must see each other's prefixes  | Hub remains next hop in routing table; NHRP overrides forwarding
Scalability                 | Low               | Moderate                               | High

Core Protocol Components

  • Multipoint GRE (mGRE): A single tunnel interface on the hub that accepts connections from multiple spokes. Instead of configuring one tunnel per remote site, a single mGRE interface handles all of them.
  • NHRP: The protocol that maps tunnel IP addresses to the underlying transport (NBMA) addresses. Spokes register their mappings with the hub, which acts as the Next Hop Server (NHS).
  • IPsec: Encrypts and authenticates the GRE tunnel traffic. Redundant IPsec-protected tunnels can be established so that encrypted connectivity survives the failure of a single transport path.

How It Works

Phase 1: Hub-and-Spoke Only

In Phase 1, all traffic between spokes must traverse the hub. Each spoke builds a GRE tunnel to the hub and registers its NHRP mapping. The hub maintains a table of all spoke-to-NBMA address mappings. When Spoke A needs to reach Spoke B, the packet travels from Spoke A to the hub, and the hub forwards it to Spoke B. There is no direct spoke-to-spoke path.

This phase is the simplest to deploy but has a significant drawback: the hub becomes a bottleneck for all inter-spoke traffic, and latency doubles because every packet crosses the WAN twice.

Phase 2: Direct Spoke-to-Spoke

Phase 2 introduces the ability for spokes to communicate directly. When Spoke A needs to reach Spoke B, it sends an NHRP Resolution Request to the NHS (the hub). The hub responds with the NBMA address of Spoke B. Spoke A then builds a dynamic GRE tunnel directly to Spoke B and forwards traffic over that path.

The challenge with Phase 2 is routing. The routing protocol must advertise spoke-specific prefixes so that each spoke knows the other spoke is the actual next hop. This means the hub cannot summarize routes, which limits scalability in large deployments.
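As a sketch of what Phase 2 demands of the overlay routing protocol (assuming EIGRP with autonomous system number 1, both hypothetical choices), the hub must stop rewriting next hops and must disable split horizon on the mGRE interface so spoke prefixes are advertised back out with the originating spoke as next hop:

interface Tunnel0
 no ip split-horizon eigrp 1
 no ip next-hop-self eigrp 1

Without these commands, spokes would see the hub as the next hop for every prefix and all inter-spoke traffic would continue to flow through the hub, as in Phase 1.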

Phase 3: NHRP Shortcut and Redirect

Phase 3 is the most scalable design. The hub remains the next hop in the routing table, which means route summarization is fully supported. When Spoke A sends traffic to Spoke B through the hub, the hub detects that a more efficient path exists and sends an NHRP Redirect message back to Spoke A. Simultaneously, the hub forwards the original packet to Spoke B.

Upon receiving the NHRP redirect, Spoke A sends an NHRP Resolution Request for Spoke B's tunnel address. The request follows the routed path, and Spoke B replies directly to Spoke A with its NBMA address. Once resolved, a dynamic spoke-to-spoke tunnel is created, and subsequent packets flow directly between the two spokes without touching the hub. This mechanism is called NHRP shortcut routing, and it allows the hub to maintain summarized routing while still enabling optimal spoke-to-spoke forwarding.

Tunnel Stability and Dampening

In production environments, tunnel interfaces can experience instability due to underlying transport issues. When a tunnel flaps repeatedly, it creates routing churn that affects convergence across the entire DMVPN cloud.

Tunnel dampening mechanisms help manage this instability. The concept follows a state machine model similar to how interface dampening works elsewhere in IOS:

State         | Behavior
Stable        | Tunnel is up and forwarding normally
Flap detected | Timer starts; tunnel must remain stable for the full dampening window to exit suppression
Dampening     | Tunnel has flapped; suppressed for a configured duration before being allowed back

When a tunnel goes out of SLA and then returns, a dampening timer starts. If the tunnel remains stable for the configured dampening duration, the timer is canceled and the tunnel returns to full forwarding. If it flaps again during the dampening window, the timer restarts. This prevents a constantly flapping spoke from destabilizing the hub's routing table.
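One way to realize this behavior on Cisco IOS is IP Event Dampening applied to the tunnel interface. A minimal sketch, using the platform's default half-life, reuse, and suppress values:

interface Tunnel0
 dampening

Optional parameters allow the half-life, reuse threshold, suppress threshold, and maximum suppress time to be tuned; the defaults are a reasonable starting point, and any tuned values should be validated against the stability characteristics of the underlying transport.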

Configuration Example

Hub Router Tunnel Interface

The hub router uses a multipoint GRE tunnel interface. This single interface serves all spoke connections. The tunnel source should be the physical interface facing the transport network, and the tunnel mode is set to multipoint GRE.

interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 ip nhrp network-id 1
 ip nhrp map multicast dynamic

  • The tunnel mode gre multipoint command enables mGRE so multiple spokes can connect to this single interface.
  • The ip nhrp network-id 1 command assigns an NHRP network identifier. The network ID is locally significant to each router, but by convention every router in the same DMVPN cloud is configured with the same value.
  • The ip nhrp map multicast dynamic command allows the hub to dynamically learn which spokes should receive multicast and broadcast traffic, which is essential for routing protocol adjacencies.

Spoke Router Tunnel Interface

Each spoke configures a tunnel pointing to the hub's NBMA address and registers itself via NHRP.

interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 tunnel source GigabitEthernet0/0
 tunnel destination 203.0.113.1
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1
 ip nhrp map 10.0.0.1 203.0.113.1
 ip nhrp map multicast 203.0.113.1

  • The tunnel destination command points the spoke at the hub's public (NBMA) IP address, making this a point-to-point GRE tunnel suitable for Phase 1.
  • The ip nhrp nhs command identifies the hub as the Next Hop Server for NHRP registration and resolution.
  • The static ip nhrp map entry tells the spoke how to reach the hub's tunnel address via its NBMA address.

Enabling Phase 3 on the Hub

To enable Phase 3 behavior, the hub must be configured to send NHRP redirects, and spokes must accept NHRP shortcuts.

On the hub:

interface Tunnel0
 ip nhrp redirect

On each spoke, the static tunnel destination is removed and the interface is converted to multipoint GRE, since a point-to-point tunnel cannot terminate dynamic spoke-to-spoke tunnels. NHRP shortcut is then enabled:

interface Tunnel0
 no tunnel destination
 tunnel mode gre multipoint
 ip nhrp shortcut

Important: In Phase 3, the routing protocol on the hub can use route summarization because the NHRP redirect and shortcut mechanism overrides the routing table for spoke-to-spoke forwarding. This is the key scalability advantage over Phase 2.
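For example (assuming EIGRP AS 1 and a hypothetical branch address space of 10.1.0.0/16), the hub can advertise a single summary toward all spokes instead of per-spoke prefixes:

interface Tunnel0
 ip summary-address eigrp 1 10.1.0.0 255.255.0.0

Each spoke then holds one summary route via the hub; the NHRP redirect and shortcut process installs the more specific spoke-to-spoke path on demand.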

Verification Commands

After configuration, verify NHRP registrations and tunnel status:

show ip nhrp
show dmvpn
show ip nhrp nhs

These commands display the current NHRP mappings, tunnel states, and NHS reachability. On the hub, show ip nhrp should list all registered spokes with their tunnel and NBMA addresses. On spokes, show dmvpn reveals whether the spoke has an active spoke-to-spoke tunnel after NHRP resolution completes.

Real-World Application

Enterprise Branch Connectivity

DMVPN is widely deployed in enterprise networks where a headquarters or data center needs to connect to dozens or hundreds of branch offices. The multipoint architecture means the hub configuration stays constant regardless of how many spokes are added. New branch routers simply register via NHRP, and the overlay network grows without touching the hub.

Dual-Stack Considerations

In environments transitioning to IPv6, DMVPN tunnels can carry both IPv4 and IPv6 traffic across a dual-stack or even IPv4-only transport network. Running dual stack is generally the preferred transition method because of its versatility, scalability, and performance: both protocols run in parallel on the same hardware without depending on tunneling, MTU adjustments, NAT, or other performance-degrading mechanisms. The trade-off is that every device in the path must support both protocol stacks.

QoS Over DMVPN Tunnels

Applying quality of service policies to DMVPN tunnel interfaces ensures that critical traffic receives priority treatment. The same QoS design principles apply: mark traffic at the edge, maintain trust boundaries, and size queues appropriately. A typical policy applied to a DMVPN tunnel interface might classify traffic using DSCP values:

class-map match-any Critical_Data
 match dscp af21
class-map match-any Voice
 match dscp ef
class-map match-all Scavenger
 match dscp cs1
class-map match-any Bulk_Data
 match dscp af11
!
policy-map DISTRIBUTION
 class Voice
  priority percent 10
 class Critical_Data
  bandwidth percent 25
  random-detect dscp-based
 class Bulk_Data
  bandwidth percent 4
  random-detect dscp-based
 class Scavenger
  bandwidth percent 1
 class class-default
  bandwidth percent 25
  random-detect

Best Practice: Class maps can match both IPv4 and IPv6 traffic simultaneously, or they can be separated into distinct ip and ipv6 match statements for granular control. Ensure QoS policies are applied consistently at both the hub and spoke tunnel interfaces to avoid asymmetric treatment.
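The policy above is attached to the tunnel interface in the outbound direction, the same way it would be on a physical interface (DISTRIBUTION is the policy-map name defined in the example above):

interface Tunnel0
 service-policy output DISTRIBUTION

The same attachment is repeated on each spoke's tunnel interface so traffic is treated consistently in both directions.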

IPsec Protection

In production deployments, DMVPN tunnels should always be protected with IPsec. Multiple highly available IPsec tunnels provide redundancy and ensure that if one transport path fails, encrypted connectivity is maintained over an alternate path. IPsec adds encryption and authentication to the GRE-encapsulated traffic, protecting data as it crosses untrusted internet or service provider networks.
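A minimal sketch of IPsec protection for the tunnel follows; the pre-shared key, profile names, and cipher choices are placeholders to be adapted to local security policy. Transport mode is used because the GRE encapsulation already carries the original IP header:

crypto isakmp policy 10
 encryption aes 256
 hash sha256
 authentication pre-share
 group 14
crypto isakmp key DMVPN_KEY address 0.0.0.0 0.0.0.0
!
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha256-hmac
 mode transport
!
crypto ipsec profile DMVPN-PROFILE
 set transform-set DMVPN-TS
!
interface Tunnel0
 tunnel protection ipsec profile DMVPN-PROFILE

The tunnel protection command is applied identically on the hub and all spokes so that every GRE tunnel in the cloud, including dynamic spoke-to-spoke tunnels, is encrypted.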

Summary

  • DMVPN Phase 1 provides hub-and-spoke connectivity where all inter-spoke traffic flows through the hub. It is the simplest to configure but does not scale well for spoke-to-spoke communication.
  • DMVPN Phase 2 enables direct spoke-to-spoke tunnels through NHRP resolution, but requires specific routing configurations that prevent the hub from summarizing routes.
  • DMVPN Phase 3 uses NHRP redirect and shortcut mechanisms to achieve direct spoke-to-spoke forwarding while still allowing route summarization at the hub, making it the most scalable option.
  • Tunnel dampening prevents routing instability caused by flapping transport links by suppressing unstable tunnels for a configured duration.
  • QoS policies should be applied to DMVPN tunnel interfaces using the same design principles as physical interfaces: mark at the edge, enforce trust boundaries, and allocate bandwidth by traffic class.

In the next lesson, we will explore advanced DMVPN designs including redundant hub topologies and integration with dynamic routing protocols for automatic failover across the DMVPN cloud.