Lesson 7 of 7

Migration Strategy: Legacy to SD-WAN

Objective

Plan and execute a safe migration strategy from legacy site-to-site VPN technologies (DMVPN / FlexVPN) to a modern SD-WAN architecture while maintaining service continuity during coexistence. We will show the essential configuration changes and verification commands you need when introducing SD‑WAN-managed tunnels and when using Policy-Based Routing (PBR) and data-plane controls on firewalls during the transition.

In production, phased migrations avoid downtime for hundreds of branch sites. This lesson shows the practical steps and verification you need to keep traffic flowing while you onboard sites to SD‑WAN. Real-world scenario: an enterprise with a DMVPN hub-and-spoke network wants to move to SD‑WAN but must keep some spokes on DMVPN for months during testing.

Quick Recap

This lesson references the same topology used in Lesson 1 (Hub DMVPN, multiple Branch routers, and one central firewall / FTD). No new physical devices are added in this lesson. We will show how the branch/firewall side is prepared to coexist with SD‑WAN and how to move management to a data interface when using a virtual FTD (FTDv) during onboarding.

ASCII topology — devices and management identities only (use your topology from Lesson 1). All IPs below are examples used for this lesson and must align to your lab topology when you run commands.

FTD-HUB (central firewall) — outside: 198.51.100.10
Branch-RTR-01 (DMVPN spoke) — outside: 198.51.100.101
SD-WAN-Controller — mgmt: 10.10.10.10 (controller reachable to orchestrate SD‑WAN policies)
Branch-LAN networks remain unchanged.

FTD / ASA-like firewall and routers remain as in Lesson 1. For naming and examples below we use the domain lab.nhprep.com and password Lab@123 for orchestration credentials where appropriate.

Key Concepts (before hands-on)

  • Coexistence vs. Cutover: Coexistence means running DMVPN/FlexVPN and SD‑WAN simultaneously during migration. This requires explicit traffic steering so that flows continue to use the legacy tunnels until a site is switched. Think of coexistence like running two parallel road systems and guiding cars onto one as routes are re-signed.

  • Control vs. Data Plane Differences: SD‑WAN introduces a centralized control plane (controller/orchestrator) and distributed edge data plane. Legacy DMVPN uses decentralized control (NHRP + dynamic tunnels). During migration you must ensure routing and forwarding decisions map correctly—control-plane routes can be redistributed; data-plane encapsulation must match endpoints.

  • Use of VTI/DVTI and routing protocols: Modern firewalls (ASA 9.20+ / FTD 7.4+) support VTI/DVTI for scalable hub-and-spoke VPNs. In production, DVTI hubs require a dynamic routing protocol (such as eBGP or iBGP) to distribute routes between spokes; without one, hub-to-spoke route propagation is manual and error-prone.

  • Policy-Based Routing (PBR) on FTD: PBR lets you steer selected traffic to specific next-hops or interfaces — often used during migration to keep management/control traffic or test traffic on legacy tunnels while bulk traffic moves to SD‑WAN paths. On FTD, PBR decisions generate syslog events (e.g., FTD-6-880001) that you can monitor.

  • Data interface management for virtual FTDs (FTDv): Virtual firewalls may require enabling management on an external/data interface so they can reach controllers (cdFMC) during provisioning. The CLI provides the configure network management-data-interface command for this. It is crucial when you cannot use the default management interface, as in cloud IaaS or other virtual deployments.
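The coexistence steering described above can be modeled in a few lines: PBR overrides are checked first, and only non-matching traffic falls through to the normal longest-prefix route lookup. This is a minimal illustrative sketch, not device behavior; the addresses match this lesson's examples and the helper names are hypothetical.

```python
import ipaddress

# Destination routing table: (prefix, next_hop). The /16 entry models the
# temporary static route toward the SD-WAN edge (Step 3 of this lesson).
ROUTES = [
    (ipaddress.ip_network("10.20.0.0/16"), "10.10.10.1"),   # SD-WAN edge
    (ipaddress.ip_network("0.0.0.0/0"), "198.51.100.1"),    # default route
]

# PBR entries: source host -> forced next-hop (the legacy DMVPN spoke).
PBR = {"10.10.100.50": "198.51.100.101"}

def next_hop(src: str, dst: str) -> str:
    """PBR is evaluated before the destination route lookup."""
    if src in PBR:
        return PBR[src]
    dst_ip = ipaddress.ip_address(dst)
    # Longest-prefix match over the routing table.
    matches = [(net, hop) for net, hop in ROUTES if dst_ip in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop
```

Running `next_hop("10.10.100.50", "10.20.5.5")` returns the DMVPN spoke, while any other source headed for 10.20.0.0/16 returns the SD‑WAN edge; this is exactly the split the migration relies on.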

Step-by-step configuration

We will perform five practical steps: enabling data-interface management on FTDv, creating a PBR policy for coexistence testing, adding a temporary static route to the SD‑WAN next-hop, validating the data plane with packet-tracer, and monitoring logs and telemetry.

Note: The CLI examples below are representative for FTD/ASA-style devices and IOS routers. Replace interface names and IPs with those in your lab topology.

Step 1: Enable management on the data (outside) interface for FTDv

What we are doing: Allow the virtual FTD to use its outside/data interface for management connectivity to the controller (cdFMC / SD‑WAN controller). This is required for virtual appliances that don’t have a separate management network or when provisioning from the orchestrator is needed.

configure network management-data-interface

What just happened: This command binds management-plane services (such as controller onboarding and central management connectivity) to a data interface; on FTD it runs interactively and prompts for the interface name (here "outside") and its addressing. The firewall will then use the IP assigned on the outside interface to reach the controller rather than a dedicated management interface.

Real-world note: In cloud deployments, the virtual firewall often lacks a separate management NIC; enabling management on a data interface avoids creating extra network paths.

Verify:

show network
Interface outside: IP address = 198.51.100.10
Management interface = outside

What to expect: The output shows that the outside interface now serves as the management interface, along with its configured IP. Once this is set, the orchestrator can reach the device at 198.51.100.10 for onboarding.
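Before onboarding, it is worth confirming from an admin workstation that the management port on the outside interface actually answers. Below is a small illustrative probe; the host and port are lab examples (TCP 8305 is the port commonly documented for the FTD-to-manager sftunnel channel, but substitute whatever your deployment uses).

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (lab addresses from this lesson):
# tcp_reachable("198.51.100.10", 8305)
```

If this returns False, check the default gateway on the outside interface and any ACLs between the device and the controller before retrying onboarding.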


Step 2: Create a PBR policy on the FTD to keep test traffic on DMVPN while other traffic uses SD‑WAN

What we are doing: Add a temporary PBR that matches a test subnet or application and forces its next-hop toward the legacy DMVPN spoke. This lets you validate DMVPN flows during coexistence while other traffic is routed via the SD‑WAN fabric.

access-list 101 permit ip host 10.10.100.50 any
route-map PBR-MIGRATION permit 10
 match ip address 101
 set ip next-hop 198.51.100.101
interface GigabitEthernet0/0
 ip policy route-map PBR-MIGRATION

What just happened:

  • The access-list identifies the test source IP (10.10.100.50) whose traffic should remain on DMVPN.
  • The route-map matches this traffic and sets the next-hop to 198.51.100.101 (the branch DMVPN spoke). This forces selected flows to traverse the legacy tunnel.
  • Applying ip policy route-map on the ingress interface (GigabitEthernet0/0 is an example name) enables PBR for packets arriving on that interface. On an FMC-managed FTD, the equivalent is typically configured through the manager's Policy Based Routing page rather than this IOS-style CLI.

Real-world note: Using PBR to steer only a small set of test traffic is safer than migrating an entire prefix at once. Monitor FTD syslog for PBR decision changes (syslog id FTD-6-880001).

Verify:

show route-map
Route-map PBR-MIGRATION, permit, sequence 10
  Match clauses: ip address 101
  Set clauses: ip next-hop 198.51.100.101

show access-lists 101
Extended IP access list 101
    10 permit ip host 10.10.100.50 any

Expected output shows the route-map with match and set statements and the access-list entry used by the route-map. Look for logging (see step 5) to confirm PBR actions.


Step 3: Add a temporary static route pointing prefixes to the SD‑WAN next-hop

What we are doing: Add a route pointing the target prefix to the SD‑WAN edge device or next-hop so that the bulk of traffic flows over the SD‑WAN overlay for performance testing. This is reversible and is used as a controlled cutover step.

ip route 10.20.0.0 255.255.0.0 10.10.10.1

What just happened: This static route sends all traffic destined for 10.20.0.0/16 to the SD‑WAN edge at 10.10.10.1. In a coexistence migration, this lets you run majority traffic via the SD‑WAN path while keeping selected test flows on DMVPN via the PBR we created in Step 2.

Real-world note: Use short-lived static routes or route tags so you can easily revert if the SD‑WAN path shows issues in metrics (jitter, MOS, RTT, packet loss).

Verify:

show ip route 10.20.0.0
Routing entry for 10.20.0.0/16
  Known via "static", distance 1, metric 0
  Redistributing via ospf 1
  Last update: 00:00:23
  * 10.10.10.1, via GigabitEthernet0/1

You should see the static route present and pointing at the SD‑WAN next-hop.


Step 4: Validate packet flow with VPN Packet Tracer or equivalent data-plane test

What we are doing: Run a data-plane test to confirm that the PBR and routing changes take effect and that traffic actually traverses the intended tunnel or SD‑WAN path. Note: VPN Packet Tracer (management center releases 7.3+) supports policy and data-plane tests across VTI tunnels, including decrypted flows; it cannot be run from loopback or VTI interfaces, so run tests from data interfaces.

packet-tracer input outside tcp 10.10.100.50 12345 10.20.5.5 80

What just happened: The packet-tracer simulates a TCP flow from source 10.10.100.50 to 10.20.5.5 over the outside interface. The device evaluates the traffic through access-lists, route-map, NAT, and PBR, and reports the forwarding decision.

Real-world note: Use packet-tracer from a data interface to validate how the device will forward real packets. This is essential for verifying PBR and SD‑WAN steering during migrations.

Verify:

Result: permit
Flow: input-interface: outside
        input-status: up
        ACL: 101 permit ip host 10.10.100.50 any
        Route lookup: next-hop 198.51.100.101
        PBR applied: route-map PBR-MIGRATION sequence 10 set ip next-hop 198.51.100.101
        Output-interface: outside
Final decision: forwarded via PBR to DMVPN spoke at 198.51.100.101

Expected output shows the packet is permitted, matched ACL 101, PBR applied, and chosen next-hop is the DMVPN spoke. If instead the static route to 10.10.10.1 applied, the route lookup would indicate that next-hop.


Step 5: Monitor logs for PBR syslog events and SD‑WAN telemetry

What we are doing: Watch the device logs for PBR decision changes (FTD-6-880001) and WAN interface telemetry (RTT, MOS, packet loss) that SD‑WAN controllers use to make uplink decisions. This helps you detect when flows switch paths.

show logging | include FTD-6-880001
show interface outside

What just happened: The first command filters logging for PBR syslog ID FTD-6-880001 which records uplink/egress interface decision changes made by PBR. The second shows interface status and basic metrics that influence SD‑WAN path selection.

Real-world note: SD‑WAN dashboards use metrics like jitter, RTT, MOS and packet loss to pick paths. During migration, the SD‑WAN summary dashboard will show WAN connectivity, interface throughput, and VPN topology so you can validate traffic behavior.
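To make the path-selection behavior concrete, here is a toy SLA check of the kind SD‑WAN controllers apply when choosing uplinks. The thresholds, metric names, and uplink labels are illustrative assumptions, not vendor values: a link is eligible while every metric is within its threshold, and among eligible links the lowest RTT wins.

```python
# Example SLA thresholds (illustrative, not vendor defaults).
SLA = {"rtt_ms": 150, "jitter_ms": 30, "loss_pct": 2.0}

def meets_sla(metrics: dict) -> bool:
    """True if every measured metric is within its SLA threshold."""
    return all(metrics[k] <= limit for k, limit in SLA.items())

def pick_uplink(uplinks: dict) -> str:
    """Prefer SLA-compliant uplinks; among those, the lowest RTT wins."""
    compliant = {name: m for name, m in uplinks.items() if meets_sla(m)}
    pool = compliant or uplinks   # fall back to best effort if none comply
    return min(pool, key=lambda name: pool[name]["rtt_ms"])
```

During a migration you would watch for the moment a test flow's measured path changes: a jitter or loss breach on the SD‑WAN uplink should push traffic back, which is exactly why the PBR safety net for test flows matters.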

Verify:

Apr  2 12:34:56 FTD-6-880001: PBR decision: flow from 10.10.100.50 changed egress to outside via 198.51.100.101 based on route-map PBR-MIGRATION

GigabitEthernet0/1 is up, line protocol is up
  Hardware is i82546GB...
  Internet address is 198.51.100.10/24
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec
  5 minute input rate 2000 bits/sec, 2 packets/sec
  5 minute output rate 4000 bits/sec, 3 packets/sec

Expected outputs include the PBR syslog entry showing the decision and interface stats showing the outside interface is up with traffic counters.
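When monitoring at scale, PBR syslog lines like the one above are easier to act on once parsed into fields. This sketch assumes the message format shown in this lesson's example output; adjust the pattern to your actual syslog format.

```python
import re

# Pattern built from the FTD-6-880001 example line in this lesson.
PBR_RE = re.compile(
    r"FTD-6-880001: PBR decision: flow from (?P<src>\S+) changed egress to "
    r"(?P<egress>\S+) via (?P<nh>\S+) based on route-map (?P<rmap>\S+)"
)

def parse_pbr_event(line: str):
    """Return a dict of flow source, egress interface, next-hop, and
    route-map name, or None if the line is not a PBR decision event."""
    m = PBR_RE.search(line)
    return m.groupdict() if m else None
```

Feeding your syslog stream through a parser like this lets you alert when a flow's next-hop unexpectedly changes mid-migration, rather than reading raw logs.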


Verification Checklist

  • Check 1: FTDv management bound to data interface — Verify with show network and confirm the management interface equals outside.
  • Check 2: PBR in effect for test traffic — Verify with show route-map and vpn packet-tracer outputs to confirm PBR set next-hop selection.
  • Check 3: Majority traffic flows via SD‑WAN next-hop — Verify with show ip route for the SD‑WAN static route and device interface counters from show interface to confirm traffic on SD‑WAN edge.
  • Check 4: PBR syslog events are generated — Verify with show logging | include FTD-6-880001 to see PBR decisions being logged.

Common Mistakes

Symptom: Management to controller fails after enabling data-interface management
Cause: The outside interface has no route to the controller, or an ACL blocks management traffic
Fix: Ensure the default gateway and ACLs allow the controller IP; verify with show route and show access-lists

Symptom: PBR not applied to test traffic
Cause: The access-list does not match the actual source IP, or the route-map is not attached
Fix: Confirm the source IP, the ACL entry (e.g., host 10.10.100.50), and that the route-map is applied to the correct ingress interface

Symptom: Packet-tracer shows the route to the SD‑WAN next-hop instead of DMVPN
Cause: The test flow did not match the PBR policy, so the static route toward SD‑WAN was used, or the route-map sequence order is wrong
Fix: Review the ACL and route-map sequence numbers, confirm PBR is evaluated before the route lookup, and re-run packet-tracer

Symptom: No PBR syslog entries visible
Cause: Logging level too low, or a syslog filter is configured
Fix: Raise the logging level so informational messages are included; check the centralized logging target

Key Takeaways

  • Plan migrations as coexistence phases: keep legacy tunnels operational and steer only selected traffic to SD‑WAN initially using PBR and controlled static routes.
  • Use FTDv’s data-interface management option during virtual onboarding so the controller can reach the firewall without a separate management network.
  • Validate forwarding behavior with data-plane tests (packet-tracer) from data interfaces — Packet Tracer (7.3+) supports VTI tests and decrypted flows but not from loopback or VTI interfaces.
  • Monitor PBR syslog entries (FTD-6-880001) and SD‑WAN interface metrics (RTT, jitter, MOS, packet loss) to ensure uplink decisions match your migration policy.

Tip: Treat the migration as a staged cutover — migrate a small set of prefixes or a pilot site first, use PBR and short-lived static routes to control where flows go, and monitor telemetry closely before scaling up.
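The staged cutover in the tip above can be planned mechanically: pick a small pilot group first, then batch the remaining sites into fixed-size waves, pausing between waves to review telemetry. The function below is an illustrative planning sketch with made-up sizes, not a vendor workflow.

```python
def migration_waves(sites, pilot_size=2, wave_size=10):
    """Split a site list into [pilot, wave1, wave2, ...]: a small pilot
    group first, then equal-sized batches of the remaining sites."""
    pilot, rest = sites[:pilot_size], sites[pilot_size:]
    waves = [rest[i:i + wave_size] for i in range(0, len(rest), wave_size)]
    return [pilot] + waves
```

Between waves, run the Step 4 packet-tracer tests and review the Step 5 telemetry for the just-migrated batch; only schedule the next wave once the current one is clean.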

This completes Lesson 7: Migration Strategy: Legacy to SD‑WAN. Use the verification checklist and common mistakes table during your lab runs to quickly locate and correct issues. Remember: controlled, observable changes with reversible steps are the safest way to migrate an enterprise WAN.