Cloud Integration and DIA
Objective
In this lesson you will configure Direct Internet Access (DIA) from a Catalyst branch, implement simple Cloud OnRamp / SaaS steering using policy-based routing (PBR), and add path monitoring with IP SLA so SaaS traffic fails over automatically when the preferred Internet path is degraded. These features matter in production because many branches require direct, optimized access to SaaS applications (Office 365, Salesforce, etc.) while still keeping control, visibility, and failover. Real-world scenario: a regional office needs low-latency access to a SaaS POP while preserving centralized policies and fast, deterministic failover to the MPLS hub when the Internet path fails.
Quick Recap
Refer to the topology introduced in Lesson 1 (Branch Router, WAN Edge Router, Controller). This lesson adds/uses the same devices and addresses below — no new controllers are required.
ASCII topology (interfaces show exact IPs):
Branch Router (Catalyst Branch)
- Gi0/0 = 10.0.0.2/30 (to WAN Edge Gi0/0)
- Gi0/1 = 192.168.10.1/24 (LAN)
WAN Edge Router
- Gi0/0 = 10.0.0.1/30 (to Branch Gi0/0)
- Gi0/1 = 203.0.113.1/30 (to ISP)
- Gi0/2 = 198.51.100.1/30 (to Cloud/SaaS POP — logical next-hop for Cloud OnRamp)
Internet / SaaS POP
- 203.0.113.2/30 (ISP gateway)
- 198.51.100.10/32 (SaaS POP anycast address)
ASCII diagram:
[Branch Router]
Gi0/1 192.168.10.1/24 (LAN)
Gi0/0 10.0.0.2/30
|
|10.0.0.0/30
|
[WAN Edge Router]
Gi0/0 10.0.0.1/30
Gi0/1 203.0.113.1/30 (ISP)
Gi0/2 198.51.100.1/30 (Cloud next-hop)
|
| 203.0.113.2 (ISP gw)
|
[Internet / SaaS POP at 198.51.100.10]
Device table:
| Device | Hostname used in commands | Key interfaces/IPs |
|---|---|---|
| Branch Router | Branch-RTR | Gi0/0 = 10.0.0.2/30, Gi0/1 = 192.168.10.1/24 |
| WAN Edge Router | WAN-EDGE | Gi0/0 = 10.0.0.1/30, Gi0/1 = 203.0.113.1/30, Gi0/2 = 198.51.100.1/30 |
| SaaS POP (cloud) | SaaS-POP | 198.51.100.10/32 |
Key Concepts
- Direct Internet Access (DIA): Branch traffic destined to the Internet or SaaS is sent directly out a local Internet break-out rather than hairpinning through a central data center. In production this reduces latency and preserves WAN bandwidth. DIA requires NAT at the egress point so internal addresses are translated to a public IP.
- Cloud OnRamp / SaaS Steering: Steering SaaS traffic to the optimal Internet POP (Cloud OnRamp) can be implemented with policy-based routing or SD-WAN application-aware steering. Here we use PBR to forward SaaS-destined flows to a preferred next hop (the cloud POP link). Packet flow: client → Branch-RTR → WAN-EDGE, where PBR matches the SaaS prefix and sets the next hop toward the Cloud OnRamp link → SaaS POP.
- IP SLA + Track for Path Monitoring: IP SLA probes test reachability and latency to the SaaS POP. A track object changes state when IP SLA detects failure; static routes or route-maps associated with that track are then withdrawn or modified so traffic fails over to an alternate path (for example, an MPLS tunnel).
- Why NAT matters: Private RFC 1918 addresses are not routable on the public Internet, so without NAT return traffic can never reach the branch. NAT also enables firewall/policy correlation by tying flows to a known public egress IP.
- Resiliency: Simply setting a next hop is not enough; you must actively monitor the Internet path. IP SLA probes (ICMP, TCP, or HTTP) provide active health information. In production, SD-WAN controllers automate this; in a basic lab we implement it locally.
Real-world note: Enterprises often combine local DIA with centralized inspection (SSE / cloud security) — the PBR approach here is a simple local steering mechanism useful when a full SD-WAN controller or cloud service is not available yet.
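In practice, local DIA is usually scoped so that only Internet-bound traffic breaks out locally. A common pattern, shown here as a hypothetical sketch (the 10.0.0.0/8 corporate summary and the ACL name are assumptions, not part of this lab's configuration), is to exclude corporate destinations from the DIA/NAT ACL so they stay on the private WAN:

```
! Hypothetical: keep branch traffic to corporate 10.0.0.0/8 on the private WAN,
! and send everything else from the LAN out the local Internet break-out
ip access-list extended DIA-TRAFFIC
 deny   ip 192.168.10.0 0.0.0.255 10.0.0.0 0.255.255.255
 permit ip 192.168.10.0 0.0.0.255 any
```

Referencing an ACL like this in the NAT rule prevents site-to-site traffic from being translated and hairpinned through the Internet.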
Step-by-step configuration
Step 1: Configure WAN Edge Internet interface and default route (DIA gateway)
What we are doing: Configure the WAN Edge interface toward the ISP and add a default route pointing to the ISP. This provides internet connectivity for DIA traffic and a known egress for NAT.
configure terminal
interface GigabitEthernet0/1
description To-ISP
ip address 203.0.113.1 255.255.255.252
no shutdown
exit
ip route 0.0.0.0 0.0.0.0 203.0.113.2
end
What just happened: The interface Gi0/1 now has the public IP 203.0.113.1/30 and is up. The static default route sends any traffic with no more-specific route to the ISP gateway 203.0.113.2. At the protocol level, packets with unknown destinations are forwarded out Gi0/1 to the ISP.
Real-world note: In production, the default route is often learned via BGP from the ISP, which also provides prefix learning and automatic withdrawal for backup; static default routes are appropriate for labs and small sites.
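As a sketch of that production alternative, an eBGP session to the ISP could look like the following (both AS numbers are hypothetical; the ISP is assumed to originate only a default route toward the enterprise):

```
! Hypothetical ASNs: 65001 = enterprise, 64500 = ISP
router bgp 65001
 neighbor 203.0.113.2 remote-as 64500
 address-family ipv4
  neighbor 203.0.113.2 activate
 exit-address-family
```

If the peering fails, the BGP-learned default is withdrawn automatically, which a static default route cannot do without a tracked reachability probe.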
Verify:
show ip interface brief
Interface IP-Address OK? Method Status Protocol
GigabitEthernet0/0 10.0.0.1 YES manual up up
GigabitEthernet0/1 203.0.113.1 YES manual up up
GigabitEthernet0/2 198.51.100.1 YES manual up up
show ip route 0.0.0.0
Gateway of last resort is 203.0.113.2 to network 0.0.0.0
S* 0.0.0.0/0 [1/0] via 203.0.113.2
Step 2: Configure NAT overload for branch LAN (DIA translation)
What we are doing: Configure NAT at the WAN Edge so the branch 192.168.10.0/24 network can be translated to the public egress IP when accessing the Internet. NAT is necessary for Internet connectivity and for consistent egress IP for security logging.
configure terminal
access-list 100 permit ip 192.168.10.0 0.0.0.255 any
interface GigabitEthernet0/0
description To-Branch
ip nat inside
exit
interface GigabitEthernet0/1
ip nat outside
exit
ip nat inside source list 100 interface GigabitEthernet0/1 overload
end
What just happened: ACL 100 defines the inside network to be translated. Gi0/0 (facing the branch) is marked as NAT inside and Gi0/1 (facing the ISP) as NAT outside. The NAT rule uses interface Gi0/1's address (203.0.113.1) for overload (PAT) translations. At the packet level, outbound TCP/UDP flows have their source IP rewritten to 203.0.113.1 with unique source ports.
Real-world note: Using interface-based NAT provides a single public egress IP — in production, enterprises may use a pool or integrate with firewall clusters for high availability.
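If the ISP assigns more than one public address, the overload rule can reference a NAT pool instead of the egress interface. The pool below is a hypothetical illustration (203.0.113.8/29 is not part of this lab's addressing):

```
! Hypothetical ISP-assigned block 203.0.113.8/29 (usable .9-.14)
ip nat pool DIA-POOL 203.0.113.9 203.0.113.14 netmask 255.255.255.248
ip nat inside source list 100 pool DIA-POOL overload
```

A pool spreads translations across several public IPs, which raises the available port space and simplifies per-address rate limiting on upstream firewalls.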
Verify:
show ip nat translations
Pro Inside global Inside local Outside local Outside global
tcp 203.0.113.1:61000 192.168.10.10:443 198.51.100.10:443 198.51.100.10:443
udp 203.0.113.1:52000 192.168.10.20:123 198.51.100.10:123 198.51.100.10:123
show ip nat statistics
Total translations: 2 (0 static, 2 dynamic; 1 extended)
Peak translations: 2, occurred 00:05:00
Outside interfaces: GigabitEthernet0/1
Inside interfaces: GigabitEthernet0/0
Hits: 1234 Misses: 12
Step 3: Implement SaaS steering with Policy-Based Routing (PBR) toward Cloud OnRamp
What we are doing: Create a route-map that matches SaaS prefixes and sets the next hop to the Cloud OnRamp link (198.51.100.2, the far end of WAN-EDGE's Gi0/2 /30). Apply the policy inbound on WAN-EDGE's branch-facing interface (Gi0/0) so SaaS-destined traffic from the branch is steered out the cloud link instead of following the default route to the ISP.
configure terminal
ip access-list extended SAAS-DEST
remark SaaS prefixes (example: cloud service anycast)
permit ip any host 198.51.100.10
exit
route-map STEER-SAAS permit 10
match ip address SAAS-DEST
set ip next-hop 198.51.100.2
exit
interface GigabitEthernet0/0
description To-Branch
ip policy route-map STEER-SAAS
exit
end
What just happened: The ACL SAAS-DEST identifies traffic aimed at the SaaS POP (198.51.100.10). The route-map STEER-SAAS forces matching packets to use 198.51.100.2 (across the Gi0/2 cloud link) as the next hop. When a packet arrives from the branch on Gi0/0, PBR is evaluated before the normal FIB lookup: matching packets are steered out the Cloud OnRamp link, while everything else follows the default route to the ISP.
Real-world note: SD-WAN controllers implement similar steering with application awareness and health information; PBR is a useful local mechanism where such controllers are not deployed.
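A limitation of a plain "set ip next-hop" is that PBR keeps steering even if the next hop is dead. IOS supports tying the PBR next hop to a track object; a sketch is shown below, assuming a reachability track object like track 1 configured in Step 4 (the next-hop 198.51.100.2 is the cloud-side address of the Gi0/2 /30 link):

```
route-map STEER-SAAS permit 10
 match ip address SAAS-DEST
 ! Steer only while track 1 is up; otherwise fall through to normal routing
 set ip next-hop verify-availability 198.51.100.2 10 track 1
```

With verify-availability, a failed track causes matching traffic to revert to the ordinary routing table instead of being blackholed toward the dead next hop.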
Verify:
show ip policy
Interface Route map
GigabitEthernet0/0 STEER-SAAS
show route-map STEER-SAAS
route-map STEER-SAAS, permit, sequence 10
Match clauses:
ip address (access-lists): SAAS-DEST
Set clauses:
ip next-hop 198.51.100.2
Policy routing matches: 48 packets, 5232 bytes
To validate runtime behavior, generate traffic from a LAN client, confirm the policy routing match counters increment in show route-map STEER-SAAS, and check the forwarding entry:
show ip cef 198.51.100.10
198.51.100.10/32
nexthop 198.51.100.2 GigabitEthernet0/2
Step 4: Configure IP SLA to monitor the Cloud OnRamp and fail PBR when degraded
What we are doing: Create an IP SLA probe that periodically tests ICMP reachability to the SaaS POP over the Cloud OnRamp link, plus a track object that follows its state. The static route that steers traffic to the POP is tied to the track, so when the SLA fails the route is withdrawn and traffic falls back to normal routing (the default route via the ISP). This ensures traffic is not steered onto a broken path.
configure terminal
ip sla 1
icmp-echo 198.51.100.10 source-interface GigabitEthernet0/2
frequency 10
exit
ip sla schedule 1 life forever start-time now
track 1 ip sla 1 reachability
delay down 5 up 5
! Tie the track to the static route steering SaaS traffic via the cloud link
ip route 198.51.100.10 255.255.255.255 198.51.100.2 track 1
end
What just happened: IP SLA 1 sends ICMP echo probes to 198.51.100.10 every 10 seconds, sourced from Gi0/2 so they follow the Cloud OnRamp path. Track object 1 follows the reachability of this SLA, with a 5-second dampening delay in each direction to avoid flapping. The tracked static route to 198.51.100.10 via 198.51.100.2 stays installed only while track 1 is up; when the track goes down, the route is withdrawn and traffic to the SaaS POP falls back to the default route via the ISP.
Real-world note: Use TCP-based SLAs for SaaS where HTTP/TLS reachability is important. ICMP is simple but less representative of application reachability.
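A sketch of such a TCP-based probe follows. Port 443 is an assumption for the SaaS service, and "control disable" is required because the target is a real server rather than an IP SLA responder:

```
ip sla 2
 tcp-connect 198.51.100.10 443 control disable
 timeout 3000
 frequency 10
 exit
ip sla schedule 2 life forever start-time now
track 2 ip sla 2 reachability
```

A successful TCP handshake to the service port is a much closer proxy for application reachability than an ICMP echo, which can succeed even when the application itself is down.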
Verify:
show ip sla summary
IPSLAs Latest Operation Summary
Codes: * active, ^ inactive, ~ pending

ID    Type        Destination      Stats    ReturnCode   LastRun
----- ----------- ---------------- -------- ------------ -------------
*1    icmp-echo   198.51.100.10    RTT=24   OK           5 seconds ago
For detailed counters (average RTT, success/failure counts), use show ip sla statistics.
show track
Track 1
IP SLA 1 reachability
Reachability is Up
2 changes, last change 00:03:45
Delay up 5 secs, down 5 secs
Tracked by:
Static IP Routing 0
If IP SLA fails, the expected outputs change:
show ip sla summary
...
*1    icmp-echo   198.51.100.10    RTT=-    Timeout      8 seconds ago
show track
Track 1
IP SLA 1 reachability
Reachability is Down
3 changes, last change 00:00:30
Verification Checklist
- Check 1: WAN Edge has a public IP and default route; verify with show ip interface brief and show ip route 0.0.0.0.
- Check 2: NAT translations exist for LAN clients; verify with show ip nat translations and show ip nat statistics.
- Check 3: PBR is applied and matches SaaS traffic; verify with show ip policy and show route-map STEER-SAAS.
- Check 4: IP SLA and track are operational; verify with show ip sla summary and show track.
Common Mistakes
| Symptom | Cause | Fix |
|---|---|---|
| SaaS traffic is not leaving via the Cloud OnRamp next-hop | PBR not applied on the correct interface or ACL does not match SaaS addresses | Apply the route-map inbound on the interface that receives the traffic (WAN-EDGE Gi0/0 in this lab); verify the ACL matches the exact SaaS prefix (198.51.100.10) |
| NAT translations not showing / Internet unreachable | NAT inside/outside applied on wrong interfaces | Correctly mark inside interfaces (LAN side) and outside (ISP side); re-create NAT rule using the correct ACL |
| IP SLA never reports "Up" even though Internet is reachable | IP SLA source interface incorrect or firewall blocks ICMP | Use correct source-interface; confirm ICMP is allowed to the SaaS POP or use TCP connect SLA |
| When the path fails, traffic still tries to use the failed next-hop | The PBR next-hop is not tied to the IP SLA track | Tie steering to the track, for example with set ip next-hop verify-availability ... track in the route-map, or use a tracked static route |
Key Takeaways
- DIA reduces latency and bandwidth use to the central WAN by allowing branches to exit local Internet/SaaS traffic directly; NAT at the egress is required for outbound connectivity.
- Steering SaaS to a Cloud OnRamp can be implemented locally with PBR, but must be paired with active health checks (IP SLA) to avoid blackholing traffic.
- IP SLA and Track objects provide automated, deterministic failover behavior — critical for production availability.
- In production SD-WAN environments, controllers orchestrate Cloud OnRamp selection and health-based steering; understanding local mechanisms (PBR, NAT, IP SLA) is essential for troubleshooting and transitional deployments.
Tip: Use representative SLA types (TCP/TLS) for application-critical services instead of ICMP for more accurate health detection. For long-term deployments, migrate steering logic into the SD-WAN controller or cloud security service for centralized visibility and policy consistency.
This completes Lesson 7: "Cloud Integration and DIA" for Lab 39.