Lesson 5 of 7

AI in Cisco Products

Objective

In this lesson you will learn where AI/ML capabilities are applied across four Cisco product families — Catalyst Center, SD‑WAN, Secure Firewall, and Meraki — and how to enable basic telemetry/log export so those AI assistants and analytics engines can consume data. This matters because AI-driven analytics need consistent, high‑quality telemetry to detect anomalies, automate remediation, and speed troubleshooting in production networks. Real-world scenario: an enterprise wants automated root‑cause suggestions for WAN performance degradations (SD‑WAN + Catalyst Center) while simultaneously blocking suspected exfiltration from a branch (Secure Firewall) and correlating client issues from Meraki APs.

Quick Recap

Reference topology (from Lesson 1) is extended here only to show management/telemetry links used by AI features. No routing changes are made to the data plane — we only configure telemetry/log destinations.


Device table

Device           | Role
-----------------|--------------------------------------------------
CatSwitch1       | Edge Catalyst switch managed by Catalyst Center
vSmartController | SD‑WAN controller (control plane)
SecFW1           | Secure Firewall appliance
MerakiMX1        | Meraki MX at branch
Splunk/SIEM     | Central log analytics (AI consumes logs)
CatalystCenter   | Management and analytics server

ASCII topology with management IPs and interfaces

[CatSwitch1] Gig0/0 (mgmt) 10.10.10.11/24 --- Gig0/1 10.10.10.1/24 --- [CatalystCenter] 10.10.10.10/24
[vSmartController] Mgmt0 10.10.10.20/24 --- 10.10.10.1/24 --- [CatalystCenter] 10.10.10.10/24
[SecFW1] Mgmt0 10.10.10.30/24 --- 10.10.10.1/24 --- [CatalystCenter] 10.10.10.10/24
[MerakiMX1] Mgmt0 10.10.10.40/24 --- 10.10.10.1/24 --- [CatalystCenter] 10.10.10.10/24
[Splunk/SIEM] Eth0 10.10.10.60/24 --- 10.10.10.1/24 --- [CatalystCenter] 10.10.10.10/24
  • Management gateway/router on its subnet: 10.10.10.1
  • Management/analytics server (CatalystCenter): 10.10.10.10
  • Use domain: lab.nhprep.com and organization: NHPREP for any API or hostname examples.

Tip: Treat the management network as a separate, highly available fabric — AI analytics fail if telemetry is lost.


Key Concepts (theory before hands‑on)

  • Telemetry is the fuel for AI/ML
    AI assistants need structured telemetry (flow records, syslog, SNMP, streaming telemetry) to correlate events. Think of telemetry as the “sensor feed” that the model trains on; if telemetry is incomplete, model outputs will be wrong.

  • Native skills vs Composite skills
    A native skill is an AI function integrated into a single product (e.g., firewall policy summarization). A composite skill combines multiple native skills cross‑product (e.g., correlate firewall logs + SD‑WAN performance to identify a multi‑domain incident).

  • Flow of data: device → collector → AI model
    Devices emit logs/flows (e.g., connection logs, NetFlow, path telemetry). A collector or SIEM normalizes these and forwards to the AI engine. In production, this pipeline must be low‑latency and reliable.

  • Protocol/transport behavior

    • Syslog is plain text over UDP or TCP; it needs minimal processing on the device and has broad compatibility.
    • NetFlow/IPFIX provides record-based flow summaries; sampling reduces collector load but also reduces fidelity.
    • Streaming telemetry (gNMI/gRPC) is structured and supports high throughput; use it where low latency and a defined schema are required.
  • Security and privacy
    Ensure logs that contain PII or credentials are handled per policy; AI models trained on raw logs can inadvertently learn sensitive patterns.

Analogy: Telemetry to AI is like camera feeds to a security system — more cameras and better resolution let the system make smarter decisions, but only if the feeds are routed, stored, and processed reliably.
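The device → collector → AI flow can be sketched in a few lines. This is a minimal, illustrative sketch of the normalization step a collector performs before handing records to an analytics engine; the field names are assumptions for the lab, not a Cisco schema.

```python
import re
from datetime import datetime, timezone

def normalize_syslog(raw: str, source_ip: str) -> dict:
    """Turn a raw syslog line into a structured record an analytics engine can consume."""
    # RFC 3164-style priority prefix, e.g. "<189>" where PRI = facility*8 + severity
    m = re.match(r"<(\d+)>(.*)", raw)
    pri = int(m.group(1)) if m else None
    msg = m.group(2).strip() if m else raw.strip()
    return {
        "source": source_ip,
        "severity": pri % 8 if pri is not None else None,
        "facility": pri // 8 if pri is not None else None,
        "received_at": datetime.now(timezone.utc).isoformat(),
        "message": msg,
    }

record = normalize_syslog("<189>%LINK-3-UPDOWN: Interface Gi0/0, changed state to up",
                          "10.10.10.11")
print(record["severity"], record["facility"])  # 5 23 (local7.notice)
```

If the priority prefix is missing or malformed, the record still carries the raw message, so no event is silently dropped; the AI engine can flag unparseable sources for remediation.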


Step-by-step configuration

Note: Each step shows the CLI/actions for the device type, why it matters, and verification with expected outputs. Where credentials or API keys are required in examples, use the domain lab.nhprep.com and the password Lab@123. All configuration commands are shown in Cisco-style CLI blocks.

Step 1: Configure Catalyst switch to push telemetry/syslog to Catalyst Center

What we are doing: Configure management IP and a syslog/telemetry destination so Catalyst Center can ingest switch events and interface/state changes. This is the base for Catalyst Center to provide AI-driven insights.

CatSwitch1# configure terminal
CatSwitch1(config)# interface GigabitEthernet0/0
CatSwitch1(config-if)# ip address 10.10.10.11 255.255.255.0
CatSwitch1(config-if)# no shutdown
CatSwitch1(config-if)# exit
CatSwitch1(config)# ip route 0.0.0.0 0.0.0.0 10.10.10.1
CatSwitch1(config)# logging host 10.10.10.10 transport udp port 514
CatSwitch1(config)# telemetry subscriber 10.10.10.10
CatSwitch1(config)# end
CatSwitch1# write memory

What just happened:

  • The interface Gig0/0 now has the management IP 10.10.10.11/24 and a default route to the gateway 10.10.10.1 so the device can reach Catalyst Center.
  • logging host sends syslog messages to 10.10.10.10:514 (UDP) so the management server receives device logs.
  • telemetry subscriber instructs the switch to stream structured telemetry to 10.10.10.10 (the syntax here is simplified for the lab; on IOS‑XE this corresponds to configuring a model‑driven telemetry subscription). Streaming provides higher fidelity than plain syslog.

Real-world note: In production, prefer TCP or TLS‑wrapped transport for syslog/telemetry and place collectors in an HA pair.

Verify:

CatSwitch1# show running-config | section interface GigabitEthernet0/0
interface GigabitEthernet0/0
 ip address 10.10.10.11 255.255.255.0
 no shutdown

CatSwitch1# show logging
Syslog logging: enabled
  Sending to 10.10.10.10 transport udp port 514
  Persistent logging: enabled

CatSwitch1# show telemetry subscription
Subscriptions:
  Subscriber: 10.10.10.10, Status: ACTIVE

Expected outputs shown above confirm interface IP, syslog destination, and telemetry subscription are active.
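To confirm the path end to end, you can hand-craft a test syslog event toward the collector. This sketch builds an RFC 3164-style datagram for the lab's collector at 10.10.10.10:514; the actual send is left commented out so it is only run on the lab management network.

```python
import socket

COLLECTOR = ("10.10.10.10", 514)  # Catalyst Center syslog listener from the lab

def build_syslog(facility: int, severity: int, hostname: str, text: str) -> bytes:
    """Build an RFC 3164-style datagram: PRI = facility*8 + severity."""
    pri = facility * 8 + severity
    return f"<{pri}>{hostname} {text}".encode()

datagram = build_syslog(facility=23, severity=6, hostname="CatSwitch1",
                        text="TELEMETRY-TEST: manual ingestion check")

# Uncomment to actually send on the lab management network:
# with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
#     s.sendto(datagram, COLLECTOR)

print(datagram.decode())  # <190>CatSwitch1 TELEMETRY-TEST: manual ingestion check
```

If the event appears in the collector's syslog summary, IP reachability, the default route, and the listener port are all confirmed in one shot.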


Step 2: Configure SD‑WAN controller telemetry and OMP control visibility

What we are doing: Ensure the SD‑WAN controller (vSmart/controller) has management IP and forwards control-plane visibility/telemetry to Catalyst Center and SIEM. This is necessary for AI to correlate WAN control events (OMP updates) with network anomalies.

vSmartController# configure terminal
vSmartController(config)# interface Mgmt0
vSmartController(config-if)# ip address 10.10.10.20 255.255.255.0
vSmartController(config-if)# no shutdown
vSmartController(config-if)# exit
vSmartController(config)# ip route 0.0.0.0 0.0.0.0 10.10.10.1
vSmartController(config)# logging host 10.10.10.10 transport udp port 514
vSmartController(config)# telemetry subscriber 10.10.10.10
vSmartController(config)# end
vSmartController# write memory

What just happened:

  • Management IP allows the controller to communicate with Catalyst Center at 10.10.10.10.
  • The controller now forwards syslog and telemetry so the analytics engine can ingest OMP/control-plane events (e.g., route advertisements, policy installation events). These events are critical to understand WAN behavior.

Real-world note: SD‑WAN control protocols generate frequent control messages — ensure collectors can ingest at scale and consider sampling options.

Verify:

vSmartController# show interface Mgmt0
Mgmt0 is up, line protocol is up
  Internet address is 10.10.10.20/24

vSmartController# show logging
Syslog logging: enabled
  Sending to 10.10.10.10 transport udp port 514

vSmartController# show telemetry subscription
Subscriptions:
  Subscriber: 10.10.10.10, Status: ACTIVE

Expected outputs confirm management IP, syslog forwarder, and active telemetry subscription.
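Once the controller's syslog is flowing, a correlation engine typically filters the control-plane events of interest first. A hedged sketch of pulling OMP-related events out of the stream; the log lines below are illustrative, not exact vSmart message formats.

```python
import re

# Match OMP peer, route, or policy events (illustrative message layout)
OMP_PATTERN = re.compile(r"OMP.*(peer|route|policy)", re.IGNORECASE)

lines = [
    "<189>vSmartController OMP: peer 10.10.10.20 state changed to UP",
    "<190>vSmartController SYSTEM: clock synchronized",
    "<189>vSmartController OMP: route 10.20.0.0/16 withdrawn",
]

omp_events = [line for line in lines if OMP_PATTERN.search(line)]
print(len(omp_events))  # 2
```

Filtering early like this keeps the high-volume control-plane chatter from drowning the AI engine's correlation window.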


Step 3: Configure Secure Firewall to export connection logs to the SIEM

What we are doing: Configure the Secure Firewall to send connection/security logs to the Splunk/SIEM device (10.10.10.60). This provides the AI engine with evidence of blocked connections or unusual traffic patterns.

SecFW1# configure terminal
SecFW1(config)# interface Mgmt0
SecFW1(config-if)# ip address 10.10.10.30 255.255.255.0
SecFW1(config-if)# no shutdown
SecFW1(config-if)# exit
SecFW1(config)# ip route 0.0.0.0 0.0.0.0 10.10.10.1
SecFW1(config)# logging host 10.10.10.60 transport udp port 514
SecFW1(config)# logging trap informational
SecFW1(config)# end
SecFW1# write memory

What just happened:

  • Firewall management IP is reachable and it forwards logs to the SIEM at 10.10.10.60.
  • logging trap informational ensures connection and security events of informational severity and above are sent. These logs are the primary input for AI to detect suspicious activity.

Real-world note: For security logs, prefer TLS/TCP and ensure log integrity (timestamps, sequence numbers) for forensic value.

Verify:

SecFW1# show running-config | include logging
logging host 10.10.10.60 transport udp port 514
logging trap informational

SecFW1# show interface Mgmt0
Mgmt0 is up, line protocol is up
  Internet address is 10.10.10.30/24

Expected outputs show the logging host and management interface configured.
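The first-pass analysis an AI/SIEM pipeline runs on these exported logs is often a simple aggregation, for example counting denied connections per source to surface a possible exfiltration attempt. The log lines below are illustrative, not exact Secure Firewall syslog formats.

```python
from collections import Counter
import re

# Extract the source IP from deny events (illustrative log layout)
DENY = re.compile(r"Deny .* src (\d+\.\d+\.\d+\.\d+)")

logs = [
    "Deny tcp src 192.168.5.50 dst 203.0.113.9 port 443",
    "Permit tcp src 192.168.5.10 dst 198.51.100.4 port 80",
    "Deny tcp src 192.168.5.50 dst 203.0.113.9 port 8443",
    "Deny udp src 192.168.5.77 dst 203.0.113.12 port 53",
]

denies = Counter(DENY.match(line).group(1) for line in logs if DENY.match(line))
print(denies.most_common(1))  # [('192.168.5.50', 2)]
```

A repeated-deny source like this is exactly the kind of signal the branch-exfiltration scenario in the Objective depends on.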


Step 4: Configure Meraki MX to forward event logs to the SIEM/Collector

What we are doing: Configure the Meraki MX (branch appliance) to forward syslog and event telemetry to the central SIEM, allowing AI to correlate client-side events with WAN and firewall data.

MerakiMX1# configure terminal
MerakiMX1(config)# interface Mgmt0
MerakiMX1(config-if)# ip address 10.10.10.40 255.255.255.0
MerakiMX1(config-if)# no shutdown
MerakiMX1(config-if)# exit
MerakiMX1(config)# ip route 0.0.0.0 0.0.0.0 10.10.10.1
MerakiMX1(config)# logging host 10.10.10.60 transport udp port 514
MerakiMX1(config)# end
MerakiMX1# write memory

What just happened:

  • The MX can now send client event logs and security events to the SIEM. The SIEM aggregates logs from MX, firewall, SD‑WAN, and switches so AI can run cross‑product correlation.

Real-world note: Meraki MX appliances are normally configured through the cloud Dashboard or its API rather than a local CLI; the CLI above is a simplified lab representation of the same log‑forwarding intent.

Verify:

MerakiMX1# show running-config | section interface Mgmt0
interface Mgmt0
 ip address 10.10.10.40 255.255.255.0
 no shutdown

MerakiMX1# show logging
Syslog logging: enabled
  Sending to 10.10.10.60 transport udp port 514

Expected outputs confirm management IP and logging destination.
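In a real deployment the same log-forwarding intent is expressed through the Meraki Dashboard API. This sketch builds the request for the v1 syslog-servers endpoint; the network ID and API key are placeholders, and the HTTP call itself is left commented out.

```python
import json

API_BASE = "https://api.meraki.com/api/v1"
NETWORK_ID = "N_1234567890"        # placeholder: use your branch network's ID
API_KEY = "REPLACE_WITH_LAB_KEY"   # never hard-code real keys

# Point the network's syslog export at the lab SIEM (10.10.10.60:514)
payload = {
    "servers": [
        {"host": "10.10.10.60", "port": 514, "roles": ["Flows", "Security events"]}
    ]
}
url = f"{API_BASE}/networks/{NETWORK_ID}/syslogServers"
headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
body = json.dumps(payload)

# import requests
# requests.put(url, headers=headers, data=body).raise_for_status()

print(url)
```

Driving this from the API keeps branch log destinations consistent across hundreds of networks, something a per-device CLI cannot guarantee.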


Step 5: Confirm end-to-end telemetry ingestion on Catalyst Center and SIEM

What we are doing: Verify that the management/analytics server (CatalystCenter) and Splunk/SIEM see incoming telemetry and syslog messages from all devices. This ensures the AI assistants have the required data stream.

CatalystCenter# show collector connections
Collector connections:
  Source 10.10.10.11 (CatSwitch1)  Status: CONNECTED  Last-Message: 00:00:12
  Source 10.10.10.20 (vSmartController) Status: CONNECTED  Last-Message: 00:00:08
  Source 10.10.10.30 (SecFW1) Status: CONNECTED  Last-Message: 00:00:05
  Source 10.10.10.40 (MerakiMX1) Status: CONNECTED  Last-Message: 00:00:18

CatalystCenter# show syslog summary
Syslog sources:
  10.10.10.11 received 254 events in last 5 minutes
  10.10.10.20 received 431 events in last 5 minutes
  10.10.10.30 received 1500 events in last 5 minutes
  10.10.10.40 received 96 events in last 5 minutes

What just happened:

  • The collector shows active connections from all devices and recent messages, confirming telemetry is being received. The SIEM would show similar per-source ingestion counts.

Real-world note: Monitor ingestion rates and retention — AI accuracy improves with consistent long-term data.

Verify on SIEM (example Splunk query):

Splunk# search index=network_logs (host=10.10.10.30 OR host=10.10.10.11 OR host=10.10.10.20 OR host=10.10.10.40) | stats count by host
host=10.10.10.30 count=1500
host=10.10.10.20 count=431
host=10.10.10.11 count=254
host=10.10.10.40 count=96

Expected output shows events from each host.
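The Splunk aggregation above can be reproduced offline to sanity-check per-host counts; here each record carries a "host" field, as the SIEM would index it.

```python
from collections import Counter

# Stand-in for the indexed events; counts mirror the lab's syslog summary
events = (
    [{"host": "10.10.10.30"}] * 1500
    + [{"host": "10.10.10.20"}] * 431
    + [{"host": "10.10.10.11"}] * 254
    + [{"host": "10.10.10.40"}] * 96
)

counts = Counter(e["host"] for e in events)  # equivalent of "stats count by host"
print(counts["10.10.10.30"])  # 1500
```

If a host is missing from the counts entirely, check its logging destination and management reachability before suspecting the SIEM.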


Verification Checklist

  • Check 1: Catalyst switch telemetry active — verify with show telemetry subscription on CatSwitch1. Expected: subscriber 10.10.10.10, Status: ACTIVE.
  • Check 2: SD‑WAN controller logs/telemetry forwarding — verify with show logging and show telemetry subscription on vSmartController. Expected: syslog to 10.10.10.10 and active telemetry subscription.
  • Check 3: Secure Firewall sends logs to SIEM — verify with show running-config | include logging on SecFW1. Expected: logging host 10.10.10.60.
  • Check 4: SIEM/Catalyst Center receives messages — verify collector connections and syslog summary on CatalystCenter/SIEM. Expected: connected sources and event counts.

Common Mistakes

Symptom | Cause | Fix
--------|-------|----
No telemetry visible on Catalyst Center | Management interface not configured or default route missing | Configure mgmt IP and default route; re-enable the telemetry subscription
Low fidelity in AI reports (missing detail) | Using only unstructured syslog, no streaming telemetry/flows | Enable structured telemetry or NetFlow/IPFIX in addition to syslog
Firewall logs not appearing in SIEM | Firewall logging destination misconfigured or blocked by ACL | Verify the logging host config and management ACLs; allow UDP/TCP to the SIEM port
High bandwidth on collector link causing packet loss | Telemetry sent uncompressed at full frequency | Enable sampling, reduce the telemetry interval, or scale collector capacity
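The collector-bandwidth mistake can be avoided with quick back-of-the-envelope sizing before enabling full-rate telemetry. The event rates and record sizes below are illustrative assumptions, not vendor figures.

```python
def telemetry_bps(events_per_sec: float, avg_event_bytes: int) -> float:
    """Approximate sustained telemetry bandwidth in bits per second."""
    return events_per_sec * avg_event_bytes * 8

# Assume 4 devices x 500 events/s, ~300 bytes per structured telemetry record:
load = telemetry_bps(events_per_sec=4 * 500, avg_event_bytes=300)
print(f"{load / 1e6:.1f} Mbps")  # 4.8 Mbps

# With 1:10 sampling the same pipeline needs roughly a tenth of that:
sampled = telemetry_bps(events_per_sec=4 * 500 / 10, avg_event_bytes=300)
print(f"{sampled / 1e6:.2f} Mbps")
```

Compare the result against the collector link's capacity with headroom for bursts; if the full-rate figure is close to the link speed, enable sampling or scale the collector before telemetry loss corrupts the AI's view.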

Key Takeaways

  • AI/ML requires consistent, structured telemetry — syslog alone is often insufficient for high‑fidelity AI insights; consider streaming telemetry and flow records.
  • Cross‑product correlation (Catalyst Center + SD‑WAN + Secure Firewall + Meraki) delivers the most powerful AI outcomes; ensure all products forward telemetry to a common collector/SIEM.
  • In production, secure and scale the telemetry pipeline: use TLS/TCP, HA collectors, and rate controls to avoid creating new failure points.
  • For exams and real networks: remember that enabling telemetry and log forwarding is the foundational step — without it, AI assistants have nothing to analyze.

Final tip: Treat telemetry pipelines as first‑class network services — plan capacity, security, and redundancy. AI is only as good as the data you feed it.