Lesson 1 of 7

Catalyst Center Overview

Objective

Understand what Cisco Catalyst Center (formerly Cisco DNA Center) is, how its microservices-based architecture operates, and why it matters for intent-based networking and automation in production networks. In this lesson you will map the logical components (API gateway, microservices, DBaaS, messaging, monitoring), verify basic network reachability to a Catalyst Center appliance, and prepare the management-plane foundation required before any onboarding or automation. In production, this foundational connectivity and architectural understanding prevents deployment failures, enables reliable automation, and provides the observability needed for troubleshooting a distributed control plane.

Topology & Device Table

ASCII topology (management-plane view). IP addresses are shown for each addressed interface.

Device naming convention:

  • Catalyst-Center — the Catalyst Center appliance (virtual or physical)
  • R1 — edge router providing gateway/routing for management network
  • SW1 — management switch with SVI for the management VLAN
  • PC1 — operator workstation used to access Catalyst Center UI/API
Network Topology Diagram

                Upstream / Internet (not used)
                           |
                    Gi0/1 10.0.1.1
                   +---------------+
                   |  R1 (Router)  |
                   +---------------+
                    Gi0/0 10.0.0.1
                           |
             Management VLAN 10 (10.0.0.0/24)
                           |
                   +---------------+
                   |  SW1 (Switch) |  Vlan10 10.0.0.3
                   +---------------+
                    |             |
              Gi0/1 |             |
             +------------+  +---------------------+
             |    PC1     |  | Catalyst-Center VM  |
             | eth0       |  | eth0 10.0.0.10      |
             | 10.0.0.100 |  +---------------------+
             +------------+

Device Table

Device               Interface            IP Address    Subnet Mask      Role
R1 (Router)          GigabitEthernet0/0   10.0.0.1      255.255.255.0    Management default gateway / routing
R1 (Router)          GigabitEthernet0/1   10.0.1.1      255.255.255.0    Upstream / Internet (not used)
SW1 (Switch)         GigabitEthernet0/1   --            --               Access port for PC1 (VLAN 10)
SW1 (Switch)         Vlan10               10.0.0.3      255.255.255.0    Management SVI / switch mgmt
Catalyst-Center VM   eth0                 10.0.0.10     255.255.255.0    Catalyst Center management IP
PC1 (Workstation)    eth0                 10.0.0.100    255.255.255.0    Operator workstation

Important: Domain names in examples use lab.nhprep.com and passwords use Lab@123. Organization name used in examples is NHPREP.
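The addressing plan above can be sanity-checked before any device is touched. A minimal Python sketch (the device names and addresses come from the table above; the MGMT_NET and check_plan names are this example's own) verifies that every management interface sits inside 10.0.0.0/24 and that no addresses collide:

```python
import ipaddress

# Management subnet and per-device addresses from the device table above
MGMT_NET = ipaddress.ip_network("10.0.0.0/24")
DEVICES = {
    "R1 Gi0/0": "10.0.0.1",
    "SW1 Vlan10": "10.0.0.3",
    "Catalyst-Center eth0": "10.0.0.10",
    "PC1 eth0": "10.0.0.100",
}

def check_plan(net, devices):
    """Return a list of problems: out-of-subnet or duplicate addresses."""
    problems = []
    seen = {}
    for name, addr in devices.items():
        ip = ipaddress.ip_address(addr)
        if ip not in net:
            problems.append(f"{name}: {addr} is outside {net}")
        if addr in seen:
            problems.append(f"{name}: {addr} duplicates {seen[addr]}")
        seen.setdefault(addr, name)
    return problems

if __name__ == "__main__":
    issues = check_plan(MGMT_NET, DEVICES)
    print("OK" if not issues else "\n".join(issues))
```

Running the script against the table prints OK; adding a typo such as 10.0.1.100 for PC1 would be reported immediately.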

Key Concepts (before hands-on)

  • Microservices Architecture — Catalyst Center is built as a collection of microservices running in containers orchestrated by Kubernetes. Each microservice (for example, Automation, Assurance, Catalog) is a self-contained service; Kubernetes schedules pods, provides service discovery, and manages lifecycle. In production, this allows independent updates and scaling of parts of the system.

  • Container & Pod behavior — A container image packages code and runtime; Kubernetes runs containers inside pods. Pods share network and storage namespaces. When a pod is scheduled, kubelet on the node pulls the image and runs it. When you see a service in Catalyst Center fail, you must inspect the underlying pod logs and events (kubelet/pod lifecycle).

  • Service Abstraction & API Gateway (Kong) — Kubernetes Services provide stable network endpoints for sets of pods. Catalyst Center uses an API gateway (Kong) as the northbound entry point. The gateway handles authentication, routing, and rate-limiting. In production, the gateway ensures all northbound traffic is controlled and observed.

  • DBaaS and Messaging — Catalyst Center relies on managed DB services such as MongoDB and Postgres, and messaging systems like RabbitMQ or Kafka for asynchronous workflows. If messaging queues are degraded, automation tasks will queue or fail; understanding these components is critical for troubleshooting automation flows.

  • Monitoring & Observability — Time-series metrics and dashboards are provided by InfluxDB/Prometheus and Grafana. Logs and metrics are used to detect anomalies. In production, use these tools for capacity planning and incident response.

Analogy: Think of Catalyst Center like a city: Kubernetes is the city planner allocating land (nodes), pods are buildings where people (containers) live and work, services are roads and transit routes, and the API gateway (Kong) is the central train station controlling entry and security.
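Because all northbound traffic enters through the API gateway, every automation script starts the same way: authenticate at the gateway, then carry the returned token on subsequent calls. A minimal Python sketch of building that first request (the token path /dna/system/api/v1/auth/token follows Catalyst Center's published northbound API; the host and Lab@123 credentials are this lesson's lab values; the request is only constructed here, not sent):

```python
import base64
import urllib.request

def build_token_request(host, username, password):
    """Build (but do not send) the POST that asks the API gateway for a token."""
    creds = f"{username}:{password}".encode()
    req = urllib.request.Request(
        url=f"https://{host}/dna/system/api/v1/auth/token",
        method="POST",
    )
    # The token endpoint expects HTTP Basic authentication
    req.add_header("Authorization", "Basic " + base64.b64encode(creds).decode())
    req.add_header("Content-Type", "application/json")
    return req

req = build_token_request("10.0.0.10", "admin", "Lab@123")
print(req.full_url)   # https://10.0.0.10/dna/system/api/v1/auth/token
```

In a real script the response body contains a token that is passed as the X-Auth-Token header on every later API call; the gateway (Kong) enforces that flow for all northbound clients.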

Step-by-step configuration

We will perform basic management-plane configuration: configure R1 as gateway, configure SW1 SVI for management VLAN, verify connectivity from PC1 to Catalyst-Center. Each step includes commands, why they matter, and verification.

Step 1: Configure R1 — management gateway

What we are doing: Assign R1 the management IP and enable the interface that provides default gateway for the management network. A functioning gateway is required so Catalyst Center and operator workstations can reach each other and upstream services.

configure terminal
interface GigabitEthernet0/0
 description Management Network to 10.0.0.0/24
 ip address 10.0.0.1 255.255.255.0
 no shutdown
exit
ip domain-name lab.nhprep.com
ip name-server 8.8.8.8
end
write memory

What just happened: The interface Gi0/0 was configured with the management IP 10.0.0.1/24 and brought up; this establishes the Layer-3 endpoint for the management subnet. Setting ip domain-name configures the local DNS domain which some services use for certificate generation or CLI convenience. ip name-server points to a DNS resolver (here 8.8.8.8) so the router can resolve hostnames.

Real-world note: In production, the gateway IP is often provided by a pair of redundant routers (HSRP/VRRP) rather than a single router; ensure you plan for redundancy before deploying Catalyst Center.
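As a hedged sketch of that redundancy, an HSRP pair could keep 10.0.0.1 as the virtual gateway address while each physical router uses its own address (the .251/.252 peer addresses and group number 10 are illustrative assumptions, not part of this lab's single-router configuration):

```
! R1 (intended active) - HSRP sketch only, not part of this lab
interface GigabitEthernet0/0
 ip address 10.0.0.251 255.255.255.0
 standby 10 ip 10.0.0.1
 standby 10 priority 110
 standby 10 preempt
! A second router (R2) would use 10.0.0.252 on the same segment
! with default priority, taking over 10.0.0.1 if R1 fails
```

The advantage is that Catalyst Center and the workstations keep pointing at 10.0.0.1 regardless of which physical router is alive.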

Verify:

show ip interface brief
Interface                  IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0        10.0.0.1        YES manual up                    up
GigabitEthernet0/1        10.0.1.1        YES manual up                    up
Loopback0                 10.255.255.1    YES manual up                    up

Step 2: Configure SW1 — management VLAN and SVI

What we are doing: Create VLAN 10 for management and configure its SVI with 10.0.0.3/24 so the switch itself has a reachable management address. The SVI provides device-management reachability and can also serve as the default gateway for attached devices if required.

configure terminal
vlan 10
 name MANAGEMENT
exit
interface Vlan10
 description Management SVI
 ip address 10.0.0.3 255.255.255.0
 no shutdown
exit
interface GigabitEthernet0/1
 switchport mode access
 switchport access vlan 10
 no shutdown
exit
ip default-gateway 10.0.0.1
end
write memory

What just happened: VLAN 10 was created and an SVI assigned 10.0.0.3/24. The access port Gi0/1 was placed in VLAN 10 so PC1 can reach the management SVI and gateway. The switch's default-gateway is set so the switch OS can reach out-of-subnet management services (syslog, SNMP, or Catalyst Center); note that ip default-gateway takes effect only when IP routing is disabled, which is the default on a Layer 2 switch.

Real-world note: On multi-switch networks, SVI state follows VLAN presence on the switch — if the VLAN is not active (no active ports), the SVI can be down. Ensure at least one active port or use no autostate if supported.

Verify:

show vlan brief
VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Gi0/2, Gi0/3
10   MANAGEMENT                       active    Gi0/1

show ip interface brief
Interface              IP-Address      OK? Method Status       Protocol
Vlan10                 10.0.0.3        YES manual up           up
GigabitEthernet0/1     unassigned      YES unset  up           up

Step 3: Configure Catalyst-Center VM network (basic IP)

What we are doing: On the Catalyst Center appliance (virtual machine), ensure the management interface has the expected IP (10.0.0.10/24), gateway 10.0.0.1, and domain lab.nhprep.com. This enables the Catalyst Center to be reachable and to resolve FQDNs for integrations.

!--- Illustrative Linux commands as run from the appliance shell. On a real Catalyst
!--- Center appliance, network settings are applied through the appliance configuration
!--- wizard; ad-hoc ip/echo commands like these do not persist across reboots.
ip addr add 10.0.0.10/24 dev eth0
ip link set eth0 up
ip route add default via 10.0.0.1
echo "search lab.nhprep.com" > /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolv.conf

What just happened: The appliance's eth0 interface was assigned 10.0.0.10/24 and activated; a default route to 10.0.0.1 was added so the appliance can reach external networks. The resolv.conf entries allow the appliance to resolve names in lab.nhprep.com and to use 8.8.8.8 as a DNS server.

Real-world note: Production Catalyst Center appliances use static addressing, with DNS, NTP, and proxy settings defined at install time. Always validate /etc/resolv.conf and NTP settings — time skew can break certificate validation and Kubernetes control-plane operations.

Verify:

ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 02:42:c0:a8:00:0a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.10/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever

ip route show
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.10
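The resolv.conf written in this step can also be validated programmatically. A minimal Python parser sketch (the parse_resolv_conf name is this example's own; the sample string is exactly the two lines echoed above) confirms the search domain and nameserver:

```python
def parse_resolv_conf(text):
    """Extract search domains and nameservers from resolv.conf content."""
    search, nameservers = [], []
    for line in text.splitlines():
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue  # skip blanks and comments
        if fields[0] == "search":
            search.extend(fields[1:])
        elif fields[0] == "nameserver":
            nameservers.append(fields[1])
    return search, nameservers

# Content produced by the echo commands in this step
sample = "search lab.nhprep.com\nnameserver 8.8.8.8\n"
domains, servers = parse_resolv_conf(sample)
print(domains, servers)   # ['lab.nhprep.com'] ['8.8.8.8']
```

On the appliance you would feed it the real file (open("/etc/resolv.conf").read()) and fail the pre-flight check if lab.nhprep.com or a reachable nameserver is missing.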

Step 4: Verify operator workstation (PC1) connectivity and access Catalyst Center UI/API

What we are doing: Configure PC1 with static IP 10.0.0.100/24 and verify reachability to Catalyst Center (ping + HTTP(S)). This confirms management-plane path is correctly established.

!--- PC1: example commands for a Linux CLI (on Windows, use netsh, ping, and curl.exe equivalents)
ip addr add 10.0.0.100/24 dev eth0
ip route add default via 10.0.0.1
ping -c 4 10.0.0.10
curl -k https://10.0.0.10

What just happened: PC1 was assigned an IP in the management subnet and a default gateway. ping verifies Layer 3 reachability. curl attempts to fetch the Catalyst Center UI over HTTPS; the API gateway listens on port 443 and the appliance presents a certificate (self-signed by default). The -k flag skips certificate verification, which is acceptable only for lab tests.

Real-world note: Do not ignore certificate checks in production — use valid certificates for the Catalyst Center UI and API gateway (Kong) to avoid security issues.

Verify:

PING 10.0.0.10 (10.0.0.10) 56(84) bytes of data.
64 bytes from 10.0.0.10: icmp_seq=1 ttl=64 time=1.23 ms
64 bytes from 10.0.0.10: icmp_seq=2 ttl=64 time=0.95 ms
64 bytes from 10.0.0.10: icmp_seq=3 ttl=64 time=0.89 ms
64 bytes from 10.0.0.10: icmp_seq=4 ttl=64 time=0.88 ms

--- 10.0.0.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3057ms
rtt min/avg/max/mdev = 0.887/0.987/1.233/0.143 ms

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Length: 3456
Connection: keep-alive

<!DOCTYPE html>
<html>
<head><title>Catalyst Center</title></head>
<body>
<h1>Welcome to Catalyst Center</h1>
<!-- UI content -->
</body>
</html>
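The ping and curl checks above can be scripted so the same gate runs before every automation job. A minimal Python sketch (the host/port are this lab's values; tcp_reachable is this example's own name) confirms that a TCP listener answers on 443 without touching certificates at all:

```python
import socket

def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Lab values: the Catalyst Center UI/API gateway listens on 443
    ok = tcp_reachable("10.0.0.10", 443, timeout=1.0)
    print("reachable" if ok else "unreachable")
```

This only proves the transport path and listener; the curl test above additionally proves the gateway returns a valid HTTP response.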

Step 5: Snapshot validation — check essential Catalyst Center services (conceptual verification)

What we are doing: Confirm that core services (API gateway, Kubernetes control plane, and DB services) are up. In a lab, you will inspect service status via the appliance's management commands or the Kubernetes API. This step prevents proceeding with onboarding until platform dependencies are healthy.

!--- Conceptual commands (run on Catalyst-Center appliance shell or management CLI)
kubectl get pods --all-namespaces
systemctl status kong
ps aux | grep mongod

What just happened: kubectl get pods lists all pods across namespaces — you should see pods for platform components (kube-system, catalyst services). systemctl status kong checks the API gateway process. ps aux | grep mongod verifies MongoDB process presence. If any of these services are down, orchestration, API, or DB operations will fail.

Real-world note: In production, Catalyst Center includes clustered DB and messaging (MongoDB/Postgres, RabbitMQ/Kafka). A single failed DB node can impact system behavior; always check cluster health, replication, and disk space.

Verify:

kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   kube-apiserver-catalyst-master-0            1/1     Running   0          3d
kube-system   kube-controller-manager-catalyst-master-0   1/1     Running   0          3d
platform      kong-5f7d8c9b6f-abcde                       1/1     Running   0          3d
platform      automation-service-7c9df7b6f7-xyz12         1/1     Running   2          3d
database      mongodb-0                                   1/1     Running   0          3d

systemctl status kong
● kong.service - Kong API Gateway
   Loaded: loaded (/lib/systemd/system/kong.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2026-03-31 09:15:03 UTC; 3 days ago
 Main PID: 2345 (kong)
    Tasks: 12 (limit: 4915)
   CGroup: /system.slice/kong.service
           └─2345 /usr/local/bin/kong start

ps aux | grep mongod
root     1234  0.5  1.2 123456 23456 ?        Ssl  Mar29   0:12 /usr/bin/mongod --config /etc/mongod.conf
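A quick way to turn the kubectl listing above into a pass/fail gate is to parse its columns. A minimal Python sketch (unhealthy_pods is this example's own name; the sample text mirrors this step's output format) flags any pod that is not Running or not fully ready:

```python
def unhealthy_pods(kubectl_output):
    """Parse `kubectl get pods --all-namespaces` text; return problem pods."""
    problems = []
    lines = kubectl_output.strip().splitlines()
    for line in lines[1:]:                       # skip the header row
        ns, name, ready, status = line.split()[:4]
        up, total = ready.split("/")
        if status != "Running" or up != total:
            problems.append(f"{ns}/{name}: {ready} {status}")
    return problems

sample = """\
NAMESPACE     NAME        READY   STATUS             RESTARTS   AGE
kube-system   apiserver   1/1     Running            0          3d
platform      kong-abcde  1/1     Running            0          3d
database      mongodb-0   0/1     CrashLoopBackOff   12         3d
"""
print(unhealthy_pods(sample))   # ['database/mongodb-0: 0/1 CrashLoopBackOff']
```

An empty list means the platform pods are healthy and onboarding can proceed; anything else should stop the workflow until the listed pods are investigated with kubectl logs and kubectl describe.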

Verification Checklist

  • Check 1: Router Gi0/0 is up with 10.0.0.1 — verify with show ip interface brief on R1.
  • Check 2: Switch SVI Vlan10 is up with 10.0.0.3 — verify with show ip interface brief and show vlan brief on SW1.
  • Check 3: Catalyst Center eth0 reachable at 10.0.0.10 — verify with ping 10.0.0.10 and curl -k https://10.0.0.10 from PC1.
  • Check 4: Core platform pods and Kong service are running — verify with kubectl get pods --all-namespaces and systemctl status kong on the appliance.

Common Mistakes

  • Symptom: PC1 cannot ping Catalyst-Center (10.0.0.10)
    Cause: Access port on the switch is in the wrong VLAN, or the SVI is down
    Fix: Verify with show vlan brief and move the port to VLAN 10; ensure interface Vlan10 is up

  • Symptom: Catalyst-Center responds but the UI shows errors / services failing
    Cause: One or more microservices (pods) are crash-looping or a DB is unavailable
    Fix: Run kubectl get pods --all-namespaces and kubectl logs <pod-name>; check DB processes (MongoDB/Postgres) and messaging services

  • Symptom: DNS resolution failures from Catalyst-Center
    Cause: /etc/resolv.conf not configured or pointing at the wrong DNS server
    Fix: Update resolv.conf with search lab.nhprep.com and a reachable nameserver (e.g., 8.8.8.8)

  • Symptom: Certificates invalid when accessing the UI
    Cause: Appliance uses self-signed certificates or the wrong domain
    Fix: Install valid certificates matching lab.nhprep.com; ignoring the browser warning is acceptable in the lab only, never in production

Key Takeaways

  • Catalyst Center is a distributed, microservices-based platform running containers orchestrated by Kubernetes; understanding pods, services, and the API gateway (Kong) is essential for troubleshooting and operations.
  • Before onboarding devices, establish and verify the management-plane network: gateway, SVI, appliance IP, DNS, and operator workstation access — without these, automation and assurance will fail.
  • DBaaS (MongoDB/Postgres), messaging queues (RabbitMQ/Kafka), and monitoring (InfluxDB/Grafana) are core platform dependencies; issues in these layers often manifest as failing automations or incomplete assurance data.
  • In production, always plan for redundancy (gateway high-availability, clustered DBs, multiple controller nodes) and proper certificate management (do not use insecure workarounds).

Tip: Treat the Catalyst Center appliance like any other critical control-plane device: validate networking, DNS, time synchronization (NTP), and storage health before starting onboarding or large-scale automation.