SD-Access as Code: Infrastructure Automation with Catalyst Center APIs
Introduction
Imagine deploying an entire SD-Access fabric across multiple sites, complete with virtual networks, anycast gateways, transit configurations, and TrustSec policies, all from a single YAML file that takes minutes to apply. No clicking through GUI wizards, no copy-pasting CLI commands, and no wondering whether the network as designed matches the network as deployed. This is the promise of SD-Access as Code, and it is transforming how enterprise teams build and operate campus networks.
Network engineers today face a set of persistent challenges: network changes consume too many resources and introduce too many errors, the network as designed rarely matches reality, compliance is difficult to verify, and skill shortages limit innovation. The traditional change management mindset treats change as risky and complex, relying on manual processes with limited feedback systems. SD-Access as Code flips this model by embracing a DevOps mindset where change is expected, collaboration is active, accountability is empowered, and automation is the default.
In this article, we will explore how SD-Access as Code works from end to end. We will cover the evolution of Infrastructure as Code (IaC), how Terraform integrates with Catalyst Center APIs, the data model that simplifies fabric definitions, the CI/CD pipeline that enforces validation, and the IOS-XE programmability interfaces that power device-level automation. Every configuration example and technical detail comes from real-world lab environments running Catalyst Center 2.3.7 and ISE 3.2 on Virtual Catalyst 9000 platforms.
What Is SD-Access as Code?
SD-Access as Code is an approach to deploying and managing SD-Access fabrics using Infrastructure as Code principles. Rather than manually configuring fabric sites, virtual networks, anycast gateways, and security policies through the Catalyst Center GUI, engineers define their intended network state in structured data files. An orchestration layer then translates that data into API calls against Catalyst Center and ISE, computing the minimal set of changes needed to bring the live network into alignment with the declared intent.
The core goals of SD-Access as Code are straightforward:
- Reduce time to value by lowering the barrier of entry to network orchestration through simplification, abstraction, and curated examples
- Enable rapid provisioning by allowing users to instantiate SD-Access fabrics and provision devices in minutes
- Remove complexity by eliminating the need to deal with references, dependencies, or loops in the underlying SD-Access object model
- Ensure consistency by making every configuration versioned, auditable, and reproducible
Users focus on describing the intended configuration while relying on a set of maintained and tested Terraform modules. There is no need to understand the low-level SD-Access object model or the intricacies of API endpoint sequencing. The framework handles all of that behind the scenes.
How Has Infrastructure as Code Evolved for Networking?
The journey to modern IaC for networking has unfolded across roughly three generations, each addressing the limitations of the one before it.
Generation 1: Scripts and Spreadsheets (~20 Years Ago)
In the earliest era of network automation, the source of truth was typically an Excel spreadsheet or a flat file stored on some shared drive. A human would trigger a script with no or limited support for modify and remove scenarios. The script would translate the spreadsheet data and apply configuration via an API or CLI. Verification was either nonexistent or entirely manual. Configuration may or may not have been correct and complete, and there was no way to know for certain without logging into each device.
Generation 2: Templates and Data (~10 Years Ago)
The second generation separated templates from data and introduced review workflows. Changes would trigger a complex script that merged a template with data, attempting to handle all add, modify, and remove scenarios. Templates and data were stored and maintained separately, which was an improvement, but configuration correctness was still not guaranteed. The verification step was manual or semi-automated at best.
Generation 3: Modern IaC (Today)
Today's IaC approach uses data-model-based tools that perform pre-change validations and compute minimal diffs for all add, modify, and remove scenarios. The data model serves as both the source of truth and the input to automated end-to-end tests and health checks. Configuration is working, complete, versioned, and auditable. This is the generation that SD-Access as Code represents.
| Aspect | ~20 Years Ago | ~10 Years Ago | IaC Today |
|---|---|---|---|
| Source | Excel / flat files | Templates + data separately maintained | Data model with structured data |
| Interpretation | Human triggers script, limited scenarios | Complex script merging template + data | Data-model tools compute minimal diffs |
| Verification | None or manual | Manual or semi-automated | Automated e2e tests and health checks |
| Configuration | May or may not be correct | May or may not be correct | Working, complete, versioned, auditable |
Pro Tip: The key differentiator of modern IaC is not just automation but the fact that the tooling understands the data model well enough to compute precise diffs. This means you can safely run the same plan repeatedly and only the actual differences will be applied.
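To make the idea of a computed diff concrete, here is a small, hypothetical sketch (not Terraform's actual algorithm) of how a tool can derive the minimal change set from two name-keyed states:

```python
# Illustrative only: derive minimal create/update/delete sets from a
# declared desired state and the observed live state, keyed by object name.
def plan(desired: dict, actual: dict) -> dict:
    """Return the minimal set of changes to converge actual onto desired."""
    create = {k: v for k, v in desired.items() if k not in actual}
    delete = {k: v for k, v in actual.items() if k not in desired}
    update = {k: desired[k] for k in desired.keys() & actual.keys()
              if desired[k] != actual[k]}
    return {"create": create, "update": update, "delete": delete}

desired = {"SDA_VN_CORP": {"vlan": 201}, "SDA_VN_GUEST": {"vlan": 202}}
actual  = {"SDA_VN_CORP": {"vlan": 201}, "SDA_VN_OLD":   {"vlan": 99}}

diff = plan(desired, actual)
# Re-running the plan against an already-converged state yields no changes,
# which is exactly what makes repeated runs safe.
assert plan(desired, desired) == {"create": {}, "update": {}, "delete": {}}
```

The idempotency assertion at the end is the property the Pro Tip describes: the same plan can run repeatedly, and only genuine differences produce operations.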
What Is the SD-Access as Code Architecture?
The SD-Access as Code architecture consists of three logical layers that work together to translate human intent into deployed infrastructure.
The Data Model Layer
At the top sits the data model, expressed in YAML files. This is where network engineers define their intended state. The YAML structure follows the Catalyst Center and ISE object hierarchy but abstracts away implementation details. A typical fabric definition might look like this:
```yaml
catalyst_center:
  fabric:
    transits:
      - name: IP_TRANSIT
        type: IP_BASED_TRANSIT
        routing_protocol_name: BGP
        autonomous_system_number: 65023
    fabric_sites:
      - name: Global/United States/New York
        l3_virtual_networks:
          - name: SDA_VN_TECH
          - name: SDA_VN_GUEST
          - name: SDA_VN_BYOD
          - name: SDA_VN_CORP
        anycast_gateways:
          - name: ADM_TECH
            vlan_name: VLAN_TECH
            vlan_id: 201
            traffic_type: DATA
```
This single YAML block defines an IP-based transit with BGP (ASN 65023), a fabric site under the Global/United States/New York hierarchy, four Layer 3 virtual networks, and an anycast gateway with VLAN 201 for data traffic. Compare this to the dozens of GUI steps or API calls that would be required to configure the same topology manually.
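Because the data model is plain YAML, it can be inspected programmatically before any API call is made. The sketch below (using PyYAML, assumed installed) loads the same structure and checks the pieces a pipeline would consume:

```python
# Sanity-checking the fabric data model with PyYAML (pip install pyyaml).
# The string mirrors the YAML block shown above.
import yaml

FABRIC_YAML = """
catalyst_center:
  fabric:
    transits:
      - name: IP_TRANSIT
        type: IP_BASED_TRANSIT
        routing_protocol_name: BGP
        autonomous_system_number: 65023
    fabric_sites:
      - name: Global/United States/New York
        l3_virtual_networks:
          - name: SDA_VN_TECH
          - name: SDA_VN_GUEST
          - name: SDA_VN_BYOD
          - name: SDA_VN_CORP
        anycast_gateways:
          - name: ADM_TECH
            vlan_name: VLAN_TECH
            vlan_id: 201
            traffic_type: DATA
"""

model = yaml.safe_load(FABRIC_YAML)
fabric = model["catalyst_center"]["fabric"]
site = fabric["fabric_sites"][0]

# One transit, one site, four L3 VNs, one anycast gateway -- as described above
assert len(site["l3_virtual_networks"]) == 4
assert site["anycast_gateways"][0]["vlan_id"] == 201
```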
The Orchestration Layer
The orchestration layer sits between the data model and the APIs. It consists of Terraform modules that consume the YAML data, resolve dependencies between objects, and make the appropriate API calls in the correct order. The orchestration layer interacts with two primary APIs:
- Catalyst Center API for fabric sites, transits, virtual networks, anycast gateways, device provisioning, network profiles, LAN automation, PnP, templates, and wireless configuration
- ISE API for TrustSec security groups (SGTs), security group ACLs (SGACLs), and TrustSec policies
The API Layer
At the bottom, the Catalyst Center and ISE APIs receive the orchestrated requests. The architecture diagram shows this clearly: the data model feeds the orchestration layer, which communicates with Catalyst Center to configure SDA virtual networks and anycast gateways, and with ISE to configure TrustSec security groups and SGTs. For example, a deployment might create a virtual network named CORP with anycast gateways on VLAN 2800 and VLAN 2801, while simultaneously configuring ISE with security groups for Employees (SGT 20) and Guests (SGT 30).
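Every orchestration run begins by authenticating against the Catalyst Center API. The sketch below builds (but does not send) the token request against the documented /dna/system/api/v1/auth/token endpoint; the host and credentials are lab placeholders:

```python
# Sketch: preparing the Catalyst Center authentication request.
# Host and credentials are lab placeholders; nothing is sent on the wire here.
import requests

def token_request(host: str, username: str, password: str):
    """Prepare the POST that exchanges basic-auth credentials for an API token."""
    req = requests.Request(
        "POST",
        f"https://{host}/dna/system/api/v1/auth/token",
        auth=(username, password),
    )
    return req.prepare()

prepared = token_request("lab.nhprep.com", "admin", "Lab@123")
assert prepared.url == "https://lab.nhprep.com/dna/system/api/v1/auth/token"
assert prepared.headers["Authorization"].startswith("Basic ")
# To execute for real: requests.Session().send(prepared, verify=False)
# Subsequent API calls carry the returned token in the X-Auth-Token header.
```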
How Does Terraform Enable SD-Access as Code?
Terraform is the engine that powers SD-Access as Code. Understanding its core concepts is essential for working with this framework.
Terraform Fundamentals
Terraform is an open-source infrastructure provisioning tool. It ships as a single executable binary ready to run on Linux, Windows, and macOS. It has zero server-side dependencies and operates with a strictly client-side architecture. Terraform uses a declarative language called HCL (HashiCorp Configuration Language) and is extended via plugins called providers, created either by HashiCorp or by IT vendors directly.
Terraform has three important building blocks:
- Providers describe a type of infrastructure provider (Catalyst Center, ISE, vSphere, and so on)
- Resources are specific to a given provider and represent individual infrastructure objects
- Variables parameterize the configuration for reuse and flexibility
A collection of HCL files forms a Terraform configuration; from that configuration, Terraform computes an execution plan describing the changes required to reach the desired state. Here is what a basic Catalyst Center provider configuration looks like:
```hcl
provider "catalystcenter" {
  username = "admin"
  password = "Lab@123"
  url      = "https://lab.nhprep.com"
}

resource "catalystcenter_fabric_virtual_network" "VN1" {
  name = "VN1"
}

resource "catalystcenter_anycast_gateway" "CORP" {
  vlan_name          = "VLAN_CORP"
  vlan_id            = 201
  traffic_type       = "DATA"
  l3_virtual_network = "VN1"
}
```
This declares a Catalyst Center provider with credentials, then defines a fabric virtual network named VN1 and an anycast gateway named CORP on VLAN 201 for data traffic.
The Terraform Workflow: Init, Plan, Apply
The Terraform workflow follows three steps that map directly to the SD-Access as Code deployment process.
Step 1: terraform init -- Run this command in the directory where your .tf files reside. Terraform downloads the required provider plugins (such as the Catalyst Center provider) and initializes the working directory.
Step 2: terraform plan -- Terraform queries the configured provider for the current state of the resources you want to create or modify. The result is a delta analysis between your desired state and the actual runtime configuration. You see what will be created (+), destroyed (-), or modified (~). This is your pre-change validation step.
Step 3: terraform apply -- Terraform executes the plan, making the actual API calls to Catalyst Center and ISE to bring the infrastructure into the desired state. Because Terraform maintains state, it knows exactly what has changed and applies only the minimal set of modifications.
Pro Tip: Always review the output of terraform plan before running terraform apply. The plan output is your safety net. It shows you exactly what will change, what will be created, and what will be destroyed. In a production SD-Access deployment, an unexpected delete operation could take down a fabric site.
Why Does SD-Access as Code Separate Data from Code?
One of the most powerful design decisions in the SD-Access as Code framework is the strict separation of data from code. In order to ease maintenance, variable definitions (data) are separated from infrastructure declarations (logic), where one can be updated independently from the other.
Native Terraform vs. SD-Access as Code Data Model
Consider the difference between writing native Terraform and using the SD-Access as Code data model. In native Terraform, you must write HCL resources with loops and variable references:
```hcl
resource "catalystcenter_fabric_site" "site" {
  name = "Campus"
}

variable "transit" {
  default = {
    Transit1 = {
      name = "CORP"
      type = "IP_BASED_TRANSIT"
      asn  = "65010"
    },
    Transit2 = {
      name = "Guest"
      type = "IP_BASED_TRANSIT"
      asn  = "65020"
    }
  }
}

resource "catalystcenter_transit_network" "tr" {
  for_each                 = var.transit
  name                     = each.value.name
  autonomous_system_number = each.value.asn
  type                     = each.value.type
}
```
With the SD-Access as Code data model, the same configuration becomes a simple YAML declaration:
```yaml
catalyst_center:
  fabric:
    fabric_sites:
      - name: Campus
    transits:
      - name: CORP
        autonomous_system_number: "65010"
      - name: Guest
        autonomous_system_number: "65020"
```
The YAML version is dramatically simpler. There are no for_each loops, no variable blocks, no resource references. The orchestration module handles all of those implementation details automatically.
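A rough Python sketch of what the orchestration module does internally -- turning the YAML list into the name-keyed map a for_each consumes, while filling in a default type -- might look like this (simplified, hypothetical logic, not the module's actual code):

```python
# Hypothetical, simplified version of the orchestration layer's flattening step:
# a YAML list of transits becomes a name-keyed map suitable for for_each,
# with a default type applied where the user omitted one.
def to_for_each_map(transits: list[dict]) -> dict:
    """Key each transit by name so the provider can address it individually."""
    return {t["name"]: {**t, "type": t.get("type", "IP_BASED_TRANSIT")}
            for t in transits}

transits = [
    {"name": "CORP", "autonomous_system_number": "65010"},
    {"name": "Guest", "autonomous_system_number": "65020"},
]

m = to_for_each_map(transits)
assert m["CORP"]["autonomous_system_number"] == "65010"
assert m["Guest"]["type"] == "IP_BASED_TRANSIT"  # default applied automatically
```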
The Terraform Module
The main Terraform file that ties everything together is remarkably concise:
```hcl
module "catalyst_center" {
  source                    = "git::https://github.com/netascode/terraform-catalystcenter-nac-catalystcenter"
  yaml_directories          = ["data/"]
  templates_directories     = ["data/templates/"]
  write_default_values_file = "defaults.yaml"
}
```
This module declaration points to the SD-Access as Code Terraform module, tells it where to find the YAML data files and templates, and specifies where to write default values. The module itself contains the logic split across multiple Terraform files:
| File | Purpose |
|---|---|
| main.tf | Module declaration and provider configuration |
| cc_fabric.tf | Fabric site and transit definitions |
| cc_sites.tf | Site hierarchy management |
| cc_network_settings.tf | Network settings (DNS, DHCP, NTP) |
| cc_network_profiles.tf | Network profile assignments |
| cc_device_provision.tf | Device provisioning and role assignment |
| cc_lan_automation.tf | LAN automation configuration |
| cc_pnp.tf | Plug and Play device onboarding |
| cc_templates.tf | Day-N template management |
| cc_wireless.tf | Wireless configuration |
Each file handles a specific domain of the SD-Access configuration, but engineers never need to modify these files directly. All customization happens in the YAML data files.
How Do Default Values Simplify SD-Access as Code Deployments?
SD-Access as Code comes with pre-defined default values based on common best practices. In some cases, those default values might not be the best choice for a particular deployment and can be overwritten if needed.
The defaults.yaml file contains sensible defaults for all major object types:
```yaml
defaults:
  catalyst_center:
    fabric:
      fabric_sites:
        anycast_gateways:
          critical_pool: false
          intra_subnet_routing_enabled: false
          ip_directed_broadcast: false
          layer2_flooding: false
          multiple_ip_to_mac_addresses: false
          traffic_type: DATA
          wireless_pool: false
        authentication_template_name: No Authentication
        pub_sub_enabled: false
      transits:
        routing_protocol_name: BGP
        type: IP_BASED
```
These defaults mean that when you define an anycast gateway in your data file, you only need to specify the values that differ from the defaults. If your gateway uses DATA traffic type with no Layer 2 flooding, no critical pool, and no authentication, you do not need to declare any of those settings. The framework applies them automatically.
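The overlay behavior can be sketched as a recursive merge in which user-supplied values win over defaults (a simplified illustration, not the module's actual implementation):

```python
# Simplified illustration of the defaults overlay: user data is applied
# on top of framework defaults, recursing into nested mappings.
def merge_defaults(defaults: dict, data: dict) -> dict:
    """Recursively overlay user data on top of default values."""
    out = dict(defaults)
    for key, value in data.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_defaults(out[key], value)
        else:
            out[key] = value  # user value wins over the default
    return out

gateway_defaults = {"critical_pool": False, "layer2_flooding": False,
                    "traffic_type": "DATA"}
user_gateway = {"vlan_id": 201, "traffic_type": "VOICE"}

merged = merge_defaults(gateway_defaults, user_gateway)
assert merged == {"critical_pool": False, "layer2_flooding": False,
                  "traffic_type": "VOICE", "vlan_id": 201}
```

The user declares only vlan_id and the one default they override (traffic_type); everything else arrives from the baseline.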
Appending suffixes to object names is a common practice, but doing it by hand invites human error. Using default values, such suffixes can be defined once and then consistently appended to all objects of a specific type, including references to those objects. This ensures naming consistency across the entire fabric without requiring engineers to remember and manually apply naming conventions.
Pro Tip: Start with the provided defaults and only override values that your specific deployment requires. This minimizes configuration size, reduces errors, and ensures you benefit from best-practice baselines that have been tested across many deployments.
What Does the SD-Access as Code CI/CD Pipeline Look Like?
The target IaC flow for SD-Access as Code integrates with CI/CD pipelines to enforce quality gates at every stage. The workflow follows a structured three-phase approach: Declare, Commit/Validate/Build, and Deploy.
Phase 1: Declare
Customer engineers define their intended network state in YAML data files. This is the "what" -- what fabric sites exist, what virtual networks are needed, what anycast gateways to create, what security policies to enforce. The declaration is purely descriptive and contains no procedural logic.
Phase 2: Commit, Validate, Build
When an engineer pushes changes to a feature branch in Git, the CI/CD pipeline (such as GitLab) triggers automatically. The pipeline performs the following steps:
- Input validation and linting -- The YAML data files are checked for syntax errors, schema compliance, and semantic correctness
- Automated testing -- Automated tests verify that the declared state is valid and internally consistent
- Notification -- Team members receive notifications (via messaging platforms) about the pipeline status
If validation passes, the engineer creates a merge request. The team reviews the proposed changes, and upon approval, the branch is merged to the master branch.
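The semantic validation in this phase can be quite small in practice. The sketch below (hypothetical, not the framework's actual validator) flags anycast gateways that reference an undeclared virtual network -- exactly the kind of dangling reference that would otherwise surface only at deploy time:

```python
# Hypothetical semantic check: every anycast gateway must reference a
# virtual network that is actually declared in the same site definition.
def validate_references(site: dict) -> list[str]:
    """Return human-readable errors for dangling VN references."""
    declared = {vn["name"] for vn in site.get("l3_virtual_networks", [])}
    errors = []
    for gw in site.get("anycast_gateways", []):
        vn = gw.get("l3_virtual_network")
        if vn and vn not in declared:
            errors.append(f"anycast gateway {gw['name']} references unknown VN {vn}")
    return errors

site = {
    "l3_virtual_networks": [{"name": "SDA_VN_CORP"}],
    "anycast_gateways": [
        {"name": "GW_OK", "l3_virtual_network": "SDA_VN_CORP"},
        {"name": "GW_BAD", "l3_virtual_network": "SDA_VN_MISSING"},
    ],
}
assert validate_references(site) == [
    "anycast gateway GW_BAD references unknown VN SDA_VN_MISSING"
]
```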
Phase 3: Deploy
When the merge to master occurs, a second pipeline triggers:
- Deployment -- The Terraform modules execute against the production Catalyst Center and ISE instances
- Post-deployment testing -- Automated tests verify that the deployed configuration matches the intended state
- Notification -- The team is notified of the deployment outcome
- Artifact storage -- The deployment artifacts (state files, plans) are stored for audit purposes
This pipeline approach ensures that no configuration reaches production without passing validation, review, and testing gates. It transforms network changes from risky manual procedures into repeatable, auditable, automated workflows.
The Three-Phase Framework
The Network as Code framework structures this workflow into clear phases with distinct roles:
| Phase | Activity | Tools |
|---|---|---|
| 1. Declare | Define intended state in YAML | Text editor, Git |
| 2. Commit, Validate, Build | Push to Git, trigger pipeline, run validations | Git, CI/CD platform, linting tools |
| 3. Deploy (optional manual gate) | Apply changes to production infrastructure | Terraform, Catalyst Center API, ISE API |
The Catalyst Center module and ISE module operate as separate components within the framework, each handling their respective API interactions while sharing the same data model and pipeline infrastructure.
How Does Device Inventory Fit into SD-Access as Code?
Device inventory management is a critical component of the SD-Access as Code workflow. Devices must be discovered, onboarded, and provisioned before they can participate in the fabric. The data model handles this through the inventory section:
```yaml
catalyst_center:
  inventory:
    devices:
      - name: P3-BN1.lab.nhprep.com
        hostname: P3-BN1.lab.nhprep.com
        device_ip: 192.168.30.64
        serial_number: FOC2644022A
        pid: C9300-24P
        state: PROVISION
        onboarding_template:
          name: onboarding_template
          variables:
            - name: hostname
              value: P3-BN1.lab.nhprep.com
            - name: ip_address
              value: 192.168.30.64
            - name: peer_ip_address
              value: tbd
            - name: source_vlan
              value: tbd
```
This inventory entry declares a Catalyst 9300-24P switch with a specific serial number and IP address. The state: PROVISION indicates that this device should be fully provisioned into the fabric. The onboarding template section specifies variables that will be applied during the device onboarding process, including hostname, IP address, peer IP address, and source VLAN.
The lab topology used for testing SD-Access as Code deployments typically runs Virtual Catalyst 9000 switches simulated on Cisco Modeling Labs (CML), with all devices reachable via the 198.18.128.0/18 subnet. The environment includes Catalyst Center 2.3.7.7 and ISE 3.2 in a fully virtualized setup.
How Does IOS-XE Programmability Support SD-Access as Code?
While SD-Access as Code operates primarily at the controller level through Catalyst Center APIs, the underlying IOS-XE programmability on Catalyst 9000 switches provides the device-level automation capabilities that make the entire stack work. Understanding these interfaces is essential for anyone building comprehensive network automation.
Structured Data vs. CLI: Why It Matters
The shift from CLI-based to model-driven management is fundamental to reliable network automation. CLI produces unstructured text designed for human consumption, while YANG data models produce structured data (XML or JSON) designed for machine-to-machine communication.
| Capability | Structured Data (YANG) | Unstructured Data (CLI) |
|---|---|---|
| Data Format | Structured XML/JSON following a schema | Free-form text |
| Parsing | Easy to parse programmatically | Requires fragile screen-scraping |
| Validation | Schema-based YANG validation before commit | Errors caught only at runtime |
| Transactions | Supports atomic commits (all-or-nothing) | Commands executed line-by-line, no rollback |
| Consistency | Standard models (OpenConfig, IETF) enable multi-vendor support | Vendor-specific syntax and outputs |
| Scalability | Efficient for managing hundreds or thousands of devices | Slow, sequential, complex to scale |
| Monitoring | Works with model-driven telemetry (push-based) | Relies on CLI polling or SNMP |
The Three Programmable Interfaces
IOS-XE provides three model-driven programmable interfaces, all built on YANG data models:
NETCONF operates over SSH on port 830. It is the most mature API, offering candidate datastore support, confirmed commits, and rollback capabilities. To enable NETCONF on a Catalyst 9000:
```
Cat9k-1#conf t
Cat9k-1(config)# aaa new-model
Cat9k-1(config)# aaa authentication login default local
Cat9k-1(config)# aaa authorization exec default local
Cat9k-1(config)# username admin privilege 15 password Lab@123
Cat9k-1(config)# netconf-yang
```
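As a hedged illustration of NETCONF in practice, the Python sketch below uses the ncclient library (assumed installed) to read the hostname back through the Cisco-IOS-XE-native model. Only the XML filter is exercised offline; the connection itself needs a live device:

```python
# Sketch: fetching the hostname over NETCONF with ncclient (pip install ncclient).
# Host and credentials are lab placeholders; only the filter is tested offline.
from xml.etree import ElementTree as ET

NATIVE_NS = "http://cisco.com/ns/yang/Cisco-IOS-XE-native"

# Subtree filter asking only for the hostname leaf of the native model
hostname_filter = f"""
<filter>
  <native xmlns="{NATIVE_NS}">
    <hostname/>
  </native>
</filter>
"""

def get_hostname(host: str, username: str, password: str) -> str:
    """Connect over NETCONF (port 830) and return the configured hostname."""
    from ncclient import manager  # requires a reachable device
    with manager.connect(host=host, port=830, username=username,
                         password=password, hostkey_verify=False) as m:
        reply = m.get_config(source="running", filter=hostname_filter)
        root = ET.fromstring(reply.xml)
        return root.find(f".//{{{NATIVE_NS}}}hostname").text

# Offline check: the subtree filter is well-formed XML
assert ET.fromstring(hostname_filter.strip()).tag == "filter"
```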
RESTCONF operates over HTTPS on port 443. It provides a REST-like API that is familiar to web developers and integrates well with tools like Terraform, which uses the RESTCONF interface for its IOS-XE provider. To enable RESTCONF:
```
Cat9k-1#conf t
Cat9k-1(config)# restconf
Cat9k-1(config)# ip http secure-server
```
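For illustration, the sketch below prepares (without sending) a RESTCONF read of the same hostname leaf using the requests library; the host and credentials are lab placeholders:

```python
# Sketch: a RESTCONF read of the hostname leaf via the Cisco-IOS-XE-native model.
# The request is built but not sent; host and credentials are lab placeholders.
import requests

def hostname_request(host: str):
    """Prepare (but do not send) the RESTCONF GET for the hostname leaf."""
    req = requests.Request(
        "GET",
        f"https://{host}/restconf/data/Cisco-IOS-XE-native:native/hostname",
        headers={"Accept": "application/yang-data+json"},
        auth=("admin", "Lab@123"),
    )
    return req.prepare()

p = hostname_request("198.18.128.10")
assert p.url.endswith("Cisco-IOS-XE-native:native/hostname")
# To execute against a live switch: requests.Session().send(p, verify=False)
```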
gNMI (gRPC Network Management Interface) operates over HTTP/2, typically on port 9339. It supports both configuration management through SET operations and model-driven telemetry through SUBSCRIBE operations. To enable gNMI:
```
Cat9k-1#conf t
Cat9k-1(config)# gnmi-yang
Cat9k-1(config)# gnmi-yang server
Cat9k-1(config)# gnmi-yang port 50052
```
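As a sketch of gNMI's telemetry side, the dictionary below describes a sampled subscription in the shape the pygnmi library expects (assumed from its documentation; intervals are in nanoseconds). The connection itself appears only in comments, since it needs a live device:

```python
# Sketch of a gNMI streaming subscription in the dictionary shape used by
# pygnmi (pip install pygnmi) -- an assumption; verify against the library docs.
subscription = {
    "subscription": [
        {
            "path": "openconfig-interfaces:interfaces/interface/state/counters",
            "mode": "sample",
            "sample_interval": 60 * 1_000_000_000,  # nanoseconds: every 60 s
        }
    ],
    "mode": "stream",
    "encoding": "json_ietf",
}

assert subscription["subscription"][0]["sample_interval"] == 60_000_000_000

# With a reachable device (gnmi-yang enabled on port 50052, as configured above):
# from pygnmi.client import gNMIclient
# with gNMIclient(target=("198.18.128.10", 50052), username="admin",
#                 password="Lab@123", insecure=True) as gc:
#     for update in gc.subscribe2(subscribe=subscription):
#         print(update)
```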
Programmable Interface Comparison
| Feature | NETCONF | RESTCONF | gNMI |
|---|---|---|---|
| Minimum IOS XE | 16.6 (2017) | 16.7 (2017) | 16.8 (2018) |
| Default Port | 830 | 443 | 9339 |
| Encoding | XML | XML or JSON | JSON_IETF + Proto |
| Security | SSH + PKI certificate or password | HTTPS user/pass | mTLS certificate |
| Transport | SSH | HTTPS | HTTP/2 |
| Telemetry | Dial-In supported | Not supported | Dial-In and Dial-Out |
| Key Benefit | Candidate datastores, validation, rollback | REST is common, well-known operations | Single API for config + telemetry |
Pro Tip: The Terraform IOS-XE provider uses RESTCONF under the hood. When you use Terraform to manage Catalyst 9000 switches directly (outside of Catalyst Center), make sure RESTCONF and the HTTPS server are enabled on every target device.
Terraform with IOS-XE
The Terraform IOS-XE provider manages device configuration declaratively across 19 features using 98 resources and data sources. It interfaces with the IOS-XE RESTCONF API and supports both declarative and imperative configuration:
- Declarative support through feature-specific resources that manage configuration state
- Imperative support through two additional resources: iosxe_restconf for direct YANG-modeled API calls and iosxe_cli for CLI commands abstracted over RESTCONF using the Cisco-IOS-XE-cli-rpc YANG model
The provider configuration follows the same pattern as the Catalyst Center provider:
```hcl
provider "iosxe" {
  username = "admin"
  password = "Lab@123"
  url      = "https://lab.nhprep.com"
}
```
How Do Terraform and Ansible Work Together for Network Automation?
A common question from network engineers adopting SD-Access as Code is whether to use Terraform, Ansible, or both. The answer is that they serve complementary roles and can coexist effectively.
Key Differences
| Characteristic | Terraform | Ansible |
|---|---|---|
| Approach | Declarative (describe desired state) | Can be declarative and imperative |
| State Management | Keeps state locally, knows configured vs. desired | Does not keep state (mostly) |
| Resource Lifecycle | Can automatically destroy/recreate resources | Mutates infrastructure directly |
| Primary Use Case | Infrastructure as Code (IaC) tooling | Task-based configuration management |
| Interface | CLI / HCL files | CLI / YAML files |
| Server Dependencies | None (client-side only) | Python on the control node |
Terraform excels at managing infrastructure state declaratively. It knows what is currently configured versus the desired end state and can automatically destroy and recreate resources when needed. Ansible, on the other hand, is a task-based tool that mutates infrastructure procedurally. Terraform can even call Ansible to perform post-deployment tasks, such as configuring virtual machines that Terraform has provisioned.
For SD-Access as Code specifically, Terraform is the primary tool because the deployment is fundamentally about declaring desired state and having the tooling compute the required changes. However, Ansible remains valuable for operational tasks, ad-hoc configuration changes, and integration with NETCONF for device-level automation.
Ansible with IOS-XE APIs
Ansible supports all three IOS-XE programmable interfaces. For NETCONF integration, you install the NETCONF collection and use XML payloads:
```yaml
- name: conf-host
  hosts: c9300
  connection: netconf
  gather_facts: no
  tasks:
    - name: hostname-conf
      netconf_config:
        xml: |
          <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
            <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
              <hostname>c9300-pod29</hostname>
            </native>
          </config>
```
For RESTCONF, the ansible.netcommon collection provides a restconf_get module that can retrieve operational data in JSON or XML format. For gNMI, the nokia.grpc collection supports gNMI configuration and retrieval operations, though this integration currently works only with OpenConfig models.
What Does a Complete SD-Access as Code Lab Workflow Look Like?
A comprehensive SD-Access as Code deployment follows a structured lab workflow that builds complexity progressively. Understanding this workflow helps when planning your own automation journey.
Lab 1: Catalyst Center as Code (Basic)
The first phase covers foundational elements using manual Terraform execution:
- Network Design -- Define the site hierarchy in YAML
- IP Pools -- Declare IP address pools for the fabric
- Network Settings -- Configure DNS, DHCP, NTP, and other network-wide settings
This phase serves as a simple example to verify that the Terraform-to-Catalyst-Center-API pipeline is working correctly.
Lab 2: Catalyst Center as Code (Comprehensive)
The second phase extends to a full SD-Access fabric deployment:
- Full SDA Fabric deployment -- Create fabric sites, transits, and virtual networks
- Fabric Role assignment -- Assign border, control plane, and edge roles to devices
- Day-N templating -- Apply post-provisioning templates for additional configuration
- Endpoint onboarding -- Bring endpoints into the fabric
- Connectivity verification -- Validate end-to-end connectivity across the fabric
Lab 3: ISE as Code
The third phase brings ISE into the automation framework:
- TrustSec Security Groups (SGTs) -- Define security groups like Employees (SGT 20) and Guests (SGT 30)
- Security Group ACLs (SGACLs) -- Create ACLs that enforce policies between security groups
- TrustSec Policies -- Build the policy matrix that governs inter-group communication
- Autogen Matrix -- Automatically generate the complete CTS matrix from high-level policy definitions
Lab 4: CI/CD Integration
The fourth phase integrates everything into a CI/CD pipeline, connecting the Catalyst Center and ISE modules with Git-based version control, automated validation, and pipeline-triggered deployments.
Lab 5: Validation and Testing (Optional)
The final phase adds syntax and semantic validations to catch errors before they reach production. This includes YAML schema validation, cross-reference checking, and post-deployment verification tests.
Model-Driven Telemetry for SD-Access as Code Monitoring
Once your SD-Access fabric is deployed via code, monitoring it programmatically is equally important. Model-driven telemetry on IOS-XE replaces traditional SNMP polling with push-based streaming of operational data.
Configuring Telemetry Subscriptions
A gRPC telemetry subscription to collect CPU utilization data looks like this:
```
telemetry ietf subscription 1
 encoding encode-kvgpb
 filter xpath /process-cpu-ios-xe-oper:cpu-usage/cpu-utilization
 stream yang-push
 update-policy periodic 6000
 receiver ip address 10.1.1.3 57500 protocol grpc-tcp
```
This subscription pushes CPU utilization data every 60 seconds (6000 centiseconds, the unit used by update-policy periodic) in key-value GPB encoding to a gRPC receiver at 10.1.1.3 on port 57500.
Telemetry Performance Comparison
Model-driven telemetry is significantly more efficient than SNMP. In a 60-minute collection sample with a 60-second update interval collecting 17 xpaths:
| Interface | CPU Impact | Data Rate | Average Packet Rate |
|---|---|---|---|
| gNMI | +3% | 6 kBps | 5 pps |
| gRPC Dial-Out | +3% | 19 kBps | 58 pps |
| NETCONF | +2% | 23 kBps | 29 pps |
| RESTCONF | +4% | 35 kBps | 37 pps |
| SNMP | +6% | 24 kBps | 90 pps |
Even when SNMP is measuring only interfaces, the load is still significantly higher than YANG-based telemetry, which measures substantially more data across 17 different operational xpaths including ARP, CDP, environment sensors, interface state, LLDP, memory statistics, CPU utilization, PoE data, and more.
The telemetry data flows from IOS-XE devices to a collector/receiver that decodes the data, stores it in a time-series database, and presents it through monitoring and visualization dashboards.
Pro Tip: Start with gRPC Dial-Out telemetry for most enterprise deployments. It has the lowest overhead, pushes data from the device to the collector based on static configuration, and is the most widely deployed option in production environments. Consider gNMI Dial-In for more advanced NetDevOps workflows where the collector needs dynamic subscription control.
Frequently Asked Questions
What prerequisites do I need to start with SD-Access as Code?
You need fundamental knowledge of SD-Access, Catalyst Center, and ISE. On the tooling side, you need Terraform installed (a single binary available for Linux, Windows, and macOS) and access to a Catalyst Center instance with API credentials. The framework assumes you understand basic Git workflows and YAML syntax. You do not need deep Terraform expertise because the SD-Access as Code modules abstract away most of the HCL complexity.
Can I use SD-Access as Code for brownfield deployments?
Yes, but with careful planning. Terraform maintains a state file that tracks the resources it manages. For brownfield deployments where infrastructure already exists, you would need to import existing resources into Terraform state before managing them declaratively. The terraform plan step is critical here because it shows you exactly what changes Terraform intends to make, allowing you to verify that existing infrastructure will not be disrupted.
How does SD-Access as Code handle the ISE integration?
The framework includes a separate ISE module that operates alongside the Catalyst Center module. The ISE module manages TrustSec configuration including security groups (SGTs), security group ACLs (SGACLs), and the TrustSec policy matrix. It uses the ISE API to create and manage these objects. The data model allows you to define security groups like Employees (SGT 20) and Guests (SGT 30) in the same YAML files that define your fabric topology, ensuring that network and security policies stay synchronized.
What happens if a Terraform apply fails midway through?
Terraform tracks state for every resource it manages. If an apply operation fails partway through, Terraform records which resources were successfully created and which were not. On the next terraform plan and terraform apply cycle, Terraform picks up where it left off, attempting to create only the resources that failed. This is fundamentally different from script-based automation where a mid-execution failure often requires manual cleanup and investigation before retrying.
Which programmable interface should I use for direct device management?
It depends on your use case. NETCONF is best when you need candidate datastore support, confirmed commits, and rollback capabilities. RESTCONF is ideal when you want REST-like simplicity and Terraform integration (the IOS-XE Terraform provider uses RESTCONF). gNMI is the best choice when you need a single API for both configuration and streaming telemetry. For SD-Access as Code specifically, the Catalyst Center API handles most device configuration, so direct device-level API access is typically needed only for Day-N operational tasks and telemetry.
How do I validate my YAML data files before deploying?
The CI/CD pipeline includes input validation and linting as the first step after a Git push. This catches syntax errors (malformed YAML), schema violations (incorrect field names or types), and semantic errors (references to nonexistent objects). Additionally, terraform plan performs its own validation by comparing the declared state against the live infrastructure and reporting any inconsistencies. Running both validation layers before deployment ensures that only correct, complete configurations reach production.
Conclusion
SD-Access as Code represents a fundamental shift in how enterprise networks are deployed and managed. By combining the declarative power of Terraform with the comprehensive APIs of Catalyst Center and ISE, network engineers can define entire SD-Access fabrics in simple YAML files and deploy them through automated, auditable CI/CD pipelines. The separation of data from code means that network engineers focus on describing what they want while the framework handles the complexity of how to achieve it.
The key takeaways from this guide are:
- SD-Access as Code simplifies fabric deployment by abstracting the low-level SD-Access object model into an intuitive YAML data model
- Terraform provides the automation engine with its init, plan, and apply workflow ensuring that only validated, minimal changes reach production
- Default values reduce configuration burden by encoding best practices that can be selectively overridden
- CI/CD pipelines enforce quality gates with input validation, automated testing, and post-deployment verification
- IOS-XE programmability (NETCONF, RESTCONF, gNMI) provides the device-level interfaces that make model-driven automation possible
- Model-driven telemetry extends the automation story from deployment into ongoing monitoring with dramatically lower overhead than SNMP
The DevOps mindset that SD-Access as Code promotes -- embracing change, active collaboration, empowered accountability, and automation -- is the path forward for network operations teams that want to reduce errors, accelerate deployments, and maintain compliance at scale.
To deepen your understanding of SD-Access, Catalyst Center, and network automation, explore the certification training courses available at NHPREP. Hands-on practice with these technologies in a structured learning environment is the fastest way to build the skills needed to implement SD-Access as Code in your own organization.