Fabric Provisioning via API
Objective
In this lesson you will provision an SDA fabric site, create transit networks, declare L3 virtual networks, and assign fabric roles programmatically using a Catalyst Center "as-code" workflow. This matters in production because automated fabric provisioning reduces human error, ensures consistency across sites, and enables repeatable deployments during scale-out or DR events. In a real campus or multi-site deployment, operators use this pattern to bring new sites online quickly while guaranteeing consistent policy, VLANs, and transit BGP parameters.
Quick Recap
This lab continues from Lesson 1 where the base SDA design and a management network were introduced. No new physical devices are added in this lesson — instead we create the fabric site objects and transits in code and push them to Catalyst Center (API-driven automation).
ASCII topology (management plane view)
Note: the fabric underlay/overlay devices were introduced in Lesson 1. Here we show the management endpoints used by the provisioning automation.
                 +------------------------------+
                 | Catalyst Center API Server   |
                 | Management IP: 10.10.100.10  |
                 | Hostname: api.lab.nhprep.com |
                 +---------------+--------------+
                                 |
                                 | mgmt/HTTPS (TCP 443)
                                 |
          +----------------------+----------------------+
          |                      |                      |
+---------+---------+  +---------+---------+  +---------+---------+
| Seed Switch (PNP) |  | Border Node 1     |  | Border Node 2     |
| Mgmt: 10.10.100.1 |  | Mgmt: 10.10.100.2 |  | Mgmt: 10.10.100.3 |
| If: Gig0/0        |  | If: Gig0/1        |  | If: Gig0/1        |
+-------------------+  +-------------------+  +-------------------+
Fabric logical objects created by code:
- Fabric Site: Global/United States/New York
- Transits: CORP (ASN 65010), Guest (ASN 65020), IP_TRANSIT (ASN 65023)
- L3 VNs: SDA_VN_TECH, SDA_VN_GUEST, SDA_VN_BYOD, SDA_VN_CORP
- Anycast GW VLAN: VLAN_TECH (ID 201)
Device table (management-plane info used by automation)
| Device | Role | Management IP | Interface for API/PNP |
|---|---|---|---|
| Catalyst Center API Server | Orchestration API | 10.10.100.10 | N/A (HTTPS) |
| Seed Switch (PNP) | Fabric seed/provision | 10.10.100.1 | GigabitEthernet0/0 |
| Border Node 1 | Transit | 10.10.100.2 | GigabitEthernet0/1 |
| Border Node 2 | Transit | 10.10.100.3 | GigabitEthernet0/1 |
Tip: Catalyst Center exposes a REST API; the declarative data (YAML/HCL) is consumed by your automation tooling, which translates it into API calls. Your automation must reach the management IP over HTTPS and must authenticate with a valid API token (not covered here).
Key Concepts
- Declarative vs Imperative configuration: With “as‑code” you declare the desired end state (fabric site, transits, VNs) in YAML/HCL. The orchestration engine computes and enforces differences. Think of it like writing a shopping list (declarative) rather than instructing someone step‑by‑step to pick items.
- Separation of data and logic: Keep variables (site names, ASNs, VLAN IDs) in data files (defaults.yaml / variables) and the provisioning logic in the module/templates. This reduces human error when reusing templates across sites.
- Transit networks and BGP ASN: Transit objects represent the connectivity to external routing domains. The autonomous_system_number declared for a transit (e.g., 65023) is used when generating BGP configuration on border nodes, which is critical for route exchange.
- Anycast Gateways and VLANs: Anycast gateway objects map to VLANs and SVI constructs used for host default-gateway behavior in VXLAN overlays; VLAN IDs (e.g., 201) must be consistent across templates and the actual switch VLAN database.
- Idempotence and verification: Running the same Terraform/HCL/YAML against API multiple times should produce no changes if the desired state already exists. Always verify with plan/show operations before and after apply.
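The declarative, idempotent model above can be sketched in a few lines: the orchestration engine compares desired state against actual state and emits only the difference, so a second run over an unchanged system is a no-op. The dictionary shapes below are illustrative, not the Catalyst Center API schema.

```python
# Minimal sketch of declarative reconciliation: return only the desired
# objects that are missing or different in the actual (deployed) state.
# Key and value shapes are made up for illustration.

def plan_changes(desired: dict, actual: dict) -> dict:
    """Return desired entries that are absent or differ in actual."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

desired = {
    "transit/IP_TRANSIT": {"asn": 65023, "type": "IP_BASED_TRANSIT"},
    "site/Global/United States/New York": {"anycast_vlan": 201},
}
actual = {}  # first run: nothing exists yet

first_run = plan_changes(desired, actual)   # everything must be created
actual.update(first_run)                    # pretend the apply succeeded
second_run = plan_changes(desired, actual)  # idempotent: nothing to do

print(len(first_run), len(second_run))  # 2 0
```

This is why running the same apply twice should report zero changes: the diff, not the full desired state, drives the API calls.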
Step-by-step configuration
Step 1: Prepare the defaults and fabric data files
What we are doing: Create the data payloads (defaults.yaml and fabric.yaml) that declare fabric-wide defaults and the new fabric site with transit and L3 virtual networks. Separating defaults from the site data reduces errors and centralizes common settings (e.g., BGP defaults).
# defaults.yaml
defaults:
  catalyst_center:
    fabric:
      fabric_sites:
        anycast_gateways:
          critical_pool: false
          intra_subnet_routing_enabled: false
          ip_directed_broadcast: false
          layer2_flooding: false
          multiple_ip_to_mac_addresses: false
          traffic_type: DATA
          wireless_pool: false
        authentication_template_name: "No Authentication"
        pub_sub_enabled: false
      transits:
        routing_protocol_name: BGP
        type: IP_BASED
# fabric.yaml
catalyst_center:
  fabric:
    transits:
      - name: IP_TRANSIT
        type: IP_BASED_TRANSIT
        routing_protocol_name: BGP
        autonomous_system_number: 65023
    fabric_sites:
      - name: Global/United States/New York
        l3_virtual_networks:
          - name: SDA_VN_TECH
          - name: SDA_VN_GUEST
          - name: SDA_VN_BYOD
          - name: SDA_VN_CORP
        anycast_gateways:
          - name: ADM_TECH
            vlan_name: VLAN_TECH
            vlan_id: 201
            traffic_type: DATA
What just happened: The YAML files declare the fabric defaults and the specific site objects. The defaults.yaml centralizes common boolean defaults (e.g., layer2 flooding off) while fabric.yaml defines the transit (ASN 65023), the site path, L3 VNs, and an anycast gateway VLAN with ID 201. When the module reads these files, it will create the corresponding Catalyst Center API objects.
Real-world note: Keeping defaults in a single file avoids inconsistent suffixes and repeated booleans across many sites in a large deployment.
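Conceptually, the module merges defaults.yaml into each site entry before building API payloads. The sketch below shows one plausible recursive fill-in, assuming the site data wins over defaults; the real module's merge logic may differ.

```python
# Illustrative merge of default values into site-specific data.
# The helper and dict shapes are assumptions for this sketch, not the
# module's actual implementation.

def merge_defaults(defaults: dict, data: dict) -> dict:
    """Recursively overlay data onto defaults; data keys win."""
    out = dict(defaults)
    for key, value in data.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_defaults(out[key], value)
        else:
            out[key] = value
    return out

# Subset of defaults.yaml for anycast gateways
anycast_defaults = {"critical_pool": False, "layer2_flooding": False,
                    "traffic_type": "DATA"}
# Subset of the site's anycast gateway from fabric.yaml
anycast_site = {"vlan_name": "VLAN_TECH", "vlan_id": 201}

merged = merge_defaults(anycast_defaults, anycast_site)
print(merged["vlan_id"], merged["traffic_type"])  # 201 DATA
```

The merged object is what actually becomes the API payload, which is why a wrong boolean in defaults.yaml silently affects every site that does not override it.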
Verify:
# show files in data directory
ls -l data/
-rw-r--r-- 1 admin admin 842 Apr 1 12:00 defaults.yaml
-rw-r--r-- 1 admin admin 1024 Apr 1 12:01 fabric.yaml
# display the fabric.yaml content to confirm values
cat data/fabric.yaml
catalyst_center:
  fabric:
    transits:
      - name: IP_TRANSIT
        type: IP_BASED_TRANSIT
        routing_protocol_name: BGP
        autonomous_system_number: 65023
    fabric_sites:
      - name: Global/United States/New York
        l3_virtual_networks:
          - name: SDA_VN_TECH
          - name: SDA_VN_GUEST
          - name: SDA_VN_BYOD
          - name: SDA_VN_CORP
        anycast_gateways:
          - name: ADM_TECH
            vlan_name: VLAN_TECH
            vlan_id: 201
            traffic_type: DATA
Step 2: Declare transit networks as variables and resources
What we are doing: Define transit network data that will be turned into Catalyst Center transit objects. We use a variable map to keep multiple transits flexible and iterate with a for_each resource block — this prevents repetitive code and makes it simple to add/remove transits.
# variables.tf
variable "transit" {
  default = {
    Transit1 = {
      name = "CORP"
      type = "IP_BASED_TRANSIT"
      asn  = "65010"
    }
    Transit2 = {
      name = "Guest"
      type = "IP_BASED_TRANSIT"
      asn  = "65020"
    }
  }
}
# cc_transit.tf
resource "catalystcenter_transit_network" "tr" {
  for_each                 = var.transit
  name                     = each.value.name
  autonomous_system_number = each.value.asn
  type                     = each.value.type
}
What just happened: The variable "transit" map defines two transits (CORP ASN 65010 and Guest ASN 65020). The resource uses for_each to generate one Catalyst Center transit object per map entry. This constructs API payloads for each transit network without duplicating resource blocks.
Real-world note: Using maps and for_each is a scalable practice in multi-site deployments where transits vary by site or customer.
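For readers less familiar with HCL, the for_each expansion behaves like iterating a map and emitting one API payload per entry. A Python analogue of the same pattern (the payload key names here are assumptions, not the provider's real schema):

```python
# Expand a map of transit definitions into one payload per entry,
# mirroring Terraform's for_each. Payload field names are illustrative.

transits = {
    "Transit1": {"name": "CORP", "type": "IP_BASED_TRANSIT", "asn": "65010"},
    "Transit2": {"name": "Guest", "type": "IP_BASED_TRANSIT", "asn": "65020"},
}

payloads = [
    {"name": t["name"], "autonomousSystemNumber": t["asn"], "type": t["type"]}
    for t in transits.values()
]
print([p["name"] for p in payloads])  # ['CORP', 'Guest']
```

Adding a third transit means adding one map entry; no resource block or loop body changes, which is exactly why the map-plus-for_each pattern scales.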
Verify:
# terraform plan shows the creation of transit resources
terraform plan
# Expected output (complete lines relevant to transits)
Plan: 2 to add, 0 to change, 0 to destroy.
+ catalystcenter_transit_network.tr["Transit1"]
name: "CORP"
autonomous_system_number: "65010"
type: "IP_BASED_TRANSIT"
+ catalystcenter_transit_network.tr["Transit2"]
name: "Guest"
autonomous_system_number: "65020"
type: "IP_BASED_TRANSIT"
# End of plan output
Step 3: Instantiate the Catalyst Center module to create the fabric site
What we are doing: Use the provisioning module to read data files and create the declared fabric objects (site, l3 virtual networks, anycast gateway, transits). This step pushes the declarative state to the Catalyst Center API so objects exist in the orchestration system.
# module usage in main.tf (source omitted for lab)
module "catalyst_center" {
  yaml_directories          = ["data/"]
  templates_directories     = ["data/templates/"]
  write_default_values_file = "defaults.yaml"
}
# initialize and apply
terraform init
terraform apply -auto-approve
What just happened: The module ingested YAML files from data/ and created Catalyst Center API objects per the templates and defaults. terraform init prepares providers and modules; terraform apply sends API requests to create the fabric site, transits, L3 VNs, and anycast gateway. The write_default_values_file parameter ensures that defaults are written for debugging and traceability.
Real-world note: In production you would set up a remote state backend and CI/CD pipeline so applies are audited and rollbacks are reproducible.
Verify:
# terraform apply output
terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of provider...
- Installing provider plugins...
Terraform has been successfully initialized!
terraform apply -auto-approve
module.catalyst_center.catalystcenter_transit_network.tr["Transit1"]: Creating...
module.catalyst_center.catalystcenter_transit_network.tr["Transit2"]: Creating...
module.catalyst_center.catalystcenter_transit_network.tr["Transit1"]: Creation complete after 3s [id=tr-CORP]
module.catalyst_center.catalystcenter_transit_network.tr["Transit2"]: Creation complete after 3s [id=tr-Guest]
module.catalyst_center.catalystcenter_site.site["Global/United States/New York"]: Creating...
module.catalyst_center.catalystcenter_site.site["Global/United States/New York"]: Creation complete after 5s [id=site-ny]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
# Show the created resources
terraform show
# Expected excerpt showing the created site and transits
# (Full resource attributes follow; excerpt shows key attributes)
# catalystcenter_transit_network.tr["Transit1"]
# name: "CORP"
# autonomous_system_number: "65010"
# type: "IP_BASED_TRANSIT"
# catalystcenter_transit_network.tr["Transit2"]
# name: "Guest"
# autonomous_system_number: "65020"
# type: "IP_BASED_TRANSIT"
# catalystcenter_site.site["Global/United States/New York"]
# name: "Global/United States/New York"
# anycast_gateways[0].vlan_id: 201
# anycast_gateways[0].vlan_name: "VLAN_TECH"
Step 4: Provision device role assignment and templates (PnP / device provisioning)
What we are doing: Configure device provisioning templates and instruct the module to apply role assignments (seed, border, edge) using the templates directory. This step binds physical devices (by serial/IP) to the logical fabric roles created in the site.
# data/templates/device_roles.yaml (example)
devices:
  - hostname: seed-switch-ny
    management_ip: 10.10.100.1
    role: seed
  - hostname: border-ny-1
    management_ip: 10.10.100.2
    role: border
  - hostname: border-ny-2
    management_ip: 10.10.100.3
    role: border
# re-run terraform apply to pick up templates
terraform plan
terraform apply -auto-approve
What just happened: The templates declare device identity and intended fabric roles. On apply, the module converts those templates into Catalyst Center device provisioning API calls (PnP). The API interprets role assignments and schedules device templates to be applied during bootstrap (seed will distribute fabric information; borders will receive transit BGP config).
Real-world note: Ensure device inventory (serial numbers / certificates) match the templates; mismatches will prevent PnP from assigning the role at bootstrap.
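A pre-apply sanity check can catch such mismatches before PnP fails at bootstrap. A hedged sketch, assuming the inventory export is a simple hostname-to-management-IP map (the real inventory format will differ):

```python
# Cross-check device_roles template entries against an inventory export
# before applying. The inventory dict shape is an assumption for this sketch.

devices = [
    {"hostname": "seed-switch-ny", "management_ip": "10.10.100.1", "role": "seed"},
    {"hostname": "border-ny-1", "management_ip": "10.10.100.2", "role": "border"},
    {"hostname": "border-ny-2", "management_ip": "10.10.100.3", "role": "border"},
]
inventory = {  # hostname -> management IP, as exported from Catalyst Center
    "seed-switch-ny": "10.10.100.1",
    "border-ny-1": "10.10.100.2",
    "border-ny-2": "10.10.100.99",  # deliberate mismatch for the demo
}

def find_mismatches(devices, inventory):
    """Return hostnames whose template IP disagrees with inventory."""
    return [d["hostname"] for d in devices
            if inventory.get(d["hostname"]) != d["management_ip"]]

print(find_mismatches(devices, inventory))  # ['border-ny-2']
```

Running a check like this in CI before terraform apply surfaces the mismatch as a failed pipeline step instead of a stalled device at bootstrap.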
Verify:
# terraform plan shows planned changes for device provisioning
terraform plan
# Expected plan output indicating device provisioning objects to create
Plan: 3 to add, 0 to change, 0 to destroy.
+ catalystcenter_device.provision["seed-switch-ny"]
hostname: "seed-switch-ny"
management_ip: "10.10.100.1"
role: "seed"
+ catalystcenter_device.provision["border-ny-1"]
hostname: "border-ny-1"
management_ip: "10.10.100.2"
role: "border"
+ catalystcenter_device.provision["border-ny-2"]
hostname: "border-ny-2"
management_ip: "10.10.100.3"
role: "border"
Step 5: Validate the fabric objects and role assignment in Catalyst Center
What we are doing: Confirm that the transits, site, L3 VNs, anycast gateway and device role assignments exist in Catalyst Center by inspecting the Terraform state and module outputs.
# show terraform state list and outputs
terraform state list
terraform output
What just happened: terraform state list enumerates the objects Terraform created and is tracking. terraform output (if module defines outputs) presents key values such as site ID, transit IDs, and anycast gateway mapping. This verification ensures the desired state exists in the orchestration system and is consistent with the data files.
Real-world note: Use Catalyst Center UI or API to cross‑check device onboarding progress; Terraform state shows what was requested but the device may still be in process of bootstrapping.
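Cross-checking from a script follows the same pattern: authenticate, then read objects back over HTTPS with the X-Auth-Token header. The sketch below only constructs the request; the intent path shown is an assumption that may vary by Catalyst Center release, and the token value is a placeholder.

```python
# Build (but do not send) an authenticated read request against the
# Catalyst Center REST API. Endpoint path and token are assumptions.
import urllib.request

BASE = "https://api.lab.nhprep.com"
TOKEN = "example-token"  # normally obtained from the auth token endpoint

req = urllib.request.Request(
    BASE + "/dna/intent/api/v1/site",  # assumed read endpoint for sites
    headers={"X-Auth-Token": TOKEN, "Accept": "application/json"},
)
# In the lab you would now call urllib.request.urlopen(req) and compare
# the JSON response against the objects declared in fabric.yaml.
print(req.full_url)
```

Comparing the API response to the data files closes the loop: Terraform state shows what was requested, while the API shows what the controller actually holds.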
Verify:
# sample terraform state list output
terraform state list
module.catalyst_center.catalystcenter_transit_network.tr["Transit1"]
module.catalyst_center.catalystcenter_transit_network.tr["Transit2"]
module.catalyst_center.catalystcenter_site.site["Global/United States/New York"]
module.catalyst_center.catalystcenter_device.provision["seed-switch-ny"]
module.catalyst_center.catalystcenter_device.provision["border-ny-1"]
module.catalyst_center.catalystcenter_device.provision["border-ny-2"]
# sample terraform output (module outputs)
terraform output
site_id = "site-ny"
transit_ids = [
"tr-CORP",
"tr-Guest",
"tr-IP_TRANSIT"
]
anycast_vlan = 201
Verification Checklist
- Check 1: The fabric site object exists. Verify with terraform state list and terraform show that the site name "Global/United States/New York" appears.
- Check 2: Transit networks created. Verify via terraform show or module outputs that transit objects exist with ASNs 65010, 65020, and 65023.
- Check 3: Anycast gateway VLAN 201 exists. Verify with terraform show that the anycast gateway has vlan_id: 201.
- Check 4: Device provisioning entries appear. Ensure device provisioning objects for the seed and border devices exist in terraform state list.
Common Mistakes
| Symptom | Cause | Fix |
|---|---|---|
| Terraform plan shows different VLAN ID than expected | The defaults.yaml or fabric.yaml has an incorrect vlan_id | Edit data/fabric.yaml to set vlan_id: 201; run terraform plan to confirm |
| Device stays in “unassigned” or not bootstrapping | Device inventory (serial) or management IP mismatch vs templates | Confirm device serials and mgmt IPs match device_roles template; correct template and re-apply |
| BGP not forming with external transit | Transit ASN mismatch between Catalyst Center transit object and upstream peer | Verify autonomous_system_number in transit declaration (e.g., 65023) and coordinate with external peer configuration |
| Repeated manual edits in the GUI cause drift | Manual changes in Catalyst Center UI differ from code | Reconcile by updating YAML/HCL to match desired state and re-apply; adopt strict code-of-record policy |
Key Takeaways
- Provisioning SDA fabrics via API (Catalyst Center as‑code) gives repeatability and scale: declare transits, sites, L3 VNs and device roles in data files and let the orchestration engine enforce the state.
- Separate data from logic: keep site-specific variables in YAML and template logic in modules. This reduces errors when deploying multiple sites.
- Transits and ASNs are critical: ensure autonomous_system_number values (65010, 65020, 65023) match the routing design and external peers.
- Always validate with terraform plan, terraform apply, and terraform show (or equivalent API calls) to confirm objects are created and that device provisioning succeeded before expecting data-plane connectivity.
Final tip: In production, connect your repository to CI/CD and a remote state backend, enforce code review on changes to defaults.yaml and fabric.yaml, and monitor device provisioning events in Catalyst Center to catch onboarding issues early.