Lesson 2 of 6

Automating Network Hierarchy

Objective

In this lesson you will automate the creation of network hierarchy objects (sites, buildings, floors) and provision network settings for an SD-Access fabric using the Catalyst Center provider and Terraform. You will learn how to express topology data as code (separating data from logic), push that desired state via the API, and verify the deployed objects. In production, modeling sites/buildings/floors as code ensures consistent site naming, repeatable deployments across locations, and faster onboarding of devices and wireless mapping.

Topology

ASCII diagram (management-plane view). Each link is labeled with the exact management IPs used in this lab.

Operator workstation (lab.nhprep.com)
        |
        | HTTPS API (https://10.1.1.1)
        v
Catalyst Center (API endpoint)
IP: 10.1.1.1
        |
        | SSH / Provisioning
        v
Device: P3-BN1.cisco.com
IP: 192.168.30.64

Note: This lesson focuses on the management/API plane (Catalyst Center and Terraform). The device listed is present in the Catalyst Center inventory and is referenced by its exact management IP throughout this lesson.

Device Table

Device                   Role                                       Management IP
Catalyst Center (API)    Management / Catalyst Center provider      10.1.1.1
P3-BN1.cisco.com         Switch to provision / inventory example    192.168.30.64
Operator workstation     Where Terraform runs                       (uses lab.nhprep.com as DNS name)

Quick Recap

  • Lesson 1 introduced the SDA-as-Code model and the basic Terraform flow. This lesson continues by modeling the fabric site hierarchy (sites, buildings, and floors) and pushing those definitions to Catalyst Center via the provider.
  • No new physical devices/IPs are introduced; we reuse 10.1.1.1 for Catalyst Center and 192.168.30.64 for the example switch from the inventory.

Key Concepts

  • Catalyst Center provider: Terraform communicates with the Catalyst Center API over HTTPS (the provider URL). Terraform performs GETs to read current state and POST/PUT to create/update resources during apply. Think of Terraform as a remote client that requests the current inventory, computes a delta, and instructs Catalyst Center to reconcile.
  • Data vs Code separation: Keep variables and site/building/floor data in YAML/variables files and keep the provider/resource logic in HCL. This reduces human error when many sites share common attributes (e.g., default VLAN pools, anycast gateway settings).
  • Idempotency and plan/apply: Running terraform plan produces the delta — what will change. terraform apply sends the API calls. In production, you always run plan first so you can review added/modified/deleted resources.
  • Site hierarchy model: A site contains buildings, which contain floors. This maps directly to Catalyst Center objects; mapping endpoints to a location (floor) is used for policy, wireless mapping, and troubleshooting.
  • Why it matters: If you model floors consistently, wireless and wired endpoint location services and policy attribution (for example, TrustSec SGT mapping) work across the enterprise without manual steps for every new campus.
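The plan/apply delta described above can be illustrated with Terraform's own set functions. This is a minimal sketch, not provider code; the building names are examples reused from this lesson:

locals {
  # What Catalyst Center currently has vs. what the code declares
  current_buildings = toset(["Building-A"])
  desired_buildings = toset(["Building-A", "Building-B"])

  # terraform plan effectively computes deltas like these
  to_create  = setsubtract(local.desired_buildings, local.current_buildings) # ["Building-B"]
  to_destroy = setsubtract(local.current_buildings, local.desired_buildings) # (empty)
}

output "delta" {
  value = { create = local.to_create, destroy = local.to_destroy }
}

Here terraform plan would report one building to create and none to destroy, which is exactly the reconciliation the provider performs against the Catalyst Center API.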

Step-by-step configuration

Step 1: Author the provider block and authentication

What we are doing: Configure Terraform to talk to the Catalyst Center API. This block defines the provider endpoint and credentials so subsequent resources are managed there. Authentication is required because Terraform will query and modify Catalyst Center objects.

provider "catalystcenter" {
  username = "admin"
  password = "Lab@123"
  url      = "https://10.1.1.1"
}

What just happened: The provider block tells Terraform which API endpoint to contact (10.1.1.1) and the credentials to use (admin / Lab@123). When you run terraform init and later terraform plan, Terraform will load the provider plugin and use these credentials to authenticate over HTTPS. The provider performs initial API queries to build a model of existing resources.

Real-world note: In production you would not store credentials in plaintext in the HCL file. Use a secrets manager or environment variables. Here we use Lab@123 for lab consistency.
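Before terraform init can download the provider plugin, Terraform must also know the provider's registry source. A minimal required_providers block, assuming the provider is published under the CiscoDevNet registry namespace:

terraform {
  required_providers {
    catalystcenter = {
      source  = "CiscoDevNet/catalystcenter"
      version = ">= 1.0.0"
    }
  }
}

Without this block, terraform init would look for the provider in the default hashicorp/ namespace and fail to find it.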

Verify:

terraform init
Initializing the backend...

Initializing provider plugins...
- Finding ciscodevnet/catalystcenter versions matching ">= 1.0.0"...
- Installing ciscodevnet/catalystcenter v1.0.0...
Terraform has been successfully initialized!

You may now begin working with Terraform. Try "terraform plan" to see the execution plan.

Step 2: Declare the fabric site and hierarchy (site, buildings, floors)

What we are doing: Create a Terraform resource that represents a Catalyst Center fabric site and defines buildings and floors under that site. This models the real-world campus hierarchy.

resource "catalystcenter_fabric_site" "ny_site" {
  name = "Global/United States/New York"

  buildings = [
    {
      name   = "Building-A"
      floors = ["Floor-1", "Floor-2"]
    },
    {
      name   = "Building-B"
      floors = ["Floor-1"]
    }
  ]

  l3_virtual_networks = [
    "SDA_VN_TECH",
    "SDA_VN_GUEST",
    "SDA_VN_CORP"
  ]

  anycast_gateways = [
    {
      name         = "ADM_TECH"
      vlan_name    = "VLAN_TECH"
      vlan_id      = 201
      traffic_type = "DATA"
    }
  ]
}

What just happened: This HCL block declares a fabric site named "Global/United States/New York" and two buildings (Building-A and Building-B) with floors. It also defines the set of L3 virtual networks and an anycast gateway definition used by that site. When applied, Terraform will instruct the Catalyst Center API to create the site object and its nested building/floor objects. These objects are then available for mapping devices, APs, and endpoints.

Real-world note: Consistent naming (e.g., "Global/United States/New York") is crucial for integrating with inventory systems and automation pipelines. Use a canonical naming convention.

Verify:

terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # catalystcenter_fabric_site.ny_site will be created
  + resource "catalystcenter_fabric_site" "ny_site" {
      + id                  = (known after apply)
      + name                = "Global/United States/New York"
      + buildings           = [
          + {
              + floors = [
                  + "Floor-1",
                  + "Floor-2",
                ]
              + name   = "Building-A"
            },
          + {
              + floors = [
                  + "Floor-1",
                ]
              + name   = "Building-B"
            },
        ]
      + l3_virtual_networks = [
          + "SDA_VN_TECH",
          + "SDA_VN_GUEST",
          + "SDA_VN_CORP",
        ]
      + anycast_gateways     = [
          + {
              + name         = "ADM_TECH"
              + vlan_name    = "VLAN_TECH"
              + vlan_id      = 201
              + traffic_type = "DATA"
            },
        ]
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Step 3: Externalize site data (separate data from code)

What we are doing: Put the site/building/floor data into a YAML file and reference it from Terraform. This keeps data separate from resource logic, so the same HCL template can be reused across multiple sites.

# data/sites.yaml
fabric_sites:
  - name: "Global/United States/New York"
    buildings:
      - name: "Building-A"
        floors: ["Floor-1", "Floor-2"]
      - name: "Building-B"
        floors: ["Floor-1"]
    l3_virtual_networks: ["SDA_VN_TECH", "SDA_VN_GUEST", "SDA_VN_CORP"]
    anycast_gateways:
      - name: "ADM_TECH"
        vlan_name: "VLAN_TECH"
        vlan_id: 201
        traffic_type: "DATA"
# main.tf (snippet showing module usage)
module "catalyst_center" {
  source                    = "git::https://github.com/netascode/terraform-catalystcenter-nac-catalystcenter"
  yaml_directories          = ["data/"]
  templates_directories     = ["data/templates/"]
  write_default_values_file = "defaults.yaml"
}

What just happened: The YAML file contains the concrete site data; the Terraform module is configured to read YAML data from data/. This approach enables operators to update site definitions without modifying the module logic. When the module runs, it will iterate over the YAML entries and create the corresponding Catalyst Center objects.
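If you prefer not to use the module, plain Terraform can read the same YAML directly with the built-in yamldecode function. A sketch reusing the fabric-site schema from Step 2 (the resource attribute names follow that illustrative block):

locals {
  fabric_sites = yamldecode(file("${path.module}/data/sites.yaml")).fabric_sites
}

resource "catalystcenter_fabric_site" "site" {
  for_each = { for s in local.fabric_sites : s.name => s }

  name                = each.value.name
  buildings           = each.value.buildings
  l3_virtual_networks = each.value.l3_virtual_networks
  anycast_gateways    = each.value.anycast_gateways
}

The for_each map keys the resources by site name, so adding a second site to sites.yaml creates a second fabric site on the next apply without touching the HCL.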

Real-world note: Using a module that reads YAML or variables makes it possible for network operators to maintain spreadsheets exported to YAML or to automate bulk site creation from a CMDB.
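For example, a flat CMDB export can be parsed with Terraform's csvdecode function and regrouped into the hierarchy. A sketch assuming a hypothetical data/sites.csv with site,building,floor columns:

locals {
  # Each CSV row becomes a map: { site = "...", building = "...", floor = "..." }
  rows = csvdecode(file("${path.module}/data/sites.csv"))

  # Group floor names under "site/building" keys using the grouping-mode "..." syntax
  floors_by_building = {
    for r in local.rows : "${r.site}/${r.building}" => r.floor...
  }
}

The resulting map has the same shape as the buildings/floors structure in sites.yaml, so the same resource logic can consume either source.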

Verify:

terraform plan
Refreshing Terraform state in-memory prior to plan...
Data sources evaluated.

An execution plan has been generated and is shown below.
  + catalystcenter_fabric_site.ny_site
      ...
Plan: 1 to add, 0 to change, 0 to destroy.

Step 4: Apply the configuration and deploy the site objects

What we are doing: Execute the planned changes so Terraform instructs Catalyst Center to create the site, buildings, floors, and anycast gateway objects. This makes the objects available to the network inventory and subsequent automation (device onboarding, LAN automation).

terraform apply -auto-approve

What just happened: Terraform sent API requests to the Catalyst Center endpoint (10.1.1.1) authenticated with the provider credentials. The provider created the fabric site record, nested building and floor records, and the anycast gateway object. Because Terraform tracks state, it recorded the object IDs so subsequent runs can detect drift or further changes.

Real-world note: During apply, watch for errors indicating conflicts with existing names or permissions issues. In production, run apply in a gated pipeline with approvals.

Verify:

terraform state list
catalystcenter_fabric_site.ny_site
terraform state show catalystcenter_fabric_site.ny_site
# (output formatted for readability)
id = "fabricsite-1234"
name = "Global/United States/New York"
buildings = [
  {
    name = "Building-A"
    floors = ["Floor-1", "Floor-2"]
  },
  {
    name = "Building-B"
    floors = ["Floor-1"]
  }
]
l3_virtual_networks = ["SDA_VN_TECH", "SDA_VN_GUEST", "SDA_VN_CORP"]
anycast_gateways = [
  {
    name = "ADM_TECH"
    vlan_name = "VLAN_TECH"
    vlan_id = 201
    traffic_type = "DATA"
  }
]

Step 5: Validate mapping and integration (inventory mapping)

What we are doing: Verify the inventory contains the device and that site mappings are ready for manual device assignment or automated onboarding templates. We confirm that the Catalyst Center inventory shows the listed switch (P3-BN1.cisco.com) and that site objects exist for assignment.

# Verify inventory device exists (simulate provider inventory query)
terraform state list | grep catalystcenter_inventory_device || true

What just happened: We checked Terraform state for inventory device objects. If the device was previously added to Catalyst Center inventory (as shown in the reference), its state would appear. Seeing both the device and the fabric_site in state confirms that site objects and devices coexist and can be linked by onboarding templates or later automation.

Real-world note: In production, onboarding templates will reference these site names and variables (source_vlan, peer_ip_address) so devices are automatically placed into the correct site/floor during zero-touch provisioning.
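A hypothetical sketch of that pattern: supply the onboarding variables from Terraform input variables instead of hard-coded "tbd" placeholders. The attribute names mirror the lab's state output and are illustrative, not the provider's definitive schema:

# Hypothetical: values resolved per site at plan time
variable "peer_ip_address" { type = string }
variable "source_vlan"     { type = string }

locals {
  onboarding_variables = [
    { name = "hostname",        value = "P3-BN1.cisco.com" },
    { name = "ip_address",      value = "192.168.30.64" },
    { name = "peer_ip_address", value = var.peer_ip_address },
    { name = "source_vlan",     value = var.source_vlan },
  ]
}

With this shape, the same onboarding template serves every site; only the variable values change per deployment.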

Verify:

terraform state list
catalystcenter_fabric_site.ny_site
catalystcenter_inventory_device.P3-BN1.cisco.com
terraform state show catalystcenter_inventory_device.P3-BN1.cisco.com
id = ""
hostname = "P3-BN1.cisco.com"
device_ip = "192.168.30.64"
serial_number = "FOC2644022A"
pid = "C9300-24P"
state = "PROVISION"
onboarding_template = {
  name = "onboarding_template"
  variables = [
    { name = "hostname", value = "P3-BN1.cisco.com" },
    { name = "ip_address", value = "192.168.30.64" },
    { name = "peer_ip_address", value = "tbd" },
    { name = "source_vlan", value = "tbd" }
  ]
}

Verification Checklist

  • Check 1: Fabric site exists — run terraform state list and confirm catalystcenter_fabric_site.ny_site is present.
  • Check 2: Buildings and floors created — run terraform state show catalystcenter_fabric_site.ny_site and verify buildings contains Building-A and Building-B with the expected floors.
  • Check 3: Inventory device present — run terraform state show catalystcenter_inventory_device.P3-BN1.cisco.com and ensure the device IP 192.168.30.64 and the onboarding_template variables exist.

Common Mistakes

Symptom: terraform plan shows an authentication error
Cause: Provider credentials are incorrect or the API URL is unreachable.
Fix: Verify the provider block uses the correct URL (https://10.1.1.1) and credentials. For this lab, use username "admin" and password "Lab@123". Ensure network access to 10.1.1.1.

Symptom: Resource creation fails due to a duplicate name
Cause: A site/building/floor with the same canonical name already exists in Catalyst Center.
Fix: Use unique, canonical names, or query Catalyst Center to remove or rename the existing objects before apply.

Symptom: Device not appearing in Terraform state
Cause: The device was added directly in Catalyst Center and never imported into Terraform state.
Fix: Import the device with terraform import, or create a matching catalystcenter_inventory_device resource in HCL and run terraform plan to adopt it.

Symptom: YAML data not picked up by the module
Cause: The module is configured with the wrong yaml_directories or file path.
Fix: Confirm yaml_directories = ["data/"] and that the site YAML is in data/sites.yaml. Run terraform plan to see the parsed data.

Key Takeaways

  • Model sites, buildings, and floors as data (YAML) and the provisioning logic as code (HCL). This separation reduces accidental inconsistencies and eases bulk updates.
  • Terraform interacts with Catalyst Center via the provider; plan shows the delta, apply implements changes. Always review the plan before applying in production.
  • Consistent naming across sites is important for downstream automation (onboarding templates, wireless mapping, policy).
  • Keep credentials and secrets out of HCL in production; use secret managers or environment variables — the lab uses admin / Lab@123 for reproducibility.

Tip: In production pipelines, include a validation stage that lints the YAML data (naming conventions, required attributes) before running Terraform to prevent malformed site objects from being created.
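Terraform itself can enforce part of that convention before any API call is made, using a variable validation block. A minimal sketch for the canonical "Global/" prefix used in this lesson:

variable "site_name" {
  type        = string
  description = "Canonical site path, e.g. Global/United States/New York"

  validation {
    condition     = can(regex("^Global/", var.site_name))
    error_message = "Site names must start with the canonical \"Global/\" prefix."
  }
}

With this in place, a mistyped site path fails terraform plan immediately, instead of creating a misnamed object in Catalyst Center.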