Orchestration Layer
Objective
In this lesson you will build an orchestration layer that coordinates the Catalyst Center and ISE APIs using Terraform. You will author HCL to describe intended SD‑Access constructs (virtual network and anycast gateway), initialize the Terraform workspace, run plan to validate deltas, and apply the configuration so the orchestration layer pushes the desired state to the Catalyst Center. This matters in production because automated orchestration eliminates manual device-by-device configuration, ensures idempotent deployments, and integrates policy sources (like ISE) into the provisioning pipeline. Real-world scenario: a campus team needs to instantiate a new virtual network and anycast gateway across a fabric and wants the change coordinated reliably by code with audit-ready plan/apply steps.
Quick Recap
Reference topology from earlier lessons (no new devices are added in this lesson). We focus on the Catalyst Center API endpoint and a fabric device already inventoried.
ASCII Topology (interfaces and IPs shown exactly as in the reference):
[ Catalyst Center API ]
hostname: catalyst-center
API URL: https://10.1.1.1
||
|| HTTPS (API)
||
[ Device: P3-BN1.cisco.com ]
hostname: P3-BN1.cisco.com
device_ip: 192.168.30.64
Device table
| Device Name | Hostname | Management IP |
|---|---|---|
| Catalyst Center (API) | catalyst-center | 10.1.1.1 |
| Fabric device | P3-BN1.cisco.com | 192.168.30.64 |
Key Concepts
Before we begin hands-on steps, understand these core ideas:
- Infrastructure as Code (IaC) & Idempotency: Terraform uses a declarative model: you describe the desired state and Terraform computes the delta. This makes operations idempotent, so repeated runs converge to the same result rather than applying duplicate changes. Think of it like telling a thermostat the temperature you want rather than manually flipping a heater on and off.
- Provider / Resource Model: The provider (here, Catalyst Center) is the bridge between Terraform and the device/API. Resources describe concrete objects managed by the provider (for example, a virtual network or anycast gateway). The provider handles API authentication and CRUD operations.
- Plan → Apply Workflow: terraform plan queries the provider to calculate what will change and presents a human-reviewable plan; terraform apply executes the changes. In production pipelines, plan is used for review and gating (CI) while apply performs the actual deployment.
- API-driven Orchestration & Consistency: Orchestration uses REST APIs over HTTPS (in our topology the Catalyst Center API is at https://10.1.1.1). Terraform maps each resource change to the corresponding API calls, so authoring code that matches the provider schema keeps your configuration from drifting away from what the API actually accepts.
- Policy Coordination with ISE (conceptual): In SD‑Access designs, ISE provides policy decisions (e.g., user/device identity). When orchestration coordinates both Catalyst Center and ISE modules, you obtain both the network constructs and the policy objects that reference them. (ISE details are handled in its module; this lesson demonstrates the orchestration pattern.)
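The coordination pattern above is typically expressed as a root module that invokes separate Catalyst Center and ISE child modules. The module paths, the virtual_network input, and the virtual_network_name output below are illustrative assumptions, not part of this lab:

```hcl
# file: main.tf (illustrative root-module layout; paths and names are hypothetical)
module "fabric" {
  source = "./modules/catalyst_center" # creates the virtual network and anycast gateway
}

module "policy" {
  source = "./modules/ise" # creates policy objects that reference the fabric constructs

  # Hypothetical input: the virtual network name exported by the fabric module.
  # The reference also tells Terraform to build the fabric before the policy.
  virtual_network = module.fabric.virtual_network_name
}
```

Because the policy module consumes an output of the fabric module, Terraform orders the two automatically, which is the same dependency mechanism used between resources later in this lesson.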
Step-by-step configuration
We will create a Terraform configuration that connects to the Catalyst Center API, defines a fabric virtual network VN1 and an anycast gateway CORP bound to VLAN 201, then initialize, plan, and apply.
Step 1: Create the Terraform provider file
What we are doing: Create a provider definition that tells Terraform how to authenticate to the Catalyst Center API. This is the foundational piece — without a provider Terraform cannot reach the target system.
# file: provider.tf
provider "catalystcenter" {
  username = "admin"
  password = "Lab@123"
  url      = "https://10.1.1.1"
}
What just happened: The provider block names the Catalyst Center provider and supplies credentials and the API URL. Terraform will use these values to open HTTPS sessions to the Catalyst Center API at 10.1.1.1 and perform authenticated operations. The username/password are used by the provider plugin to obtain a session token or authenticate each API call according to the provider implementation.
Real-world note: Never commit credentials to source control in plain text; use a secrets manager or environment variables in production pipelines. For lab purposes we place them here for clarity.
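One common pattern for keeping credentials out of source control, sketched here assuming the provider arguments shown above, is to move them into sensitive input variables:

```hcl
# file: variables.tf (sketch; variable names are our choice)
variable "cc_username" {
  type      = string
  sensitive = true # redacted in plan/apply output
}

variable "cc_password" {
  type      = string
  sensitive = true
}

# file: provider.tf (revised to consume the variables)
provider "catalystcenter" {
  username = var.cc_username
  password = var.cc_password
  url      = "https://10.1.1.1"
}
```

In the shell or CI environment you would then export TF_VAR_cc_username and TF_VAR_cc_password; Terraform maps any TF_VAR_&lt;name&gt; environment variable to the matching input variable, so no secret ever appears in the repository.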
Verify:
terraform init
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes required for your configuration.
Explanation of verification: terraform init downloads the provider plugin and initializes the workspace. The expected output above indicates successful initialization.
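Note that terraform init can only download the plugin if the configuration declares where it comes from. A minimal terraform block, assuming the provider is published under the CiscoDevNet registry namespace (verify the source address for your environment), looks like:

```hcl
# file: versions.tf (sketch; confirm the registry source and pick a real version)
terraform {
  required_providers {
    catalystcenter = {
      source  = "CiscoDevNet/catalystcenter" # assumed registry address
      version = ">= 0.1.0"                   # pin more tightly in production
    }
  }
}
```

With this block present, terraform init resolves and installs the plugin from the registry (or a configured mirror) before any plan or apply can run.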
Step 2: Define the fabric virtual network resource (VN1)
What we are doing: Add a resource describing the virtual network we want the Catalyst Center to create or ensure exists. This is the network construct that partitions traffic within the SD‑Access fabric.
# file: main.tf
resource "catalystcenter_fabric_virtual_network" "VN1" {
  name = "VN1"
}
What just happened: The resource block tells Terraform to manage a fabric virtual network named VN1 via the catalystcenter provider. On apply, Terraform will ask the provider to create or update the virtual network so Catalyst Center's runtime has an object named VN1. The provider translates this into the appropriate Catalyst Center API call(s).
Real-world note: Virtual networks are logical constructs used to segregate tenant or service traffic. In a campus, a virtual network isolates voice, guest, and corporate traffic for policy and scale.
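If the campus later needs several virtual networks, the same resource scales with for_each instead of copy-pasted blocks. The extra names below are illustrative, not part of this lab:

```hcl
# Illustrative sketch: manage several virtual networks from one resource block
resource "catalystcenter_fabric_virtual_network" "vns" {
  for_each = toset(["VN1", "VN_GUEST", "VN_VOICE"]) # one instance per name

  name = each.value
}
```

Each instance is then addressable individually, e.g. catalystcenter_fabric_virtual_network.vns["VN1"].name, which keeps later references (such as an anycast gateway) explicit about which network they depend on.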
Verify:
terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# catalystcenter_fabric_virtual_network.VN1 will be created
+ resource "catalystcenter_fabric_virtual_network" "VN1" {
+ id = (known after apply)
+ name = "VN1"
}
Plan: 1 to add, 0 to change, 0 to destroy.
Explanation: terraform plan shows Terraform intends to create one resource (VN1). This is the delta between the desired config and the provider state.
Step 3: Define an anycast gateway (CORP) bound to VLAN 201
What we are doing: Declare an anycast gateway resource named CORP that assigns VLAN_CORP with VLAN ID 201 and associates it with the VN1 virtual network. This models the L3 gateway object in the orchestration plane.
# file: main.tf (append)
resource "catalystcenter_anycast_gateway" "CORP" {
  vlan_name          = "VLAN_CORP"
  vlan_id            = 201
  traffic_type       = "DATA"
  l3_virtual_network = catalystcenter_fabric_virtual_network.VN1.name
}
What just happened: The anycast gateway resource describes a distributed gateway instance used by hosts in VLAN 201 for L3 connectivity. By referencing the VN1 resource, Terraform expresses a dependency: VN1 must exist (or be created) before the anycast gateway is created. The provider will call the Catalyst Center APIs to instantiate the anycast gateway object and tie it to the specified virtual network.
Real-world note: Anycast gateways provide consistent default gateways across multiple access devices — think of the same gateway IP advertised everywhere so host mobility is seamless.
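Because l3_virtual_network references the VN1 resource, Terraform infers the dependency edge automatically; an explicit depends_on is only needed when no attribute reference exists. A sketch of the explicit form, redundant here and shown only for contrast:

```hcl
# Sketch: explicit ordering when no attribute reference creates the dependency
resource "catalystcenter_anycast_gateway" "CORP" {
  vlan_name          = "VLAN_CORP"
  vlan_id            = 201
  traffic_type       = "DATA"
  l3_virtual_network = "VN1" # plain string: creates no implicit dependency

  # Forces VN1 to be created first; unnecessary when the attribute
  # reference catalystcenter_fabric_virtual_network.VN1.name is used instead.
  depends_on = [catalystcenter_fabric_virtual_network.VN1]
}
```

Prefer the attribute reference: it both orders the resources and keeps the value itself in sync if the virtual network's name ever changes.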
Verify:
terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# catalystcenter_fabric_virtual_network.VN1 will be created
+ resource "catalystcenter_fabric_virtual_network" "VN1" {
+ id = (known after apply)
+ name = "VN1"
}
# catalystcenter_anycast_gateway.CORP will be created
+ resource "catalystcenter_anycast_gateway" "CORP" {
+ id = (known after apply)
+ vlan_name = "VLAN_CORP"
+ vlan_id = 201
+ traffic_type = "DATA"
+ l3_virtual_network = "VN1"
}
Plan: 2 to add, 0 to change, 0 to destroy.
Explanation: The plan shows both resources will be created. The explicit reference to VN1 in the anycast gateway ensures Terraform enforces the correct creation order.
Step 4: Apply the configuration to the Catalyst Center
What we are doing: Execute the changes described in plan so the Catalyst Center is updated to include the VN1 virtual network and the CORP anycast gateway.
terraform apply -auto-approve
Acquiring state lock. This may take a few moments...
catalystcenter_fabric_virtual_network.VN1: Creating...
catalystcenter_fabric_virtual_network.VN1: Creation complete after 2s [id=vn1-12345]
catalystcenter_anycast_gateway.CORP: Creating...
catalystcenter_anycast_gateway.CORP: Creation complete after 3s [id=agw-corp-201]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
What just happened: terraform apply sent API calls to the provider to create the virtual network and anycast gateway on the Catalyst Center. The provider returned IDs for the created objects. Terraform persisted the new state locally so subsequent plans will treat these objects as existing rather than new.
Real-world note: In production pipelines, approval gates and automated tests run between plan and apply. Also, consider storing Terraform state remotely with locking (e.g., an S3 backend with DynamoDB locking) to support team workflows.
Verify:
terraform show
# catalystcenter_fabric_virtual_network.VN1:
resource "catalystcenter_fabric_virtual_network" "VN1" {
id = "vn1-12345"
name = "VN1"
}
# catalystcenter_anycast_gateway.CORP:
resource "catalystcenter_anycast_gateway" "CORP" {
id = "agw-corp-201"
vlan_name = "VLAN_CORP"
vlan_id = 201
traffic_type = "DATA"
l3_virtual_network = "VN1"
}
Explanation: terraform show prints the current state, confirming the resources and their key attributes as recorded by Terraform after apply.
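The remote state backend mentioned in the real-world note above can be declared directly in the configuration. This is a sketch: the bucket, key, and lock-table names are placeholders, not resources that exist in this lab:

```hcl
# file: backend.tf (illustrative; bucket and table names are placeholders)
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"            # hypothetical S3 bucket
    key            = "sda/orchestration/terraform.tfstate" # state object path
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"             # hypothetical lock table
    encrypt        = true
  }
}
```

After adding a backend block you re-run terraform init, which offers to migrate the existing local state; from then on, state reads and writes go through S3 with DynamoDB providing the locking that prevents two operators from applying concurrently.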
Step 5: Integrate with pipeline practices (validation & testing)
What we are doing: Demonstrate the lightweight steps you'd add to an automated pipeline: init, validate (plan), and run automated tests. This step codifies the workflow so orchestration is repeatable and auditable.
# Example CI pipeline steps (local demonstration)
terraform init
terraform plan -out=plan.out
# (In CI, run automation tests that validate plan.out against policies)
terraform apply plan.out
What just happened: Splitting plan (producing plan.out) from apply (consuming that exact plan file) guarantees that what was reviewed is precisely what gets executed. Automated tests can inspect the plan (for example via terraform show -json plan.out) to ensure no forbidden changes, such as deleting production objects, are present.
Real-world note: Using plan output in CI ensures change approval and prevents “works in my terminal” issues. This is how teams enforce guardrails in production.
Verify:
terraform plan -out=plan.out
Saved the plan to: plan.out
You can run "terraform apply plan.out" to apply this plan to your infrastructure.
Explanation: The plan was saved to a file; applying that file later executes the exact planned changes.
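One guardrail against forbidden changes can live in the configuration itself: a lifecycle block that makes Terraform reject any plan that would destroy a protected object. A sketch applied to the virtual network from this lesson:

```hcl
# Sketch: protect a production object from accidental destruction
resource "catalystcenter_fabric_virtual_network" "VN1" {
  name = "VN1"

  lifecycle {
    prevent_destroy = true # plan/apply fail if this resource would be destroyed
  }
}
```

In CI this complements plan inspection: even if a review misses a destructive change, Terraform itself refuses to execute it until the lifecycle guard is deliberately removed.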
Verification Checklist
- Check 1: VN1 exists in Catalyst Center. Verify with terraform show, which should list catalystcenter_fabric_virtual_network.VN1 with name "VN1" and an ID.
- Check 2: Anycast gateway CORP exists. Verify with terraform show, which should show catalystcenter_anycast_gateway.CORP with vlan_id 201 and l3_virtual_network "VN1".
- Check 3: Plan/apply workflow is reproducible. Run terraform plan and confirm it reports no changes (Plan: 0 to add, 0 to change, 0 to destroy) when state and remote match.
Common Mistakes
| Symptom | Cause | Fix |
|---|---|---|
| terraform init fails with provider download error | Workstation/CI cannot reach the Terraform provider registry (or a configured mirror), often due to proxy or firewall rules | Ensure the machine running Terraform can reach the registry and check proxy settings. Note that init downloads plugins from the registry; reachability of https://10.1.1.1 matters later, at plan/apply time. |
| terraform plan shows attempt to recreate resources every run | The provider returns non-deterministic attributes or the state file is inconsistent | Verify provider implementation fields. Use terraform state to inspect and ensure local state matches remote. Use lifecycle ignore_changes for volatile attributes if appropriate. |
| Apply fails due to authentication | Incorrect username/password for Catalyst Center API | Confirm credentials; for lab use username "admin" and password "Lab@123", and in production use a secrets manager. |
| Anycast gateway not associated with VN1 | Reference was incorrect (string vs resource reference) | Ensure l3_virtual_network = catalystcenter_fabric_virtual_network.VN1.name (resource reference) so Terraform understands the dependency. |
Key Takeaways
- Terraform provides an orchestration layer: the provider maps declarative resources to API calls, giving reproducible, auditable network changes.
- Use the plan → apply workflow: plan produces a human-reviewable delta, apply executes exactly what was reviewed when using a saved plan.
- Model dependencies explicitly: reference resources (e.g., anycast gateway referencing VN1) so Terraform handles creation order.
- In production, integrate Terraform into CI/CD with secrets management and remote state locking to support team operations and prevent drift.
Tip: Think of Terraform as the conductor: you write the score (HCL resources), the provider is the orchestra (API implementation), and the audience (operations team or CI) reviews the performance using plan before the curtain rises (apply).