AI-Generated Terraform/Ansible
Objective
In this lesson you will use AI-generated artifacts to produce Infrastructure-as-Code for network intent: a tenant with bridge domains and subnets, and supporting validation. You will convert natural-language requirements into declarative YAML (the canonical data model), generate Ansible and Terraform stubs, and run schema validation before any device changes. This matters in production because automated generation plus pre-change validation eliminates many human errors and greatly speeds safe change rollout — for example, provisioning a tenant and bridge domains in an SDN fabric with consistent defaults across hundreds of objects.
Topology (Quick Recap from Lesson 1)
Only topology elements relevant to this lesson are shown below. Refer to Lesson 1 for the full physical fabric.
ASCII Topology (showing exact IPs present in the reference material)
[APIC Controller]            [Leaf: d2d]
  mgmt: lab.nhprep.com         interfaces:
                                 lo0
                                 lo1
                                 lo250
                                 Eth1/1
                                 Eth1/2
                                 Eth1/3
Bridge Domain / Subnet assigned on tenant ABC:
- IPv4 SVI / subnet: 1.1.1.1/24
- IPv6 SVI / subnet: fd00:0:abcd:1::1/64
Fabric anycast for PIM/underlay:
- pim_anycast_ip: 10.250.250.1
Note: Interface names (lo0, lo1, lo250, Eth1/1, Eth1/2, Eth1/3) and IPs (1.1.1.1/24, fd00:0:abcd:1::1/64, 10.250.250.1) are taken exactly from the reference model.
Device Table
| Device | Role |
|---|---|
| APIC / Controller | Controller where the YAML model is applied |
| d2d (leaf switch) | Fabric leaf with listed interfaces (lo0, lo1, lo250, Eth1/1–Eth1/3) |
Key Concepts (before hands-on)
- Infrastructure as Code (IaC) data model: The canonical representation of intent is YAML. This single source describes tenants, bridge domains, subnets, VRFs, and per-object defaults. In production, IaC is the input to automation pipelines (Ansible/Terraform) and the basis for pre-change validation.
- Schema validation (pre-change checks): A tool such as the referenced nac-validate verifies that YAML conforms to the expected schema, catching syntax and semantic errors before devices are touched. This dramatically reduces misconfigurations — in large networks a single bad character can create outages.
- AI-generated code: AI is used to convert natural language into YAML/Terraform/Ansible artifacts. Think of AI as an assistant that maps intent -> validated model. Humans must still review and run schema validation.
- Pre- and post-change validation: The workflow is: (1) generate YAML, (2) run schema validation (nac-validate), (3) convert to automation (Ansible/Terraform), (4) run simulated/application validation. Packet-level behavior: when the bridge domain has a primary IP (1.1.1.1/24), the controller ensures the fabric programs the SVI and the associated ARP/ND behaviors per the model (e.g., igmp_querier settings).
- Consistent defaults and reuse: The YAML model supports defaults (e.g., name_suffix) to avoid name mismatches. In production this avoids subtle human errors like inconsistent suffixes on object names.
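To make the schema-validation concept concrete, here is a hypothetical, simplified pre-change check written with only the Python standard library. It is not the real nac-validate; the function name check_subnet and the field choices are illustrative.

```python
import ipaddress

def check_subnet(entry: dict) -> list[str]:
    """Return a list of validation errors for one subnet entry.

    Minimal pre-change check: the 'ip' field must parse as an IPv4 or
    IPv6 interface address (address + prefix length), and boolean
    flags must actually be booleans.
    """
    if "ip" not in entry:
        return ["missing required field: ip"]
    errors = []
    try:
        ipaddress.ip_interface(entry["ip"])
    except ValueError:
        errors.append(f"invalid ip format: {entry['ip']!r}")
    for flag in ("primary_ip", "igmp_querier", "public", "shared"):
        if flag in entry and not isinstance(entry[flag], bool):
            errors.append(f"{flag} must be a boolean")
    return errors

# Both subnets from this lesson's model pass the check.
print(check_subnet({"ip": "1.1.1.1/24", "primary_ip": True}))            # []
print(check_subnet({"ip": "fd00:0:abcd:1::1/64", "igmp_querier": True})) # []
print(check_subnet({"ip": "fd00:0:abcd:1:1/64"}))  # reports invalid ip format
```

A real validator adds cross-object checks (e.g., the vrf a bridge domain references must exist), but the principle is the same: reject the model before any device is touched.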
Step-by-step configuration
Step 1: Compose the natural-language intent for AI
What we are doing: Capture the human intent in plain English so an AI can generate a compliant YAML model. Clear intent reduces ambiguity and produces predictable IaC output. We include exact IPs and desired flags (e.g., unicast_routing, igmp_querier).
# Natural-language prompt provided to the AI assistant
# (This is the user intent; not a device CLI)
"Create an APIC tenant named 'ABC' with a bridge domain 'BD1' (alias 'ABC_BD1').
Add two subnets:
- IPv4 1.1.1.1/24, description 'My Desc', primary_ip true, public false, igmp_querier true
- IPv6 fd00:0:abcd:1::1/64, description 'My IPv6 Desc', primary_ip true, public true, igmp_querier true
Set vrf to 'VRF1' and unicast_routing true. Use defaults for name_suffix '_bd' and anycast gateway MAC '12:34:56:78:90:00'."
What just happened: You defined unambiguous requirements referencing exact IPs and object names. Good intents ensure generated YAML maps one-to-one with network objects and their expected operational behavior (e.g., a primary IP on the bridge domain produces SVI programming in the fabric).
Real-world note: In production, capture intent in templates (questions & choices) so AI outputs remain consistent and auditable.
Verify:
# Verify that the natural language prompt contains required fields
# (This is a manual verification step; the AI-generated YAML below must match intent)
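The templated-intent idea from the real-world note above can be sketched in plain Python: fixed wording with variable slots keeps every AI prompt consistent and auditable. The INTENT template and its field names are illustrative, not a real product template.

```python
from string import Template

# Hypothetical intent template: the wording is fixed, only the
# slots vary, so every generated prompt is consistent and auditable.
INTENT = Template(
    "Create an APIC tenant named '$tenant' with a bridge domain '$bd' "
    "(alias '$alias'). Add subnet $ip with igmp_querier $querier."
)

prompt = INTENT.substitute(
    tenant="ABC", bd="BD1", alias="ABC_BD1",
    ip="1.1.1.1/24", querier="true",
)
print(prompt)
```

Because substitute() raises KeyError on a missing slot, an incomplete intent fails loudly before it ever reaches the AI assistant.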
Step 2: Generate the YAML data model (AI output)
What we are doing: Produce the canonical YAML model describing the APIC tenant and bridge-domain exactly as required. This YAML is the artifact that automation (Ansible/Terraform) consumes.
apic:
  tenants:
    - name: ABC
      bridge_domains:
        - name: BD1
          alias: ABC_BD1
          mac: 00:22:BD:F8:19:FE
          virtual_mac: 00:23:BD:F8:19:12
          ep_move_detection: true
          arp_flooding: false
          ip_dataplane_learning: false
          limit_ip_learn_to_subnets: false
          multi_destination_flooding: encap-flood
          unknown_unicast: proxy
          unknown_ipv4_multicast: flood
          unknown_ipv6_multicast: flood
          unicast_routing: true
          clear_remote_mac_entries: true
          advertise_host_routes: true
          l3_multicast: false
          multicast_arp_drop: false
          vrf: VRF1
          nd_interface_policy: "ND_INTF_POL1"
          subnets:
            - ip: 1.1.1.1/24
              description: My Desc
              primary_ip: true
              public: false
              shared: false
              virtual: false
              igmp_querier: true
              nd_ra_prefix: true
              no_default_gateway: false
            - ip: fd00:0:abcd:1::1/64
              description: My IPv6 Desc
              primary_ip: true
              public: true
              shared: false
              virtual: false
              igmp_querier: true
              nd_ra_prefix: true
              no_default_gateway: false
What just happened: The AI created structured YAML mapping the natural-language elements to explicit configuration fields. Each YAML field (for example, unicast_routing: true) corresponds to controller behaviors and fabric programming (routing, IGMP querier behavior, ND advertisement). Having this model ensures deterministic automation downstream.
Real-world note: Always review AI output for exact fields and unintended defaults; automated validators catch many but human review is essential for policy compliance.
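One part of that human review can itself be automated: a quick coverage check that the generated model still contains every subnet the intent asked for. The sketch below is hypothetical (plain Python over an in-memory copy of the model; subnet_ips is an illustrative helper), not part of nac-validate.

```python
# Hypothetical review check: every subnet the intent requested must
# appear somewhere in the AI-generated model.
generated_model = {
    "apic": {"tenants": [{
        "name": "ABC",
        "bridge_domains": [{
            "name": "BD1",
            "vrf": "VRF1",
            "unicast_routing": True,
            "subnets": [
                {"ip": "1.1.1.1/24", "primary_ip": True, "public": False},
                {"ip": "fd00:0:abcd:1::1/64", "primary_ip": True, "public": True},
            ],
        }],
    }]}
}

def subnet_ips(model: dict) -> set[str]:
    """Collect every subnet IP declared anywhere in the model."""
    ips = set()
    for tenant in model["apic"]["tenants"]:
        for bd in tenant["bridge_domains"]:
            for subnet in bd.get("subnets", []):
                ips.add(subnet["ip"])
    return ips

required = {"1.1.1.1/24", "fd00:0:abcd:1::1/64"}
missing = required - subnet_ips(generated_model)
assert not missing, f"AI output dropped subnets: {missing}"
print("intent coverage OK")
```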
Verify:
# Run schema validation to ensure the generated YAML matches the expected model
nac-validate -s .schema ./data
Expected output:
nac-validate -s .schema ./data
Valid
Format: YAML
Schema: .schema
Compliance: All checks passed
Step 3: Generate Ansible and Terraform stubs from the YAML
What we are doing: Convert the canonical YAML into automation artifacts (Ansible playbook and a Terraform resource stub). These are the mechanisms that actually apply changes to controllers or devices in a pipeline.
# Ansible playbook stub (AI-generated)
- name: Apply APIC tenant ABC
  hosts: apic_controllers
  gather_facts: false
  vars:
    tenant_name: ABC
    bridge_domain: BD1
  tasks:
    - name: Push bridge domain configuration
      apic:
        host: "{{ inventory_hostname }}"
        username: admin
        password: Lab@123
        tenant: "{{ tenant_name }}"
        config: "{{ lookup('file', 'apic_abc_bd1.yaml') }}"
# Terraform stub (AI-generated)
provider "apic" {
  # provider configuration would be filled in by the operator
  username = "admin"
  password = "Lab@123"
  endpoint = "https://lab.nhprep.com"
}

resource "apic_tenant" "ABC" {
  name = "ABC"
  bridge_domains = [
    {
      name  = "BD1"
      alias = "ABC_BD1"
      vrf   = "VRF1"
      subnets = [
        { ip = "1.1.1.1/24", primary_ip = true, igmp_querier = true },
        { ip = "fd00:0:abcd:1::1/64", primary_ip = true, igmp_querier = true }
      ]
    }
  ]
}
What just happened: AI translated the data model into executable stubs that can be placed in a CI/CD pipeline. Ansible tasks call into an APIC module (placeholder), and Terraform declares resources. These stubs make the human intent repeatable and auditable.
Real-world note: In production, provider credentials and endpoints are stored in vaults; never commit plaintext secrets. For labs we used Lab@123 and lab.nhprep.com as placeholders.
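A common pattern for keeping secrets out of committed stubs is to resolve them from the environment at run time, with a vault or CI secret store injecting the variables. A minimal sketch, assuming variables named APIC_USERNAME and APIC_PASSWORD (illustrative names, not a standard):

```python
import os

def apic_credentials() -> tuple[str, str]:
    """Resolve controller credentials from the environment.

    APIC_USERNAME / APIC_PASSWORD are illustrative variable names; in
    a real pipeline a vault or CI secret store would inject them, and
    nothing sensitive would live in the repository.
    """
    try:
        return os.environ["APIC_USERNAME"], os.environ["APIC_PASSWORD"]
    except KeyError as exc:
        raise RuntimeError(f"missing credential variable: {exc}") from exc

# Lab-only demo: seed the environment with the lab placeholders,
# then resolve them the way a pipeline step would.
os.environ.setdefault("APIC_USERNAME", "admin")
os.environ.setdefault("APIC_PASSWORD", "Lab@123")
print(apic_credentials()[0])
```

The same idea applies to the Terraform stub: read the provider credentials from variables marked sensitive rather than hard-coding them.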
Verify:
# Lint or dry-run the Ansible and Terraform artifacts where possible.
# For this lesson we run schema validation on the YAML input used by both artifacts.
nac-validate -s .schema ./data
Expected output:
nac-validate -s .schema ./data
Valid
Format: YAML
Schema: .schema
Compliance: All checks passed
Step 4: Pre-change validation and default enforcement
What we are doing: Ensure defaults (e.g., name_suffix, anycast MAC) are consistently applied and validate the configuration once more before committing to the controller. This reduces drift and naming anomalies.
# Example defaults.yaml (AI-generated using defaults from the reference)
defaults:
  apic:
    tenants:
      bridge_domains:
        name_suffix: _bd
        unicast_routing: false
  d2d:
    anycast_gateway_mac: 12:34:56:78:90:00
What just happened: Defaults cascade into object creation. For example, if AI generates a BD named "BD1" and defaults append "_bd", the operator sees "BD1_bd" consistently across objects. Consistent naming is crucial in large-scale automation to avoid referencing mismatches.
Real-world note: Defaults can be centrally managed and overridden only when explicitly required; this reduces manual suffix errors.
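The cascading-defaults behavior can be sketched as a simple merge. The semantics shown here (explicit per-object values win, defaults fill the gaps, name_suffix is appended to the object name) are assumptions for illustration:

```python
# Minimal sketch of cascading defaults. Assumed merge semantics:
# explicit values override defaults, and name_suffix is appended
# to the object's name unless it is already present.
DEFAULTS = {"name_suffix": "_bd", "unicast_routing": False}

def apply_defaults(bd: dict, defaults: dict) -> dict:
    merged = {k: v for k, v in defaults.items() if k != "name_suffix"}
    merged.update(bd)  # explicit per-object values win
    suffix = defaults.get("name_suffix", "")
    if suffix and not merged["name"].endswith(suffix):
        merged["name"] += suffix
    return merged

bd = apply_defaults({"name": "BD1", "unicast_routing": True}, DEFAULTS)
print(bd["name"])             # BD1_bd
print(bd["unicast_routing"])  # True
```

Because the suffix is applied in one place, every object that references the bridge domain sees the same "BD1_bd" name, which is exactly the mismatch class this mechanism prevents.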
Verify:
nac-validate -s .schema ./data
Expected output:
nac-validate -s .schema ./data
Valid
Format: YAML
Schema: .schema
Compliance: All checks passed
Step 5: Simulate a post-change state and re-validate
What we are doing: After a simulated change (for example, marking the IPv4 subnet as public/shared or toggling unicast_routing), re-run validation to catch policy violations or unintended consequences.
# Simulated post-change YAML excerpt (from the reference "post-change validation" example)
apic:
  tenants:
    - name: ABC
      bridge_domains:
        - name: BD1
          alias: ABC_BD1
          unicast_routing: true
          vrf: VRF1
          subnets:
            - ip: 1.1.1.1/24
              description: My Desc
              primary_ip: true
              public: true
              shared: true
              virtual: false
              igmp_querier: true
            - ip: fd00:0:abcd:1::1/64
              description: My IPv6 Desc
              primary_ip: true
              public: true
What just happened: The YAML now represents a post-change state where the IPv4 subnet is set public/shared. Running validation here checks whether such a change violates policy (for example, if shared subnets are only allowed under specific conditions).
Real-world note: Pre-deploy checks should include custom rules (e.g., disallow shared:true for certain VRFs). These rules can be implemented as custom Python validators in the validation pipeline.
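A custom validator of the kind that note describes might look like the minimal Python sketch below. The rule, the allow-listed VRF set, and the function name are all hypothetical; a real pipeline would load the model from the validated YAML rather than an inline dict.

```python
# Hypothetical custom policy rule: shared subnets are only permitted
# in an allow-listed set of VRFs. SHARED_ALLOWED_VRFS is illustrative.
SHARED_ALLOWED_VRFS = {"VRF_SHARED_SERVICES"}

def rule_no_shared_subnets(bd: dict) -> list[str]:
    """Return policy violations for one bridge domain."""
    if bd.get("vrf") in SHARED_ALLOWED_VRFS:
        return []
    violations = []
    for subnet in bd.get("subnets", []):
        if subnet.get("shared"):
            violations.append(
                f"subnet {subnet['ip']}: shared=true not allowed in "
                f"vrf {bd.get('vrf')!r}"
            )
    return violations

post_change_bd = {
    "name": "BD1",
    "vrf": "VRF1",
    "subnets": [{"ip": "1.1.1.1/24", "shared": True},
                {"ip": "fd00:0:abcd:1::1/64", "shared": False}],
}
for violation in rule_no_shared_subnets(post_change_bd):
    print(violation)  # flags the IPv4 subnet, which is shared in VRF1
```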
Verify:
nac-validate -s .schema ./data
Expected output (example if the model complies):
nac-validate -s .schema ./data
Valid
Format: YAML
Schema: .schema
Compliance: All checks passed
(If a custom policy is violated, output would include explicit errors pointing to the offending fields and suggested fixes.)
Verification Checklist
- Check 1: The generated YAML contains tenant ABC, BD1, with IPv4 1.1.1.1/24 and IPv6 fd00:0:abcd:1::1/64 — verify by opening the YAML file and inspecting the subnets section.
- Check 2: Schema validation returns "Valid" — verify by running: nac-validate -s .schema ./data. Expected: "Valid" and "All checks passed".
- Check 3: Ansible/Terraform stubs reference lab.nhprep.com and use Lab@123 for lab credentials in placeholders — verify by inspecting the generated stubs.
Common Mistakes
| Symptom | Cause | Fix |
|---|---|---|
| Validation fails with schema error on "subnets.ip" | The generated YAML used an incorrect IP format or typo (e.g., missing colon in IPv6) | Correct the IP line to exact format: fd00:0:abcd:1::1/64 and re-run nac-validate |
| AI-generated name mismatch (e.g., 'BD1' vs 'BD1_bd' referenced) | Defaults (name_suffix) were not applied uniformly | Ensure defaults.yaml is applied before generation or explicitly include suffixed names in AI prompt |
| Secrets are committed in automation stubs | Generated playbooks/terraform included plaintext credentials | Replace credentials with references to vaults/variables and rotate secrets; do not commit plaintext Lab@123 into repos |
| Post-change policy violation (e.g., shared:true disallowed) | Custom policy rules were not included in the pre-change validation | Add custom validators (python rules) to the validation pipeline to catch these before deployment |
Key Takeaways
- Use AI to translate clear, unambiguous natural-language intent into a canonical YAML model; this provides a single source of truth for automation.
- Always run schema validation (nac-validate) before applying changes to production — it catches syntax and semantic errors and enforces policy compliance.
- Keep defaults centralized (e.g., name_suffix, anycast MAC) so generated artifacts are consistent and human-error-prone naming differences are minimized.
- AI-generated Ansible/Terraform stubs accelerate delivery, but operators must review outputs and use secret management (vaults) and custom validators in CI/CD for safe deployment.
Tip: Treat AI output as a first pass. The model + schema validator are your safety net; automation is powerful only when paired with rigorous pre- and post-change validation.