Network as Code: Cross-Technology ACI and FMC Automation
Introduction
Imagine managing a data center where your ACI fabric and Firepower Management Center configurations are always in sync, version-controlled, and deployed through automated pipelines with zero manual CLI intervention. That is the promise of Network as Code -- a methodology that treats your entire network infrastructure, from switching fabrics to security policies, as declarative code that can be validated, tested, and deployed just like software.
For years, network engineers have struggled with a set of persistent operational challenges: configurations that drift from their intended design, changes that consume too many resources and introduce too many errors, skill shortages that slow innovation, and reactive firefighting that leaves no time for strategic improvements. The traditional change management mindset treats change as risky and complex, relying on manual processes with limited feedback systems. Network as Code flips this paradigm entirely by embracing the DevOps mindset -- where failure is embraced as a learning opportunity, change is good, collaboration is active, accountability is empowered, and automation is the default.
This article provides a comprehensive exploration of how Network as Code principles can be applied to cross-technology automation spanning both Cisco ACI (Application Centric Infrastructure) and Cisco FMC (Firepower Management Center). You will learn the evolution of Infrastructure as Code, how Terraform providers and modules work for ACI and FMC, how YAML-based data models abstract complexity, and how CI/CD pipelines tie the entire workflow together. Whether you are preparing for automation-focused certification exams or looking to modernize your network operations, this guide delivers the deep technical coverage you need.
What Is Network as Code and Why Does It Matter?
Network as Code is a set of Terraform modules designed for managing and automating ACI and FMC environments following Infrastructure as Code (IaC) principles. At its core, it aims to reduce the time to value by lowering the barrier of entry to network orchestration through simplification, abstraction, and curated examples.
The framework allows users to instantiate and configure network infrastructure in minutes using an easy-to-use, opinionated data model. It takes away the complexity of having to deal with references, dependencies, or loops. Users can focus on describing the intended configuration while using a set of maintained and tested Terraform modules without the need to understand the low-level ACI object model or FMC object references.
The Challenges Network as Code Solves
Network operations teams consistently face a common set of pain points:
- Difficulty applying automation -- Teams know they should automate but struggle to get started or scale their efforts beyond simple scripts.
- Questioning network and security compliance -- Without a single source of truth, it is difficult to verify whether the network matches the intended design.
- Design-to-reality drift -- The network as designed rarely matches the network as deployed, and every network looks different even when the design should be identical.
- Resource-intensive changes with high error rates -- Manual configuration changes consume too many resources and introduce too many errors.
- Reactive operations -- Engineers spend more time reacting to network needs than proactively improving the infrastructure.
- Skill shortages and low innovation -- Traditional CLI-heavy workflows create bottlenecks around a few senior engineers.
Network as Code addresses every one of these challenges by providing a declarative, version-controlled, automated approach to network configuration management.
How Has Infrastructure as Code Evolved Over Time?
Understanding where Network as Code fits requires appreciating the evolution of IaC over the past two decades. The journey can be broken into three distinct eras:
| Era | Source | Configuration | Verification |
|---|---|---|---|
| ~20 years ago | Excel or source files on some storage | Configuration may or may not be correct and complete | None or manual |
| ~10 years ago | Templates and data separately stored and maintained | Human triggers a complex script that merges template + data and applies the configuration via an API, with no or limited modify/remove scenario support | Manual or semi-automated; changes trigger a review workflow |
| IaC today | Data model with data | Data-model-based tools perform pre-change validations and compute and apply minimal diffs for all add/modify/remove scenarios | Automated end-to-end tests and health checks; configuration is working, complete, versioned, and auditable |
The Early Days (~20 Years Ago)
In the earliest approaches, network configurations were tracked in spreadsheets or flat files stored on shared drives. There was no structured validation, no version control, and verification was either nonexistent or entirely manual. The configuration itself might or might not be correct and complete -- there was simply no systematic way to know.
The Scripting Era (~10 Years Ago)
The next generation introduced templating engines and basic API interaction. Engineers would maintain templates and data files separately, then use scripts to merge them and push configurations. However, these scripts struggled with modify and remove scenarios, often only handling the initial "add" case well. A human still had to trigger the process, and the merging logic grew increasingly complex as it tried to handle all possible add/modify/remove scenarios.
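The limitation can be seen in a few lines. This toy template-merge script (object names and template text are invented for illustration) happily generates new configuration, but it has no memory of what is already deployed, so modify and remove scenarios are out of its reach:

```python
# A minimal sketch of the scripting-era approach: a template merged with data,
# then pushed via an API. Names and template text are illustrative only.
TEMPLATE = (
    "object network {name}\n"
    " host {ip}\n"
)

def render(hosts):
    # Handles only the "add" scenario: it emits configuration for every host
    # in the data file, but knows nothing about what was previously deployed,
    # so it cannot compute modifications or removals on its own.
    return "".join(TEMPLATE.format(**h) for h in hosts)

print(render([{"name": "Gateway", "ip": "10.1.1.1"}]))
# object network Gateway
#  host 10.1.1.1
```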
Modern IaC (Today)
Today's approach, exemplified by Network as Code, uses data-model-based tools that perform pre-change validations and compute minimal diffs. The configuration is always working, complete, versioned, and auditable. Automated end-to-end tests and health checks replace manual verification. This is the paradigm that Network as Code for ACI and FMC fully embraces.
Pro Tip: The key differentiator of modern IaC is the shift from imperative scripts ("do these steps") to declarative data models ("this is what I want the end state to look like"). The tooling figures out how to get there.
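The shift can be illustrated with a toy reconciliation function (not Terraform's actual algorithm): given a desired state and an actual state keyed by object name, the add/modify/remove actions fall out of a simple comparison:

```python
# Toy illustration of declarative reconciliation: compare desired vs. actual
# keyed objects and derive add/modify/remove actions automatically.
def plan(desired: dict, actual: dict) -> dict:
    return {
        "add":    [k for k in desired if k not in actual],
        "modify": [k for k in desired if k in actual and desired[k] != actual[k]],
        "remove": [k for k in actual if k not in desired],
    }

desired = {"VRF1": {"tenant": "PROD"}, "VRF2": {"tenant": "PROD"}}
actual  = {"VRF1": {"tenant": "PROD"}, "VRF3": {"tenant": "PROD"}}
print(plan(desired, actual))
# {'add': ['VRF2'], 'modify': [], 'remove': ['VRF3']}
```

The engineer only ever edits the desired state; the tooling derives the steps.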
What Is the DevOps Mindset for Network as Code?
Adopting Network as Code is not just a technology shift -- it requires a fundamental change in how teams think about network changes. The reference framework draws a clear distinction between two mindsets:
Traditional Change Management Mindset
- Avoid failure at all costs
- Change is risky and complex
- Empowered accountability exists but within rigid processes
- Limited feedback systems
- Manual processes dominate
DevOps Mindset
- Embrace failure as a learning opportunity
- Change is good and expected
- Active collaboration across teams
- Empowered accountability with fast feedback loops
- Automation is the default approach
The DevOps mindset does not mean being reckless with production networks. Instead, it means building the automated safety nets -- validation, testing, rollback capabilities -- that make frequent, small changes safe and reliable. When your network configuration is fully version-controlled and every change goes through automated validation, the risk of any individual change drops dramatically.
This cultural shift is often the hardest part of adopting Network as Code. The technology is well-proven, but getting teams to trust automated pipelines over manual change windows requires experience, training, and incremental adoption.
How Does the Network as Code Pipeline Work?
The target Network as Code flow follows a structured CI/CD pipeline pattern that ensures every change is validated, tested, and deployed consistently. The pipeline consists of several distinct stages:
Stage 1: Declare
NetDevOps engineers declare the desired state of the network by writing or modifying YAML data files. These files describe what the ACI fabric and FMC configuration should look like, not how to get there. This is the "Services as Code" layer where engineers focus on intent.
Stage 2: Commit, Validate, and Build
Once changes are ready, the engineer pushes to a feature branch in Git. This triggers the first automated pipeline:
- Input validation and linting -- The YAML data is checked for syntax errors and validated against the schema definition.
- Pre-change analysis -- Semantic validation ensures the declared configuration is logically consistent.
- Terraform Plan -- Terraform computes the delta between the desired state and the current state, producing a detailed plan of what will change.
- Notification -- The team is notified of the planned changes for review.
Stage 3: Deploy
After the feature branch is merged to the main branch, a second pipeline triggers:
- Deployment -- Terraform Apply executes the planned changes against the live infrastructure.
- Delta analysis -- The actual changes are compared against the expected changes.
- Automated testing -- Post-deployment tests verify that the configuration was applied correctly and the network is functioning as expected.
- Test reports and notification -- Results are published and the team is notified of the deployment outcome.
Git push to feature branch
--> GitLab triggers pipeline
--> Input validation and linting
--> Pre-Change Analysis
--> Terraform Plan
--> Notification
Git merge to main branch
--> GitLab triggers pipeline
--> Deployment
--> Delta Analysis
--> Automated Testing
--> Test Reports
--> Notification
Pro Tip: The separation between the feature branch pipeline (plan only) and the main branch pipeline (apply) is critical. It ensures that no changes reach production without going through code review and automated validation first.
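The two pipelines could be sketched in a GitLab CI definition along these lines; the stage names, image, tool invocations, and rules here are illustrative assumptions, not the framework's published pipeline definition:

```yaml
# Illustrative .gitlab-ci.yml sketch; job names and commands are assumptions.
stages:
  - validate
  - plan
  - deploy
  - test

validate:
  stage: validate
  script:
    - yamllint data/
    - iac-validate data/          # schema + semantic validation (tool name assumed)

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=plan.tfplan

deploy:
  stage: deploy
  script:
    - terraform apply plan.tfplan
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

test:
  stage: test
  script:
    - pytest tests/               # automated post-deployment tests (illustrative)
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

The `rules` conditions implement the split described above: every branch gets validation and a plan, but only main deploys and tests.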
The Three-Step Developer Experience
From the individual engineer's perspective, the workflow simplifies to three steps:
- Declare -- Write YAML describing the desired configuration
- Commit, Validate, Build -- Push to Git and let the pipeline validate
- Deploy -- Merge to main and let the pipeline deploy
This simplicity is by design. The complexity of API interactions, dependency management, and state reconciliation is handled entirely by the Terraform modules and the pipeline infrastructure.
How Does Terraform Power Network as Code?
Terraform is the engine that drives the Network as Code framework. Understanding its core concepts is essential for working with ACI and FMC automation.
What Is Terraform?
Terraform is an open-source infrastructure provisioning tool. Key characteristics include:
- Ships as a single executable binary ready to run
- Available for Linux, Windows, and macOS
- Has zero server-side dependencies -- it is a strictly client-side architecture
- Uses a declarative language called HCL (HashiCorp Configuration Language)
- Is extended via plugins (providers) created by HashiCorp or IT vendors directly
The client-side architecture is particularly important for network automation. There is no server to install, no agent to deploy on network devices, and no additional infrastructure to maintain. Terraform runs on the engineer's workstation or in a CI/CD pipeline runner and communicates directly with the target APIs.
Terraform Providers
Providers are the plugins that allow Terraform to interact with specific platforms and APIs. There are three tiers of providers:
| Tier | Description |
|---|---|
| Official | Owned and maintained by HashiCorp |
| Partner | Owned and maintained by a technology company that maintains a direct partnership with HashiCorp |
| Community | Owned and maintained by individual contributors |
For Network as Code, the relevant providers are the FMC provider and the ACI provider, which enable Terraform to manage configurations on those platforms through their respective APIs.
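As a sketch, a root module declares both providers in a `required_providers` block. The FMC version constraint below matches the `terraform init` example later in this article; the ACI constraint is a placeholder, so treat exact versions as assumptions:

```hcl
terraform {
  required_providers {
    aci = {
      source  = "CiscoDevNet/aci"
      version = ">= 2.0.0"    # placeholder constraint
    }
    fmc = {
      source  = "CiscoDevNet/fmc"
      version = "2.0.0-beta1" # matches the init example in this article
    }
  }
}
```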
Desired State vs. Actual State
One of Terraform's most powerful concepts is its state management. Terraform maintains a state file that records the current (actual) state of the infrastructure it manages. The workflow operates as follows:
- Terraform Refresh -- Polls the actual infrastructure and updates the state file
- Terraform Plan -- Compares the desired state (from your HCL/YAML files) against the actual state (from the state file) and computes the delta
- Terraform Apply -- Pushes the delta changes to the infrastructure and updates the state file
Desired State (.tf files, YAML data)
        |
        v
 Terraform Plan  <-->  State File (.tfstate)
        |                    ^
        v                    |
 Terraform Apply ----------->|
        |
        v
 Infrastructure (ACI, FMC)
Pro Tip: Never manually change infrastructure that is managed by Terraform. If you make a manual change, Terraform will detect the drift on the next plan and attempt to revert it to match the declared desired state. Similarly, never manually modify the Terraform state file.
Network as Code for FMC Automation
The FMC (Firepower Management Center) provider for Terraform enables declarative management of firewall policies, objects, device configurations, and more. The Network as Code framework builds on top of this provider with a module that provides an inventory-driven, YAML-based approach.
Initializing the FMC Provider
Setting up the FMC Terraform provider begins with initialization. When you run terraform init, Terraform downloads and installs the specified provider:
terraform init
Initializing the backend...
Initializing provider plugins...
- Finding ciscodevnet/fmc versions matching "2.0.0-beta1"...
- Installing ciscodevnet/fmc v2.0.0-beta1...
- Installed ciscodevnet/fmc v2.0.0-beta1
Terraform has been successfully initialized!
You can verify which providers are configured for your project:
terraform providers
Providers required by configuration:
.
provider[registry.terraform.io/ciscodevnet/fmc] 2.0.0-beta1
The provider is sourced from the Terraform registry under ciscodevnet/fmc and falls under the Partner tier, meaning it is maintained by the technology vendor with a direct partnership with HashiCorp.
FMC Data Sources
Data sources allow Terraform to use information defined outside of Terraform. In the case of the FMC provider, these are objects that already exist on the device -- either predefined or previously configured. Data sources are read-only; they let you reference existing resources without managing them.
For example, to reference an existing host object named "Gateway":
data "fmc_host" "example" {
  name = "Gateway"
}
After running Terraform, you can inspect what was retrieved:
terraform state show data.fmc_host.example
# data.fmc_host.example:
data "fmc_host" "example" {
    id          = "005056B0-A31C-0ed3-0000-004294967348"
    ip          = "10.1.1.1"
    name        = "Gateway"
    overridable = false
    type        = "Host"
}
This shows that Terraform has retrieved the host object's ID, IP address, name, overridable flag, and type. These attributes can then be referenced in other resources using the standard Terraform interpolation syntax (e.g., data.fmc_host.example.id).
FMC Resources
Resources are the most important element in the Terraform language. Each resource block describes one or more infrastructure objects such as network objects, device configurations, or access control policy configurations.
Here is an example of creating a network group that references both an existing host object (via a data source) and a literal network:
resource "fmc_network_groups" "example" {
  items = {
    My_Net_Grp1 = {
      objects = [
        {
          id = data.fmc_host.example.id
        }
      ]
      literals = [
        {
          value = "10.1.1.0/24"
        }
      ]
    }
  }
}
When Terraform plans this resource, it shows exactly what will be created:
# fmc_network_groups.example will be created
+ resource "fmc_network_groups" "example" {
    + id    = (known after apply)
    + items = {
        + "My_Net_Grp1" = {
            + id       = (known after apply)
            + literals = [
                + {
                    + value = "10.1.1.0/24"
                  },
              ]
            + objects  = [
                + {
                    + id = "005056B0-A31C-0ed3-0000-004294967348"
                  },
              ]
          },
      }
  }
Notice how the data source reference (data.fmc_host.example.id) is automatically resolved to the actual object ID. This is the power of Terraform's dependency management -- you declare relationships, and Terraform handles the resolution.
The FMC Terraform Module
While providers and resources give you the building blocks, the FMC Terraform module provides a higher-level abstraction. The module supports an inventory-driven approach where a complete FMC configuration or parts of it are modeled in one or more YAML files or natively using Terraform variables.
Here is how the same network group from the previous example looks when expressed through the module's YAML data model:
existing.yaml (referencing pre-existing objects):
fmc:
  domains:
    - name: Global
      objects:
        hosts:
          - name: Gateway
demo.yaml (declaring desired configuration):
fmc:
  domains:
    - name: Global
      objects:
        network_groups:
          - name: My_Net_Grp1
            objects:
              - Gateway
            literals:
              - 10.1.1.0/24
Compare the YAML approach to the raw Terraform HCL resource definition. The YAML version is dramatically simpler: you reference the "Gateway" host by name rather than by ID, and the module handles the lookup and dependency wiring automatically. This is the abstraction that makes Network as Code accessible to network engineers who may not be Terraform experts.
Network as Code for ACI Automation
The ACI (Application Centric Infrastructure) side of Network as Code follows the same principles and architecture as the FMC module, providing a YAML-driven, declarative approach to managing the entire ACI fabric configuration.
ACI Data Model Structure
The ACI data model organizes configuration into logical YAML files, each covering a specific section of the fabric configuration:
$ tree -L 2
.
├── aci_data
│ ├── access_policies.yaml
│ ├── fabric_policies.yaml
│ ├── node_policies.yaml
│ ├── pod_policies.yaml
│ ├── node_1001.yaml
│ ├── node_101.yaml
│ ├── node_102.yaml
│ └── tenant_PROD.yaml
└── main.tf
This structure provides several advantages:
- Logical separation -- Access policies, fabric policies, node policies, and tenant configurations each live in their own files, making them easier to find, review, and modify.
- Per-node configuration -- Individual node files (e.g., node_101.yaml, node_102.yaml) allow node-specific settings without cluttering the shared configuration files.
- Tenant isolation -- Tenant-specific configuration is cleanly separated, making multi-tenant environments manageable.
A simple ACI tenant with VRFs declared in YAML looks like this:
apic:
  tenants:
    - name: CiscoLive
      vrfs:
        - name: VRF1
        - name: VRF2
This concise YAML declaration replaces what would otherwise require navigating multiple screens in the APIC GUI or writing complex API calls. The ACI Terraform module translates this YAML into the appropriate ACI object model calls, handling all the internal references and dependencies.
Cross-Technology Architecture
What makes the Network as Code framework particularly powerful is its cross-technology capability. The framework includes both an ACI Module and an FMC Module that can operate together or independently:
| Component | Purpose |
|---|---|
| ACI Module | Manages ACI fabric configuration (tenants, VRFs, BDs, EPGs, contracts, access/fabric/node/pod policies) |
| FMC Module | Manages FMC configuration (domains, objects, policies, devices, routing, system settings) |
| Semantic Validation | Pre-change validation layer that checks logical consistency |
| Automated Testing | Post-deployment testing that verifies the configuration works as intended |
The modules share the same declarative, YAML-driven philosophy, meaning that an engineer who learns the ACI data model can quickly become productive with the FMC data model and vice versa. The consistency in approach across technologies is a significant advantage for teams managing both data center fabric and security infrastructure.
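A root main.tf driving both technologies might look like the following sketch; the module source addresses and the yaml_directories input are assumptions based on the public registry modules and may differ between module versions:

```hcl
# Hypothetical cross-technology root module: each module reads its own
# directory of YAML data files. Source addresses and inputs are assumptions.
module "aci" {
  source = "netascode/nac-aci/aci"

  yaml_directories = ["aci_data"]
}

module "fmc" {
  source = "netascode/nac-fmc/fmc"

  yaml_directories = ["fmc_data"]
}
```

One `terraform plan` then covers fabric and firewall changes in a single, reviewable delta.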
Understanding the Data Model and Schema Validation
The data model is the heart of the Network as Code framework. It describes the structure and format of the input data that defines the desired state of the infrastructure.
FMC Data Model Structure
The FMC data model organizes configuration into purpose-specific YAML files:
$ tree -L 2
.
├── fmc_data
│ ├── devices.yaml
│ ├── policies.yaml
│ ├── objects.yaml
│ ├── routing.yaml
│ └── system.yaml
└── main.tf
This maps naturally to the FMC's own organizational structure:
| YAML File | FMC Scope |
|---|---|
| devices.yaml | Device management, interfaces, platform settings |
| policies.yaml | Access control policies, intrusion policies |
| objects.yaml | Hosts, networks, ports, network groups |
| routing.yaml | Static routes, gateway definitions |
| system.yaml | System-level settings |
Schema Validation
The framework includes a schema definition that validates input data before any changes are attempted. The schema is defined in a dedicated file:
.
├── data
│ ├── demo.yaml
│ └── existing.yaml
├── defaults
│ └── defaults.yaml
├── schemas
│ └── schema.yaml
└── main.tf
The schema defines the expected structure, data types, and requirements for every field. For example, a portion of the FMC schema:
fmc:
  domains: list(include("domains"), required=False)
  name: str(required=True)
  system: include("system", required=False)
This tells the validation engine that:
- The fmc object must have a name field (string, required)
- It may optionally have a domains list
- It may optionally include a system configuration section
Schema validation catches errors before they ever reach Terraform or the target devices, providing a fast feedback loop that saves time and prevents misconfigurations.
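The idea can be shown with a deliberately simplified validator in Python; the real framework drives validation from schema.yaml rather than hard-coded rules, so treat this as a mental model only:

```python
# Simplified illustration of pre-Terraform schema validation: check required
# fields and basic types before any API call is attempted. The actual
# framework reads these rules from schema.yaml; this toy hard-codes two.
def validate(data: dict) -> list:
    errors = []
    fmc = data.get("fmc", {})
    if not isinstance(fmc.get("name"), str):
        errors.append("fmc.name: required string is missing")
    if "domains" in fmc and not isinstance(fmc["domains"], list):
        errors.append("fmc.domains: must be a list")
    return errors

print(validate({"fmc": {"domains": {}}}))
# ['fmc.name: required string is missing', 'fmc.domains: must be a list']
print(validate({"fmc": {"name": "MyFMCName1"}}))  # []
```

Because the check runs before any plan, a typo fails in seconds instead of mid-deployment.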
Default Values
The framework supports a defaults mechanism that reduces repetition in the data model. Defaults are defined in a separate file:
defaults:
  fmc:
    domains:
      devices:
        devices:
          physical_interfaces:
            enabled: true
            mode: "NONE"
This example sets the default values for physical interfaces on FMC-managed devices: enabled is true and mode is "NONE". There is no need to repeat these values for every interface in every device definition. If the default is correct, simply omit the field; if a specific interface needs a different value, override it in the data file.
Pro Tip: Defaults should represent the most common configuration in your environment. Well-chosen defaults can reduce your YAML data files by 50% or more, making them easier to read, review, and maintain.
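Conceptually, the defaults mechanism behaves like a recursive merge in which explicit data wins over defaults. This Python sketch is a mental model of that behavior, not the framework's actual implementation:

```python
# Illustrative sketch of the defaults mechanism: values from a defaults tree
# fill in any keys the data file omits, while explicit values win.
def apply_defaults(data: dict, defaults: dict) -> dict:
    merged = dict(defaults)
    for key, value in data.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Recurse so nested sections merge key by key.
            merged[key] = apply_defaults(value, merged[key])
        else:
            # An explicit value in the data file overrides the default.
            merged[key] = value
    return merged

defaults  = {"enabled": True, "mode": "NONE"}
interface = {"mode": "PASSIVE"}
print(apply_defaults(interface, defaults))
# {'enabled': True, 'mode': 'PASSIVE'}
```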
How Does a Complete FMC Device Declaration Look?
Bringing together data models, schemas, and defaults, here is what a complete FMC device declaration looks like in the YAML data model:
fmc:
  name: MyFMCName1
  domains:
    - name: Global
      devices:
        devices:
          - name: MyDeviceName1
            host: 10.62.158.85
            registration_key: cisco123
            access_policy: MyAccessPolicyName1
            performance_tier: FTDv5
            ipv4_static_routes:
              - name: MyDefaultRoute1
                gateway: GW1
                interface: outside
                metric: 1
                selected_networks:
                  - any-ipv4
This single YAML block declares:
- The FMC instance by name (MyFMCName1)
- The domain (Global) under which the device is managed
- The device with its management IP (10.62.158.85), registration key, associated access policy, and performance tier
- A static route with its gateway, interface, metric, and destination network
Without Network as Code, achieving this same configuration would require:
- Logging into the FMC GUI
- Navigating to Devices, then Device Management
- Adding the device with registration details
- Configuring interfaces one by one (settings the framework's defaults would apply automatically)
- Creating and applying access policies
- Configuring static routes under the device's routing section
The declarative YAML approach consolidates all of these steps into a single, reviewable, version-controllable file.
FMC API and Object Structure
The FMC follows a hierarchical data structure that the data model mirrors:
| API Category | Sub-Categories |
|---|---|
| Domain | Policies, Devices, Objects, System Settings |
| Policies | Access Control, Intrusion |
| Devices | Standalone, Cluster |
| Objects | Hosts, Networks, Ports |
Understanding this hierarchy helps when structuring your YAML data files. Each level in the YAML corresponds to a level in the FMC's own object model, making the mapping intuitive for engineers already familiar with the FMC GUI or API.
Network as Code Best Practices for Cross-Technology Automation
Successfully implementing Network as Code across both ACI and FMC requires attention to several best practices drawn from operational experience.
Version Control Everything
Every piece of your network configuration -- YAML data files, Terraform modules, schemas, defaults, and pipeline definitions -- should live in a Git repository. This provides:
- Audit trail -- Every change is tracked with who made it, when, and why
- Rollback capability -- Any previous configuration state can be restored
- Code review -- Changes go through peer review before reaching production
- Branch-based workflows -- Feature branches allow experimentation without risk to production
Separate Data from Logic
The Network as Code framework enforces a clean separation between the data (YAML files describing desired state) and the logic (Terraform modules that implement the changes). This separation means:
- Network engineers can modify configurations by editing YAML without understanding Terraform internals
- Module maintainers can update the automation logic without touching operational data
- Testing can be performed independently on both layers
Use the Schema Validation Layer
Always run semantic and syntax validation before executing Terraform plan. The schema validation catches structural errors that would otherwise only surface during the Terraform apply phase, when they are more expensive to debug and potentially disruptive.
Manage Existing Resources Carefully
When adopting Network as Code in a brownfield environment (where configurations already exist), use the existing data files to reference pre-existing objects without attempting to manage them. This prevents Terraform from trying to recreate or modify objects that were configured outside of the automation framework.
Leverage Defaults Effectively
Define sensible defaults that reflect your organization's standard configurations. This reduces YAML file size, enforces consistency, and makes deviations from the standard explicitly visible in code review.
Pro Tip: Start your Network as Code adoption with a single, low-risk tenant or policy set. Prove the workflow end-to-end before expanding to critical production configurations. The investment in getting the pipeline, schemas, and defaults right pays dividends as you scale.
How Does Network as Code Compare to Traditional Approaches?
To fully appreciate the value of Network as Code for cross-technology ACI and FMC automation, it helps to see a direct comparison with the approaches it replaces:
| Aspect | Manual / GUI | Custom Scripts | Network as Code |
|---|---|---|---|
| Source of truth | Tribal knowledge, spreadsheets | Templates + data files | YAML data model in Git |
| Change process | Login, click, configure | Run script manually | Git commit triggers pipeline |
| Validation | Visual inspection | Limited or none | Schema validation + Terraform plan |
| Testing | Manual verification | Manual or basic scripts | Automated end-to-end tests |
| Rollback | Reconfigure manually | Revert and rerun script | Git revert + pipeline deploy |
| Auditability | Change tickets (if filed) | Script logs (if captured) | Full Git history |
| Cross-technology | Separate workflows per platform | Separate scripts per platform | Unified YAML model + modules |
| Skill requirement | Deep CLI/GUI knowledge per platform | Programming + platform APIs | YAML + Git basics |
| Error rate | High (human error) | Medium (script bugs) | Low (automated validation) |
| Scalability | Poor (linear with staff) | Medium (script maintenance) | High (data model scales) |
The most significant advantage in a cross-technology context is the unified approach. With Network as Code, managing ACI fabric policies and FMC security policies follows the same workflow, uses the same tools, and goes through the same pipeline. Engineers do not need to context-switch between entirely different automation approaches for different platforms.
Frequently Asked Questions
What is the difference between the Terraform provider and the Terraform module in Network as Code?
The Terraform provider is the plugin that enables Terraform to communicate with a specific platform's API (such as the FMC or ACI API). It provides the raw resources and data sources. The Terraform module is a higher-level abstraction built on top of the provider that accepts YAML data models and translates them into the appropriate provider resource calls. The module handles dependency resolution, default values, and the mapping between the simplified YAML syntax and the full API object model. Most Network as Code users interact primarily with the module's YAML data model rather than writing raw provider resources.
Do I need to be a Terraform expert to use Network as Code for ACI and FMC?
No. The Network as Code framework is specifically designed to lower the barrier of entry. Network engineers primarily interact with YAML data files that describe the desired configuration in a human-readable format. The Terraform modules handle the complexity of API interactions, state management, and dependency resolution. However, a basic understanding of Terraform concepts (init, plan, apply, state) is helpful for troubleshooting and understanding what the pipeline is doing. The framework abstracts away most of the Terraform-specific complexity, allowing engineers to focus on the network configuration itself rather than the automation tooling.
Can I use Network as Code in an environment where some configuration was already done manually?
Yes. The framework supports brownfield deployments through the concept of existing data. You can define objects that already exist on the FMC or ACI in a separate existing.yaml file. Terraform will read these objects as data sources (read-only) and allow you to reference them in your managed configuration without attempting to recreate or modify them. This is the recommended approach for gradual adoption: start by referencing existing objects and incrementally bring more of the configuration under Network as Code management.
What happens if someone makes a manual change to infrastructure managed by Network as Code?
Terraform tracks the actual state of managed infrastructure in its state file. On the next terraform plan, it will detect the drift between the desired state (your YAML data) and the actual state (the live infrastructure). The plan will show the changes needed to bring the infrastructure back into alignment with the declared desired state. This is why the framework strongly advises against making manual changes to managed infrastructure -- Terraform will detect and attempt to revert them. The proper workflow is to modify the YAML data files and let the pipeline deploy the changes.
How does schema validation work in the Network as Code framework?
The framework includes a schema definition file (schema.yaml) that specifies the expected structure, data types, required fields, and optional fields for every section of the data model. Before Terraform plan is executed, the pipeline runs input validation and linting against this schema. This catches errors like missing required fields, incorrect data types, or invalid structure before any API calls are made. Schema validation provides a fast feedback loop that prevents misconfigurations from reaching the Terraform planning stage, saving time and reducing risk.
Is Network as Code suitable for both small and large-scale deployments?
Yes. The data-model-driven approach scales naturally because adding new devices, tenants, policies, or objects is simply a matter of adding entries to YAML files. The Terraform modules compute minimal diffs, so even in large environments, only the changed portions of the configuration are modified. The YAML file structure supports splitting configuration across multiple files by logical function (access policies, fabric policies, node policies, tenant configurations), which keeps individual files manageable regardless of the overall scale. Small environments benefit from the consistency and auditability, while large environments benefit from the scalability and reduced operational overhead.
Conclusion
Network as Code represents a fundamental shift in how network and security infrastructure is managed. By applying Infrastructure as Code principles to cross-technology environments spanning both ACI and FMC, organizations can achieve consistency, auditability, and automation that manual processes simply cannot match.
The key takeaways from this deep dive are:
- Network as Code is a framework of Terraform modules that provides an opinionated, YAML-driven data model for managing ACI and FMC configurations declaratively.
- The DevOps mindset is as important as the technology -- embracing change, automation, and feedback systems is essential for success.
- Terraform providers and modules work together to abstract the complexity of platform APIs, letting engineers focus on what the network should look like rather than how to configure it.
- YAML data models with schema validation provide a human-readable, machine-parseable way to declare infrastructure state that can be version-controlled and reviewed like software code.
- CI/CD pipelines tie everything together, ensuring that every change goes through validation, planning, and testing before reaching production.
- Cross-technology consistency means the same workflow, tools, and principles apply whether you are managing ACI fabric policies or FMC security policies.
The journey to Network as Code starts with understanding the concepts, practicing with small deployments, and progressively expanding automation coverage. Whether you are managing a single ACI fabric with an FMC or operating a multi-site data center environment, the principles remain the same: declare your desired state, validate it automatically, and deploy it through a trusted pipeline.
To build the foundational skills needed for network automation and Infrastructure as Code, explore the automation and data center courses available at NHPREP. Hands-on practice with technologies like ACI, FMC, and Terraform is the fastest path to operational confidence with Network as Code.