CCIE Security · 23 min read

ACI Multi-Site Architecture and Deployment Guide

Admin
March 26, 2026
ACI multi-site · Nexus Dashboard Orchestrator · CCIE Enterprise · data center networking · VXLAN EVPN

ACI Multi-Site Architecture and Deployment

Introduction

Imagine you are responsible for managing multiple data center fabrics spread across different geographic locations. Each fabric runs its own independent ACI domain, yet your organization demands seamless policy enforcement, workload mobility, and disaster recovery capabilities across all of them. How do you tie these separate fabrics together without creating a single, brittle control plane that collapses under the weight of its own complexity? The answer lies in ACI multi-site architecture -- a design model purpose-built for loosely coupled data centers that need unified orchestration without sacrificing autonomy.

ACI multi-site has become a cornerstone of modern enterprise and service provider data center designs. It enables organizations to interconnect multiple ACI fabrics, each operating with its own independent APIC cluster, while a centralized Nexus Dashboard Orchestrator manages cross-fabric policy definitions. This architecture addresses critical use cases including data center interconnect (DCI), disaster recovery, IP mobility, compartmentation for scale, and centralized orchestration for autonomous fabrics.

In this article, we will take a deep dive into the ACI multi-site architecture and its deployment considerations. We will cover the evolution of ACI architectural options, the role of the Inter-Site Network (ISN), how the Nexus Dashboard Orchestrator fits into the picture, control and data plane mechanics, MTU considerations (including the TCP-MSS adjust functionality), migration from legacy Multi-Site Orchestrator to NDO, and the most common use cases driving adoption. Whether you are preparing for a CCIE Enterprise certification or planning a real-world multi-fabric deployment, this guide will give you the technical depth you need.

What Are the ACI Architectural Options?

Before diving into multi-site specifics, it is essential to understand where ACI multi-site fits within the broader spectrum of ACI architectural options. The ACI fabric and policy domain has evolved through several deployment models, each addressing different requirements for scale, latency, and administrative separation.

Single Fabric, Single Controller Domain

The first three architectural models operate under a single controller domain:

  1. ACI Single Pod Fabric -- This is the foundational deployment model. A single ACI pod operates under a single APIC cluster. All leaf and spine nodes reside within one fabric, sharing a unified policy domain and control plane. This model is ideal for single data center deployments where all workloads are co-located.

  2. ACI Multi-Pod Fabric -- When a single controller domain needs to span multiple locations or pods, Multi-Pod extends the fabric across an Inter-Pod Network (IPN). Multiple pods connect via MP-BGP EVPN, but they all remain under the governance of a single APIC cluster. This model works well when low latency exists between pods and a single administrative domain is acceptable.

  3. ACI Remote Leaf -- This extends the single controller domain to remote locations by placing leaf nodes at a remote site while the APIC cluster and spine nodes remain at the main data center. Remote leaf nodes connect back to the main fabric through a routed network.

Multiple Fabrics, Multiple Controller Domains

  1. ACI Multi-Site -- This is the focus of this article. Multiple independent ACI fabrics, each with its own APIC cluster, are interconnected through an Inter-Site Network (ISN). A Nexus Dashboard Orchestrator provides centralized policy management across all fabrics while preserving each fabric's autonomy.

Architecture | Controller Domain | Latency Sensitivity | Use Case
ACI Single Pod | Single APIC cluster | N/A (single site) | Single data center
ACI Multi-Pod | Single APIC cluster | Latency-sensitive | Campus/metro DCs
ACI Remote Leaf | Single APIC cluster | Moderate | Branch/remote sites
ACI Multi-Site | Multiple APIC clusters | No latency limitation | Geographically distributed DCs

The key differentiator for ACI multi-site is that there is no latency limitation between fabrics. Each fabric operates independently with its own APIC cluster, making this the ideal architecture for loosely coupled data centers where geographic distance, administrative boundaries, or fault isolation requirements preclude a single controller domain.

Pro Tip: Choosing between Multi-Pod and Multi-Site depends on your latency requirements and administrative model. If you need a single controller domain and have low-latency connectivity between sites, Multi-Pod may suffice. If you need independent fault domains with no latency constraints, Multi-Site is the right choice.

How Does ACI Multi-Site Architecture Work?

The ACI multi-site architecture is built on the principle of maintaining separate, autonomous ACI fabrics while providing a unified orchestration layer for cross-fabric policy definition and enforcement. Let us break down the core components and how they interact.

Separate Fabrics with Independent APIC Clusters

Each site in an ACI multi-site deployment runs its own ACI fabric with its own APIC cluster. This means that each site has full autonomy over its local configuration, fault handling, and operations. If one site's APIC cluster experiences an issue, the other sites continue operating independently. This separation provides a natural fault isolation boundary that is critical for disaster recovery and high availability.

Nexus Dashboard Orchestrator (NDO)

Sitting above the individual APIC clusters is the Nexus Dashboard Orchestrator. NDO pushes cross-fabric configuration to multiple APIC clusters, providing scoping of all configuration changes. It communicates with each site's APIC cluster via REST API, ensuring that policies defined at the orchestrator level are consistently applied across all participating fabrics.
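To make the orchestration flow concrete, the following Python sketch builds a schema payload associating one template with several sites and notes where the authenticated REST call to NDO would go. The base URL, endpoint path, and payload fields are illustrative assumptions for this sketch, not the documented NDO API schema.

```python
import json

# Hypothetical sketch only: the base URL, endpoint path, and payload
# fields are illustrative assumptions, not the official NDO schema.
NDO_BASE = "https://ndo.example.com"

def build_schema_payload(schema_name, template_name, site_ids):
    """Build a schema body that associates one template with several sites."""
    return {
        "displayName": schema_name,
        "templates": [{"name": template_name, "displayName": template_name}],
        "sites": [{"siteId": s, "templateName": template_name} for s in site_ids],
    }

payload = build_schema_payload("prod-schema", "web-tier", ["site-1", "site-2"])
# A real push would use an authenticated session against each site's APIC
# via NDO, e.g. (not executed here):
# requests.post(f"{NDO_BASE}/mso/api/v1/schemas", json=payload, ...)
print(json.dumps(payload, indent=2))
```

The point of the sketch is the fan-out: one policy definition at the orchestrator level becomes identical, consistently scoped configuration on every participating APIC cluster.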

MP-BGP EVPN Control Plane

The control plane between sites uses MP-BGP EVPN. This is the same protocol family used within individual ACI fabrics, extended across the Inter-Site Network to exchange endpoint reachability information between sites. The spine nodes at each site establish MP-BGP EVPN peering relationships across the ISN to share routing and endpoint information.

VXLAN Data Plane

The data plane uses VXLAN encapsulation for traffic flowing between sites. When an endpoint at Site 1 needs to communicate with an endpoint at Site 2, the traffic is VXLAN-encapsulated by the local spine node, traverses the ISN, and is decapsulated by the remote spine node at Site 2. This provides end-to-end policy definition and enforcement across sites.

Topology Overview

In a typical ACI multi-site topology, the components are arranged as follows:

  • Site 1 and Site 2 each have their own ACI fabric with leaf and spine nodes, connected to an L3 network
  • An Inter-Site Network (ISN) provides Layer 3 connectivity between the spine nodes of each site
  • The Nexus Dashboard Orchestrator connects to each site's APIC cluster via REST API and GUI
  • MP-BGP EVPN sessions run between spine nodes across the ISN
  • VXLAN tunnels carry data plane traffic between sites

This architecture delivers end-to-end policy definition and enforcement while keeping each fabric operationally independent.

What Are the Most Common ACI Multi-Site Use Cases?

Understanding the use cases that drive ACI multi-site adoption helps you determine whether this architecture fits your organization's needs. There are three primary categories of use cases.

Data Center Interconnect (DCI)

The most traditional use case for ACI multi-site is extending connectivity and policy between loosely coupled data center sites. This directly supports:

  • Disaster Recovery (DR): By stretching bridge domains and EPGs across sites, workloads can fail over from one data center to another while maintaining their network identity and policy enforcement
  • IP Mobility: Virtual machines or containers can migrate between sites without requiring IP address changes, as the ACI fabric provides Layer 2 extension across the ISN with VXLAN encapsulation

Compartmentation and Scale

ACI multi-site is not limited to geographically separated data centers. It can also be used to build multiple fabrics inside a single data center for compartmentation and scale purposes:

  • Optimized and controlled Layer 2/Layer 3 connectivity, including optimized and controlled BUM (Broadcast, Unknown unicast, Multicast) forwarding
  • Scale out the total number of leaf nodes beyond what a single fabric can support -- this is particularly relevant for service provider deployments where leaf node counts may exceed single-fabric limits

Service Provider 5G Telco DC/Cloud

For service provider environments, ACI multi-site enables:

  • Centralized DC orchestration for autonomous fabrics -- each cell site or edge location runs its own autonomous fabric, while NDO provides centralized management
  • Optional SR-MPLS/MPLS handoff on border leaf nodes for integration with the service provider transport network

Pro Tip: The compartmentation use case is often overlooked but can be extremely valuable in large enterprise data centers. If your single fabric is approaching scale limits or you need strict fault isolation between different business units within the same physical data center, ACI multi-site gives you that separation while maintaining unified orchestration.

Inter-Site Network (ISN) Deployment Considerations

The Inter-Site Network is the Layer 3 transport that connects the spine nodes of different ACI fabrics. While it is a critical component of the ACI multi-site architecture, the ISN is not managed by APIC or NDO. It must be independently configured as a day-0 activity before the multi-site fabric can become operational.

ISN Functional Requirements

The ISN has several specific functional requirements that must be met for ACI multi-site to operate correctly:

  1. Routing Protocol Peering: The ISN must support OSPF or BGP to peer with the spine nodes and exchange TEP (Tunnel Endpoint) address reachability. BGP peering with spine nodes requires ACI release 5.2(1) or later. The IP topology within the ISN can be arbitrary -- it is not mandatory to connect all spine nodes to the ISN.

  2. Sub-interface with VLAN Tag 4: The connections from spine nodes toward the ISN must use sub-interfaces with VLAN tag 4. This is a fixed requirement of the ACI multi-site design.

  3. No Multicast Requirement: Unlike some other fabric interconnect technologies, ACI multi-site does not require multicast support in the ISN for BUM traffic forwarding across sites. BUM traffic is handled through ingress replication, simplifying ISN design significantly.

  4. Increased MTU Support: The ISN must support an increased end-to-end MTU of at least 50 to 54 extra bytes beyond the standard endpoint MTU to accommodate VXLAN encapsulation overhead.

ISN Requirement | Details
Routing Protocol | OSPF or BGP (BGP requires ACI 5.2(1)+)
Sub-interface VLAN | VLAN tag 4 (mandatory)
Multicast | Not required
MTU Overhead | At least 50-54 bytes additional
Management | Independent (not managed by APIC/NDO)

Pro Tip: The fact that the ISN does not require multicast is a significant operational advantage. Many enterprise WAN and DCI networks do not support multicast, so this design choice makes ACI multi-site deployable over a much wider range of transport networks.

ACI Multi-Site and MTU Size Considerations

MTU configuration is one of the most critical -- and most commonly misconfigured -- aspects of an ACI multi-site deployment. There are two distinct types of MTU to consider, and confusing them can lead to connectivity failures, silent packet drops, or degraded performance.

Data-Plane MTU

The data-plane MTU refers to the MTU of traffic generated by endpoints (servers, routers, service nodes, etc.) connected to ACI leaf nodes. When this traffic needs to traverse the ISN between sites, it gets VXLAN-encapsulated, which adds 50 bytes of overhead. Therefore, if your endpoints are generating traffic with a 1500-byte MTU, the ISN must support at least 1550 bytes to carry the encapsulated packets without fragmentation.
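The sizing arithmetic is simple enough to encode directly. This small Python helper is only an illustration, with the 50-byte VXLAN overhead figure taken from this article:

```python
VXLAN_OVERHEAD = 50  # bytes added by VXLAN encapsulation between sites

def required_isn_mtu(endpoint_mtu: int) -> int:
    """Minimum ISN MTU needed to carry endpoint traffic without fragmentation."""
    return endpoint_mtu + VXLAN_OVERHEAD

print(required_isn_mtu(1500))  # -> 1550
print(required_isn_mtu(9000))  # -> 9050 for jumbo-frame endpoints
```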

Control-Plane MTU

The control-plane MTU applies to CPU-generated traffic such as the MP-BGP sessions that run between sites. Importantly, control plane traffic is not VXLAN-encapsulated, so it does not incur the 50-byte overhead. The default control-plane MTU value is 9000 bytes, but this can be tuned on APIC to match the maximum MTU value supported in the ISN.

MTU Type | Description | VXLAN Overhead | Default Value
Data-Plane MTU | Endpoint-generated traffic | +50 bytes | Matches endpoint MTU
Control-Plane MTU | CPU-generated traffic (MP-BGP) | None | 9000 bytes (tunable)

What If the ISN Supports Only 1500-Byte MTU?

This is a common real-world challenge. Many enterprise WAN links and DCI circuits are limited to a 1500-byte MTU. Since VXLAN encapsulation adds 50 bytes of overhead, endpoint traffic with a 1500-byte MTU would produce 1550-byte packets on the ISN -- exceeding the 1500-byte limit and causing drops.

Prior to ACI release 6.0(3)F, the primary workaround was to reduce endpoint MTU sizes to accommodate the VXLAN overhead. This was often impractical in environments with thousands of endpoints or where applications required full 1500-byte MTU.

TCP-MSS Adjust Functionality in ACI Multi-Site

ACI release 6.0(3)F introduced the TCP-MSS adjust functionality, which elegantly solves the ISN MTU limitation problem for TCP-based traffic. This feature dynamically adjusts the TCP Maximum Segment Size (MSS) on SYN and SYN/ACK packets to ensure that TCP data packets fit within the ISN MTU after VXLAN encapsulation.

How TCP-MSS Adjust Works

The TCP-MSS adjust policy is enabled at the System Settings level on APIC. It supports different TCP-MSS adjust settings for IPv4 and IPv6, and provides three scope options:

  1. Global: Applies to all flows including Multi-Pod, Multi-Site, and Remote Leaf traffic
  2. RL and Msite: Applies specifically to Multi-Site and Remote Leaf flows only
  3. RL Only: Applies only to Remote Leaf flows

The supported TCP-MSS values range from 688 to 9104 bytes.
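To make the scope options concrete, here is a small Python model of which flow types each scope covers, based on the descriptions above. The flow-type strings are illustrative labels for this sketch, not APIC object names:

```python
from enum import Enum

class MssScope(Enum):
    GLOBAL = "Global"              # Multi-Pod, Multi-Site, and Remote Leaf flows
    RL_AND_MSITE = "RL and Msite"  # Multi-Site and Remote Leaf flows only
    RL_ONLY = "RL Only"            # Remote Leaf flows only

# Illustrative mapping of scope -> flow types it covers
FLOWS_BY_SCOPE = {
    MssScope.GLOBAL: {"multipod", "multisite", "remote-leaf"},
    MssScope.RL_AND_MSITE: {"multisite", "remote-leaf"},
    MssScope.RL_ONLY: {"remote-leaf"},
}

def mss_adjust_applies(scope: MssScope, flow: str) -> bool:
    """True if the given flow type is covered by the configured scope."""
    return flow in FLOWS_BY_SCOPE[scope]

print(mss_adjust_applies(MssScope.RL_AND_MSITE, "multisite"))  # True
print(mss_adjust_applies(MssScope.RL_ONLY, "multipod"))        # False
```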

SYN Packet Processing

Let us walk through how TCP-MSS adjust handles a TCP connection establishment between endpoints at two different sites connected by an ISN with a 1500-byte MTU.

Consider the following scenario:

  • Site 1 has an O-UTEP (Overlay Unicast TEP) address of 172.16.100.1 and a TEP pool of 10.1.0.0/16
  • Site 2 has an O-UTEP address of 172.16.200.1 and a TEP pool of 10.2.0.0/16
  • The ISN MTU is 1500 bytes
  • Host at Site 1 has an MTU of 1500 bytes; host at Site 2 has an MTU of 9000 bytes

When an endpoint at Site 1 sends a TCP SYN packet with MSS=1460 bytes (calculated as MTU minus IP header minus TCP header: 1500 - 20 - 20 = 1460):

  1. The SYN packet is VXLAN-encapsulated and sent across the ISN to Site 2
  2. At the egress leaf node at Site 2, the leaf examines the source IP in the VXLAN header
  3. The source IP is 172.16.100.1 (Site 1's O-UTEP), which is not part of Site 2's local TEP pool (10.2.0.0/16)
  4. Because the source IP is from a remote site, the leaf performs TCP-MSS adjustment
  5. The SYN packet is punted to the CPU, and the MSS value is adjusted down to 1400 bytes
! TCP-MSS adjust is configured at System Settings level
! Supported values: 688-9104 bytes
! Three scope options: Global, RL and Msite, RL Only

SYN/ACK Packet Processing

The same logic applies in the reverse direction for the SYN/ACK packet:

  1. The host at Site 2 (MTU 9000 bytes) responds with a TCP SYN/ACK with MSS=8960 bytes
  2. This packet is VXLAN-encapsulated and sent to Site 1
  3. At the egress leaf node at Site 1, the leaf examines the source IP in the VXLAN header
  4. The source IP is 172.16.200.1 (Site 2's O-UTEP), which is not part of Site 1's local TEP pool (10.1.0.0/16)
  5. The MSS value is adjusted down to 1400 bytes

Result: Properly Sized Data Packets

After the MSS negotiation completes, both endpoints generate TCP data packets of at most 1440 bytes (1400 MSS + 20-byte IP header + 20-byte TCP header) for that connection, regardless of their local host MTU settings. When these 1440-byte packets are VXLAN-encapsulated (adding 50 bytes), the resulting 1490-byte packets fit comfortably within the 1500-byte ISN MTU.

Key characteristics of the TCP-MSS adjust functionality:

  • TCP-MSS adjust is always performed on the egress leaf node
  • It adjusts TCP MSS values on both SYN and SYN/ACK packets
  • The leaf checks the source IP in the VXLAN header to determine if the traffic is from a remote site
  • If the source IP is not part of the fabric's internal TEP pool, MSS adjustment is performed
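The egress-leaf decision described above can be sketched with Python's ipaddress module. The pool and addresses match the example in this section, and the configured MSS of 1400 is the value used in the walkthrough; the logic is a conceptual model of the leaf's behavior, not actual switch code:

```python
import ipaddress

LOCAL_TEP_POOL = ipaddress.ip_network("10.2.0.0/16")  # Site 2's internal TEP pool
MSS_MIN, MSS_MAX = 688, 9104  # supported TCP-MSS adjust range
CONFIGURED_MSS = 1400

def is_remote_site_traffic(vxlan_src_ip: str) -> bool:
    """Egress-leaf check: a VXLAN source outside the local TEP pool is remote."""
    return ipaddress.ip_address(vxlan_src_ip) not in LOCAL_TEP_POOL

def adjusted_mss(advertised_mss: int) -> int:
    """Clamp the advertised MSS down to the configured (and supported) value."""
    target = max(MSS_MIN, min(CONFIGURED_MSS, MSS_MAX))
    return min(advertised_mss, target)  # only ever lowers the advertised MSS

print(is_remote_site_traffic("172.16.100.1"))  # Site 1 O-UTEP -> True
print(is_remote_site_traffic("10.2.33.7"))     # local TEP     -> False
print(adjusted_mss(8960))                      # SYN/ACK from jumbo host -> 1400
```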

Pro Tip: The TCP-MSS adjust feature is a game-changer for environments where upgrading ISN MTU is not feasible. However, remember that it only works for TCP traffic. UDP and other non-TCP protocols will still need MTU considerations addressed through other means, such as reducing endpoint MTU or upgrading ISN path MTU.

Nexus Dashboard Orchestrator (NDO) Deployment and Evolution

The orchestration layer is the brain of the ACI multi-site architecture. Understanding its evolution, deployment options, and current recommendations is essential for both new deployments and migrations of existing environments.

From Multi-Site Orchestrator to Nexus Dashboard Orchestrator

The orchestration platform has undergone a significant transformation:

  • Cisco Multi-Site Orchestrator (MSO) was the original name for the orchestration platform
  • Starting from release 3.2(1), it was renamed to Cisco Nexus Dashboard Orchestrator (NDO)
  • The rebranding reflects its integration into the broader Cisco Nexus Dashboard platform

Original VM-Based Deployment (Now End-of-Life)

The original Multi-Site Orchestrator was deployed as a VM-based cluster:

  • Supported from MSO release 1.0(1) onward
  • Each MSO node was packaged as a VMware vSphere virtual appliance (OVA)
  • For high availability, each MSO virtual machine should be deployed on its own VMware ESXi host
  • Requirements for MSO release 1.2(x) and above:
    • VMware ESXi 6.0 or later
    • Minimum of 8 virtual CPUs (vCPUs)
    • 48 GB of memory
    • 100 GB of disk space
  • MSO 3.1(1) was the last supported release with this form factor and is now End-of-Life/End-of-Sale

MSO VM Requirements | Specification
Hypervisor | VMware ESXi 6.0+
vCPUs | 8 minimum
Memory | 48 GB
Disk | 100 GB
HA Model | 3 nodes on separate ESXi hosts
Last Release | MSO 3.1(1) -- now EoL/EoS

Nexus Dashboard Platform

NDO now runs as an application on the Cisco Nexus Dashboard platform. The Nexus Dashboard is a unified, agile platform that powers multiple applications beyond just the Orchestrator:

  • Nexus Dashboard Orchestrator -- for multi-site/multi-fabric policy orchestration
  • Nexus Dashboard Insights -- for analytics and assurance
  • Nexus Dashboard Fabric Discovery -- for fabric discovery operations
  • Nexus Dashboard Fabric Controller -- for fabric management
  • Nexus Dashboard SAN Controller -- for storage networking
  • Nexus Dashboard Data Broker -- for traffic monitoring

Nexus Dashboard Deployment Options

The Nexus Dashboard platform itself can be deployed in multiple form factors:

  1. Physical Nexus Dashboard Platform Cluster -- dedicated hardware appliances
  2. Virtual Nexus Dashboard Platform Cluster -- supported on ESXi and KVM hypervisors with specifications of 16 vCPUs, 64 GB RAM, and 500 GB disk (these are app node specifications for Orchestrator; different specifications apply for Insights)
  3. Cloud Nexus Dashboard Cluster -- supported for AWS and Azure cloud deployments

Pro Tip: When sizing your Nexus Dashboard virtual cluster for NDO, remember that the specifications (16 vCPUs, 64 GB RAM, 500 GB disk) are specifically for the Orchestrator application. If you plan to run Nexus Dashboard Insights alongside NDO on the same cluster, you will need to account for the additional resource requirements of that application.

How Do You Migrate from MSO to Nexus Dashboard Orchestrator?

Migration from the legacy Multi-Site Orchestrator to Nexus Dashboard Orchestrator is a structured process that requires careful planning and execution. This is not a simple in-place upgrade -- it involves a specific migration procedure.

Migration Procedure Overview

The migration from MSO to NDO follows these steps:

  1. Export a backup configuration file from MSO -- This captures the entire MSO configuration including all site associations, templates, and policies
  2. Import the backup file on NDO -- The backup is restored onto the new Nexus Dashboard Orchestrator instance
  3. Rollback configuration to the backup file -- This step includes a specific "database cleaning" procedure that is important for removing any stale objects that may exist in the original MSO database
  4. Check for configuration drifts -- After the migration, verify that the configuration on NDO matches the intended state and that no drifts have been introduced
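Step 4 amounts to comparing the intended (exported) configuration against what NDO actually holds after the restore. NDO has its own built-in drift detection; the following is only a naive Python illustration of the idea, operating on plain dictionaries:

```python
def config_drift(intended: dict, actual: dict) -> dict:
    """Return keys whose values differ, mapped to (intended, actual) pairs."""
    return {
        key: (intended.get(key), actual.get(key))
        for key in intended.keys() | actual.keys()
        if intended.get(key) != actual.get(key)
    }

# Hypothetical object states before and after migration
intended = {"bd-web": "stretched", "epg-app": "site-local"}
actual = {"bd-web": "stretched", "epg-app": "stretched"}
print(config_drift(intended, actual))  # {'epg-app': ('site-local', 'stretched')}
```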

Important Migration Considerations

  • A migration procedure is required between any old MSO release and NDO
  • There is no direct upgrade path from MSO or NDO 3.7 to ND 3.2 -- intermediate steps may be necessary
  • The Nexus Dashboard itself may need to be upgraded first before NDO can be upgraded
  • Upgrading or migrating to an NDO 4.x release may involve a template transformation -- be aware of this and ideally test it in a lab environment before performing the production migration

Recommended Releases per Scenario

The following table outlines the recommended target releases based on your current deployment:

Current Release | Target Release
MSO/NDO 1.1(x) to 3.7(2) | ND 3.2(1i) with NDO 4.4.1.1012
NDO 4.0(1) to 4.2(2) | ND 3.2(1i) with NDO 4.4.1.1012
Greenfield (new deployment) | ND 3.2(1i) with NDO 4.4.1.1012

For environments already running NDO 3.x on an older Nexus Dashboard version, there is the option to upgrade the Nexus Dashboard platform from version 2.x/3.x to Nexus Dashboard 3.2.

Pro Tip: Always perform the migration in a lab environment first. The template transformation that occurs during an NDO 4.x migration can introduce unexpected changes. Validate your policies and site associations thoroughly before and after migration.

NDO Provisioning for Autonomous Sites

One of the lesser-known capabilities of ACI multi-site is the ability to use NDO purely as a provisioning tool for autonomous sites -- without requiring an ISN or VXLAN EVPN for east-west communication between sites.

How Autonomous Site Provisioning Works

When fabrics are operated as independent, autonomous sites:

  • NDO serves as a single point of provisioning, pushing consistent configuration to multiple APIC clusters
  • There is no use of ISN and VXLAN EVPN for east-west communication between sites
  • Inter-site Layer 3 communication remains possible via the L3Out data path at each site
  • NDO can "replicate" configuration across sites by associating the same autonomous template to up to 100 fabrics

This is particularly valuable in service provider and large enterprise environments where you have dozens or even hundreds of small, independent ACI fabrics (such as cell sites or branch offices) that all need identical configuration. Instead of manually configuring each APIC cluster, you define the configuration once in NDO and push it to all associated fabrics.
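The replication model can be sketched as a simple fan-out with the 100-fabric limit enforced. The association payload shape is an illustrative assumption for this sketch, not the NDO API schema:

```python
MAX_AUTONOMOUS_SITES = 100  # NDO limit for one autonomous template

def associate_autonomous_template(template_name: str, site_ids: list) -> list:
    """Produce one identical template association per autonomous site."""
    if len(site_ids) > MAX_AUTONOMOUS_SITES:
        raise ValueError(
            f"an autonomous template supports at most {MAX_AUTONOMOUS_SITES} fabrics"
        )
    return [{"siteId": s, "templateName": template_name} for s in site_ids]

# e.g. 50 autonomous 5G cell-site fabrics receiving the same baseline config
cells = [f"cell-site-{n}" for n in range(1, 51)]
associations = associate_autonomous_template("5g-edge-baseline", cells)
print(len(associations))  # 50 identical associations, one per fabric
```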

Autonomous vs. Stretched Templates

The distinction between autonomous and stretched templates is fundamental to NDO operations:

  • Autonomous Templates: Configuration is replicated independently to each associated site. No cross-site networking is established. Each site operates in isolation from a data plane perspective.
  • Stretched Templates: Configuration creates cross-site constructs (stretched bridge domains, EPGs, etc.) that enable Layer 2 and Layer 3 communication between sites via the ISN.

Understanding this distinction is critical because it determines whether your deployment requires an ISN infrastructure or not.

ACI Multi-Site Control and Data Plane Deep Dive

The control and data plane operations are what make ACI multi-site function as a cohesive architecture despite the physical separation of fabrics. Let us examine how these planes operate across the ISN.

Control Plane: MP-BGP EVPN

The control plane uses MP-BGP EVPN to exchange endpoint and routing information between sites. The spine nodes at each site act as BGP speakers, establishing EVPN peering sessions across the ISN. Through these sessions, sites share:

  • Endpoint MAC and IP address information
  • Bridge domain and subnet reachability
  • Policy information for cross-site EPGs

The control plane traffic (MP-BGP sessions) is not VXLAN-encapsulated as it traverses the ISN. This is an important distinction from the data plane and has direct implications for MTU sizing, as discussed earlier. The default control-plane MTU is 9000 bytes, which can be adjusted on APIC if the ISN does not support jumbo frames.

Data Plane: VXLAN Encapsulation

When endpoint traffic needs to cross site boundaries, it is VXLAN-encapsulated by the originating site's infrastructure and forwarded across the ISN. The VXLAN encapsulation adds approximately 50 bytes of overhead to each packet. Key aspects of the data plane include:

  • Each site is assigned an O-UTEP (Overlay Unicast Tunnel Endpoint) address that serves as the source IP for VXLAN-encapsulated traffic leaving that site
  • The O-UTEP address is distinct from the site's internal TEP pool addresses
  • VXLAN tunnels between sites carry both unicast and BUM traffic
  • No multicast is required in the ISN -- BUM traffic is handled through ingress replication

O-UTEP Addressing

The O-UTEP addresses play a crucial role in the TCP-MSS adjust functionality and in general inter-site traffic identification. For example:

  • Site 1 might use O-UTEP 172.16.100.1 with internal TEP pool 10.1.0.0/16
  • Site 2 might use O-UTEP 172.16.200.1 with internal TEP pool 10.2.0.0/16

When a leaf node at Site 2 receives VXLAN-encapsulated traffic with source IP 172.16.100.1, it recognizes this address as outside its local TEP pool (10.2.0.0/16), identifying the traffic as inter-site and triggering any applicable inter-site processing such as TCP-MSS adjustment.

Connecting to the External Layer 3 Domain

While ACI multi-site focuses on interconnecting ACI fabrics, real-world deployments must also integrate with the external Layer 3 routing domain. Each site in a multi-site deployment can maintain its own L3Out connections to external networks such as campus networks, WAN edge routers, the internet, or service provider networks.

Inter-Site L3 Communication via L3Out

Even in autonomous site configurations where no ISN or VXLAN EVPN is used for east-west traffic, inter-site Layer 3 communication remains possible through the L3Out data path. In this model:

  • Each site advertises its subnets to the external routing domain via its local L3Out
  • Traffic between sites flows through the external Layer 3 network rather than through the ISN
  • This approach leverages existing WAN infrastructure without requiring dedicated ISN links

For fully stretched multi-site deployments with ISN connectivity, the L3Out at each site can be coordinated through NDO to provide consistent external routing policies across all sites.

Network Services Integration in ACI Multi-Site

Network services such as firewalls, load balancers, and intrusion prevention systems are integral to most data center deployments. In an ACI multi-site environment, these services can be integrated at each site independently while maintaining consistent policy through NDO.

The orchestrator ensures that service graph templates and contracts referencing network services are appropriately scoped and deployed. Whether services are deployed locally at each site or centralized at a specific site, the policy framework accommodates both models through the template and contract constructs managed by NDO.

Frequently Asked Questions

What is the difference between ACI Multi-Pod and ACI Multi-Site?

ACI Multi-Pod uses a single APIC cluster to manage multiple pods connected via an Inter-Pod Network, while ACI Multi-Site uses separate APIC clusters for each fabric connected via an Inter-Site Network. Multi-Pod has latency constraints between pods since they share a controller, whereas Multi-Site has no latency limitation between fabrics. Multi-Site provides stronger fault isolation since each fabric operates independently, making it the preferred architecture for geographically distributed or loosely coupled data centers.

Does ACI Multi-Site require multicast in the Inter-Site Network?

No. ACI multi-site does not require multicast support in the ISN for BUM (Broadcast, Unknown unicast, Multicast) traffic forwarding across sites. BUM traffic is handled through ingress replication. This significantly simplifies ISN design and makes ACI multi-site deployable over enterprise WAN links and DCI circuits that typically do not support multicast.

What MTU should the ISN support for ACI Multi-Site?

The ISN should support an MTU that accounts for at least 50 to 54 bytes of VXLAN encapsulation overhead beyond the endpoint MTU. For endpoints using a 1500-byte MTU, the ISN should ideally support at least 1550 bytes. If the ISN is limited to 1500-byte MTU, the TCP-MSS adjust functionality introduced in ACI release 6.0(3)F can dynamically reduce the TCP MSS value to ensure data packets fit within the ISN MTU after encapsulation. The supported TCP-MSS adjust values range from 688 to 9104 bytes.

How many fabrics can NDO manage in autonomous mode?

NDO can associate the same autonomous template to up to 100 fabrics. This makes it an excellent centralized provisioning tool for large-scale deployments such as service provider 5G telco DC/cloud environments where dozens or hundreds of autonomous ACI fabrics need consistent configuration.

What is the recommended NDO release for new deployments?

For greenfield deployments as well as migrations from older MSO/NDO releases, the recommended target is Nexus Dashboard 3.2(1i) with NDO 4.4.1.1012. This applies whether you are starting fresh, migrating from MSO/NDO 1.1(x) through 3.7(2), or upgrading from NDO 4.0(1) through 4.2(2).

Why was VLAN tag 4 chosen for ISN sub-interfaces?

The spine-to-ISN connections must use sub-interfaces with VLAN tag 4. This is a fixed design requirement of the ACI multi-site architecture and must be configured as part of the day-0 ISN setup. The ISN itself is not managed by APIC or NDO, so this VLAN tag configuration must be performed independently on both the spine nodes and the ISN devices.

Conclusion

ACI multi-site architecture represents the most flexible and resilient approach to interconnecting multiple data center fabrics under a unified policy framework. By maintaining separate APIC clusters at each site while centralizing orchestration through the Nexus Dashboard Orchestrator, organizations achieve the best of both worlds: operational independence and consistent policy enforcement.

The key takeaways from this guide are:

  • ACI multi-site is designed for loosely coupled data centers with no latency limitations between fabrics, using MP-BGP EVPN for control plane and VXLAN for data plane
  • The ISN requires careful planning around routing protocol peering (OSPF or BGP), VLAN tag 4 sub-interfaces, and MTU sizing -- but does not require multicast
  • TCP-MSS adjust in ACI 6.0(3)F solves the 1500-byte ISN MTU challenge for TCP traffic by dynamically adjusting MSS values on SYN and SYN/ACK packets at egress leaf nodes
  • NDO has evolved from a standalone VM appliance to a Nexus Dashboard application, with a clear migration path and recommended releases for every scenario
  • Autonomous site provisioning allows NDO to manage up to 100 independent fabrics without requiring ISN connectivity, making it ideal for large-scale service provider and enterprise deployments

Mastering ACI multi-site architecture is essential for anyone pursuing advanced data center certifications or managing production multi-fabric environments. The architecture continues to evolve with each release, adding capabilities like TCP-MSS adjust that address real-world deployment challenges. Stay current with the latest releases and always validate new features in a lab environment before deploying in production.

For hands-on practice and deeper exploration of data center networking technologies, visit NHPREP to explore our comprehensive course catalog covering CCIE Enterprise and data center topics.