Lesson 1 of 5

ISE Node Types and Distributed Deployment

Introduction

Cisco Identity Services Engine (ISE) is rarely deployed as a single box in production networks. As your organization grows and the number of endpoints increases, you need to understand how ISE distributes its workload across multiple nodes, each running a specific persona. Getting the deployment architecture right from the start saves you from painful redesigns later.

In this lesson, you will learn the four ISE node personas and the role each one plays. You will understand the three deployment sizes — small, medium, and large — along with their node limits and scaling rules. You will also explore latency requirements between nodes, maximum concurrent session numbers for different hardware platforms, and the critical difference between steady-state and peak-demand capacity planning. By the end, you will be able to design an ISE deployment that matches your organization's scale and redundancy requirements.

Key Concepts

ISE Node Personas

Every ISE node runs one or more personas. A persona defines what job that node performs within the deployment. There are four personas to understand:

| Persona | Abbreviation | Role |
| --- | --- | --- |
| Policy Administration Node | PAN | Central management point for all ISE configuration. All policy changes are made on the PAN and replicated to other nodes. |
| Policy Service Node | PSN | Makes policy decisions and handles authentication. Acts as the RADIUS and TACACS+ server that network access devices (NADs) communicate with directly. |
| Monitoring and Troubleshooting Node | MnT | Reporting and logging node. Serves as the syslog collector for all other ISE nodes in the deployment. |
| pxGrid Node | pxGrid | Enables context sharing between ISE and other security products through the Platform Exchange Grid framework. |

The PSN is where the real-time work happens. When a switch or wireless controller sends a RADIUS authentication request, it goes to a PSN. The PSN evaluates the request against the policies configured on the PAN and returns an authorization result. The MnT node collects logs and syslog data from every ISE node in the deployment, giving administrators a centralized place for reporting, troubleshooting, and auditing. The PAN is purely administrative — it holds the configuration database and replicates changes outward.
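The division of labor above can be sketched as a simple routing map. This is an illustrative model only, not an ISE API; the traffic-type names are assumptions made for the example.

```python
# Illustrative sketch: which ISE persona terminates each kind of traffic.
# Traffic-type keys are invented for this example; the persona roles
# follow the table above.
PERSONA_ROLES = {
    "PAN": "configuration and policy replication",
    "PSN": "RADIUS/TACACS+ authentication and authorization",
    "MnT": "syslog collection, reporting, troubleshooting",
    "pxGrid": "context sharing with external security products",
}

def target_persona(traffic: str) -> str:
    """Map a traffic type to the persona that handles it."""
    routing = {
        "radius_auth": "PSN",       # NADs send RADIUS requests to a PSN
        "tacacs_auth": "PSN",       # device administration also lands on a PSN
        "admin_change": "PAN",      # all config changes are made on the PAN
        "syslog": "MnT",            # every ISE node sends logs to the MnT
        "context_share": "pxGrid",  # e.g. sharing session context with a firewall
    }
    return routing[traffic]

print(target_persona("radius_auth"))  # a switch's authentication goes to a PSN
```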

Deployment Sizes

ISE supports three deployment sizes, each with different rules about how personas are assigned to nodes:

| Attribute | Small | Medium | Large |
| --- | --- | --- | --- |
| Maximum nodes | 2 (optional 3rd) | 8 | 58 |
| PAN placement | All personas on 2 nodes | PAN + MnT on same node | All personas on dedicated nodes |
| PSN placement | All personas on 2 nodes | PSNs on dedicated nodes (up to 4) | Up to 50 dedicated PSNs |
| pxGrid | Optional on 3rd node | Can be enabled on up to 2 nodes | Up to 4 pxGrid nodes |
| Key characteristic | Redundancy only; no scale increase from 3rd node | Balanced cost and scale | Maximum scale and full persona separation |

In a small deployment, both nodes run every persona. You can add an optional third node, but it serves only as a dedicated PSN, pxGrid node, or health check node. That third node does not increase scale; it exists purely for redundancy and load sharing.

In a medium deployment, the PAN and MnT personas share a node, while PSNs run on their own dedicated hardware. pxGrid can be added to the PAN/MnT nodes or to up to two PSNs.

In a large deployment, every persona gets its own dedicated node. This is the only way to reach maximum scale — up to 50 PSNs and up to 4 pxGrid nodes, with dedicated PAN and MnT nodes.
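The sizing rules above can be condensed into a small helper. This is a hypothetical sketch based only on the node limits in this lesson's table (no dedicated PSNs in small, up to 4 in medium, up to 50 in large), not a Cisco sizing tool.

```python
# Hypothetical helper: pick the smallest ISE deployment size that can
# host a target number of dedicated PSNs, using this lesson's limits.
def choose_deployment_size(dedicated_psns: int) -> str:
    if dedicated_psns == 0:
        return "small"    # both nodes run every persona
    if dedicated_psns <= 4:
        return "medium"   # PAN + MnT shared, PSNs on dedicated nodes
    if dedicated_psns <= 50:
        return "large"    # full persona separation
    raise ValueError("exceeds the 50-PSN limit of a large deployment")

print(choose_deployment_size(6))  # -> large
```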

How It Works

Centralized versus Distributed Large Deployments

When you move to a large deployment, you have two architectural choices: centralized and distributed.

In a centralized large deployment, a single ISE deployment spans multiple data centers. For example, you place the Primary PAN and MnT in DC1 and the Secondary PAN and MnT in DC2, with PSNs distributed across both sites. All nodes belong to the same ISE deployment and share one configuration database.

In a distributed large deployment using separate "cubes," each location runs its own independent ISE deployment with its own Primary PAN and MnT. Deployment 1 has its own Primary PAN and MnT, and Deployment 2 has its own Primary PAN and MnT. These are fully independent — they do not share configuration or session data.

The choice depends on your latency budget and operational model. If your sites are close enough and you want unified policy management, centralized works well. If sites are geographically distant or you need administrative separation, separate cubes give you isolation.

Latency Requirements

The maximum supported latency between the PAN and any other ISE node in the deployment is 300 milliseconds. This applies to inter-node communication only — not to the link between NADs and PSNs. RADIUS traffic between a NAD and a PSN is more latency-tolerant than the inter-node replication traffic between ISE nodes.

Important: The 300ms latency guidance is not a hard cutoff where things immediately break. It is a guardrail based on what has been tested and validated. In some environments 300ms works fine, while in others even 150ms may cause issues. The endpoint data volume and profiling configuration are the primary determinants of replication requirements. Higher authentication and profiling rates demand lower latency between nodes.
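A pre-deployment sanity check can compare measured PAN-to-node latencies against the tested guideline. The sketch below assumes you already have round-trip measurements (e.g., from ping); ISE does not expose such a function, and the node names are invented.

```python
# Sketch: flag ISE nodes whose latency to the PAN exceeds the 300 ms
# tested guideline. Measurements are assumed input; remember that some
# environments need far lower latency than the guideline.
GUIDELINE_MS = 300

def flag_latency(latencies_ms: dict) -> list:
    """Return nodes whose PAN latency exceeds the tested guideline."""
    return [node for node, ms in latencies_ms.items() if ms > GUIDELINE_MS]

measured = {"psn-dc1-01": 12, "psn-dc2-01": 180, "psn-apac-01": 340}
print(flag_latency(measured))  # ['psn-apac-01']
```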

Session Counting and Licensing

ISE licensing is based on active endpoint sessions, not total endpoints or total authentications. Understanding how sessions are counted is critical for capacity planning:

  • A session starts upon RADIUS authorization
  • A session stops upon one of three events: disconnect, session expiration, or idle timeout
  • RADIUS Accounting messages mark the session start and stop events

This means that a device which authenticates once in the morning and stays connected all day counts as one active session for the entire duration. Only when it disconnects, its session expires, or it times out does it free up a session slot.
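A minimal model makes the counting rule concrete: a session occupies one slot from its start event until a stop event (disconnect, expiration, or idle timeout) releases it. The event format below is illustrative, not RADIUS wire format.

```python
# Minimal model of active-session counting: a session holds one slot
# from its accounting start until its accounting stop. Event tuples are
# illustrative, not actual RADIUS accounting records.
def concurrent_sessions(events):
    """Replay (action, endpoint) events; return the active session count."""
    active = set()
    for action, endpoint in events:
        if action == "start":
            active.add(endpoint)
        elif action == "stop":
            active.discard(endpoint)
    return len(active)

# A desktop that authenticates at 9:00 and stays connected all day holds
# one slot the entire time; only a stop event frees it.
events = [("start", "pc-01"), ("start", "phone-01"), ("stop", "phone-01")]
print(concurrent_sessions(events))  # 1
```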

Maximum Concurrent Sessions by Platform

The number of concurrent active sessions ISE can support depends on both the hardware platform and the deployment size:

| Deployment | SNS 3615 | SNS 3715/3815 | SNS 3655 | SNS 3755/3855 | SNS 3695 | SNS 3795/3895 |
| --- | --- | --- | --- | --- | --- | --- |
| Large | Unsupported | Unsupported | 500,000 | 750,000 | 2,000,000 | 2,000,000 |
| Medium | 12,500 | 75,000 | 25,000 | 150,000 | 50,000 | 150,000 |
| Small | 12,500 | 25,000 | 25,000 | 50,000 | 50,000 | 50,000 |

Notice that the SNS 3615 and SNS 3715/3815 platforms are unsupported for large deployments. If you plan to scale to large, you must use the SNS 3655 or higher.

The session capacity also changes depending on whether a PSN is dedicated (running only the PSN persona) or shared (running multiple personas):

| PSN Type | SNS 3615 | SNS 3715/3815 | SNS 3655 | SNS 3755/3855 | SNS 3695 | SNS 3795/3895 |
| --- | --- | --- | --- | --- | --- | --- |
| Dedicated PSN | 25,000 | 50,000 | 50,000 | 100,000 | 100,000 | 100,000 |
| Shared PSN | 12,500 | 25,000 | 25,000 | 50,000 | 50,000 | 50,000 |

A shared PSN supports exactly half the sessions of a dedicated PSN on the same hardware. This is why large deployments place each persona on its own node — you get double the PSN capacity by not sharing.
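The dedicated-versus-shared arithmetic can be expressed directly from the table. The helper below is an illustrative lookup using this lesson's per-PSN figures; the platform keys are shorthand for the SNS model families above.

```python
# Per-PSN concurrent-session capacity (dedicated figures from the table
# above); a shared PSN supports exactly half. Illustrative lookup only.
DEDICATED_CAPACITY = {
    "SNS-3615": 25_000, "SNS-3715": 50_000, "SNS-3655": 50_000,
    "SNS-3755": 100_000, "SNS-3695": 100_000, "SNS-3795": 100_000,
}

def psn_capacity(platform: str, dedicated: bool) -> int:
    """Sessions one PSN of this platform supports, by persona layout."""
    cap = DEDICATED_CAPACITY[platform]
    return cap if dedicated else cap // 2

# Two SNS-3655 boxes: dedicated PSNs vs. the same hardware running
# shared personas -- persona separation doubles the PSN capacity.
print(2 * psn_capacity("SNS-3655", dedicated=True))   # 100000
print(2 * psn_capacity("SNS-3655", dedicated=False))  # 50000
```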

Configuration Example

ISE node registration and persona assignment are performed through the ISE administrative GUI rather than through CLI commands. The phased cloud migration workflow below, however, illustrates how nodes are added to and removed from a distributed deployment while maintaining service.

Phased Cloud Migration Workflow

When migrating an on-premises medium deployment to the cloud, the process follows a careful phased approach to maintain service availability:

Phase 1 — Migrate first pair of PSNs:

  1. Deregister PSN1 and PSN2 from the deployment
  2. Deploy new cloud instances
  3. Install the required patch
  4. Add the new nodes to the deployment
  5. Test authentication and authorization

Phase 2 — Migrate second pair of PSNs:

  1. Deregister PSN3 and PSN4 from the deployment
  2. Deploy new cloud instances
  3. Install the required patch
  4. Add the new nodes to the deployment
  5. Test authentication and authorization

Phase 3 — Migrate the PAN/MnT nodes:

  1. Promote the Secondary PAN/MnT to Primary
  2. Remove the original Secondary PAN/MnT
  3. Deploy a new cloud instance
  4. Install the required patch
  5. Add the node to the deployment

Best Practice: Before deregistering any node, export its certificates. These certificates must be imported before adding the replacement node to the deployment. Additionally, evaluate NAD configurations before each phase — exclude the PSNs being migrated from load-balancer groups, or ensure the high-availability configuration includes the PSNs that remain online.
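The three phases above can be rendered as an ordered checklist, which makes it easy to verify that no phase begins before the previous one's test step. This is a planning sketch, not ISE automation; the PSN names and step wording mirror the workflow in this lesson.

```python
# Sketch: generate the phased migration checklist from the workflow
# above. PSN pairs are migrated one pair at a time, each phase ending
# with an authentication test before the next pair is touched.
def migration_plan(psn_pairs):
    steps = []
    for a, b in psn_pairs:
        steps += [
            f"deregister {a} and {b}",
            "deploy new cloud instances",
            "install the required patch",
            "add the new nodes to the deployment",
            "test authentication and authorization",
        ]
    steps += [  # Phase 3: PAN/MnT, per the workflow above
        "promote Secondary PAN/MnT to Primary",
        "remove original Secondary PAN/MnT",
        "deploy a new cloud instance",
        "install the required patch",
        "add the node to the deployment",
    ]
    return steps

plan = migration_plan([("PSN1", "PSN2"), ("PSN3", "PSN4")])
print(len(plan))  # 15 steps across three phases
```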

Real-World Application

Steady-State versus Peak Demand

Capacity planning is not as simple as counting your endpoints and buying hardware to match. You must account for transactions per second (TPS) and the difference between steady-state and peak-demand loads:

  • You will always have a mix of static and mobile endpoints. Static endpoints (desktops, printers, IP phones) authenticate once and maintain long sessions of eight hours or more.
  • Usage patterns cause regional and periodic spikes. Activity "follows the sun" — as each region starts its workday, authentication load increases.
  • Wireless roaming causes spikes on the hour as users move between classrooms and meetings.
  • Mobile endpoints hibernate and roam, generating a 3 to 10 times larger load than static endpoints.
  • Misconfigured devices can generate 100 to 1,000 times the average authentication load, creating unexpected spikes that can overwhelm a PSN.
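A back-of-the-envelope estimate can combine the multipliers above. The function below is a rough sketch using this lesson's rules of thumb (mobile endpoints at 3-10x, misconfigured devices at 100-1,000x), not a Cisco sizing formula; the default multipliers pick the worst case of each range.

```python
# Rough peak-demand estimate in "static-endpoint units", using the
# lesson's multipliers: mobile endpoints generate ~3-10x the load of
# static ones, and a misconfigured device can add 100-1,000x the
# average. Defaults take the worst case of each range.
def peak_auth_load(static_eps, mobile_eps, mobile_mult=10,
                   misbehaving_devices=0, misconfig_mult=1000):
    """Worst-case relative authentication load for capacity planning."""
    return (static_eps
            + mobile_eps * mobile_mult
            + misbehaving_devices * misconfig_mult)

# 5,000 desktops/printers plus 2,000 laptops/phones, with one
# misconfigured access point hammering the PSNs:
print(peak_auth_load(5_000, 2_000, misbehaving_devices=1))  # 26000
```

The point of the exercise: the 2,000 mobile endpoints, not the 5,000 static ones, dominate the peak, and a single misbehaving device adds as much load as a thousand healthy static endpoints.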

Design Considerations

When planning your ISE deployment, keep these guidelines in mind:

  • Start with dedicated PSNs in medium and large deployments. The session capacity doubles compared to shared PSNs, and you avoid resource contention between personas.
  • Measure your inter-site latency before choosing centralized versus distributed. If latency between data centers exceeds your comfort level relative to the 300ms guideline, use separate cubes.
  • Plan for peak, not steady state. Your hardware must handle the Monday-morning authentication storm, not just the midday cruise. Factor in the 3-10x multiplier from mobile endpoints.
  • Use phased migration when changing infrastructure. Never deregister all PSNs simultaneously. Migrate in pairs, test after each phase, and always maintain enough online PSNs to handle your authentication load.
  • Monitor for misconfigured devices. A single misconfigured switch or access point generating thousands of authentications per minute can consume an entire PSN's capacity.
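The last guideline can be approximated with a simple outlier check: flag any NAD whose authentication count is far above a typical device's, using the lesson's 100x figure as the threshold. The input counts are assumed to come from MnT reports; the function, names, and threshold are illustrative assumptions.

```python
import statistics

# Sketch: flag NADs whose authentication volume is wildly above the
# deployment's typical device (median baseline, so one noisy device
# cannot hide itself by inflating the average). Counts per NAD are
# assumed to come from MnT reporting; threshold is an assumption based
# on the 100-1,000x figure cited above.
def noisy_nads(auth_counts, threshold=100):
    """Return NADs whose auth count exceeds threshold x the median."""
    baseline = statistics.median(auth_counts.values())
    return sorted(n for n, c in auth_counts.items() if c > threshold * baseline)

counts = {"sw-01": 120, "sw-02": 95, "ap-07": 250_000, "sw-03": 110}
print(noisy_nads(counts))  # ['ap-07']
```

Using the median rather than the mean as the baseline matters here: a single device generating 1,000x the normal load would drag the mean up enough to mask itself.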

Summary

  • ISE uses four node personas — PAN (administration), PSN (policy decisions and RADIUS/TACACS+), MnT (monitoring and logging), and pxGrid (context sharing) — each serving a distinct function in the deployment.
  • Three deployment sizes exist: small (2-3 nodes, all personas shared), medium (up to 8 nodes with dedicated PSNs), and large (up to 58 nodes with full persona separation and up to 50 PSNs).
  • Inter-node latency must stay within 300ms, but higher authentication and profiling rates may require significantly lower latency. RADIUS traffic to NADs is more tolerant of latency than inter-node replication.
  • Dedicated PSNs support double the concurrent sessions of shared PSNs on the same hardware, making persona separation a key scaling strategy.
  • Capacity planning must account for peak demand, including mobile endpoint roaming (3-10x load multiplier) and misconfigured devices (100-1,000x load multiplier).

In the next lesson, we will explore ISE high availability mechanisms, including PAN failover and PSN load balancing, to ensure your deployment remains resilient under failure conditions.