AI vs ML vs Deep Learning
Objective
In this lesson you will learn the conceptual differences and practical relationships between Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). You will see where each is applied in production networks, how data flows through ML systems used for networking (from telemetry to inference), and a stepwise practical exercise showing how a network team would prepare, train, validate, and deploy a basic ML workflow for a network problem. This matters in production networks because ML/DL systems are increasingly used for anomaly detection, radio resource management, capacity forecasting, and root-cause analysis — all tasks that automate routine operations and reduce mean time to repair.
Real-world scenario: An enterprise wireless team wants to reduce client drops in a campus. Instead of manual RF tuning, the team will collect telemetry (client signal, channel utilization, interference) and train a model to predict sessions at risk of dropping. The ML pipeline will preprocess telemetry, train a model, validate it, and produce an inference service that feeds actionable alarms to network operations.
Tip: Think of AI, ML, and DL as nested boxes — AI is the broad discipline, ML is a subset that learns from data, and DL is an ML approach using many-layered neural networks.
Topology & Device Table
ASCII topology (simple logical lab to show where data flows; every interface includes an IP address used by the telemetry and model servers):
+----------------------+                      +----------------------+
|      Student-PC      |                      |     Model-Server     |
|  eth0: 10.10.10.2/24 |----------------------|  eth0: 10.10.10.10/24|
+----------------------+    10.10.10.0/24     +----------------------+
           |                                              |
           |                                              |
           |                                              |
+----------------------+                      +----------------------+
|    Data-Collector    |                      |    Inference-API     |
|     (telemetry)      |                      | (serves predictions) |
|  eth0: 10.10.10.3/24 |----------------------|  eth0: 10.10.10.11/24|
+----------------------+    10.10.10.0/24     +----------------------+
Device Table
| Device | Interface | IP Address | Subnet Mask | Role |
|---|---|---|---|---|
| Student-PC | eth0 | 10.10.10.2 | 255.255.255.0 | Operator workstation, dataset review |
| Data-Collector | eth0 | 10.10.10.3 | 255.255.255.0 | Collects and normalizes telemetry |
| Model-Server | eth0 | 10.10.10.10 | 255.255.255.0 | Training environment (model dev) |
| Inference-API | eth0 | 10.10.10.11 | 255.255.255.0 | Runs model for live predictions |
Real-world note: In production you would place the Model-Server and Inference-API in an isolated VLAN or VPC and secure telemetry with TLS and authentication. DNS names in examples use lab.nhprep.com.
Key Concepts (theory before CLI / hands-on)
- AI vs ML vs Deep Learning
- Artificial Intelligence (AI): The umbrella field that aims to produce systems that perform tasks which would normally require human intelligence (planning, reasoning, perception). In networking, AI systems provide decision support and automation.
- Machine Learning (ML): A subset of AI where systems learn patterns from data rather than being explicitly programmed. For example, an ML classifier can learn to label flow records as "normal" or "anomalous".
- Deep Learning (DL): A subset of ML that uses multi-layered (deep) neural networks to learn hierarchical representations. DL excels with large volumes of high-dimensional data (wireless RF spectrums, packet payload embeddings).
- Analogy: Think of AI as the entire toolbox, ML as a specific set of power tools inside the box, and DL as one high-powered tool that needs lots of electricity (data and compute).
- Data Flow & Pipeline
- Telemetry collection → Data preprocessing (cleaning, normalization, feature extraction) → Model training → Validation/testing → Deployment (inference) → Monitoring and feedback loop.
- Packet/telemetry behavior: collectors poll or receive telemetry (e.g., SNMP, streaming telemetry). For ML, the time-series sampling frequency affects model responsiveness; in practice, operators trade granularity against collection and storage cost.
- Model Types & When to Use
- Supervised learning: labeled data (e.g., past drops labeled as drop/no-drop) — used for classification/regression tasks.
- Unsupervised learning: clustering or anomaly detection when labels are not available (useful for spotting novel issues in networks).
- Reinforcement learning: models that learn policies via reward (used in advanced RRM optimizers).
- Transformers / Attention (why it matters)
- Attention mechanisms give models the ability to weight different parts of input contextually. In time-series network telemetry, attention can let a model focus on recent spikes or important features, improving prediction accuracy when relationships vary over time.
- Compute Considerations
- DL models often require GPUs and significant memory. In a data center, this influences choices: perform model training offline on Model-Server with GPU, serve lightweight inference on CPU-based Inference-API, or use accelerated inference if latency is critical.
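To make the model-type distinction concrete: when labels are scarce, unsupervised anomaly detection can still flag trouble. The following is a minimal, illustrative sketch (not part of the lab steps) using scikit-learn's IsolationForest on telemetry-like features; the sample values are invented for illustration.

```python
# Sketch: unsupervised anomaly detection on telemetry-like features.
# Requires scikit-learn; the feature values below are invented.
from sklearn.ensemble import IsolationForest

# Each row: [rssi, channel_util, client_count]
samples = [
    [-60, 20, 10],
    [-62, 25, 11],
    [-61, 22, 12],
    [-63, 24, 10],
    [-95, 98, 40],  # deliberately anomalous sample
]

# contamination is the expected fraction of anomalies in the data
model = IsolationForest(contamination=0.2, random_state=42)
labels = model.fit_predict(samples)  # 1 = inlier, -1 = anomaly

for row, label in zip(samples, labels):
    status = "anomalous" if label == -1 else "normal"
    print(row, "->", status)
```

Note that no `drop` label is needed here: the model learns what "normal" telemetry looks like and scores deviations, which is exactly the unsupervised use case described above.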
Step-by-step configuration
Note: This lesson is conceptual and focuses on preparing and running a model lifecycle for a network use-case. The examples use the lab network addressing shown above (10.10.10.x). Commands shown are illustrative shell/python commands you would run on the devices in the Device Table. All commands are shown with context and verification.
Step 1: Collect telemetry to the Data-Collector
What we are doing: Configure the Data-Collector to receive telemetry from network devices (simulated here with a CSV log pulled to the collector) so we have the dataset for training. This step matters because ML models depend on quality data; if collection is wrong, the model will learn incorrect patterns.
# On Data-Collector (10.10.10.3) - create a working directory and fetch sample telemetry
mkdir -p /opt/nhprep/telemetry
cd /opt/nhprep/telemetry
# Simulate telemetry collection (in production this would be TCP/UDP streams, SNMP, or streaming telemetry)
echo "timestamp,rssi,channel_util,client_count,drop" > telemetry.csv
echo "2025-03-01T12:00:00, -65, 20, 12, 0" >> telemetry.csv
echo "2025-03-01T12:01:00, -72, 78, 14, 1" >> telemetry.csv
What just happened: We created a telemetry directory and a CSV file representing collected samples. Each row is a telemetry snapshot: signal strength (rssi), channel utilization, number of clients, and a binary label drop indicating whether a session drop occurred.
Real-world note: In production, telemetry is continuous and higher volume; collectors compress and batch data, and ensure timestamps are synchronized (NTP) to avoid misleading correlations.
Verify:
# Verify file contents
cat /opt/nhprep/telemetry/telemetry.csv
Expected output:
timestamp,rssi,channel_util,client_count,drop
2025-03-01T12:00:00, -65, 20, 12, 0
2025-03-01T12:01:00, -72, 78, 14, 1
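Two rows are enough to demonstrate the CSV format, but far too few to train anything meaningful. If you want a larger toy dataset to experiment with, a sketch like the following can generate synthetic telemetry; the drop heuristic (weak signal plus high utilization) and the output filename are invented for illustration only.

```python
# Sketch: generate synthetic telemetry rows for experimentation.
# The drop rule (weak rssi + high utilization -> drop) is an invented heuristic.
import csv
import random

random.seed(42)  # fixed seed so the generated rows are reproducible

rows = []
for minute in range(60):
    rssi = random.randint(-90, -50)        # dBm
    channel_util = random.randint(5, 95)   # percent
    client_count = random.randint(1, 30)
    # Heuristic label: drop when radio conditions are poor
    drop = 1 if (rssi < -75 and channel_util > 60) else 0
    rows.append([f"2025-03-01T12:{minute:02d}:00", rssi, channel_util, client_count, drop])

with open("telemetry_synthetic.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "rssi", "channel_util", "client_count", "drop"])
    writer.writerows(rows)

print("Wrote", len(rows), "samples")
```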
<div class="topology-diagram">
<img src="data:image/svg+xml;base64,PD9wbGFudHVtbCAxLjIwMjYuMT8+PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiBjb250ZW50U3R5bGVUeXBlPSJ0ZXh0L2NzcyIgZGF0YS1kaWFncmFtLXR5cGU9Ik5XRElBRyIgaGVpZ2h0PSIzMDlweCIgcHJlc2VydmVBc3BlY3RSYXRpbz0ibm9uZSIgc3R5bGU9IndpZHRoOjU4MXB4O2hlaWdodDozMDlweDtiYWNrZ3JvdW5kOiNGRkZGRkY7IiB2ZXJzaW9uPSIxLjEiIHZpZXdCb3g9IjAgMCA1ODEgMzA5IiB3aWR0aD0iNTgxcHgiIHpvb21BbmRQYW49Im1hZ25pZnkiPjxkZWZzLz48Zz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMiIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSI0Ny4zNDk2IiB4PSI5Ny42ODM2IiB5PSIxNi4xMzg3Ij5WTEFOMTA8L3RleHQ+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1pbHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTIiIGxlbmd0aEFkanVzdD0ic3BhY2luZyIgdGV4dExlbmd0aD0iNjguOTI5NyIgeD0iNzYuMTAzNSIgeT0iMzAuMTA3NCI+MTAuMC4xLjAvMjQ8L3RleHQ+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1pbHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTIiIGxlbmd0aEFkanVzdD0ic3BhY2luZyIgdGV4dExlbmd0aD0iMTQwLjAzMzIiIHg9IjUiIHk9IjE3MS4zMjYyIj5NYW5hZ2VtZW50X1ZMQU4xMDA8L3RleHQ+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1pbHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTIiIGxlbmd0aEFkanVzdD0ic3BhY2luZyIgdGV4dExlbmd0aD0iODQuMTk5MiIgeD0iNjAuODM0IiB5PSIxODUuMjk0OSI+MTcyLjE2LjAuMC8yNDwvdGV4dD48cmVjdCBmaWxsPSIjRTJFMkYwIiBoZWlnaHQ9IjUiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MTsiIHdpZHRoPSI0MjMuMzM5OCIgeD0iMTUwLjAzMzIiIHk9IjE2LjQ2ODgiLz48cmVjdCBmaWxsPSIjRTJFMkYwIiBoZWlnaHQ9IjUiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MTsiIHdpZHRoPSI0MjMuMzM5OCIgeD0iMTUwLjAzMzIiIHk9IjE3MS42NTYzIi8+PHBhdGggZD0iTTIzMS40MTExLDIxLjQ2ODggTDIzMS40MTExLDc3LjA3ODEiIGZpbGw9Im5vbmUiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MTsiLz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMSIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSI1Mi40ODEiIHg9IjIwNS4xNzA3IiB5PSI0NC4xNzkyIj4xMC4wLjEuMTA8L3RleHQ+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1pbHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTEiIGxlbmd0aEFkanVzdD0ic3Bh
Y2luZyIgdGV4dExlbmd0aD0iMjUuMDUwOCIgeD0iMjA1LjE3MDciIHk9IjU2Ljk4MzkiPmV0aDA8L3RleHQ+PHBhdGggZD0iTTM4Ny42MTgyLDIxLjQ2ODggTDM4Ny42MTgyLDc3LjA3ODEiIGZpbGw9Im5vbmUiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MTsiLz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMSIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSI1Mi40ODEiIHg9IjM2MS4zNzc3IiB5PSI0NC4xNzkyIj4xMC4wLjEuMTE8L3RleHQ+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1pbHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTEiIGxlbmd0aEFkanVzdD0ic3BhY2luZyIgdGV4dExlbmd0aD0iMjUuMDUwOCIgeD0iMzYxLjM3NzciIHk9IjU2Ljk4MzkiPmV0aDA8L3RleHQ+PHBhdGggZD0iTTUxOS45MTAyLDIxLjQ2ODggTDUxOS45MTAyLDc3LjA3ODEiIGZpbGw9Im5vbmUiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MTsiLz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMSIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSI0NS40ODI0IiB4PSI0OTcuMTY4OSIgeT0iNDQuMTc5MiI+MTAuMC4xLjI8L3RleHQ+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1pbHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTEiIGxlbmd0aEFkanVzdD0ic3BhY2luZyIgdGV4dExlbmd0aD0iMjkuMjgzMiIgeD0iNDk3LjE2ODkiIHk9IjU2Ljk4MzkiPkdpMC8xPC90ZXh0PjxwYXRoIGQ9Ik01MTkuOTEwMiwxMTEuMDQ2OSBMNTE5LjkxMDIsMTcxLjY1NjMiIGZpbGw9Im5vbmUiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MTsiLz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMSIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSI1OS40Nzk1IiB4PSI0OTAuMTcwNCIgeT0iMTM4Ljc1NzMiPjE3Mi4xNi4wLjI8L3RleHQ+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1pbHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTEiIGxlbmd0aEFkanVzdD0ic3BhY2luZyIgdGV4dExlbmd0aD0iMjkuMjgzMiIgeD0iNDkwLjE3MDQiIHk9IjE1MS41NjIiPkdpMC8yPC90ZXh0PjxwYXRoIGQ9Ik0yMjcuNDExMSwxNzYuNjU2MyBMMjI3LjQxMTEsMjE5LjQ2MDkiIGZpbGw9Im5vbmUiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MTsiLz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMSIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSI2Ni40NzgiIHg9IjE5NC4xNzIxIiB5PSIxOTIuOTY0NCI+MTcyLjE2LjAuMTA8L3RleHQ+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1p
bHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTEiIGxlbmd0aEFkanVzdD0ic3BhY2luZyIgdGV4dExlbmd0aD0iMjUuMDUwOCIgeD0iMTk0LjE3MjEiIHk9IjIwNS43NjkiPmV0aDA8L3RleHQ+PHBhdGggZD0iTTM4My42MTgyLDE3Ni42NTYzIEwzODMuNjE4MiwyMTkuNDYwOSIgZmlsbD0ibm9uZSIgc3R5bGU9InN0cm9rZTojMTgxODE4O3N0cm9rZS13aWR0aDoxOyIvPjx0ZXh0IGZpbGw9IiMwMDAwMDAiIGZvbnQtZmFtaWx5PSJzYW5zLXNlcmlmIiBmb250LXNpemU9IjExIiBsZW5ndGhBZGp1c3Q9InNwYWNpbmciIHRleHRMZW5ndGg9IjY2LjQ3OCIgeD0iMzUwLjM3OTIiIHk9IjE5Mi45NjQ0Ij4xNzIuMTYuMC4xMTwvdGV4dD48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMSIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSIyNS4wNTA4IiB4PSIzNTAuMzc5MiIgeT0iMjA1Ljc2OSI+ZXRoMDwvdGV4dD48cmVjdCBmaWxsPSIjRjFGMUYxIiBoZWlnaHQ9IjMzLjk2ODgiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MC41OyIgd2lkdGg9IjY4LjM3NSIgeD0iMTk1LjIyMzYiIHk9Ijc3LjA3ODEiLz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMiIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSI0OC4zNzUiIHg9IjIwNS4yMjM2IiB5PSI5OC4yMTY4Ij5DbGllbnRfMTwvdGV4dD48cmVjdCBmaWxsPSIjRjFGMUYxIiBoZWlnaHQ9IjMzLjk2ODgiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MC41OyIgd2lkdGg9IjY4LjM3NSIgeD0iMzUxLjQzMDciIHk9Ijc3LjA3ODEiLz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMiIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5nIiB0ZXh0TGVuZ3RoPSI0OC4zNzUiIHg9IjM2MS40MzA3IiB5PSI5OC4yMTY4Ij5DbGllbnRfMjwvdGV4dD48cmVjdCBmaWxsPSIjRjFGMUYxIiBoZWlnaHQ9IjMzLjk2ODgiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MC41OyIgd2lkdGg9IjgwLjkyNTgiIHg9IjQ3Ny40NDczIiB5PSI3Ny4wNzgxIi8+PHRleHQgZmlsbD0iIzAwMDAwMCIgZm9udC1mYW1pbHk9InNhbnMtc2VyaWYiIGZvbnQtc2l6ZT0iMTIiIGxlbmd0aEFkanVzdD0ic3BhY2luZyIgdGV4dExlbmd0aD0iNjAuOTI1OCIgeD0iNDg3LjQ0NzMiIHk9Ijk4LjIxNjgiPlN3aXRjaF9TMTwvdGV4dD48cmVjdCBmaWxsPSIjRjFGMUYxIiBoZWlnaHQ9IjMzLjk2ODgiIHN0eWxlPSJzdHJva2U6IzE4MTgxODtzdHJva2Utd2lkdGg6MC41OyIgd2lkdGg9IjEyOC43NTU5IiB4PSIxNjUuMDMzMiIgeT0iMjE5LjQ2MDkiLz48dGV4dCBmaWxsPSIjMDAwMDAwIiBmb250LWZhbWlseT0ic2Fucy1zZXJpZiIgZm9udC1zaXplPSIxMiIgbGVuZ3RoQWRqdXN0PSJzcGFjaW5n
IiB0ZXh0TGVuZ3RoPSIxMDguNzU1OSIgeD0iMTc1LjAzMzIiIHk9IjI0MC41OTk2Ij5NTF9Db2xsZWN0b3JfTUMxPC90ZXh0PjxyZWN0IGZpbGw9IiNGMUYxRjEiIGhlaWdodD0iMzMuOTY4OCIgc3R5bGU9InN0cm9rZTojMTgxODE4O3N0cm9rZS13aWR0aDowLjU7IiB3aWR0aD0iMTIzLjY1ODIiIHg9IjMyMy43ODkxIiB5PSIyMTkuNDYwOSIvPjx0ZXh0IGZpbGw9IiMwMDAwMDAiIGZvbnQtZmFtaWx5PSJzYW5zLXNlcmlmIiBmb250LXNpemU9IjEyIiBsZW5ndGhBZGp1c3Q9InNwYWNpbmciIHRleHRMZW5ndGg9IjEwMy42NTgyIiB4PSIzMzMuNzg5MSIgeT0iMjQwLjU5OTYiPkNvbnRyb2xsZXJfQ1RSTDE8L3RleHQ+PD9wbGFudHVtbC1zcmMgVFAzRDNlOG00OEpsRkNNNjFvWGpPX0hXRjlXVWw4MDdDTHg0RDhxcXE4WGpLZmZtQ0JveF80WTQwRFZUUnNQc0xiVWY0WGJ5RjAwak5NTmkyc3hYeUtubURtOEdRTWZiTVM4MVY4OEhIV0pYVzZ4eVJpU0E5Uk5aMUV2cFlrMmF5U2tuX3pZRk1SaDhhWWFSTHllelNudWw2akQ0ZG5HRUkwX05leGlaZUtLY3RzckNOczZmbXV1WWljc0NfWXJNcVdSbWJxVTd1d0FHQ3JDS3lrY0N2SDVSS1N0ZzYtcmlBeEVTTklfWGNjaG9ZRGdobG0wMD8+PC9nPjwvc3ZnPg==" alt="Network Topology Diagram" style="max-width:100%;height:auto;background:#fff;padding:16px;border:1px solid #e5e7eb;border-radius:8px;" />
</div>
Step 2: Explore the dataset on the Student-PC
What we are doing: Copy the telemetry from the Data-Collector to the operator workstation and inspect it with a short Python script. This matters because you should confirm dataset size, structure, and label quality before committing to a training run.
# On Student-PC (10.10.10.2) - copy telemetry and run simple Python exploration
mkdir -p ~/nhprep_analysis
scp 10.10.10.3:/opt/nhprep/telemetry/telemetry.csv ~/nhprep_analysis/
python3 - <<'PY'
import csv
import os

rows = []
# expanduser resolves '~' to the current user's home directory
with open(os.path.expanduser('~/nhprep_analysis/telemetry.csv')) as f:
    reader = csv.DictReader(f)
    for r in reader:
        rows.append(r)
print("Samples:", len(rows))
print("Example row:", rows[0])
PY
What just happened: We copied telemetry from the Data-Collector to the operator workstation and ran a tiny Python snippet that counts samples and prints an example row. This gives visibility into dataset size and structure so you can decide normalization and handling of labels.
Real-world note: Use strong anonymization and secure transfer (SCP over SSH, TLS) for telemetry in production to protect sensitive customer / user data.
Verify:
# Expected output when running the snippet
Samples: 2
Example row: {'timestamp': '2025-03-01T12:00:00', 'rssi': ' -65', 'channel_util': ' 20', 'client_count': ' 12', 'drop': ' 0'}
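Before training, it also pays to sanity-check feature ranges and consider normalization, a common fix listed under Common Mistakes later in this lesson. The sketch below is illustrative only: it inlines the two lab samples as a DataFrame rather than reading the CSV, and shows zero-mean/unit-variance scaling with scikit-learn.

```python
# Sketch: inspect and normalize telemetry features with pandas/scikit-learn.
# Column names match the lab CSV; the inline data stands in for the real file.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame(
    {
        "rssi": [-65.0, -72.0],
        "channel_util": [20.0, 78.0],
        "client_count": [12.0, 14.0],
    }
)

print(df.describe())  # quick sanity check: ranges, means, spread

# Zero-mean / unit-variance scaling, commonly applied before training
scaler = StandardScaler()
scaled = scaler.fit_transform(df)
print(scaled)
```

Scaling matters because features like rssi (tens of dBm) and client_count (small integers) live on very different scales, which can distort distance-based and regularized models.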
Step 3: Train a model on the Model-Server
What we are doing: Write a training script that loads the telemetry, trains a logistic regression classifier, reports metrics, and saves the model to disk. This matters because a saved, versioned model artifact is what you later deploy for inference.
# On Model-Server (10.10.10.10) - create a training script and run it
mkdir -p /opt/nhprep/models
cat > /opt/nhprep/models/train.py <<'PY'
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# skipinitialspace strips the blanks after commas in the sample CSV
df = pd.read_csv('/opt/nhprep/telemetry/telemetry.csv', skipinitialspace=True)
X = df[['rssi', 'channel_util', 'client_count']].astype(float)
y = df['drop'].astype(int)

# NOTE: this toy dataset has only two rows, one per class, so we fit and
# evaluate on the full set purely for illustration. With real telemetry,
# always hold out a test set, e.g.:
#   from sklearn.model_selection import train_test_split
#   X_train, X_test, y_train, y_test = train_test_split(
#       X, y, test_size=0.2, random_state=42, stratify=y)
model = LogisticRegression()
model.fit(X, y)
print("Training complete")

pred = model.predict(X)
print(classification_report(y, pred))
joblib.dump(model, '/opt/nhprep/models/drop_model.joblib')
PY
# Ensure telemetry is available locally (copy from Data-Collector)
scp 10.10.10.3:/opt/nhprep/telemetry/telemetry.csv /opt/nhprep/telemetry/telemetry.csv
python3 /opt/nhprep/models/train.py
What just happened: The script reads telemetry, cleans up stray whitespace, separates the features from the drop label, trains a logistic regression classifier, prints performance metrics, and saves the trained model to disk as drop_model.joblib. With only two samples the metrics are meaningless except as a smoke test; real evaluation requires a held-out test set with many samples per class. Logistic regression provides interpretable weights that help you understand each feature's influence.
Real-world note: For production-scale datasets you would use more robust training frameworks, GPUs for deep networks, and a reproducible experiment tracking system (experiment IDs, seed values).
Verify:
# Confirm model file exists and basic metric output
ls -l /opt/nhprep/models/drop_model.joblib
# And show the expected console output from the training script
Expected outputs:
-rw-r--r-- 1 root root 123456 Mar 1 12:05 /opt/nhprep/models/drop_model.joblib
Training complete
precision recall f1-score support
0 1.00 1.00 1.00 1
1 1.00 1.00 1.00 1
accuracy 1.00 2
macro avg 1.00 1.00 1.00 2
weighted avg 1.00 1.00 1.00 2
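Because logistic regression exposes interpretable weights, you can inspect what the model learned. The sketch below is illustrative: it retrains the same kind of model on the two lab samples inline (on the Model-Server you could instead joblib.load the saved file) and prints one weight per feature.

```python
# Sketch: train a toy logistic regression and inspect its weights.
# In the lab you would instead load the saved artifact:
#   import joblib; model = joblib.load('/opt/nhprep/models/drop_model.joblib')
from sklearn.linear_model import LogisticRegression

# Two samples from the lab CSV: [rssi, channel_util, client_count]
X = [[-65.0, 20.0, 12.0], [-72.0, 78.0, 14.0]]
y = [0, 1]  # drop labels

model = LogisticRegression().fit(X, y)
for name, weight in zip(["rssi", "channel_util", "client_count"], model.coef_[0]):
    print(f"{name}: {weight:+.4f}")
```

A positive weight means higher values of that feature push the model toward predicting a drop; on this toy data, channel_util dominates because it varies the most between the two samples.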
Step 4: Deploy model to Inference-API (serve predictions)
What we are doing: Copy the saved model to the Inference-API and run a simple service that accepts telemetry and returns predictions. This matters because low-latency inference enables near-real-time alarms or automated corrective actions.
# On Model-Server: copy model to Inference-API
scp /opt/nhprep/models/drop_model.joblib 10.10.10.11:/opt/nhprep/models/drop_model.joblib
# On Inference-API (10.10.10.11): create a minimal prediction script and run a test
mkdir -p /opt/nhprep/models
cat > /opt/nhprep/models/run_inference.py <<'PY'
import joblib
model = joblib.load('/opt/nhprep/models/drop_model.joblib')
# Simulate incoming telemetry sample
sample = [[-70.0, 85.0, 16.0]] # rssi, channel_util, client_count
pred = model.predict(sample)
print("Prediction (0=no-drop,1=drop):", int(pred[0]))
PY
python3 /opt/nhprep/models/run_inference.py
What just happened: The trained model was transferred to the inference server and loaded. The script simulates an incoming telemetry sample and prints the predicted class (drop or no-drop). In production, this script would be wrapped by an HTTP API (Flask/Gunicorn) with secure endpoints and request throttling.
Real-world note: Serving models requires operational guardrails: health checks, model versioning, A/B testing, and rollback mechanisms to prevent bad models from impacting operations.
Verify:
# Expected output of the inference run
python3 /opt/nhprep/models/run_inference.py
Expected output:
Prediction (0=no-drop,1=drop): 1
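As noted above, production serving would wrap this script in an HTTP API. The following is a minimal sketch of that idea using only the Python standard library; the endpoint path, port, and JSON body shape are illustrative assumptions, not a standard.

```python
# Sketch: minimal HTTP wrapper for the drop-prediction model.
# Uses only the standard library; endpoint, port, and JSON shape are
# illustrative. Production serving would add TLS, auth, health checks,
# and model versioning.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(model):
    """Build a request handler bound to a loaded model object."""

    class PredictHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = json.loads(self.rfile.read(length))
            # Expected body: {"rssi": -70.0, "channel_util": 85.0, "client_count": 16}
            sample = [[body["rssi"], body["channel_util"], body["client_count"]]]
            payload = json.dumps({"drop": int(model.predict(sample)[0])}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    return PredictHandler

# On the Inference-API you would load the real model and serve it, e.g.:
#   import joblib
#   model = joblib.load("/opt/nhprep/models/drop_model.joblib")
#   HTTPServer(("10.10.10.11", 8080), make_handler(model)).serve_forever()
```

A client would then POST a JSON telemetry sample to /predict and receive {"drop": 0} or {"drop": 1}, which the NOC alerting pipeline can consume directly.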
Verification Checklist
- Check 1: Telemetry file present on Data-Collector. Verify with:
cat /opt/nhprep/telemetry/telemetry.csv
Expected: CSV header and sample rows.
- Check 2: Training completed and model saved on Model-Server. Verify with:
ls -l /opt/nhprep/models/drop_model.joblib
Expected: file exists with non-zero size.
- Check 3: Inference returns a prediction on the Inference-API. Verify with:
python3 /opt/nhprep/models/run_inference.py
Expected: "Prediction (0=no-drop,1=drop): <0 or 1>"
Common Mistakes
| Symptom | Cause | Fix |
|---|---|---|
| Model file not found on Inference-API | Forgot to scp the model from Model-Server | Recopy model: scp /opt/nhprep/models/drop_model.joblib 10.10.10.11:/opt/nhprep/models/ |
| Training metrics very poor (accuracy low) | Insufficient or noisy data, features not normalized | Collect more samples, normalize features, engineer new features (e.g., moving averages) |
| Inference script crashes when loading model | Model path incorrect or dependency mismatch (different Python/sklearn versions) | Verify Python environment versions, reinstall dependencies, confirm model path |
| Predictions inconsistent between runs | Non-deterministic training without fixed random seed | Set random seed in training (e.g., random_state=42) and document experiment |
Key Takeaways
- AI is the broad field; ML is the practical subset that learns from data, and Deep Learning uses large neural networks when data and compute afford it.
- Real network ML pipelines require careful data collection, preprocessing, model selection, validation, deployment, and operational monitoring — each stage matters and can introduce errors if skipped.
- In production networks, ML is used for anomaly detection, RRM optimization, capacity planning, and root-cause analysis; model choice depends on data volume, label availability, and latency needs.
- Always validate models with realistic test sets, secure telemetry in transit, and implement model versioning and rollback to keep the network safe during automation.
Important: Use lab.nhprep.com for naming examples and use password Lab@123 for lab accounts where you provision access — but never reuse production credentials in real deployments.
If you completed this lesson, you now understand the theoretical distinctions and an end-to-end practical workflow for applying ML to a networking problem. In Lesson 2 we will dive into feature engineering and time-series models for network telemetry.