
SEC401 - Vulnerability Management and Response

Lab 3.1 - Network Discovery

Solo, Lab

Focus: Network Security

Level: SEC401

Date: Apr 2026

Artifacts: Sanitized screenshots of Nmap scans, ndiff baseline comparison, post-discovery SSH and iptables review

TL;DR

  • Enumerated a /24 lab network with Nmap ping, port, version, and OS scans
  • Baselined results to XML and used ndiff to detect a new service between scans
  • Pivoted to a discovered host to review netstat, iptables rules, and served content

Skills demonstrated

  • Host discovery with Nmap -sn
  • Service and version enumeration (-sV)
  • OS fingerprinting (-O, --osscan-guess)
  • XML output and ndiff baseline comparison
  • Post-scan host triage (netstat, iptables, curl)
  • Recognizing non-standard service ports

Note: Course-provided PCAPs and lab instructions are not shared. Only my own captures and sanitized notes are published.

Why this matters

Network discovery is the first step of almost every attack and almost every defensive asset inventory. Knowing how to surface live hosts, services, and changes between scans is the difference between an asset inventory that reflects reality and a spreadsheet that went stale six months ago.

Context

This lab demonstrates a full network discovery workflow against a lab environment of seven Docker containers on 172.28.14.0/24, then pivots to one of the discovered hosts to validate what the scan found at the OS and firewall layer.

Tools used

Nmap, ndiff, ssh, netstat, iptables, curl

Steps taken

1. Lab environment startup

Launched the lab stack from /sec401/labs/3.1. The start script brought up seven Docker containers: webapp, docs, database, old-database, php-fpm, php-nginx, and a student container used as the scanning host.

$ cd /sec401/labs/3.1/ && ./start_3.1.sh

2. Ping sweep: discover live hosts

Ran a host-discovery-only scan against the /24. Nmap reported 7 hosts up in 1.38 seconds, each with a 172.28.14.x address, matching the containers launched by the start script.

$ nmap -sn 172.28.14.0/24
-sn: ping scan, no port scan
172.28.14.0/24: 256-address lab subnet
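When a sweep like this feeds a script rather than a human, greppable output is the easier form to consume. A minimal sketch of pulling the live IPs out of a `-oG` ping sweep; the sample lines below are shaped like real greppable output, not captured from this lab:

```shell
# Extract live IPs from a greppable (-oG) ping sweep.
# In the lab this would be fed by: nmap -sn -oG - 172.28.14.0/24
sample_sweep='# Nmap scan initiated
Host: 172.28.14.1 ()    Status: Up
Host: 172.28.14.23 ()   Status: Up
# Nmap done: 256 IP addresses (2 hosts up) scanned'
printf '%s\n' "$sample_sweep" | awk '/Status: Up/ {print $2}'
```

The same one-liner works unchanged on a live `nmap -sn -oG -` pipe, which is why `-oG` survives despite the XML format being richer.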

3. Greppable port sweeps

Demonstrated two greppable-output scans: --top-ports 100 and the -F fast scan. Neither was given a target on the CLI, so Nmap scanned zero hosts; the point was to capture the exact port list each scan covers in the greppable header comment for documentation.

$ nmap -v --top-ports 100 -oG -
$ nmap -v -F -oG -
-v: verbose
--top-ports 100: scan the 100 most common TCP ports
-F: fast scan (~top 100 from nmap-services)
-oG -: greppable output to stdout
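Once targets are supplied, the useful part of greppable output is the Ports: field. A sketch of splitting it out per host; the sample line is shaped like real -oG output, not lab data:

```shell
# Pull host and open-port list from a greppable "Ports:" line.
sample_ports='Host: 172.28.14.23 ()  Ports: 80/open/tcp//http///, 8000/open/tcp//http//WSGIServer 0.2/'
printf '%s\n' "$sample_ports" | awk -F'Ports: ' '/Ports:/ {
  split($1, h, " ")    # h[2] is the IP from the "Host:" prefix
  print h[2] " -> " $2
}'
```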

4. Service and version detection

Enumerated services on every live host: OpenSSH 8.9p1 on the docs host, MySQL 5.7.41/5.7.44 on the two database hosts, nginx 1.14.2 on php-nginx, an unknown cslistener on php-fpm:9000, and http/https on the webapp. CPE entries (cpe:/o:linux:linux_kernel) confirm a Linux target fleet.

$ nmap -sV 172.28.14.0/24
-sV: probe open ports for service/version info

5. OS detection: strict match

Ran a strict OS fingerprint. Nmap collected a full TCP/IP signature for each host but reported 'No exact OS matches' because containers don't present a clean kernel fingerprint over the network.

$ nmap -O 172.28.14.0/24
-O: OS fingerprinting based on TCP/IP stack behavior

6. OS detection: aggressive guess

Re-ran with --osscan-guess. Nmap returned probability-weighted matches: Linux 2.6.32 (96%) and Linux 3.2-4.9 (96%), with odd long-tail guesses like an AXIS 210A network camera and a Synology DiskStation. A useful reminder that OS detection degrades badly against virtualized or containerized hosts.

$ nmap -O --osscan-guess 172.28.14.0/24
--osscan-guess: print closest matches even when there is no exact match

7. Baseline scan saved to XML

Saved a second version scan to new_network.xml. XML output is the format ndiff consumes and the format most asset-management pipelines ingest.

$ nmap -sV -oX new_network.xml 172.28.14.0/24
-oX: write XML output to a file
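The XML is also directly machine-readable without ndiff. A hedged sketch of pulling open port numbers straight out of the port elements; the sample element mirrors the shape of Nmap's XML schema and is not this lab's actual output:

```shell
# Nmap XML encodes each scanned port as a <port> element with a portid attribute.
sample_xml='<port protocol="tcp" portid="8000"><state state="open" reason="syn-ack"/><service name="http" product="WSGIServer" version="0.2"/></port>'
printf '%s\n' "$sample_xml" | grep -o 'portid="[0-9]*"' | grep -o '[0-9]*'
```

For anything beyond quick greps, a real XML parser is the right tool; the point here is only that the format is stable enough to script against.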

8. ndiff: detect scan-over-scan change

Compared an older baseline (network.xml, Nov 2023) to new_network.xml (Apr 2026). ndiff surfaced a new listener on the docs host: 8000/tcp open http WSGIServer 0.2 (Python 3.10.12). This is exactly the signal an asset inventory wants: a service that wasn't there before, on a host you thought you understood.

$ ndiff network.xml new_network.xml
ndiff: Nmap-aware diff of two XML scans; lines are prefixed with + for additions and - for removals

9. SSH on a non-standard port

The docs host was advertising SSH on port 80, not 22. Connected with ssh -p 80 and authenticated into the Ubuntu 22.04 container. Non-standard service ports defeat naive scanners that only check well-known ports, which is exactly why -sV matters.

$ ssh -p 80 root@172.28.14.23
-p 80: connect to SSH running on port 80

10. Post-compromise: netstat on target

From inside the docs host, listed listening sockets. Confirmed python3 listening on 0.0.0.0:8000, sshd on :80, a loopback resolver on 127.0.0.11:39563, and an established SSH session from the scanning host (172.28.14.1).

$ netstat -anp
-a: all sockets; -n: numeric addresses; -p: show owning process/PID
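Filtering by state makes the full socket table reviewable at a glance. A sketch; the sample lines below are shaped like netstat's TCP output (proto, queues, local address, remote address, state, PID/program) rather than the lab's verbatim listing:

```shell
# Keep only TCP listeners: on netstat -anp TCP lines, column 6 is the
# connection state and column 7 is PID/program.
sample_netstat='tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 12/python3
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 9/sshd
tcp 0 0 172.28.14.23:80 172.28.14.1:51324 ESTABLISHED 9/sshd'
printf '%s\n' "$sample_netstat" | awk '$6 == "LISTEN" {print $4, $7}'
```

On this sample the filter reduces three lines to the two listeners, which is the view you actually compare against scan results.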

11iptables rules for the new service

Reviewed the INPUT chain. Port 8000 is only reachable from 172.28.14.23 (self) and loopback; every other source gets a REJECT with tcp-reset. That explains why the WSGI service only became visible once the scan originated from inside the lab subnet: it was firewalled from external sources.

$ iptables -n -L
-nnumeric output (no DNS/port name lookup)
-Llist rules
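Rules along these lines would produce the behavior observed; this is an illustrative reconstruction, not the lab's actual ruleset:

```shell
# Accept 8000/tcp from loopback and from the host itself; reset everything else.
iptables -A INPUT -p tcp --dport 8000 -s 127.0.0.0/8  -j ACCEPT
iptables -A INPUT -p tcp --dport 8000 -s 172.28.14.23 -j ACCEPT
iptables -A INPUT -p tcp --dport 8000 -j REJECT --reject-with tcp-reset
```

The tcp-reset reject is what a scanner sees as "closed" rather than "filtered", which is why the port looked absent from outside.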

12. Retrieve the served page

Curled localhost:8000 from the docs host. The response was the 'Alpha Developers' internal documentation portal, built with mkdocs-material 9.4.14. It confirms what the port, the process, and the ndiff finding all hinted at: an internal docs site that should never have been exposed beyond loopback.

$ curl localhost:8000

Key findings

  • 7 live hosts on 172.28.14.0/24 mapped to the lab's container stack
  • SSH running on TCP/80 on the docs host, a classic port-obfuscation pattern
  • php-fpm:9000 exposed as cslistener, a service usually kept internal
  • ndiff surfaced a new WSGIServer on docs:8000 between scans
  • iptables restricted 8000/tcp to the host itself, firewall-layer control working as intended

Outcome / Lessons learned

Produced a reliable asset inventory of the /24, including services, versions, likely OS, and a meaningful diff against an older baseline. The ndiff workflow caught exactly the kind of drift that tends to go unnoticed in real environments: a new internal service quietly appearing on an existing host.

In production I'd schedule the -sV XML scan on a cadence (weekly at minimum), store baselines in version control or an asset-management DB, and pipe ndiff output into a ticketing workflow so every new port/service change produces an owner-assigned ticket. I'd also run authenticated scans where possible, since unauthenticated Nmap misses a lot.
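That cadence can be sketched as a small wrapper; the function name and paths here are hypothetical, and the script assumes nmap and ndiff are on PATH:

```shell
# Rotate the previous scan into the baseline slot, take a fresh -sV scan,
# then diff the two. ndiff's output can be piped into whatever
# alerting or ticketing hook you use.
rotate_and_scan() {
  base="$1"; target="$2"
  mkdir -p "$base"
  # Promote last run's scan to the baseline slot, if one exists.
  if [ -f "$base/new_network.xml" ]; then
    mv "$base/new_network.xml" "$base/network.xml"
  fi
  nmap -sV -oX "$base/new_network.xml" "$target"
  # Only diff once two scans exist.
  if [ -f "$base/network.xml" ]; then
    ndiff "$base/network.xml" "$base/new_network.xml"
  fi
}
```

Wrapped in a script and run weekly from cron, with the baseline directory kept in version control, every diff becomes reviewable change history rather than a one-off terminal scroll.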

Security controls relevant

  • Asset inventory and CMDB accuracy
  • Host-based firewalls (iptables / nftables) restricting service exposure
  • Change-detection pipelines (ndiff, diff-based alerting)
  • Service hardening: don't expose dev/docs sites on 0.0.0.0
  • Non-standard port hygiene: document port obscurity rather than relying on it

What I took away from this

The part most people skip is the baseline. Running Nmap once gets you an asset list; running it twice and diffing with ndiff gets you a detection. That's the whole game in asset management: the first scan is inventory, the second scan is a signal. Most orgs never do the second scan.

OS detection against containers is nearly useless, and that's worth internalizing. The lab returned 'no exact matches' on every host, then 96% confidence guesses across four different kernel families once --osscan-guess was enabled. In a real environment full of containers and cloud VMs, fingerprinting-based inventory is a weak signal. Better to enumerate via CPE from -sV and cross-reference with agent data.
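Pulling those CPEs out of saved -sV XML is a one-liner; the sample element below mirrors the shape of Nmap's service elements and is illustrative, not lab output:

```shell
# CPE identifiers ride along in <cpe> elements inside each <service>.
sample_svc='<service name="ssh" product="OpenSSH" version="8.9p1"><cpe>cpe:/a:openbsd:openssh:8.9p1</cpe></service>'
printf '%s\n' "$sample_svc" | grep -o 'cpe:/[^<]*'
```

Those strings join cleanly against vulnerability feeds, which is what makes -sV-derived inventory a stronger signal than stack fingerprinting.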

The SSH-on-80 finding is the kind of thing that trips up junior analysts. A top-1000 scan without -sV would have reported 80/tcp open http and moved on. -sV is what separates 'there's a web server here' from 'there's an SSH daemon pretending to be a web server.' On a real red team, mis-labeled ports are where defenders lose visibility.

Evidence gallery