Writing security assessment reports: Telling people their systems are vulnerable without getting fired¶
Or: How Ponder Learnt To Translate Technical Findings Into Executive Action
From simulator to report¶
The UU P&L simulator teaches vulnerability discovery, exploitation techniques, and attack scenarios. Running the reconnaissance and vulnerability assessment scripts produces concrete findings: unauthenticated Modbus access, anonymous OPC UA browsing, S7 memory reading without authentication, complete lack of protocol security.
What the simulator doesn’t teach is how to communicate these findings to stakeholders who:
Don’t understand industrial protocols
Have limited budgets for security improvements
Need to balance security with operational requirements
Want to know “should I be worried?” not “what is CVE-2023-12345?”
This document explains how to transform technical simulator findings into effective security reports that prompt action rather than defensiveness.
The challenge of report writing¶
The pentest report is where careful work either transforms into genuine security improvements or vanishes into the same filing cabinet as last year’s fire safety audit that nobody read. The challenge is not in finding vulnerabilities (the simulator makes that straightforward). The challenge is explaining them in a way that prompts action rather than arguments.
If you’re expecting busy executives to read your entire technical masterpiece, you’re setting yourself up for disappointment roughly equivalent to expecting the Watch to prevent all crime in Ankh-Morpork simply by having a very comprehensive list of laws.
The solution is not to write less. It’s to write in layers, each one crafted for a different audience with different concerns and attention spans.
What the simulator produces¶
Testing with the simulator generates technical findings:
From reconnaissance scripts: exposed industrial services (Modbus on ports 10502-10504, S7 on port 102, OPC UA on port 4840)
From vulnerability scripts: unauthenticated Modbus register access, anonymous OPC UA browsing, S7 memory reads without credentials
From exploitation scripts: proof-of-concept attacks such as modifying turbine speed setpoints (turbine_overspeed_attack.py)
These technical findings need translation into business language for effective reporting.
The executive summary¶
The executive summary is written for people who have seventeen other meetings today, three budget crises, and a firm belief that computers are fundamentally mysterious and probably best left to the IT department. They don’t want to know about CVE numbers or buffer overflows. They want to know: “Should I be worried?” and “How much will this cost?”
Structure for simulator-based findings¶
Scope and methodology: “We assessed the UU P&L simulator environment, which replicates typical industrial control system architecture including turbine control PLCs, reactor safety systems, and SCADA supervisory control. Testing used industry-standard protocols (Modbus, S7, OPC UA, EtherNet/IP) and documented reconnaissance, vulnerability assessment, and proof of concept exploitation techniques.”
Key findings at business level: Not “unauthenticated Modbus access to port 10502” but “unauthorised personnel could remotely modify turbine speed setpoints, potentially causing equipment damage or safety incidents.”
Translate simulator findings into business impact:
Modbus write access → potential for operational disruption
S7 memory reading → intellectual property theft (control algorithms)
Anonymous OPC UA → complete visibility into operations
Protocol diversity → multiple attack paths
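One lightweight way to keep these technical-to-business translations consistent across a report is a lookup table. This sketch is a hypothetical helper (not part of the simulator tooling); the wording mirrors the translations listed above.

```python
# Hypothetical helper: map raw technical finding IDs to the
# business-impact phrasing used in the executive summary.
# The translations mirror the bullet list above; extend as needed.
IMPACT_TRANSLATIONS = {
    "modbus_write_access": "Unauthorised personnel could remotely modify "
                           "turbine speed setpoints, risking operational "
                           "disruption.",
    "s7_memory_read": "Control algorithms could be extracted, exposing "
                      "intellectual property.",
    "opcua_anonymous": "Attackers gain complete visibility into operations "
                       "via anonymous tag browsing.",
    "protocol_diversity": "Multiple industrial protocols mean multiple "
                          "attack paths.",
}

def to_business_impact(finding_id: str) -> str:
    """Return executive-level phrasing for a technical finding ID."""
    return IMPACT_TRANSLATIONS.get(
        finding_id, "Impact not yet translated - flag for manual review."
    )
```

Keeping the table in one place also means a reviewer can check every translation at once, rather than hunting through individual findings.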
Risk summary with context: “The assessment identified 23 high-risk findings, 47 medium-risk findings, and 89 low-risk findings. This is typical for industrial facilities that prioritise operational reliability over security controls. The findings demonstrate vulnerabilities common across the industry, not unique failures.”
High-level recommendations: “Implement network segmentation between IT and OT networks (€150,000, 6 months)” not “configure VLANs with 802.1Q tagging on all switches.” Save the technical details for the technical section.
Clear path forward: “Address critical findings within 30 days, high-risk findings within 90 days, develop longer-term roadmap for systemic improvements.”
Example executive summary¶
Based on simulator testing, an executive summary might read:
“Security assessment of UU P&L’s simulated industrial control systems reveals significant vulnerabilities that, if present in production, would allow moderately skilled attackers with network access to manipulate turbine operations, disrupt power generation, or cause equipment damage.
“The assessment identified 159 findings across the simulated environment, including 23 critical or high-risk vulnerabilities. These findings are representative of typical industrial facility security posture and demonstrate common protocol-level vulnerabilities in OT environments.
“Key findings include unauthenticated access to turbine control systems, lack of network segmentation between operational and corporate networks, and insufficient monitoring to detect malicious activity. These vulnerabilities, if exploited in production, could result in operational disruption, safety incidents, or intellectual property theft.
“Remediation recommendations prioritise network segmentation (€150,000, 6-month timeline), deployment of protocol-aware monitoring systems (€50,000, 3-month timeline), and implementation of secure remote access controls (€25,000, 2-month timeline). Quick wins including password changes and service hardening can be implemented immediately at minimal cost.
“The assessment demonstrates the value of security testing in simulated environments before exposing actual production systems to testing risks. Findings provide a roadmap for improving security posture whilst maintaining operational reliability.”
Technical findings from simulator testing¶
The technical findings section documents what the simulator scripts discovered. Each finding should follow a consistent structure:
Finding template¶
Finding title: Clear and specific
“Unauthenticated access to turbine PLCs via Modbus TCP”
Risk rating: Critical, high, medium, or low
High (could cause operational disruption or equipment damage)
Description: What was found
“Turbine control PLCs on ports 10502-10504 accept Modbus TCP connections without authentication. Any system capable of reaching these ports can read sensor values and write control parameters.”
Technical detail: How it was found
“Using the modbus_coil_register_snapshot.py script, all 10000 holding registers were successfully read and test values written to register 0 (the speed setpoint). The connection was accepted without credentials, logging, or access control.”
Simulator command:
python scripts/vulns/modbus_coil_register_snapshot.py --host 127.0.0.1 --port 10502
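The point of the finding is visible in the protocol itself: a Modbus TCP request frame has no credential field at all. As a sketch (not the simulator script's implementation), this builds a “read holding registers” request by hand from the standard MBAP header plus PDU; there is simply nowhere for authentication to go, and sending these bytes over a plain socket to the simulator port is accepted as-is.

```python
import struct

def read_holding_registers_request(transaction_id=1, unit=1,
                                   start=0, count=10):
    """Build a raw Modbus TCP 'read holding registers' (0x03) request.

    MBAP header: transaction id, protocol id (always 0), length, unit id.
    PDU: function code, start address, register count.
    Note: no field anywhere in the frame carries credentials -
    authentication does not exist in the protocol.
    """
    pdu = struct.pack(">BHH", 0x03, start, count)   # function, addr, count
    mbap = struct.pack(">HHHB", transaction_id, 0x0000,
                       len(pdu) + 1, unit)          # length covers unit + PDU
    return mbap + pdu

frame = read_holding_registers_request()
# 7-byte MBAP header + 5-byte PDU = 12 bytes total
```

Including a frame-level illustration like this in the technical section helps reviewers see that the weakness is inherent to the protocol, not a misconfiguration.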
Impact: What an attacker could do
“An attacker with network access could modify turbine speed setpoints, trip safety systems, or shut down power generation. In a production environment, this could result in cascading failures across the distribution network.”
Affected systems: Be specific
“Turbine PLCs T001, T002, T003 (simulator ports 10502-10504)”
Evidence: Screenshots and output
Script output showing successful register reads and writes
Captured Modbus traffic showing lack of authentication
One clear example (not fifty screenshots of the same thing)
Remediation: How to fix
Implement network segmentation restricting access to PLC network
Deploy protocol gateway enforcing authentication for Modbus traffic
Configure monitoring to detect unauthorised PLC access attempts
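The template above can also be captured in code so every finding in the report renders identically. This is a hypothetical structure (not part of the simulator tooling), sketched with Python dataclasses:

```python
from dataclasses import dataclass, field

# Hypothetical structure mirroring the finding template above, so every
# finding renders with the same sections in the same order.
@dataclass
class Finding:
    title: str
    risk: str                 # Critical, High, Medium, or Low
    description: str
    technical_detail: str
    impact: str
    affected_systems: list = field(default_factory=list)
    remediation: list = field(default_factory=list)

    def to_markdown(self) -> str:
        lines = [
            f"### {self.title}",
            f"**Risk rating:** {self.risk}",
            f"**Description:** {self.description}",
            f"**Technical detail:** {self.technical_detail}",
            f"**Impact:** {self.impact}",
            f"**Affected systems:** {', '.join(self.affected_systems)}",
            "**Remediation:**",
        ]
        lines += [f"- {step}" for step in self.remediation]
        return "\n".join(lines)
```

Rendering the Modbus finding above through to_markdown() keeps its sections in template order, which makes a report with dozens of findings far easier to review.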
Translating simulator findings to production risk¶
Simulator findings demonstrate vulnerabilities that would exist in real facilities with similar architecture:
Simulator finding: “Port 4840 allows anonymous OPC UA browsing”
Production risk: “If production SCADA servers similarly allow anonymous access, attackers could enumerate complete facility tag database, revealing operational parameters, control logic, and system architecture.”
Simulator finding: “S7 protocol on port 102 allows unauthenticated memory reads”
Production risk: “If production reactor PLCs lack authentication, attackers could extract complete control programmes including safety logic, intellectual property, and system documentation.”
Simulator finding: “Detection testing shows slow reconnaissance goes unnoticed”
Production risk: “Current monitoring capabilities may not detect patient, methodical reconnaissance. Sophisticated attackers could map entire facility over days or weeks without triggering alerts.”
Risk ratings for OT environments¶
Risk rating in OT extends beyond confidentiality, integrity, and availability of data. Physical consequences, safety system reliability, and operational continuity matter more.
Simulator testing helps calibrate risk ratings:
Critical: Could directly cause injury, death, significant environmental damage, or catastrophic equipment failure
Example: Unauthenticated write access to safety PLC controlling emergency shutdown systems
High: Could cause major operational disruption, significant equipment damage, or enable attacks leading to safety impacts
Example: Unauthenticated write access to turbine control PLCs (demonstrated with turbine_overspeed_attack.py)
Medium: Could cause operational disruption, data exfiltration of operationally sensitive information, or enable reconnaissance for more serious attacks
Example: Anonymous OPC UA access allowing complete tag enumeration (demonstrated with opcua_readonly_probe.py)
Low: Security weaknesses that don’t directly enable attacks but violate best practices or could be chained with other vulnerabilities
Example: Predictable register addressing in Modbus implementations
The key difference from IT risk ratings is the emphasis on physical consequences. A vulnerability that allows reading data might be high severity in IT (data exfiltration is bad). The same vulnerability in OT might be medium or low if the data doesn’t help an attacker cause physical harm or operational disruption.
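That emphasis on physical consequence can be encoded as a first-pass triage rule. The function below is a deliberately coarse heuristic following the four definitions above (an illustration for this document, not an IEC 62443 method); a real rating still needs analyst judgement and facility context.

```python
# Simplified first-pass OT risk triage, following the definitions above.
# Deliberately coarse: all parameter names are illustrative.
def rate_ot_finding(can_write: bool, touches_safety_system: bool,
                    enables_recon_or_exfil: bool) -> str:
    if can_write and touches_safety_system:
        return "Critical"   # e.g. unauthenticated writes to a safety PLC
    if can_write:
        return "High"       # e.g. writes to turbine control PLCs
    if enables_recon_or_exfil:
        return "Medium"     # e.g. anonymous OPC UA tag enumeration
    return "Low"            # best-practice gaps, chaining material
```

Even a crude rule like this is useful in a report appendix: it makes the rating logic auditable, so stakeholders can argue with the criteria rather than with individual ratings.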
What not to say¶
Balance between being clear about security problems and avoiding language that alienates the people who need to fix them.
Phrases to avoid:
“Catastrophically insecure” unless things are actually on fire. OT systems are often a patchwork of decades-old equipment, vendor lock-in, and compromises. Use proportionate language.
“Trivially exploitable” is both insulting and often inaccurate. Yes, the simulator scripts exploit it easily, but you’re using modern tools and techniques. Context matters.
“Industry standard” when recommending solutions. Be specific about what you’re recommending and why, not just that it’s “standard.”
“Nation-state level threat” unless you have specific threat intelligence. Not everything needs to be Stuxnet. Most OT security should focus on opportunistic attackers, disgruntled insiders, and ransomware.
What to write instead:
“The current OT security architecture reflects typical challenges in industrial environments: legacy systems designed before modern security threats existed, vendor limitations on security controls, and operational requirements that sometimes conflict with security best practices.”
Remediation recommendations¶
Transform criticism into action. Each finding needs corresponding remediation guidance that is specific, practical, and acknowledges OT constraints.
Bad recommendation: “Implement proper security controls on all OT systems.”
This tells them nothing. What controls? How? When?
Good recommendation: “Implement network segmentation between IT and OT networks using firewall with deep packet inspection for industrial protocols. This should include: (1) Physical separation using dedicated network infrastructure, (2) Firewall rules permitting only necessary traffic, (3) Industrial protocol filtering to prevent unauthorised Modbus, EtherNet/IP, and OPC UA traffic. Estimated cost: €150,000 capital expenditure. Implementation time: 4-6 months. Downtime required: 8-hour maintenance window for final cutover.”
A good recommendation tells them what to do, how to do it, what it costs, how long it takes, and the operational impact.
Recommendations based on simulator findings¶
Quick wins (0-30 days, low cost):
Change default passwords on all accessible systems
Disable unnecessary services on HMI workstations
Implement basic Windows hardening on engineering workstations
Medium-term improvements (30-90 days, moderate cost):
Deploy passive network monitoring on OT networks
Implement application whitelisting on HMI systems
Establish secure remote access procedures
Strategic initiatives (6-12 months, significant investment):
Complete network segmentation project
Implement industrial protocol firewall
Deploy SIEM with OT-specific correlation rules
Establish asset inventory and configuration management
Each recommendation includes priority rating, estimated cost range, expected implementation time, and dependencies.
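Tracking those attributes in a structured form makes the phased roadmap easy to emit consistently. A sketch (hypothetical structure; the cost and timeline figures are the illustrative ones used earlier in this document):

```python
from dataclasses import dataclass

# Hypothetical roadmap entry; figures are the illustrative ones
# from the recommendations above.
@dataclass
class Recommendation:
    action: str
    priority: int        # 1 = highest
    cost_eur: int
    months: int

roadmap = [
    Recommendation("Network segmentation between IT and OT", 1, 150_000, 6),
    Recommendation("Protocol-aware monitoring", 2, 50_000, 3),
    Recommendation("Secure remote access controls", 3, 25_000, 2),
]

# Phased view: highest priority first.
for rec in sorted(roadmap, key=lambda r: r.priority):
    print(f"P{rec.priority}: {rec.action} "
          f"(~EUR {rec.cost_eur:,}, {rec.months} months)")
```

Keeping the roadmap as data rather than prose also lets you total the budget ask in one line, which is usually the first question the executive summary provokes.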
From simulator to production report¶
The simulator provides a safe environment to:
Discover vulnerabilities without risking production
Develop exploitation proofs of concept
Test detection capabilities
Practise reporting skills
Production reports add:
Actual facility context
Specific system details
Real stakeholder concerns
Budget constraints
Operational requirements
Simulator experience provides technical foundation. Production reporting requires understanding organisational dynamics, budget realities, and operational constraints.
Ponder’s perspective¶
Ponder’s testing journal included notes about reporting:
“The simulator teaches me what vulnerabilities look like and how to exploit them. The challenge isn’t technical discovery, it’s communication.
“Technical findings are straightforward: port 10502 accepts unauthenticated Modbus commands. The hard part is explaining why this matters to the Archchancellor who cares about budgets and the city’s lights staying on, not Modbus function codes.
“The simulator provides evidence: turbine_overspeed_attack.py demonstrates that speed setpoints can be modified. The report must translate this into ‘unauthorised access could cause equipment damage requiring weeks of repairs and hundreds of thousands in costs.’
“Technical accuracy matters. Business impact matters more. The goal isn’t impressive technical documentation. The goal is actual security improvements, which requires both accurate findings and effective communication.”
Resources for effective reporting¶
The simulator teaches vulnerability discovery. These resources teach effective communication:
Report templates:
Executive summary structures
Technical finding templates
Remediation roadmap formats
Risk rating frameworks:
IEC 62443 security levels
OT-specific risk matrices
Business impact assessment methods
Communication guidance:
Technical to business translation
Stakeholder engagement approaches
Budget justification templates
Use the simulator to discover vulnerabilities and practise exploitation. Use production experience to learn effective reporting, stakeholder communication, and organisational navigation.
Further reading:
Implementing Fixes - Turning recommendations into reality
Prioritising Remediation - Deciding what to fix first
Attack Walkthroughs - Complete attack scenarios
The simulator teaches what to report. Real facilities teach how to report it effectively. Both are essential for successful OT security assessment.