Writing the pentest report

Telling people their baby is ugly without getting fired.

The pentest report is where all your careful work either transforms into genuine security improvements or vanishes into the same filing cabinet as last year’s fire safety audit that nobody read. The challenge is not finding vulnerabilities; that’s the easy part. The challenge is explaining them in a way that prompts action rather than defensiveness, budget discussions rather than arguments about terminology, and actual fixes rather than another round of “we’ll look into that.”

At UU P&L, our final report ran to 147 pages. The Archchancellor read page one, skimmed page two, and asked his secretary to “summarise the important bits.” This is not unusual. In fact, if you’re expecting busy executives to read your entire technical masterpiece, you’re setting yourself up for disappointment roughly equivalent to expecting the Watch to prevent all crime in Ankh-Morpork simply by having a very comprehensive list of laws.

The solution is not to write less; it’s to write in layers, each one crafted for a different audience with different concerns and attention spans.

The executive summary

The executive summary is written for people who have seventeen other meetings today, three budget crises, and a firm belief that computers are fundamentally mysterious and probably best left to the IT department. They don’t want to know about CVE numbers or buffer overflows. They want to know: “Should I be worried?” and “How much will this cost?”

At UU P&L, the executive summary opened with:

“The current security posture of UU P&L’s industrial control systems presents significant risk to operational continuity, regulatory compliance, and personnel safety. While no active exploitation was observed during testing, the identified vulnerabilities would allow a moderately skilled attacker with network access to manipulate turbine operations, disrupt power generation, or cause equipment damage.”

Note what this achieves. It’s serious without being alarmist. It mentions real consequences (equipment damage, disruption) without resorting to Hollywood scenarios. It acknowledges the current state (no active attacks) while explaining the risk. And it’s written in language that a university bursar or company CFO can understand without a CompSci degree.

The executive summary should include:

  • A clear statement of scope and methodology. “We tested X systems over Y days using Z approach.” This establishes what you did and didn’t test, which prevents the inevitable “but you didn’t look at our cloud systems” response when they’re not in scope.

  • Key findings at business level. Not “default Modbus passwords” but “unauthorised personnel could remotely shut down power generation.” Translate technical findings into business impact. The Archchancellor doesn’t care about CVE-2023-12345, but he does care about “could cause a repeat of the Great Brownout of ’23 that left the Library in darkness for six hours.”

  • Risk summary with context. “We identified 23 high-risk findings, 47 medium-risk findings, and 89 low-risk findings.” But also: “This is typical for industrial facilities of this age and complexity.” Giving context prevents panic and helps them understand they’re not uniquely terrible, just typically terrible.

  • High-level recommendations. “Implement network segmentation between IT and OT networks (€150,000, 6 months)” not “configure VLANs with 802.1Q tagging on all switches.” Save the technical details for the technical section.

  • A clear path forward. “We recommend addressing critical findings within 30 days, high-risk findings within 90 days, and developing a longer-term roadmap for systemic improvements.” This gives them something actionable rather than just a list of problems.

At UU P&L, we also included a one-page “dashboard” with colour-coded risk levels, a simple bar chart of findings by severity, and three bullet points of “quick wins” that could be implemented immediately. The Archchancellor’s secretary photocopied this page for every senior manager. The other 146 pages went to engineering. Everyone was happy, or at least no more unhappy than usual.

Technical findings

The technical findings section is for the people who will actually fix things: the OT engineers, the control systems specialists, and the network administrators who understand what VLANs are and why they matter. This is where you can include all the technical detail you carefully documented.

Each finding should follow a consistent structure:

  • Finding title. Clear and specific. “Unauthenticated access to turbine PLCs” not “Security issue in industrial controllers.”

  • Risk rating. Critical, high, medium, or low. More on this in a moment, but be consistent and justify your ratings.

  • Description. What did you find? “The turbine control PLCs (192.168.10.10-12) are accessible via Modbus TCP without authentication. Any system on the OT network can read sensor values and write control parameters.”

  • Technical detail. How did you find it? “We connected to each PLC using a standard Modbus client and successfully read holding registers 1000-1050 containing operational setpoints. We then wrote a test value to register 1000 (speed setpoint) which was accepted without authentication or authorisation.” A minimal sketch of this kind of check appears after this list.

  • Impact. What could an attacker do with this? “An attacker with network access could modify turbine speed setpoints, trip safety systems, or shut down power generation. If exploited during peak demand, this could result in cascading failures across the distribution network.”

  • Affected systems. Be specific. “Turbine PLCs T001, T002, T003 (192.168.10.10-12), running firmware version 3.2.1.”

  • Evidence. Screenshots, packet captures, command output. But keep it relevant. Nobody needs fifty screenshots of the same vulnerability on different systems. One clear example and a list of affected systems is sufficient.

  • Remediation. How do they fix it? “1. Implement network segmentation to restrict access to PLC network (see recommendation SEC-001). 2. Deploy a protocol gateway that enforces authentication for Modbus traffic (see recommendation SEC-003). 3. Configure PLC firewalls to allow connections only from authorised HMI systems.”
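
As an illustration of how little is involved, here is a minimal sketch of the kind of unauthenticated Modbus check described above. The address, registers, and unit ID are the hypothetical values from the example finding, and the frame handling is deliberately bare-bones; a real engagement would use an established client library such as pymodbus, and written authorisation before touching a live controller.

```python
import socket
import struct

PLC_HOST = "192.168.10.10"  # hypothetical turbine PLC from the example finding
PLC_PORT = 502              # standard Modbus TCP port
UNIT_ID = 1                 # assumed unit identifier

def mbap(transaction_id: int, pdu: bytes) -> bytes:
    # MBAP header: transaction ID, protocol ID (always 0), remaining length, unit ID.
    return struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, UNIT_ID) + pdu

def read_holding_registers(sock: socket.socket, start: int, count: int) -> list[int]:
    # Function code 0x03: read `count` holding registers starting at `start`.
    sock.sendall(mbap(1, struct.pack(">BHH", 0x03, start, count)))
    resp = sock.recv(1024)  # sketch assumes a happy-path response, not exception 0x83
    byte_count = resp[8]    # response: 7-byte MBAP, function code, byte count, data
    return list(struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count]))

def write_single_register(sock: socket.socket, address: int, value: int) -> bool:
    # Function code 0x06: write one register. The PLC echoes the request on success,
    # which is the whole point of the finding: no authentication, no authorisation.
    pdu = struct.pack(">BHH", 0x06, address, value)
    sock.sendall(mbap(2, pdu))
    return sock.recv(1024)[7:] == pdu

with socket.create_connection((PLC_HOST, PLC_PORT), timeout=5) as s:
    setpoints = read_holding_registers(s, 1000, 51)  # registers 1000-1050
    print(f"Read {len(setpoints)} operational setpoints without authenticating")
    # write_single_register(s, 1000, test_value)  # only with explicit authorisation
```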

At UU P&L, we discovered that the turbine PLCs could be accessed from the university’s general IT network. This meant that anyone with a laptop connected to the staff WiFi, including visiting academics, students using the library computers, and that postgraduate student who definitely seemed to be running a bitcoin mining operation from his accommodation, could theoretically modify turbine operations.

The finding included packet captures showing successful Modbus connections, screenshots of register reads and writes, and a very carefully worded impact statement that mentioned “potential for operational disruption” rather than “potential for exploding turbines” because while both were technically possible, one would result in productive discussions and the other would result in panic.

Risk ratings for OT environments

Risk rating in OT is more complex than in IT because the consequences extend beyond data confidentiality and system availability. A high-severity vulnerability in a corporate email server might result in data exfiltration. A high-severity vulnerability in a turbine control system might result in someone getting hurt.

Traditional CVSS scoring, beloved of IT security teams everywhere, is not particularly well suited to OT environments. CVSS focuses heavily on confidentiality, integrity, and availability of data. But in OT, we care more about physical consequences, safety system reliability, and operational continuity.

At UU P&L, we adapted the risk ratings to reflect OT concerns:

  • Critical. Could directly cause injury, death, significant environmental damage, or catastrophic equipment failure. “Unauthenticated write access to safety PLC that controls emergency shutdown systems” is critical because bypassing safety systems is how people get hurt.

  • High. Could cause major operational disruption, significant equipment damage, or enable attacks that might lead to safety impacts. “Unauthenticated write access to turbine control PLCs” is high because while it doesn’t directly bypass safety systems, an attacker could create conditions that trigger them (or worse, conditions that should trigger them but don’t because they’ve also disabled the safety monitoring).

  • Medium. Could cause operational disruption, data exfiltration of operationally sensitive information, or enable reconnaissance for more serious attacks. “No authentication on historian database” is medium because while an attacker can’t directly control anything, they can learn exactly how the systems work, what the normal parameters are, and plan more sophisticated attacks.

  • Low. Represents security weaknesses that don’t directly enable attacks but violate security best practices or could be chained with other vulnerabilities. “Default SNMP community strings on network switches” is low because alone it just exposes network topology, but combined with other vulnerabilities it might enable more sophisticated attacks.

The key difference from IT risk ratings is the emphasis on physical consequences. A vulnerability that allows reading data might be high severity in IT (data exfiltration is bad). The same vulnerability in OT might be medium or even low if the data doesn’t help an attacker cause physical harm or operational disruption.
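
To make the contrast concrete, here is a deliberately simplified sketch of an OT-oriented rating function. The criteria are boiled down to three yes/no questions, which is far cruder than any real methodology, but it captures the ordering: physical consequences dominate, and data exposure alone rates lower than it would in IT.

```python
from dataclasses import dataclass
from enum import Enum

class Rating(Enum):
    CRITICAL = 4
    HIGH = 3
    MEDIUM = 2
    LOW = 1

@dataclass
class Finding:
    title: str
    safety_impact: bool        # could directly cause injury or catastrophic failure
    control_impact: bool       # attacker can write to control systems
    recon_or_disruption: bool  # operational disruption or intelligence for later attacks

def ot_rating(f: Finding) -> Rating:
    # Physical consequences dominate; data exposure alone sits at the bottom.
    if f.safety_impact:
        return Rating.CRITICAL
    if f.control_impact:
        return Rating.HIGH
    if f.recon_or_disruption:
        return Rating.MEDIUM
    return Rating.LOW

turbine = Finding("Unauthenticated write access to turbine PLCs", False, True, True)
print(ot_rating(turbine))  # Rating.HIGH -- controls, but not safety systems directly
```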

We also included a “compensating controls” section for each finding. This is particularly important in OT where immediate patching often isn’t possible. If you can’t patch the vulnerable PLC firmware (because downtime isn’t acceptable and there’s no test environment), what can you do? Network segmentation, protocol firewalls, and enhanced monitoring might not eliminate the vulnerability but they significantly reduce the risk of exploitation.

At UU P&L, several critical findings had existing compensating controls we hadn’t initially recognised. The turbine PLCs might not require authentication, but they were supposed to be on an isolated network segment. The fact that this segmentation had been bypassed by connecting a temporary laptop directly to the OT network (three years ago, now permanent) didn’t make the vulnerability less severe, but it did change the remediation from “redesign the entire PLC security model” to “fix the network segmentation that should already exist.”

What not to say

There’s a delicate balance in pentest reports between being clear about security problems and being so blunt that you alienate the people who need to fix them. The systems you’re testing were usually designed by people who are still working there, implemented by engineers who did their best with limited budgets and competing priorities, and approved by managers who are reading your report.

Phrases to avoid:

  • “Built by idiots” or any variation thereof. Even if true, this doesn’t help. The systems were built by people working with the knowledge, tools, and budgets available at the time. Being dismissive of their work doesn’t encourage them to trust your recommendations.

  • “Catastrophically insecure” unless things are actually on fire. OT systems are often a patchwork of decades-old equipment, vendor lock-in, and compromises between security and operational requirements. Use proportionate language.

  • “Trivially exploitable” is both insulting (implying they should have found it themselves) and often inaccurate. Yes, you exploited it easily, but you’re a penetration tester with modern tools and techniques. Context matters.

  • “Industry standard” when recommending solutions. There are very few actual industry standards in OT security, and the ones that exist are often ignored because they’re expensive or impractical. Be specific about what you’re recommending and why, not just that it’s “standard.”

  • “Nation-state level threat” unless you have specific threat intelligence suggesting nation-state interest. Not everything needs to be Stuxnet. Most OT security should focus on opportunistic attackers, disgruntled insiders, and ransomware rather than assuming APT groups are targeting your client’s widget factory.

What we wrote for UU P&L:

“The current OT security architecture reflects typical challenges in industrial environments: legacy systems designed before modern security threats existed, vendor limitations on security controls, and operational requirements that sometimes conflict with security best practices.”

Remediation recommendations

The recommendations section is where you transform criticism into action. Each finding needs corresponding remediation guidance that is specific, practical, and acknowledges the constraints of OT environments.

Bad recommendation: “Implement proper security controls on all OT systems.”

This tells them nothing. What controls? How? When?

Good recommendation: “Implement network segmentation between IT and OT networks using a firewall with deep packet inspection capabilities for industrial protocols. This should include: (1) Physical separation using dedicated network infrastructure, (2) Firewall rules permitting only necessary traffic (see Appendix C for rule templates), (3) Industrial protocol filtering to prevent unauthorised Modbus, EtherNet/IP, and OPC-UA traffic. Estimated cost: €150,000 capital expenditure. Implementation time: 4-6 months. Downtime required: 8-hour maintenance window for final cutover.”

The good recommendation tells them what to do, how to do it, what it will cost, how long it will take, and what impact it will have on operations. It’s specific enough to be actionable but not so prescriptive that it locks them into a particular vendor or solution.
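
The industrial protocol filtering in point (3) is worth a sketch of its own, because “deep packet inspection for Modbus” sounds more exotic than it is. The logic below is a toy illustration, not Appendix C: the HMI addresses are hypothetical, and a production gateway would track sessions and handle fragmentation, but the core decision fits in a dozen lines.

```python
import struct

AUTHORISED_HMIS = {"192.168.20.5", "192.168.20.6"}  # hypothetical HMI addresses
WRITE_FUNCTIONS = {0x05, 0x06, 0x0F, 0x10}          # write coil(s) and register(s)

def allow_modbus_frame(src_ip: str, frame: bytes) -> bool:
    """Decide whether to forward a Modbus TCP frame to the PLC segment."""
    if len(frame) < 8:
        return False  # too short for an MBAP header plus function code
    _tid, protocol, length, _unit = struct.unpack(">HHHB", frame[:7])
    if protocol != 0 or length != len(frame) - 6:
        return False  # malformed, or not Modbus at all
    if frame[7] in WRITE_FUNCTIONS:
        return src_ip in AUTHORISED_HMIS  # writes only from authorised HMIs
    return True  # reads permitted from the OT segment
```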

At UU P&L, we provided three tiers of recommendations:

  • Quick wins (0-30 days, low cost). “Change default passwords on all accessible systems, disable unnecessary services on HMI workstations, implement basic Windows hardening on engineering workstations.” These build confidence that security improvements don’t always mean massive projects and downtime.

  • Medium-term improvements (30-90 days, moderate cost). “Deploy passive network monitoring on OT networks, implement application whitelisting on HMI systems, establish secure remote access procedures for vendor support.” These address high-risk findings and provide substantial risk reduction.

  • Strategic initiatives (6-12 months, significant investment). “Complete network segmentation project, implement industrial protocol firewall, deploy SIEM with OT-specific correlation rules, establish asset inventory and configuration management system.” These are the big projects that fundamentally improve security posture but require planning, budget, and coordination.

Each recommendation included a priority rating (critical, high, medium, low), estimated cost range, expected implementation time, and dependencies. “You can’t deploy the protocol firewall until network segmentation is complete” is important information for project planning.
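
Dependencies like that are exactly what a topological sort is for, and even a rough model helps the planning workshop. A sketch, reusing the report’s SEC-xxx numbering for the two recommendations named earlier and hypothetical IDs for the rest:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each recommendation maps to the set of recommendations it depends on.
# SEC-001 and SEC-003 follow the report; SEC-002 and SEC-004 are invented here.
dependencies = {
    "SEC-001 network segmentation": set(),
    "SEC-002 passive OT monitoring": set(),
    "SEC-003 Modbus protocol gateway": {"SEC-001 network segmentation"},
    "SEC-004 OT-aware SIEM rules": {"SEC-002 passive OT monitoring"},
}

for step in TopologicalSorter(dependencies).static_order():
    print(step)
# Prerequisites always print before the projects that need them, which is
# the minimum a remediation timeline has to respect.
```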

We also included a section on compensating controls for findings that couldn’t be immediately remediated. The turbine PLC firmware vulnerability couldn’t be patched (no patch available, vendor recommends “upgrade to new hardware” at €200,000 per turbine). The compensating controls were network segmentation (preventing network access to vulnerable systems), enhanced monitoring (detecting exploitation attempts), and documented procedures for detecting compromise (regular integrity checks of PLC programs).

The conversation after the report

The written report is only the beginning. At UU P&L, we presented findings to three different audiences:

  • Executive briefing (30 minutes, Archchancellor and senior leadership). We walked through the executive summary, showed three demonstrations of the most serious vulnerabilities (chosen for dramatic impact and business relevance), and discussed the remediation roadmap and budget requirements. The Archchancellor’s main question was “Are we going to be in the newspaper?” which we answered honestly: “Not if we implement the critical recommendations, but yes if we don’t and there’s an incident.”

  • Technical deep-dive (3 hours, OT engineering team). We went through every finding, answered questions about testing methodology, explained how we exploited vulnerabilities, and discussed remediation approaches. This session was less about convincing them problems existed (they knew) and more about collaborative problem-solving. The senior engineer’s comment, “We told management this would be a problem five years ago”, was both vindication and frustration.

  • Remediation planning workshop (full day, engineering, IT, and management). We worked through the recommendations, prioritised based on risk and feasibility, developed a timeline with milestones, and identified resource requirements and potential obstacles. This is where the report transforms from “list of problems” into “project plan for fixes.”

The written report provides documentation and evidence. The conversations provide context, collaboration, and commitment. You need both.

At the end of our engagement, UU P&L had a clear security roadmap, executive buy-in for the necessary budget, and an engineering team that understood both the problems and the solutions. Six months later, they’d implemented network segmentation, deployed basic monitoring, and patched most of the high-risk findings. Eighteen months later, their OT security posture had improved from “typical for the industry” (which is to say, concerning) to “better than most” (which is to say, only moderately concerning).

The report was 147 pages. The executive summary was one page. The recommendation roadmap was five pages. Those six pages drove €800,000 in security investments and genuine improvements in operational security.

Write for your audience, be specific in your recommendations, and remember that the goal is not to produce an impressive document but to produce actual security improvements. Everything else is just paperwork.