People, Process, Technology — and Now AI: The Security Operating Model Needs an Update

Anyone who has worked in security management over the past 25 years knows the framework: People, Process, Technology (PPT). It dates back to the 1960s, when Harold Leavitt formulated his Diamond Model for organizational change, and has since established itself as the standard framework for structuring security programs, from information security and physical security to business continuity.

The premise is simple: effective security only emerges when all three dimensions work together. The best technology is useless without the processes to operate it. The best processes fail if people don't understand or follow them. For decades, this was a valid and useful simplification.

In 2026, this simplification no longer suffices.

Why PPT Is Reaching Its Limits

The PPT model implicitly assumes that technology is a passive tool — a means that people deploy through defined processes. A camera monitors. A sensor alerts. A SIEM correlates. The decision always rests with a human.

Artificial intelligence fundamentally breaks this logic. AI systems are not passive tools. They analyze, evaluate, prioritize, and act — with increasing autonomy. An AI-powered SIEM independently decides which alerts to escalate and which to suppress. An AI-based access control system detects behavioral anomalies and locks access in real time. An AI agent in the Security Operations Center triages incidents faster than any analyst.

This is no longer technology in the traditional sense. It is an independent actor within the security system.

AI as the Fourth Dimension

Extending PPT to PPTA — People, Process, Technology, AI — is not a marketing gimmick. It is conceptually necessary because AI occupies a qualitatively different role than traditional technology:

Technology executes what has been configured. A firewall blocks according to rules. A camera records what is in front of the lens.

AI interprets, learns, and decides. It generates new insights from data that no human could analyze in a reasonable timeframe. And it changes its behavior based on experience — without explicit reprogramming.

This distinction has concrete implications for each of the three classic pillars:

Impact on People

AI does not replace security professionals — but it fundamentally changes their role. The SOC analyst shifts from alert processor to supervisor of AI agents. The security consultant uses AI-powered assessment tools that analyze in minutes what previously took days. The CISO must understand how AI models evaluate risks — and where their blind spots lie.

Required competencies are shifting: less routine monitoring, more critical thinking about AI outputs. Less manual data collection, more strategic interpretation. And a new, uncomfortable skill: the ability to question and correct AI decisions.

Practical tip: For each security process, start by identifying where AI already influences decisions, even if it is only alert prioritization. Then clarify: Who validates these decisions? Who is accountable when the AI is wrong?
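
Such an inventory does not need a dedicated tool to get started. A minimal sketch in Python, with all process names, systems, and roles invented purely for illustration, might look like this:

```python
from dataclasses import dataclass

@dataclass
class AIDecisionPoint:
    """One place in a security process where AI influences a decision."""
    process: str            # the security process affected
    decision: str           # what the AI decides or pre-filters
    ai_component: str       # the system making the call
    human_validator: str    # role that reviews or overrides the output
    accountable_owner: str  # role accountable when the AI is wrong

# Hypothetical example entry -- names and roles are placeholders
inventory = [
    AIDecisionPoint(
        process="SOC alert handling",
        decision="alert prioritization and suppression",
        ai_component="SIEM ML triage module",
        human_validator="SOC shift lead",
        accountable_owner="Head of Security Operations",
    ),
]
```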

Impact on Process

Processes designed for human actors don't work for AI agents. An incident response process that begins with "analyst evaluates the alert" must be rethought when the initial assessment is already performed by AI.

This affects fundamental questions:

  • Escalation logic: When does a human take over the decision from AI?
  • Traceability: How does AI document its decision pathways?
  • Error handling: What happens when AI systematically misprioritizes?
  • Governance: Who is responsible for training and configuring AI models?

Organizations that simply "plug" AI into existing processes are, at best, automating inefficiency. At worst, they are automating errors — at higher speed and greater scale.
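
To make the traceability question concrete: one common pattern is for every AI-driven assessment to emit a structured decision record that can be audited later. The following sketch is illustrative only; the field names and the serialization format are assumptions, not a reference to any specific product:

```python
import json
from datetime import datetime, timezone

def log_ai_decision(model_id: str, model_version: str, inputs: dict,
                    output: str, confidence: float, escalated_to_human: bool) -> str:
    """Serialize one AI decision as an auditable JSON record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,   # needed to reproduce the decision later
        "inputs": inputs,                 # or a hash/reference for large payloads
        "output": output,
        "confidence": confidence,
        "escalated_to_human": escalated_to_human,
    }
    return json.dumps(record)

# Hypothetical usage with invented identifiers
print(log_ai_decision(
    model_id="alert-triage",
    model_version="2025-11-03",
    inputs={"alert_id": "A-1042", "source": "ids"},
    output="suppress",
    confidence=0.91,
    escalated_to_human=False,
))
```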

Impact on Technology

Paradoxically, AI also transforms the technology pillar itself. Traditional security technology was deterministic: same input, same output. AI-based systems are probabilistic. They deliver probabilities, not certainties.

This means: integrating AI requires a new layer in the technology architecture — a governance layer that monitors AI models, validates their outputs, and measures their performance. The industry calls this AI Security Posture Management (AI-SPM) or Model Monitoring. It is the equivalent of patch management, but for algorithms instead of software.
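
What such a governance layer does can be sketched in a few lines: sample the AI's verdicts, compare them against later human review, and flag the model when agreement drops below a threshold. This is a deliberately simplified illustration; the metric and the 90% threshold are assumptions, not an industry standard:

```python
def review_model_performance(decisions, agreement_threshold=0.9):
    """decisions: list of (ai_verdict, human_verdict) pairs from sampled reviews.

    Returns the agreement rate and whether the model needs attention,
    analogous to a failed patch-compliance check.
    """
    if not decisions:
        return None, True  # no evidence of correct behavior -> investigate
    agreed = sum(1 for ai, human in decisions if ai == human)
    agreement = agreed / len(decisions)
    return agreement, agreement < agreement_threshold

# Hypothetical sample of reviewed triage decisions
sample = [("escalate", "escalate"), ("suppress", "suppress"), ("suppress", "escalate")]
rate, needs_review = review_model_performance(sample)
print(f"agreement={rate:.2f}, needs_review={needs_review}")
```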

The PPTA Framework in Practice

What does practical application look like? An example from physical security:

Classic PPT model:

  • People: Guard personnel patrol the premises
  • Process: Patrol rounds every 60 minutes on a defined route
  • Technology: Cameras and motion detectors support surveillance

Extended PPTA model:

  • People: Security professional monitors dashboard and responds to escalations
  • Process: AI-driven anomaly detection triggers automated initial assessment; human validation when confidence < 85%
  • Technology: Cameras, sensors, access control as data sources
  • AI: Video analytics detect unusual behavior, correlate with access data and shift schedules, prioritize notifications by risk score

The difference is not cosmetic. AI changes when people intervene, what they focus on, and how decisions are made.
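
The escalation rule from the example above ("human validation when confidence < 85%") translates into a small piece of orchestration logic. The sketch below assumes hypothetical risk scores and routing targets; it illustrates the pattern, not a specific product:

```python
CONFIDENCE_THRESHOLD = 0.85  # below this, a human must validate (per the example above)

def route_anomaly(event_id: str, risk_score: float, confidence: float) -> str:
    """Decide whether an AI-detected anomaly is auto-handled or sent to a human."""
    if confidence < CONFIDENCE_THRESHOLD:
        return f"escalate {event_id} to on-duty officer (confidence {confidence:.2f})"
    if risk_score >= 0.7:
        return f"auto-dispatch response for {event_id} and notify dashboard"
    return f"log {event_id} for the next scheduled review"

# Hypothetical event: high risk score, but confidence below the threshold
print(route_anomaly("cam-14/2031", risk_score=0.82, confidence=0.78))
```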

What This Means for Security Leaders

Extending the operating model has strategic implications:

Budgeting: AI is not simply a line item under "technology." It requires dedicated investment in training, governance, monitoring, and — often underestimated — in upskilling personnel.

Risk assessment: AI systems themselves become objects of risk assessment. They can be manipulated (adversarial attacks), they can deliver systematically flawed assessments, and, under inadequate governance, they can cause regulatory violations.

Compliance: The EU AI Act, NIS2, and upcoming regulations set explicit requirements for AI deployment in security-critical areas. Organizations that silently subsume AI under "technology" will overlook these requirements.

Organizational structure: Who is responsible for AI in the security program? In many organizations, this accountability is missing. AI is neither purely IT nor purely security — it is a cross-cutting function that requires its own governance.

Not an Either-Or

The point is not to discard PPT. The framework remains useful as a mental model. But it must be extended to reflect the reality in which AI is not merely a better tool, but an independent factor that changes the interplay of all other dimensions.

Organizations that make this extension early will be better positioned to integrate AI effectively and responsibly into their security programs. Organizations that continue treating AI as merely a technology upgrade will notice the blind spots only when it's too late.

Conclusion: From Three Pillars to Four

The PPT model was the right framework for a world where technology was a tool. In a world where AI is an actor, we need PPTA:

  • People — with new competencies for collaboration with AI
  • Process — with governance mechanisms for autonomous systems
  • Technology — as infrastructure and data source
  • AI — as an independent dimension with its own risks, its own governance, and its own value contribution

The question is not whether your organization will make this transition. The question is whether you will shape it — or be overtaken by it.


Looking to systematically integrate AI into your security program? Siegel Resilience supports you in developing governance structures, risk assessments, and operating models that meet the new reality — independent, pragmatic, and standards-aligned. Get in touch →
