Security Architect. AI Builder.
Published Author.
25 years designing security architecture for governments, banks, and critical infrastructure across four countries. Now researching and building AI systems for high-stakes regulated environments — and the accountability frameworks that make them trustworthy.
How I Think About Security and AI
Frameworks aren't academic exercises. They're tools for making better decisions under uncertainty. These are the mental models I've developed over 25 years of building, breaking, and rebuilding security architectures — now applied to the challenge of making AI trustworthy in high-stakes environments.
Cyber Hormesis
Applying Antifragility to Security Architecture
Nassim Taleb argues that some systems get stronger from stress. Most security architectures are designed to resist stress. The interesting question is: can you design a security architecture that improves from controlled adversarial pressure?
Cyber Hormesis is a framework for building security systems that don't just survive attacks — they learn from them. Drawing from biological hormesis (where small doses of toxins strengthen organisms), the framework proposes that deliberate, controlled exposure to adversarial inputs makes security architectures more robust than purely defensive designs.
This has practical implications for AI security: instead of trying to prevent all adversarial inputs to AI systems (which is impossible), you design the system to metabolise adversarial inputs and strengthen its defences in response.
Key principles:
- Controlled stress exposure: red-teaming as a continuous practice, not an annual event
- Adaptive response mechanisms: security controls that reconfigure based on observed attack patterns
- Anti-fragile monitoring: detection systems that improve their sensitivity with each incident
- Graceful degradation by design: failure modes that preserve critical functions while sacrificing non-essential ones
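As a minimal sketch of the "anti-fragile monitoring" principle, consider a detector whose alert threshold tightens after each confirmed incident. The class, method names, and numbers below are illustrative assumptions, not part of the framework's specification:

```python
from dataclasses import dataclass

@dataclass
class AdaptiveDetector:
    """Toy detector whose alert threshold tightens after each confirmed
    incident. Names and numbers are illustrative, not the framework's spec."""
    threshold: float = 0.9        # initial anomaly-score cutoff
    tighten_factor: float = 0.95  # multiplicative tightening per incident
    floor: float = 0.5            # never tighten past this point

    def is_alert(self, anomaly_score: float) -> bool:
        return anomaly_score >= self.threshold

    def learn_from_incident(self) -> None:
        # A confirmed incident is controlled stress: the detector
        # metabolises it by becoming more sensitive, within limits.
        self.threshold = max(self.floor, self.threshold * self.tighten_factor)

detector = AdaptiveDetector()
detector.is_alert(0.85)        # False: below the initial threshold
detector.learn_from_incident() # threshold drops from 0.90 to 0.855
detector.is_alert(0.85)        # still False, but the margin has narrowed
```

The floor matters: tightening without a limit turns adaptation into alert fatigue, which is its own fragility.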
The opposite of resilience isn't fragility. It's the illusion of resilience — systems that look robust until the first novel stress reveals they were merely rigid.
Read on LinkedIn →
The Grey Duck
The Risks That Hide in Plain Sight
Taleb gave us the Black Swan: the unpredictable, high-impact event that no one saw coming. But in my experience, organisations rarely fail because of events no one could have predicted. They fail because of events everyone could see but no one acted on.
I call these Grey Ducks. They're not black swans (unpredictable) or white swans (expected and planned for). They're the visible, acknowledged, discussed-in-meetings risks that somehow never get resourced, never get mitigated, and never get escalated — until they cause exactly the damage everyone quietly knew they would.
Examples in cybersecurity:
- The legacy system everyone knows is unpatched but "too critical to touch"
- The third-party vendor whose security posture is "concerning" but whose contract renewal is "already signed"
- The AI deployment that "probably needs a governance framework" but "we'll get to it after launch"
The most dangerous phrase in any organisation: "We're aware of it."
Read on LinkedIn →
Continuous Coherence Protocol
Keeping AI Systems Aligned Over Time
AI systems drift. Not catastrophically, not overnight, but gradually — in ways that are invisible until the output no longer matches the intent. The Continuous Coherence Protocol is a monitoring and correction framework designed to detect and address coherence drift in AI systems operating in regulated environments.
The core insight: alignment isn't a one-time calibration. It's a continuous process that requires architectural support. CCP defines monitoring checkpoints, coherence metrics, and correction mechanisms that keep AI systems operating within their intended parameters over time.
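To make the idea of a monitoring checkpoint concrete, here is one hypothetical coherence metric: compare the distribution of model decisions in the current window against a calibrated baseline and flag drift beyond a tolerance. The metric (KL divergence) and the threshold are my illustrative choices, not the protocol's published specification:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) over two matching categorical distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def coherence_check(baseline, observed, tolerance=0.05):
    """One hypothetical CCP-style checkpoint: flag drift when the observed
    decision distribution diverges from the calibrated baseline."""
    drift = kl_divergence(observed, baseline)
    return {"drift": drift, "within_bounds": drift <= tolerance}

baseline = [0.70, 0.20, 0.10]  # approve / refer / decline rates at calibration
observed = [0.55, 0.25, 0.20]  # rates observed in this monitoring window
result = coherence_check(baseline, observed)
# drift ≈ 0.062 > 0.05, so within_bounds is False: gradual, but real
```

The point is not the particular metric but the architecture: the check runs continuously, and a breach triggers a correction mechanism rather than a post-mortem.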
The evolution to NS-CCP (Neuro-Symbolic Coherence Protocol) introduces symbolic reasoning constraints alongside statistical monitoring, enabling hybrid verification approaches for AI systems where pure statistical monitoring is insufficient.
Relevant in any environment where AI decisions have consequences that compound: financial services, healthcare, critical infrastructure, legal compliance.
Read on LinkedIn →
Organizational Context Protocol
Teaching AI Systems to Understand Business Context
Most AI implementations fail not because the model is wrong but because the model doesn't understand the organisation it's operating in. OCP is a structured approach for embedding organisational context — business rules, cultural norms, regulatory constraints, communication patterns — into AI system design.
Designed to bridge the gap between generic AI capabilities and organisation-specific deployment requirements.
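One way such context might be embedded — under the assumption that the deployment renders it into a system preamble for a generic model — is a structured schema like the sketch below. The field names and rendering are hypothetical, not OCP's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrgContext:
    """Illustrative sketch of structured organisational context.
    Field names are hypothetical, not the protocol's actual schema."""
    business_rules: tuple          # e.g. approval limits, escalation paths
    regulatory_constraints: tuple  # frameworks the output must respect
    tone: str                      # communication norms

def build_system_preamble(ctx: OrgContext) -> str:
    # One possible deployment: render organisational context into a
    # system preamble so a generic model operates inside org constraints.
    rules = "; ".join(ctx.business_rules)
    regs = ", ".join(ctx.regulatory_constraints)
    return (f"Follow these business rules: {rules}. "
            f"Stay compliant with: {regs}. "
            f"Communication style: {ctx.tone}.")

ctx = OrgContext(
    business_rules=("escalate transactions over the approval limit",),
    regulatory_constraints=("UAE IA standards", "PCI-DSS"),
    tone="formal, audit-ready",
)
preamble = build_system_preamble(ctx)
```

A prompt preamble is the simplest embedding; the same schema could equally drive retrieval filters, output validators, or policy checks.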
Read on LinkedIn →
Cyber Resilience
Beyond Prevention to Adaptive Recovery
Security architectures focused solely on prevention are incomplete. Cyber Resilience is a framework for designing systems that maintain critical operations during and after security incidents, with structured recovery paths and adaptive learning from each event.
Read on Substack →
Publications
Selected Articles
- IT Governance (Peer-Reviewed)
  Peer-reviewed publication in ISACA's Control Journal. Also indexed in the German National Library of Science (TIB.eu) under "Global Perspectives: IT Governance — Fonseca, Dante Ferrini et al. | 2006."
  View on Wayback Machine →
- Information Security and its Relation to Accounting
- Information Security Plan
- Business Continuity
- Security Standards
- What to do with SPAM?
- VoIP Security
About
I've spent 25 years in cybersecurity across four countries, working at every scale from co-founding a security consultancy to advising government agencies to designing cloud security architectures for critical infrastructure.
My career started at Deloitte, took me through IBM, Hewlett Packard, and a decade building security practices for financial institutions in Latin America, before I relocated to the UAE in 2018. Along the way, I've co-founded a cybersecurity company, built managed security operations centres, published two books, and earned every certification the industry offers for someone who refuses to specialise in just one thing.
In the last two years, my work has shifted to the intersection that I think will define the next decade of cybersecurity: building AI systems that are secure, governable, and accountable by design. Not advising on AI governance. Building the systems.
I write about it. I build it. And I publish the frameworks so others can build it too.
Career
VP, Security Architecture (Current)
Major real estate enterprise, UAE
AI governance portal design and deployment. Security architecture strategy.
Cloud Security Lead Architect
Digital14 / CPX Holding, Abu Dhabi
Created Cloud Security Services division. Delivered security frameworks for energy, government, financial services.
Senior Manager
DarkMatter, Abu Dhabi
Security solution architecture for government and enterprise clients.
Security Services Director
Kinakuta, Mexico City
Led cybersecurity audit and consulting for large financial entities. Created PCI-DSS services business unit.
Master CSO Consultant
Hewlett Packard, Mexico City
Virtual CISO for financial, manufacturing, and aerospace clients in US and Mexico.
Technical Solutions Manager
IBM, Mexico City
Led technical solutioning for managed services contracts across Latin America and Europe.
Partner, IT Audit
DFK (De la Paz Costemalle), Mexico City
Built the IT assurance practice from zero to 20 people. Financial institution audits.
Co-Founder / Chief Solutions Architect
Seginf, Mexico City
Built and led a cybersecurity consultancy. Designed and operated an MSSP SOC.
Enterprise Risk Services Consultant
Deloitte, Mexico City
Cybersecurity consulting for financial institutions, manufacturing, insurance.
Projects
LobSec
A Paranoid-level Hardened Wrapper for OpenClaw
Policy enforcement layer between agent reasoning and action execution. Comprehensive audit logging with decision traceability. Configurable rollback mechanisms for reversible operations. Designed for environments with zero tolerance for uncontrolled agent behaviour.
View on GitHub →
Credentials
Let's Talk
I'm always open to interesting conversations about AI security, governance architecture, or the frameworks on this site.