The Human Attack Surface: Why AI-Powered Social Engineering Is the Biggest Cyber Threat of the Next Five Years

When the CSO Security Summit asked me what I believe is the biggest challenge for cybersecurity leaders over the next five years, my answer wasn’t ransomware, wasn’t nation-state attacks, and wasn’t the usual suspects you’d find on a CISO’s risk register.

It was this:

“As AI democratises persona mapping and reply generation, the greatest challenge for cybersecurity leaders will be defending against human-targeted threats crafted from mined personal data — where anyone can weaponise insight into influence.”

That’s worth unpacking, because I think a lot of organisations are still thinking about AI-driven threats in the wrong frame.

We’ve Been Focused on the Wrong Attack Vector
The cybersecurity conversation has historically centred on technical vulnerabilities — unpatched systems, misconfigured cloud environments, weak credentials. And those still matter. But the most effective attacks have always gone around technical defences, not through them. They target people.

What AI changes is the scale and precision at which that’s now possible.

What AI-Powered Social Engineering Actually Looks Like
Until recently, crafting a convincing targeted attack — a spear-phishing email, a voice call impersonating a colleague, a fake LinkedIn message that mirrors someone’s actual communication style — required significant time, skill, and intelligence gathering. That limited who could do it and how often.

AI removes those constraints. Here’s what that enables:

Persona mapping at scale. Public data — LinkedIn profiles, social media posts, conference appearances, published interviews, company websites — can be scraped, analysed, and used to build detailed psychological and behavioural profiles of individuals. Not just executives. Anyone with a digital footprint.

Hyper-personalised content generation. Once you have a profile, AI can generate communications that mirror someone’s actual relationships, reference real events in their professional life, and replicate the tone and style of people they trust. A phishing email that used to take hours to craft can now be generated in seconds, at volume, for thousands of targets simultaneously.

Reply-chain attacks. AI can now engage in multi-turn conversations — not just send a single malicious email, but sustain a dialogue that feels increasingly credible over time. The patience that used to protect people from social engineering is no longer a reliable defence.

The result is that the barrier to executing a sophisticated, human-targeted attack has collapsed. What was previously a nation-state capability is now available to anyone with access to the right tools.

Why This Is Fundamentally a Leadership Challenge
Technical controls alone cannot solve this. You cannot patch human psychology. What you can do is build an organisation where:

Security culture is genuine, not performative. Annual compliance training that nobody engages with doesn’t build the kind of instinctive scepticism that protects against sophisticated social engineering. Leaders need to invest in ongoing, contextual, behavioural security education — not tick-box exercises.

Verification behaviours are normalised. Organisations that build a culture where it’s not just acceptable but expected to verify unusual requests — regardless of apparent seniority — are meaningfully more resilient. The embarrassment of double-checking should always feel smaller than the cost of not doing so.
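To make that concrete, here is a minimal sketch of what "verification by default" could look like if encoded as a rule rather than left to judgement. All field names, thresholds, and the `Request` record are hypothetical illustrations, not a real system:

```python
from dataclasses import dataclass

# Hypothetical request record; the fields are illustrative, not a real schema.
@dataclass
class Request:
    channel: str            # e.g. "email", "chat", "phone"
    amount_gbp: float       # monetary value of the request, if any
    claims_seniority: bool  # sender invokes executive authority or urgency
    verified_out_of_band: bool  # confirmed via a second, known-good channel

def needs_out_of_band_check(req: Request, threshold_gbp: float = 10_000) -> bool:
    """Return True when a request should be confirmed on a second channel
    before anyone acts on it, regardless of how senior the sender appears."""
    if req.verified_out_of_band:
        return False
    high_value = req.amount_gbp >= threshold_gbp
    # Apparent seniority raises, never lowers, the verification bar.
    return high_value or req.claims_seniority

# Example: an urgent "CEO" payment request arriving by email.
print(needs_out_of_band_check(Request("email", 25_000, True, False)))  # True
```

The design point is the comment in the middle: in a resilient culture, an invocation of seniority triggers more verification, not less, because that is precisely the lever social engineering pulls.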

Boards ask the right questions. How are we educating our people about AI-driven social engineering? What does our detection capability look like for human-targeted attacks? How do we verify identity for high-value transactions? These aren’t technical questions — they’re governance questions.

Threat intelligence informs people risk. Understanding what personal data is publicly available about your key personnel, and how that could be weaponised, should be part of your threat modelling. Most organisations aren’t doing this systematically.
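One way to make that systematic is a simple exposure inventory: for each key person, record which categories of public data exist and rank who gives an attacker the most raw material. The sketch below is illustrative only; the categories, weights, and roles are hypothetical, and in practice the inventory would be populated by an OSINT review, not hard-coded:

```python
# Hypothetical weights for categories of publicly discoverable data.
EXPOSURE_WEIGHTS = {
    "linkedin_profile": 2,
    "conference_talks": 2,
    "press_interviews": 3,
    "personal_social_media": 3,
    "company_bio_page": 1,
}

def exposure_score(found: set[str]) -> int:
    """Sum illustrative weights for each category of public data found."""
    return sum(EXPOSURE_WEIGHTS.get(item, 0) for item in found)

def rank_personnel(inventory: dict[str, set[str]]) -> list[tuple[str, int]]:
    """Order people by how much raw material an attacker could mine."""
    return sorted(
        ((name, exposure_score(found)) for name, found in inventory.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

inventory = {
    "CFO": {"linkedin_profile", "press_interviews", "conference_talks"},
    "Payroll lead": {"linkedin_profile", "personal_social_media"},
}
print(rank_personnel(inventory))  # [('CFO', 7), ('Payroll lead', 5)]
```

Even a crude ranking like this turns "we should think about people risk" into a prioritised list a security team can act on, and a board can ask about.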

The Harder Truth
The organisations most at risk are not necessarily the ones with the weakest technology. They’re the ones where leaders haven’t yet accepted that their people are the primary attack surface — and that protecting them requires sustained investment in culture, not just controls.

AI has made every organisation’s human layer more exposed than it’s ever been. The response has to match that reality.

This article draws on insights shared at the CSO Security Summit. Neil Manfred is the founder of Fredian Shield, a specialist consultancy helping regulated organisations adopt AI and technology responsibly. He is a Certified Director of the Institute of Directors and a Non-Executive Director in public education.

Neil Manfred
Founder, Fredian Shield

Executive IT leader, IoD Certified Director, and Non-Executive Director in public education. Founder of Fredian Shield — helping regulated organisations adopt AI responsibly. 30+ years at the sharp end of technology leadership.

