
What Good AI Governance Actually Looks Like — And Why Most Organisations Don’t Have It Yet

In the first post in this series, I laid out why the EU AI Act is a board-level issue that can’t be deferred. In this one, I want to get practical — because the honest truth is that when I work with organisations on AI governance, the challenge is rarely a lack of awareness that governance matters. It’s a gap between intention and implementation.
Most organisations have one of two problems.
The first is that they have no AI governance framework at all. No policy, no process, no oversight structure. AI is being used — often extensively — and nobody has drawn a line around it. This is more common than it should be, and it represents meaningful regulatory and reputational exposure.
The second problem is subtler and, in some ways, harder to fix. The organisation has a policy document. Someone wrote it, the board approved it, it was circulated. And then it sat in a shared drive and changed absolutely nothing about how AI is actually used day to day. This is governance as theatre — and the EU AI Act will not be satisfied by it.
What the Act requires, and what genuinely protects organisations, is governance that is embedded. That means it shapes decisions, informs behaviour, and leaves an evidence trail that demonstrates active oversight rather than passive approval.
The Four Components of Embedded AI Governance
In my work with institutions, I structure AI governance around four interconnected workstreams. Each one builds on the last, and skipping any of them is how organisations end up exposed.
1. Understanding What You’re Actually Dealing With
Before you can govern AI, you need to know what AI you have. This sounds obvious. It is almost never done properly.
A genuine AI risk and readiness assessment maps every AI system in use across the organisation — including the ones that came in through procurement without anyone asking the right questions, the tools that staff are using informally, and the student-facing platforms that embed AI functionality without advertising it. It assesses where each system sits against the EU AI Act’s risk classification framework and produces a clear, prioritised picture of what needs to change and in what order.
This is the foundation. Without it, everything else is built on assumptions.
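To make that concrete, here is a minimal sketch, in Python purely for illustration, of what a single entry in such an inventory might capture. The field names and the example system are hypothetical; the four risk tiers, though, are the ones the Act itself defines.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four-tier risk classification."""
    UNACCEPTABLE = "prohibited practice"   # banned outright, e.g. social scoring
    HIGH = "high risk"                     # Annex III uses, incl. education and assessment
    LIMITED = "limited risk"               # transparency obligations, e.g. chatbots
    MINIMAL = "minimal risk"               # everything else

@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI inventory (fields are illustrative)."""
    name: str                       # what the system is called internally
    owner: str                      # the person or function accountable for it
    route_in: str                   # how it arrived: procurement, embedded feature, informal use
    use_case: str                   # what it actually does, in plain language
    processes_personal_data: bool   # flags the GDPR overlap early
    risk_tier: RiskTier             # where it sits under the Act
    remediation_priority: int       # 1 = address first

# A hypothetical entry: an AI feature that arrived inside a procured platform.
example = AISystemRecord(
    name="Submission-screening add-on",
    owner="Head of Teaching and Learning",
    route_in="Embedded in an existing assessment platform",
    use_case="Flags student submissions suspected of AI-generated content",
    processes_personal_data=True,
    risk_tier=RiskTier.HIGH,  # education/assessment uses appear in the Act's high-risk Annex III
    remediation_priority=1,
)
```

The format matters far less than the discipline: every system gets a record, every record gets an accountable owner, and the register as a whole produces the prioritised picture described above.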
2. A Governance Structure That Actually Makes Decisions
AI governance needs a structure — clear roles, defined responsibilities, and decision-making pathways that don’t rely on the right person happening to ask the right question at the right time.
That structure needs to operate at board level, not just within IT or operations. Boards need to be able to approve AI policy with genuine understanding of what they’re approving, exercise oversight of high-risk AI use, and hold leadership accountable for the evidence that governance is working.
The policy suite that underpins this isn’t complicated, but it needs to be complete. An AI Acceptable Use Policy covering both staff and students. An AI in Teaching and Assessment Policy that addresses the specific complexities of the education context. An AI Procurement and Vendor Risk Policy that ensures new AI tools are assessed before they arrive rather than after. And an Ethical AI Principles document that grounds everything in the institution’s values.
Each of these is a practical document designed to shape behaviour — not a compliance exercise filed and forgotten.
3. Processes That Embed Governance in Operations
Policy without process is aspiration. The governance framework only works if it changes what actually happens when someone wants to use a new AI tool, when a high-risk decision is made, or when something goes wrong.
That means AI use-case approval workflows — so that new AI applications go through a structured assessment before deployment. Risk assessment templates that make it straightforward for staff to document and escalate AI-related concerns. And audit-ready assurance processes that generate the evidence trail the EU AI Act requires.
This is where most governance frameworks fall short. The policy says the right things. The process doesn’t exist to make those things happen. When the regulator or the inspector asks for evidence of active oversight, there is nothing to show them.
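By contrast, even a small amount of structure generates that evidence as a by-product. Here is a sketch, again in Python for illustration only and with every name and field hypothetical, of how a use-case approval workflow might leave a dated, attributable record behind it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Decision(Enum):
    APPROVED = "approved"
    APPROVED_WITH_CONDITIONS = "approved with conditions"
    REJECTED = "rejected"
    ESCALATED = "escalated to board-level oversight"

@dataclass(frozen=True)
class ApprovalRecord:
    """One pass through a use-case approval workflow (illustrative)."""
    tool: str                 # what someone wants to use
    requested_by: str         # who asked
    reviewer: str             # who carried out the structured assessment
    risk_notes: str           # the substance of that assessment
    decision: Decision
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def log_decision(record: ApprovalRecord, evidence_log: list[ApprovalRecord]) -> None:
    """Append-only by convention: the point is not the storage mechanism,
    but that every decision leaves a dated, attributable record that can
    be produced on request."""
    evidence_log.append(record)

evidence_log: list[ApprovalRecord] = []
log_decision(
    ApprovalRecord(
        tool="Generative writing assistant",
        requested_by="Curriculum team",
        reviewer="AI governance lead",
        risk_notes="No personal data in prompts; outputs reviewed before student use",
        decision=Decision.APPROVED_WITH_CONDITIONS,
    ),
    evidence_log,
)
```

When the question “show us your oversight” arrives, a log like this, however it is actually stored, is what you reach for.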
4. Staff Who Actually Understand What’s Expected of Them
The final component — and the one most directly connected to day-to-day risk — is people.
The biggest AI risk in most organisations isn’t malicious use. It’s well-intentioned staff using tools they don’t fully understand, without a framework to guide them, making decisions they don’t realise carry institutional implications. Student data being processed through unvetted tools. Assessment integrity being compromised by AI use that falls outside policy. Personal data being submitted to public AI platforms without consideration of GDPR obligations.
That’s why AI compliance training has to be mandatory: it’s how you close the gap between having a policy and having an organisation that actually behaves in accordance with it. And beyond compliance, practical AI skills development creates the conditions for responsible innovation rather than either blanket prohibition or unmanaged risk.
The Governance Gap in Plain Terms
When I do a readiness assessment with an institution, the gaps I find most consistently are:
- No comprehensive map of AI in use across the organisation.
- Policies that exist but aren’t embedded in process or behaviour.
- No clear ownership at board level for AI governance as a distinct responsibility.
- Staff who don’t know what the policy says, let alone why it matters.
- No evidence infrastructure to demonstrate compliance if required to do so.
None of these gaps are difficult to close — but they require deliberate effort, structured delivery, and a commitment to embedment rather than documentation.

“The organisations that will navigate the EU AI Act well are not necessarily the ones with the most sophisticated AI programmes. They are the ones that have built the governance infrastructure to manage AI responsibly — whatever form that AI takes.”

What the Act Requires You to Be Able to Show
For boards and senior leaders, the practical test is this: if your institution were asked today to demonstrate its AI governance framework to a regulator, an inspector, or your own audit committee — what would you be able to produce?
A policy document is a start. Evidence of active oversight, structured risk assessment, embedded process, and trained staff is what the Act actually requires.
If there’s a gap between what you could produce today and what that list describes, the time to close it is now, while the transition period is still running and the question is how you compare with your peers rather than whether you can meet a hard enforcement deadline.

In the final post in this series, I set out what the preparation window looks like in practice — and how Fredian Shield works with organisations to build this capability. Or if you’d prefer to have that conversation directly, get in touch here.

Neil Manfred is the founder of Fredian Shield, a specialist consultancy helping regulated organisations adopt AI responsibly. He is a Certified Director of the Institute of Directors and a Non-Executive Director in public education.

Neil Manfred
Founder, Fredian Shield

Executive IT leader, IoD Certified Director, and Non-Executive Director in public education. Founder of Fredian Shield — helping regulated organisations adopt AI responsibly. 30+ years at the sharp end of technology leadership.

Connect on LinkedIn
