Let’s be direct about something.
The EU AI Act is not a future problem. It is not something to put on next year’s risk register, add to a governance committee agenda, or delegate to the IT department to figure out. It is live, it has a compliance timeline, and the window to prepare properly is closing faster than most organisations realise.
I’ve sat in enough board meetings and senior leadership sessions to know how this tends to go. A new regulation lands. Someone adds it to the horizon-scanning section of a paper. It gets noted. A working group is suggested. And then the organisation carries on exactly as before — until the point where not having acted becomes a problem that’s very difficult to explain.
With the EU AI Act, that moment is arriving sooner than most boards are prepared for.
What the EU AI Act Actually Does
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. It came into force in August 2024, and its requirements are being phased in over a transition period that has already begun.
The Act takes a risk-based approach. It classifies AI systems into tiers — unacceptable risk, high risk, limited risk, and minimal risk — and places different obligations on organisations depending on where their AI use falls. For boards, the critical point is this: you don’t have to be building AI to be regulated by this Act. If your organisation uses or deploys AI systems, obligations apply to you.
In the education sector, this matters enormously. AI is already embedded in learning management systems, admissions tools, plagiarism detection, HR processes, and student support platforms. Most institutions haven’t done a systematic review of what they’re actually running, let alone assessed where it sits under the Act’s classification framework.
That gap is a governance gap. And governance gaps at board level have consequences.
The Risk-Based Framework — What Boards Need to Understand
Under the Act, high-risk AI systems attract the most significant obligations. The high-risk category includes certain systems used in education and employment, such as tools that determine admissions, evaluate learning outcomes, or screen and assess candidates. Obligations for these systems cover risk management, data governance, transparency, human oversight, and accuracy.
The institutions that will struggle most are not the ones using the most AI. They are the ones using AI without knowing they’re using it, or without the governance infrastructure to demonstrate responsible deployment.
What the Act effectively requires is that organisations can answer three questions confidently:
What AI systems are we using or deploying? Most organisations cannot answer this comprehensively right now. Shadow AI — staff and students using tools informally, without institutional knowledge or oversight — is widespread. A board cannot govern what it hasn’t mapped.
How have we classified and assessed the risk of those systems? The Act creates a legal obligation to understand the risk profile of your AI use. That requires a structured assessment process, not a best guess.
What controls, oversight mechanisms, and documentation do we have in place? For high-risk systems, the Act requires evidence. Not a policy document. Evidence of active governance, human oversight, and ongoing monitoring.
The Compliance Timeline Most Organisations Are Ignoring
The Act’s transition periods mean that different requirements come into force at different points: prohibitions on unacceptable-risk practices applied from February 2025, obligations for general-purpose AI models from August 2025, and most obligations for high-risk systems, including those in education and employment, apply from August 2026, with a further extension to August 2027 for high-risk AI embedded in regulated products. The practical implication is clear: organisations that wait for full enforcement before beginning to prepare will find themselves building governance infrastructure under pressure, with inadequate time, and in the middle of regulatory scrutiny rather than ahead of it.
The organisations that will navigate this well are the ones that treat the transition period as preparation time — not as permission to delay.
“The governance frameworks you build now are not just about compliance. They are about making decisions confidently when AI adoption accelerates — which it will.”
What This Means for Your Board Right Now
Boards have a fiduciary responsibility to understand and manage material risks to their institution. Non-compliance with AI regulation is unambiguously such a risk for most organisations operating today: penalties under the Act can reach EUR 35 million or 7% of worldwide annual turnover for the most serious breaches, alongside the reputational damage, legal exposure, and operational disruption that follow a regulatory failure.
That doesn’t mean boards need to become AI experts. It means boards need to ask the right questions, ensure the right frameworks are in place, and hold leadership accountable for demonstrating that AI is being governed responsibly.
The right questions start with: do we actually know what AI is being used in this organisation? And do we have the governance infrastructure to manage it properly?
If the honest answer to either of those is no — or we’re not sure — that is the place to start.
In the next post in this series, I look at what good AI governance actually looks like in practice — and the most common gaps I see when working with boards and senior leadership teams. If you don’t want to wait, get in touch and let’s have a conversation about where your organisation stands.
Neil Manfred is the founder of Fredian Shield, a specialist consultancy helping regulated organisations adopt AI responsibly. He is a Certified Director of the Institute of Directors and a Non-Executive Director in public education.