
The EU AI Act Preparation Window Is Open. Here’s Exactly How to Use It.

This is the third post in a series on the EU AI Act and what it means for boards and senior leaders. In the first, I made the case for why this is a board-level issue that can’t be deferred. In the second, I laid out what good AI governance actually looks like in practice. In this one, I want to be direct about what to do next.
The EU AI Act’s transition period is not a grace period. It is a preparation window. And there is a meaningful difference between those two things.
A grace period is time to delay. A preparation window is time to build. The organisations that use this period well will have governance frameworks in place before enforcement pressure arrives. They will be making AI decisions with confidence rather than anxiety. They will have boards that can exercise meaningful oversight and leadership teams that can demonstrate compliance when asked.
The organisations that treat this as a grace period will be scrambling — building under pressure, in the middle of regulatory scrutiny, with inadequate time and the wrong starting point.
The window is open now. The question is what you do with it.
Where to Start — Right Now
The starting point for every organisation is the same: understand where you actually stand.
That means a structured AI risk and readiness assessment — a genuine, systematic review of how AI is being used across your institution, what risk profile those uses carry under the Act’s classification framework, and what governance gaps exist between where you are and where you need to be.
This isn’t an internal exercise you can delegate to the person who happens to be most interested in AI. It requires an external perspective, structured methodology, and direct experience of what the Act requires. The output needs to be something a board can act on — a clear, prioritised picture of what needs to change and in what order.
From there, the work falls into three streams that run in parallel: building the governance structure and policy framework, embedding it in operational processes, and training the people who need to understand and apply it.
None of this is particularly complicated. But it does need to be done properly, and it does need to be done now rather than later.
What the Preparation Window Looks Like in Practice
Organisations that engage with this seriously over the next six to twelve months are not just buying compliance. They are building a capability — the ability to make AI decisions confidently, to demonstrate responsible governance to any external party, and to innovate with AI from a position of strength rather than exposure.
That capability compounds. A board that understands AI risk asks better questions. A leadership team with clear governance frameworks makes faster, more defensible decisions. Staff who understand the policies and the reasoning behind them are more likely to flag concerns, follow process, and use AI in ways that benefit the institution rather than expose it.
The cost of not building this capability isn’t just a fine or a regulatory finding. It’s the slower, harder-to-quantify erosion of confidence, reputation, and institutional control that comes from operating without a framework when the landscape is moving as fast as it is.
The Sessions I Run — Bringing This to Your Board Directly
One of the most effective starting points I offer is a focused board-level briefing. It’s designed for exactly this moment — for boards and senior leaders who know they need to get on top of AI governance and want to do so in a structured, practical way rather than through a series of anxious conversations that don’t go anywhere.
Board-Level AI Risk & Governance Briefing
This session equips college boards, governors, and senior leaders with a clear, practical understanding of AI risk, regulation, and governance — designed specifically so they can oversee AI adoption with confidence while keeping the institution compliant with emerging legislation.
The session covers the ground that matters most at board level:
We open with an honest overview of how AI is already being used within institutions — often informally and without oversight — and what that means for risk and accountability.

We then move into the regulatory context: a board-level explanation of the EU AI Act, what it requires of institutions as users and deployers of AI, and how it intersects with UK GDPR, equality duties, and academic standards.

The core of the session addresses board accountability directly — what effective AI oversight looks like, the most common governance gaps across the education sector, and how boards can approve AI policy and strategy without becoming a barrier to legitimate innovation.
Governors leave with a clear set of practical takeaways: which policies must be in place, the right questions to put to institutional leadership, and a shared picture of what good AI governance looks like in practice. A briefing pack is provided to all attendees following the session.
The session is designed to be interactive and discussion-led throughout, drawing on education-specific examples and case references. It can be delivered in-person on your premises or virtually — whatever works best for your board.
Session Format                   Investment
90-minute session                £1,500
120-minute extended session      £3,000
On-site delivery                 Travel expenses charged at cost
For organisations that want to go further, Fredian Shield’s full AI Governance, Policy and Staff Enablement programme takes the work from awareness through to embedded, audit-ready governance — covering risk assessment, policy framework development, process implementation, and staff training in a single, coherent engagement.
The Honest Case for Acting Now
I’ll end this series with the most direct version of the argument I’ve been making across all three posts.
The EU AI Act is not going away. The compliance timeline is running. The organisations that build their governance infrastructure now — during the preparation window, with time to do it properly — will be in a fundamentally better position than those that wait.
The cost of a structured engagement to build this capability is modest relative to the exposure it addresses. The cost of not having it — in regulatory risk, reputational damage, and the loss of board confidence that comes from operating without a framework — is substantially higher.
Most importantly: this is not just about avoiding risk. It’s about being able to say, with genuine confidence, that your organisation is using AI responsibly. That your board understands what it’s overseeing. That your staff have the knowledge and the framework to innovate safely. And that when someone asks you to demonstrate your AI governance — a regulator, an inspector, a parent, a student — you have something real to show them.

If you don’t have that confidence right now, let’s change that. I’d welcome a conversation about where your organisation stands and what a practical path forward looks like. No obligation, no jargon — just clarity.

Get in touch with Neil at Fredian Shield →

This post is the final part of a three-part series on the EU AI Act and what it means for boards and senior leaders. Read Part 1 — The Clock Is Already Ticking and Part 2 — What Good AI Governance Actually Looks Like to get the full picture.

Neil Manfred is the founder of Fredian Shield, a specialist consultancy helping regulated organisations adopt AI responsibly. He is a Certified Director of the Institute of Directors and a Non-Executive Director in public education. He delivers board-level AI governance briefings to colleges, trusts, and regulated organisations across the UK.

