By Arin Mehta

AI Governance for Boards: What to Ask Before You Deploy

Boards are approving AI deployments without the governance frameworks to manage them. Here are the five questions every board should be able to answer.

Introduction

A pattern is emerging in boardrooms across the UK, Europe, and the US: leadership is excited about AI's potential, the technology team is deploying AI tools at pace, and the board has no visibility into what is being deployed, what data it is processing, or what happens when something goes wrong.

This isn't just a regulatory risk — though the EU AI Act, the UK's AI framework, and the US executive orders on AI create genuine compliance obligations. It's a governance failure that exposes the organisation to reputational, liability, and strategic risk.

Why AI Governance Is a Board Issue

AI systems make or influence decisions at scale. A flawed AI model used in HR screening can discriminate unlawfully against thousands of candidates before anyone notices. A customer service AI that generates incorrect information creates liability. A fraud detection algorithm that's biased by training data creates regulatory exposure. The board, as the body ultimately accountable for risk management, cannot delegate all responsibility for these outcomes to the technology team.

The EU AI Act makes this explicit: the governance and oversight requirements for high-risk AI systems must be implemented at a management level with appropriate authority and accountability.

The Five Questions Every Board Should Be Able to Answer

1. What AI systems is our organisation currently using?

This includes officially approved tools and shadow IT. The answer frequently surprises boards: employees are using ChatGPT, Copilot, Gemini, and dozens of other AI tools — some processing sensitive company and customer data — that IT and legal have never reviewed. An AI inventory is the starting point for any governance programme.
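An AI inventory is, at its core, a structured register of every system in use. As a minimal sketch (the record fields, `AISystemRecord`, and the `shadow_it` helper are hypothetical, not a prescribed schema), it might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organisation's AI inventory (hypothetical schema)."""
    name: str        # e.g. "HR CV screening model"
    vendor: str      # supplier name, or "internal" for in-house systems
    owner: str       # named accountable person for this system
    approved: bool   # has IT/legal reviewed this tool, or is it shadow IT?
    personal_data: list[str] = field(default_factory=list)  # data categories processed

def shadow_it(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return the tools in use that IT and legal have never reviewed."""
    return [record for record in inventory if not record.approved]
```

Even a register this simple lets the board ask a concrete question: how many entries does `shadow_it` return, and whose name is in the `owner` field for each?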

2. What personal data do those systems process?

AI systems that process personal data are subject to data protection law in addition to AI regulation. If your HR AI processes employee data, your customer service AI processes customer interactions, or your fraud detection system processes financial behaviour data, you need to map the data flows and assess the legal basis for processing.

3. Who is accountable for AI-related decisions?

The board needs to know who, by name, is accountable when an AI system causes harm. This means a named accountable person (often the CISO, CTO, or a designated AI Officer), a clear escalation path, and an incident response procedure.

4. How do we comply with the EU AI Act for our EU-facing systems?

If you have high-risk AI systems used by EU customers or employees, you need conformity assessments, technical documentation, and registration in the EU database for high-risk AI systems. These are not optional — and non-compliance carries fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

5. What is our policy on generative AI use by employees?

Most organisations have employees using generative AI tools informally. Without a policy, you have no visibility into what data is being input, what outputs are being used, or whether those outputs are creating liability. An acceptable use policy is the minimum baseline.

Building a Board-Level AI Governance Framework

A functional AI governance framework for board purposes has four components: an AI register (what systems exist and what they do), a risk classification framework (aligned to EU AI Act tiers), accountable ownership at senior level, and a review cadence (at least annually, or whenever significant new AI systems are deployed).
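The risk classification component can be sketched as a first-pass triage aligned to the EU AI Act's tiers. This is a hypothetical illustration only — the tier names follow the Act, but the `HIGH_RISK_DOMAINS` keyword list and the `classify` function are simplified assumptions, and a real classification requires legal review against the Act's annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers loosely aligned to the EU AI Act's classification."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk (conformity assessment and registration required)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

# Hypothetical shortlist of high-risk use-case keywords; the Act's actual
# high-risk categories are defined in its annexes and are far broader.
HIGH_RISK_DOMAINS = {"recruitment", "credit scoring", "biometric identification"}

def classify(use_case: str) -> RiskTier:
    """Very rough first-pass triage for an entry in the AI register."""
    if use_case.lower() in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    # Anything not obviously high-risk still needs proper legal review;
    # defaulting to MINIMAL here is a placeholder, not a legal conclusion.
    return RiskTier.MINIMAL
```

Tying each register entry to a tier like this is what makes the review cadence actionable: high-risk entries trigger conformity work, while minimal-risk entries only need periodic re-checking.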

Chabil Consulting builds AI governance frameworks for organisations at board and management level. Our 8-week programme produces all four components plus a CPRA- and GDPR-aligned data processing assessment for existing AI systems. Contact us at hello@chabilconsulting.com.
