When AI governance fails, organisations typically look for technology explanations. The algorithm was flawed. The model was biased. The system made an error.
These explanations miss the point. In most governance failures, the technology worked exactly as designed. The failure was in how humans set it up, oversaw it, or responded to its outputs.
The 2020 A-level grading controversy in the UK illustrates this pattern. The algorithm Ofqual developed to standardise grades during COVID performed its technical function. It adjusted grades based on historical school performance. The problem was not that the algorithm failed. The problem was that nobody had adequate oversight of what “working as designed” would mean for individual students. Forty percent of grades were downgraded, disproportionately affecting students from disadvantaged backgrounds. The algorithm was withdrawn within days.
The technology did not fail. The governance around the technology failed.
The most significant AI governance challenge facing growing businesses is not the AI they knowingly deploy. It is the AI being used without organisational awareness or approval.
Every day, employees use AI tools that exist outside official channels. They paste customer data into ChatGPT to draft responses. They use AI writing assistants that process company information. They experiment with AI image generators, transcription tools, and analysis platforms.
This shadow AI creates risks that governance frameworks cannot address because the organisation does not know the activity exists. Data may be shared with external AI providers without appropriate consent or security review. Outputs may be used for customer-facing purposes without quality control. Regulatory obligations around automated decision-making may be triggered without anyone realising.
The problem is not that employees are acting maliciously. They are typically trying to work more efficiently. The problem is that individual efficiency decisions, made without organisational context, can create collective risk.
Sam Easton observes: “Shadow AI is probably the biggest governance blind spot for most organisations. Your official AI policy might be excellent, but it only governs what you know about. When employees use AI tools without organisational awareness, they create risks that your governance framework cannot see or manage.”
Addressing shadow AI requires more than policies prohibiting unauthorised tools. It requires understanding why employees seek out AI assistance, providing approved alternatives that meet their needs, and creating a culture where AI use can be discussed openly rather than hidden.
AI governance discussions often focus on the AI systems themselves. But effective AI governance depends on something more fundamental: data readiness.
AI systems operate on data. If your data is inconsistent, incomplete, or poorly organised, AI will produce outputs that reflect those problems. No amount of governance controls can fix AI outputs built on problematic data foundations.
Consider three common data issues that undermine AI governance:
Unclear source of truth. When the same information exists in multiple places with conflicting values, AI systems cannot determine which version is correct. Is the customer’s contact information the record in your CRM, the spreadsheet your sales team maintains, or the data in your marketing platform? If these differ, the AI tool processing that data will produce inconsistent results.
Inconsistent lexicon. Different teams often use the same terms to mean different things. What exactly is a “qualified lead” in your organisation? If marketing and sales define this differently, AI trained on historical data will learn conflicting patterns. The resulting predictions may be technically accurate based on the data, but practically meaningless for decision-making.
Undefined signal hierarchy. When multiple data sources provide conflicting information, what takes priority? If a customer’s email marketing preferences differ between your CRM and your email platform, which does the AI trust? Without clear rules, AI systems make arbitrary choices that may not align with customer expectations or compliance requirements.
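The signal-hierarchy idea above can be made explicit in code. The following is a minimal sketch, not a production pattern: the source names, field names, and priority order are all illustrative assumptions, and a real implementation would sit inside your integration or data platform.

```python
# Minimal sketch: resolving conflicting records with an explicit signal hierarchy.
# Source names, fields, and the priority order are illustrative assumptions.

SOURCE_PRIORITY = ["crm", "email_platform", "sales_spreadsheet"]  # highest trust first

def resolve(records: dict[str, dict]) -> dict:
    """Merge per-source records field by field, trusting higher-priority
    sources and falling back to lower ones only where a field is missing."""
    merged: dict = {}
    for source in reversed(SOURCE_PRIORITY):  # apply lowest priority first
        updates = records.get(source, {})
        merged.update({k: v for k, v in updates.items() if v is not None})
    return merged

conflicting = {
    "crm":            {"email": "a@example.com", "opt_in": None},
    "email_platform": {"email": "old@example.com", "opt_in": False},
}
print(resolve(conflicting))  # → {'email': 'a@example.com', 'opt_in': False}
```

The point of the sketch is the explicit priority list: once the hierarchy is written down, the AI tool's behaviour when sources disagree becomes a deliberate decision rather than an arbitrary one.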
These are not AI problems. They are data governance problems that AI makes visible. Organisations that lack data clarity will struggle with AI governance regardless of what controls they implement.
In AI research, the “alignment problem” refers to ensuring AI systems pursue goals that match human intentions. In practical business governance, alignment problems are more mundane but equally important.
AI governance fails when different parts of the organisation have different understandings of what AI is doing, why, and how it should be overseen.
Marketing may believe the lead scoring AI helps them prioritise outreach effectively. Sales may distrust the scores because they do not match their intuition about lead quality. Operations may not know lead scoring is happening at all. Finance may be unaware there are compliance implications.
This misalignment creates governance gaps. Nobody feels fully responsible because everyone assumes someone else is handling oversight. Problems are not escalated because people do not recognise them as problems. Opportunities to improve are missed because knowledge is siloed.
Effective AI governance requires alignment across the organisation on what AI is being used, what it does, who owns it, and how concerns should be raised. This alignment is a leadership challenge, not a technology challenge.
Many organisations have attempted AI governance initiatives that did not achieve lasting results. Understanding why helps avoid repeating the same patterns.
Project-based thinking. Governance was treated as a project with a start date and end date, rather than an ongoing operational discipline. Once the project was “complete”, attention moved elsewhere, and governance gradually eroded.
Policy without process. Detailed AI policies were written but not embedded in working practices. People knew policies existed but did not change their behaviour because there was no process requiring them to engage with governance requirements.
Technology without ownership. AI tools were deployed without clear accountability. When nobody owns a system, nobody maintains it, monitors it, or responds when it causes problems.
Compliance framing. Governance was positioned as a compliance burden rather than a business capability. People complied minimally when watched and reverted to previous behaviour when attention shifted.
Insufficient training. People were expected to govern AI they did not understand. Without education on what AI does, how it works, and what risks it creates, governance requirements seemed arbitrary rather than purposeful.
Governance that sticks requires embedding practices into ongoing operations, clear ownership with accountability, understanding of purpose, and sustained attention rather than one-time initiatives.
Real-world AI governance failures provide concrete lessons for organisations building their own approaches.
Air Canada’s chatbot liability. In 2024, Air Canada’s customer service chatbot provided incorrect information about bereavement fare refund policies. When the airline was challenged, it argued the chatbot was a “separate legal entity” responsible for its own actions. The tribunal disagreed emphatically. The company was held liable for its chatbot’s statements, establishing that organisations cannot disclaim responsibility for AI that acts on their behalf.
The governance lesson: you are responsible for what your AI says, regardless of whether humans wrote the specific response. Oversight, monitoring, and clear escalation paths are essential.
Royal Free NHS Trust and DeepMind. In 2017, the Information Commissioner found that the Royal Free NHS Foundation Trust had inappropriately shared patient data with DeepMind for an AI project to detect kidney injury. The AI itself was potentially beneficial. The governance failure was sharing 1.6 million patient records without proper consent mechanisms or data protection assessments.
The governance lesson: good intentions do not override governance obligations. Even beneficial AI applications require appropriate data handling, consent, and oversight.
Facial recognition and South Wales Police. In the Bridges case in 2020, the Court of Appeal ruled that South Wales Police’s use of facial recognition technology was unlawful. The technology worked as designed. The failure was insufficient safeguards around its deployment, including inadequate assessment of who might be flagged and lack of clarity on data retention.
The governance lesson: working technology does not equal lawful deployment. Governance must address not just whether AI functions, but whether it is used appropriately.
Across successful implementations, effective AI governance shares several characteristics:
Clarity precedes control. Before implementing controls, organisations achieve clarity on what AI is being used, where, by whom, and for what purpose. You cannot govern what you do not understand.
Ownership is explicit. Every AI capability has a named owner who can answer questions about how it works, what it accesses, and how it is monitored. Ownership without names is not ownership.
Proportionality guides effort. Governance intensity matches risk. Low-risk AI gets lighter oversight. High-risk AI gets closer attention. This prevents both under-governance of significant risks and over-governance that creates unnecessary burden.
Human oversight is meaningful. Review processes exist not as tick-box exercises but as genuine opportunities to catch problems, improve outputs, and maintain accountability. People who review AI outputs have the knowledge and authority to act on what they find.
Culture supports disclosure. People can raise concerns about AI without fear of blame. Problems surface early when the culture rewards identification rather than punishing messengers.
Governance adapts. AI capabilities evolve. Regulations develop. Business needs change. Governance frameworks that worked last year may need adjustment. Regular review and adaptation are built into the approach.
Several questions reliably expose governance weaknesses. If your organisation cannot answer these clearly, you have identified where to focus attention.
Can you list every AI tool and feature currently in use across your organisation? If not, you have a visibility gap.
For each AI system, can you name the person accountable for its appropriate use? If ownership is unclear or diffuse, you have an accountability gap.
Can you explain what data each AI system accesses and on what basis? If data flows are unclear, you have a data governance gap.
Is there a documented process for reviewing AI outputs before they affect customers or decisions? If review is informal or inconsistent, you have an oversight gap.
Do employees know how to raise concerns about AI if something seems wrong? If escalation paths are unclear, you have a culture gap.
Could you demonstrate your AI governance approach to a regulator who asked? If documentation is incomplete, you have an evidence gap.
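Five of the six questions above can be turned into a simple self-check against an AI register. The sketch below is illustrative: the field names and the register structure are assumptions, not a standard, and the visibility gap deliberately cannot be checked in code, because a register only covers the AI you already know about.

```python
# Hypothetical self-check mirroring five of the six diagnostic questions.
# Field names are illustrative; the visibility gap cannot be detected from
# a register, since a register only lists what the organisation knows about.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AIEntry:
    name: str
    owner: str | None = None                          # accountability
    data_sources: list = field(default_factory=list)  # data governance
    review_process: str | None = None                 # oversight
    escalation_path: str | None = None                # culture
    documentation: str | None = None                  # evidence

def gaps(register: list[AIEntry]) -> dict[str, list[str]]:
    """Return each governance gap with the systems that exhibit it."""
    checks = {
        "accountability":  lambda e: e.owner,
        "data governance": lambda e: e.data_sources,
        "oversight":       lambda e: e.review_process,
        "culture":         lambda e: e.escalation_path,
        "evidence":        lambda e: e.documentation,
    }
    found: dict[str, list[str]] = {}
    for entry in register:
        for gap, has_answer in checks.items():
            if not has_answer(entry):
                found.setdefault(gap, []).append(entry.name)
    return found

register = [
    AIEntry("lead_scoring", owner="J. Smith", data_sources=["crm"],
            review_process="weekly sample", escalation_path="ops mailbox",
            documentation="governance wiki"),
    AIEntry("support_chatbot"),  # no owner, review, escalation or docs recorded
]
print(gaps(register))
```

An empty result means every registered system has a named owner, known data sources, a review process, an escalation path, and documentation; anything else tells you exactly where to focus.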
AI governance does not require perfection. It requires honest assessment of current reality and systematic improvement from that baseline.
Most organisations have some governance elements already in place, even if they are not labelled as “AI governance”. Data protection processes, quality assurance reviews, approval workflows, and accountability structures all contribute.
The task is often connecting existing practices into a coherent approach, filling specific gaps, and ensuring the approach adapts as AI use evolves.
Start with visibility. Understand what AI is actually being used. This is the foundation everything else builds on.
Establish ownership. Name specific people responsible for specific capabilities. Accountability without names is not real accountability.
Implement proportionate oversight. Focus human review where risk is highest. Do not try to review everything equally.
Create a sustainable cadence. Build governance into regular operations through quarterly reviews rather than one-time initiatives.
Document as you go. Evidence of governance is almost as important as governance itself. If you cannot demonstrate what you do, you cannot prove appropriate care.
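The proportionate-oversight step above can be sketched as a simple routing rule that maps risk signals to review intensity. The tier thresholds, risk signals, and review actions below are assumptions for illustration, not a recognised standard; real risk classification would follow your regulatory context.

```python
# Illustrative routing rule: match review intensity to risk tier.
# Tier definitions, risk signals, and review actions are assumptions.

REVIEW_POLICY = {
    "high":   {"human_review": "every output",    "cadence": "monthly audit"},
    "medium": {"human_review": "sampled outputs", "cadence": "quarterly audit"},
    "low":    {"human_review": "spot checks",     "cadence": "annual review"},
}

def oversight_for(customer_facing: bool, automated_decision: bool,
                  personal_data: bool) -> dict:
    """Map simple yes/no risk signals to a review policy."""
    score = sum([customer_facing, automated_decision, personal_data])
    tier = "high" if score >= 2 else "medium" if score == 1 else "low"
    return {"tier": tier, **REVIEW_POLICY[tier]}

# A customer-facing tool making automated decisions lands in the high tier.
print(oversight_for(customer_facing=True, automated_decision=True,
                    personal_data=False))
```

Even a crude rule like this beats reviewing everything equally: it makes the trade-off explicit, and the thresholds can be tightened as regulatory expectations or business risk appetite change.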
AI governance is not a destination. It is an ongoing discipline that evolves with your AI use, regulatory expectations, and business needs. The organisations that succeed are those that start where they are, improve systematically, and maintain attention over time.
This article is for general information purposes and does not constitute legal advice. Organisations should seek appropriate professional guidance for their specific circumstances.
AutomateNow helps established businesses build AI governance that works. We start with clarity, not configuration. Visit automatenow.uk to learn more.