
4 minute read
FROM COMPLIANCE TO CULTURE: WHY AI GOVERNANCE NEEDS A HUMAN FACE
By Ibitola Akindehin, AI Governance, Risk and Compliance Specialist
Bridging the gap between checkboxes and compassion in the age of algorithmic power
“Is this system fair?”
“Can this algorithm harm someone?”
“Would I trust this if I didn’t build it?”
These are not just technical questions; they are deeply human ones. And yet so much of AI governance today still feels like it lives in the world of spreadsheets, policy documents and audit trails, as though we can regulate machines without understanding people.
As someone working in governance, risk and compliance (GRC), I’ve seen how organisations often start their AI risk conversations with legal checklists or compliance frameworks. That’s important. But if that’s where it ends, we miss the heart of the issue: AI doesn’t just need governance. It needs empathy. It needs ethics. It needs humans who care not just about regulations, but about real lives.
This article explores the shift from treating AI governance as a compliance function to embedding it into the culture of organisations: a culture where human judgment, inclusion and accountability drive our systems, not just our policies.
WHY COMPLIANCE IS NO LONGER ENOUGH
When companies talk about AI governance, the first thing they often reach for is a policy or a control framework: ISO 42001, the NIST AI RMF or the EU AI Act. These frameworks are essential; they give us structure, accountability and global standards. But here’s the problem: compliance can be met without actual care.
You can check every box and still deploy an AI system that marginalises communities, reinforces bias or creates harm. If the people using these frameworks don’t understand why fairness matters or don’t feel safe raising concerns internally, then compliance creates a false sense of security.
We’ve seen examples already: a major tech firm scoring well on internal AI ethics reports while its facial recognition system misidentified people of colour; a hiring algorithm screening out qualified female applicants, not because of a malicious coder, but because the historical data was biased. None of these failures were purely technical: they were cultural.
Culture shapes how risk is identified, who gets heard in a room and what gets flagged before deployment. If fairness, transparency and responsibility aren’t part of the everyday conversation, no checklist can save us.
CULTURE IS WHAT HAPPENS BETWEEN THE POLICIES
Let’s be honest: governance documents don’t talk back. People do.
An AI ethics guideline sitting in a shared folder won’t challenge a risky decision. But a data scientist who feels empowered to raise ethical flags — that can change outcomes.
Building an AI governance culture means:
• Creating psychological safety: where junior engineers can ask, “Is this right?” without fear.
• Diversifying decision-making teams: because lived experiences shape how risks are seen and understood.
• Encouraging slow thinking: not every AI model needs to be deployed just because it works. Sometimes, the right move is to pause.
One of the companies I worked with had an internal AI red team, a voluntary cross-functional group that tested models for fairness, security and explainability. Its members were not compliance officers; they were curious employees with diverse perspectives. And guess what? They caught risks that formal audits missed, because they were trained to think like humans, not just auditors.
That’s the power of culture: it lives in conversations, not control sheets.
PUTTING THE ‘GOVERN’ BACK INTO GOVERNANCE
Too often, ‘governance’ sounds like something done to teams: top-down reviews, red tape or late-stage vetoes. However, true governance is shared. It’s embedded early. And it empowers.
If you want your AI governance to be more than performative, ask:
• Are governance conversations happening across teams, not just in the legal department?
• Is AI accountability part of onboarding, not just training slides?
• Do users or affected communities have a voice in how systems are built or corrected?
Governance should feel like a conversation, not a constraint. When people understand why certain risks matter, they become champions, not resisters.
In African contexts—where community, trust and lived experience are vital—this shift is even more urgent. The systems we build must align with local realities, not just imported policies. A human-centred approach ensures that AI works with us, not around or above us.
START WITH PEOPLE
AI governance is evolving, and so must we. What worked in the era of data protection audits won’t be enough for a world where autonomous systems make hiring decisions, assess credit risk or interact with children.
We need policies, but more importantly, we need people who care enough to challenge those policies when necessary.
Let’s stop thinking of AI governance as something you ‘install’ and start thinking of it as something you nurture. Let’s build cultures where trust, transparency and empathy aren’t buzzwords but behaviours.
Because, in the end, every algorithm affects a human life. And it’s time our governance systems reflected that truth.
www.linkedin.com/in/ibitola-akindehin
