
Boards and executive teams are under more pressure than ever to understand, govern and capitalize on AI as the technology accelerates. Many organizations are racing to modernize their oversight models, yet the gap between AI ambition and AI readiness keeps widening.
To help leaders cut through the uncertainty, we sat down with Brian Stafford, CEO of Diligent, for his candid view on the state of AI governance. He shared what boards are overlooking, the risks that are emerging faster than most realize and the opportunities leaders can seize right now to stay ahead. These are the kinds of conversations that will shape the agenda at Elevate 2026 in Atlanta. Register now!
Q: What's a blunt truth you'd tell a board or executive team behind closed doors about AI governance and oversight?
A: If AI isn’t already embedded into the core of your board’s operations, you’re not just falling behind — you’re losing.
The challenge has shifted from what AI is to how to use it. Yet only 43% of boards call AI a strategic priority, and just 13% have directors with AI expertise. That's not just a gap; it's a liability.
Your competitors are using it to accelerate decisions, surface risks faster, and operate with a level of insight you simply can’t match manually. If your board’s oversight model still depends on traditional reporting cycles and human‑only analysis, you’re governing with a blindfold on.
Q: What's one thing most organizations are getting wrong or seriously underestimating about AI risk management and compliance?
A: Using AI as an afterthought in risk management and compliance.
Too many teams bolt AI onto traditional processes, relying on manual controls, fragmented reporting, and reactive oversight. Meanwhile, standards can't keep pace with technology, leaving boards guessing at what "good" looks like.
Embedding AI deep into risk management and compliance systems is the answer. It breaks down silos, instantly surfaces emerging threats, and strengthens decision-making. Organizations that continue treating AI as a peripheral tool, rather than a foundational capability, will fall behind.
Q: Looking 12–24 months ahead, what will look materially different in AI governance and board-level AI accountability?
A: Every board will have AI literacy expectations, and ignorance won’t be a defensible position.
Within 12 months, AI will move from a ‘future trend’ section to a standing line item in risk and strategy reports. The market recognizes, almost universally, that AI is now a necessity, not an option. And this shift is consistent worldwide.
Boards will demand insight into where AI is deployed, what data it’s touching, how it’s performing and what could go wrong — the same way they look at cyber or financial risk today. The most forward‑thinking boards will add AI board members to the table: systems that deliver expert‑level perspective on technical risk and regulatory change faster than any human director can.
Q: In 2-3 years, what will be considered table stakes in AI governance that feels optional today?
A: What feels progressive today will become a default expectation. The same way no credible company operates without a cybersecurity program, no organization will be taken seriously without a documented, operationalized AI governance framework. Leading organizations are already running lean governance operations, staying ahead of compliance while freeing talent for strategy.
Customers will demand it, regulators will enforce it, and markets will reward it. The message is simple: early adopters get control, confidence, and speed. Late adopters get scrutiny and surprises.
Q: Where do you see the biggest blind spot between what board leaders think they're doing well and reality in AI oversight?
A: Boards are mistaking AI exposure for AI expertise.
The biggest blind spot I see is that many boards believe they’re effectively overseeing AI because they’ve been briefed on the technology. But they rarely have the depth of expertise needed to challenge management or understand associated risk. Without true AI fluency at the table, boards tend to overestimate their readiness, underestimate the pace of change, and rely too heavily on management’s optimism.
The numbers tell the story: only 13% of boards have directors with AI expertise, and the talent pool with both deep AI knowledge and governance experience is thin. Forced to choose between narrow expertise and broad leadership, boards stall.
That’s why bringing AI expertise into the boardroom — whether through new directors, advisory councils, targeted upskilling, or AI board members — is no longer optional.
Q: Where is the biggest untapped opportunity for boards and executives on AI strategy and governance?
A: The biggest missed opportunity is using AI to improve governance, not just governing AI.
Most conversations are about how to control AI. The real upside is using AI to elevate governance itself – surfacing risks earlier, connecting business units, and giving directors better visibility into what really matters. That includes AI board members analyzing materials 24/7 and flagging risks before they become problems. Boards that use AI‑powered insights to clarify risk and elevate governance will move faster and with more confidence than their peers.
Q: What's one high-impact, realistic action leaders in AI governance could take in the next 90 days to get ahead?
A: Map your top 10 AI use cases and make them board‑visible.
You don’t need a 200‑page AI strategy to start. Identify where AI is actually in production today – the systems, data, vendors and decisions it touches – and connect each to an owner, a risk category and a simple success metric. This builds the foundation: understand where and how AI is used, know what data it touches, ensure the right controls are in place.
Then put that in front of the board as a living inventory that gets updated quarterly. A clear, shared AI use‑case map is the fastest way to turn AI governance from theory into practice.
Q: What are the smartest organizations already doing in AI governance and responsible AI deployment that others haven't caught up to yet?
A: The smartest organizations are treating AI like a cross‑functional program, not a side project in IT or legal.
They’ve stood up AI councils that connect risk, legal, compliance, security, data, HR and the business directly to the board. They’re embedding AI into their GRC systems, audit trails, whistleblowing channels and analytics so they can see issues early and respond fast.
The most advanced are testing AI board members that bring technical expertise and monitor regulatory changes in real time, delivering in minutes analysis that would take traditional human advisors days.
Leaders have already embedded AI into their governance practice – everyone else is still experimenting.
To hear more insights from Brian Stafford and become an AI leader who elevates governance, clarifies risk and drives transformation, register for Elevate in Atlanta, Georgia, April 22–24, 2026.