Building an AI governance framework: Oversight, accountability and legal guardrails

April 29, 2026
13 min read

In this article

  • Intro
  • What is an AI governance framework?
  • Why AI governance is now a board-level responsibility
  • AI governance framework best practices
  • The legal risks of AI in governance workflows
  • Managing AI-generated records and retention
  • Protecting privilege in the age of generative AI
  • The boardroom culture question
  • What directors should be asking about AI
  • Why responsible AI governance requires the right infrastructure
  • The role of AI-powered technology in AI governance
  • Conclusion: Governance in the AI era
  • Frequently asked questions
Nithya B. Das

General Manager, Governance and Chief Legal Officer

Artificial intelligence is rapidly reshaping how organizations analyze information and make decisions at board level. As a result, many companies are recognizing the need for a clear AI governance framework to ensure these tools are used responsibly.

From drafting board materials to assessing enterprise risk, AI is becoming increasingly embedded in leadership and legal workflows. These technologies have the potential to overhaul time-consuming processes — eliminating manual tasks, accelerating analysis and giving teams more time to focus on strategic priorities.

At the same time, their use raises complex questions around confidentiality, privilege, data retention and oversight. What guardrails should legal teams put around AI? Which AI-generated materials could become part of the corporate record? And what does responsible AI governance actually look like in practice?

A well-designed AI governance framework helps organizations balance innovation with oversight — allowing them to benefit from AI while maintaining strong controls around sensitive information and corporate decision-making.

These issues were the focus of a recent episode of the Corporate Director Podcast, where Nithya Das of Diligent spoke with Elena Hera and Kaitlin Betancourt of Goodwin about the legal guardrails organizations need as AI adoption accelerates.

Their message was clear: AI governance is quickly becoming a core component of the board’s duty of care.

What is an AI governance framework? 

An AI governance framework is the set of policies, oversight structures and operational processes an organization uses to manage how artificial intelligence systems are deployed, monitored and controlled.

A robust AI governance framework typically includes:

  • Acceptable-use policies that define how employees may use AI tools
  • Clear ownership and accountability for AI risk management
  • Processes for evaluating new AI systems and vendors
  • Documentation and reporting mechanisms for board oversight
  • Safeguards to protect sensitive or privileged information

In other words, an AI governance framework ensures that organizations can benefit from AI innovation while maintaining responsible oversight and risk management.

For boards, this framework provides the structure needed to ensure AI systems are deployed responsibly, with clear safeguards for security, risk management and oversight.

Why AI governance is now a board-level responsibility

Boards have long been responsible for overseeing technology strategy and risk. AI represents an inflection point in that responsibility. Unlike earlier enterprise technologies, AI systems can generate outputs, shape decisions and introduce new forms of risk at scale. As a result, effective oversight requires more than general awareness — it demands deeper visibility into how AI is being used across the organization.

AI systems now influence critical areas such as:

  • customer-facing services
  • financial reporting and forecasting
  • compliance monitoring
  • strategic analysis

Because these applications can materially affect business outcomes, directors must understand not only where AI is being used, but also how the associated risks are managed.

Directors should focus on material AI use cases — systems that meaningfully affect customers, regulatory compliance, safety or financial performance. These applications warrant sustained board attention and documented oversight. 

Equally important is the cadence of that oversight. AI systems evolve quickly, meaning governance cannot be episodic or reactive. Boards benefit from structured reporting that includes performance indicators, risk metrics and incident updates. Over time, this allows directors to identify trends, challenge assumptions and ensure management’s controls are working in practice.

AI governance framework best practices 

For boards to exercise meaningful oversight, organizations must establish a clear AI governance framework at the management level. This framework provides the operational infrastructure that helps organizations identify and mitigate AI risks early, while ensuring emerging issues are escalated quickly when necessary.

Several core elements have emerged as best practices, including:

AI acceptable-use policies 

An AI acceptable-use policy is increasingly considered table stakes. These policies clarify how employees may interact with AI tools, which platforms are approved and what types of information may be entered into those systems.

Without clear guidance, employees may inadvertently expose sensitive or privileged information to external platforms.

Defined ownership and accountability 

AI governance cannot be effective if responsibility is diffuse. Many organizations are establishing cross-functional AI governance committees that include leaders such as the chief legal officer, chief information security officer and chief technology officer. These committees typically operate under formal charters that define responsibilities, reporting lines and escalation thresholds.

This structure ensures AI risks are monitored across functions rather than siloed within a single department.

AI use case evaluation and registers 

Companies also need processes for evaluating new AI tools and use cases. Before deployment, organizations should assess factors such as data sensitivity, regulatory implications and potential customer impact. Many companies now maintain a centralized register of AI applications, allowing leadership to quickly understand where and how the technology is being used across the enterprise.
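For illustration only, a register entry can be modeled as a simple structured record. The sketch below is a hypothetical example in Python; the AIUseCaseEntry class, its field names and its example values are assumptions rather than a prescribed schema, and a real register would follow the organization’s own risk taxonomy and tooling.

```python
# Hypothetical sketch of a centralized AI use case register entry.
# All field names and example values are illustrative assumptions, not a standard schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class AIUseCaseEntry:
    name: str                        # e.g., "Meeting summary drafting"
    owner: str                       # accountable executive or function
    vendor: str                      # AI tool or platform provider
    data_sensitivity: str            # "public", "internal", "confidential" or "privileged"
    regulatory_impact: bool          # touches regulated processes or reporting
    customer_facing: bool            # outputs reach customers
    material: bool                   # warrants sustained board attention
    approved: Optional[date] = None  # date the use case cleared internal review


register = [
    AIUseCaseEntry(
        name="Meeting summary drafting",
        owner="Corporate Secretary",
        vendor="Approved enterprise AI platform",
        data_sensitivity="confidential",
        regulatory_impact=False,
        customer_facing=False,
        material=True,
        approved=date(2026, 1, 15),
    ),
]

# Surface the material use cases that should appear in board-level reporting.
board_report = [entry.name for entry in register if entry.material]
print(board_report)
```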

AI is already becoming part of boardroom workflows. According to the 2026 What Directors Think report from the Diligent Institute, 76% of directors say they are using AI in their board work — yet only 20% report that their organization has a formal policy guiding its use.

This gap highlights why clear governance policies matter. Without defined guardrails, directors and employees may turn to consumer AI tools that lack enterprise-grade security or clear data protections. Establishing approved tools and policies not only improves security and compliance — it also helps organizations ensure AI is used consistently, responsibly and in alignment with governance standards.

The legal risks of AI in governance workflows

One key issue is discoverability. If AI is used to generate board materials, summaries or strategic analysis, those materials — and potentially the prompts used to create them — may become part of the corporate record. In litigation, they could be subject to discovery.

Organizations should therefore assume that any AI-generated material that meaningfully informs a board decision could ultimately become part of the corporate record. This may include not only final outputs such as summaries or reports, but also inputs used to generate them — including prompts, uploaded drafts, background memoranda and other working materials.

This raises important questions about how organizations manage drafts, prompts, transcripts and other intermediate artifacts created during AI-assisted work. Companies therefore need clear policies that define how AI-generated materials are retained, classified and stored.

Managing AI-generated records and retention

To manage these risks effectively, organizations should establish clear and defensible data retention practices for AI-assisted work.

Key considerations include:

  • Confirm vendor retention practices before deploying AI tools, including whether inputs are retained, how long they are stored and whether they are used for model training.
  • Classify AI artifacts by materiality. Materials that reach the board or inform formal decisions should typically be retained according to standard corporate record schedules. Drafting-stage materials such as prompts or interim drafts may follow shorter retention periods if those policies are explicit and auditable.
  • Update legal hold and e-discovery procedures to ensure AI artifacts such as prompt histories, logs and outputs are preserved when litigation or regulatory obligations arise.
  • Treat AI-generated drafts as provisional. AI outputs should be reviewed and approved by a human decision-maker — ideally within the legal function — before being circulated or stored as part of the corporate record.
  • Label drafts clearly. AI-generated drafts should be marked as preliminary or subject to revision so they are not later interpreted as final conclusions.

The goal is not to avoid creating AI-generated content. Rather, it is to ensure that what gets created and retained reflects the same governance discipline organizations already apply to other sensitive corporate records.
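As a rough illustration of that classification discipline, the sketch below maps an AI artifact to a retention category based on whether it reached the board, informed a formal decision or is subject to a legal hold. The function, category names and example period are hypothetical assumptions; actual retention decisions belong to the organization’s records policy and counsel.

```python
# Minimal sketch of classifying AI artifacts by materiality for retention purposes.
# Category names and the example period are assumptions, not a recommended schedule;
# actual schedules should come from the organization's records policy and counsel.

RETENTION_GUIDANCE = {
    "preserve": "retain until the legal hold is released",
    "board_record": "standard corporate record schedule",
    "working_draft": "shorter period (e.g., 90 days) if the policy is explicit and auditable",
}


def classify_artifact(reaches_board: bool, informs_decision: bool, under_legal_hold: bool) -> str:
    """Return a retention category for an AI-generated artifact."""
    if under_legal_hold:
        # Legal hold and e-discovery obligations override normal retention schedules.
        return "preserve"
    if reaches_board or informs_decision:
        # Material artifacts follow the same discipline as other corporate records.
        return "board_record"
    # Drafting-stage artifacts such as prompts, transcripts and interim drafts.
    return "working_draft"


category = classify_artifact(reaches_board=True, informs_decision=True, under_legal_hold=False)
print(category, "->", RETENTION_GUIDANCE[category])
```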

Protecting privilege in the age of generative AI 

Another emerging risk relates to attorney-client privilege. Recent legal developments have highlighted how using commercial AI tools to generate legal content may complicate privilege claims if the data shared with the tool is not adequately protected. Preserving privilege in an AI-assisted environment requires clear operational guardrails. Organizations should treat AI used in legal workflows with the same discipline applied to any privileged communication channel — ensuring that tools, users and workflows are structured in ways that protect confidentiality and maintain defensible privilege claims.

To mitigate these risks, organizations should:

  • Use enterprise-grade AI tools for privileged work. Consumer-grade AI tools should not be used with privileged or confidential information. Enterprise platforms with contractual confidentiality protections and restrictions on training data provide a more defensible foundation.
  • Audit vendor terms of service and privacy policies. Organizations should review whether AI vendors retain inputs, use data for training or reserve rights to disclose information to third parties.
  • Ensure counsel directs AI-assisted legal work. AI use involving privileged analysis should be initiated and supervised by counsel rather than undertaken independently by business teams.
  • Document attorney direction where appropriate. When non-lawyers use AI tools at counsel’s direction, that context should be documented in internal records or prompts to preserve the legal context of the work.
  • Separate privileged and non-privileged analysis. Privileged legal analysis should not be combined with general business discussions in the same AI session or output.
  • Label and restrict distribution of privileged materials. AI-generated content prepared under attorney direction should be labelled appropriately and shared only with individuals who have a legitimate need to review it.
  • Train personnel on responsible AI use. Employees, executives and board members should understand the difference between consumer and enterprise tools and the conditions under which AI use could undermine privilege protections.

The boardroom culture question 

AI can also influence the dynamics of board deliberation itself. Some organizations are experimenting with AI tools that record meetings or automatically generate minutes. While these capabilities may improve efficiency and documentation, they also introduce governance trade-offs.

Recording discussions verbatim could create discoverable records of unedited conversations and potentially discourage candid debate among directors.

For this reason, organizations considering these tools should establish clear protocols around consent, access controls and retention policies — and have open conversations with directors about whether such practices align with the board’s culture. 

These protocols should also address how AI-generated transcripts or summaries are stored, who can access them and how privilege or discovery risks will be managed if those records later become part of the corporate record.

What directors should be asking about AI 

As AI becomes more embedded in everyday operations and decision making across the organization, directors should ensure they are asking the right questions of management.

These include:

  • Which AI systems are material to our business strategy or risk profile?
  • What policies govern employee use of AI tools?
  • How are AI risks integrated into our enterprise risk management framework?
  • What safeguards protect privileged legal communications?
  • How are AI prompts, outputs and related materials retained?

Asking these questions helps boards demonstrate informed oversight while ensuring AI adoption aligns with the organization’s risk appetite and governance standards.

Why responsible AI governance requires the right infrastructure 

Ultimately, responsible AI adoption cannot rely on ad hoc processes or disconnected tools.

Effective governance requires infrastructure that supports secure collaboration, transparent oversight and defensible documentation. Boards must be able to review information confidently, maintain clear records and ensure sensitive data remains protected.

Just as important, organizations need systems that allow AI to be used within controlled environments rather than through consumer tools that may expose sensitive information or create unmanaged records.

At a minimum, AI platforms used for governance or legal workflows should operate under clear confidentiality protections that define how data is used, stored and disclosed. If a platform cannot guarantee those protections, it should not be used for sensitive governance or legal work.

Without the right infrastructure in place, even well-designed AI governance policies can be difficult to enforce in day-to-day board and legal workflows.

The role of AI-powered technology in AI governance 

An AI governance framework is only effective if it can be executed in day-to-day board and legal workflows, as well as in the broader use of AI across the organization. That’s where AI-powered governance technology like Diligent Boards becomes critical: it provides the secure infrastructure to apply AI to sensitive board information without undermining confidentiality, privilege or regulatory expectations.

Instead of directors and in-house counsel moving materials into consumer AI tools, a governance platform creates a controlled environment where prompts, drafts and final documents remain within a single, permissions-based system. This allows organizations to set clear access controls, retain defensible audit trails and distinguish working drafts from the official corporate record — all of which are essential in discovery and regulatory scrutiny.

The right technology also helps preserve privilege. When AI capabilities are embedded in a secure governance platform, data does not become training material for public models, user inputs are encrypted and segregated, and workflows can keep a “lawyer in the loop” by design. At the same time, technology does not replace governance. Policies, oversight and accountability must still be defined and maintained by leadership and the board. What the right platform provides is the ability to operationalize those guardrails — ensuring that approved AI tools, secure workflows and governance policies are consistently applied in practice.

In this way, AI-powered governance technology supports responsible AI adoption by helping organizations execute the policies and safeguards defined in their AI governance framework.

Conclusion: Governance in the AI era 

AI is already changing the tempo of governance. Boards are moving beyond reactive oversight toward a more predictive model — one that leverages data and analytics to identify risks and opportunities earlier.

At the same time, governing the use of AI itself is becoming a core responsibility for boards and leadership teams. As these technologies become embedded across enterprise workflows, organizations must ensure they are deployed responsibly, securely and in alignment with governance standards. An effective AI governance framework provides the structure to make that possible.

AI may transform governance processes — but human judgment remains the final safeguard. The organizations that succeed in the AI era will be those that pair technological innovation with thoughtful oversight, strong governance frameworks and leaders willing to ask the right questions.

Enhance board management with AI

Learn more about how AI is transforming corporate governance in our comprehensive guide.


Frequently asked questions

Why do boards need an AI governance framework?

Boards need an AI governance framework to ensure AI systems are used responsibly and aligned with the organization’s risk tolerance and legal obligations. As AI influences business operations and decision-making, directors must demonstrate informed oversight as part of their fiduciary duty.

What risks should an AI governance framework address?

A strong AI governance framework should address risks such as:

  • data privacy and confidentiality
  • regulatory compliance
  • bias and ethical concerns
  • cybersecurity threats
  • inaccurate or misleading AI outputs
  • improper use of sensitive or privileged information

Managing these risks helps organizations adopt AI responsibly while maintaining stakeholder trust.

How should organizations manage AI-generated records?

Organizations should establish clear policies for how AI-generated materials are classified, stored and retained. This includes considering whether prompts, drafts and outputs used to create board materials could become part of the corporate record and potentially subject to discovery.

How can companies protect privileged information when using AI?

To protect attorney-client privilege, organizations should use secure AI platforms, limit access to sensitive materials and ensure legal professionals review AI-generated legal content before it is shared or stored.

How can organizations implement an AI governance framework?

Organizations typically begin by establishing AI usage policies, assigning oversight responsibilities and integrating AI risk management into existing governance structures. Many companies also create cross-functional AI governance committees and maintain registers of AI tools used across the organization.
