THE POLICY EDGE

Views are personal.

Expert Commentary

A background note can be accessed here: UNESCO: Corporate AI Governance Gaps


The UNESCO findings show that only 1 in 7 companies can identify a specific individual responsible for ethical risks across the AI lifecycle, despite widespread adoption of AI systems. How does this absence of clearly defined accountability roles affect the feasibility of enforcing responsible AI within corporate governance frameworks?

The UNESCO findings show that accountability in corporate AI governance is largely nominal. While 40 percent of firms report board-level oversight, only 13 percent of S&P 500 boards have AI expertise and only 14 percent discuss AI regularly. This creates a condition in which liability exists without the operational capacity to discharge it. The literature on board overload helps explain why: competing ESG, CSR, and fiduciary mandates dilute attention, leaving AI oversight underdeveloped. Under emerging regimes such as the EU AI Act, this gap becomes legally consequential.

At the operational level, the absence of defined accountability disrupts escalation. Only 12.4 percent of firms have human oversight policies, and fewer than one third of those conducting impact assessments link them to escalation pathways. This allows risks such as algorithmic bias to become embedded silently, surfacing as technical failures rather than governance breakdowns.

Existing governance structures are not designed for opaque, evolving AI systems. Without clear ownership across the lifecycle, oversight remains partial and reactive. Where internal accountability weakens, enforcement shifts outward to regulators and investors. Clear, company-level mandates for boards can help align long-term strategy with the management of systemic AI risk.


While a significant share of firms report having AI strategies, only a small fraction are aligned with recognised governance frameworks or have implemented human oversight and risk assessment mechanisms. What explains this persistent gap between stated ethical commitments and operational practice in corporate AI governance?

The gap between stated AI ethics commitments and operational practice reflects a structural separation between signalling and implementation. UNESCO/AICDI data shows that while firms articulate AI strategies, governance mechanisms such as human oversight, impact assessment, and dedicated teams remain unevenly embedded. Commitment operates at the level of principle; implementation requires institutional infrastructure.

Three factors sustain this divide. First, symbolic compliance is inexpensive and often sufficient in environments without enforceable benchmarks. Public principles signal intent to investors and regulators without requiring systems such as model registries, named accountability roles, or escalation mechanisms.

Second, implementation capacity varies. Large multinational firms can absorb the cost of specialised teams and compliance systems, while SMEs and MSMEs face resource constraints. This creates uneven adoption, where governance maturity tracks firm capability rather than baseline responsibility.

Third, fragmented global standards weaken accountability triggers. Evidence from external governance shows that when enforcement is inconsistent, firms prioritise visible commitments over embedded processes. In India, director networks may partially offset this by diffusing AI risk awareness across boards and encouraging more grounded governance responses.


The report highlights systemic gaps in areas such as training data quality, workforce preparedness, and environmental impact assessment, suggesting that firms are not internalising the full risks of AI deployment. How should policymakers design regulatory frameworks that ensure firms internalise these risks without stifling innovation?

The findings point to systematic risk externalisation: firms deploy AI while shifting associated risks to workers, communities, and regulators. Core governance tools remain absent across much of the corporate landscape, particularly outside technology sectors, limiting the ability to track, assess, and respond to system-level impacts.

An instructive comparison comes from sustainable finance. There too, risks were externalised until mandatory disclosure created measurable accountability. The implication for AI is that voluntary principles alone do not alter incentives. Enforceable disclosure, paired with auditability and eventual liability, anchors risk within firms’ decision-making structures.

Policy design can follow a sequencing approach. Soft norms, such as UNESCO's Recommendation on the Ethics of Artificial Intelligence and NIST's AI Risk Management Framework, establish a shared vocabulary. These can then be codified into mandatory reporting, audit, and incident disclosure requirements, allowing governance systems to mature before stricter liability regimes are introduced.

External governance can reinforce this transition. Active shareholders, through board engagement and earnings calls, can push firms toward operationalisation. At a jurisdictional level, treating corporate governance as part of national security strategy allows domestic regulation, such as supply chain oversight, to extend accountability across borders. For India, this also implies calibrating governance thresholds, director qualifications, and audit norms to local risk conditions, including AI-specific training and committee design.

