Governance Architecture Series · Article 4

AI Governance Is Not a Data Problem

Why governing AI inside domains produces blind spots, and what the architectural gap actually looks like

By Lenna Ndoko

Founder, Institute for Cross-Domain Governance

Governance Leader, Subject Matter Expert, and Practitioner

Published in Cross-Domain Governance

A model goes into production. The data governance team reviews lineage and quality. The security team assesses the model's access controls. The privacy team checks for personal data in the training set. The compliance team maps it to applicable regulatory obligations. The vendor management team reviews the third-party components embedded in the pipeline.

Every domain does its job. Every domain issues a green rating.

And still, nobody can answer the question that matters most. What happens when this model fails across all of them at once?

That is not a hypothetical. It is the condition many enterprises are operating in right now. And it is why AI governance, despite years of frameworks, toolkits, and regulatory guidance, keeps producing blind spots instead of clarity.

The problem is structural. Most enterprise governance frameworks were designed to manage risks within domains first, and to coordinate between them second. AI systems reverse that order. Their risks often form across domains first, and only later appear within a single domain's control structure.

That is the architectural challenge AI governance now faces.

The Framework We Trust Most Does Not Have a Cell for This

COBIT is one of the most rigorous governance frameworks in the world. It has been refined across decades, adopted by thousands of organizations, and endorsed by regulators. A senior governance professional encountering COBIT for the first time would recognize immediately that the people who built it understood the problem space.

Look at how it is organized. Data governance has a column. Security management has a column. Vendor portfolio management has a column. Compliance and assurance have defined structures. AI strategy and technology governance appear in structured capability areas.

Each domain is distinct, structured, and internally coherent.

That design is not a flaw. It reflects a long tradition in enterprise governance. Complex organizations require clear accountability, and domains provide the structure for assigning ownership of controls, policies, and oversight responsibilities.

Within those boundaries, COBIT works extremely well.

But consider a different question.

Where in the governance architecture do we track what happens when a data integrity issue in one domain triggers a regulatory exposure in another, while a vendor dependency in a third amplifies both?

Most frameworks assume that coordination between domains will address situations like this. Integration efforts, enterprise risk management functions, and cross-functional governance committees are intended to provide connective tissue.

Those mechanisms are valuable. They improve communication and oversight.

But they operate primarily at the level of information sharing. They do not always create architectural visibility into the risks that form in the intersections themselves.

This is an important distinction. Integration improves how domains talk to each other. Architecture determines what the governance system can see.

ISACA recognized this gap. Its 2025 guidance on leveraging COBIT for AI governance moves in the right direction, and that work is worth reading carefully. But even with those extensions, the way most organizations implement COBIT remains domain-centric. The framework is evolving. The implementation architecture, in most enterprises, largely is not.

That is the design gap modern organizations are beginning to encounter.

The 86 Percent Problem

This gap is not purely theoretical. Organizations feel it constantly.

A February 2025 MIT Sloan Management Review article, based on research from AuditBoard, found that more than 86 percent of audit and risk professionals say data silos affect their team's ability to manage risk effectively.

Eighty-six percent.

That statistic reflects a widespread operational experience. Risk signals often appear in one domain while the consequences surface in another. Teams are aware of the problem, and many organizations have responded by investing in unified GRC platforms, enterprise dashboards, and cross-functional governance councils.

These are constructive responses. They improve transparency and reduce friction between teams.

But they assume that the existing domain structure is correct and that the primary problem is connectivity.

In practice, connectivity does not always resolve architectural blind spots. Even when every risk register feeds into a shared platform, the governance picture may still reflect the same domain boundaries that produced the original fragmentation.

Integration improves coordination. It does not necessarily reveal risks that form in the space between governance domains.

What AI Actually Touches

Consider a production AI model used in patient risk scoring at a large health system.

The data governance team owns data quality, lineage, and classification. If the training data contains incomplete records or integrity issues across clinical systems, that is their domain.

The AI strategy or model governance team owns model development standards, deployment criteria, and performance monitoring. If the model drifts or produces unreliable risk scores over time, that is their domain.

The privacy team owns personal health information handling and regulatory alignment. If protected health attributes influence model outputs in ways that create HIPAA exposure, that is their domain.

The security team owns access control and adversarial risk. If the model pipeline can be compromised through a system vulnerability, that is their domain.

The compliance team owns regulatory interpretation and examination readiness. If the model produces risk scores that result in differential treatment regulators consider discriminatory, that is their domain.

The vendor management team owns third-party dependency and contractual oversight. If a component of the model pipeline comes from an external provider with a control weakness, that is their domain.

Six domains. Six ownership structures. Six legitimate areas of responsibility.

Each one necessary. Each one incomplete when viewed in isolation.

When a model failure occurs in practice, it rarely presents as a single domain event. It is more often a compound sequence. A signal appears in one domain, passes through a gap that sits between governance functions, and surfaces as a finding in another.

That sequence is difficult to detect in advance when governance oversight is organized primarily around domain boundaries.

No single domain can reasonably see the full propagation path on its own. Not because the professionals in those domains lack expertise, but because the architecture of oversight distributes the signals across multiple functions.

Governing AI Requires a Different Field of View

The conversation about AI governance has largely focused on what to govern within domains. What data standards apply. Which regulatory frameworks control. How model decisions should be documented. What vendor controls must be established.

These are necessary questions. Every domain must continue strengthening its own governance practices.

But an additional structural question is now emerging.

How does risk move between the domains that each claim partial responsibility for the same system? Where are the handoffs between governance functions? Who monitors the accountability that lives in the overlap? What does a governance failure look like when it forms simultaneously across a data integrity gap, a vendor dependency, and a regulatory threshold?

The data suggests organizations already sense this problem but have not yet named it precisely. The 2025 AI Governance Benchmark Report, based on survey data from 100 senior AI and data leaders, found that 58 percent of leaders cite disconnected systems as the top blocker preventing them from scaling AI responsibly. The same report found that only 14 percent enforce AI assurance at the enterprise level. These are not findings about domain governance quality. They are findings about architectural gaps that domain governance alone cannot close.

Existing frameworks provide pieces of this view. Enterprise risk management functions attempt to aggregate signals. Cross-functional committees review emerging issues. Integrated GRC systems bring data together.

Those mechanisms matter.

What they often lack is a governance design that explicitly treats cross-domain risk as an object of oversight in its own right.

In practice this means asking an additional set of questions before a model goes into production. Not only whether each domain has reviewed it, but whether anyone has mapped what happens if two or more of those reviews surface issues at the same time. Whether escalation paths exist for compound failures. Whether governance structures can see interactions between domains as clearly as they see activity within them.
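That intersection check can be sketched as a simple pre-deployment gate. The sketch below is illustrative only: the function, the domain names, and the review structure are assumptions for this article, not the schema of any real GRC platform. The point is that a compound event is detected by looking across reviews, not within any one of them:

```python
# Hypothetical pre-deployment gate: flag compound (cross-domain) issues.
# All names here are illustrative; no real GRC tool or schema is implied.
from itertools import combinations

def compound_issues(domain_reviews):
    """Return every pair of domains with open issues on the same model.

    domain_reviews maps a domain name to a list of open issue strings.
    A green review is simply an empty list.
    """
    flagged = [d for d, issues in domain_reviews.items() if issues]
    # Any two simultaneous findings form a candidate cross-domain event
    # that needs a named owner before deployment proceeds.
    return list(combinations(sorted(flagged), 2))

reviews = {
    "data_governance": ["training data drift in clinical records"],
    "security": [],
    "privacy": [],
    "vendor_management": ["control gap at a pipeline component vendor"],
    "compliance": [],
    "model_governance": [],
}

for pair in compound_issues(reviews):
    print(f"Compound event needs an owner: {pair[0]} x {pair[1]}")
```

Each domain review above could individually pass a threshold and still produce a pairing that no single domain is positioned to own.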

Most governance programs can answer the column questions. They have spent years building that capability.

The intersection question is newer. And it requires something more specific than better coordination.

It requires treating cross-domain risk scenarios as formal governance objects, defined before deployment, not discovered after a finding. For a patient risk scoring model, that means naming the compound failure paths explicitly. Training data drift coinciding with a vendor control gap, for example, is not a data problem and a vendor problem occurring simultaneously. It is a single cross-domain risk event with its own propagation path, its own potential regulatory consequence, and its own accountability requirement. It should appear in risk registers and model inventories as a first-class entry, with a named owner and a defined escalation path, not buried separately in two domain logs that may never be read together.
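One way to make "first-class entry" concrete is to model the scenario as a single record rather than two separate domain log lines. This is a minimal sketch under stated assumptions: every field name and value below is illustrative, not a standard or a real register schema:

```python
# Minimal sketch of a cross-domain risk event as a first-class record.
# Field names and values are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class CrossDomainRiskEvent:
    name: str
    domains: tuple          # the domains whose intersection forms the risk
    propagation_path: list  # ordered hops the failure takes between domains
    regulatory_consequence: str
    owner: str              # one named accountable owner, not one per domain
    escalation_path: list = field(default_factory=list)

drift_plus_vendor = CrossDomainRiskEvent(
    name="training drift coinciding with vendor control gap",
    domains=("data_governance", "vendor_management", "compliance"),
    propagation_path=[
        "data: drift in clinical training records",
        "vendor: control gap delays detection of the drift",
        "compliance: unreliable risk scores reach regulated decisions",
    ],
    regulatory_consequence="potential discriminatory-treatment exposure",
    owner="model risk officer",
    escalation_path=["model governance board", "enterprise risk committee"],
)

# The event lives in the register once, not split across two domain logs.
print(drift_plus_vendor.name, "->", drift_plus_vendor.owner)
```

The design choice that matters is the single `owner` field: the record cannot be created without someone accountable for the intersection itself.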

That is a structural change. It does not replace domain governance. It adds a layer of visibility that domain governance alone cannot provide.

Every domain green.
And still, no one owns the picture no single domain can see.

The Institute for Cross-Domain Governance advances research and advisory practice focused on risks that form between governance domains. This article is part of an ongoing series examining AI governance as a cross-domain challenge within modern enterprises.

Next in the series: How Governance Decisions Actually Get Made (Coming Soon)



Governance Architecture Series

01. The Governance Visibility Gap: Why Enterprise Governance Architecture Matters More Than Governance Programs

02. The Audit Right You Never Exercise Is Not a Control

03. Security Governance Has Done Its Job. Now the Architecture Has to Evolve.

04. AI Governance Is Not a Data Problem (you are here)

05. How Governance Decisions Actually Get Made

06. Why Frameworks Do Not Produce Visibility

07. Designing the Architecture Layer