Your data governance is the real liability. The LLM just reveals it.
A note from Brijal Patel, VP, Data Governance, AI & Analytics at Vericence.
A 2025 MIT study on enterprise GenAI adoption found what most of the market already
suspected. The vast majority of deployments aren’t delivering measurable P&L impact yet. The authors called it the GenAI Divide. Billions invested. Very little coming back.
Everyone wants to blame the model.
The model isn’t the problem. Your data governance is.
The governance most enterprises built was designed for dashboards.
Across years of enterprise data work, both individually and as a firm, we've watched the same crash-and-burn sequence repeat at more than one Fortune 100 client. From on-prem warehouse migrations through modern Lakehouse transformations, the hurdle was rarely the cloud. It was the shadow data nobody would claim ownership of until it broke a pipeline. The companies that won didn't have better tools. They had better metadata discipline before the tools showed up.
Now the same movie is running at ten times the speed. And most governance programs are still
optimized for a world that no longer exists: quarterly data quality reviews, annual stewardship certifications, dictionaries nobody reads, and lineage captured by hand six weeks after the fact.
That cadence was fine when a BI analyst was on the other end of it, waiting for Monday’s report. It falls apart the moment the other end is an LLM generating responses in real time for a patient, a claims adjuster, or a regulator.
LLMs don’t forgive ambiguity. They amplify it.
A dashboard with a mislabeled field produces a wrong number in a deck. Annoying. Fixable.
An LLM with a mislabeled field produces a confident, articulate, beautifully formatted lie at scale, with your company’s name on it.
That is not a model failure. That is a governance failure wearing a better coat.
What actually separates the 5% from the 95%
Three moves. Every time.
• Make metadata active, not documented. The data should tell the model its own quality
level before the model answers. Lineage, sensitivity, and quality update at the cadence
of model inference, not the cadence of audit season.
• Put a human name on every data product. “The data team owns it” is not ownership.
A data product isn’t a table. It’s a bundle of data, metadata, and SLAs. Every critical one
needs a named person accountable when an LLM hallucinates off of it.
• Govern the last mile. Traditional governance stops at the warehouse door. AI
governance follows the data into the prompt. The same customer table that’s fine for a
loyalty dashboard is a compliance landmine for a GenAI concierge.
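The first two moves can be sketched in a few lines. This is a hypothetical illustration, not a reference implementation: the field names, the 0.9 quality threshold, and the owner address are all invented for the example. The point is that the data product carries its own lineage, owner, and quality signal, and that signal is checked at inference time, not at audit season.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: a data product is a bundle of data, metadata,
# and SLAs, with a named human owner -- not just a table.
@dataclass
class DataProduct:
    name: str
    owner: str                 # a person, not "the data team"
    quality_score: float       # 0.0-1.0, refreshed by pipeline checks
    last_validated: datetime
    freshness_sla: timedelta   # how stale is too stale for inference

    def inference_signal(self) -> dict:
        """Metadata the model sees *before* it answers."""
        age = datetime.now(timezone.utc) - self.last_validated
        stale = age > self.freshness_sla
        return {
            "source": self.name,
            "owner": self.owner,
            "quality_score": self.quality_score,
            "stale": stale,
            "safe_for_inference": self.quality_score >= 0.9 and not stale,
        }

claims = DataProduct(
    name="claims_history",
    owner="j.alvarez@example.com",   # invented owner for illustration
    quality_score=0.97,
    last_validated=datetime.now(timezone.utc) - timedelta(hours=2),
    freshness_sla=timedelta(hours=24),
)

signal = claims.inference_signal()
if not signal["safe_for_inference"]:
    # Refuse or caveat the answer instead of generating a confident lie.
    print(f"Blocked: escalate to {claims.owner}")
```

The design choice that matters: the check runs on every inference, and failure routes to a named person, not to a ticket queue.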
What we’re seeing in the market
When we’re in the room with Healthcare COOs, the fear isn’t the AI. It’s the thought of an LLM
making a clinical recommendation based on a 2014 data schema that hasn’t been updated
since the Obama administration.
Swap “clinical recommendation” for “credit decision” and a risk officer at a regional bank will tell you the same story, with a frozen risk model instead of a stale schema.
That anxiety sits underneath every AI strategy deck coming across our desks these days.
Healthcare COO, financial services CRO, industrial CIO. The sentence we keep hearing is
almost always the same. “We have a governance program. It just doesn’t seem to help us with AI.”
It doesn’t help because it wasn’t built to. It was built for reporting. You don’t retrofit a RAG
pipeline onto a governance program designed for quarterly reviews. You rebuild the operating
model around a new unit of value: the data product that feeds a model.
The good news: you don’t need a two-year transformation. You can lay the foundation in ninety days.
1. A domain data maturity assessment honest enough to surface the gaps no one wants to
name.
2. A use-case-driven prioritization so you stop boiling the ocean.
3. An active metadata layer that turns governance from a document into a signal.
Ninety days isn’t a full transformation. It’s the foundation and one instrumented data product.
The rest of the enterprise takes longer. But if the trajectory doesn’t start bending in the first
quarter, it probably never will.
Done right, this stops being a defensive program. It becomes the thing that turns AI from a cost center into something that actually earns.
The question worth asking your board
Most executive teams are asking the wrong question. They’re asking, “What’s our AI strategy?”
The better question: if one of our LLMs hallucinated tomorrow and cost us a customer, a claim, or a headline, could we trace it back to the governed data asset that failed us? In hours, not weeks?
If the answer is no, you don’t have a governance problem. You have a liability.
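One way to make that question answerable in hours is to record, at inference time, which governed assets fed each response. A minimal sketch, with invented identifiers and an in-memory log standing in for whatever store the pipeline actually uses:

```python
from collections import defaultdict

# Hypothetical sketch: log which governed data products fed each LLM
# response, so a bad answer traces back in minutes, not weeks.
lineage_log: dict[str, list[str]] = defaultdict(list)

def record_inference(response_id: str, source_assets: list[str]) -> None:
    """Called by the RAG pipeline whenever a response is generated."""
    lineage_log[response_id].extend(source_assets)

def trace(response_id: str) -> list[str]:
    """Given a flagged response, return the governed assets that fed it."""
    return lineage_log.get(response_id, [])

# A GenAI concierge answers a customer question from two data products.
record_inference("resp-4812", ["customer_loyalty_v3", "claims_history"])

# The answer turns out to be wrong. Which asset failed us?
flagged_sources = trace("resp-4812")
```

In production this log lives wherever your observability stack does; what matters is that it is written on every response, because lineage reconstructed after the incident is the six-weeks-after-the-fact problem all over again.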
