The reality inside organisations
Across industries and levels of maturity, the same data quality issues surface again and again. Individually, they’re manageable. Collectively, they shape whether people trust the data they work with and whether AI is seen as useful or risky.
AI can introduce new challenges of its own, but it also exposes existing data quality issues far more quickly and at much greater scale.
Inconsistent definitions of the same metric
One of the most persistent issues inside organisations is inconsistent definitions of the same business metric.
Metrics like revenue, retention, utilisation, or completion often sound straightforward at the executive level. In practice, they are frequently defined differently across systems and teams, usually for legitimate reasons such as timing, scope, or reporting purpose.
Logic varies across systems. Cut-off dates change. Certain records are included in one place and excluded in another.
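To make the pattern concrete, here is a minimal sketch of how two defensible definitions of the same metric diverge. All names, fields, and figures are hypothetical: a finance team recognises revenue when an order settles and excludes refunds, while a sales team counts revenue when an order is booked.

```python
from datetime import date

# Hypothetical order records; fields and values are illustrative only.
orders = [
    {"id": 1, "amount": 100.0, "booked": date(2024, 3, 29),
     "settled": date(2024, 3, 30), "refunded": False},
    {"id": 2, "amount": 250.0, "booked": date(2024, 3, 31),
     "settled": date(2024, 3, 31), "refunded": True},
    {"id": 3, "amount": 75.0, "booked": date(2024, 4, 1),
     "settled": date(2024, 4, 1), "refunded": False},
]

q1_end = date(2024, 3, 31)

# Finance: revenue recognised when settled, refunds excluded.
finance_q1 = sum(o["amount"] for o in orders
                 if o["settled"] <= q1_end and not o["refunded"])

# Sales: revenue counted when booked, refunds still included.
sales_q1 = sum(o["amount"] for o in orders if o["booked"] <= q1_end)

print(finance_q1)  # 100.0
print(sales_q1)    # 350.0
```

Both teams can justify their number, yet "Q1 revenue" differs by a factor of three and someone must reconcile the gap every reporting cycle.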
I’ve seen this repeatedly in regulated environments, particularly where reporting and compliance are involved. Each team had a defensible definition, but the organisation spent significant time reconciling differences rather than improving insight.
At first, this feels like a reporting inconvenience. People reconcile numbers and move on.
The hidden cost shows up when AI enters the picture.
AI models don’t reconcile differences on their own. They learn patterns from the data and context they’re given. When definitions are inconsistent or poorly defined, those inconsistencies are absorbed and reflected back at scale.
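A sketch of what "absorbed and reflected back" can look like in training data, using a hypothetical "active customer" flag that two systems define differently. When the rows are merged naively, the model sees the same customer with contradictory labels, which it can only treat as noise.

```python
# Hypothetical: two source systems define "active" differently.
# CRM: active = logged in within the last 90 days.
# Billing: active = paid an invoice this month.
crm_row = {"customer": "cust_42", "last_login_days": 45, "active": True}
billing_row = {"customer": "cust_42", "last_login_days": 45, "active": False}

# Naively merged training set: identical features, opposite labels.
training_rows = [crm_row, billing_row]

labels = {row["active"] for row in training_rows}
contradictory = len(labels) > 1

print(contradictory)  # True: the model learns noise where there was
                      # never an agreed definition to learn
```

No model diagnostic will flag this as a definition problem; it simply shows up as unexplained error and eroding confidence in the outputs.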
Outputs become harder to explain. Confidence drops. Leaders start questioning results, not because the model is wrong, but because the underlying logic was never aligned in the first place.
What looked manageable in dashboards becomes damaging in automated decision-making.
In Part 2, we’ll look at how downstream fixes and silent upstream changes quietly erode trust, and why those patterns become liabilities once AI is introduced.
