In most large organizations, data quality has been identified as a critical issue. It directly impacts the reliability of reporting, regulatory compliance, operational performance and, increasingly, the ability to automate and deploy artificial intelligence at scale.
Yet, despite significant investment, skilled teams and mature tools, many Data Quality initiatives never come to fruition, or remain permanently stuck at the pilot stage.
This article analyses, from an IT perspective, why Data Quality projects fail to scale: not for technical or budgetary reasons, but because of structural, organizational and decision-related bottlenecks that are often underestimated.
These blockages are analyzed in depth in our white paper "Solving the 5 data blockages preventing IT from deploying Data Quality", a framing document designed to go beyond observation, detail the mechanisms at work and propose concrete levers to secure industrialization. It is aimed at CDOs, CIOs and Data Governance managers.
For a long time, data quality was approached as a subject for continuous improvement. Imperfect but usable data was enough, as long as its use remained local, with little automation and little exposure.
This model is no longer valid.
As soon as a rule is automated, a control runs systematically or a correction is logged, the status of the data changes.
It is no longer simply used to produce results: it must be explained, justified and defended.
In concrete terms, every rule applied, every correction made and every indicator produced can be questioned: by a business line, a control function, an auditor or a regulator.
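By way of illustration, here is a minimal sketch of what that change of status looks like once a control runs systematically and corrections are logged. The rule, field names and log format are hypothetical, chosen for the example rather than taken from the white paper:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class CorrectionLog:
    """Keeps every automated correction as an explicit, timestamped fact."""
    entries: list = field(default_factory=list)

    def record(self, record_id: str, rule: str, before, after, reason: str):
        # Each entry names the rule that fired: that is what makes the
        # correction explainable to a business line, an auditor or a
        # regulator, and therefore also contestable.
        self.entries.append({
            "record_id": record_id,
            "rule": rule,
            "before": before,
            "after": after,
            "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })


def normalize_country(customers, log):
    """Systematic control: country codes must be upper-case ISO alpha-2."""
    for c in customers:
        raw = c.get("country", "")
        fixed = raw.strip().upper()[:2]
        if fixed != raw:
            log.record(c["id"], "country_iso2_uppercase", raw, fixed,
                       "auto-normalized during ingestion")
            c["country"] = fixed
    return customers


log = CorrectionLog()
normalize_country([{"id": "C042", "country": " fr "}], log)
print(log.entries)  # the trace that must now be explained and defended
```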
The question is no longer simply how to improve data quality, but how to industrialize it without unnecessarily exposing IT and data teams.
It is precisely at this point that bottlenecks arise.
In many companies, data is considered "good enough to work with". As long as usage remains local, this compromise holds.
The problem begins when IT needs to industrialize. Formalizing rules, automating controls and tracking corrections transforms an implicit practice into an official mechanism. At this point, simple questions become paralyzing: Which definition is authoritative? Which rule applies everywhere? Who decides in the event of disagreement?
According to the McKinsey Global Institute, data and IT teams spend up to 30% of their time verifying or reconciling existing data, a sign that the source of truth is often still treated on a case-by-case basis, rather than as an industrialized capability.
As long as data cannot be justified, explained and defended, IT is reluctant to lock it into a system. Not out of conservatism, but because freezing it without a framework means institutionalizing a gray area in which rules exist but are neither explicitly owned nor collectively validated.
These issues are rarely addressed in a structured way, even though they directly determine the ability to industrialize. The white paper details approaches for making data justifiable and defensible, while protecting IT in the face of business challenges and audit requirements.
When data is not defensible, the organization does not resist head-on. It adapts.
In regulated environments, not making a decision, not freezing a repository or not automating a correction means preserving a margin of interpretation often perceived as less risky than an explicit decision. As long as inconsistencies are corrected manually, on a case-by-case basis, the system holds together.
The tipping point comes when IT needs to industrialize. A formalized rule becomes visible, explainable and open to challenge. Responsibility, until then diffuse, becomes explicit.
In this context, maintaining a degree of uncertainty becomes rational, because it prevents a clear rule from being attributed to any single actor.
The bottleneck is not technical. It is a matter of decision-making.
If this situation sounds familiar, it is because it has become structural in many organizations. The white paper shows how to make these areas of uncertainty explicit, identify what can be stabilized and gradually introduce rules without requiring irreversible decisions from the outset.
Find out more in the white paper
Business rules almost always exist. They live in practices, files, legacy scripts and shared habits. The problem is not their absence, but their status.
An implicit rule works as long as it remains local. A formalized rule becomes a standard. It must be explained, maintained and, above all, accepted. Defining an active customer or a valid order may seem trivial as long as the data is only consulted. But as soon as this definition triggers invoicing, regulatory reporting or an automated decision, the rule becomes binding.
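As a simple illustration, once formalized, such a definition can be as small as the sketch below. The field names and the 365-day threshold are assumptions made for the example; agreeing on that threshold is precisely the decision someone must own:

```python
from datetime import date, timedelta
from typing import Optional

# Assumption for the example: "active" means an order in the last 365 days
# or an open contract. Choosing this window is the real business decision.
ACTIVE_WINDOW_DAYS = 365


def is_active_customer(last_order_date: date, has_open_contract: bool,
                       today: Optional[date] = None) -> bool:
    """Formalized rule: once invoicing, regulatory reporting or an automated
    decision calls this function, the definition stops being a local habit
    and becomes binding."""
    today = today or date.today()
    recently_ordered = (today - last_order_date) <= timedelta(days=ACTIVE_WINDOW_DAYS)
    return recently_ordered or has_open_contract


# Example: a customer with no recent order but an open contract is still active.
print(is_active_customer(date(2023, 1, 15), has_open_contract=True))
```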
It is precisely at this point that IT finds itself exposed. To formalize is to carry a business risk. To refuse is to block industrialization.
Gartner identifies the absence of enforceable, formalized business rules as one of the main factors in the failure of data governance programs.
This type of blockage is rarely a problem of method; it reflects a balance of responsibilities that is difficult to formalize. The white paper analyzes how some organizations manage to make business rules explicit, versionable and shareable, without exposing a single actor or making governance prematurely rigid.
Access the complete framework in the white paper
On paper, everyone wants traceability. In reality, it raises a dreaded question: are we capable of explaining every data transformation, including legacy ones?
Many of today's decisions rest on mechanisms put in place years ago. As long as they remain implicit, they operate in the shadows. When full traceability makes them visible, they become explainable, and therefore contestable.
With the GDPR and the AI Act, this tension is heightened. It is not traceability as such that worries teams, but raw or illegible traceability: purely technical, stripped of context and difficult for non-specialists to understand, the kind that turns an audit into an interrogation.
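To make the distinction concrete, here is a hedged sketch of the same event traced two ways. Names, identifiers and values are invented for the illustration; only the contrast matters:

```python
# Raw, purely technical trace: accurate, but unreadable for an auditor
# and impossible to defend without the person who wrote the job.
raw_trace = "UPDATE stg_cust SET seg='B2' WHERE id=4711  -- job 8842, step 3"

# Readable, decision-oriented trace: the same change, contextualized so that
# a non-specialist can understand what was decided, by whom and why.
readable_trace = {
    "record": "customer 4711",
    "what_changed": "segment reclassified from 'B1' to 'B2'",
    "rule": "segmentation_v3: annual revenue crossed the agreed threshold",
    "decided_by": "business rule owned by Sales Ops, validated with Compliance",
    "why_it_matters": "drives the pricing tier used in invoicing",
}

print(readable_trace["what_changed"])
```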
Forrester points out that the fear of unexplainable decisions is a major barrier to the advanced automation of IT processes.
Many teams use this type of situation as an entry point to realign IT, data and compliance. The white paper explains how to structure readable, decision-oriented traceability, capable of meeting regulatory requirements without turning the audit into a finger-pointing exercise.
Go to the white paper
When data cannot be justified, governed or traced, responsibility inevitably becomes fragmented.
Business teams own the meaning of data, but not the tooling. IT controls the flows, but does not define them. Data teams arbitrate, often by default, for lack of an explicit decision-making framework.
Risk and compliance add legitimate constraints. But no one has a clear mandate to say: "this rule is ours, in production".
In this vacuum, IT becomes the last line of defence. Not out of a desire to control, but because it is IT that will be held responsible in the event of an incident.
IBM estimates that 15-20% of decision-making capacity is lost due to manual validations linked to data quality and governance.
Before looking for an operational solution, some organizations start by making a clear, shared diagnosis. The white paper proposes a framework for distributing responsibility for Data Quality, structuring collaboration between business, data and IT, and moving away from informal validations in the long term.
Find out more in the white paper
Taken in isolation, each of these blockages may seem manageable. Together, they form a coherent system that explains why so many Data Quality projects remain blocked.
It's not the quality of the data that's at issue, but the decision-making conditions required for its industrialization.
Until these conditions are clarified, deploying a Data Quality platform is tantamount to exposing IT without securing the organization.
The white paper "Solving the 5 data blockages that prevent IT from deploying Data Quality" offers a structured framework for understanding these blockages in depth, and analyzing how successful organizations manage to overcome them without creating new risks.
It is aimed at CDOs, Data Governance managers, Heads of Data Quality, CIOs and CTOs faced with situations where everyone knows that something needs to be done, but no one wants to be the sole bearer of an enforceable rule.
Framing the discussion before industrialization: access the white paper