"Why isn't my AI project working?"
It's a question many CDOs, CIOs and Data Quality Managers are asking themselves today, after months of investment, testing and model training... only to end up with disappointing results.
And in over 70% of cases, it's not the algorithm that's to blame.
👉 It's the data.
According to several studies, 70% to 80% of AI projects fail: roughly twice the failure rate of traditional IT projects. And this rate continues to rise as AI is deployed in new business use cases.
In AI, one truth persists: garbage in, garbage out.
Artificial intelligence models simply reflect the data they are given. If that data is inaccurate, incomplete or biased, even the most advanced model will produce erroneous, unusable or dangerous results.
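The effect is easy to demonstrate. Here is a minimal sketch (in Python with scikit-learn, on a synthetic dataset, purely for illustration) that trains the same model twice, once on clean labels and once after corrupting 30% of them:

```python
# A minimal illustration of "garbage in, garbage out": the same model,
# trained once on clean labels and once on corrupted ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Corrupt 30% of the training labels to simulate poor data quality.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

for name, labels in [("clean labels", y_train), ("30% noisy labels", noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    print(name, "->", round(accuracy_score(y_test, model.predict(X_test)), 3))
```

On a typical run, the model trained on corrupted labels scores noticeably lower on the same test set: the algorithm is identical, only the data changed.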
Here are the main data flaws encountered in AI projects, illustrated by four high-profile failures:
IBM's Watson for Oncology is a textbook case. Despite a $62 million investment from MD Anderson Cancer Center, the project failed to deliver useful oncology recommendations.
The cause: the model had been trained on hypothetical patient cases rather than actual patient records.
In addition, the opaque, "black box" nature of the system's decisions severely eroded physician confidence, leading to the project's abandonment.
The recruitment tool developed by Amazon was trained on historical hiring data heavily skewed toward men.
As a result, the algorithm systematically downgraded résumés mentioning women's activities or groups, and favored wording associated with masculine language.
After several attempts to correct the bias, the project was ultimately abandoned.
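The mechanism is simple to reproduce. The toy sketch below (the résumé snippets and labels are invented purely for illustration) trains a classifier on historically biased outcomes and inspects what it learns:

```python
# A toy illustration of how historical bias leaks into a model:
# in this (synthetic) training data, résumés containing the word
# "women's" were mostly rejected, so the model learns to penalize it.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    ("captain of the chess club", 1),
    ("led the robotics team", 1),
    ("captain of the women's chess club", 0),   # biased historical outcome
    ("led the women's robotics team", 0),       # biased historical outcome
    ("built a compiler in school", 1),
    ("organized the women's coding society", 0),
]
texts, labels = zip(*resumes)

vec = CountVectorizer()
X = vec.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# The learned weight for "women" is strongly negative: the model has
# encoded the bias of its training data, not candidate quality.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 2))
```

No one programmed the model to discriminate; it simply learned the pattern its training data contained.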
A customer received incorrect information about the refund policy from Air Canada's AI chatbot.
A tribunal found the company legally responsible for the information provided by the bot and ordered it to honor the refund.
This case highlights the concrete legal risks of deploying AI that relies on erroneous data.
Apple deployed a generative AI feature tasked with summarizing news alerts.
The problem: the tool invented information and wrongly attributed it to credible sources such as the BBC.
Facing the controversy, Apple suspended the feature in early 2025 and re-evaluated the way it labels AI-generated content.
Organizations need to treat data as a strategic asset and put a robust Data Quality framework in place before building AI solutions.
This involves profiling, cleansing, deduplicating and continuously monitoring data throughout its lifecycle; the sketch below gives a flavor of what such checks look like in practice.
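As a concrete (and deliberately simplified) illustration, here is what automated data quality checks can look like in Python with pandas; the dataset, rules and thresholds are assumptions for the example, not a prescription:

```python
# A minimal sketch of automated data quality checks, the kind of rules
# a Data Quality framework runs before any data reaches an AI model.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, 5],
    "email": ["a@x.com", None, "b@x.com", "not-an-email", "c@x.com"],
    "age": [34, 29, 29, -3, 51],
})

report = {
    # Completeness: share of non-null values per column
    "completeness": df.notna().mean().round(2).to_dict(),
    # Uniqueness: the primary key must not contain duplicates
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),
    # Validity: simple format and range rules
    "invalid_emails": int((~df["email"].str.contains("@", na=False)).sum()),
    "out_of_range_ages": int((~df["age"].between(0, 120)).sum()),
}
print(report)

# Gate the pipeline: bad records should be fixed, not silently
# passed on to model training.
failed = [name for name, count in report.items()
          if name != "completeness" and count > 0]
print("quality gate:", "FAIL" if failed else "PASS", failed)
```

The same logic scales up: profile every incoming dataset, apply the rules automatically, and block or quarantine records that fail before they ever reach a model.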
At Tale of Data, we help companies secure their AI projects by building a robust foundation of data quality.
Our no-code platform lets you put these controls in place without writing a single line of code.
With Tale of Data, your AI projects finally rest on a reliable, traceable and compliant foundation.
📅 Are you a CDO, CIO or leader of a strategic AI project?
Don't let your data compromise your results.
👉 Book an appointment for a personalized assessment