Opinion: As a regular user of AI, particularly for search, I’ve watched AI systems such as Perplexity outperform Google. Even so, traditional search appears to be deteriorating, and AI-enhanced search is following suit. When I look for precise data, such as market statistics, the results are often drawn from unreliable sources. Instead of authentic figures from official reports, I get plausible-looking numbers that are close to the real ones but wrong. This isn’t isolated to one AI search tool; it shows up across multiple platforms.

The core issue is an old computing adage, Garbage In/Garbage Out (GIGO), in a new form that AI researchers call model collapse. Model collapse occurs when AI systems are trained on their own outputs and degrade in accuracy and reliability as errors compound. Generation after generation, the data distorts until, as a 2024 Nature paper puts it, the model becomes ‘poisoned with its own projection of reality.’

Three main factors drive this collapse: error accumulation, the loss of rare data from training sets, and feedback loops that reinforce narrow patterns and entrench bias. The AI company Aquant summarizes the issue plainly: ‘Training AI on its own outputs causes reality drift.’
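To make the first two mechanisms concrete, here is a minimal sketch. It is a toy Gaussian analogy of my own, not the Nature paper’s actual experiment: each ‘generation’ fits a simple model to the previous generation’s synthetic outputs, so estimation error compounds and the distribution’s tails thin out.

```python
import numpy as np

rng = np.random.default_rng(42)
N_SAMPLES = 50        # small per-generation training sets make the drift visible
N_GENERATIONS = 100

# Generation 0 trains on "real" data: a standard normal with rare tail events.
data = rng.normal(loc=0.0, scale=1.0, size=N_SAMPLES)

for gen in range(1, N_GENERATIONS + 1):
    # "Train" a toy model: fit a Gaussian to whatever data this generation sees.
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on this model's synthetic outputs.
    data = rng.normal(loc=mu, scale=sigma, size=N_SAMPLES)
    if gen % 20 == 0:
        print(f"gen {gen:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")
```

In most runs the fitted sigma drifts away from the true value and events beyond a few standard deviations effectively vanish: error accumulation and the loss of rare data, in miniature.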

A Bloomberg Research study found that multiple widely used models produced unsafe or unreliable outputs when given harmful prompts, underscoring how real GIGO is. Retrieval-Augmented Generation (RAG), which pulls live, external data into an AI model at query time, was expected to help. It does reduce hallucinations, but the study found it can also raise other risks, such as leaking sensitive data and producing misleading analyses.
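For readers unfamiliar with the mechanics, here is a minimal sketch of the retrieval step. The corpus, the hashed bag-of-words embedding, and the prompt format are all invented for illustration; production pipelines use learned embedding models, a vector database, and an actual LLM call in place of the final print.

```python
import hashlib
import numpy as np

# Toy corpus standing in for the live, external sources a RAG system queries.
CORPUS = [
    "The audited 2024 annual report puts Q1 revenue at $4.2 billion.",
    "A forum comment guesses Q1 2024 revenue was about $5 billion.",
    "The company employs roughly 12,000 people worldwide.",
]

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Hashed bag-of-words vector; real systems use a learned embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        token = token.strip(".,?!")
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages with the highest cosine similarity."""
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in CORPUS]
    top = np.argsort(scores)[::-1][:k]
    return [CORPUS[i] for i in top]

query = "What was Q1 2024 revenue?"
context = "\n".join(retrieve(query))
# Retrieved passages are injected into the prompt verbatim. Nothing here
# distinguishes the audited report from the forum guess; that judgment is
# left to ranking heuristics and the model itself.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The sketch also shows where the new risks enter: whatever the retriever surfaces, trustworthy or not, flows straight into the model’s context.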

In other words, for all its promise, RAG can increase the odds of biased answers and data leaks, as Bloomberg’s AI researchers note. Meanwhile, AI’s real value could erode as reliance on it grows while its faults are ignored. Companies chasing operational efficiency may substitute AI-generated content for rigorously produced data, lowering the quality bar for AI outputs and feeding the same polluted material back into future training sets.

All told, the real question isn’t whether AI’s deterioration is happening; it’s when the evidence becomes undeniable. Continued reliance on, and investment in, degrading models could eventually render AI solutions ineffective. For now, the indicators of decline are subtle, but practitioners may soon confront far more pronounced setbacks in their AI work.