Why ‘Artificial Intelligence’ Continues to Grow in Complexity

Cats on the Moon? Safe Sun Gazing? Eating Stones for Health?

Surprisingly, some of the latest information served up by Google in the US includes claims that cats have traveled to the moon, that it is safe to stare at the sun for 15 minutes if you have dark skin, and that eating a small stone every day is good for your health. The feature responsible, called AI Overviews, builds Google’s AI model, Gemini, into its search engine and displays AI-generated answers above the traditional search results. Users cannot opt out.

Unexpected Virality and Issues with AI Overviews

AI Overviews quickly went viral online, not for their usefulness but for their comedy. Ask for a list of fruits ending in ‘um’, for example, and you get answers such as ‘Applum, Strawberrum and Coconut’, an example of what the AI field calls a ‘hallucination’.

Google’s AI Struggles and High-Profile Missteps

Despite its $2 trillion market capitalization and its pick of top talent, Google has struggled with AI. Its first major AI initiative, the Bard chatbot, launched in February last year and promptly made headlines for its inaccuracies. During its debut demonstration, Bard incorrectly claimed that the James Webb Space Telescope had taken the first pictures of a planet outside our solar system, a blunder that wiped roughly $100 billion off Google’s market value.

In February, Google added image generation to Gemini, its AI model that produces both text and images. The feature turned out historically absurd images, such as black Nazi soldiers and a female South Asian pope, even when prompted for historically accurate depictions. The Economist termed this a ‘well-intentioned mistake’.

AI’s Fundamental Flaws and Training Issues

The root of these errors lies in how the models are trained. Rather than learning from a carefully curated dataset, they ingest vast amounts of largely unfiltered data scraped from the web. That haul inevitably includes inaccurate, satirical and sarcastic posts from forums such as Reddit, which then resurface in the models’ answers. The old computing adage ‘garbage in, garbage out’ aptly describes the situation.

Impact on Practical Applications

The inconsistency and hallucinations of generative AI severely limit its practical applications, especially in commercial and professional settings. A study of generative AI in legal work, for instance, found that the checking needed to catch and correct the AI’s errors cancels out any time saved.

Compounding Errors and Synthetic Data Risks

AI’s tendency to generate false information compounds the problems already on the internet. Major AI companies have admitted to training on synthetic data, that is, data generated by AI itself, when they run out of real-world material to scrape. Researchers warn that this practice can lead to ‘model collapse’, in which models trained on their own output become unstable and unreliable. Professor Nigel Shadbolt has warned of this risk, likening it to the inbreeding problems of the Spanish Habsburg dynasty, a phenomenon dubbed ‘Habsburg AI’.
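To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (not from the column, and deliberately simplified): the ‘model’ is nothing more than a fitted mean and standard deviation, and each new generation is trained only on samples produced by the previous one. With no fresh real data entering the loop, the fitted distribution drifts away from the original, a crude analogue of model collapse.

import random
import statistics

# Toy illustration of recursive training on synthetic data ("model collapse").
# The "model" is just a fitted normal distribution: a mean and a standard deviation.

def fit(samples):
    # "Training": estimate the distribution's parameters from data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, stdev, n):
    # "Inference": produce synthetic data from the fitted model.
    return [random.gauss(mean, stdev) for _ in range(n)]

random.seed(0)
real_data = [random.gauss(0.0, 1.0) for _ in range(30)]  # the real-world data
mean, stdev = fit(real_data)
print(f"gen  0: mean={mean:+.3f}  stdev={stdev:.3f}")

for generation in range(1, 21):
    synthetic = generate(mean, stdev, 30)  # the model's own output
    mean, stdev = fit(synthetic)           # retrain on synthetic data only
    print(f"gen {generation:2d}: mean={mean:+.3f}  stdev={stdev:.3f}")

# Because each generation sees only a finite sample of the previous model's
# output, sampling noise is baked into the next model and never corrected.
# Run long enough, the estimated spread tends to decay and the tails of the
# original distribution are forgotten.

The point of the sketch is that nothing malicious needs to happen: simply cutting real data out of the training loop is enough for errors to accumulate.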

Self-Perpetuating Falsehoods

The problem is compounded as AI tools start treating one another’s errors as fact. When Google erroneously claimed that there are no African countries beginning with the letter ‘K’ (overlooking Kenya), its answer appears to have drawn on a web page quoting an incorrect ChatGPT exchange that made the same mistake. Researchers have dubbed this self-consuming cycle ‘Model Autophagy Disorder’ (MAD), an allusion to mad cow disease, which spread when cattle were fed the processed remains of other cattle.

The Growing Challenge of AI-Caused Misinformation

Since the release of ChatGPT in November 2022, generative AI has polluted the web with inaccuracies. Addressing this will be a significant challenge. While the potential benefits of AI remain largely unrealized, the rising costs and complications are becoming increasingly apparent.

Andrew Orlowski is a weekly columnist for The Telegraph. Follow him on X: @AndrewOrlowski.
