Artificial Intelligence

Researchers have found a way to make AI go insane


The greatest vulnerability of AI may lie within AI itself, according to an intriguing research paper by scientists at Rice University and Stanford University. The study suggests that repeatedly feeding AI-generated content back into AI models leads to a steady deterioration in the quality of their outputs. This self-consuming training loop, which the authors compare to an ouroboros, gradually degrades the model's capabilities.

In essence, the absence of "fresh real data" (original human work rather than AI-generated material) hurts the model's performance. The researchers explain that training the model on synthetic content over repeated generations causes the rare, under-represented information in the tails of the training distribution to vanish. The model then comes to rely on increasingly convergent, less diverse data, until its outputs eventually collapse.
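This tail-collapse dynamic can be illustrated with a toy simulation (a hypothetical sketch for intuition, not the paper's actual experiment): start with a Zipf-like distribution over symbols, and at each "generation" re-estimate the distribution from a finite sample of the previous model's own output. Any symbol that fails to appear in a generation's sample gets probability zero and can never return, so the set of symbols the model knows can only shrink over time.

```python
import random
from collections import Counter

def resample_distribution(probs, n_samples, rng):
    """Draw n_samples from probs, then re-estimate probs from the counts.

    This mimics one generation of training purely on synthetic data:
    the new "model" only knows what appeared in its own sample.
    """
    symbols = list(probs)
    weights = [probs[s] for s in symbols]
    sample = rng.choices(symbols, weights=weights, k=n_samples)
    counts = Counter(sample)
    return {s: c / n_samples for s, c in counts.items()}

def support_sizes(generations=10, n_samples=100, n_symbols=26, seed=0):
    rng = random.Random(seed)
    # Zipf-like starting distribution: symbol i has weight 1/(i+1),
    # so most probability mass sits on a few common symbols and the
    # rest live in a long, thin tail.
    z = sum(1 / (i + 1) for i in range(n_symbols))
    probs = {i: (1 / (i + 1)) / z for i in range(n_symbols)}
    sizes = [len(probs)]
    for _ in range(generations):
        probs = resample_distribution(probs, n_samples, rng)
        sizes.append(len(probs))  # symbols still carrying any mass
    return sizes

sizes = support_sizes()
print(sizes)  # non-increasing: once a symbol vanishes, it is gone forever
```

With a sample size that is small relative to the number of symbols, the rarest symbols typically drop out within a few generations, mirroring the paper's observation that low-probability "tail" content disappears first. Mixing a fraction of fresh real data back in at each generation is what breaks this one-way ratchet.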

It is important to note that the paper is still awaiting peer review, so the findings should be interpreted with caution. Nevertheless, the results are striking: according to the study, the model under examination showed signs of deterioration after only five rounds of training on synthetic content.

If it is indeed true that AI can disrupt AI, the real-world implications are profound. The numerous legal cases currently pending against OpenAI highlight how prevalent training AI models through extensive web scraping has become. Moreover, the prevailing assumption is that the more data a model is fed, the better it performs, so AI developers are constantly hunting for additional training material. As the internet becomes increasingly saturated with AI-generated content, however, keeping training datasets free of synthetic data becomes progressively harder. This puts the quality and integrity of the open web in a precarious position, especially given how widely AI is now used for content generation and search, both by the general public and by major companies like Google.

The findings also prompt us to question how useful these systems really are in the absence of human input. Based on the results presented, the answer appears to be: not very. Paradoxically, that realization offers a glimmer of hope, for it suggests that machines cannot entirely replace human beings; their capabilities are finite!



Researchers have found a way to make AI go insane was originally published in Artificial Intelligence in Plain English on Medium.

By: Ronit Batra
Title: Researchers have found a way to make AI go insane
Sourced From: ai.plainenglish.io/researchers-have-found-a-way-to-make-ai-go-insane-df190f52abf5?source=rss—-78d064101951—4
Published Date: Thu, 20 Jul 2023 08:16:57 GMT
