Influencers Of Multan | IOM

GPT-5.2 investigation raises AI source concerns

Illustration showing ChatGPT GPT-5.2 investigation with Grokipedia logo, AI credibility documents and fake news concerns

A recent investigation has raised serious questions about the authenticity of AI-generated content, after claims surfaced that the new ChatGPT model GPT-5.2 has been sourcing information from Grokipedia, an AI-powered online encyclopedia launched by Elon Musk's xAI in 2025. The findings have sparked intense debate among researchers, journalists, and digital media experts, who are increasingly concerned about how artificial intelligence gathers and presents information.

According to the investigation, ChatGPT GPT-5.2 referenced Grokipedia in nine out of more than a dozen test inquiries. These responses reportedly included discussions of sensitive political and historical topics, such as Iran's political structure and issues connected to Holocaust denial. The frequent appearance of Grokipedia in AI-generated answers has raised alarms, especially since many users rely heavily on AI tools for quick, trusted information.

A report cited by The Guardian highlighted that Grokipedia appears to be part of the model's broader information pool. Unlike Wikipedia, Grokipedia is fully AI-generated, with no human editorial oversight, which increases the risk that bias, factual errors, and unverified claims are repeated across platforms.

Experts also noted that ChatGPT did not reference Grokipedia when answering questions on widely disputed topics such as the January 6 Capitol attack or HIV/AIDS misinformation. However, Grokipedia appeared more frequently in responses to obscure or complex queries, where the AI made stronger claims that went beyond well-established facts, including alleged links between Iranian companies and political leadership.

This controversy is not limited to one platform. Other large language models, including Anthropic's Claude, have also reportedly cited Grokipedia in certain responses. OpenAI has stated that its systems draw on a wide range of data sources and apply safety filters to reduce the spread of harmful or misleading information.

Industry experts stress that rigorous source evaluation is essential as AI continues to shape how people access news and knowledge. Without stricter safeguards, reliance on questionable sources could unintentionally mislead users and reinforce misinformation at scale.
