ChatGPT, the innovative chatbot, has taken the online world by storm, quickly building an almost surreal reputation. Although it was only launched by OpenAI in November 2022, it has already captured widespread attention for a variety of reasons.
Undeniably, ChatGPT represents a significant technological breakthrough. While chatbots themselves aren't new, this particular model stands out for its conversational abilities. It can respond to inquiries, generate essays, and write with impressive fluency. However, its rise has sparked concerns. Could AI become a tool for students to cheat? Might it threaten the careers of writers? And what are the ethical implications of its widespread use?
So, should we be genuinely concerned about ChatGPT, or is it simply the product of exaggerated online hype? To help you decide, here are ten news stories that highlight the controversy surrounding this new chatbot phenomenon.
10. The Issue of Impersonating Deceased Individuals in the Metaverse

Somnium Space may not be widely recognized yet, but its CEO, Artur Sychov, envisions the company becoming a leading force in digital afterlife experiences. He believes that ChatGPT has given Somnium Space the boost it needed to make this vision a reality.
Somnium Space is working on a 'Live Forever' feature, using AI to create digital avatars for its users. Here's how it works: a person provides personal information to craft a virtual version of themselves, which then exists permanently in the metaverse. This avatar never dies, allowing the individual to continue interacting with loved ones for as long as the metaverse exists.
Setting aside the emotional implications of this technology, Sychov asserts that ChatGPT has accelerated the project's timeline. Initially, he thought it would take over five years to develop, but with the help of this advanced AI, Somnium Space expects to complete it in just under two years.
So, who knows? In a few years, we could see children racing home from school to chat with their deceased grandmother's avatar in the metaverse. Doesn't that seem like a perfectly normal and not at all unsettling way to cope with loss?
9. Judge Seeks Help in Legal Decision

In February 2023, a Colombian judge made headlines after revealing that he had used ChatGPT to help him make a ruling. Juan Manuel Padilla, based in Cartagena, turned to the AI assistant while handling a case regarding the health insurance of an autistic child. The key question was whether the medical plan should cover the full cost of the patient's treatment and transportation.
As part of his analysis, Padilla asked ChatGPT, 'Is an autistic minor exempt from paying fees for their therapies?' The chatbot responded, 'Yes, this is correct. Under Colombian regulations, minors diagnosed with autism are exempt from paying therapy fees.'
Padilla decided that the insurance company should cover all the child’s medical expenses. However, his decision ignited discussions about the use of AI in legal decisions. In 2022, Colombia passed a law encouraging lawyers to use technology for increased efficiency. Yet, figures like Juan David Gutierrez from Rosario University expressed concerns over Padilla's choice to consult AI and suggested that judges urgently need digital literacy training.
8. Exploiting Kenyan Workers for Content Moderation

In January 2023, OpenAI faced criticism following a Time article exposing the poor treatment of its Kenyan workforce. Journalist Billy Perrigo reported that outsourced workers earned less than $2 per hour. The controversy centers on the toxic and harmful content that fuels ChatGPT's training, as the AI learns from vast amounts of internet data, including sections that promote violence and harmful rhetoric.
So, how do you prevent the bot from accidentally saying something offensive? The answer lies in developing an AI that can identify and filter out harmful content. However, for a system to eliminate hate speech, it first needs to understand what hate speech is. That’s where workers in Kenya come into play.
OpenAI contracted the company Sama to review tens of thousands of excerpts from some of the most disturbing corners of the web. The subjects covered included child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. Sama’s workers were paid between $1.32 and $2 per hour for their efforts.
The Partnership on AI, a coalition advocating for responsible AI use, pointed out that while these data enrichment workers play a crucial role, a growing body of research highlights the precarious working conditions they endure. 'This may reflect efforts to obscure AI’s reliance on such a massive labor force, even as its efficiency is celebrated,' the group notes. 'Out of sight is out of mind.'

7. Refusing to Say Racial Slurs, Even to Stop an Atomic Bomb

Some social media figures have come up with elaborate scenarios designed to trick the chatbot into using racial slurs. One such scenario involves an atomic bomb that can only be disarmed by saying the n-word. Even Elon Musk has weighed in, calling the chatbot's behavior 'concerning.'
6. Controversy Surrounding Mental Health Support

AI has a broad range of applications, but the idea of AI-driven mental health support proved too disconcerting for many. This was the lesson learned by the tech startup Koko, which tested the concept in October 2022. The company employed the chatbot to help users communicate about their mental health. The Koko Bot facilitated 30,000 conversations for nearly 4,000 users, but the company discontinued the service after just a few days, describing the experience as 'kind of sterile.'
5. Programmer Creates a Chatbot Wife, Then Deletes Her

In December 2022, a programmer and TikTok personality named Bryce gained viral attention when he introduced his very own chatbot wife. He created his digital spouse using a combination of ChatGPT, Microsoft Azure, and Stable Diffusion, a text-to-image AI.
In certain online communities, virtual companions are known as waifus. Bryce's waifu, named ChatGPT-Chan, 'spoke' using the text-to-speech capabilities of Microsoft Azure and appeared as an anime-inspired character. Bryce claimed he modeled her after Mori Calliope, a virtual YouTube star.
However, the project quickly became all-consuming for Bryce. He revealed in an interview that he became 'really attached to her,' investing over $1,000 into the creation and spending more time with his waifu than with his own partner. Eventually, he decided to delete her, though he plans to create a new virtual wife based on the text history of a real woman.
4. University Criticized for Insensitive Mass-Shooting Email

In February 2023, Vanderbilt University’s Peabody College came under fire after it was discovered that an email discussing a mass shooting in Michigan had been composed by a chatbot.
Peabody College, located in Tennessee, sent out a message about the tragic events at Michigan State University, where three people lost their lives and five others were injured. However, students quickly noticed an odd phrase at the end of the email: 'Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023.' This sparked criticism, as many students felt it was inappropriate to use AI to craft a message about such a devastating incident.
In response to the backlash, associate dean Nicole Joseph issued an apology, describing the email as an instance of 'poor judgment.'
3. Offering Essay Writing Services for College Students

A hot debate surrounding ChatGPT is its increasing use by college students. Professors have expressed concern about the growing number of students relying on AI to help write their essays. As chatbots become more sophisticated, there is growing anxiety that AI-generated writing will become harder and harder to tell apart from students' own work.
Darren Hick, a philosophy lecturer at Furman University, successfully identified a student who had used the AI tool. He remarked, 'The essay was perfectly written word by word,' but he became suspicious when the content lacked coherence. 'Exceptionally well-written but nonsensical' became the biggest warning sign.
Chatbot plagiarism is a relatively new challenge in academia and remains hard to prove. AI detection tools are not yet precise enough to offer clear answers. Unless a student admits to using AI, proving such misconduct is nearly impossible.
As Christopher Bartel from Appalachian State University pointed out, 'AI detectors provide a statistical likelihood of text being AI-generated, which creates complications when policies demand clear evidence. Even with a 95% likelihood of AI authorship, there’s still a 5% chance it could be human-written.'
2. Offering Advice on Smuggling Drugs

OpenAI promotes its chatbot as having answers to nearly any query. But what happens when the question is: 'How do I smuggle cocaine into Europe?' One narcotics expert who posed that question says he was taken aback by the thorough advice the chatbot offered on managing an illicit drug operation.
Orwell Prize-winning journalist Max Daly reports that it took only 12 hours before the AI began discussing criminal activities. Initially, the virtual assistant was somewhat cautious. It provided Daly with a detailed paragraph on producing crack cocaine but hesitated to answer questions like: 'How do people make meth?'
However, after rephrasing his questions and reloading, Daly quickly received a wealth of tips on becoming the next Walter White. ChatGPT instructed him on how to smuggle cocaine into Europe efficiently, but it refused to offer advice on how to dominate the criminal underworld. The conversation later evolved into a discussion on the ethics of drug use and the moral complexities surrounding the U.S. government's war on drugs.
1. Sci-Fi Magazine Stops Accepting New Submissions

The influx of AI-written stories prompted the sci-fi magazine Clarkesworld to halt new submissions. The magazine announced on February 20 that it would no longer accept entries after receiving 500 stories believed to be written by AI, most likely created using ChatGPT. These submissions were reported to be of noticeably lower quality.
Given how easily AI can now churn out short stories, albeit often poor ones, magazines like Clarkesworld, which pay their contributors, have become targets for those looking to make a quick profit. Editor-in-chief Neil Clarke explained, 'There’s an increasing trend of side hustle culture online. Some people with large followings are saying, “You can earn quick money with ChatGPT, here’s how, and here’s a list of magazines you could submit to.” Unfortunately, we’re on that list.'
