Over the past few years, artificial intelligence has rapidly evolved across different sectors, with many applications now widely used by both businesses and the general public. However, as with any groundbreaking technology, the potential rewards of AI must be carefully weighed against the risks it presents.
There are debates about whether this balance can truly be achieved. After all, few emerging technologies come with the looming threat of outsmarting the very people who designed them. Fortunately, this has not yet occurred, but AI has already sparked numerous controversies, crimes, and scandals on its journey to where it stands now. It’s likely that more will unfold, but here are ten of the most shocking and fascinating AI controversies to date.
10. The Wizard of Oz Approach
In the late 2010s, several tech companies were found to be using human labor to perform tasks that they claimed, or at least implied, were being carried out by advanced AI. While some referred to this as 'pseudo-AI,' others dubbed it the 'Wizard of Oz technique,' referencing the iconic moment in the classic film when the curtain is pulled back to reveal that the mighty, fiery wizard is simply an old man operating a machine.
This doesn’t automatically mean these companies were acting fraudulently. The usual rationale is to confirm that there is enough demand for a service before investing in automating it. However, the practice has raised privacy concerns: in numerous instances, individuals’ private messages were read by human workers without their knowledge or consent.
9. AI-Driven Interrogation
Given the CIA’s involvement in controversial operations like MK-Ultra, it is not surprising that they were among the first agencies to explore AI. Declassified documents reveal that as early as the 1980s, they were experimenting with rudimentary AI. In 1983, they used a primitive program called 'Analiza' in an attempt to interrogate one of their agents.
The idea behind the program was simple: it would record the agent’s responses and then choose a relevant reply, threat, or question from a set of pre-programmed options. While primitive by today’s standards, it was relatively advanced for its time. Like a human interrogator, it would probe for weaknesses, analyzing factors such as how much the agent spoke and the level of hostility he exhibited.
It remains unclear whether the CIA continues to develop AI-based interrogators. While such methods may be less violent than other interrogation techniques, there are concerns about removing the last shred of human interaction from an already harsh and impersonal process, leaving detainees with no one to appeal to.
8. North Korean Job Applications
While the CIA was among the first government agencies to explore AI, now many countries around the world are taking an interest in the technology, not always for noble purposes. One nation making use of AI in a dubious way is North Korea. It’s believed that the country’s intelligence services have been using AI to create thousands of fake applications for remote jobs in the United States.
AI-driven automation tools allow operatives to submit hundreds of applications under different false identities, and even to secure and perform one or more of the jobs. The money earned from these activities is then funneled back to support the North Korean regime. One of the U.S. companies targeted was a tech startup called Cinder, which proved to be an unfortunate choice, as it’s run by former intelligence officials.
Although Cinder raised awareness about the issue, other companies, likely less vigilant, have also been targeted. Even small, independent businesses have fallen victim. According to reports, some of these North Korean workers are earning as much as $300,000 a year, amounting to hundreds of millions of dollars for the regime.
7. Deepfake Scams
One of the most alarming trends in AI is how convincing deepfakes have become. These are videos where the face, and sometimes the voice and body, of a different person is digitally superimposed onto someone being filmed. The technology has advanced so much that scammers can easily impersonate family members or colleagues. Some have already carried out such scams with great success.
In early 2024, a shocking story emerged involving a finance employee at the engineering giant Arup who unknowingly transferred $25 million to scammers who used deepfake technology. The worker first received an email that appeared to come from the chief financial officer, and he was initially suspicious because it requested secret financial transactions.
Despite his doubts, he went ahead with the transactions after a video call with individuals he believed to be his colleagues and the CFO. Unfortunately, all of them were actually deepfake reproductions. The massive sum of money stolen in this case attracted widespread media attention, but there are likely many more similar scams happening on a smaller scale.
6. The Hollywood 'Double Strike'
In 2023, a wave of strikes among TV and film writers and actors led to the postponement of new releases, largely driven by fears over the impact of AI on their careers. Writers worried about a future where AI models could generate entire scripts, or where executives would use AI to create source material and hire writers only for adaptations, an area where writers typically earn less money and recognition than they do for original work.
Eventually, the studios agreed that these issues would be addressed. Writers would not be forced to use AI but could choose to do so if they wished, and they would receive no less recognition or compensation for it. For actors, their main concern was AI technologies like deepfakes. Their likeness could be captured once on film and then reproduced indefinitely by AI. The agreement their union secured stipulated that studios must seek actors' consent to use their likenesses this way, which would likely require substantial compensation.
These were positive results for those industries, but they are far from the only ones facing the threat of AI replacement. Similar, lengthy strikes could occur in the future across other sectors.
5. Copyright Infringement
In AI, 'training' is analogous to human education, and it requires materials like words, images, or sounds. The problem arises once AI has been trained, as it can recall these materials much more accurately than humans. For instance, while a person might summarize a novel, AI could potentially reproduce it verbatim or rewrite the entire story with different wording.
This raises concerns about fairness to the original creators, which is why copyright laws exist. However, some AI companies have been accused of blatantly disregarding the copyright of the materials used to train their models. Many argue that using such content is acceptable because training AI falls under 'fair use.' This view is shared by Mustafa Suleyman, head of Microsoft AI, who stated in an interview that there is a 'social contract' under which anything published online is effectively free to be copied or recreated.
4. Hallucinations
A lot of the material used to train AI comes from online sources. While it's impressive that AI can quickly retrieve and reproduce this information, one major challenge is that a significant portion of what's found online is simply false. As a result, some AIs end up sharing information that sounds plausible but is entirely fabricated.
A Canadian lawyer experienced this firsthand in early 2024 when she used an AI chatbot for legal research. While advocating for a father in a child custody case, she asked the AI to find similar past cases. It provided three, two of which she submitted to court. However, the mother’s legal team could find no record of these cases.
It was revealed that the AI had invented the cases, a mistake known as a 'hallucination.' Fortunately, the judge chalked it up to the lawyer’s naivety and did not believe there was any intention to deceive. However, he voiced concerns that if these kinds of errors go unchecked, they could result in a miscarriage of justice.
3. Harmful to Well-Being
While many AI pioneers and enthusiasts have expressed excitement about the transformative potential of AI, the actual impact has been less encouraging. A survey conducted in February 2024, involving over 6,000 participants, examined how various technologies, including AI, influenced people's lives. The findings revealed that those with more exposure to AI reported worse health and well-being outcomes.
This mirrors previous research, and although the study did not investigate the precise causes, the authors suggested that factors like job insecurity and diminished autonomy could be contributing. Despite the current negative effects of AI on people's lives, there may still be hope for a more positive future.
The same research found that older technologies, such as laptops and instant messaging, had a positive impact on users' well-being. The design of technology and the context in which it is used also influence outcomes. In the future, AI might evolve to better integrate into people's lives.
2. Racial Inequality
One AI application already widely used in several countries is facial recognition. The technology made significant advances during the 2010s: studies showed that between 2010 and 2018, its accuracy at identifying individuals from large databases improved 25-fold. However, research also revealed a persistent problem: facial recognition algorithms consistently produced higher error rates for black faces, sometimes up to 10 times higher than for white faces.
A research study comparing women’s images found that the false match rate for white faces was roughly one in 10,000, while for black faces it was one in 1,000. The technology used in this study was developed by Idemia, a leading French security company. Similar systems from Amazon, Microsoft, and IBM were also found to be less accurate on darker skin tones.
The potential for facial recognition technology, when employed by governments, to reinforce racial bias sparked significant criticism. In response, some jurisdictions, including San Francisco, banned its use.
1. Recruitment and Termination
Although there is widespread concern about AI replacing human jobs, the technology is still not capable of doing so in most industries. However, AI can still cost people their jobs in another way: by automating the decision to dismiss them. Amazon, for instance, has been accused of leveraging automation to terminate employees. The company uses an automated system to monitor productivity and efficiency, sending warnings and even dismissing workers who fail to meet targets.
Critics have raised concerns that, similar to the CIA’s AI interrogator, employees have no opportunity to appeal to a compassionate human being in their time of need. Amazon has defended its practices, stating that workers are first enrolled in a training program, that there is an appeal process, and that supervisors can override the system. Despite these assurances, employees have expressed feeling like robots rather than humans due to the rigid, automated supervision.
The demands are so overwhelming that many workers have been forced to skip bathroom and prayer breaks. Records show that between 2017 and 2018, one Amazon warehouse dismissed about 10% of its full-time employees due to productivity concerns.
