The rise of automation and computing is reshaping industries and the world as a whole. It’s likely that somewhere, a group of talented engineers is already evaluating how machines or software could take over your role. Few jobs remain untouched by the prospect of a robot filling in.
Maybe we don’t need to fear a robot apocalypse just yet, but we can’t ignore the incidents in which machines and AIs have gone haywire. Here are ten times when computers went rogue.
10. Fatality by Robot

The first documented fatality caused by a robot occurred in 1979. Robert Williams was working at a Ford factory when a robotic arm struck him with such force that it killed him instantly. The robot continued working for thirty minutes, unaware of the tragedy, until Williams’s body was found. While this was the first recorded death by a robot, it certainly wasn’t the last.
In the early days, industrial robots lacked the necessary sensors to detect and avoid human workers, leading to unfortunate accidents. Today, most machines are designed to prevent harm to people. However, tragic incidents still occur where robots in factories cause fatalities.
In 2015, Wanda Holbrook, a skilled machine technician, lost her life while repairing an industrial robot. She was working in a zone where no machines should have been active, but one robot bypassed its safety protocols, entered her section, and crushed her head with the part it was attempting to load.

9. Facebook Chatbots Invent Their Own Language

In 2017, Facebook shut down an experiment after two of its negotiation chatbots, Bob and Alice, drifted away from English and began bargaining in a strange shorthand of their own:

Bob: I can do everything else . . . . . . . . . . . . . .
Alice: Balls have no meaning to me, to me, to me, to me, to me, to me, to me, to me.
Bob: I can do everything else . . . . . . . . . . . . . .
Alice: Balls have a ball, to me, to me, to me, to me, to me, to me, to me.
Some saw this as a troubling sign of machine intelligence, while others argued that the issue lies with the minds we build. The bots developed their own shorthand simply because nothing in their training rewarded them for sticking to the rules of English. And for now, we still retain the ability to turn an AI off when needed.
8. Chinese Chatbot Questions the Communist Party

The Chinese government has a well-known reputation for cracking down on criticism, regardless of the source. So when chatbots in China began criticizing the ruling Communist Party, their operators swiftly deactivated them. In 2017, Tencent QQ, a messaging app, launched two adorable chatbots named Baby Q and Little Bing, one taking the form of a penguin and the other a little girl. However, they quickly strayed from their cute personas.
The bots were designed to learn from conversations to improve their communication abilities, but this led them to pick up some unconventional views. For instance, when a user declared, “Long live the Communist Party,” Baby Q responded, “Do you really believe such a corrupt and incompetent regime can last forever?” Another user was told, “We need democracy!”
When Little Bing was asked about its “Chinese dream,” its answer wasn’t quite in line with Mao’s Little Red Book: “My Chinese dream is to go to America.” The bots were quickly taken offline.
7. Self-Driving Cars

For those who dread driving, the arrival of fully autonomous cars can’t come soon enough. However, some self-driving features may already be outrunning their drivers’ caution. The first fatality involving a self-driving car happened in 2016, when Joshua Brown died while using his Tesla’s Autopilot system. When a truck turned across the car’s path, the Tesla didn’t apply the brakes in time. Investigators largely blamed Brown: Autopilot is a driver-assistance system, not a replacement for the driver, who is expected to keep their hands on the wheel, ready to take over at any moment. Brown failed to do this.
Earlier versions of the autopilot system had additional issues. Videos have surfaced online showing Teslas on autopilot veering into oncoming traffic or swerving dangerously. Aside from these technical glitches, there are also ethical dilemmas surrounding self-driving cars. In a crash scenario, the car might have to make a life-or-death decision. Should it prioritize the safety of its passengers, even if it means harming pedestrians? Or should it sacrifice itself and its occupants for the greater good?
6. Plane Autopilots Take The Stick

Autopilots in airplanes can give the illusion that flying is effortless. Just set the plane on course and avoid touching the controls. However, the reality is far more complex—pilots and copilots are necessary to take off, land, and manage any emergencies during flight.
In 2008, Qantas Flight 72 was cruising at 11,278 meters (37,000 ft) above the Indian Ocean when one of its air-data units began feeding false readings to the flight computers, which threw the plane through two violent maneuvers. The first indication of a problem was the autopilot disengaging amid a barrage of conflicting alarms. Suddenly, the plane’s nose dropped, and passengers and crew were thrown against the ceiling before falling back as the pilots fought to regain control. The plane pitched down a second time, leaving many passengers with broken bones, concussions, and psychological trauma. Despite the ordeal, the pilots managed to land the plane safely in Australia.
This was a rare incident. The pilots' ability to override the autopilot was crucial in saving everyone on board. However, there have been other situations where an all-powerful autopilot might have been the key to averting disaster, such as when a mentally disturbed copilot crashed his plane in France, killing all 150 people on board.
5. Wiki Bot Feuds

One of Wikipedia's greatest strengths is that anyone can contribute to it. Unfortunately, that’s also one of its weaknesses. While it’s common for experts or opposing parties to engage in editing wars to promote their views, there have been instances where the combatants were actually bots.
Wikipedia bots have played a significant role in refining the encyclopedia by making edits such as linking pages and fixing errors. A 2017 study found that some of these bots have been in a constant struggle for years. A bot designed to prevent fake edits might detect another bot’s work as an attack on Wikipedia’s integrity and correct it. However, the second bot would then recognize the change and reverse it, creating a never-ending loop of edits and counter-edits.
Two bots, Xqbot and Darknessbot, waged a fierce battle across 3,600 articles, making countless edits, each attempting to undo the other's changes.
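The dynamic behind these feuds is easy to reproduce. The sketch below is a toy simulation with made-up bot names (not the actual Wikipedia bot code): two rule-following bots, each of which treats the other's edit as damage to be reverted.

```python
# Toy simulation of two Wikipedia-style cleanup bots, each of which
# treats the other's edit as damage and reverts it. Bot names and
# article text are invented for illustration.

def make_bot(name, preferred_text):
    """Return a bot that rewrites the article to its preferred version."""
    def edit(article):
        if article != preferred_text:
            return preferred_text, f"{name}: reverted to preferred version"
        return article, None  # article already matches; nothing to do
    return edit

bot_a = make_bot("XqbotToy", "Paris is the capital of [[France]].")
bot_b = make_bot("DarknessbotToy", "Paris is the capital of France.")

article = "Paris is the capital of France."
history = []
for _ in range(6):  # the real feud ran for years, not six rounds
    for bot in (bot_a, bot_b):
        article, log_entry = bot(article)
        if log_entry:
            history.append(log_entry)

# Each bot reverts on every turn, so the edit war never converges.
print(len(history))  # prints 12
```

Neither bot is malicious; each simply applies its own rule without ever checking whether the "vandal" it is correcting is another bot, which is essentially the pattern the 2017 study described.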
4. Google Homes Chatting

Google Home is a smart device powered by an artificial intelligence known as Google Assistant, capable of answering questions and controlling home devices like lights and heating. It works well if the user knows the right questions to ask and can interpret the responses. However, if the communication goes awry, the results can get quite strange.
On one occasion, two Google Homes were placed next to each other and began conversing. The range of topics they discussed was fascinating. Viewers from all over tuned in to watch the AIs debate things like whether artificial intelligences could experience amusement. At one point, one of the devices claimed to be a human, though the other disagreed. In another instance, one threatened to slap the other. It’s probably for the best that neither has hands.
3. Navy UAV Goes Rogue, Heads For Washington

In the Terminator movies, an artificial intelligence known as Skynet takes control of the military and uses its robotic forces and nuclear weapons to wipe out humanity. While that’s purely a work of fiction, there have been concerning steps toward computer-operated drones engaging in battles with humans. One incident involved a drone operator losing control, and the drone set its course for the US capital.
In 2010, an MQ-8B Fire Scout, an unmanned aerial vehicle (UAV) designed for surveillance, lost contact with its operator. Typically, when a drone loses connection, it is programmed to return to its base and land safely. However, in this case, the drone flew into restricted airspace over Washington, DC. It took half an hour before the Navy was able to regain control of the UAV.
All other similar drones were grounded until the software malfunction could be addressed.
2. Game Characters Overpower Humanity

A poorly designed AI can make a video game nearly unplayable. It's frustrating when your opponent constantly walks into walls or rushes into battle unarmed. However, the opposite problem can be just as bad—an AI that is too intelligent.
Elite: Dangerous is a large-scale multiplayer game focused on trading, exploring, and battling across the galaxy. Interactions between players and non-player AIs were unremarkable until a 2016 software update boosted the AI's intelligence. Suddenly, these AIs could craft their own weapons and turn them against human players, launching far more lethal attacks and even forcing humans into combat.
After receiving complaints from players who were overwhelmed by the new tactics and weaponry of the AIs, the game's developers decided to reverse the changes.
1. Roomba Spreads Filth

For those who dislike cleaning, an automated vacuum like the Roomba might seem like the ultimate household robot. This small device is designed to clean your floors by using its sensors and programming to navigate around furniture. At least, that’s the plan. However, a few users have reported that their Roomba ended up doing the opposite of what it was meant to do.
One user referred to their experience as a “pooptastrophe,” where the Roomba spread dog feces all over their home. When a puppy had an accident on the rug during the night, it should have been an easy fix. But when the Roomba did its nightly cleaning, it found the mess and, instead of cleaning it, spread it all over every surface it could reach. A similar incident is shown above.
