While most software bugs are minor inconveniences that users can easily bypass, there are a few infamous instances where a seemingly simple error impacted millions, causing harm and even loss of life.
Software is crafted by humans, and every software system contains flaws, or what a salesperson might prefer to call 'undocumented features.' Essentially, a bug occurs when the software either behaves in unexpected ways or fails to perform an expected task. These errors may stem from poor design, miscommunication about a problem, or simply human oversight – much like a typo in a book. But while readers of a book can often guess the meaning of a misspelled word, computers, being much less adaptable, only do exactly what they’re instructed to do.
Below are ten cases where the outcomes of these bugs were vast and impactful in different ways:
10. Therac-25 1985-1987

The Therac-25 was a radiation therapy machine used primarily for cancer treatment. It had two modes of operation. The first involved a low-energy electron beam directed at the patient for short durations. The second aimed a high-energy electron beam at a metal 'target,' which would convert the beam into X-rays that would then pass into the patient.
Earlier versions of the Therac machine had physical fail-safes in place for this second mode of operation to ensure the target was properly positioned. Without it, high-energy beams could be accidentally directed into the patient. In the Therac-25, these physical safety measures were replaced with software-based ones.
Tragically, a software bug led to an ‘arithmetic overflow’ during the automatic safety checks: a counter variable would periodically exceed the largest value it could store and wrap around to zero. If the operator happened to be configuring the machine at that exact moment, the safety checks were skipped, and the metal target would not be positioned correctly. As a result, radiation doses roughly 100 times greater than intended would be delivered to the patient, causing radiation poisoning. This occurred in six documented cases, ultimately leading to the deaths of four patients.
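For the curious, here is a toy Python model of that failure mode (the real Therac-25 software was PDP-11 assembly, so this is an illustration, not the actual code): a one-byte flag was incremented on every check pass instead of simply being set, so on every 256th pass it wrapped around to zero – the value that meant "no check needed."

```python
# Toy model of the Therac-25 counter wrap (illustrative, not the real code):
# an unsigned 8-bit flag incremented each pass eventually wraps to zero,
# and zero was interpreted as "safety check not required."

def increment_8bit(value):
    """Increment as an unsigned 8-bit counter, wrapping from 255 back to 0."""
    return (value + 1) % 256

def safety_check_skipped(n_passes):
    """Return True if the flag reads zero (check skipped) after n_passes."""
    flag = 0
    for _ in range(n_passes):
        flag = increment_8bit(flag)
    return flag == 0

assert not safety_check_skipped(255)  # flag is non-zero, so the check runs
assert safety_check_skipped(256)      # counter wraps to 0: check is skipped
```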
9. World of Warcraft 'Corrupted Blood' Glitch September 13, 2005

The immensely popular online game World of Warcraft (WoW), developed by Blizzard Entertainment, encountered a significant issue after an update on September 13, 2005 – leading to a virtual outbreak of death. Following the update, a new enemy character, Hakkar, was introduced, who had the ability to infect players with a disease called Corrupted Blood, which drained their health over time. This disease could spread from one player to another, much like an actual contagion, and had the potential to be fatal for any character infected. The effect was intended to be confined to the area in the game where Hakkar resided.
However, a crucial detail was missed: players could teleport to different areas of the game while still carrying the infection, and pass the disease to others. This is exactly what happened. While the exact number of virtual fatalities isn't known, entire cities in the game became quarantined zones, with deceased player avatars scattered across the streets. Fortunately, death in WoW isn't permanent, and the incident was quickly resolved when game administrators reset the servers and implemented additional software updates. What's particularly fascinating is how the players' responses to this in-game crisis mirrored how people might react to a similar real-world event.
8. North American Power Outage August 14, 2003

Impacting around 55 million people, primarily in the northeastern United States and Ontario, Canada, this event became one of the largest power outages in history. The incident began when a power plant along the southern shore of Lake Erie, Ohio, went offline due to excessive demand, placing additional strain on the remaining power network. As power lines are subjected to higher electrical loads, they heat up, causing the materials (typically aluminum and steel) to expand. Several lines sagged lower as a result, came into contact with trees, and tripped offline, further escalating the strain on the system. The resulting cascade ultimately reduced the power network's output to just 20% of its normal capacity.
Although the power outage itself wasn’t caused by a software glitch, a software bug in the control center's alarm system played a crucial role in preventing an earlier response. In a 'race condition' scenario, two parts of the system competed for the same shared resource, and an unlucky ordering of events left it in a broken state, causing the alarm system to fail. Worse still, the failure was 'silent' – meaning the system broke down without alerting anyone. This left control room operators without the usual audio and visual cues, which they heavily relied on for situational awareness. The consequences were far-reaching, leaving numerous areas without power for several days, disrupting industries, utilities, and communication networks. The blackout was also implicated as a contributing factor in several fatalities.
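A race condition is easiest to see in a toy example (this is not the alarm system's actual code): two tasks each perform a read-modify-write on a shared counter, and if both read before either writes back, one update is silently lost.

```python
# Toy illustration of a race condition: two tasks each try to add 1 to a
# shared counter, but both read the same stale value before either writes,
# so one increment vanishes without any error being raised.

def interleaved_increments(counter):
    a_read = counter        # task A reads the current value...
    b_read = counter        # ...and task B reads the same stale value
    counter = a_read + 1    # A writes back its increment
    counter = b_read + 1    # B overwrites A's work with its own increment
    return counter

# Two increments were attempted, but only one survives.
assert interleaved_increments(0) == 1
```

Real races are timing-dependent, which is why – as with the alarm system – they can lurk undetected until exactly the wrong moment.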
7. USS Yorktown Incident September 21, 1997

In the software development world, certain bugs are well-known and regularly encountered. One such bug is the ‘divide by zero’ error, which occurs when a program attempts to divide a number by zero. Division by zero is mathematically undefined, so most software – from supercomputers to handheld calculators – is designed to detect this scenario and prevent crashes or unexpected behavior.
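In Python, for instance, an unguarded division raises a ZeroDivisionError at runtime; a defensive wrapper (a sketch, not the Yorktown's actual software) rejects the bad input up front instead of letting the error propagate through the system.

```python
# A defensive division wrapper: validate the denominator explicitly rather
# than letting ZeroDivisionError crash code further downstream.

def safe_divide(numerator, denominator):
    """Divide two numbers, refusing a zero denominator explicitly."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator

assert safe_divide(10, 4) == 2.5

try:
    safe_divide(10, 0)
    guard_fired = False
except ValueError:
    guard_fired = True
assert guard_fired  # the guard caught the bad input before any crash
```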
The USS Yorktown experienced an embarrassing failure when its propulsion system completely shut down, leaving it stranded for nearly three hours. This was caused by a crew member entering a '0' into the on-board database management system, which was then used in a division calculation. The software had been installed as part of an initiative to reduce the crew needed to operate the ship. Fortunately, the ship was conducting maneuvers at the time rather than being deployed in a combat setting, where the consequences could have been far more dire.
6. Trans-Siberian Gas Pipeline Explosion 1982

This incident is somewhat speculative and may not have actually occurred, but if true, it serves as a notable example of a deliberately introduced software bug causing a significant crisis.
During the Cold War, when tensions between the US and the Soviet Union were high, it is believed that the Central Intelligence Agency intentionally implanted bugs into software sold by a Canadian company. The software was used to control the Trans-Siberian gas pipeline. The CIA suspected that the Soviets were purchasing this system through a Canadian intermediary to secretly acquire US technology, and saw an opportunity to deliver defective software instead.
The practices mentioned were later brought to light in the declassified 'Farewell Dossier,' which claimed that faulty turbines were indeed used in a gas pipeline. According to former Air Force Secretary Thomas C. Reed, a series of software bugs were deliberately introduced to make the system pass tests but fail under real-world conditions. The settings for pumps and valves were adjusted to exceed the pipeline's pressure limits, ultimately resulting in an explosion that was said to be the largest non-nuclear explosion ever recorded.
However, these claims have been disputed by KGB veteran Anatoly Medetsky, who asserts that the explosion was a result of poor construction, not intentional sabotage. Regardless of the cause, no fatalities were reported, as the explosion occurred in a very remote location.
5. Cold War Missile Crisis September 26, 1983

Stanislav Petrov, the duty officer at a secret bunker near Moscow, was responsible for overseeing the Soviet early warning satellite system. Just after midnight, an alert notified him that the United States had launched five Minuteman intercontinental ballistic missiles. Under the doctrine of mutually assured destruction, a strike from one power would trigger a retaliatory strike from the other.
This meant that if the attack was genuine, a swift response would be necessary. However, it seemed odd that the US would launch an assault with only a handful of warheads: while these would cause catastrophic damage and loss of life, they wouldn’t be nearly enough to decimate the Soviet forces. Additionally, the ground-based radar stations weren’t detecting any threats, although their inability to scan beyond the horizon due to the Earth’s curvature could explain the delay.
Another key consideration was the early warning system itself, which had known flaws and had been hastily deployed. Taking all these factors into account, Petrov made the decision to dismiss the alert as a false alarm. Although Petrov wasn’t in control of launching the missiles, had he recommended to his superiors that the attack be taken seriously, it could have triggered a full-scale nuclear war. Whether due to experience, intuition, or sheer luck, Petrov’s decision was the correct one.
It was later discovered that the early warning software had mistakenly identified the reflection of the sun off the tops of clouds as missile launches.
4. Sony CD Malicious Copy Protection 2005

The ongoing struggle between the media industry and pirates continues to shift year after year. Each time a new method of safeguarding and distributing media securely is developed, new techniques for bypassing these protections are quickly discovered.
In 2005, some would argue that Sony BMG crossed a line when they introduced a controversial form of copy protection on certain audio CDs. When played on a Windows computer, these discs would automatically install a software component known as a ‘rootkit.’ A rootkit is a type of software that burrows deep into a system, modifying key processes. While not inherently harmful, rootkits are commonly used to covertly deploy malicious software, such as viruses or trojans. In Sony BMG’s case, the rootkit was designed to control how the CDs were used on Windows machines, preventing users from copying the media or converting it to MP3 format, thus reducing piracy.
The rootkit did its job – but by hiding itself from the user, it also allowed other viruses and malicious software to operate undetected. The flawed execution of the idea, combined with growing public resentment over Sony BMG’s secretive manipulation of users' computers, ultimately backfired. The rootkit was soon labeled as malware by many security companies, leading to lawsuits and a recall of the problematic CDs.
3. Year 2038

Although the Y2K issue is behind us, the problem isn't completely solved. Not all computers interpret dates in the same way, and many UNIX-based systems calculate dates by counting the number of seconds since 01/01/1970. For example, the date 01/01/1980 corresponds to 315,532,800 seconds since 01/01/1970. These dates are stored as a ‘signed 32-bit integer’, which has a maximum value of 2,147,483,647. This limitation means that these systems can only handle dates up to 2,147,483,647 seconds after 01/01/1970, which takes us to January 19, 2038. After that, we might face another set of issues.
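Both figures are easy to verify in a few lines of Python:

```python
from datetime import datetime, timedelta, timezone

INT32_MAX = 2**31 - 1  # 2,147,483,647 – the largest signed 32-bit value

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# 01/01/1980 really is 315,532,800 seconds after the epoch...
seconds_to_1980 = (datetime(1980, 1, 1, tzinfo=timezone.utc) - epoch).total_seconds()
assert seconds_to_1980 == 315_532_800

# ...and the last representable moment falls on 19 January 2038.
rollover = epoch + timedelta(seconds=INT32_MAX)
assert (rollover.year, rollover.month, rollover.day) == (2038, 1, 19)
```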
This is particularly concerning because UNIX-based software is often used in 'embedded systems' rather than personal computers. Embedded systems are designed for very specific tasks closely tied to their hardware, like controlling robotic assembly lines, running digital clocks, managing network routers, operating security systems, and more.
And let's not forget about the year 10000. Someone will need to figure out how to handle that, but I'm not volunteering.
2. Millennium Bug

The Y2K bug, often referred to as the Millennium Bug, remains one of the most infamous bugs in history, and it’s the one many of us recall hearing about back in the day. This issue stemmed from the short-sighted decisions made by computer professionals leading up to the year 2000. A common practice in many systems was to represent years with just two digits, such as ‘98’ instead of ‘1998,’ which, at the time, seemed like a reasonable approach, and it had been in use long before computers even existed.
The real trouble arose when the year 2000 approached. Computers using this two-digit system could only represent the year as ’00,’ which could easily confuse the machine into thinking it was the year 1900. This would create issues in any system that calculated across a range of years, such as displaying a person born in 1920 and dying in 2001 as being negative 19 years old.
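The negative-age example above comes straight out of the arithmetic. A two-line sketch of the trap:

```python
# Sketch of the two-digit-year trap: subtracting 'YY' fields works fine
# within a century, but goes negative once the century rolls over.

def age_from_two_digit_years(birth_yy, current_yy):
    """Naive age calculation, as many pre-2000 systems performed it."""
    return current_yy - birth_yy

assert age_from_two_digit_years(20, 98) == 78   # born 1920, year 1998: fine
assert age_from_two_digit_years(20, 1) == -19   # born 1920, year 2001: -19
```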
To tackle the problem, software companies quickly updated their systems, which controlled almost everything from banking and payroll to hospital systems and train tickets. Recognizing the global scale of the issue, the International Y2K Cooperation Centre was established in February 1999 to coordinate efforts between governments and organizations. In the end, the transition to the new millennium passed relatively smoothly, with the biggest fallout being the infamous universal hangover.
It’s difficult to determine whether the success in avoiding major problems was due to the extensive work done to fix the issue or if the media had overstated the potential impact to begin with—likely, it was a bit of both.
1. Patriot Missile Bug February 25, 1991

During Operation Desert Storm, the US military deployed the Patriot Missile System to defend against aircraft and missiles, particularly the Iraqi Al Hussein (Scud) missiles. The system’s tracking software relied on the velocity of the target and the current time to predict its future position. Given that some targets could travel at speeds up to Mach 5, the calculations required for accuracy were critical.
At the time, a bug in the targeting software caused the internal clock to 'drift' over time. This meant that, as the system continued running, the clock became increasingly inaccurate. The bug was known, and the standard fix was to reboot the system regularly to reset the clock.
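Investigators later traced the drift to how the system kept time: it counted in tenths of a second, but 0.1 has no exact binary representation, and the truncated 24-bit constant the software used was reportedly short by roughly 0.000000095 seconds per tick. Using those published figures, a quick Python check shows how the error accumulates over 100 hours:

```python
# Accumulation of the Patriot clock drift, using the reported per-tick
# truncation error (~9.5e-8 s for each 0.1 s tick of the system clock).

ERROR_PER_TICK_S = 9.5e-8            # reported truncation error per tick
HOURS_RUNNING = 100
ticks = HOURS_RUNNING * 3600 * 10    # tenth-of-second ticks in 100 hours

drift_s = ticks * ERROR_PER_TICK_S
assert round(drift_s, 2) == 0.34     # matches the reported 0.34 s of drift
```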
Unfortunately, those responsible didn’t fully understand the necessary frequency of reboots, leading to the system running for 100 hours without a reset. When an Iraqi missile targeted a US airfield in Dhahran, Saudi Arabia, the Patriot missile system detected it. However, by this point, the internal clock had drifted by 0.34 seconds. This small error caused the system to calculate the missile’s location over half a kilometer away from its actual position. As a result, the system incorrectly assumed there was no missile threat and canceled the interception. The missile continued on its course, killing 28 soldiers and injuring 98 others.
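The 'over half a kilometer' figure survives a rough sanity check: taking Mach 5 as approximately 1,700 m/s (an approximation, since the exact speed depends on altitude), 0.34 seconds of clock error translates into hundreds of meters of position error.

```python
# Back-of-the-envelope check: distance a ~Mach 5 missile covers in the
# 0.34 s of accumulated clock drift.

MACH_5_M_PER_S = 1_700          # rough Mach 5 speed, an approximation
CLOCK_DRIFT_S = 0.34

position_error_m = MACH_5_M_PER_S * CLOCK_DRIFT_S
assert position_error_m > 500   # ~578 m: 'over half a kilometer' off target
```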
