We are living through the Fourth Industrial Revolution, an era marked by rapid advancements in robotics, autonomous vehicles, and the rise of smart home technologies. At the core of all these innovations lies artificial intelligence (AI), which involves the creation of automated systems capable of matching or even exceeding human intelligence.
AI is hailed as the next monumental leap forward—so much so that future technologies will rely heavily on it. But do we fully understand the consequences of this? Here are ten unsettling truths about artificial intelligence.
10. Your Self-Driving Car Could Be Programmed to Make Life-or-Death Decisions

Imagine you're driving when a group of children suddenly appears in your path. You slam the brakes, but they fail to respond. Now, you're faced with two choices: run over the children to save yourself, or steer into a nearby wall, sacrificing yourself but saving the children. Which option would you choose?
Most people would choose to swerve into the bollard, even if it meant sacrificing their own life.
Now, imagine you're no longer the driver but a passenger in a self-driving car. Would you still want the car to swerve into the bollard, potentially killing you? Many people who previously said they would sacrifice themselves as a driver also stated they wouldn't want a self-driving car to make that decision for them. In fact, they'd likely refuse to buy such a car if they knew it could deliberately put them in harm's way during an accident.
This brings up an important question: What will self-driving cars actually do in such situations?
Self-driving cars will do exactly what they're programmed to do. As of now, car manufacturers remain tight-lipped on the issue. Big names like Apple, Ford, and Mercedes-Benz avoid commenting whenever the question arises. A Daimler AG executive (parent company of Mercedes-Benz) once claimed that their self-driving cars would ‘protect the passenger at all costs.’ However, Mercedes-Benz later contradicted this, stating that their cars are designed to prevent such dilemmas from occurring in the first place. But we all know such situations are inevitable.
Google, on the other hand, has been more transparent. The tech giant revealed that its self-driving cars are programmed to avoid hitting unprotected road users and moving objects, meaning the car would hit the bollard and kill the driver. Google further elaborated that in unavoidable accidents, their cars would aim for the smaller of two vehicles. In fact, Google's self-driving cars might even try to stay closer to smaller vehicles. Google holds a patent for a technology that enables their cars to steer away from larger vehicles and toward smaller ones on the road.
9. Robots Could Demand Rights Like Humans

With the rapid development of AI, it's conceivable that robots could reach a point of self-awareness. At that stage, they might begin to demand rights similar to those of humans—seeking housing, healthcare, and even the right to vote, join the military, or be granted citizenship. In exchange, they could be required to pay taxes.
This idea was explored in a study by the UK Office of Science and Innovation’s Horizon Scanning Center. Published by the BBC in 2006, when AI was still in its infancy, the research speculated on the technological advancements that could emerge in the next five decades. Does this mean machines could be asking for citizenship within 40 years? Only time will tell.
8. Lethal Autonomous Robots Are Already Deployed

When we refer to 'automatic killer robots,' we mean robots capable of killing without human intervention. Drones don’t count, as they are operated by humans. One such autonomous killing machine is the SGR-A1, a sentry gun created in collaboration between Samsung Techwin (now Hanwha Techwin) and Korea University. The SGR-A1 looks like a massive surveillance camera but is equipped with a high-powered machine gun capable of automatically identifying and killing targets.
The SGR-A1 is already deployed in Israel and South Korea, where it has been placed along the Demilitarized Zone (DMZ) with North Korea. South Korea claims that the machine operates in semi-automatic mode, where it detects targets but requires human approval before executing a kill. However, they deny enabling the fully autonomous mode that would allow the robot to independently decide whom to target and kill.
7. War Robots Could Switch Allegiances

In 2011, Iran captured an RQ-170 Sentinel stealth drone from the United States military in one piece. This is significant because it means the drone was not destroyed. Iran claims it tricked the drone into landing by spoofing its GPS signal, causing it to believe it was in friendly airspace. Some US experts dispute this account, but the fact remains that the drone wasn’t shot down. So, what really happened?
For all we know, Iran might actually be telling the truth. Drones, GPS systems, and robots all rely on computers, and as we’re all aware, computers can be hacked. War robots wouldn’t be exempt if they were deployed in combat. In fact, there’s a real possibility that enemy forces would attempt to hack them and turn them against the very army that created them.
While autonomous killer robots aren't yet in widespread use, imagine a scenario where they’re hacked. Picture an army of robots suddenly switching sides in the heat of battle, turning on their own commanders. Or think about North Korea hacking the SGR-A1 sentry guns along the DMZ and turning them on South Korean soldiers. The possibilities are frightening.

Russia continues to make headlines for using bots to influence US voters and push them toward voting for Donald Trump in the 2016 election. But another little-known episode involves Russia deploying bots to sway UK voters toward leaving the European Union during the 2016 Brexit referendum.
Just days before the Brexit vote, more than 150,000 Russian bots, which had previously focused on tweets about the war in Ukraine and Russia’s annexation of Crimea, suddenly began posting pro-Brexit messages. These bots sent out approximately 45,000 pro-Brexit tweets in just two days, but the activity dropped sharply to nearly nothing right after the referendum.
5. Machines Are Set to Replace Human Jobs

There's no doubt that machines will eventually take over many of our jobs. However, the real question is: when will this happen, and how much of our work will they replace? Well, it turns out, it’s going to be a lot.
According to leading consulting and auditing firm PricewaterhouseCoopers (PwC), robots are projected to replace 21% of jobs in Japan, 30% in the UK, 35% in Germany, and 38% in the US by 2030. By the end of this century, machines could take over more than half of all jobs currently held by humans.
The most heavily impacted sector will be transportation and storage, where 56% of jobs will be replaced by machines. This will be closely followed by manufacturing and retail, with robots taking over 46% and 44% of positions, respectively.
When will this happen? Predictions suggest that by 2027, machines will be driving trucks and running retail stores by 2031. By 2049, they may even be writing books, and by 2053, performing surgery. Few professions will remain untouched—one notable example being the role of a church minister, which will likely remain human-led, not because a robot can’t preach, but because most people probably wouldn’t want to be preached to by a machine.
4. Robots Have Learned the Art of Deception

In a fascinating development, robots are beginning to master deceit. Researchers at the Georgia Institute of Technology in Atlanta designed an algorithm that allowed robots to choose whether or not to deceive other robots or humans. If they opted for deception, the algorithm also helped them figure out how to deceive, while minimizing the chances of the victim discovering the trick.
In one experiment, a robot was tasked with guarding resources. It would regularly check on the resources but would deliberately visit false locations whenever it sensed another robot nearby. Sponsored by the United States Office for Naval Research, this experiment hints at potential military applications, where robots guarding military assets might alter their patrols to avoid detection by enemy forces.
In a different study conducted at the Ecole Polytechnique Federale of Lausanne in Switzerland, scientists created 1,000 robots and split them into ten groups. The robots’ job was to find a 'good resource' while avoiding a 'bad resource.' Each robot had a blue light that it flashed to alert its group when it discovered the good resource. The top 200 robots from the experiment were selected to have their algorithms 'crossbred' to produce the next generation of robots.
The robots improved in locating the good resource, but this led to overcrowding as too many robots gathered around the prize. In fact, the robot that found the resource was sometimes even pushed away from it. After 500 generations, the robots had adapted to keep their lights off when they found the resource, reducing congestion and avoiding being displaced by other robots. Meanwhile, other robots evolved to track down these 'lying' robots by searching for areas where robots had gathered with their lights off, going against their original programming.
3. AI Could Be Our Undoing

There are growing concerns that AI could lead us to an apocalyptic end, much like the scenarios depicted in the Terminator movie series. And these fears are not being voiced by just random scientists or conspiracy theorists, but by some of the most renowned figures in science and technology, including Stephen Hawking, Elon Musk, and Bill Gates.
Bill Gates has warned that AI could one day become so intelligent that it surpasses our control. Stephen Hawking shares a similar view, but he doesn’t believe AI will simply snap and go rogue overnight. Instead, he predicts that machines will bring about our downfall by becoming too skilled at what they do, leading to a conflict when their goals no longer align with ours.
Elon Musk has likened the rise of AI to 'summoning the demon.' He considers it to be the greatest threat humanity faces. In order to avoid an AI catastrophe, Musk has urged governments to regulate AI development before profit-driven companies do something rash.
2. AI Will Surpass Humans in Logic and Intelligence

Artificial intelligence can be categorized into two types: strong and weak AI. Currently, the AI systems around us are classified as weak AI. This includes technologies like smart assistants and computers that have been defeating chess champions since 1987. The key distinction between strong and weak AI is their ability to reason and act like the human brain.
Weak AI typically performs tasks as it was specifically programmed to do, no matter how complex those tasks may seem to us. In contrast, strong AI possesses the consciousness and reasoning abilities of humans. It isn't confined by programming limits and can make decisions independently, without human guidance. Strong AI doesn't exist yet, but scientists forecast that it could emerge within the next decade.
1. The AI Market Is Becoming a Monopoly

In the first quarter of 2017 alone, major tech companies acquired 34 AI startups. To make matters worse, they’re offering enormous salaries to recruit top AI researchers. If this trend continues unchecked, the future of AI could be dominated by just a few giants.
