
ChatGPT is comparable to a clever child: smart, but easily influenced. A child may understand what's 'good' and 'bad,' but a crafty adult can often manipulate them into making the wrong choice with the right approach. That is how researchers managed to use ChatGPT to create an email with a high likelihood of getting the recipient to click on a link. Even though the AI is designed to reject requests with ill intent (it states that it won't create prompts that deceive or manipulate), researchers found a way around this by avoiding certain words that would trigger its safeguards.
The Guardian was among the first to warn of the coming wave of scam emails designed to part us from our money in a new way. Rather than a flood of emails urging us to click on links, the emphasis is shifting to "more advanced social engineering scams that manipulate user trust," a cybersecurity firm told The Guardian.
In other words, these emails will be carefully tailored to you personally.
How might scammers exploit ChatGPT?
There's an abundance of personal information about each of us available online, from addresses and job histories to family members' names, and savvy AI scammers can take advantage of it. But surely OpenAI, the company behind ChatGPT, would prevent their technology from being used for malicious purposes, right? Wired reports:
Companies like OpenAI do try to stop their models from being misused. But every time a new large language model (LLM) is released, social media fills with news of AI jailbreaks that bypass the newly implemented safeguards. ChatGPT, Bing Chat, and GPT-4 were all jailbroken almost immediately after launch, in multiple ways. Most protections against harmful uses are weak and easily sidestepped by determined users. Once a workaround is found, it tends to spread, with the user community exploiting the model's weaknesses. And as the technology advances rapidly, even its designers can't fully grasp how their creations work.
AI's ability to sustain a conversation is a major advantage for scammers: it reduces the need for human labor and automates one of the most time-consuming parts of running a scam operation.
You might receive work emails from colleagues (or even freelancers) asking you to complete certain 'work-related' tasks. These emails could be highly personalized, mentioning your boss's name or referencing a co-worker. Another possibility is an email from your child's soccer coach, soliciting donations for new uniforms. Trusted authority figures and institutions, such as banks, the police, or your child's school, can also be impersonated. Scammers can craft convincing, believable stories from any angle they choose.
Scammers can also easily fine-tune their prompts within ChatGPT. The tool lets them adjust the tone of its responses, creating a sense of urgency or pressure in either a formal or a casual register.
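To see how little effort that takes, here is a minimal sketch using the OpenAI Python SDK; the model name and prompts are illustrative placeholders, not anything a real scammer is known to use, and the example sticks to a harmless reminder email. The only thing that changes between a measured message and a high-pressure one is a single tone instruction.

```python
# Minimal sketch using the OpenAI Python SDK (pip install openai).
# Model name and prompts are placeholder assumptions for illustration;
# the point is that tone is just another parameter in the request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_email(tone: str) -> str:
    """Request the same email twice, varying only the tone instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": f"Write in a {tone} tone."},
            {
                "role": "user",
                "content": "Draft a two-sentence reminder that the "
                           "quarterly report is due Friday.",
            },
        ],
    )
    return response.choices[0].message.content

print(draft_email("formal and measured"))
print(draft_email("urgent and insistent, stressing time pressure"))
```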
Email filters designed to catch most spam may be far less effective against AI-generated content. Since ChatGPT produces grammatically correct text, scammers can sidestep traditional spam triggers by instructing the AI to avoid standard greetings and known trigger words.
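To make the contrast concrete, here is a toy, rule-based filter; the rules are hypothetical and far simpler than what any real mail provider runs, but the weakness is the same. Fluent, trigger-free text of the kind ChatGPT produces sails straight through it.

```python
# A toy spam filter with hypothetical rules, for illustration only.
# Real providers use far more sophisticated systems, but rule-based
# checks share the same blind spot shown here.

TRIGGER_PHRASES = {"act now", "free money", "dear friend", "you are a winner"}

def looks_like_spam(body: str) -> bool:
    text = body.lower()
    # Rule 1: classic trigger phrases
    if any(phrase in text for phrase in TRIGGER_PHRASES):
        return True
    # Rule 2: shouting or excessive punctuation
    if body.isupper() or text.count("!") > 3:
        return True
    return False

# An AI-written message: polite, grammatical, personalized, no trigger words.
ai_email = (
    "Hi Sam, following up on the invoice we discussed with Dana last week. "
    "Could you confirm the updated payment details before Friday's deadline?"
)

print(looks_like_spam(ai_email))  # False: nothing here trips the rules
```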
How to avoid falling for scams involving AI
For now, there's little technology that can reliably detect AI-driven scams the way conventional email filters flag spam. Even so, there are simple precautions you can take to protect yourself.
If your workplace offers phishing-awareness training, now is an excellent time to really focus on it. Many of the safety tips provided are still relevant when dealing with AI-based scams.
Be cautious of any email or text message that requests personal details or money; no matter how convincing it seems, it could be a scam. A nearly foolproof way to check its legitimacy (at least for now) is to call the sender or meet them in person, if feasible. Until AI advances to the point of producing convincing speaking holograms (it can already fake anyone's voice), confirming directly with the sender, ideally face to face, is your best safeguard.
