(Homeland) - Cybersecurity experts have recently uncovered a 'new trick' that lets AI tools steal your data.

There is no need to dwell on the countless benefits that artificial intelligence (AI) brings to many aspects of modern life. That holds, however, only when the technology is used for good. If it is abused for malicious ends, the harm AI can cause is immeasurable.
Computer science researchers, in a paper posted to arXiv (the preprint repository operated by Cornell University, USA), have recently described a 'new trick' that lets AI tools steal your data: the sound of your keystrokes. The team details an AI-driven attack that can steal passwords with up to 95% accuracy simply by listening to what you type on the keyboard.
In their experiments, the researchers trained an AI model to analyze the sound of typing and deployed it on a phone placed nearby. The phone's built-in microphone listened to keystrokes on a MacBook Pro and reproduced them with 95% accuracy, the highest the researchers recorded without using a large language model.
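To make the idea concrete, here is a minimal sketch of the first stage such an attack would need: isolating individual keystrokes in a recording. This is illustrative only, not the researchers' actual pipeline; the energy-based detector, its thresholds, and the synthetic signal are all assumptions.

```python
import numpy as np

def detect_keystrokes(signal, sr, frame_ms=10, threshold=0.1, min_gap_ms=100):
    """Locate keystroke onsets by finding short-time energy peaks.

    Splits the signal into fixed-size frames, normalizes each frame's
    energy, and reports frames that exceed the threshold, enforcing a
    minimum gap so one keystroke is not counted twice.
    """
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i*frame:(i+1)*frame]**2) for i in range(n)])
    energy = energy / (energy.max() or 1.0)
    min_gap = min_gap_ms / frame_ms  # minimum spacing, in frames
    onsets, last = [], -10**9
    for i, e in enumerate(energy):
        if e > threshold and i - last >= min_gap:
            onsets.append(i * frame / sr)  # onset time in seconds
            last = i
    return onsets

# Synthetic demo: one second of silence with two short noise "clicks"
# standing in for keystrokes, at roughly 0.19 s and 0.56 s.
sr = 16000
sig = np.zeros(sr)
rng = np.random.default_rng(0)
sig[3000:3300] = rng.normal(0, 1, 300)
sig[9000:9300] = rng.normal(0, 1, 300)
print(detect_keystrokes(sig, sr))
```

A real recording would need far more robust segmentation, but the principle is the same: a keystroke is a brief, sharp burst of energy that stands out clearly from the background.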

The team also tested the model's accuracy over a Zoom call, where keystrokes were recorded by the laptop's microphone during the call. In this test, the AI reproduced keystrokes with 93% accuracy; over Skype, the figure was 91.7%.
Don't rush to blame the clattering mechanical keyboard you use every day: notably, the keyboard's volume has no bearing on the attack's accuracy. Instead, the AI model is trained to recognize the waveform, intensity, and timing of each keystroke. For example, your typing habits may make you press one key a fraction of a second more slowly than others, and the model takes that into account.
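The waveform, intensity, and timing cues described above can be sketched as a toy feature vector per keystroke. These particular features (peak amplitude, loudness, burst duration, gap since the previous key) are assumptions chosen for illustration, not the features the researchers actually used:

```python
import numpy as np

def keystroke_features(clip, onset, prev_onset, sr=16000):
    """Toy feature vector for one keystroke clip.

    Captures intensity (peak and RMS), how long the audible burst
    lasts, and the time elapsed since the previous keystroke.
    """
    env = np.abs(clip)
    peak = env.max()                         # intensity: loudest point
    rms = np.sqrt(np.mean(clip**2))          # intensity: overall loudness
    above = np.nonzero(env > 0.1 * peak)[0]  # samples above 10% of peak
    duration = (above[-1] - above[0]) / sr if above.size else 0.0
    gap = onset - prev_onset                 # inter-keystroke timing
    return np.array([peak, rms, duration, gap])

# A fake keystroke: silence, a smooth 25 ms burst, silence again.
clip = np.concatenate([np.zeros(100), np.hanning(400), np.zeros(100)])
feats = keystroke_features(clip, onset=1.20, prev_onset=1.05)
print(feats)
```

It is exactly this kind of per-key signature, including the timing habits unique to each typist, that makes the attack independent of how loud the keyboard is.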

In practice, the attack takes the form of malware installed on your phone or another nearby device with a microphone. The malware silently listens through the infected device's microphone, collects your keystroke audio, and sends it to the attacker's AI model. The researchers built the attack on CoAtNet, an AI image classifier, training the model on 36 of the MacBook Pro's keys, with each key pressed 25 times.
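The classification step can be illustrated with a deliberately simplified stand-in. The study used CoAtNet trained on images of keystroke audio; the sketch below substitutes a minimal multinomial logistic regression on synthetic features, keeping only the 36-key, 25-presses-per-key data shape from the article. Everything else here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
N_KEYS, SAMPLES, FEATS = 36, 25, 64  # 36 keys, 25 presses each (as in the study)

# Synthetic stand-in for per-keystroke audio features: each key gets a
# distinct mean feature vector, and each press adds a little noise.
centers = rng.normal(0, 1, (N_KEYS, FEATS))
X = np.repeat(centers, SAMPLES, axis=0) + rng.normal(0, 0.3, (N_KEYS * SAMPLES, FEATS))
y = np.repeat(np.arange(N_KEYS), SAMPLES)

# Minimal softmax classifier trained by gradient descent. CoAtNet is a
# far more powerful convolution-plus-attention network; this toy model
# only illustrates the "map keystroke features to one of 36 keys" step.
W = np.zeros((FEATS, N_KEYS))
for _ in range(200):
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1                  # softmax gradient
    W -= 0.1 * (X.T @ p) / len(y)

acc = (np.argmax(X @ W, axis=1) == y).mean()
print(f"training accuracy: {acc:.2%}")
```

Even this toy setup separates the 36 classes easily once each key has a consistent acoustic signature, which is precisely the property the real attack exploits.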
This type of attack is quite dangerous, and switching keyboards won't help the victim: even the best keyboards can fall prey to this AI model's clever method. Mitigation, however, is not overly difficult. For instance, avoid typing passwords where possible and take advantage of biometric features such as Windows Hello and Touch ID. You can also invest in a good password manager, which not only spares you from typing passwords but also lets you use random, unique passwords for all your accounts.
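For that last suggestion, Python's standard-library `secrets` module is enough to generate the kind of random password a password manager would produce (the length and character set here are arbitrary choices):

```python
import secrets
import string

def random_password(length=20):
    """Generate a cryptographically random password, as a password
    manager would, so it never has to be typed on the keyboard."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

Because such a password is pasted or auto-filled rather than typed, there are simply no keystrokes for a listening microphone to capture.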
