FortiGuard Labs, the security research group (USA), has forecast cybersecurity threats for 2024, focusing on AI-generated content becoming a 'weapon' for cyberattacks. AI has been, and continues to be, used at multiple stages of attacks, from defeating security algorithms to creating deepfake videos that mimic a person's behavior and tone to deceive users.
Looking ahead, hackers will exploit AI in new ways, posing significant challenges for security systems. One of the first worrying trends is the use of artificial intelligence to create fake personal profiles. By extracting data from social media platforms and public websites, hackers will combine this information with AI to create highly realistic fake profiles, increasing the likelihood that fraud attempts succeed.
FortiGuard Labs warns that security teams will face significant challenges in identifying and handling a large number of 'virtual people' in cyberspace.
The challenge of password security escalates as artificial intelligence becomes more widely involved. Current password-cracking methods typically rely on predicting and testing many character sequences.
Using machine learning tools, AI can analyze the passwords people commonly use, identify recurring patterns, and generate trial guesses with high accuracy, significantly reducing the time needed to crack a password. Furthermore, AI can circumvent anti-guessing measures such as blocking access after multiple incorrect attempts within a short timeframe: by learning the rules of the security system, it can throttle its cracking speed to avoid detection. Some AI models are even trained to automatically solve CAPTCHA images, the tool used to distinguish robots from users during login, making online information security increasingly difficult to guarantee.
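The pattern-guessing shortcut described above can be illustrated from the defender's side. The sketch below is a hypothetical audit tool (the word lists and the `is_pattern_guessable` helper are made up for illustration): it expands a common human pattern, word + year + symbol, into a small candidate space, the same space an attacker's model would search first instead of brute-forcing every character.

```python
from itertools import product

# Hypothetical audit sketch: expand a common human password pattern
# (capitalized word + year + suffix) into candidate guesses. A password
# inside this small space is far quicker to crack than a random one.
BASE_WORDS = ["password", "admin", "summer"]
YEARS = ["2023", "2024"]
SUFFIXES = ["", "!", "123"]

def pattern_candidates():
    """Yield every candidate following the word+year+suffix pattern."""
    for word, year, suffix in product(BASE_WORDS, YEARS, SUFFIXES):
        yield word.capitalize() + year + suffix

def is_pattern_guessable(password: str) -> bool:
    """True if the password falls inside the small pattern space."""
    return password in set(pattern_candidates())

print(is_pattern_guessable("Password2024!"))  # True: fits the pattern
print(is_pattern_guessable("k7#Qz9@wLm"))     # False: random, outside it
```

Only 18 candidates cover every password built from this pattern, which is why pattern-aware guessing beats raw brute force by orders of magnitude.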
According to experts at FortiGuard Labs, AI models are gradually becoming targets of 'AI poisoning' attacks. Starting at the training phase, hackers infiltrate systems and corrupt data sources, causing the AI to develop in skewed ways or exhibit unwanted behaviors, inflicting significant harm on the owning organization. These flawed AIs carry serious risks when deployed in fields such as self-driving cars, healthcare, and security.
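A minimal sketch of how corrupting training data skews a model, using a toy nearest-centroid classifier and made-up data (none of this is FortiGuard tooling): the attacker flips the labels of malicious training samples, and a point the clean model classifies correctly is then misclassified.

```python
# Label-flipping poisoning sketch: corrupted training labels shift the
# learned class centroids, so the poisoned model misclassifies inputs
# the clean model handles correctly.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(dataset):
    """Return per-class centroids from (x, y, label) samples."""
    by_label = {}
    for x, y, label in dataset:
        by_label.setdefault(label, []).append((x, y))
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    dist = lambda c: (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(model, key=lambda label: dist(model[label]))

clean = [(0, 0, "benign"), (1, 0, "benign"), (0, 1, "benign"),
         (9, 9, "malicious"), (10, 9, "malicious"), (9, 10, "malicious")]
# Poisoned copy: the attacker relabels malicious samples as benign
# and plants one far-away decoy so the class still exists.
poisoned = clean[:3] + [(x, y, "benign") for x, y, _ in clean[3:]] \
           + [(20, 20, "malicious")]

print(predict(train(clean), (9.5, 9.5)))     # malicious: correct
print(predict(train(poisoned), (9.5, 9.5)))  # benign: poisoning worked
```

The same mechanism at scale is what makes poisoned self-driving or medical models dangerous: the model trains without errors, yet its decision boundary quietly serves the attacker.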
Conversely, experts also note that AI can become a powerful force in combating cyberattacks. In a recent report, researchers at Fortinet have successfully utilized AutoGPT, an AI system based on the GPT-4 model, to implement enhanced security measures for network systems.
This AI not only receives tasks from humans but also automatically divides them into stages, then spawns 'AI agents' to analyze and make decisions. It can even search for and download necessary security tools during operation, much like a human cybersecurity employee.
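The coordinator-and-agents pattern described above can be sketched as follows. This is a hedged illustration only: the `plan` and `run_agent` functions are hard-coded stand-ins, not the actual AutoGPT or GPT-4 calls the Fortinet researchers used.

```python
# Agent-loop sketch: a coordinator decomposes a goal into stages and
# dispatches each stage to a worker "agent". In a real system both
# functions would call an LLM and external tools; here they are stubs.

def plan(goal: str) -> list[str]:
    """Stand-in planner: a real system would ask an LLM to decompose the goal."""
    return [f"scan network for {goal}",
            f"analyze findings for {goal}",
            f"report on {goal}"]

def run_agent(subtask: str) -> str:
    """Stand-in worker: a real agent could fetch tools or query APIs here."""
    return f"done: {subtask}"

def coordinator(goal: str) -> list[str]:
    results = []
    for subtask in plan(goal):              # break the goal into stages
        results.append(run_agent(subtask))  # each stage handled by an agent
    return results

for line in coordinator("open ports"):
    print(line)
```

The key design point is the separation: the planner decides *what* to do, while each agent decides *how*, which is what lets such systems add tools mid-run without replanning everything.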
To counter hackers applying AI to cyberattacks, security systems need continuous monitoring, access control, and protection of AI training data. This process also includes application control, behavior analysis, and user inspection and verification. Strengthening cybersecurity requires not only narrowing the cybersecurity skills gap but also sharing information and incident-handling experience among organizations. These measures help weaken cybercriminal networks and build a secure, flexible system capable of responding rapidly to increasingly complex AI-based threats.
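One concrete form of protecting AI training data is integrity checking: fingerprint the records at ingestion, then re-verify before every training run so silent tampering, as in the poisoning attacks above, is caught early. The sketch below uses Python's standard `hashlib`; the records and digests are illustrative, not a real pipeline.

```python
import hashlib

# Training-data integrity sketch: an order-sensitive SHA-256 digest is
# recorded when the data is ingested, and recomputed before training.
# Any flipped label or altered record changes the digest.

def fingerprint(records: list[bytes]) -> str:
    """Order-sensitive SHA-256 digest over all training records."""
    h = hashlib.sha256()
    for record in records:
        h.update(record)
    return h.hexdigest()

baseline = [b"user=alice,label=benign", b"user=eve,label=malicious"]
trusted_digest = fingerprint(baseline)  # stored securely at ingestion

# Later, before a training run: an attacker has flipped a label.
tampered = [b"user=alice,label=benign", b"user=eve,label=benign"]

print(fingerprint(baseline) == trusted_digest)  # True: data intact
print(fingerprint(tampered) == trusted_digest)  # False: tampering detected
```

Storing the trusted digest outside the data pipeline (for example, in an access-controlled secrets store) matters as much as the hash itself, since an attacker who can rewrite both the data and the digest defeats the check.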