A new report highlights the need for innovation and investment in security to prevent adversaries from gaining the upper hand with AI-enabled cyberattacks.
Although the use of artificial intelligence (AI) in today’s cyberattacks is limited, a new report warns that this may change in the near future. Created in collaboration with WithSecure™ (formerly known as F-Secure Business), the Finnish Transport and Communications Agency (Traficom), and the Finnish National Emergency Supply Agency (NESA), the report analyzes current trends and developments in AI, cyberattacks, and the areas where the two overlap.

The report points out that AI use in cyberattacks is currently very rare, limited to applications that perform social engineering (such as impersonating an individual) or that operate in ways not directly observable by researchers and analysts (such as data analysis in backend systems). However, the report highlights that the quantity and quality of advances in AI have increased the likelihood of more advanced cyberattacks in the foreseeable future.

According to the report, target identification, social engineering, and impersonation are the most imminent AI-enabled threats, and they are expected to develop further within the next two years, both in number and in sophistication. Within the next five years, attackers are likely to develop AI that can autonomously find vulnerabilities, plan and execute attack campaigns, stealthily evade defenses, and gather information from compromised systems or from open source code.

“While AI-generated content has been used for social engineering purposes, AI technologies designed to direct campaigns, perform attack steps, or control the logic behind malware have still not been observed in the wild. These technologies will first be developed by resourceful, highly skilled adversaries, such as nation-state groups,” says Andy Patel, Intelligence Researcher at WithSecure.
“After new AI technologies are developed by sophisticated adversaries, some of these are likely to spread to less skilled adversaries, thereby broadening the threat landscape.”

While current defenses can handle some of the challenges posed by attackers’ use of AI, the report highlights that others require defenders to adapt and evolve. New technologies are needed to counter AI-based phishing that uses synthesized content, spoofing of biometric authentication systems, and other threats on the horizon. The report also touches on the significant role that non-technical solutions, such as information sharing, resourcing, and security awareness training, play in dealing with the threat of AI-powered attacks.

“Security is not seeing the same level of investment or advancement as many other AI applications, which could eventually lead to attackers gaining an upper hand,” says Samuel Marchal, Senior Data Scientist at WithSecure. “You must remember that while legitimate organizations, developers, and researchers follow privacy regulations and local laws, attackers do not. If policymakers expect the development of safe, reliable, and ethical AI-based technologies, then they need to consider how to secure that vision in relation to AI-enabled threats.”