ESET: new-generation viruses will be able to imitate human behavior

Experts have named the main cyberthreats posed by the spread of artificial intelligence (AI) technologies

Analysts predict that the market for AI technologies in the cybersecurity sector will grow by more than 20 percent per year, reaching $46.3 billion by 2027.

More than thirty countries, including Russia, have national strategies for AI development. In 2021-2022, the Russian government plans to allocate $53.5 million to artificial intelligence technologies.

Alexander Pirozhkov, head of the Threat Intelligence department, noted that the active adoption of these technologies could create new threats to information security.

Artificial intelligence is changing the nature of cyberthreats, while AI systems themselves are becoming increasingly vulnerable.

First, AI-based malware will be able to impersonate trusted users by learning the nuances of human behavior and communication in email correspondence and on social networks. Such viruses will reproduce the writing style of a particular user, making their messages nearly impossible to distinguish from genuine ones.
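The underlying idea, that text style can be statistically learned from a sample and then reproduced, can be illustrated with a deliberately simple sketch. The example below is not taken from any real malware; it uses a basic Markov chain (a far cruder technique than the AI models the article describes) purely to show how generated text inherits the vocabulary and phrasing patterns of its source.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each pair of consecutive words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def imitate(model, length=12, seed=0):
    """Generate text that statistically resembles the source sample."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        candidates = model.get(tuple(out[-2:]))
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Hypothetical writing sample standing in for a victim's email history.
sample = ("thanks for the update I will review the report and "
          "send the update to the team before the meeting tomorrow")
model = build_model(sample)
print(imitate(model))
```

Every word the generator emits comes from the sample, so the output superficially "sounds like" its author; modern language models do the same thing at a far deeper level, capturing tone and context rather than just word adjacency.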

The second threat posed by AI is the ability to conduct malicious actions in the background over long periods. Even today, skilled hackers often remain unnoticed inside compromised systems for several months.

And the third threat is the speed of analysis and decision-making, in which AI far exceeds humans. Artificial intelligence will be able to carry out highly targeted attacks without wasting time and resources on processing data irrelevant to the victim.

"And since AI knows how to understand context, attacks under its control will become even harder to detect. It is not yet clear how a programmed system will behave when the context changes. Can it be controlled? Information security professionals continue to work on forms of human oversight so that all decisions made by AI remain auditable," the expert concluded.