
Is there malicious use of Artificial Intelligence?

Artificial Intelligence (AI) already plays a leading role in our lives. It is built on algorithms (mathematical learning methods) that allow us to analyze large amounts of data, streamline processes and make operations more efficient.

The development of AI technology is directly tied to the growth of Big Data, an essential input for this type of technology. The exponential growth in data generation and high levels of mobile connectivity allow AI to identify consumption patterns that businesses can use to their advantage.

The COVID-19 pandemic accelerated the digital transformation of companies and, with it, the development of AI technology, which already has enormous potential. In today’s environment, AI is an essential factor that strengthens organizations and promotes their growth in a strategic way.

As we have seen, AI can bring enormous benefits to society and help solve some of the biggest challenges we face today. At the same time, it can serve as a means of protection against a wide variety of threats, not only digital but also physical and political.

Large corporations and governments have already incorporated AI into their internal security systems. They save time and money by working through structured data quickly, as well as by reading and learning from unstructured data, statistics, words and phrases. The result is a faster, more proactive response to new challenges.

AI can be a valuable tool against hackers. Through constant training, machine learning, as a component of AI, learns and assimilates user behavior patterns and can alert security teams to any unusual activity. Unfortunately, many of us still rely on firewalls as our main defense, and increasingly sophisticated attacks can already breach them.

Hackers are always trying to improve, slipping through holes we didn’t know existed. In many cases it takes an organization months to detect a data breach or information theft. In this new scenario, AI can continuously collect data and watch for an intruder to appear, searching for the behavioral anomalies hackers often exhibit, for example the way a password is typed or the location a user is logging in from.
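To make the idea concrete, below is a minimal sketch of this kind of behavioral anomaly detection in Python, using scikit-learn’s Isolation Forest. The feature set (typing speed, login hour and distance from the user’s usual location) is an illustrative assumption, not a description of any particular product or of the systems mentioned later in the report.

```python
# Minimal sketch: flagging unusual login behavior with an Isolation Forest.
# The features below (typing interval, login hour, distance from the usual
# location) are hypothetical and chosen only to illustrate the idea.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical sessions for one user:
# [average seconds between keystrokes, login hour (0-23), km from usual location]
normal_sessions = np.array([
    [0.21, 9, 2.0],
    [0.19, 10, 1.5],
    [0.23, 9, 0.5],
    [0.20, 11, 3.0],
    [0.22, 8, 2.5],
    [0.18, 10, 1.0],
])

# Learn what "normal" looks like for this user.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)

# A new session: very fast typing, a 3 a.m. login, thousands of km away.
new_session = np.array([[0.05, 3, 4200.0]])

# predict() returns 1 for normal behavior and -1 for an anomaly.
if model.predict(new_session)[0] == -1:
    print("Unusual activity detected: alert the security team")
else:
    print("Session is consistent with this user's normal behavior")
```

In practice such a system would be trained on far more sessions and features, but the principle is the same: model a user’s normal behavior and alert on deviations from it.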

On this subject, a report by Trend Micro, the United Nations Interregional Crime and Justice Research Institute (UNICRI) and Europol, published in November 2020, provides an updated overview of the malicious uses and abuses of AI, which can include AI-powered malware, AI-assisted password hacking, and AI-assisted encryption and social engineering attacks. The risks and possible criminal abuse of AI systems therefore need to be understood, not only to protect society but also strategic industries and infrastructure.

According to the report, one of the most visible malicious uses of AI is the phenomenon known as deepfakes. Through AI it is possible to manipulate or generate visual and audio content that is difficult for human beings, and even for technological solutions, to immediately distinguish from the authentic material. Deepfake videos have become the most sophisticated technique for generating fake news.

It is possible that malware developers are already using AI without being detected by researchers and analysts. For example, they could use AI to evade spam filters, escape detection by antivirus software and thwart malware analysis.

AI could also enhance traditional hacking techniques by introducing new forms of attack that would be difficult for humans to predict. These could include fully automated penetration tests, improved password-guessing methods, tools to break CAPTCHA security systems, or phishing-like attacks.

It is believed that in the future AI could enable criminals to carry out large-scale social engineering by automating the first steps of an attack: generating content, improving the collection of business intelligence and speeding up the identification of potential victims and vulnerable business processes.

The increasing automation of many aspects of daily life through AI-based systems inevitably brings with it a possible loss of control over those aspects. One example could be AI-enabled stock market manipulation that exploits AI-based algorithms in high-frequency trading, where the distinctive characteristic is the speed at which algorithms process data and execute orders.

Meanwhile, current and emerging technologies such as 5G, in combination with AI, will shape industry and further drive automation and smart technologies at scale. It can be presumed that criminals will also be in a position to attack or manipulate these technologies.

In conclusion, even as we face the future challenge of preventing and combating the malicious use of this technology, it is undeniable that AI holds great potential for positive applications, including critical support for the investigation of all kinds of crime and for the implementation of important projects by organizations.

At G5 Integritas we can help you search through large volumes of information, process it using digital data management tools and then provide a personalized analysis. For more information, contact us at info@g5integritaslatam.com or visit our website www.g5integritaslatam.com.

G5 Integritas Latam Team

G5 Integritas is a consulting and strategic advisory firm for corporate clients, financial institutions, law firms, entrepreneurs and investors, specialized in: risk and compliance, due diligence and background checks, investigations and business intelligence, security consulting, and forensic information and data recovery.
