In our previous articles, we looked at what AIs are and how they can be applied in different industrial fields. But what impact does this technology have on cyber-security?
AI can be used both by those seeking to compromise systems and by those defending them. In today’s article, we will examine the cyber-criminal’s perspective.
Experts warn that attackers can use generative AI and large language models to scale attacks to unprecedented speed and complexity, notably by finding new ways to exploit geopolitical tensions in advanced campaigns. AI also lets hackers refine their ransomware and phishing techniques, making them far more sophisticated.
Thanks to AI, cyber-criminals can remain dormant and undetected in a company’s network for long periods, during which they set up tools to access the organization’s critical infrastructure. Then, when ready to launch a company-wide attack, they can monitor meetings, exfiltrate data, spread malware, create privileged user accounts to reach other systems and/or deploy ransomware.
AI is a particularly effective tool for cyber-criminals because it learns from the data it collects and adapts to the reactions it anticipates, making attacks more effective. Automated, targeted attacks such as phishing campaigns and AI-generated malware can be harder to detect and counter. More generally, AI can enhance existing hacking techniques such as stealth attacks, password cracking, CAPTCHA solving and identity theft.
However, AI-powered cyber-crime also has techniques of its own; let’s look at a few examples.
AI-enabled attacks are among the emerging threats identified by ENISA, the European Union’s cyber-security agency
If training data is altered or corrupted, an AI-powered tool can produce unexpected, even malicious results. It is entirely possible today to poison a model with malicious data and skew its output, which can be very dangerous for a company and its customers.
Attackers use machine learning and AI to compromise environments by poisoning models with inaccurate data. Machine learning models rely on correctly labeled samples to build accurate, reproducible detection profiles. By introducing benign-looking files that are actually malware, or by triggering behavior the model learns to treat as false positives, attackers can make genuinely malicious activity appear harmless. They can also poison a model by slipping malicious files into the training set so that they end up labeled as safe.
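To make the idea concrete, here is a minimal, self-contained sketch of label poisoning (a toy nearest-centroid detector over a single made-up feature score, not any vendor's real pipeline): injecting malware-like samples mislabeled as "benign" drags the benign class centroid toward the malware region, so a suspicious file is later misclassified.

```python
# Toy illustration of training-data (label) poisoning.
# A nearest-centroid "detector" is trained on 1-D feature scores:
# benign files cluster near 1.0, malware near 9.0.

def train_centroids(samples):
    """Compute the mean feature value per class label."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(centroids, value):
    """Assign the class whose centroid is closest to the value."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

clean_data = [(0.8, "benign"), (1.1, "benign"), (1.3, "benign"),
              (8.7, "malware"), (9.0, "malware"), (9.4, "malware")]

# Attacker injects malware-like samples mislabeled as "benign",
# pulling the benign centroid toward the malware region.
poison = [(8.5, "benign"), (8.8, "benign"), (9.1, "benign")]

clean_model = train_centroids(clean_data)
poisoned_model = train_centroids(clean_data + poison)

suspicious_file = 6.0  # a clearly malware-leaning feature score
print(classify(clean_model, suspicious_file))     # -> malware
print(classify(poisoned_model, suspicious_file))  # -> benign
```

Real poisoning attacks work on high-dimensional feature vectors and far larger training sets, but the mechanism is the same: mislabeled training points shift the decision boundary in the attacker's favor.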
For example, slightly modified images can mislead an image recognition model. This can have serious implications in areas such as autonomous vehicle safety and facial recognition.
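The same principle applies to any model with a learned decision boundary. Below is a hedged sketch of an evasion (adversarial-example) attack on a hypothetical linear detector: nudging each feature a small amount against the weight vector's sign (in the spirit of the fast-gradient-sign method) flips the verdict while barely changing the input. The weights and sample values are invented for illustration.

```python
# Sketch of an evasion attack on a linear classifier:
# score(x) = w . x + b, flagged as "malicious" when score > 0.

def score(w, b, x):
    """Linear decision score: dot product plus bias."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial(w, x, epsilon):
    """Shift every feature slightly in the score-decreasing direction."""
    return [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

w = [0.9, -0.4, 0.7]   # hypothetical detector weights
b = -1.0
x = [1.2, 0.5, 0.8]    # a sample the detector flags

print(score(w, b, x) > 0)       # True: detected
x_adv = adversarial(w, x, 0.3)  # perturb each feature by 0.3
print(score(w, b, x_adv) > 0)   # False: the perturbed sample evades
```

An image-recognition model is far more complex than this three-feature toy, but the attack surface is identical: small, targeted perturbations that cross the decision boundary without looking different to a human.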
AI-powered tools could enable developers with basic programming skills to create automated malware, such as advanced malicious bots. A malicious bot can steal data, infect networks and attack systems with little or no human intervention.
Sophisticated malware, for example, can modify local system libraries and components, execute in-memory processes and communicate with one or more domains belonging to the attacker’s control infrastructure. All these activities combined create a profile known as tactics, techniques and procedures (TTP). Machine learning models can observe TTPs and use them to develop detection capabilities.
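The TTP idea can be sketched very simply: model each known attack profile as a set of observed techniques and score hosts by overlap. The profile names, technique labels and threshold below are invented for illustration and are far simpler than real behavioral models.

```python
# Illustrative (not production) TTP matching: each known profile is a
# set of techniques; a host's observed events are scored by coverage.

KNOWN_TTPS = {
    "ransomware-x": {"modify_system_library", "in_memory_execution",
                     "c2_beacon", "mass_file_encrypt"},
    "info-stealer": {"browser_credential_read", "c2_beacon",
                     "archive_exfil"},
}

def match_ttps(observed, profiles, threshold=0.75):
    """Return profiles whose techniques are mostly present in `observed`."""
    hits = {}
    for name, techniques in profiles.items():
        coverage = len(techniques & observed) / len(techniques)
        if coverage >= threshold:
            hits[name] = coverage
    return hits

# Events seen on a host: 3 of the 4 "ransomware-x" techniques.
events = {"modify_system_library", "in_memory_execution", "c2_beacon"}
print(match_ttps(events, KNOWN_TTPS))  # -> {'ransomware-x': 0.75}
```

Real detection models learn these profiles statistically rather than from hand-written sets, which is exactly why poisoned or misleading training data is so damaging.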
Attackers are actively seeking to map the AI models, existing and in development, used by cyber-security vendors and operations teams. By learning how these models work and what they do, criminals can disrupt machine learning operations during training and update cycles, tricking the system into favoring attackers and their tactics. They can also evade known models altogether by subtly modifying their data so that recognized patterns no longer trigger detection.
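One way attackers map a deployed model is black-box probing: querying the detector repeatedly and using only its yes/no answers to locate the decision threshold, then crafting inputs that stay just under it. The sketch below is a deliberately simplified stand-in (a single feature and a hidden numeric cutoff), not a description of any real product.

```python
# Black-box probing sketch: binary-search a detector's hidden threshold
# using only its boolean verdicts, then craft an input that slips under.

HIDDEN_THRESHOLD = 0.62  # internal to the detector, unknown to the attacker

def detector(feature):
    """Black-box oracle: True if the sample is flagged."""
    return feature >= HIDDEN_THRESHOLD

def probe_threshold(oracle, lo=0.0, hi=1.0, steps=30):
    """Narrow down the flip point using only yes/no answers."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        if oracle(mid):
            hi = mid   # mid is flagged: threshold is at or below mid
        else:
            lo = mid   # mid passes: threshold is above mid
    return hi  # smallest value known to be flagged

estimate = probe_threshold(detector)
print(round(estimate, 3))          # -> 0.62 (recovered by probing)
print(detector(estimate - 0.01))   # -> False: crafted sample evades
```

Thirty queries pin the cutoff to within about one-billionth; real models have many features and noisy boundaries, but the economics are similar, which is why rate-limiting and query monitoring matter for ML-backed detectors.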
We will be back in mid-September for the final part of this series, in which we’ll look at how AI can benefit cyber-security. In the meantime, if you have any content to protect – a film, series, book, music album or software – please don’t hesitate to contact us, and one of our account managers will be happy to help. We’ve been pioneers in cyber-security and intellectual property protection for over ten years. Happy back-to-school season!
© 2023 PDN Cyber Security Consultant. All rights reserved.