Artificial Intelligence in Cyber Security Attacks

During a cybersecurity summit hosted by the National Cyber Security Alliance (NCSA) and Nasdaq, cybersecurity experts explained ways in which artificial intelligence and machine learning can be used to evade cybersecurity defenses and make intrusions faster and more effective at penetrating a target.


The Executive Director of the National Cyber Security Alliance hosted the conversation as part of a Usable Security session entitled "Creating and Measuring Change in Human Behaviour". Hackers can use AI to evade detection, hide where defenders cannot find them, and automatically adapt to countermeasures.




What are the ways in which AI and machine learning can be used in cybersecurity attacks?


DigitalGuardian's chief information security officer has pointed out that cybersecurity will always need human minds to build strong defenses and stop attacks: AI is the tool, while expert information security analysts and threat hunters are the real heroes. Here are three ways in which AI and machine learning can be used in cybersecurity attacks:


1- Data poisoning


Bad actors sometimes target the data used to train machine learning models. Data poisoning manipulates the training dataset in order to control the prediction behavior of the trained model, leading to incorrect output such as classifying spam emails as safe content.


Types of data poisoning


There are two types of data poisoning: attacks that target the availability of the machine learning model, and attacks that target its integrity. Research indicates that poisoning just 3% of a training dataset can reduce accuracy by 11%. With backdoor attacks, a hacker can add inputs to an algorithm without the model's designer knowing. The hacker then uses this backdoor to make the machine learning system misclassify a particular string as benign even when it carries malicious data.
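
To make this concrete, here is a minimal sketch of one blunt poisoning technique, label flipping, using scikit-learn. The synthetic dataset, the logistic regression model, and the 3% flip rate are illustrative assumptions, not the setup from the research cited above, and a targeted attack would typically do more damage than random flips.

```python
# A minimal sketch of label-flipping data poisoning (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a spam/ham dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 3% of the training labels by flipping them (0 <-> 1).
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.03 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```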


Since data poisoning techniques can transfer from one model to another, and since data is the lifeblood and fuel of machine learning systems, a great deal of attention must be paid to the data we use to train models. User confidence in a model is shaped by the quality of its training and the data that goes into it. One speaker noted that the industry needs standards and guidelines to ensure data quality, and that NIST is working on national guidelines for trustworthy artificial intelligence, including high-level guidance and technical requirements to address accuracy, security, bias, privacy, and annotation.


2- Generative adversarial networks


What are Generative Adversarial Networks (GANs)?


Generative adversarial networks are, in essence, two AI systems competing with each other: one generates content that imitates the original data, while the other detects its flaws. GANs are a class of machine learning frameworks designed by Ian Goodfellow and colleagues in 2014, in which two neural networks compete against each other in a game. Given a training set, the technique learns to generate new data with the same statistics as the training set, and by competing, the two networks jointly create content convincing enough to pass for the original. As Stephanie Condon explains on ZDNet, Nvidia researchers trained an AI model to recreate PAC-MAN simply by watching hours of gameplay, without using a game engine.
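
As a hedged illustration of that generator-versus-discriminator game, the sketch below trains a tiny GAN in PyTorch to mimic a one-dimensional Gaussian distribution. The architecture, hyperparameters, and target distribution are all illustrative assumptions, not a production attack tool.

```python
# A minimal GAN sketch: the generator learns to mimic N(4, 1.5)
# while the discriminator learns to tell real samples from fakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(3000):
    real = torch.randn(64, 1) * 1.5 + 4.0    # "real" data drawn from N(4, 1.5)
    fake = G(torch.randn(64, 8))             # generated samples

    # Discriminator step: push real toward 1, fake toward 0.
    d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f} (target 4.00, 1.50)")
```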


Why do hackers use Generative Adversarial Networks (GANs)?


Hackers use GANs to mimic normal traffic patterns, divert attention away from their attacks, and quickly find and exfiltrate sensitive data. Thanks to these capabilities they can be in and out within 30 to 40 minutes, and once attackers start taking advantage of AI and machine learning, they can automate these tasks. GANs can also be used to crack passwords, evade malware detection, and spoof facial recognition.


The PassGAN system


Machine learning researchers built the PassGAN system and trained it on an industry-standard password list; it was eventually able to guess more passwords than many other tools trained on the same dataset. Beyond generating data, GANs can also create malware designed to evade machine learning-based detection systems.
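
To show where such generated guesses fit into an attack, here is a purely hypothetical sketch of the evaluation step: candidate passwords are hashed and checked against a set of leaked, unsalted hashes. The fixed candidate list stands in for a trained generator's output, and all passwords and "breach" data below are made up for illustration.

```python
# Hypothetical evaluation of generated password candidates (illustrative only).
import hashlib

def sha1_hex(password: str) -> str:
    return hashlib.sha1(password.encode()).hexdigest()

# Toy stand-in for a leaked, unsalted hash dump.
leaked_hashes = {sha1_hex(p) for p in ["sunshine1", "qwerty123"]}

# Toy stand-in for candidates a trained generator would emit.
candidates = ["password", "sunshine1", "letmein", "qwerty123"]

cracked = [c for c in candidates if sha1_hex(c) in leaked_hashes]
print("cracked:", cracked)  # -> ['sunshine1', 'qwerty123']
```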


The artificial intelligence algorithms used in cybersecurity must be constantly retrained so that they can identify new and ever-evolving attack methods; as adversaries evolve, defenses must be prepared for that evolution. Obfuscated malware is one example: a piece of malware is often wrapped in legitimate code, and a machine learning algorithm must be capable of identifying the malicious code within.
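
As a minimal sketch of what such continuous retraining can look like, the snippet below updates a detector in place with scikit-learn's partial_fit as new attack samples arrive. The features, labels, and drift pattern are illustrative assumptions.

```python
# Incremental retraining sketch: update a detector as new attacks appear.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = benign, 1 = malicious

# Initial training batch of labeled traffic features.
X0 = rng.standard_normal((500, 10))
y0 = (X0[:, 0] > 0).astype(int)
clf.partial_fit(X0, y0, classes=classes)

# Later: new attack samples arrive with a shifted pattern; update in place
# instead of retraining from scratch.
X1 = rng.standard_normal((200, 10)) + 0.5
y1 = (X1[:, 1] > 0.5).astype(int)
clf.partial_fit(X1, y1)

print("detector updated; coefficient shape:", clf.coef_.shape)
```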


3- Manipulating bots


The chief cybersecurity strategist at VMware Carbon Black has pointed out that if AI algorithms are making decisions, they can be manipulated into making the wrong decision. If hackers understand these patterns, they can exploit and abuse them in attacks, as in a recent attack on a cryptocurrency trading system operated by bots. The attackers got in, worked out how the bots conducted their trading, and used that knowledge to trick the algorithm; the same approach can be applied across other applications. The technique is not new, but these algorithms now make smarter decisions, which increases the risk of a bad decision.
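
As a purely hypothetical illustration of this kind of manipulation (not the actual incident described above), here is a toy momentum bot whose buy/sell rule an attacker who has learned it can flip by injecting a burst of spoofed prices into its feed.

```python
# Toy decision rule: buy when the short-term average rises above the long-term one.
def decide(prices, short=3, long=8):
    if len(prices) < long:
        return "hold"
    short_avg = sum(prices[-short:]) / short
    long_avg = sum(prices[-long:]) / long
    return "buy" if short_avg > long_avg else "sell"

real_feed = [100, 99, 98, 98, 97, 97, 96, 95]  # genuine downtrend
print(decide(real_feed))                       # -> "sell"

# An attacker who knows the rule injects three spoofed ticks.
spoofed_feed = real_feed + [104, 105, 106]
print(decide(spoofed_feed))                    # -> "buy" (manipulated decision)
```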