Adversarial Attacks on Intrusion Detection Systems

Artificial intelligence (AI) has come a long way in a very short period of time. Alan Turing, a pioneer computer scientist, published the first paper on the possibility of machines that can think in 1950. In less than a century, humans have created machines and programs that can compute and comprehend very large amounts of data to learn and mimic the acts of humans themselves.

People, businesses, and governments rely heavily on this technology, often without even realizing it. One growing application of AI is security. Intrusion Detection Systems (IDS) protect networks and systems from malicious traffic. Because AI can learn, it is well suited to this task: rather than following a fixed set of rules, an AI-based IDS dynamically builds its own, letting it distinguish benign from malicious internet traffic and adapt as threats evolve.

Intrusion detection systems play a big role in protecting the information stored in a network or system, and AI is being integrated into IDS because it requires little maintenance and can stay current with the latest attacks. Amid the rapid development of AI, however, the robustness of these systems has been neglected, and current research looks into ways of improving IDS.

An emerging technology called a Generative Adversarial Network (GAN) can be used to attack any kind of machine learning system with AI. Attacks generated by a GAN act to confuse or fool the target algorithm into producing an output different from the expected one.

The image below is a good example of how a GAN confuses a machine learning system. It takes an input and modifies it so that it still resembles the original. Anyone looking at these two images would be convinced that both are cats: they share pointed ears, whiskers, the shape of the face, fur, and so on. But to a machine learning system, which analyzes the image at a deeper, lower level, the two are very different. The image on the left has far more detail, while the image on the right is much plainer; the system may decide that the left image is too detailed to be a cat, or that the right one lacks the detail to be a cat, even though to a person they are the same thing. The end result is a misclassified image, proving the GAN successful in confusing the machine learning algorithm.
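The cat example can be made concrete with a small sketch. The code below is a toy, assuming a simple linear scorer instead of a real image model (all names here are illustrative): a perturbation bounded per "pixel" flips the predicted label, which is exactly the kind of confusion described above.

```python
import numpy as np

# Toy linear "classifier": label is "cat" when w . x > 0.
# (Illustrative only -- real image classifiers are deep networks.)
rng = np.random.default_rng(0)
n = 10_000                         # number of "pixels"
w = rng.normal(size=n)             # pretend-trained weights
x = w / np.linalg.norm(w)          # an input the model confidently calls "cat"
eps = 0.05                         # maximum change allowed per pixel

def label(v):
    return "cat" if w @ v > 0 else "not cat"

# FGSM-style perturbation: nudge every pixel slightly against the weights
# (for a linear model, the gradient of the score is just w itself).
x_adv = x - eps * np.sign(w)

print(label(x))      # "cat"
print(label(x_adv))  # "not cat" -- flipped by a change of at most eps per pixel
```

Each pixel moved by no more than 0.05, a change a person would barely notice, yet the classifier's decision reversed.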

An attack on any machine learning-based IDS has three main characteristics that determine what kind of attack it is, placing it into one of eight distinct classes. It should be noted that positive is assumed to be malicious and negative is assumed to be normal. The three categories can be found below, each containing two possible characteristics:

Influence:

  • Causative attacks influence learning with control over training data (alter the training process)
  • Exploratory attacks exploit existing weaknesses in the trained classifier without affecting training (probe for misclassifications)

Security Violation:

  • Integrity attacks compromise assets via false negatives (accepts malicious input)
  • Availability attacks cause denial of service, usually via false positives (rejects good input)

Specificity:

  • Targeted attacks focus on a particular instance (let certain input pass)
  • Indiscriminate attacks encompass a wide class of instances (let many kinds of input pass)
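Since an attack takes one option from each of the three categories, the eight distinct classes fall out mechanically; a quick sketch (the variable names are mine, not the article's) enumerates them:

```python
from itertools import product

# The three axes of the taxonomy, two options each.
influence   = ["causative", "exploratory"]
violation   = ["integrity", "availability"]
specificity = ["targeted", "indiscriminate"]

# One option per axis, never two from the same axis: 2 * 2 * 2 = 8 classes.
attack_classes = [" ".join(combo) for combo in product(influence, violation, specificity)]

for name in attack_classes:
    print(name)   # e.g. "exploratory integrity targeted"

print(len(attack_classes))  # 8
```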

An attack takes exactly one characteristic per category, and never both from the same category, as the two would contradict each other. Depending on the type of IDS being attacked, the type of attack the GAN creates would differ. For example, a sequence-based (rule-based) IDS would be subjected to Exploratory Integrity attacks, where the specificity does not matter. An Exploratory Integrity attack focuses on producing false negatives, allowing malicious traffic to enter the system. Fooling a sequence-based IDS involves sending a mass of inputs, each with a slightly different form, to try to slip past the rules set by the IDS. The goal could be to send one malicious input through, or many, and that is up to the bad actor to decide.
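As a minimal sketch of the Exploratory Integrity idea, assume a toy blacklist-based IDS (nothing like a production system): the attacker probes slightly rewritten payloads until one produces a false negative, a malicious input the IDS accepts.

```python
# Toy "sequence-based" IDS: a fixed blacklist of byte patterns.
RULES = [b"DROP TABLE", b"<script>"]

def ids_allows(payload: bytes) -> bool:
    """Return True (negative, i.e. 'normal') when no rule matches."""
    return not any(rule in payload for rule in RULES)

malicious = b"'; DROP TABLE users; --"
print(ids_allows(malicious))              # False -- the raw attack is caught

# Exploratory probing: small rewrites that keep the attack's intent
# (to a typical SQL parser) but break the exact byte sequence the rule matches.
candidates = [
    malicious.replace(b" ", b"/**/"),     # SQL comment in place of spaces
    malicious.replace(b"DROP", b"DrOp"),  # case change defeats exact matching
]
evasions = [c for c in candidates if ids_allows(c)]
print(evasions)                           # both rewrites slip past the rules
```

Each accepted rewrite is a false negative in the taxonomy's terms, which is exactly the Integrity violation described above.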

These attacks are then binned into four attack types: Denial of Service (DoS), Probe, User to Root (U2R), and Remote to Local (R2L). These attack types focus on different outcomes, where the intent of each attack is found below:

  • DoS is an attack that tries to shut down traffic flow to and from the target system. The IDS is flooded with an abnormal amount of traffic, which the system can't handle, and it shuts down to protect itself. This prevents normal traffic from visiting the network. An example of this could be an online retailer getting flooded with bogus requests on a day with a big sale; because the network can't handle them all, it shuts down, preventing paying customers from purchasing anything.
  • Probe, or surveillance, is an attack that tries to get information from a network. The goal here is to act like a thief and steal important information, whether it be personal information about clients or banking information.
  • U2R is an attack that starts off with a normal user account and tries to gain access to the system or network as a super-user (root). The attacker attempts to exploit vulnerabilities in the system to gain root privileges.
  • R2L is an attack that tries to gain local access to a remote machine. The attacker does not have local access to the system or network and tries to "hack" their way in.
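For illustration, these are the same four categories used to group attacks in the well-known KDD'99 / NSL-KDD intrusion-detection benchmark datasets. A small mapping sketch (only a handful of that dataset's attack names are shown, and the helper is hypothetical):

```python
# Partial mapping of KDD'99-style attack names to the four attack types.
ATTACK_CATEGORY = {
    "smurf": "DoS", "neptune": "DoS", "teardrop": "DoS",
    "portsweep": "Probe", "ipsweep": "Probe", "nmap": "Probe",
    "buffer_overflow": "U2R", "rootkit": "U2R",
    "guess_passwd": "R2L", "ftp_write": "R2L",
}

def categorize(attack: str) -> str:
    """Look up an attack's type, defaulting to 'unknown' for unseen names."""
    return ATTACK_CATEGORY.get(attack, "unknown")

print(categorize("nmap"))     # Probe
print(categorize("rootkit"))  # U2R
```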

An IDS is subjected to many kinds of attacks, as each attack type has many subclasses. The idea of a machine learning-based IDS, one that learns by itself to classify the different attacks, is growing. This is why the potential of creating a GAN to counter the IDS is so great: a GAN is made to fool machine learning systems. It will prove a great asset in testing the robustness of an IDS and could lead security systems into a new era.

Overall, an IDS is a very useful, fundamental tool against today's attacks, but the possibility of using a GAN to generate attacks against an IDS is growing very quickly. A framework called the Intrusion Detection System Generative Adversarial Network (IDSGAN) has already been created and has shown that a simple IDS is weak against GAN-generated attacks. The future of security will be a fight between IDSs and GANs, and whichever develops faster will be the one standing on top.

Gerry Saporito

Gerry is the co-founder & CTO of Lumaki Labs, a startup assisting companies build future-proof talent pipelines by building a platform to maximize internships. When he isn’t working or watching anime, he is either playing tennis or looking for new companies to add to his WealthSimple portfolio.
