It is known that Intrusion Detection Systems (IDS) are vulnerable to adversarial attacks, and research continues to demonstrate how easily these systems can be broken. Many have begun to recognize the flaws in machine learning, and consequently a framework called Intrusion Detection System Generative Adversarial Network (IDSGAN) has been proposed to craft adversarial attacks that can deceive and evade an IDS.
To understand IDSGANs and how feasible it is to break into an IDS, let us first look at the Generative Adversarial Network (GAN), a generic framework for fooling machine learning algorithms. A GAN is a deep neural network architecture composed of two neural networks pitted against each other (thus the "adversarial"): a generator and a discriminator, each trying to outsmart the other. The generator produces modified versions of the malicious input it is given and sends them to be classified by both the IDS and the discriminator. The generator's goal is to fool the IDS; the discriminator's goal is to mimic the IDS's classifications (correct or not) and provide feedback to the generator. The game ends when neither the IDS nor the discriminator can accurately classify the inputs the generator creates.
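The two-player loop described above can be sketched in a few lines. The following is a minimal toy example, not any particular IDSGAN implementation: the "real" data is just numbers drawn from a normal distribution around 4, the generator is a one-parameter-pair linear model, and the discriminator is a simple logistic classifier. All names, distributions, and learning rates here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: 1-D samples centered at 4.0.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator g(z) = w*z + b with noise z ~ N(0, 1).
w, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(a*x + c), a logistic classifier.
a, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # --- discriminator update: push real -> 1, fake -> 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w * z + b
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    # gradients of -[log d(real) + log(1 - d(fake))], averaged over the batch
    a -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # --- generator update: non-saturating loss -log d(fake) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    w -= lr * np.mean(-(1 - d_fake) * a * z)
    b -= lr * np.mean(-(1 - d_fake) * a)

# After training, the generator's output mean should have drifted toward 4.0.
final_mean = np.mean(w * rng.normal(0.0, 1.0, 10000) + b)
print(f"generated mean after training: {final_mean:.2f} (target 4.0)")
```

The key point is the alternation: the discriminator improves its ability to separate real from fake, and the generator uses the discriminator's gradient signal to make its fakes look more real, the same dynamic IDSGAN applies to network traffic.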
The concept of how the IDSGAN framework functions can be explained through an example. Let the malicious inputs be monsters, the normal inputs be people trying to enter a village (the system/network), and the IDS be a gatekeeper protecting that village. The gatekeeper is very good at distinguishing monsters from humans: he lets the humans into the village while fighting off the monsters. The monsters come up with a plan to disguise themselves, fool the gatekeeper, and enter the village. One monster is appointed as a fake gatekeeper and is given the task of standing near the real gatekeeper to learn how he tells humans and monsters apart (the discriminator). Another becomes a costume maker who learns to create human-like disguises using feedback from the fake gatekeeper (the generator). The costume maker creates a costume and sends a disguised monster toward both gatekeepers alongside real humans, and each gatekeeper classifies the arrivals as people or monsters. The fake gatekeeper compares his decisions with the real gatekeeper's, learns which features made a monster look like a monster, and passes that feedback to the costume maker so the disguises improve. When the fake gatekeeper can no longer tell the monsters and people apart, the costume maker has successfully created a disguise that can fool the real gatekeeper.
This is how the IDSGAN framework operates: the generator produces malicious traffic, mixes it in with normal traffic, and sends it all to both the IDS and the discriminator. The discriminator and generator train in parallel. The discriminator classifies each piece of traffic it encounters, compares its decisions against the IDS's decisions, and adjusts its weights to close the gap. Meanwhile, the generator learns how well its disguises are working from the feedback the discriminator provides. Eventually, the generator learns to disguise malicious input so well that it completely fools the discriminator and, by extension, the IDS the discriminator imitates.
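The procedure above can be sketched concretely. This is a deliberately simplified toy, not the published IDSGAN architecture: the "IDS" is a hypothetical black-box rule, traffic is reduced to two features, the generator is a single learned offset applied only to a "non-functional" feature (echoing IDSGAN's constraint that the attack must still work), and the discriminator is a logistic model fitted to the IDS's verdicts. Every threshold and distribution below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box IDS: flags traffic as malicious (1) when the sum
# of its two features exceeds 1.0.  The generator never sees this rule.
def ids(x):
    return (x[:, 0] + x[:, 1] > 1.0).astype(float)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Column 0 is "functional" (must stay intact so the attack still works);
# column 1 is "non-functional" (free to perturb).
def normal_traffic(n):
    return rng.uniform(0.0, 0.45, (n, 2))

def malicious_traffic(n):
    return rng.uniform(0.6, 1.0, (n, 2))

delta = 0.0                 # generator: learned offset on the free feature
w, b = np.zeros(2), 0.0     # discriminator: d(x) = sigmoid(w.x + b)

lr, batch = 0.1, 64
for step in range(2000):
    # Generator disguises malicious traffic and mixes it with normal traffic.
    mal_adv = malicious_traffic(batch)
    mal_adv[:, 1] += delta
    x = np.vstack([normal_traffic(batch), mal_adv])
    y = ids(x)  # feedback: what the real IDS decided about each flow

    # --- discriminator step: fit the IDS's verdicts, not ground truth ---
    d = sigmoid(x @ w + b)
    w -= lr * ((d - y) @ x) / len(x)
    b -= lr * np.mean(d - y)

    # --- generator step: push the discriminator toward saying "normal" ---
    d_adv = sigmoid(mal_adv @ w + b)
    # gradient of -log(1 - d_adv) with respect to the offset
    delta -= lr * np.mean(d_adv * w[1])

# Evaluate against the real IDS: how much malicious traffic slips through?
mal_test = malicious_traffic(1000)
caught_before = ids(mal_test).mean()
mal_test[:, 1] += delta
caught_after = ids(mal_test).mean()
print(f"IDS detection rate: {caught_before:.0%} before disguise, "
      f"{caught_after:.0%} after")
```

Note that the generator only ever receives gradients through the discriminator, its local stand-in for the IDS, which is what makes the attack feasible against a black-box detector.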
The GAN is a relatively new concept and is still being developed. Even so, it already has many applications across different fields, most notably image generation and recognition. The IDSGAN is a new spin on this idea, and although the framework is very young, it shows a lot of potential. IDSGAN has already been shown to fool simple IDS models, and with further development it may be able to evade far more robust security systems. Hopefully, this work will instead be used to harden today's IDS against the strongest attacks. Because you wouldn't want your information open to just anybody, right?