Generative AI is increasingly being used in penetration testing (pen testing) to enhance many aspects of security testing and assessment. The technology makes it easier to find vulnerable sites and identify candidate attacks. The key is to find an AI engine that will let you query this kind of information; some will refuse for safety and legal reasons. As always, get written permission before beginning your engagement. Here are some of the ways generative AI is being applied:
Overall, generative AI is revolutionizing the field of penetration testing by enabling more automated, efficient, and effective security testing and assessment. However, it’s important to note that while AI can enhance security testing, it also raises concerns about the potential for AI-driven attacks. As such, it’s crucial for organizations to stay informed about the latest developments in AI-driven security threats and defenses.
Notably, some of these attacks rely on the paid ChatGPT Pro tier rather than the free version. In videos circulating on the web, testers are using AI against APIs, with API recon and fuzzing being the most common demonstrations. A prompt to ChatGPT Pro can ask for the data types an API accepts, such as strings, numbers, or dates, along with edge cases like invalid input and common vulnerabilities. A generated wordlist should cover a wide range of possibilities so that it effectively exercises the API's input handling and validates how robust it is. Another AI tool is White Rabbit, which is aimed at ethical hacking; it can be obtained from GitHub and provides the commands and tools needed to carry out attacks.
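To make the fuzzing idea concrete, here is a minimal Python sketch that replays an AI-generated wordlist of edge-case values against a single API parameter. The endpoint URL, parameter name, and wordlist entries are hypothetical placeholders rather than anything from a real engagement, and this kind of testing should only be run against systems you have written permission to test.

```python
import requests

# Hypothetical endpoint and parameter -- placeholders for illustration only.
TARGET_URL = "https://api.example.com/v1/items"
PARAM_NAME = "item_id"

# A small wordlist of edge cases an AI assistant might suggest for an
# integer-like parameter: boundary numbers, wrong types, and junk input.
wordlist = ["0", "-1", "999999999999", "3.14159", "abc", "", "null",
            "2024-01-01", "' OR '1'='1", "%00", "A" * 2048]

for payload in wordlist:
    try:
        resp = requests.get(TARGET_URL, params={PARAM_NAME: payload}, timeout=5)
        # Flag anything other than a clean 200/400 for manual review.
        if resp.status_code not in (200, 400):
            print(f"[!] {payload!r} -> HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"[!] {payload!r} -> request failed: {exc}")
```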
A hot topic is asking the AI which input parameters are susceptible to SQL injection attacks. The output is quite interesting and gives attackers a specific starting point for testing. SQL injection attacks occur when malicious SQL statements are inserted into an input field and then executed by the database. Several types of input parameters are commonly susceptible to SQL injection attacks:
These input parameters are susceptible to SQL injection because they interact directly with the database and are often not properly sanitized or validated by the application. Attackers can exploit this to execute arbitrary SQL commands, potentially gaining unauthorized access to the database, extracting sensitive information, or modifying data. To prevent SQL injection, developers should use parameterized queries or prepared statements, validate and sanitize user input, and implement proper access controls. A web application firewall can also be a good addition to help protect against these attacks.
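To make the parameterized-query advice concrete, here is a minimal sketch using Python's built-in sqlite3 module; the table, column, and sample data are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# Vulnerable: user input is concatenated directly into the SQL string.
vulnerable = f"SELECT role FROM users WHERE username = '{user_input}'"
print(conn.execute(vulnerable).fetchall())  # returns rows it should not

# Safer: a parameterized query treats the input as data, not as SQL.
safe = "SELECT role FROM users WHERE username = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```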
Now that we have looked at the different types of attacks and some demonstrated, proven examples, let's move on to something new and exciting: a deeper dive into adversarial machine learning. Adversarial machine learning is like Jiu-Jitsu, using the machine against the machine. Here are some of the attacks that can be carried out.
Adversarial machine learning attacks are techniques used to manipulate or trick machine learning models by inputting specially crafted data. Here are some common examples:
Adversarial Examples: These are carefully crafted inputs, slightly modified from regular data, that cause a machine learning model to misclassify them. For example, adding imperceptible noise to an image can cause an image recognition system to classify it incorrectly. Two images may appear to show the same animal, but one has been nudged in the direction that increases the model's loss function, with the perturbation kept small enough to stay subtle. This is a white-box, untargeted attack against a neural network, and the perturbed image is misclassified (see the sketch after this list). In text models, even misspellings can break classification. Think of a self-driving car that sees a stop sign with modifications or graffiti and interprets it as a speed limit sign.
Poisoning Attacks: In poisoning attacks, an attacker manipulates the training data to introduce bias into the model. For example, an attacker could add malicious data to the training set to influence how the model classifies future data. The data fed into the learning algorithm is the training data, and the output of training is the model. It's best to think of the model as a program that contains data plus a procedure for using that data; these models are then built into a variety of products and applications. Training often goes through multiple iterations, and the deployed model receives frequent training updates based on new data. Data poisoning describes an attack on this training process: the attacker alters existing records or adds poisoned samples to the training data before it is fed into the algorithm, resulting in an altered model. More broadly, data poisoning is any attack that provides false or malicious data to a process with the goal of altering its outcomes.
Model Inversion: This attack involves reverse-engineering a machine learning model to extract sensitive information from it. For example, an attacker could use queries to the model to reconstruct training data or extract information about individuals represented in the data.
Model Stealing: In this attack, an attacker tries to replicate a machine learning model by querying it and using the responses to train a new model (a surrogate-model sketch appears after this list). This can be used to steal intellectual property or create a copy of a proprietary model.
Evasion Attacks: Evasion attacks, also known as adversarial perturbations, involve modifying input data to evade detection or classification by a machine learning model. This is often done by adding small, carefully crafted perturbations to the input data.
Data Poisoning: Data poisoning attacks involve manipulating the training data to degrade the model's performance or cause it to behave maliciously. For example, an attacker could introduce incorrect labels into the training data to disrupt the model's learning process (a simple label-flipping sketch appears after this list).
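To illustrate the adversarial-example item above, here is a minimal sketch of a gradient-sign perturbation (the classic FGSM approach) written with PyTorch. It assumes a trained classifier, an input image tensor, and its true label already exist; those names, and the epsilon value, are placeholders rather than anything defined earlier in this chapter.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each pixel slightly in the
    direction that increases the loss, keeping the change subtle."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient, then clamp to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage sketch (model, images, and labels are assumed to exist):
# adv = fgsm_attack(pretrained_model, images, labels)
# print(pretrained_model(adv).argmax(dim=1))  # often differs from the true labels
```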
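The model stealing item can be sketched just as briefly. In the hedged scikit-learn example below, the "victim" is a local model standing in for a remote prediction API; the attacker trains a surrogate using only the victim's predicted labels, never its internals.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in "victim" model; in a real attack this would be a remote API
# the attacker can only query.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X[:1000], y[:1000])

# The attacker generates query inputs, records the victim's answers, and
# trains a surrogate model on those (input, predicted label) pairs.
queries = X[1000:]
stolen_labels = victim.predict(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Agreement between surrogate and victim on the query set.
print("agreement:", (surrogate.predict(queries) == stolen_labels).mean())
```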
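Finally, the label-flipping sketch referenced under data poisoning: a minimal scikit-learn example on a synthetic dataset that flips a fraction of the training labels before fitting and compares the result with a cleanly trained model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
X_train, y_train, X_test, y_test = X[:800], y[:800], X[800:], y[800:]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 20% of the training labels by flipping them before training.
rng = np.random.default_rng(1)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```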
These are just a few examples of adversarial machine learning attacks. As machine learning models become more prevalent and sophisticated, it is important to develop defenses against these types of attacks to ensure the security and reliability of AI systems.