
Revolutionizing Cybersecurity: Leveraging AI for Enhanced Penetration Testing and Defense Strategies with Nth Generation

By: Jeromie Jackson, Director of Security & Analytics at Nth Generation 

AI is enabling hackers to manipulate both technology and humans. As Director of Security & Analytics at Nth Generation, my role encompasses spearheading red team activities and leveraging cutting-edge technologies, including Artificial Intelligence (AI), to enhance our penetration testing services and, ultimately, to fortify your security posture. We are equally focused on using AI to close the resource and cybersecurity talent gaps that many organizations face.

The cyber threat landscape has shifted. Gone are the days when defense focused primarily on a single attack vector; organizations must now protect multiple, complex attack vectors against highly sophisticated adversaries. These advanced threats begin with deep reconnaissance and exploit relationships with third parties and existing social connections. Many attackers study influence tactics such as framing, obligation, scarcity, neuro-linguistic programming, and more. Phishing, smishing, and vishing are rarely isolated incidents; they are parts of a well-orchestrated, targeted attack.

Today, the initial attack vector is very often the human, since organizations have improved at securing their network perimeters. Instead of studying Python or other programming languages, many attackers focus on hacking the human subconscious. Below are several of the tactics being seen in the wild.

Authority: The attacker poses as a figure of authority. An executive, a government official, or an IT support staff member are common ruses.

Concession: An attacker may open with a large request. Once that request is denied, the victim is often more open to, or at least more likely to respond to, a smaller follow-up request.

Liking: Attackers may attempt to build rapport based on information they’ve learned about the target, such as their high school, places they have lived, sports interests, or hobbies.

Obligation: After doing something seemingly helpful for the victim, the attacker might ask for a favor in return. 

Reciprocity: Like obligation, reciprocity involves the attacker offering something of value first. The victim then feels compelled to reciprocate the gesture. 

Scarcity: Fear of Missing Out (FOMO). The attacker makes the victim believe they will miss out on something if they do not respond to the attacker’s request.

Social Proof: By citing testimonials or invoking majority influence, attackers may try to sway the victim into a herd mentality.

AI also plays a core role in Nth Generation's penetration testing strategy, enabling us to simulate and predict attacker behavior with high precision. We integrate AI with our testing framework to process vast amounts of data from vulnerability scans, threat intelligence feeds, and formal industry reports. This integration lets us identify and prioritize vulnerabilities based on real-world threat intelligence, so we can focus on the assets most likely to be targeted by an attacker. This AI-driven methodology allows us to carry out far more effective social engineering campaigns and penetration tests.
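The idea of weighting raw scanner severity by real-world threat intelligence can be illustrated with a small sketch. This is a hypothetical example, not Nth Generation's actual tooling; the field names (`actively_exploited`, `internet_facing`) and weights are illustrative assumptions standing in for signals a real threat intelligence feed would supply.

```python
# Hypothetical sketch: risk-based vulnerability prioritization that
# combines scanner severity (CVSS) with threat-intel context.
# Field names and weight values are illustrative, not a real feed's schema.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float               # base severity, 0.0-10.0
    actively_exploited: bool  # e.g., listed as exploited in the wild
    internet_facing: bool     # asset exposure from the scan inventory

def risk_score(f: Finding) -> float:
    """Weight raw severity by real-world threat context."""
    score = f.cvss
    if f.actively_exploited:
        score *= 1.5  # exploited-in-the-wild findings jump the queue
    if f.internet_facing:
        score *= 1.2  # exposed assets are more likely to be hit first
    return round(score, 2)

def prioritize(findings: list[Finding]) -> list[Finding]:
    # Highest contextual risk first, regardless of raw CVSS order.
    return sorted(findings, key=risk_score, reverse=True)

findings = [
    Finding("CVE-A", cvss=9.8, actively_exploited=False, internet_facing=False),
    Finding("CVE-B", cvss=7.5, actively_exploited=True, internet_facing=True),
]
for f in prioritize(findings):
    print(f.cve_id, risk_score(f))
```

Note how the medium-severity but actively exploited, internet-facing finding outranks the higher raw CVSS score; that reordering is the essence of threat-informed prioritization.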

While much of the conversation centers on using AI to improve phishing scams, AI brings a great deal more to the table. Voice cloning is a tactic used to gain the trust of an unsuspecting target. With less than 15 seconds of audio, it is relatively fast and easy to clone anyone’s voice; a personalized voicemail greeting or a short YouTube clip is plenty to create a convincing clone. These make for great fun during a Red Team engagement, but they are a genuinely significant risk that organizations should have on their radar. One tell: cloned voices typically exhibit a slight pause before responding, especially to an unexpected question. That pause should be a red flag for anyone receiving calls, particularly help desk personnel.

Video deepfakes are also becoming more realistic and harder to identify. Recent news included deepfakes of Taylor Swift. Earlier this year, a finance worker paid out $25 million after a video call with a deepfake of what appeared to be the company’s Chief Financial Officer. Martin Lewis, a financial journalist, was also recently the subject of a deepfake promoting Quantum AI investments.* These are far harder to detect than the early AI-generated video of Will Smith eating pasta.**


The Open Web Application Security Project (OWASP) has published a list of the Top 10 Machine Learning Security Risks, focused on attack vectors against machine learning systems. Although still in draft, it is a solid step toward getting our arms around these risks, and it is available on the OWASP website.


Along with AI being used to attack and defend organizations, AI itself is being targeted. MITRE has created a knowledge base of AI attacks named MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems). The framework is very similar to MITRE ATT&CK, but focuses on attacks against AI systems themselves. Our consultants are helping organizations develop AI Acceptable Use Policies (AUPs) and procedures, as well as assess potential sensitive data exposure. This is a crucial step to take before deploying a Generative AI Large Language Model (LLM), such as Microsoft Copilot.


Our team of security experts constantly strives to stay current with evolving AI applications and to integrate them into our penetration testing and other security posture testing services. At Nth Generation, we are not just testers; we are innovators. Our AI-powered penetration testing services are designed to give our clients the highest level of security assurance and preparedness for the challenges of tomorrow’s cybersecurity landscape.



