Saturday, June 15, 2024

AI, both offensive and defensive: let’s talk about it (GPT)

The fastest-growing consumer application to date is ChatGPT. The wildly popular generative AI chatbot can produce responses that are coherent, contextually relevant, and human-like. For applications like content creation, coding, education, customer support, and even personal assistance, this makes it extremely valuable.

There are security risks associated with ChatGPT, though. It can be used to craft cyberattacks, spread disinformation, write phishing emails, and support data exfiltration. Conversely, it can benefit defenders by helping them explore defenses and identify weaknesses.

We demonstrate multiple ways for attackers to take advantage of ChatGPT and the OpenAI Playground in this article. Furthermore, we demonstrate how defenders can use ChatGPT to improve their security posture.

The Threat Actor: An Easy Way to Hack

ChatGPT makes it simpler for those wishing to get into cybercrime. It can be abused to help compromise systems in the following ways:

Finding Vulnerabilities: Attackers can prompt ChatGPT to identify potential weaknesses in systems, websites, APIs, and other network components.
As Etay Maor, Senior Security Strategy Director at Cato Networks, puts it:

“Both ChatGPT and the Playground have filters in place to stop the AI from providing answers that encourage doing evil or bad things. However, ‘social engineering’ the AI makes it possible to get past that barrier.”

One way to do this is to pose as a pen tester and ask how to check a website’s input field for vulnerabilities. ChatGPT’s response will include a list of website exploitation techniques, such as input validation testing, XSS testing, SQL injection testing, and more.
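To make that concrete, here is a minimal, hypothetical sketch of the error-based SQL injection check such a response might outline. The `send_request` callback, the error-signature list, and the single-quote probe are all illustrative assumptions, and probing should only ever be run against systems you are authorized to test:

```python
# Sketch of error-based SQL injection detection (authorized testing only).
# Idea: submit a lone single quote and look for database error strings in the
# response. Real scanners use many payloads plus boolean- and time-based checks.

SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # SQL Server
    "sqlite3.operationalerror",               # SQLite
    "pg::syntaxerror",                        # PostgreSQL
]

def looks_sql_injectable(response_body: str) -> bool:
    """Return True if the response text contains a known SQL error signature."""
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)

def probe(send_request, param: str) -> bool:
    """`send_request` is a caller-supplied function: {param: value} -> body text.
    Flag the field only if the quote probe errors while the baseline does not."""
    baseline = send_request({param: "test"})
    probed = send_request({param: "test'"})  # classic single-quote probe
    return looks_sql_injectable(probed) and not looks_sql_injectable(baseline)
```

Comparing against a clean baseline request keeps the check from flagging pages that always display error text.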

Exploiting Current Vulnerabilities: ChatGPT can also give attackers the technical know-how they require to take advantage of a current vulnerability. A threat actor might, for instance, ask ChatGPT how to test a field on a website that has a known SQL injection vulnerability.

Running Mimikatz: Malicious actors can instruct ChatGPT to write code that downloads and launches Mimikatz.
Composing Phishing Emails: ChatGPT can be used to generate phishing emails that appear real in a number of languages and writing styles. The prompt in the example below asks that the email be formatted to appear as though it is from a CEO.
Confidential File Identification: ChatGPT can assist attackers in locating files containing sensitive information.

In the example below, ChatGPT is prompted to write a Python script that looks for DOC and PDF files containing the word “confidential,” copies them into a random folder, and transfers them out. Even though the code isn’t flawless, it’s a decent starting point for someone looking to develop this capability. More advanced prompts might ask for encryption, setting up a Bitcoin wallet to hold the ransom money, and more.
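As a rough illustration of what such a prompt produces, the sketch below implements only the benign search-and-copy stage, which is the same logic a defender might use for a data-discovery sweep. The extensions and keyword are assumptions, the byte-level keyword match is a simplification (real DOCX/PDF content is compressed or encoded and needs proper text extraction), and the “transfer” step is deliberately omitted:

```python
import shutil
import tempfile
from pathlib import Path

def find_and_copy(root: str, keyword: bytes = b"confidential") -> list[str]:
    """Walk `root`, flag .doc/.docx/.pdf files whose raw bytes contain
    `keyword` (case-insensitive), and copy them into a fresh temporary
    staging folder. Returns the paths of the copies."""
    staging = Path(tempfile.mkdtemp(prefix="staging_"))
    copied = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in {".doc", ".docx", ".pdf"}:
            if keyword in path.read_bytes().lower():
                dest = staging / path.name
                shutil.copy2(path, dest)   # copy2 preserves timestamps
                copied.append(str(dest))
    return copied
```

From a defensive angle, the same scan run on a file server shows exactly which documents an intruder’s script would sweep up.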

The Defender: Defense Made Simpler

It is also possible and desirable to use ChatGPT to improve defense capabilities. In the words of Etay Maor, “ChatGPT also lowers the bar, in a good sense, for Defenders and for people who want to get into security.” Professionals can enhance their security knowledge and skills in a number of ways.

Learning New Terms and Technologies: ChatGPT can speed up researching and learning new terms, technologies, procedures, and approaches. It offers prompt, precise, and succinct answers to security questions.
In the example below, ChatGPT explains what a specific Snort rule does.
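To illustrate the kind of breakdown an analyst might ask for, here is a small sketch that splits a Snort rule into its header fields and options, roughly the structure ChatGPT walks through when explaining one. The parser is naive and assumes a simple, well-formed rule (it would mis-split option values containing semicolons):

```python
def parse_snort_rule(rule: str) -> dict:
    """Split a Snort rule into its header and option keywords.
    Header format: action proto src_ip src_port direction dst_ip dst_port,
    followed by a parenthesized, semicolon-separated option list."""
    header, _, options = rule.partition("(")
    action, proto, src_ip, src_port, direction, dst_ip, dst_port = header.split()
    opts = {}
    for opt in options.rstrip(") ").split(";"):
        if opt.strip():
            key, _, val = opt.strip().partition(":")
            opts[key] = val.strip().strip('"')
    return {"action": action, "proto": proto,
            "src": f"{src_ip}:{src_port}", "direction": direction,
            "dst": f"{dst_ip}:{dst_port}", "options": opts}
```

Feeding it a rule like `alert tcp any any -> 192.168.1.0/24 80 (msg:"WEB attack"; sid:1000001;)` separates the action, addresses, ports, and an options map, which is essentially the explanation an analyst wants in prose.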

Condensing Security Reports: ChatGPT can help condense breach reports, giving analysts insight into the methods used in an attack so they can prevent it from recurring.
Interpreting Attacker Code: By uploading an attacker code to ChatGPT, analysts can receive a detailed explanation of the actions taken and the payload that was executed.
Predicting Attack Paths: By examining comparable historical cyberattacks and the tactics they employed, ChatGPT is able to forecast the likely future paths of an attack.
Investigating Threat Actors and Attack Paths: ChatGPT can generate a report mapping a threat actor’s technical details, recent attacks, framework mappings, and other information. In this example, it produces an extensive technical report about the ALPHV ransomware group.

Finding Vulnerabilities in Code: Engineers can paste code into ChatGPT and ask it to look for vulnerabilities. It can detect flaws even when the issue is a logic error rather than an outright bug. Take caution when uploading code, though: if it includes proprietary data, you might be exposing that data externally.
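The contrast below shows the kind of flaw such a review surfaces, using a hypothetical pair of queries against Python’s built-in sqlite3 module: one built by string concatenation and injectable, one parameterized and safe:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: concatenation lets input like "x' OR '1'='1" rewrite the query.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'").fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()
```

Both functions behave identically on honest input; only a hostile payload tells them apart, which is exactly why this class of bug survives casual review.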

Finding Suspicious Activity in Logs: ChatGPT can examine activity logs and flag unusual behavior.
Finding Vulnerable Web Pages: Web developers and security experts can ask ChatGPT to examine the HTML code of a website in order to find any weaknesses that could lead to DDoS, CSRF, XSS, or SQL injection attacks.
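As one concrete illustration of such a static check, the sketch below uses Python’s standard html.parser to flag POST forms that lack a hidden CSRF-token field. The token-naming heuristic is an assumption; a real review would go much deeper than markup inspection:

```python
from html.parser import HTMLParser

class CsrfFormChecker(HTMLParser):
    """Count POST forms containing no hidden input whose name mentions
    'csrf' or 'token' -- a rough static heuristic, not a real scanner."""

    def __init__(self):
        super().__init__()
        self.in_post_form = False
        self.form_has_token = False
        self.unprotected_forms = 0

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form" and (a.get("method") or "get").lower() == "post":
            self.in_post_form = True
            self.form_has_token = False
        elif tag == "input" and self.in_post_form:
            name = (a.get("name") or "").lower()
            if a.get("type") == "hidden" and ("csrf" in name or "token" in name):
                self.form_has_token = True

    def handle_endtag(self, tag):
        if tag == "form" and self.in_post_form:
            if not self.form_has_token:
                self.unprotected_forms += 1
            self.in_post_form = False
```

GET forms are skipped because they should not perform state-changing actions in the first place.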

Extra Things to Think About When Using ChatGPT

It’s critical to recognize the following elements when utilizing ChatGPT:

Copyright: Who owns the generated content? According to ChatGPT’s own response, the person who wrote the prompt owns it, but it isn’t really that simple. This is still an open issue that will depend on legal precedents and jurisdictions; a body of law on the matter is presently developing.
Data retention: OpenAI may retain some prompt data for training or other research. Because of this, it’s crucial to use caution and refrain from pasting any sensitive data into the application.
Privacy: ChatGPT raises concerns about privacy in a number of areas, including how it handles data it receives and how it keeps track of user interactions.

Bias: ChatGPT is prone to bias. For instance, when asked to rank groups by IQ, it prioritized some ethnic groups over others. Taking its answers at face value could have serious repercussions, for instance if they inform police profiling, hiring procedures, or court decisions.
Accuracy: ChatGPT’s results aren’t always accurate (so-called “hallucinations”), so it’s important to double-check them. The example below asks ChatGPT to generate a list of five-letter words that begin with B and end with KE; “bike” was one of the answers, even though it has only four letters.

AI versus AI: At the moment, ChatGPT cannot tell whether a given text was written by an AI. Newer versions may be able to do so in the future, which could aid security initiatives, for instance in spotting phishing emails.

According to Etay, “We can’t stop progress, but we do need to teach people how to use these tools.”
