Tuesday, November 7, 2023

Offensive and Defensive AI: Let’s Chat(GPT) About It

Nov 07, 2023 | The Hacker News | Artificial Intelligence / Data Security

ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game.

ChatGPT is the fastest-growing consumer application to date. The extremely popular generative AI chatbot can generate human-like, coherent and contextually relevant responses. This makes it very valuable for applications like content creation, coding, education, customer support, and even personal assistance.

However, ChatGPT also comes with security risks. ChatGPT can be used for data exfiltration, spreading misinformation, developing cyber attacks and writing phishing emails. On the flip side, it can help defenders who can use it for identifying vulnerabilities and learning about various defenses.

In this article, we show numerous ways attackers can exploit ChatGPT and the OpenAI Playground. Just as importantly, we show ways that defenders can leverage ChatGPT to enhance their security posture.

The Threat Actor - Hacking Made Easy

ChatGPT makes it easier for people looking to enter the world of cybercrime. Here are a few ways it can be used for system exploitation:

  • Finding Vulnerabilities - Attackers can prompt ChatGPT about potential vulnerabilities in websites, systems, APIs, and other network components.

    According to Etay Maor, Senior Director of Security Strategy at Cato Networks, "There are guardrails in ChatGPT and the Playground to prevent them from giving answers that support doing something bad or evil. But, 'social engineering' the AI enables finding a way around that wall."

    For example, this can be done by impersonating a penetration tester and asking how to test a website's input field for vulnerabilities. ChatGPT's response will include a list of website exploitation methods, such as input validation testing, XSS testing, SQL injection testing, and more.

  • Exploiting Existing Vulnerabilities - ChatGPT can also provide attackers with the technical information they need to exploit an existing vulnerability. For example, a threat actor could ask ChatGPT how to test a known SQL injection vulnerability in a website field. ChatGPT will respond with input examples that will trigger the vulnerability.
  • Using Mimikatz - Threat actors can prompt ChatGPT to write code that downloads and runs Mimikatz.
  • Writing Phishing Emails - ChatGPT can be prompted to create authentic-looking phishing emails across a wide variety of languages and writing styles. In the example below, the prompt requests that the email is written to sound like it's coming from a CEO.
  • Identifying Confidential Files - ChatGPT can help attackers identify files with confidential data.

In the example below, ChatGPT is prompted to write a Python script that searches for Doc and PDF files that contain the word "confidential," copy them into a random folder and transfer them. While the code is not perfect, it is a good start for a person who wants to develop this capability. Prompts could also be more sophisticated and include encryption, creating a Bitcoin wallet for the ransom money, and more.
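The script described above is straightforward to reconstruct. Below is a minimal, hypothetical sketch (the function names are our own, not from the article) that finds .doc and .pdf files containing the keyword and copies them into a folder; the "transfer" (exfiltration) step is deliberately omitted. Note that defenders can use the very same pattern to audit where confidential files live on a system.

```python
import shutil
from pathlib import Path

def find_sensitive_files(root, keyword=b"confidential"):
    """Yield .doc/.pdf files under root whose raw bytes contain keyword.

    A naive byte scan like this misses text inside compressed PDF streams
    or zipped .docx archives; real tooling would use a parsing library.
    """
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in {".doc", ".pdf"}:
            if keyword in path.read_bytes().lower():
                yield path

def copy_matches(root, dest):
    """Copy every matching file into dest; return the number copied."""
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    matches = list(find_sensitive_files(root))  # materialize before copying
    for path in matches:
        shutil.copy2(path, dest_dir / path.name)
    return len(matches)
```

As the article notes, code of this quality is "not perfect" but is a realistic starting point, and the same scan doubles as a simple data-discovery audit for defenders.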

The Defender - Defending Made Easy

ChatGPT can and should also be used to enhance defender capabilities. According to Etay Maor, "ChatGPT also lowers the bar, in a good sense, for Defenders and for people who want to get into security." Here are a number of ways professionals can improve their security expertise and capabilities.

  • Learning New Terms and Technologies - ChatGPT can shorten the time it takes to research and learn new terms, technologies, processes and methodologies. It provides immediate, accurate and concise answers to security-related questions.

In the example below, ChatGPT explains what a specific Snort rule does.
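For illustration, this is the kind of Snort rule one might paste into ChatGPT for a plain-language explanation (a hypothetical example rule, not the one from the article):

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"Potential SSH brute force"; \
    flow:to_server,established; \
    threshold:type both, track by_src, count 5, seconds 60; \
    classtype:attempted-admin; sid:1000001; rev:1;)
```

ChatGPT can unpack each option (the flow direction, the per-source threshold, the classification) far faster than searching the Snort manual.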

  • Summarizing Security Reports - ChatGPT can help summarize breach reports, helping analysts learn about how attacks were performed so they can prevent them from recurring in the future.
  • Deciphering Attacker Code - Analysts can upload attacker code to ChatGPT and get an explanation of the steps taken and the executed payload.
  • Predicting Attack Paths - ChatGPT can predict the probable next stages of an ongoing attack by analyzing similar past cyber attacks and the techniques they used.
  • Researching Threat Actors and Attack Paths - ChatGPT can produce a report that profiles a threat actor, including their recent attacks, technical data, mapping to frameworks, and more. In this example, a detailed, technical report is provided about the ALPHV Ransomware group.
  • Identifying Code Vulnerabilities - Engineers can paste code in ChatGPT and prompt it to identify any vulnerabilities. ChatGPT can even identify vulnerabilities when there is no bug, only a logical error. Be wary of the code you upload. If it contains proprietary data you may be exposing it externally.
  • Identifying Suspicious Activities in Logs - ChatGPT can review log activity and flag suspicious events.
  • Identifying Vulnerable Web Pages - Web developers or security professionals can prompt ChatGPT to review a website's HTML code and identify vulnerabilities that would enable SQL injections, CSRF attacks, XSS attacks, or DDoS attacks.
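As a concrete illustration of the SQL injection class mentioned above, the sketch below (our own minimal example, not from the article) shows the difference between a vulnerable string-built query and a parameterized one, using Python's built-in sqlite3 module. The classic test input `' OR '1'='1` that ChatGPT might suggest makes the vulnerable version return every row:

```python
import sqlite3

# In-memory demo database with two users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s1"), ("bob", "s2")])

def lookup_vulnerable(name):
    # BAD: user input is concatenated straight into the SQL string,
    # so input like ' OR '1'='1 rewrites the query logic.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return [row[0] for row in conn.execute(query)]

def lookup_safe(name):
    # GOOD: parameterized query; input is treated as data, never as SQL.
    return [row[0] for row in
            conn.execute("SELECT name FROM users WHERE name = ?", (name,))]
```

Running `lookup_vulnerable("' OR '1'='1")` returns both users, while the parameterized version returns none — exactly the kind of flaw ChatGPT can spot when reviewing code or page logic.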

Additional Considerations When Using ChatGPT

When using ChatGPT, it's important to keep the following factors in mind:

  • Copyrights - Who owns the generated content? When asked, ChatGPT answers that the person who wrote the prompt owns it. However, it is not that simple: the issue is still unresolved and will depend on different legal systems and precedents, and a body of law is only now emerging around it.
  • Data retention - OpenAI may retain some of the data used as prompts for training or other research purposes. That's why it's important to exercise caution and avoid pasting any sensitive data into the application.
  • Privacy - There are privacy issues surrounding ChatGPT, ranging from how it uses the data it is being prompted with to how it stores user interactions. Therefore, it's recommended to avoid entering PII or customer data into the application.
  • Bias - ChatGPT is subject to bias. For example, when asked to rate groups based on intelligence, it placed certain ethnicities above others. Using its responses blindly could have significant consequences for individuals, for example if it is used to guide decision-making in courts, police profiling, recruitment processes, and more.
  • Accuracy - It's important to verify ChatGPT's results, since they are not always accurate (i.e., 'hallucinations'). In the example below, ChatGPT was prompted to write a list of five-letter words starting with B and ending with KE. One of the answers was "Bike".
  • AI vs. AI - Currently, ChatGPT cannot identify whether a given text was written by AI. Future versions might be able to, which could help with security efforts — for example, by identifying AI-generated phishing emails.
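The accuracy point above is easy to check mechanically. A small helper (our own sketch, not from the article) that validates candidates against the prompt's constraint shows why "Bike" is a hallucination:

```python
def satisfies_constraint(word):
    """True if word has exactly five letters, starts with 'b', ends with 'ke'."""
    w = word.lower()
    return len(w) == 5 and w.startswith("b") and w.endswith("ke")

# "Bike" has only four letters, so it fails the constraint,
# while a word like "Brake" satisfies it.
```

This is the general lesson: whenever a ChatGPT answer can be verified programmatically, it should be.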

Etay summarizes, "We can't stop progress, but we do need to teach people how to use these tools."

To learn more about how security professionals can make the most of ChatGPT, watch the entire masterclass here.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.


