OpenAI: ‘Hackers use ChatGPT to develop malware and deploy phishing attacks’
ChatGPT developer OpenAI has disrupted more than 20 state-linked cyber actors and covert influence operations around the world that attempted to abuse ChatGPT, including efforts to write and refine their own malware.
OpenAI’s latest report ‘Influence and cyber operations: an update’ analyzes how hackers, cybercriminals and other threat actors have used ChatGPT for their criminal endeavors.
OpenAI says its mission is to ensure that artificial intelligence benefits all of humanity. With the US presidential election approaching, the company stresses the need to build “robust, multi-layered defenses against state-linked cyber actors and covert influence operations” in order to disrupt deceptive campaigns on social media and other online platforms.
One of the report’s conclusions is that OpenAI’s AI-powered chatbot has been used to analyze and rework existing malicious software, for example to make it better at evading detection.
But that’s not the only way threat actors use large language models (LLMs). ChatGPT has also been deployed to generate and spread misinformation posted by fake personas. These activities range from writing simple articles to complex, multi-stage efforts to analyze and reply to social media posts.
Threat actors also misuse ChatGPT to conduct spear-phishing attacks.
By studying how malicious actors use ChatGPT, OpenAI has gained several insights. For example, the company was able to improve its AI-powered tools for detecting and dissecting potentially harmful activity.
In addition, OpenAI found that threat actors often use ChatGPT and other AI tools for tasks in a specific phase of their campaigns: after they have gained access to email addresses and social media accounts, but before they deploy ‘finished products’ such as social media posts or malware.
“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” researchers state in the report.
The researchers outline three cases in which ChatGPT was used to attack unsuspecting targets. The first involves SweetSpecter, a China-based cyber threat group that targets Asian governments. The group used a cluster of ChatGPT accounts for scripting and vulnerability-analysis research, and it also targeted OpenAI employees directly with spear-phishing emails carrying malicious ZIP attachments.
The second case involves CyberAv3ngers, a cyber threat group closely affiliated with the Iranian government’s Islamic Revolutionary Guard Corps (IRGC). The group used OpenAI’s AI tools to develop custom Bash and Python scripts, obfuscate code, and learn how to exploit specific vulnerabilities.
The third threat actor OpenAI identified, STORM-0817, is another Iranian hacking group. Its members used ChatGPT to debug malware, create an Instagram scraper, translate LinkedIn profiles into Persian, and develop custom malware for the Android platform. That malware could steal contact lists, call logs, browser history and files stored on Android devices, take screenshots, and retrieve a victim’s exact location.
“As we look to the future, we will continue to work across our intelligence, investigations, security, safety, and policy teams to anticipate how malicious actors may use advanced models for dangerous ends and to plan enforcement steps appropriately. We will continue to share our findings with our internal safety and security teams, communicate lessons to key stakeholders, and partner with our industry peers and the broader research community to stay ahead of risks and strengthen our collective safety and security,” OpenAI says in a statement.