
OpenAI's ChatGPT Is Being Used by Threat Actors to Carry Out Malicious Operations


The latest report by OpenAI confirms that ChatGPT is being used by threat actors from China and Iran in their operations. Proofpoint reported the first signs of this activity in April, and since then OpenAI has disrupted more than 20 operations that were abusing ChatGPT in support of different cyber attacks.

OpenAI first outlined SweetSpecter, a Chinese adversary targeting Asian governments. SweetSpecter also targeted OpenAI directly, sending phishing emails with malicious ZIP attachments while spoofing the email addresses of OpenAI employees. If opened, the attachments would install the SugarGh0st RAT on the victim's computer.

Further investigation found that SweetSpecter was using a cluster of ChatGPT accounts for scripting and vulnerability analysis. The threat actors asked ChatGPT for information on specific CVE numbers, how internet-wide scanners are built, ways to exploit infrastructure, help with debugging code, and more.

The second group, known as CyberAv3ngers, is tied to Iran's Islamic Revolutionary Guard Corps and targets industrial control systems in Western countries. These threat actors asked ChatGPT questions about Python scripts, Programmable Logic Controllers (PLCs), and writing custom bash scripts. The third group, Storm-0817, was also Iranian-based. It used ChatGPT to create Instagram scrapers, debug malware, and translate LinkedIn profiles into Persian.

OpenAI banned all of the accounts created by these threat actors and shared the relevant information with its cybersecurity partners. The findings show that AI tools can be used to support offensive operations at every stage, from planning to execution.

Image: DIW-Aigen
