Although OpenAI, the maker of the product, has already put security measures in place to prevent misuse, these have not curbed hacking activity.

Security researchers at Check Point Research (CPR), the research division of Check Point® Software Technologies Ltd., report that cybercriminals are now selling low-cost tools that allow other hackers to bypass ChatGPT's security restrictions. There is even a publicly available script on the dark web that lets almost anyone circumvent these controls.

As early as March 2023, new malware families were discovered that use ChatGPT's name to deceive users. The scams are often mobile applications or browser extensions that imitate ChatGPT tools. In some cases, the fake tools even offer some ChatGPT features. However, their real goal is to steal the user's login credentials.

Hackers use ChatGPT to create malware

It is no longer a secret that ChatGPT is used to create malware, including infostealers that target Microsoft Office documents, PDFs, and image files, as well as malicious Python scripts that perform cryptographic operations (in other words, encryption tools).

Although the strength and functionality of the malware created via ChatGPT are questionable, the prospect that ChatGPT could improve its capabilities through learning, and thereby help generate even more potent malware, is a real concern. ChatGPT is also being exploited for cryptocurrency fraud.

Read the full article HERE.

