Russian hackers use ChatGPT AI to write viruses.



ChatGPT Artificial Intelligence Helps Hackers Write Malicious Code to Steal Personal Data

Russian hackers have managed not only to use ChatGPT’s AI to write malicious code, but also to bypass the geofencing meant to keep them from accessing the platform.

ChatGPT, a chatbot launched by the American company OpenAI to demonstrate advances in artificial intelligence (AI) research, has become known for its interactive, conversational style. Spend a little time with it and the bot will help you write a term paper for college, poems for a loved one, or even a short story.

However, according to Business Insider, Russian hackers have gone further and used the chatbot to write malicious code. ChatGPT is known to be capable of helping write program code, perhaps better than an entry-level programmer. That hackers would try to use it to write malware was certainly foreseen, but how quickly it happened came as a complete surprise to Western experts.

According to a report by Check Point Research (CPR), a cybersecurity company, Russian cybercriminals have been trying to circumvent OpenAI’s restrictions aimed at preventing exactly this kind of use of the technology.

“A conversational chatbot can help you write almost any piece of code, and it has no way of knowing whether your intentions for that code are good or bad,” note Business Insider and the journal Interesting Engineering.
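To illustrate that point, here is a minimal sketch of how code can be requested programmatically through OpenAI’s public Python client. The prompt, model name, and benign request are illustrative assumptions; the reporting does not say the attackers used this exact interface.

    # Minimal sketch: asking the model to generate code via OpenAI's public Python client.
    # Illustrative only; the prompt and model name are assumptions, and the request is benign.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user",
             "content": "Write a Python function that lists every file in a directory tree."},
        ],
    )

    # The model returns code for whatever was asked; it cannot see
    # what the caller intends to do with the result.
    print(response.choices[0].message.content)

The model simply completes the request; any judgment about intent has to come from OpenAI’s usage policies and filters, which is precisely what the forum users discussed bypassing.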

Last month, a thread titled “ChatGPT – Benefits of Malware” appeared on one of the popular underground hacking forums. In it, the author describes in detail how he used the chatbot service to recreate malware strains and techniques that had been described in research publications, Business Insider writes in its report.

Earlier, another member of the forum had boasted of creating his first Python script, which other users found similar to OpenAI’s writing style. The author of the script later confirmed that he had used the OpenAI platform to write the code. This worries cybersecurity experts, since even people with little programming skill can become potentially dangerous by using the chatbot as an assistant.

Source: https://chat.openai.com/chat
