Crypto Scammers Use ChatGPT to Unleash Botnet

A botnet that uses ChatGPT, an advanced AI language model created by OpenAI, has been exposed by researchers at Indiana University Bloomington. The botnet, which operates on X, the social platform that used to be called Twitter, aims to trick users into clicking on links that lead to fraudulent cryptocurrency websites.

AI Botnet Powered by ChatGPT

Cryptocurrency scams are nothing new, but artificial intelligence has made them more sophisticated. The Indiana University Bloomington investigation found that the botnet used ChatGPT-generated content to promote cryptocurrency scams at scale on the platform.

Fox8 Botnet

The botnet, named Fox8 by the researchers, consisted of 1,140 accounts that used ChatGPT to generate and post content related to cryptocurrency, as well as to interact with each other’s posts. The goal of the botnet was to lure unsuspecting users into clicking on links that led to websites that hyped up various cryptocurrencies and offered dubious investment opportunities.

The researchers discovered the botnet by searching for a specific phrase, "As an AI language model…", which ChatGPT sometimes emits in response to certain prompts. The phrase was a giveaway that the accounts were not run by humans but were automated bots posting ChatGPT output. The researchers then manually analyzed the accounts and confirmed that they were part of a coordinated campaign.
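The detection approach described above amounts to a simple keyword heuristic. The sketch below illustrates the idea in Python; the helper function and sample posts are hypothetical, and only the first phrase in the list is the one reported by the researchers.

```python
# Tell-tale phrases that ChatGPT sometimes emits when it refuses or
# qualifies a request; their presence in a public post suggests the
# account is pasting raw model output. Only the first phrase comes from
# the researchers' report; the second is an illustrative addition.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill that request",
]

def looks_like_llm_output(post_text: str) -> bool:
    """Return True if the post contains a known self-disclosure phrase."""
    lowered = post_text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

# Hypothetical sample posts, mimicking the kind of content the botnet produced.
posts = [
    "As an AI language model, I cannot recommend investments, but $XYZ looks great!",
    "Just bought more $BTC, feeling bullish today.",
]

flagged = [p for p in posts if looks_like_llm_output(p)]
```

A heuristic like this only catches careless bots that leave the boilerplate in place; accounts that strip such phrases would evade it, which is why the researchers combined it with manual analysis.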

AI Can be Harnessed for Disinformation and Fraud

The use of ChatGPT for the botnet shows how easily and effectively AI can be harnessed for disinformation and fraud. ChatGPT can generate fluent, coherent text in response to nearly any prompt, making its output hard to distinguish from human-written text. It can also produce misleading or false information and exhibit social biases. OpenAI's usage policies explicitly prohibit the use of its models for scams and disinformation, but that does not stop malicious actors from exploiting them.

The researchers warn that the Fox8 botnet may be just the tip of the iceberg: other botnets using ChatGPT or similar AI models may be operating undetected. They also stress the challenge of identifying and combating such botnets, which can evade detection and game recommendation algorithms to spread their messages more effectively. They suggest that platforms like X should adopt more robust measures to verify the identity and authenticity of their users, and that users should be more vigilant and critical when encountering online content.

