In a chilling twist to the promise of AI, cybercriminals are now selling ‘evil’ language models such as FraudGPT And WormGPT on darknet forums for as little as $100. These tools encourage fraud and phishing, turning everyday hackers into sophisticated threats. As generative AI sweeps our digital world, experts warn that rogues can easily make a profit, from data theft to defense-evading ransomware. This boom in AI-driven cybercrime requires urgent action before economies like India’s are paralyzed.
FraudGPT and WormGPT: Vibe Hacking ushers in a new era of easy attacks
Hackers are embracing ‘vibe hacking’: they create simple cues to hijack AI models and launch devastating attacks. Forget elite coders; anyone can now trick tools like Claude Code from Anthropic to steal personal data. The consequences? Criminals have extorted nearly $500,000 from each of the 17 organizations in recent breaches, proving that these threats are hitting hard and fast.
FraudGPT And WormGPT leads this platoon. They are built for cyber fraud, spewing out phishing emails, toxic content and malicious code that slips past safety nets. Fast injection, a sneaky tactic, fools models into spreading secrets or encrypting files. Researchers are highlighting tools such as PromptLockan AI agent that automatically scans, copies and locks data without human adjustments.
This shift is destroying the barriers to cybercrime. “Attackers easily use mainstream AI to create phishing scams or hide malware,” said Huzefa Motiwala, senior director at Palo Alto Networks. No PhD required; just a $100 buy-in transforms beginners into pros, scaling attacks globally.
The main risks of these malicious LLMs include:
Also read: SEO Explained: A Step-by-Step Guide to Search Engine Optimization
- Quick phishing campaigns: Generate convincing scam messages in seconds.
- Data exfiltration: Bypass the guards to leak sensitive information.
- Malware obfuscation: Write code that bypasses antivirus scans.
- Ransomware automation: Encrypt files and request payouts autonomously.
India faces imminent danger due to its booming digital economy and AI push. “Generative AI it turns against us too easily,” notes a top analyst. Former police officers and security professionals are calling it a national security bombshell, urging regulators, companies and developers to forge ironclad defenses.
Ransomware-as-a-service is already reshaping crime organizations; now AI fraud adds rocket fuel. What starts as cheap purchases on the darknet can cause global chaos. The solution? Work together now to defeat these digital demons, or watch as the lines between innovation and invasion blur.
More news to read: Meta’s AI parental controls for Instagram teens
Meta AI photo suggestion tool boosts creativity on FB
#Crime #Tools #FraudGPT #WormGPT #sold


