Meta, the parent company of Facebook, Instagram, and WhatsApp, has issued a warning about hackers exploiting interest in generative AI tools like ChatGPT to deceive people into downloading malicious code onto their devices.
In a recent briefing, Meta’s chief information security officer Guy Rosen revealed that security analysts had discovered a wave of malware campaigns masquerading as ChatGPT or similar AI tools. Hackers are known to exploit attention-grabbing developments to bait their traps, tricking unsuspecting individuals into clicking booby-trapped web links or downloading programs that can steal data.
According to Rosen, “generative AI technology has been capturing people’s imagination and everyone’s excitement, making it a prime target for bad actors.” Meta’s security team has identified “threat actors” promoting internet browser extensions that offer generative AI capabilities but instead contain malicious software designed to infect devices.
The company has blocked over a thousand web addresses that promise ChatGPT-like tools but are, in fact, traps set by hackers.
While Meta has yet to see generative AI used by hackers as anything more than bait, Rosen warns that “we should all be very vigilant to stay safe, as generative AI holds great promise and bad actors know it.” Even as the company works on ways to use generative AI to defend against hackers and deceptive online influence campaigns, it is also preparing for the inevitable use of generative AI as a weapon.
According to Meta’s head of security policy Nathaniel Gleicher, they have teams already “thinking through how (generative AI) could be abused, and the defenses we need to put in place to counter that.”
Overall, Meta’s warning underscores the need for caution when downloading any software, and for vigilance in a digital landscape where hackers continually find new and inventive ways to exploit technology and prey on unsuspecting individuals.