Meta has blocked more than a thousand unique ChatGPT-themed malware links since March


Since March, Meta has identified ten “malware families” and blocked more than a thousand unique malware links on its platforms that use ChatGPT and similar AI tools as lures, the company said in a report.

The malware families masqueraded as, among other things, ChatGPT browser extensions and mobile apps in official app stores, Meta reveals in its quarterly security report. In some cases they offered real ChatGPT functionality alongside the malware, according to the tech giant’s research team. Meta’s chief information security officer, Guy Rosen, said during a press conference about the report that “ChatGPT is the new crypto” for malicious parties, and that he expects the abuse of generative AI to increase significantly in the near future, Reuters writes. Meta has removed the links posted on its platforms and reported the browser extensions and apps it found to the administrators of the app stores where they were distributed.

According to Meta, businesses in particular often fall victim to such malware. Malicious actors initially target the personal accounts of people affiliated with a company in order to gain access to the company’s account. For this reason, Meta says it will release dedicated ‘Meta Work’ accounts later this year. These make it possible to log in to business accounts and use Facebook’s Business Manager tools without needing a personal account, so a business account cannot be taken over if a personal account is hacked.

In addition, the company is working on a tool that helps users recognize malware and guides them step-by-step through the malware removal process. Third-party antivirus tools are also recommended, Meta says. The tool will be standalone, so it can also be used outside Meta’s platforms.

Image: an example of malware offered in the official Chrome Web Store under the guise of a ChatGPT browser extension.
