Hackers turn to generative AI to commit crime

Experts have spent most of 2023 warning about the potential dangers of artificial intelligence (AI). From the risk of global nuclear war to fake news influencing elections, there have been plenty of horror predictions.

But as always, cybercriminals have been the more immediate threat – and once again, too little attention is being paid to their activities.

Abusing deepfakes

Deepfakes, images that have been doctored by artificial intelligence systems, have been a concern for some time – mostly in terms of celebrities being grafted into pornographic movies. Other, less serious instances include the famous image of Pope Francis in a puffer jacket, which was designed as entertainment.

However, cybercriminals are using the exact same image tools and techniques to try to break into secure IT systems. Researchers have noticed a massive uptick in incidents where hackers use image mapping tools to defeat facial recognition systems, for instance.

The process is incredibly simple:

  1. Locate a photo of their victim (Instagram and Facebook are easy places to find suitable images).
  2. Use a smartphone ‘face swap’ app to map the victim’s face onto their own.
  3. Attempt to log into a protected system by showing the camera a video of the mapped face.

Some good news… for now

Although these attacks are becoming more frequent, the majority of attempts fail because the quality of the mapped image is poor. High-resolution cameras and processing algorithms quickly identify that something is wrong and deny access to the account. Because of the ease and minimal cost of face-swap apps, some researchers are referring to this type of attack as “cheapfakes”.

But just because the vast majority of these attacks fail does not mean they can be ignored. Generative AI systems continue to improve – as does the quality of the images they produce. It is quite possible that images will eventually improve to the point that they can defeat biometric security.

To address these risks, businesses will need to get smarter too. Rather than simply verifying biometric data, systems will have to check additional verification signals. These could take the form of device fingerprinting, geolocation checks, or additional login requirements such as multi-factor authentication (MFA).
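As an illustration, signals like these can be combined into a simple layered policy. The sketch below is hypothetical – the `LoginSignal` fields, the threshold, and the two-signal rule are assumptions for demonstration, not any particular vendor's API. The idea is that a strong face match alone is never enough; corroborating evidence must also pass.

```python
from dataclasses import dataclass

@dataclass
class LoginSignal:
    """Hypothetical bundle of verification signals for one login attempt."""
    biometric_score: float   # 0.0-1.0 confidence from the face-match engine
    known_device: bool       # has this device fingerprint been seen before?
    geo_plausible: bool      # is the location consistent with recent logins?
    mfa_passed: bool         # was a one-time MFA code verified?

def allow_login(s: LoginSignal, bio_threshold: float = 0.95) -> bool:
    """Require a strong face match PLUS at least two corroborating signals."""
    if s.biometric_score < bio_threshold:
        return False
    corroboration = sum([s.known_device, s.geo_plausible, s.mfa_passed])
    return corroboration >= 2
```

With a policy like this, a cheapfake that somehow fools the camera from an unknown device in an implausible location would still be rejected.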

Be aware – and be prepared

Biometric security is generally very secure and can normally be trusted. But when it comes to security, more is better. You can reduce the risk of falling victim to deepfakers and cheapfakers by enabling additional security on your digital accounts. 

If you have the option for 2FA / MFA or to use an authenticator app – take it. This additional identity verification makes it harder for criminals to break in – and most will simply shift their attention to attacking a less well-protected account instead.

The post Hackers turn to generative AI to commit crime appeared first on Panda Security Mediacenter.
