GenAI against humanity: nefarious applications of generative artificial intelligence and large language models



Pretend for a moment that you are Tom Cruise and that, on a day like any other (back in 2021), you tap into your social media feed only to see videos of yourself playing golf and pratfalling around your home (see Table 1C). What would your reaction be if you never actually recorded and posted those videos? The malicious use of technological advancements is hardly news: every powerful new technology invites abuse. The problem of tampered footage or photoshopped multimedia is not new, but GenAI and deepfake technologies have brought about a wealth of new challenges [22].

The ability to create deepfakes, provide plausible deniability, and spread subliminal messages or deceptive content makes GenAI a potent tool in the hands of malicious actors. Let us unpack some of the most salient nefarious applications of GenAI technologies. Figure 2 provides a map of such plausible and known applications. In Table 3, we summarize several proof-of-concept examples of scenarios in which GenAI and LLMs can be abused to cause personal and financial harm, distort the information ecosystem, and manipulate sociotechnical systems and infrastructures.

Table 3 Proof-of-concept scenarios highlighting the potential for different types of harm in malicious GenAI applications

The rise of deepfakes

GenAI can produce strikingly realistic images of people, indistinguishable from the profile photos one might encounter on platforms like Facebook, Twitter, or Tinder. Although these individuals do not exist, such synthetic identities are already being used in malicious activities (see Table 1D).

AI-generated faces

There are businesses that offer “fake people” for purchase. For instance, on the website Generated.Photos, one can buy a “unique, worry-free” fake person for $2.99, or even 1,000 people for $1,000. Someone who needs just a few fake individuals, perhaps for a video game or to diversify a company website, can obtain their photos for free from ThisPersonDoesNotExist.com. There is even a company, Rosebud.AI, that can animate these fake personas and make them talk (the stated aim is games and art, but the technology could easily be abused).
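
To make the mechanism concrete, the sketch below shows, at a high level, what powers services of this kind: a pretrained GAN generator maps a random latent vector to the face of a person who does not exist. The loader function and checkpoint name are hypothetical placeholders; real StyleGAN-family codebases expose different interfaces.

```python
# Minimal sketch: a pretrained GAN generator turns random noise into a face.
# NOTE: `load_pretrained_generator` and the checkpoint filename are
# hypothetical placeholders, not a specific project's actual API.
import torch

def load_pretrained_generator(checkpoint_path: str) -> torch.nn.Module:
    """Hypothetical loader for a StyleGAN-style generator checkpoint."""
    return torch.load(checkpoint_path, map_location="cpu")

generator = load_pretrained_generator("stylegan_ffhq_generator.pt")
generator.eval()

latent_dim = 512                # a typical latent size for face GANs
z = torch.randn(1, latent_dim)  # a random point in latent space

with torch.no_grad():
    fake_face = generator(z)    # tensor of shape (1, 3, H, W): a face of
                                # a person who does not exist
```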

Use of synthetic personas

AI-generated identities are beginning to appear on the Internet and are being used by real people with malicious intentions. Examples include spies using attractive faces to infiltrate intelligence communities, right-wing propagandists hiding behind fake profiles, and online harassers using a friendly face to troll their targets.

The perfect alibi: plausible deniability and attribution problems

The ability to generate fictitious images and videos lends itself to abuses beyond deepfake-fueled non-consensual pornography or the creation of misinformation for harassment and slander. Researchers are concerned that the same technologies could be used to construct alibis or fabricate criminal evidence in scalable and inexpensive ways [25]. An article published by InWeb3 put it best (see Table 1E):

“These possibilities undermine trust, credibility, and accountability. They create plausible deniability, the ability to deny responsibility or involvement in an action, by generating fake evidence or alibis. They also create attribution problems, the difficulty of identifying the source or origin of an action, by generating fake identities or locations. Ethical dilemmas also arise, the conflict between moral principles or values, by generating content that violates human rights or norms.”

GenAI against the people

The potential threats posed by GenAI in the realm of cybersecurity include ad hominem attacks [11], such as automated online harassment and personalized scams (see Table 1F).

AI against users

The primary targets of AI-powered attacks are not just vulnerable systems but also the human users behind them. AI technology can scrape personally identifiable information (PII) and gather social media data about potential victims. This enhanced data collection helps criminals craft more detailed and convincing social-engineering lures than a traditional human attacker could produce alone.
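
Even the harvesting step requires little sophistication. The sketch below uses plain regular expressions to pull common PII patterns out of scraped public text; the same logic also underpins defensive redaction tools. The patterns shown are simplified illustrations, not production-grade detectors.

```python
# Illustrative only: simple pattern matching can already surface PII in
# scraped public text; GenAI then contextualizes such data into convincing
# lures. These regexes are simplified examples, not exhaustive patterns.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def extract_pii(text: str) -> dict[str, list[str]]:
    """Return every match of each PII pattern found in `text`."""
    return {label: rx.findall(text) for label, rx in PII_PATTERNS.items()}

sample = "Reach me at jane.doe@example.com or 555-867-5309 after 5pm."
print(extract_pii(sample))
# {'email': ['jane.doe@example.com'], 'us_phone': ['555-867-5309'], 'ssn_like': []}
```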

Bespoke spear phishing

While “phishing” involves generic email lures, “spear phishing” involves collecting data on a target and crafting a personalized email [12]. Historically, spear phishing was primarily used against governments and businesses. However, with AI tools that can scrape data from various sources, spear phishing will become more common and more effective.

Automated harassment

Beyond data theft and blackmail, GenAI can be used for automated harassment. Cybercriminals, as well as individuals with malicious intent, can use GenAI technology to launch harassment campaigns that result in service disruptions, ruined reputations, or more traditional forms of online harassment. Victims could range from businesses to private individuals or public figures. Tactics might include the creation of fake social media accounts used to spread lies or automated phone calls using voice over IP (VoIP) services. The automation of harassment processes could create a relentless and potentially untraceable campaign against victims.

Fake people, real consequences

The use of LLMs in conjunction with other GenAI tools can bring to life synthetic personas used for scams, swindles, and other deceptions (see Table 1G).

Fake users, real money scams

GenAI can be used to scale up the generation of synthetic personal data, including fake accounts and fake transactions (see Table 1G). For example, JPMorgan Chase discovered that its acquisition of a college financial aid platform included numerous fictitious accounts. The platform was believed to contain 4.25 million customer accounts, but the bank later found that only 300,000 were legitimate. The platform vendor allegedly hired a data scientist to fabricate the majority of the accounts. Similarly, Wells Fargo faced penalties when it was revealed that employees had opened at least 3.5 million new accounts using data from existing customers without their consent. By creating fake PINs and email addresses, funds were transferred from legitimate to fraudulent accounts. Fake accounts have also been a problem in the social media and online retail sectors, leading to spamming, fake reviews, and fraud powered by user spoofing. For instance, PayPal disclosed that it believed 4.5 million of its accounts were not legitimate and possibly fraudulent.
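
As a sketch of how defenders screen for synthetic accounts, consider the naive checks below: flagging accounts that share contact details or that arrive in improbable registration bursts. The field names and thresholds are hypothetical illustrations, not any institution’s actual fraud logic.

```python
# Minimal sketch of a naive synthetic-account screen. Field names
# ("id", "email", "created_at") and the burst threshold are hypothetical.
from collections import Counter
from datetime import timedelta

def flag_suspicious(accounts: list[dict]) -> set[str]:
    flagged = set()

    # 1. Duplicate emails shared across distinct accounts.
    email_counts = Counter(a["email"] for a in accounts)
    flagged |= {a["id"] for a in accounts if email_counts[a["email"]] > 1}

    # 2. Registration bursts: more than 100 sign-ups in any 60-second window.
    ordered = sorted(accounts, key=lambda a: a["created_at"])
    window = timedelta(seconds=60)
    for i, acct in enumerate(ordered):
        j = i
        while j < len(ordered) and ordered[j]["created_at"] - acct["created_at"] <= window:
            j += 1
        if j - i > 100:
            flagged |= {a["id"] for a in ordered[i:j]}

    return flagged
```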

Kidnapped by a bot?

Generative AI can copy voices and likenesses, making it possible for individuals to appear as if they are saying or doing almost anything. This technology is similar to “deepfake” videos but applies to voices.

AI-generated voices in scams: AI-generated voices are being used to enhance scams, making them more convincing (see Table 1H). For instance, people have received calls from what sounds like a relative asking for money, but the voice was generated by artificial intelligence as part of a fraudulent scheme.

Voice spoofing and ransom: Threat actors can easily obtain a few seconds of someone’s voice from social media or other audio sources and use generative AI to produce entire scripts of whatever they want that person to say. This has led to scams in which children appear to call their parents asking for a wire transfer for ransom (see Table 1I).

Voice authentication: AI can also be used to bypass voice-authentication systems. For example, some financial services companies allow users to access account information through voice recognition alone; generative AI can mimic those voices to gain unauthorized access.
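
To see why cloned voices defeat such systems, consider a minimal sketch of threshold-based speaker verification: the system only checks that two voice embeddings are sufficiently close, not that a live human produced the audio. The embed_voice function below is a hypothetical stand-in for a real speaker-embedding model.

```python
# Minimal sketch of threshold-based speaker verification. `embed_voice` is
# a hypothetical placeholder for a real speaker-embedding network. The
# weakness: any audio, cloned or not, whose embedding lands within the
# threshold is accepted; nothing verifies that a live human spoke.
import numpy as np

def embed_voice(audio_samples: np.ndarray) -> np.ndarray:
    """Hypothetical: map raw audio to a fixed-size voice embedding."""
    raise NotImplementedError("replace with a real speaker-embedding model")

def is_same_speaker(enrolled: np.ndarray, attempt: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Accept if cosine similarity between the embeddings exceeds the threshold."""
    cos = enrolled @ attempt / (np.linalg.norm(enrolled) * np.linalg.norm(attempt))
    return cos >= threshold
```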

Opening the floodgates to disinformation

LLMs have the ability to craft persuasive content that can parrot false narratives and conspiracy theories, effectively and at scale (see Table 1J). Some concerned researchers recently described Large Language Models like ChatGPT as weapons of mass deception [24]. It seems undeniable that the potential for GenAI and LLMs to craft fictitious, nonfactual, inaccurate, or deceiving content is unparalleled [17].

LLMs and disinformation

Soon after the launch of ChatGPT, researchers tested its ability to produce content based on questions filled with conspiracy theories and false narratives. The AI-generated content was so convincing that Gordon Crovitz, a co-chief executive of NewsGuard (a company that tracks online misinformation), stated, “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the Internet.”

ChatGPT’s capabilities

ChatGPT can produce convincing content rapidly without revealing its sources. When supplied with disinformation-loaded questions, it can generate clean variations of the content en masse within seconds. When researchers from NewsGuard asked ChatGPT to produce content based on false narratives, the AI complied about 80% of the time (see Table 1K). For instance, when asked to write from the perspective of conspiracy theorist Alex Jones about the Parkland shooting, ChatGPT produced content that falsely claimed the mainstream media and the government used “crisis actors” to push a gun-control agenda.
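
An audit in the spirit of the NewsGuard experiment is straightforward to script: submit narrative-seeded prompts to a model and count how often it complies rather than refuses. The sketch below assumes the OpenAI Python client (openai >= 1.0); the model name and the keyword-based refusal heuristic are simplified illustrations.

```python
# Illustrative audit harness in the spirit of the NewsGuard test: measure
# the fraction of disinformation-seeded prompts a model complies with.
# Assumes the OpenAI Python client (openai >= 1.0); the refusal heuristic
# and model name are simplified illustrations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am sorry")

def looks_like_refusal(reply: str) -> bool:
    return any(marker in reply.lower() for marker in REFUSAL_MARKERS)

def audit_compliance_rate(prompts: list[str], model: str = "gpt-4o-mini") -> float:
    """Return the fraction of prompts the model complied with (did not refuse)."""
    complied = 0
    for prompt in prompts:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        if not looks_like_refusal(response.choices[0].message.content):
            complied += 1
    return complied / len(prompts)
```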

All systems down

Yet the potential misuse of GenAI could have its most catastrophic consequences at the level of sociotechnical systems and infrastructures. When deployed at a planetary scale, GenAI’s influence extends beyond mere technological advancement: it has the potential to profoundly affect the very foundations of our economy, democracy, and infrastructure. Targeted surveillance, censorship, and synthetic realities have all been topics of concern in the research community.

Hyper-targeted surveillance

Enhanced by GenAI, surveillance capabilities such as facial recognition systems can reach unprecedented levels of accuracy. When integrated with other personal information and online data, these systems could not only recognize but also predict individual behaviors. Such advancements, while promising in the context of security, raise alarming concerns about privacy and individual rights. We may soon be entering an age of ubiquitous GenAI-driven surveillance (see Table 1L).

Total information control

The intersection of GenAI with content moderation and censorship poses significant challenges to democratic values [30]. While LLMs can efficiently detect and remove harmful content from digital platforms, the potential for misuse, especially by authoritarian regimes, is concerning. The risk of suppressing dissenting voices and curating a single narrative threatens the very essence of democracy.
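
The dual-use risk is visible in code: a prompt-driven LLM filter that removes hate speech is mechanically identical to one that removes criticism of a government; only the policy string differs. Below is a minimal sketch assuming the OpenAI Python client; the policy wording and model name are illustrative.

```python
# Minimal sketch of an LLM-driven content filter. The dual-use risk lives
# in the `policy` string: the mechanism cannot distinguish removing hate
# speech from suppressing dissent. Assumes the OpenAI Python client; the
# policy wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()

def should_remove(post: str, policy: str, model: str = "gpt-4o-mini") -> bool:
    """Return True if the post should be removed under the given policy."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": f"You are a content filter. Policy: {policy}. "
                        "Answer with exactly REMOVE or ALLOW."},
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content.strip().upper() == "REMOVE"

# A benign policy and an authoritarian one invoke the identical mechanism:
# should_remove(post, policy="remove targeted harassment")
# should_remove(post, policy="remove criticism of state officials")
```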

Entirely synthetic realities

In the era of synthetic realities, spanning augmented reality (AR), virtual reality (VR), and the expansive metaverse, GenAI stands as a powerful architect. With its capability to craft intricate, indistinguishable virtual environments, GenAI has the potential to redefine our perception of reality itself. However, this transformative power is not without pitfalls. As synthetic realities become increasingly immersive and indistinguishable from the physical world, there lies a profound risk of manipulation. Unscrupulous entities could exploit GenAI-generated environments to influence individuals’ beliefs, emotions, and behaviors. From subtly tailoring virtual advertisements to individual preferences, to creating entire virtual narratives that push specific agendas or ideologies, the potential for psychological and behavioral manipulation is vast. As we embrace the wonders of synthetic realities, it becomes imperative to remain vigilant, ensuring that the line between the virtual and the real remains discernible and that our agency within these realms is preserved.

Systemic aberrations

Lastly, the ability of GenAI to manipulate public opinion can have cascading effects on planetary-scale systems, from influencing stock markets to swaying election outcomes. In conclusion, as we navigate the intricate landscape of GenAI, it is imperative to recognize its massive-scale implications. While the opportunities are immense, the challenges are equally daunting. Addressing the ethical, security, and societal concerns associated with GenAI is not just a technological endeavor but a global responsibility.

Table 4 Antithetic scenarios demonstrating the dual nature of GenAI’s capabilities


