Uncategorized

Generative AI is forcing enterprises — and policymakers — to rewrite the rules of cybersecurity


Following a year full of excitement and uncertainty, and more opinions about the future of AI than anyone could count, AI providers, enterprises, and policymakers are now rallying around one thing: AI security. 

The White House recently followed in the European Union’s footsteps to introduce a new set of standards for the secure development and deployment of AI models. But while regulators triangulate their policies and AI companies work to comply, the real responsibility to proceed safely will remain with enterprises. 

As companies seek to safely integrate generative AI into products and services (and avoid some not-so-good outcomes), it’s essential to understand why generative AI challenges conventional thinking about cybersecurity, and what sparked the need for AI regulations in the first place.  

It’s So Easy a Child Could Do It 

Unlike any technology we’ve seen before, generative AI attacks are limited only by the English language. Attacks can be executed by literally anyone — from a novice hacker, to a ten year old, to your Grandma or Grandpa.

Other types of software require bad actors to have, at minimum, intermediate knowledge or experience in coding languages or IT infrastructure to bypass cybersecurity safeguards. Generative AI merely requires creativity and malicious intent. 

In a sense, creativity is the new hacker’s currency; it’s used to craft and execute attacks that traditional cybersecurity measures fail to detect and prevent. With 72 percent of white hat hackers believing they’re more creative than AI, it’s safe to assume that bad actors with similar skill sets only need a few creative muscles to cause material problems at scale. 

From persistent nagging to creative wordplay, hackers can trick an AI model to perform unintended functions and reveal information otherwise meant to be guarded. These prompts don’t need to be complex, and bad actors are constantly exploring new methods to get generative AI models to spill their secrets. 

The threat landscape for companies innovating with AI just got a lot more complex. So what should we do about it?

Industry-Wide Collaboration is Essential 

Just like there are various ways to express a message in English, the same goes for LLM hacks. There are countless different ways to get an AI model to produce toxic or racist content, expose credit card information, or espouse misinformation. The only way to effectively protect AI apps from this volume of attack vectors is with data. A lot of it. 

Safeguarding against AI threats requires extensive knowledge of what those threats are. Since the possible attack vectors grow every day, AI security calls for an unprecedented compilation of threat data. No one source or company can amass the necessary data to sufficiently safeguard LLMs alone. AI security needs to be a collaborative, industry-wide effort. 

We started to see this play out at DEFCON31, where white hat hackers descended to stress test popular generative AI models, discover vulnerabilities, and share findings. More recently, the Biden Administration mandated that all LLMs undergo safety testing and share results with the U.S. government. In addition to government-led efforts, open-source datasets are popping up and will also play a critical role in pooling AI security data. These community-oriented efforts are essential, and I expect to see even greater collaboration in the future. 

Cybersecurity’s Second Act   

Cybersecurity teams have never had it easy. The rate and complexity of cyberattacks has steadily increased year-over-year, and cyber talent shortages have only made matters worse. The advent of generative AI is the latest 100 pound weight placed on cyber teams’ backs. 

As product teams are integrating hundreds of Generative AI applications across their organization, previously employed security solutions like Firewalls fall short of addressing AI-specific risks. Generative AI is more accessible than ever. The time for cybersecurity leaders to understand the nuances of cybersecurity’s second act to protect their organizations from AI cyber risks is now. They need to think carefully about how to enable their organizations to bring generative AI applications to production–securely and safely. Educating and supporting this group will be essential to the success of any enterprise AI initiative. 

Only when the fundamental risks are understood and internalized, organizations can start to build their own security processes and standards and safely bring AI into the real world.

David Haber is Co-Founder and CEO of Lakera.





Source link

Leave a Reply

Your email address will not be published. Required fields are marked *