
Generative AI – A Double-Edged Sword in Cybersecurity



The impact of generative artificial intelligence (AI) models such as Google’s Bard, OpenAI’s ChatGPT, and many others is hard to ignore. Much has been written throughout 2023 about how these models have fundamentally changed our ability to create new data, including text, images, and code. Meanwhile, there has been growing interest in using large language models (LLMs) and generative AI for cybersecurity, which experts believe will present both opportunities and challenges for the industry in the coming months.

Anshuman Sharma, Associate Director, CSIRT & Investigative Response, APJ, at Verizon Business, explains that threat actors can exploit generative AI to worsen security breaches, orchestrate large-scale social engineering attacks, create more realistic deepfake audio-visual content, develop sophisticated self-evolving malware strains, and launch phishing attacks.

Sanjoy Paul, Senior Faculty-Technology at Hero Vired, added that Generative AI significantly increases the risks around password security, empowering cybercriminals to crack passwords more effectively, especially weak or reused ones. It also enables novel forms of malware that can evade traditional security systems, posing a substantial challenge to cybersecurity defenses.


“The interaction between AI models and data privacy becomes a critical concern as well. AI models, if prompted cleverly, may inadvertently leak sensitive information, presenting a potential threat to proprietary company data. The cyber risk from Generative AI extends beyond code and text generation; it also includes the creation of sophisticated photo, video, and voice content,” he says. 

Despite these challenges, Generative AI also has a positive impact on cybersecurity, which is what makes it a double-edged sword. It helps developers build faster, more efficient defense mechanisms against cybercrime. Historically, new technologies have been exploited by hackers, but they have also empowered the development of stronger defenses.

Traditional security measures often struggle to keep up with the rapid development of new malware strains. However, Generative AI can analyze patterns of known threats and generate models to predict and detect newer anomalies, says Sharma, adding that “Its algorithms can analyse vast amounts of historical data and learn to identify patterns that human analysts might overlook. By training on diverse datasets, these algorithms can detect anomalies, unusual behaviours, and potential threats that have not been previously encountered.”  
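The pattern-based anomaly detection Sharma describes can be sketched in a few lines. This is an illustrative toy example, not any vendor’s implementation: the feature (failed logins per hour) and the z-score threshold are assumptions chosen to show how a model trained on historical data flags behaviour it has not seen before.

```python
# Minimal anomaly-detection sketch: flag events that deviate strongly
# from the historical baseline, using a simple z-score rule.
from statistics import mean, stdev

def find_anomalies(history, new_events, threshold=3.0):
    """Return events more than `threshold` standard deviations from
    the historical mean (hypothetical metric: failed logins per hour)."""
    mu, sigma = mean(history), stdev(history)
    return [e for e in new_events if abs(e - mu) / sigma > threshold]

baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]   # typical failed logins per hour
incoming = [5, 6, 48, 7]                     # 48 is a clear outlier
print(find_anomalies(baseline, incoming))    # → [48]
```

Production systems replace the single z-score with learned models over many features, but the principle is the same: learn what “normal” looks like from history, then surface deviations for analysts.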


Aaron Bugal, Field CTO – APJ at Sophos, states, “Generative AI, such as ChatGPT, introduces an interesting situation for cybersecurity. On one hand, it raises concerns as it can be used for insidious purposes like masking malware, social engineering, and even acting as a ‘living off the land’ personal assistant. This highlights the need for vigilance and innovative defense measures. On the other hand, Generative AI’s potential to augment human capabilities in tasks such as cross-domain detection, automation, and threat analysis presents a promising avenue for defenders.”

Generative AI also plays a crucial role in the battle against deepfakes, where AI-driven detection, analysis, and counteraction are indispensable. By leveraging advanced algorithms and machine learning, AI helps identify the subtle manipulations indicative of deceptive content. Bugal adds that its integration with authentication mechanisms, employing blockchain and cryptographic techniques, fortifies the digital landscape against manipulation.

It also offers the possibility of creating more sophisticated biometric systems for authentication. AI-generated deepfake detection models, for example, can differentiate between genuine biometric data and those crafted by impostors, providing an additional layer of security to authentication processes. 


Abhinanda Sarkar, Academic Director at Great Learning, mentions that Generative AI also has profound implications for cybersecurity training. By creating realistic cyber-threat scenarios, it gives cybersecurity professionals hands-on experience in a safe and controlled environment.

This experiential learning approach ensures that security teams are well-equipped with the necessary skills to handle real-world cybersecurity challenges. 

Muraleedhar Pai, Executive Director & Chief Technology Officer at Maveric Systems, explains, “While cybercriminals can leverage AI for more sophisticated attacks, CISOs can harness the same technology to develop robust defense mechanisms and create impregnable systems.  


“Ultimately, the success of cybersecurity depends on the adaptability and innovation of security professionals and the integration of AI technologies into comprehensive, multi-layered defense strategies. As AI evolves, regulations and ethical considerations will play a crucial role in shaping the balance between offensive and defensive uses of Generative AI in the cybersecurity realm.” 

According to a report by Bloomberg Intelligence (BI) published in June 2023, the generative AI market is poised to reach nearly $1.3 trillion over the next 10 years, up from a market size of just $40 billion in 2022. Growth could run at a CAGR of 42%, driven by increased data usage, digital advertising, and innovation in cloud storage, specialized software, and services.

Meanwhile, a September report by market research firm Fortune Business Insights valued the cybersecurity market at $153.65 billion in 2022 and projects it to grow from $172.32 billion in 2023 to $424.97 billion by 2030, a CAGR of 13.8% over the forecast period. In India alone, the cybersecurity products segment grew more than 3.5x, from around $1 billion in 2019 to around $3.7 billion in 2023, according to the Data Security Council of India. That said, CISOs and security teams can explore new ways to use Generative AI as a valuable ally for cybersecurity defenses.
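The growth figures cited in these reports can be sanity-checked against the standard compound-annual-growth-rate formula, end = start × (1 + rate)^years. A quick sketch (the function name here is ours, and the figures are simply the ones quoted above):

```python
# Compound annual growth rate: end_value = start_value * (1 + rate) ** years
def project(start, rate, years):
    """Project a market size forward at a constant annual growth rate."""
    return start * (1 + rate) ** years

# Bloomberg Intelligence: $40B in 2022 at ~42% CAGR over 10 years
print(round(project(40, 0.42, 10)))   # ≈ 1333 (billions), i.e. roughly $1.3 trillion

# Fortune Business Insights: $172.32B in 2023 at 13.8% CAGR to 2030
print(project(172.32, 0.138, 7))      # ≈ 425.9, close to the cited $424.97 billion
```

Both projections line up with the cited figures to within rounding, which suggests the headline numbers and CAGRs in the reports are internally consistent.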





