Uncategorized

Protect AI Acquires Laiyer AI to Secure Large Language Models (LLMs)


Laiyer AI’s LLM Guard extends Protect AI’s offerings with advanced LLM security capabilities; Open source tool has been downloaded over 2.5 million times per month

SEATTLE, January 31, 2024–(BUSINESS WIRE)–Protect AI, the leading artificial intelligence (AI) and machine learning (ML) security company, today announced it has acquired Laiyer AI. With the acquisition, Protect AI will be offering a commercial version of Laiyer AI’s open source LLM Guard with expanded features, capabilities, and integrations within the Protect AI platform. LLM Guard is freely available today, and an industry leading open-source project for protecting large language models (LLMs) against security threats, misuse and prompt injection attacks, while also providing tools to manage risk and compliance needs.

OpenAI’s GPT-4 and other LLMs are revolutionizing AI, and excel at understanding and generating human language. Their adoption spans various sectors, including customer service, healthcare, and content creation, driving the market’s growth from USD 11.3 billion in 2023 to an expected USD 51.8 billion by 2028, according to multiple industry analysts. This growth, fueled by the demand for applications like chatbots and virtual assistants, positions LLMs as key tools for businesses seeking to leverage textual data for competitive advantage. However, security and misuse concerns are limiting wider adoption among major companies.

“Protect AI is thrilled to announce the acquisition of Laiyer AI’s team and product suite, which significantly enhances our leading AI and ML security platform. These new capabilities will empower our customers in automotive, energy, manufacturing, life sciences, financial services, and government sectors to develop safe, secure GenAI applications,” said Ian Swanson, CEO of Protect AI. “Our industry-leading platform now boasts advanced features and filters for governing LLM prompts and responses, elevating the end-user experience and reaffirming our commitment to safeguarding Generative AI applications,”

In 2023, the OWASP Top 10 for LLM Applications spotlighted the unique security risks associated with deploying Large Language Models that business leaders should understand. Key risks include prompt injections, training data poisoning, and supply chain vulnerabilities. A notable concern is Prompt Injection Vulnerabilities, where attackers can manipulate LLMs through crafted inputs, leading to data exposure or decision manipulation. These attacks can be direct, via the LLM’s input, or indirect, through tainted data sources, and often bypass detection due to the implicit trust in LLM outputs. With upcoming regulations on LLMs, it’s vital to safeguard against such malicious activities and harmful responses to maintain corporate integrity and security.

Laiyer AI LLM Guard is a groundbreaking security solution for addressing the challenges associated with deploying LLMs. Unlike many closed-source, untested options prevalent in the market, LLM Guard offers a transparent, open-source alternative that bolsters confidence in deploying LLMs at an enterprise scale. This innovative tool is designed to enhance the security of LLM interactions, supporting both proprietary and third-party models.

LLM Guard’s core features include the detection, redaction, and sanitization of inputs and outputs from LLMs, effectively mitigating risks such as prompt injections to personal data leaks. These features are integral to preserving LLM functionality while safeguarding against malicious attacks and misuse. Moreover, LLM Guard integrates seamlessly with existing security workflows, offering observability tools like logging and metrics. This positions Laiyer AI at the forefront of providing essential security solutions, enabling developers and security teams to deploy LLM applications securely and effectively.

“There’s a clear need in the market for a solution that can secure LLM use-cases from start to finish, including when they scale into production. By joining forces with Protect AI, we are extending Protect AI’s products with LLM security capabilities to deliver the industry’s most comprehensive end-to-end AI Security platform,” said Neal Swaelens and Oleksandr Yaremchuk, Co-founders of Laiyer AI.

LLM Guard exemplifies price/performance leadership in the enterprise security sector for LLMs. The innovative solution balances latency, cost, and accuracy, boasting an impressive scale of adoption with over 13,000 library downloads and 2.5 million downloads of its proprietary models on HuggingFace in just 30 days. LLM Guard’s performance is enhanced by a 3x reduction in CPU inference latency, enabling the use of cost-effective CPU instances instead of expensive GPUs without compromising accuracy. LLM guard is a leader in the field, reinforced by its status as the default security scanner for Langchain and several other leading global enterprises.

The integration of Laiyer AI reinforces Protect AI’s status as the premier platform in AI security and MLSecOps. Protect AI offers unmatched capabilities, enabling enterprises to build, deploy, and manage AI applications that are not only secure and compliant but also operationally efficient. To learn more about LLM Guard visit the webpage.

About Protect AI

Protect AI is the broadest and most comprehensive platform to secure your AI. It enables you to see, know, and manage security risks to defend against unique AI security threats, and embrace MLSecOps for a safer AI-powered world. Protect AI’s Platform provides visibility into the AI/ML attack surface, detects unique security threats, and remediates vulnerabilities. Founded by AI leaders from Amazon and Oracle, Protect AI is funded by Acrew Capital, boldstart ventures, Evolution Equity Partners, Knollwood Capital, Pelion Ventures and Salesforce Ventures. The company is headquartered in Seattle, Washington.

For more information visit us on the web, and follow us on LinkedIn and X/Twitter.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240131394803/en/

Contacts

Media:
Marc Gendron
Marc Gendron PR for Protect AI
[email protected]
617-877-7480





Source link

Leave a Reply

Your email address will not be published. Required fields are marked *