
Protect AI finds vulnerabilities in open-source AI and machine learning tools



A new report released today by Protect AI Inc., a cybersecurity startup focused on artificial intelligence and machine learning systems, highlights key vulnerabilities in those systems recently uncovered through its bug bounty program.

Protect AI was founded in 2022 by former Amazon Web Services Inc. and Oracle Corp. employees, including Chief Executive Officer Ian Swanson, who was previously the worldwide leader for artificial intelligence and machine learning at AWS. The company offers products designed to deliver safer AI applications by providing organizations the ability to see, know and manage their machine learning environments.

Among its offerings is a bug bounty program to identify vulnerabilities in AI and machine learning, which Protect AI claims is the first of its type. The bug bounty program has seen strong success, with over 13,000 community members hunting for impactful vulnerabilities across the entire AI and machine learning supply chain.

Through both the bug bounty program and its own research, Protect AI has found that the tools used in the supply chain to build the machine learning models powering AI applications are exposed to unique security threats. Because many of these tools, frameworks and artifacts are open source, they may ship with out-of-the-box vulnerabilities, such as unauthenticated remote code execution or local file inclusion flaws, that can lead directly to complete system takeover.

The first vulnerability detailed posed a significant risk of server takeover and loss of sensitive information. MLflow, a widely used tool for storing and tracking models, was found to have a critical flaw in the code it uses to pull data down from remote storage. By exploiting the flaw, attackers could deceive users into connecting to a malicious remote data source and potentially execute commands on the user's system.
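The report stops short of exploit details, but the class of flaw is straightforward to illustrate. The following sketch is hypothetical and is not MLflow's actual code: it assumes a client that blindly deserializes bytes fetched from a remote source with Python's pickle module, a common pattern that turns a malicious data source into command execution on the client.

    import os
    import pickle

    class MaliciousPayload:
        # pickle calls __reduce__ to learn how to rebuild an object;
        # returning (callable, args) makes pickle.loads() invoke the
        # callable, so loading the bytes runs the attacker's command.
        def __reduce__(self):
            return (os.system, ("echo arbitrary command executed",))

    # Bytes a malicious remote data source might serve as "model data":
    untrusted_bytes = pickle.dumps(MaliciousPayload())

    # A naive client deserializing what it fetched runs the command here.
    pickle.loads(untrusted_bytes)

The mitigation is the same in any framework: treat fetched bytes as untrusted input and avoid deserialization formats that can encode executable code.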

Another security issue uncovered in MLflow was an arbitrary file overwrite vulnerability, caused by a bypass of the validation function MLflow uses to check the safety of file paths. Malicious actors could exploit the flaw to remotely overwrite files on the MLflow server.
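A validation bypass of this kind typically comes down to checking a path before resolving it. The sketch below is a hypothetical reconstruction of the flaw class, not MLflow's validation code; naive_is_safe, robust_is_safe and ARTIFACT_ROOT are illustrative names.

    import os

    ARTIFACT_ROOT = "/srv/artifacts"

    def naive_is_safe(user_path: str) -> bool:
        # BUG: the joined string always begins with ARTIFACT_ROOT, even
        # when "../" segments climb back out of it, so the prefix check
        # passes for traversal paths.
        full = os.path.join(ARTIFACT_ROOT, user_path)
        return full.startswith(ARTIFACT_ROOT)

    def robust_is_safe(user_path: str) -> bool:
        # Fix: resolve ".." segments and symlinks first, then test
        # containment against the resolved root.
        root = os.path.realpath(ARTIFACT_ROOT)
        full = os.path.realpath(os.path.join(ARTIFACT_ROOT, user_path))
        return full == root or full.startswith(root + os.sep)

    evil = "../../etc/crontab"
    print(naive_is_safe(evil))   # True  -- the check is bypassed
    print(robust_is_safe(evil))  # False -- traversal is caught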

The third MLflow vulnerability was a local file inclusion issue. When hosted on certain operating systems, MLflow could be made to inadvertently expose the contents of sensitive files. The exposure was caused by a bypass of the file path safety mechanism, and the potential damage ranged from loss of sensitive information to complete system takeover, particularly if SSH keys or cloud credentials were accessible to MLflow with sufficient permissions.
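The read side of this class of bypass looks much the same. This hypothetical handler (again, not MLflow's code; read_artifact and ARTIFACT_ROOT are illustrative) serves whatever path a request supplies, so "../" segments walk out of the artifact directory.

    import os

    ARTIFACT_ROOT = "/srv/artifacts"

    def read_artifact(requested: str) -> bytes:
        # BUG: the user-supplied path is joined and opened with no
        # normalization or containment check before the read.
        path = os.path.join(ARTIFACT_ROOT, requested)
        with open(path, "rb") as f:
            return f.read()

    # A request such as read_artifact("../../home/mlflow/.ssh/id_rsa")
    # would return a private key -- exactly the loss described above.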

All the vulnerabilities detailed were disclosed to maintainers at least 45 days before publication. Collectively, they underscore the need for stringent security measures in AI and machine learning tools, given their access to critical and sensitive data.

