Codenotary Adds Machine Learning Algorithms to SBOM Search Tool



Codenotary this week added machine learning algorithms to the search engine it provides for its Trustcenter platform for generating and managing software bills of materials (SBOMs).

Compatible with the Vulnerability Exploitability eXchange (VEX) format, the machine learning algorithms surface more accurate results when determining which software components are running in an application environment.

Codenotary CEO Moshe Bar said that’s critical because, in the absence of that capability, IT and cybersecurity teams might not discover every instance of a vulnerable software component that exists within the application binaries they have deployed.

In fact, each SBOM tool available today will generate different results, so it’s critical that IT teams standardize on a tool that provides the most consistent results, he added.

The VEX format was created by the Multistakeholder Process for Software Component Transparency led by the National Telecommunications and Information Administration (NTIA), an arm of the U.S. Department of Commerce, to make it easier to share information about vulnerabilities in a standard format. The Cybersecurity and Infrastructure Security Agency (CISA) then worked with Chainguard to create OpenVEX, a specification and set of tools for reporting vulnerabilities in a machine-readable format.
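For illustration, a minimal OpenVEX document might look like the sketch below. The document `@id`, author, timestamp and product identifier are placeholders chosen for this example, not values from Codenotary's tooling; the CVE and the `status`/`justification` vocabulary follow the OpenVEX specification.

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/example-2024-001",
  "author": "Example Security Team",
  "timestamp": "2024-01-01T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2021-44228" },
      "products": [
        { "@id": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1" }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

The `status` field is what lets a consumer distinguish a component that is merely present from one that actually exposes the product, which is how VEX data cuts down the noise from raw SBOM scans.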

In the wake of an executive order requiring federal agencies to have access to SBOMs to make it easier to determine where vulnerabilities are located in an application, many enterprise IT organizations have similarly adopted SBOMs to help better protect their software supply chains. The challenge many of them are now encountering is how best to operationalize SBOMs within the context of an application remediation effort.

Many of them are also unsure how accurate SBOMs are at any given time as applications are continuously updated, noted Bar. Trustcenter delivers code signing, provenance checks, attestation and SBOM management that includes scores for assessing the severity of the risks an application represents.

In general, it’s not clear how far along organizations are in strengthening their software supply chains as part of the broader embrace of DevSecOps best practices. But as developers adopt generative artificial intelligence (AI) tools to write code faster, the pace at which applications are built has increased substantially. Unfortunately, many of them are using general-purpose platforms based on large language models (LLMs) that were trained on samples of code collected from across the internet. Many of those samples contained vulnerabilities that can find their way into the code an AI model generates.

Most developers lack the expertise to recognize those vulnerabilities, so organizations will need tools capable of identifying vulnerabilities that might be multiplying across their codebases. Conversely, there may be just as many instances where developers who lack cybersecurity expertise are introducing fewer vulnerabilities because of generative AI.
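At its simplest, that kind of tooling is a join between an SBOM's component inventory and a feed of known advisories. The sketch below assumes the SBOM has already been parsed into (name, version) pairs; the component list and the advisory data are illustrative, not drawn from any real scanner.

```python
# Sketch: flag SBOM components that match a known-vulnerable list.
# The advisory feed and the SBOM contents here are illustrative examples.
known_vulnerable = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228",
    ("openssl", "3.0.1"): "CVE-2022-3602",
}

# Components as they might appear after parsing an SBOM document.
sbom_components = [
    ("log4j-core", "2.14.1"),
    ("openssl", "3.0.7"),
    ("requests", "2.31.0"),
]

def flag_vulnerable(components, advisories):
    """Return (name, version, cve) for every component with a known advisory."""
    return [(name, ver, advisories[(name, ver)])
            for (name, ver) in components
            if (name, ver) in advisories]

for name, version, cve in flag_vulnerable(sbom_components, known_vulnerable):
    print(f"{name} {version}: {cve}")
```

Real scanners must also handle version ranges and ecosystem-specific identifiers rather than exact-match lookups, which is one reason different SBOM tools produce different results for the same application.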

Hopefully, LLMs that are trained on a narrow base of code will soon consistently generate more reliable code. In the meantime, however, SBOMs will play a critical role in enabling organizations to determine the level of application risk they are willing to assume.
