Endor Labs announces AI Model Discovery to find and govern open-source AI models
Endor Labs today announced a new capability in the company’s signature platform that enables organizations to discover the AI models already in use across their applications and to set and enforce security policies governing which models are permitted. Endor Labs AI Model Discovery addresses three critical use cases: it enables application security professionals to discover the local open-source AI models used in their application code, evaluate the risks those models pose, and enforce organization-wide policies on AI model curation and usage. It goes a step further with automated detection, warning developers about policy violations and blocking high-risk models from entering production.
“There’s currently a significant gap in the ability to use AI models safely—the traditional Software Composition Analysis (SCA) tools deployed in many enterprises are designed mainly to track open source packages, which means they usually can’t identify risks from local AI models integrated into an application,” said Varun Badhwar, co-founder and CEO of Endor Labs. “Meanwhile, product and engineering teams are increasingly turning to open-source AI models to deliver new capabilities for customers. That’s why we’re excited to launch Endor Labs AI Model Discovery, which brings unprecedented security in open-source AI deployment.”
The new capabilities complement Endor Scores for AI Models, a recent release that uses 50 out-of-the-box metrics to score every AI model available on Hugging Face (the popular platform for sharing open-source AI models and datasets) across four dimensions: security, popularity, quality, and activity.
Training new AI models is costly and time-consuming, so most developers pull open-source AI models from Hugging Face and adapt them for their specific purpose. These models function as critical application dependencies, yet standard vulnerability scanners can’t accurately analyze them, leaving that risk unmanaged. More than 1 million open-source AI models and datasets are available through Hugging Face today. Endor Labs detects these models, runs them through 50 risk checks, and lets security teams set critical guardrails, all within existing developer workflows. This gives security teams the same visibility and control over AI models that they already expect for other open-source dependencies.
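To illustrate the kind of dependency such scanning targets, here is a minimal sketch (not Endor Labs’ actual implementation) of how a tool might inventory Hugging Face model identifiers referenced in Python source. The regex and function name are illustrative assumptions; it keys on the common `from_pretrained("...")` call pattern used by Hugging Face libraries:

```python
import re

# Illustrative sketch: find Hugging Face model identifiers referenced in
# Python source by matching from_pretrained("...") call sites.
MODEL_CALL = re.compile(r"""from_pretrained\(\s*['"]([^'"]+)['"]""")

def discover_models(source: str) -> list[str]:
    """Return the model IDs referenced in a Python source string."""
    return MODEL_CALL.findall(source)

app_code = '''
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("distilbert-base-uncased")
tok = AutoTokenizer.from_pretrained("bert-base-cased", revision="main")
'''
print(discover_models(app_code))  # ['distilbert-base-uncased', 'bert-base-cased']
```

A production scanner would parse the code properly rather than pattern-match, but the principle is the same: the model ID string is inventoried much like a package name in a traditional SCA scan.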
Most users who enjoy the benefits of the latest AI advances in the applications they use every day are unaware of the dangers that may exist in the software development lifecycle. With these advances from Endor Labs, developers can safely adopt the latest open-source AI models when building the next generation of applications.
Endor Labs AI Model Discovery provides the following capabilities:
- Discover—Scan for local AI models already used within your Python applications, build a complete inventory of those models, and track which teams and applications use them. Today, Endor Labs can identify all AI models from Hugging Face.
- Evaluate—Analyze AI models based on known risk factors using Endor Scores for security, quality, activity, and popularity, and identify models with questionable sources, practices, or licenses.
- Enforce—Set guardrails for the use of local, open-source AI models across the organization based on your risk tolerance. Warn developers about policy violations and block high-risk models from being used within your applications.
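The enforce step above can be sketched as a simple policy check. This is a hypothetical illustration, not Endor Labs’ API: the threshold, blocklist, and function names are invented for the example, and it assumes each discovered model comes with a risk score:

```python
# Hypothetical policy-enforcement sketch. MIN_SCORE, BLOCKED, and the
# model names are illustrative assumptions, not real product values.
MIN_SCORE = 7.0  # illustrative organization-wide risk tolerance
BLOCKED = {"evil-org/backdoored-model"}  # models banned outright

def evaluate_policy(model_id: str, score: float) -> str:
    """Return 'block', 'warn', or 'allow' for a discovered model."""
    if model_id in BLOCKED:
        return "block"          # high-risk model: keep out of production
    if score < MIN_SCORE:
        return "warn"           # policy violation: notify the developer
    return "allow"

print(evaluate_policy("distilbert-base-uncased", 8.5))      # allow
print(evaluate_policy("some-org/unmaintained-model", 4.2))  # warn
print(evaluate_policy("evil-org/backdoored-model", 9.9))    # block
```

The design point is that the guardrail runs inside existing developer workflows: a "warn" surfaces as feedback to the developer, while a "block" stops the model at the CI gate before it reaches production.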
“While vendors have rushed to incorporate AI into their security tooling, they've largely overlooked a critical need: securing AI components used in applications,” said Katie Norton, Research Manager, DevSecOps and Software Supply Chain Security at IDC. “IDC research finds that 60% of organizations are choosing open-source models over commercial ones for their most important GenAI initiatives, so finding and securing these components is critical for any dependency management program. Vendors like Endor Labs are addressing an urgent need by integrating AI component security directly into software composition analysis (SCA) workflows while providing meaningful remediation capabilities that don't overwhelm developers.”
Endor Labs AI Model Discovery is available now for existing customers. Get a free 30-day trial of the full platform here.
Learn more about how Endor Labs AI Model Discovery can help with your AI code governance at: https://www.endorlabs.com/learn/how-to-discover-open-source-ai-models-in-your-code.