RiskRubric.ai Launches as First AI Model Risk Leaderboard to Address Enterprise Security Challenges

September 18th, 2025 3:16 PM
By: Newsworthy Staff

RiskRubric.ai provides standardized security assessments for AI models through objective risk grading across six pillars, enabling organizations to make informed decisions about AI deployment while addressing critical security gaps in rapidly evolving technology.

The Cloud Security Alliance, Noma Security, Harmonic Security, and Haize Labs have launched RiskRubric.ai, the first AI model risk leaderboard designed to address growing security concerns in enterprise AI adoption. The platform provides free, standardized security assessments for hundreds of large language models based on six critical risk pillars: transparency, reliability, security, privacy, safety, and reputation. This initiative comes at a time when AI agents are rapidly proliferating across enterprises, gaining increasing autonomy and access to critical business systems while traditional security frameworks prove inadequate for the breakneck pace of AI development.

RiskRubric.ai evaluates leading AI models through rigorous testing protocols including over 1,000 reliability prompts, 200+ adversarial security tests, automated code scans, and comprehensive documentation reviews. Each model receives objective scores from 0-100 across the six risk pillars, rolling up to A-F letter grades that enable rapid risk assessment without requiring deep AI expertise. The project currently covers 150+ popular AI models including GPT-4, Claude, Llama, Gemini, and specialized enterprise models, with new assessments added continually through the platform available at https://riskrubric.ai.
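The roll-up from per-pillar scores to a letter grade can be sketched in a few lines of code. Note that this is purely illustrative: the article does not publish RiskRubric.ai's actual weighting or grade thresholds, so the simple average and the 90/80/70/60 cutoffs below are assumptions.

```python
# Hypothetical sketch of rolling six 0-100 pillar scores up into an
# A-F letter grade. RiskRubric.ai's real thresholds and weighting are
# not disclosed in this article; the averaging and cutoffs here are
# illustrative assumptions only.

PILLARS = ("transparency", "reliability", "security",
           "privacy", "safety", "reputation")

def overall_grade(scores: dict) -> str:
    """Average the six pillar scores and bucket the result into A-F."""
    if set(scores) != set(PILLARS):
        raise ValueError("expected a score for each of the six pillars")
    avg = sum(scores.values()) / len(scores)
    # Assumed grade boundaries (not from the source).
    for threshold, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if avg >= threshold:
            return grade
    return "F"

# Example: a model scoring 95, 88, 72, 80, 90, 85 averages 85.0 -> "B"
example = dict(zip(PILLARS, (95, 88, 72, 80, 90, 85)))
print(overall_grade(example))
```

Under this sketch, a single weak pillar (e.g. security at 72) is partially masked by strong scores elsewhere, which is why the platform also publishes the per-pillar 0-100 scores alongside the letter grade.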

Niv Braun, CEO and Co-Founder of Noma Security, emphasized the critical need for standardized risk assessments, stating that without them, teams are essentially flying blind when selecting AI models. The collaborative effort represents a watershed moment in making AI model security accessible and transparent for both enterprise cybersecurity teams and AI innovators. Caleb Sima, Chair of the CSA AI Safety Initiative, noted that the rapid adoption and evolution of AI has created an urgent need for a standardized model risk framework that the entire industry can trust, enabling responsible AI innovation at scale.

The technical architecture behind RiskRubric.ai leverages Noma Security's experience securing millions of AI interactions monthly across Fortune 500 enterprises. The assessment methodology combines these insights with real-world attack patterns observed at scale, creating risk context that helps organizations prioritize and address vulnerabilities. Michael Machado, RiskRubric.ai Product Lead, explained that the framework solves the fundamental challenge of creating consistent, comparable risk metrics across wildly different AI architectures, scaling from evaluating a single model in minutes to continuously monitoring hundreds of models as they evolve.

Industry partners contributed specialized expertise to the project, with Haize Labs providing advanced adversarial testing methodologies that uncover failure modes and vulnerabilities that might otherwise remain hidden until exploited in production. Harmonic Security contributed critical insights on privacy assessment and data leakage prevention, addressing organizations' concerns about AI models training on sensitive data. The granular approach to privacy assessment helps organizations understand not just whether a model is secure, but whether it can be trusted with their most sensitive information, which is crucial for maintaining compliance in an AI-driven world.

Source Statement

This news article relied primarily on a press release distributed by citybiz. You can read the source press release here, along with the blockchain registration record for the source press release.