Growing concerns about the societal risks posed by advanced artificial intelligence (AI) systems have prompted debate over whether and how the U.S. government should promote stronger security practices among private-sector developers. Although some companies have made voluntary commitments to protect their systems, competitive pressures and inconsistent approaches raise questions about the adequacy of self-regulation. At the same time, government intervention carries risks: Overly stringent security requirements could limit innovation, create barriers for small firms, and harm U.S. competitiveness.
To help the U.S. government and AI industry navigate these challenges, RAND researchers identified four distinct governance approaches to strengthen security practices among developers of advanced AI systems:
1. Government-enforced AI security standards for high-risk model developers
2. Government-led AI developer authorization program conditioning federal use on security compliance
3. Industry-led AI security certification to promote adoption of common standards
4. Self-regulation combined with increased government-industry collaboration on security practices
By presenting a variety of practicable options, this work enables decisionmakers to better weigh trade-offs and find the right balance between strengthening security and preserving innovation.
What Drives Stronger Security and Compliance?
To draw lessons for the AI industry, the researchers examined security governance approaches in seven high-risk sectors, including nuclear, chemical, and health care.[1]
Across these sectors, the researchers found that federal agencies and industry consortia established compliance regimes to promote industrywide adoption of security standards. By combining incentives and penalties, these governance models sought to shift firms’ cost-benefit calculations and drive greater investment in protective measures.
Across these approaches, the RAND researchers identified four foundational elements that are critical to achieving compliance and promoting security:
- Leadership and institutional capacities are organizational elements that provide authority, resources, and expertise needed to design and implement the framework.
- Security requirements establish expectations for how entities should protect systems, data, and physical assets and form the foundation for accountability and oversight.
- Compliance verification includes the processes used to assess whether entities meet established security requirements, such as audits and reporting requirements.
- Enforcement mechanisms are tools to drive compliance, including penalties for noncompliance and revocation of benefits.
Approaches to each element varied across regimes for several reasons, including differences in the nature of the assets being protected and the number and diversity of covered entities. Where elements were underdeveloped or poorly implemented, compliance lagged and security gaps persisted.
The analysis additionally identified two principles that guide compliance regime design and implementation: proportionality, which calibrates requirements to entities’ level of risk and operational capacity, and stakeholder engagement paired with transparency. Together, these principles minimize undue burdens, enhance the regime’s perceived legitimacy among affected parties, and improve the likelihood of compliance.
Three Illustrative Compliance Regimes for Securing Advanced AI
Of the four policy approaches identified, three involve the creation of compliance regimes that aim to compel or incentivize the frontier AI industry to adopt common security standards: (1) government-enforced security standards, (2) government-led developer authorization, and (3) industry-led certification. Below is a summary of how each option addresses the four foundational elements of successful regimes.
1. Government-Enforced AI Security Standards
Securing Advanced Frontier Environment—AI (SAFE-AI)
This regime requires developers of high-risk general-purpose models to adopt security standards to reduce the risk of theft, misuse, and compromise. Authorized by new congressional legislation and overseen by a newly established AI Safety and Security Institute (AISSI)—a successor to the Center for AI Standards and Innovation—within the Department of Commerce, the regime employs a tiered, risk-based structure, with the strictest security measures reserved for high-risk model developers to guard against nation-state threats. Compliance is ensured through audits, penetration testing, and incident reporting, while accountability is enforced through a variety of proportionate enforcement actions designed to provide opportunities for remediation.
- Leadership and governance. Congress grants AISSI authority to set and enforce security requirements. AISSI proposes rules through a formal rulemaking process, guided by a presidentially appointed board and informed by public and industry input. A director of AI security and compliance oversees technical staff specializing in AI security, cyberdefense, and regulatory oversight.
- Security requirements. SAFE-AI sets both prescriptive and outcome-based security requirements, with tiered obligations based on model risk as determined by training compute. Baseline controls are required across all covered labs, while higher-risk model developers must meet stricter requirements to defend against sophisticated nation-state adversaries.
- Compliance verification. Labs demonstrate adherence through incident reporting, audits, inspections, independent government red teaming, and whistleblower protections. The most powerful models receive the most oversight.
- Enforcement mechanisms. SAFE-AI addresses noncompliance through corrective action plans, escalating civil penalties, operational suspensions, and public disclosure, scaled to the severity of violations.
2. AI Developer Authorization for Federal Use
SecureAI Authorization
This federal program authorizes AI developers’ models for use in government systems and conditions participation on compliance with secure-by-design principles.[2] Operated by an expanded Federal Risk and Authorization Management Program (FedRAMP) office under amended federal policy, SecureAI establishes risk-based authorization tiers, imposing stricter requirements on models that handle sensitive government data or inform high-impact decisions.[3] Compliance is enforced through third-party assessors, continuous monitoring, and corrective action plans, including revocation of authorization for noncompliant labs. The program helps ensure that models deployed in sensitive government environments—such as those handling classified intelligence or informing military decisionmaking—remain resilient to tampering and covert behaviors.
- Leadership and governance. Extending Office of Management and Budget (OMB) policy, the SecureAI program embeds secure-by-design requirements for developers of AI models in federal procurement rules.[4] A new FedRAMP office oversees authorization and compliance, a steering board of senior agency chief information officers sets high-level policy, and an advisory expert council recommends evolving controls. An accreditation team manages technical assessments and accredits third-party auditors.
- Security requirements. The regime creates risk-based authorization tiers; models that handle sensitive data or inform high-impact decisions are subject to the strictest security requirements.
- Compliance verification. Authorized developers undergo impact-level assessments and third-party audits and submit security plans to demonstrate adherence. The FedRAMP program manager grants time-limited authorizations contingent on continuous monitoring, vulnerability scans, incident reporting, periodic audits, and reassessment following major model updates.
- Enforcement mechanisms. Authorized developers must document deficiencies in corrective action plans and implement timely remediation. The program manager reviews responses and imposes consequences for noncompliance, which range from suspending federal use to revoking authorization.
3. Industry-Led AI Security Certification
Frontier AI Security Standards Organization (FASSO)
FASSO, a new industry consortium, establishes a certification program that enforces shared security standards across participating frontier AI developers, mitigating the competitive pressures that might otherwise discourage security investments. Operated under a multistakeholder governance structure, FASSO includes dedicated committees for standards, certification, and compliance. Participation is voluntary, but commitments are binding once a developer joins: Certification status is publicly displayed, and noncompliant developers must remediate or face sanctions, including decertification.
- Leadership and governance. Multistakeholder governance includes leading AI labs, security experts, and nonvoting government liaisons. Technical committees develop security standards, oversee compliance, resolve disputes, and accredit auditors. A central board coordinates these functions and oversees accountability actions. Although participation is voluntary, member developers must adhere to binding security obligations.
- Security requirements. Security standards are developed using technical expertise and input from consortium members to ensure that they are both rigorous and feasible. Controls are prescriptive where necessary while retaining flexibility and varied implementation options to accommodate diverse systems and evolving threats.
- Compliance verification. Developers must register models, conduct self-assessments, and undergo third-party audits that are reviewed by FASSO’s compliance committee. Certification status is public and maintained through continuous monitoring, reassessment, and incident reporting. Disputes go through impartial mediation.
- Enforcement mechanisms. Noncompliant developers must remediate deficiencies under close monitoring. Persistent noncompliance can trigger suspension, decertification, and public disclosure of violations.
4. Self-Regulation and Increased Government-Industry Collaboration
A fourth policy option emphasizes voluntary government-industry collaboration to advance security practices of AI developers in targeted areas rather than imposing a formal compliance regime.
The researchers identified several areas in which government involvement could provide unique value: establishing AI security standards, facilitating intelligence- and information-sharing, and expanding developers’ access to government penetration testing and personnel vetting programs.
Develop AI security standards. The National Institute of Standards and Technology (NIST), with input from industry, should develop technical security standards for frontier AI systems to fill gaps in existing frameworks (e.g., model weight security). By leveraging its consultative, consensus-driven approach, NIST can work closely with industry to incorporate security best practices and technical expertise. Such an effort could help establish norms for protecting frontier AI systems, ensure consistent implementation across the sector, and provide a foundation for regulatory or voluntary compliance efforts.
Formalize government–frontier AI lab intelligence- and information-sharing. To help frontier AI labs proactively strengthen their defenses, the federal government can expand information-sharing on AI-specific threats, vulnerabilities, and security best practices. Key actions include identifying labs’ priority information needs, designating a single federal liaison for the industry, enhancing intelligence collection on threats to AI labs, and streamlining classified intelligence-sharing. In turn, AI labs should share insights from their incidents and investigations, enabling the government to cross-check against its intelligence and generate new threat and vulnerability insights.
Support red-team evaluations and penetration testing. The federal government should provide penetration-testing services—similar to those already offered to the private sector by the National Security Agency, the Cybersecurity and Infrastructure Security Agency, and the military services—to simulate real-world adversarial attempts to exploit systems and test defenses. Although many labs conduct red teaming with in-house or third-party teams, federal teams can augment these services by bringing unique capabilities, such as access to classified threat intelligence, the ability to emulate nation-state actors, and experience running extended campaigns across complex networks.
Strengthen AI lab personnel vetting. The government could help AI labs reduce insider threats by supporting personnel vetting and suitability checks for sensitive employee roles. Options include extending national security clearance processes to select positions, conducting tailored background checks without issuing clearances, or screening applicants against federal security databases to flag potential risks. These approaches would draw on the government’s unique authorities and information while giving labs stronger tools to prevent insider threats.
Which Option Is Best?
Although multiple governance approaches can promote security among frontier AI labs, each involves trade-offs regarding the level of security achieved, the likelihood of compliance, and the burden placed on industry. Decisionmakers may pursue different options depending on priorities. These trade-offs are summarized in Table 1.
SAFE-AI: Strongest security standards and enforcement mechanisms but also the heaviest requirements. This option carries the strongest legal authority, backed by congressional legislation that gives regulators the authority to mandate and enforce compliance. It covers all developers meeting the threshold for high-risk AI systems and sets the highest security standards, offering the greatest potential to defend against nation-state threats. However, because it applies to all high-risk developers, it is likely to impose the most significant costs and burdens on the industry.
SecureAI: Voluntary participation limits coverage but reduces industry burden. Because this approach relies on federal procurement conditions for enforcement, it applies only to AI developers who voluntarily choose to do business with the federal government, potentially limiting its coverage. Because participation is voluntary, the regime may also be constrained in setting robust security requirements; overly stringent requirements could deter developer engagement. However, the opt-in nature of this model reduces the overall burden on industry and the risk of stifling innovation.
FASSO: Industry ownership with modest burden but weaker security standards and participation incentives. This option offers the weakest incentives for participation among the compliance regimes, relying on voluntary industry agreement to promote shared security standards. As a result, the regime may struggle to ensure consistent compliance and may feature more lenient security requirements to encourage adoption. However, industry leadership can foster a sense of ownership, potentially resulting in more meaningful adherence than a government-led model. In addition, direct industry involvement helps ensure that security requirements reflect real-world operational constraints and incorporate technical expertise to improve effectiveness.
Self-regulation and public-private collaboration: Limited burden but uneven security advancement. Because this option does not establish a formal compliance regime, it relies on market forces to incentivize AI developers to adopt security practices. As a result, security approaches and standards are likely to be uneven across the industry. However, this approach can still advance security in targeted areas and imposes no burdens on developers, thereby avoiding the risk of stifling innovation.
Selecting the appropriate approach should be guided primarily by the underlying rationale for the regime, the perceived level of risk posed by AI systems, and the extent to which market incentives are seen as sufficient to mitigate them. If decisionmakers believe that frontier AI could eventually pose catastrophic societal harms, government-led compliance regimes and strict security standards may be warranted. Conversely, a more measured view of AI’s risks may justify less burdensome frameworks, such as industry-led initiatives or voluntary public-private partnerships.