Securing artificial intelligence (AI) systems—particularly increasingly powerful, cutting-edge models—is critical. AI is transforming sectors from medicine and scientific research to finance and national defense. As these systems become more capable, interconnected, and embedded in high-impact workflows, the exploitation of their security vulnerabilities grows more consequential, and the systems themselves become more attractive targets for misuse.
This guide is a practical, risk-based resource for developers, security experts, and policy professionals navigating the AI security landscape. The guide addresses the security of AI systems broadly, regardless of system type or architecture. The accompanying tool provides tailored advice on which security controls to implement based on the answers the user provides. Building on industry best practices and expert insights, the guide and accompanying tool help users understand and manage the security risks associated with AI systems across their lifecycle—from design and development to deployment and operation.
The risk-based approach means that the guide and the accompanying tool focus on identifying and prioritizing risks specific to one's AI system and context; mapping proportional, layered security safeguards and controls to each phase of the AI lifecycle; and aligning with established security standards to support informed, consistent decisionmaking. The guide does not aim to replace compliance frameworks, legal advice, or expert consultation. Instead, it offers clear, actionable steps for reducing a system's most relevant risks and building a stronger security posture.
This publication is part of the RAND tool series. RAND tools include models, databases, calculators, computer code, GIS mapping tools, practitioner guidelines, web applications, and various other toolkits and applied research products. All RAND tools undergo rigorous peer review to ensure both high data standards and appropriate methodology in keeping with RAND’s commitment to quality and objectivity.
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit www.rand.org/pubs/permissions.
RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND’s publications do not necessarily reflect the opinions of its research clients and sponsors.

