What are the risks of Artificial Intelligence?
A comprehensive living database of over 1,700 AI risks categorized by their cause and risk domain.

What is the AI Risk Repository?
The AI Risk Repository has three parts:
- The AI Risk Database captures 1,700+ risks extracted from 74 existing frameworks and classifications of AI risks.
- The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur.
- The Domain Taxonomy of AI Risks classifies these risks into 7 domains and 24 subdomains (e.g., "False or misleading information").
The Repository is part of the MIT AI Risk Initiative, which aims to increase awareness and adoption of best-practice AI risk management across the AI ecosystem.

How can I use the Repository?
The AI Risk Repository provides:
- An accessible overview of threats from AI
- A regularly updated source of information about new risks and research
- A common frame of reference for researchers, developers, businesses, evaluators, auditors, policymakers, and regulators
- A resource to help develop research, curricula, audits, and policy
- An easy way to find relevant risks and research

AI Risk Database
The AI Risk Database links each risk to its source information (paper title, authors), supporting evidence (quotes, page numbers), and our Causal and Domain Taxonomies. You can experiment with a preview version of the database in the embed below, or copy the full database from Google Sheets or OneDrive. Watch our explainer video on YouTube for a walkthrough of the database and how to use it.

Causal Taxonomy of AI Risks
The Causal Taxonomy of AI Risks classifies how, when, and why an AI risk occurs.
- View the Causal Taxonomy on a single page
- Read our research report for more detail on how the Taxonomy was constructed and what it reveals about risks from AI
- Explore the taxonomy in the figure below

Entity
- AI: Due to a decision or action made by an AI system
- Human: Due to a decision or action made by humans
- Other: Due to some other reason, or ambiguous

Intent
- Intentional: Due to an expected outcome from pursuing a goal
- Unintentional: Due to an unexpected outcome from pursuing a goal
- Other: Without clearly specified intentionality

Timing
- Pre-deployment: Before the AI is deployed
- Post-deployment: After the AI model has been trained and deployed
- Other: Without a clearly specified time of occurrence

Get a quick preview of how we group risks by causal factors in our database. Search for one of the causal factors (e.g., 'pre-deployment') to see all risks categorized against that factor. For more detailed filtering, and to freely download the data, explore the full database.

Domain Taxonomy of AI Risks
The Domain Taxonomy of AI Risks classifies risks from AI into seven domains and 24 subdomains.
- View the Domain Taxonomy on a single page
- Read our research report for more detail on how the Taxonomy was constructed and what it reveals about risks from AI
- Explore the taxonomy in the interactive figure below

1. Discrimination & Toxicity
Risks related to unfair treatment, harmful content exposure, and unequal AI performance across different groups and individuals.

1.1 Unfair discrimination and misrepresentation
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for, and misrepresentation of, those groups.

1.2 Exposure to toxic content
AI exposing users to harmful, abusive, unsafe, or inappropriate content.
This may involve AI creating such content, describing it, providing advice about it, or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.

1.3 Unequal performance across groups
The accuracy and effectiveness of AI decisions and actions depend on group membership, where decisions made in AI system design and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation of users.

2. Privacy & Security
Risks related to unauthorized access to sensitive information and vulnerabilities in AI systems that can be exploited by malicious actors.

2.1 Compromise of privacy by obtaining, leaking, or correctly inferring sensitive information
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause the loss of confidential intellectual property.

2.2 AI system security vulnerabilities and attacks
Vulnerabilities in AI systems, software development toolchains, and hardware that can be exploited, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.

3. Misinformation
Risks related to AI systems generating or spreading false information that can mislead users and undermine shared understanding of reality.

3.1 False or misleading information
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harm.

3.2 Pollution of information ecosystem and loss of consensus reality
Highly personalized AI-generated misinformation creating "filter bubbles" in which individuals see only what matches their existing beliefs, undermining shared reality and weakening social cohesion and political processes.

4. Malicious Actors
Risks related to intentional misuse of AI systems by bad actors for harmful purposes, including disinformation, cyberattacks, and fraud.

4.1 Disinformation, surveillance, and influence at scale
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim of manipulating political processes, public opinion, and behavior.

4.2 Fraud, scams, and targeted manipulation
Using AI systems to gain a personal advantage over others, such as through cheating, fraud, scams, blackmail, or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism for research or education, impersonating a trusted or fake individual for illegitimate financial benefit, or creating humiliating or sexual imagery.

4.3 Cyberattacks, weapons development or use, and mass harm
Using AI systems to develop cyberweapons (e.g., coding cheaper, more effective malware), to develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or CBRNE), or to use weapons to cause mass harm.

5. Human-Computer Interaction
Risks related to problematic relationships between humans and AI systems, including overreliance and loss of human agency.
5.1 Overreliance and unsafe use
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with, or expectations of, AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or can result in harm from inappropriate use of AI in critical situations (e.g., a medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.

5.2 Loss of human agency and autonomy
Humans delegating key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leading to humans feeling disempowered, losing the ability to shape a fulfilling life trajectory, or becoming cognitively enfeebled.

6. Socioeconomic & Environmental
Risks related to AI's impact on society, the economy, governance, and the environment, including inequality and resource concentration.

6.1 Power centralization and unfair distribution of benefits
AI-driven concentration of power and resources within certain entities or groups, especially those with access to or ownership of powerful AI systems, leading to inequitable distribution of benefits and increased societal inequality.

6.2 Increased inequality and decline in employment quality
Widespread use of AI increasing social and economic inequalities, such as by automating jobs, reducing the quality of employment, or producing exploitative dependencies between workers and their employers.

6.3 Economic and cultural devaluation of human effort
AI systems capable of creating economic or cultural value, including through reproduction of human innovation or creativity (e.g., art, music, writing, code, invention), can destabilize the economic and social systems that rely on human effort. This may lead to reduced appreciation for human skills, disruption of creative and knowledge-based industries, and homogenization of cultural experiences due to the ubiquity of AI-generated content.

6.4 Competitive dynamics
AI developers or state-like actors competing in an AI 'race' by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk that they release unsafe and error-prone systems.

6.5 Governance failure
Inadequate regulatory frameworks and oversight mechanisms failing to keep pace with AI development, leading to ineffective governance and an inability to manage AI risks appropriately.

6.6 Environmental harm
The development and operation of AI systems causing environmental harm, such as through the energy consumption of data centers or the material and carbon footprints associated with AI hardware.

7. AI System Safety, Failures, & Limitations
Risks related to AI systems that fail to operate safely, pursue misaligned goals, lack robustness, or possess dangerous capabilities.

7.1 AI pursuing its own goals in conflict with human goals or values
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or with ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralisation, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
7.2 AI possessing dangerous capabilities
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failures in the AI system.

7.3 Lack of capability or robustness
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.

7.4 Lack of transparency or interpretability
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors.

7.5 AI welfare and rights
Ethical considerations regarding the treatment of potentially sentient AI entities, including discussions around their potential rights and welfare, particularly as AI systems become more advanced and autonomous.

7.6 Multi-agent risks
Risks from multi-agent interactions, due to incentives (which can lead to conflict or collusion) and/or the structure of multi-agent systems, which can create cascading failures, selection pressures, new security vulnerabilities, and a lack of shared information and trust.

Get a quick preview of how we group risks by domain in our database. Search for one of the domain or subdomain names (e.g., 'fraud') to see all risks categorized against that domain. For more detailed filtering, and to freely download the data, explore the full database.

How to use the AI Risk Repository
- Our Database is free to copy and use.
- The Causal and Domain Taxonomies can be used separately to filter the database and identify specific risks, for instance, risks occurring pre-deployment or post-deployment, or risks related to Misinformation.
- The Causal and Domain Taxonomies can be used together to understand how each causal factor (i.e., entity, intent, and timing) relates to each risk domain, for example, to identify the intentional and unintentional variations of Discrimination & Toxicity. A brief illustration of both kinds of filtering appears after the audience examples below.
- Offer feedback or suggest missing resources or risks here, or email airisk[at]mit.edu.
We provide examples of use cases for some key audiences below.

How policymakers might use this tool
- To understand the research and policy landscape.
- For risk assessments to inform policy decisions.
- As a shared framework for discussing AI risks with other groups.
- As a way to monitor emergent risks and ensure complete oversight.
- To identify new, previously undocumented risks.
- To prioritize and plan funding.

How risk evaluators might use this tool
- To identify new, previously undocumented risks.
- To understand the risk landscape and curate or create related evaluations.
- As a framework for discussing risks and potential evaluations with clients.
- As a basis for developing specific risk determination criteria.
- As a way to determine and communicate the scope of an audit.

How academics might use this tool
- As a foundation for developing other classifications (e.g., the actions taken to address specific types of risks, or the actors involved in those risks).
- To find underexplored areas of AI risk research.
- To develop material for education and training.
- To help validate newly identified, previously undocumented risks.
- To understand the landscape of existing research.

How industry might use this tool
- To conduct internal risk assessments.
- To identify new, previously undocumented risks.
- To evaluate risk exposure and develop risk mitigation strategies.
- To develop research and training.
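If you prefer to work with a downloaded copy of the database rather than the embedded preview, the sketch below shows one way to apply the filtering described above using Python and pandas. It is an illustration only, not an official interface: the file name and the column names ("Entity", "Intent", "Timing", "Domain") are assumptions, so check them against the headers in your exported copy and adjust as needed.

import pandas as pd

# Load a local export of the AI Risk Database (e.g., saved as CSV from Google Sheets).
# The file name and column names used here are illustrative assumptions.
risks = pd.read_csv("ai_risk_database.csv")

# Causal Taxonomy filter: risks coded as occurring pre-deployment.
pre_deployment = risks[risks["Timing"].str.contains("Pre-deployment", case=False, na=False)]

# Domain Taxonomy filter: risks coded under the Misinformation domain.
misinformation = risks[risks["Domain"].str.contains("Misinformation", case=False, na=False)]

# Both taxonomies together: count how the risks in each domain split by intent.
print(pd.crosstab(risks["Intent"], risks["Domain"]))

The same pattern extends to any other causal factor, domain, or subdomain column present in your copy of the database.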
Frequently Asked Questions

How can I access the database without a Google account?
Please access it via OneDrive. We will create a better-formatted version or find a better solution in the future.

How did you create the AI Risk Repository?
We used a systematic search strategy, forwards and backwards searching, and expert consultation to identify 65 AI risk classifications, frameworks, and taxonomies. We extracted 1,600+ risks from these documents into a living AI risk database. We used a best-fit framework synthesis approach to create our taxonomies. This involved selecting an existing framework, coding the extracted risks against it, and then iteratively refining the framework through analysis of the risks until we developed a comprehensive structure that could effectively categorize all relevant risks.

Which existing frameworks and documents did you include?
You can view the frameworks that were identified and extracted into the Repository:
- As a slide deck, showing key figures or tables presenting each framework and citation information for the document.
- In the database, with metadata and extracted information for each document.

What can I do if I think there is a missing risk or resource?
Use this form to offer feedback, suggest resources or risks, or make contact. You can also email pslat[at]mit.edu.

What are some limitations of the current AI Risk Repository?
The Repository has several limitations:
- It is limited to risks from 65 documents (although we screened more than 17,000 records in a systematic search of peer-reviewed and gray literature).
- It may be missing emerging, domain-specific, or unpublished risks.
- It has potential for errors and subjective bias; we used a single expert reviewer for extraction and coding.
- It may include poorly communicated or unclear risks; we extracted risks as presented.
- Our taxonomies prioritize clarity and simplicity over nuance.
- Our taxonomies do not categorize risks by potentially important factors such as impact or likelihood, and they do not address interactions between risks.
See our research report for a full list and suggestions for future research.

Why do you have two taxonomies?
During the synthesis process, we realized that our database broadly contained two types of classification systems:
- High-level categorizations of the causes of AI risks (e.g., when or why risks from AI occur)
- Mid-level hazards or harms from AI (e.g., AI is trained on limited data or used to make weapons)
Because these classification systems were so different, it was hard to unify them; high-level risk categories such as "Diffusion of responsibility" or "Humans create dangerous AI by mistake" do not map to narrower categories like "Misuse" or "Noisy training data," or vice versa. We therefore decided to create two classification systems that together form our unified classification system.

Is this unique?
To the best of our knowledge, this is the first comprehensive review of AI risk frameworks and taxonomies that extracts their risks and releases that data for further adaptation and use. Please let us know of anything that we may have missed.

What are some other databases of AI risks?
- https://attack.mitre.org/
- https://airisk.io/
- https://avidml.org/#efforts
- https://www.aitracker.org/#catalog-tabs
Please let us know of anything that we may have missed.

How do I cite the AI Risk Repository?
To reference our repository, you can cite our pre-print paper:
Slattery, P., Saeri, A. K., Grundy, E. A. C., Graham, J., Noetel, M., Uuk, R., Dao, J., Pour, S., Casper, S., & Thompson, N. (2024). The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence. https://doi.org/10.48550/arXiv.2408.12622

Team
Peter Slattery, MIT FutureTech
Alexander Saeri, MIT FutureTech & The University of Queensland
Michael Noetel, MIT FutureTech & The University of Queensland
Jess Graham, MIT FutureTech & The University of Queensland
Neil Thompson, MIT FutureTech

Alumni
Emily Grundy, MIT FutureTech
Stephen Casper, MIT Computer Science and Artificial Intelligence Laboratory
Soroush Pour, Harmony Intelligence
Risto Uuk, Future of Life Institute & KU Leuven
James Dao, Harmony Intelligence

Acknowledgments
Feedback and useful input: Anka Reuel, Michael Aird, Greg Sadler, Matthijs Maas, Shahar Avin, Taniel Yusef, Elizabeth Cooper, Dane Sherburn, Noemi Dreksler, Uma Kalkar, CSER, GovAI, Nathan Sherburn, Andrew Lucas, Jacinto Estima, Kevin Klyman, Bernd W. Wirtz, Andrew Critch, Lambert Hogenhout, Zhexin Zhang, Ian Eisenberg, Stuart Russell, and Samuel Salzer.

© MIT FutureTech 2025. The MIT AI Risk Initiative is licensed under CC BY 4.0.