Artificial Intelligence Risk Management Framework

After completing this reading, you should be able to:

  • Describe how organizations can frame the risks related to AI and explain the challenges that should be considered in AI risk management.
  • Identify AI actors across the AI lifecycle dimensions and describe how these actors work together to manage risks and achieve the goals of trustworthy and responsible AI.
  • Describe the characteristics of trustworthy AI and analyze the proposed guidance to address them.
  • Explain the potential benefits of periodically evaluating AI risk management effectiveness.
  • Describe specific functions applied to help organizations address the risks of AI systems in practice.

Framing Risks Related to AI and Challenges in AI Risk Management

Understanding and Addressing AI Risks, Impacts, and Harms

AI risk management is pivotal in minimizing potential negative impacts of AI systems, such as threats to civil liberties and rights, while also maximizing positive impacts. Effective risk management leads to more trustworthy AI systems and potential benefits for individuals, communities, society, organizations, and ecosystems. AI risk management involves understanding and addressing both the probability of events and the magnitude of their consequences. These impacts can be positive, negative, or both, and can result in opportunities or threats. The AI Risk Management Framework (AI RMF) is designed to address new risks as they emerge, particularly where impacts are not easily foreseeable and applications are evolving.
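
To make the combination of event probability and consequence magnitude concrete, here is a minimal, hedged sketch that scores a few hypothetical AI risks as likelihood × impact on simple 1–5 scales. The risk names, scales, and ratings are illustrative assumptions, not part of the AI RMF.

```python
# Minimal sketch: scoring hypothetical AI risks as likelihood x impact.
# The risk names, 1-5 scales, and ratings below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # A simple ordinal product; real programs may use richer scoring methods.
        return self.likelihood * self.impact


risks = [
    Risk("Training-data privacy exposure", likelihood=3, impact=4),
    Risk("Harmful bias in automated decisions", likelihood=2, impact=5),
    Risk("Model drift after deployment", likelihood=4, impact=3),
]

for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: likelihood={risk.likelihood}, impact={risk.impact}, score={risk.score}")
```

Sorting by the combined score is one simple way to surface which risks deserve attention first; the AI RMF itself does not prescribe any particular scoring scheme.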

Challenges in AI Risk Management

a) Risk Measurement

Measuring AI risks is challenging due to the often undefined or poorly understood nature of these risks. Complications arise from dependencies on third-party software, hardware, and data, which can accelerate development but also complicate risk measurement. There is also a current lack of consensus on robust and verifiable measurement methods for risk and trustworthiness, particularly across different AI use cases. Measuring risk at different stages of the AI lifecycle can yield different results, and the opaque nature of AI systems can further complicate risk measurement.

Measuring AI risks presents several unique challenges.

  • Risks from Third-Party Components: Third-party data, software, or hardware can accelerate AI research and development but also introduce complex risk measurement challenges. The integration of these components into AI products or services may not align with the organization’s risk metrics or methodologies. This misalignment can occur both in the development and deployment phases. Transparency issues about the risk metrics or methodologies used by the developer and the complexities of customer use or integration of third-party data or systems can further complicate risk measurement. It is essential for all parties involved in the development, deployment, or use of AI systems to manage these risks effectively, whether these AI systems are used as standalone or integrated components.
  • Tracking Emergent Risks: Proactively identifying and measuring emergent risks is critical. AI system impact assessment approaches can aid in understanding potential impacts or harms within specific contexts.
  • Availability of Reliable Metrics: The absence of consensus on robust, verifiable methods for measuring risk and trustworthiness is a significant challenge. Metrics developed within institutions often reflect institutional biases, and measurement approaches can be oversimplified, gamed, or lack critical nuance. They might be relied upon in unexpected ways or fail to account for differences in affected groups and contexts. Effective metrics should consider varying contexts, recognizing that harms may affect diverse groups differently and that the communities or sub-groups who may be harmed are not always direct users of the system.
  • Risk in Different Stages of the AI Lifecycle: Risks measured at different stages of the AI lifecycle can yield varying results. Some risks may be latent and increase as AI systems adapt and evolve. Different AI actors across the lifecycle may perceive risks differently. For instance, an AI developer who makes AI software available, such as pre-trained models, can have a different risk perspective than an AI actor who deploys that model in a specific use case.
  • Risk in Real-World Settings: Risks measured in controlled environments, such as laboratories, may differ significantly from those that emerge in operational, real-world settings.
  • Inscrutability of AI Systems: The opaque nature of AI systems, characterized by limited explainability or interpretability, lack of transparency in development or deployment, and inherent uncertainties, can make risk measurement particularly challenging.
  • Human Baseline: When managing the risks of AI systems, especially those designed to augment or take over tasks typically done by humans, such as decision-making, it’s important to have baseline human-performance metrics for comparison. Establishing these benchmarks is challenging, however, because AI systems not only handle a variety of tasks but also approach those tasks differently than humans do.
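
As a small illustration of the human-baseline challenge above, the sketch below compares a model’s task metric against a previously measured human benchmark on the same task. All numeric values and the acceptance margin are hypothetical assumptions.

```python
# Hypothetical comparison of an AI system's accuracy against a human baseline.
# Every value below is a placeholder used only for illustration.

human_baseline_accuracy = 0.88  # accuracy measured for human reviewers on the same task (assumed)
model_accuracy = 0.91           # accuracy measured on a held-out evaluation set (assumed)
required_margin = 0.02          # organization-specific acceptance margin (assumed)

if model_accuracy >= human_baseline_accuracy + required_margin:
    print("Model exceeds the human baseline by the required margin.")
elif model_accuracy >= human_baseline_accuracy:
    print("Model roughly matches the human baseline; review before relying on it.")
else:
    print("Model falls below the human baseline; escalate for risk review.")
```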

b) Risk Tolerance

Risk tolerance refers to an organization’s or AI actor’s readiness to bear risk in order to achieve its objectives. It is highly contextual, shaped by legal and regulatory requirements as well as organizational priorities, and it can vary among organizations and change over time as AI systems and societal norms evolve.

Challenges Associated with Determining Risk Tolerance

  • Contextual nature of risk tolerance: Risk tolerance is highly contextual, varying significantly with the application, use case, and legal and regulatory requirements. This variability makes it challenging to establish a one-size-fits-all threshold or guideline for acceptable risk levels.
  • Influence of external factors: Various external factors, such as organizational priorities, industry norms, and policies established by AI system owners or policymakers, can influence risk tolerances. These factors are often dynamic and evolve with changes in AI systems, societal norms, and regulatory landscapes.
  • Evolving standards: As AI technologies and their societal impacts evolve, so do the standards and norms for acceptable risk levels. This constant evolution means that risk tolerances are a moving target, requiring continual reassessment and adaptation.
  • Lack of established guidelines: In many cases, especially with emerging AI applications, there are no established guidelines or consensus on acceptable levels of risk. This absence leaves organizations to define their own risk tolerances, often without a clear benchmark or precedent.
  • Diverse organizational perspectives: Different organizations may have varied risk tolerances due to their unique priorities, resources, and operational contexts. What is acceptable risk for one organization might be unacceptable for another.
  • Emerging knowledge and methods: As the field of AI continues to grow, new knowledge and methods for evaluating harm/cost-benefit trade-offs are being developed. However, these are still subjects of debate and development, adding complexity to specifying AI risk tolerances.

The AI RMF is designed to be flexible and to augment existing risk practices. While it can help prioritize risks, it does not prescribe a specific risk tolerance level. Instead, organizations are advised to align their risk criteria, tolerance, and response with applicable laws, regulations, and norms. This approach accommodates the various challenges of defining and managing risk tolerance in AI.

c) Risk Prioritization

Risk prioritization in AI involves a strategic allocation of resources to manage potential risks associated with AI systems. This process is crucial because unrealistic expectations about completely eliminating risk can lead to inefficient use of resources and impractical risk management strategies. Recognizing that not all AI risks are equal is key to a productive risk management culture.

Key Aspects of Risk Prioritization include:

  • Assessing trustworthiness: Each AI system developed or deployed by an organization needs to be assessed for trustworthiness. This assessment helps in determining the level of risk associated with each system.
  • Prioritizing based on assessed risk level and impact: Policies and resources must be prioritized according to the assessed risk level and the potential impact of the AI system, including the extent to which the system is customized or tailored to a specific context of use. Systems with higher risks or potential for significant impact should receive more urgent attention and more comprehensive risk management efforts (a minimal prioritization sketch follows this list).
  • Contextual assessment: Continual assessment and prioritization of risks based on the context are essential. Even AI systems that do not directly interact with humans can have significant downstream safety or social implications.
  • Managing residual risk: Residual risk, defined as the risk remaining after risk treatment, directly affects end-users and communities. It is imperative for system providers to fully consider and document these risks, as this information is crucial for end-users to understand the potential negative impacts of interacting with the system.
  • Integration into broader risk management: AI risk management should not be isolated but integrated into broader enterprise risk management strategies. This holistic approach ensures that AI risks are managed in conjunction with other critical organizational risks like cybersecurity and privacy.
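
As referenced above, one way to operationalize this prioritization is to map each system’s assessed risk level and potential impact to a coarse priority tier and to record the residual risk that remains after treatment. The tier thresholds, system names, and ratings in this sketch are assumptions made for illustration only.

```python
# Illustrative sketch: assigning priority tiers from assessed risk and impact,
# and recording residual risk after treatment. Thresholds and ratings are assumptions.

def priority_tier(assessed_risk: int, impact: int) -> str:
    """Map 1-5 ratings to a coarse priority tier using illustrative thresholds."""
    score = assessed_risk * impact
    if score >= 15:
        return "high"    # urgent attention, comprehensive risk treatment
    if score >= 8:
        return "medium"  # scheduled treatment and routine monitoring
    return "low"         # accept or monitor

systems = [
    {"name": "credit-scoring model", "assessed_risk": 4, "impact": 5, "residual_risk": 2},
    {"name": "internal search ranker", "assessed_risk": 2, "impact": 2, "residual_risk": 1},
]

for system in systems:
    tier = priority_tier(system["assessed_risk"], system["impact"])
    name = system["name"]
    residual = system["residual_risk"]
    print(f"{name}: priority tier = {tier}, documented residual risk = {residual}")
```

Documenting the residual risk alongside the priority tier keeps end users informed about the negative impacts that remain after treatment.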

d) Organizational Integration and Management of Risk

Managing risks associated with AI should not be an isolated process. Various players in the AI field, such as developers, users, and regulatory bodies, have distinct roles and varying levels of understanding about potential risks throughout the AI system’s lifecycle. For example, organizations that develop AI systems might not fully grasp how these systems will be used in practice. It’s crucial to integrate AI risk management with broader enterprise risk management strategies to ensure a holistic approach. By doing so, AI risks are addressed alongside other critical areas like cybersecurity and privacy, leading to more comprehensive risk management and organizational efficiency.

The AI RMF should be used in conjunction with other relevant guidelines and frameworks to effectively manage both AI-specific and broader enterprise risks. Some risks associated with AI systems are common to other types of software development and deployment, such as privacy issues related to training data, environmental impacts of high computing demands, security concerns for system data and integrity, and the security of the underlying software and hardware. To effectively manage these risks, organizations need to establish robust accountability mechanisms, clear roles and responsibilities, a supportive culture, and appropriate incentive structures. This commitment to risk management must come from the senior levels of an organization and may require a cultural shift within the organization or industry. Furthermore, smaller organizations might face unique challenges compared to larger ones due to differences in capabilities and resources.

The AI Lifecycle

The lifecycle of an AI system is a comprehensive process that involves several key stages, each playing a critical role in the development, deployment, and management of AI technologies. Understanding these stages is essential for ensuring that AI systems are designed and operated effectively, responsibly, and ethically. The stages in the AI lifecycle include:

  1. Plan and design:
    • This stage is foundational, where the purpose and scope of the AI system are defined.
    • It involves identifying the problem to be solved, determining the objectives, and outlining the design of the AI system.
    • Considerations for ethical, legal, and regulatory compliance are also integrated at this stage.
  2. Collect and process data:
    • Data is the cornerstone of any AI system, and this stage focuses on gathering and preparing the necessary data.
    • It involves data collection, cleaning, and preprocessing to ensure quality and relevance for the AI model.
    • Issues of data privacy, security, and bias are also addressed during this phase.
  3. Build and use model:
    • Here, the actual development of the AI model occurs, using the prepared data.
    • This includes selecting algorithms, training the model, and tuning parameters to optimize performance.
    • The model is tested using subsets of data to validate its accuracy and effectiveness.
  4. Verify and validate:
    • This stage ensures that the AI model meets the required specifications and behaves as expected.
    • Verification involves checking the model against the design specifications, while validation involves testing the model in real-world scenarios.
    • This stage is crucial for assessing the reliability, safety, and fairness of the AI model.
  5. Deploy and use:
    • Deployment involves integrating the AI model into its intended environment or application.
    • This stage requires careful planning to ensure seamless integration and minimal disruption.
    • User training and the development of support mechanisms are also part of this phase.
  6. Operate and monitor:
    • Once deployed, the AI system enters into an operational phase where continuous monitoring is essential.
    • This includes performance tracking, maintenance, and updating the system as needed.
    • Monitoring also ensures that the system adheres to ethical standards and regulatory requirements over time.

Each of these stages is interconnected, with feedback loops that allow for continuous improvement and adaptation of the AI system. The lifecycle approach is fundamental to developing AI systems that are not only effective but also align with ethical guidelines, legal standards, and societal expectations.
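
To ground the "build and use model" and "verify and validate" stages, here is a minimal, hedged sketch using scikit-learn: data are split, a model is trained, and it is validated against a specification on held-out data before any deployment decision. The bundled dataset, the model choice, and the acceptance threshold are assumptions made for illustration.

```python
# Minimal sketch of the "build and use model" and "verify and validate" stages.
# The dataset, model choice, and acceptance threshold are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Collect and process data: a bundled dataset stands in for real data work here.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Build and use model: select an algorithm, train it, and tune as needed.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Verify and validate: check behavior on held-out data against a specification.
accuracy = accuracy_score(y_test, model.predict(X_test))
ACCEPTANCE_THRESHOLD = 0.90  # assumed requirement set during the plan-and-design stage
print(f"Held-out accuracy: {accuracy:.3f}")
print("Meets specification" if accuracy >= ACCEPTANCE_THRESHOLD else "Fails specification")
```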

Actors

In the realm of AI development and management, a diverse array of actors plays critical roles across various stages of the AI lifecycle. These actors, ranging from data scientists to legal experts, collaborate to ensure that AI systems are developed, deployed, and managed responsibly, ethically, and effectively. Understanding the distinct roles and collaborative dynamics of these AI actors is essential for managing risks and achieving the goals of trustworthy and responsible AI. Their coordinated efforts across different lifecycle stages, from design to operation, form the backbone of successful AI implementation and governance. The following sections delve into the specific tasks and collaborations of these actors across the AI lifecycle dimensions.

AI design phase

  • Tasks: AI Design tasks occur during the Application Context and Data and Input phases. They include creating the AI system’s concept and objectives, planning and designing the system, data collection, and processing.
  • Actors: This category involves data scientists, domain experts, socio-cultural analysts, diversity and inclusion experts, human factors experts (e.g., UX/UI design), governance experts, data engineers, system funders, product managers, third-party entities, evaluators, and legal and privacy governance personnel.
  • Collaboration: These actors collaborate to ensure the AI system is lawful, fit-for-purpose, and meets the required ethical and privacy standards. They work together to articulate the system’s concept, gather and clean data, and document dataset characteristics.

AI development phase

  • Tasks: This phase involves the creation, selection, calibration, training, and testing of AI models or algorithms.
  • Actors: Machine learning experts, data scientists, developers, third-party entities, and legal and privacy governance experts are pivotal in this phase.
  • Collaboration: These actors provide the infrastructure for AI systems. They collaborate on model building and interpretation, ensuring that the AI system aligns with socio-cultural and contextual factors of the deployment setting.

AI deployment phase

  • Tasks: AI Deployment tasks include piloting the system, checking compatibility with existing systems, ensuring regulatory compliance, managing organizational change, and evaluating user experience.
  • Actors: System integrators, software developers, end users, operators, practitioners, evaluators, and domain experts specializing in human factors, socio-cultural analysis, and governance.
  • Collaboration: Their collaboration assures the AI system’s deployment into production, addressing aspects like user experience, system integration, and regulatory compliance.

Operation and monitoring phase

  • Tasks: This involves operating the AI system and regularly assessing its output and impacts.
  • Actors: System operators, domain experts, AI designers, users, product developers, evaluators, auditors, compliance experts, organizational management, and research community members.
  • Collaboration: These actors work together to maintain and monitor the AI system, ensuring it functions as intended and adheres to compliance and ethical standards.

Test, evaluation, verification, and validation (TEVV) tasks

  • Tasks: TEVV tasks are integral throughout the AI lifecycle, encompassing internal and external validation of system design assumptions, model validation and assessment, system validation, and integration in production.
  • Actors: Ideally, the actors carrying out TEVV tasks are distinct from those who design and develop the AI system, which supports independent assessment.
  • Collaboration: These actors play a critical role in ensuring the AI system’s reliability and effectiveness. They collaborate to perform model validation, system testing, compliance checks, and ongoing monitoring for updates, tracking incidents, and managing emergent properties.
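
TEVV activities are often automated as repeatable checks. The hedged sketch below frames two such checks as pytest-style tests: one validating a design assumption about the input data, and one validating a minimum performance requirement. The helper functions, synthetic data, and threshold are hypothetical stand-ins, not part of the framework.

```python
# Hedged sketch: expressing TEVV checks as automated tests (pytest style).
# `load_validation_data` and `load_candidate_model` are hypothetical helpers
# standing in for an organization's own versioned data and model artifacts.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score


def load_validation_data():
    # Placeholder for a real, versioned validation dataset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)
    return X, y


def load_candidate_model():
    # Placeholder for the model under validation (here trained on the same data).
    X, y = load_validation_data()
    return LogisticRegression(max_iter=1000).fit(X, y)


def test_inputs_have_no_missing_values():
    # Validates a design assumption: the pipeline never passes missing values.
    X, _ = load_validation_data()
    assert not np.isnan(X).any()


def test_model_meets_minimum_accuracy():
    # Validates a performance requirement (the 0.9 threshold is an assumption).
    X, y = load_validation_data()
    model = load_candidate_model()
    assert accuracy_score(y, model.predict(X)) >= 0.9
```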

Characteristics of Trustworthy AI and Proposed Guidance

Trustworthy AI is an essential goal for organizations developing and deploying AI systems. Such systems must be responsive to various criteria valued by interested parties, and addressing these criteria can reduce negative AI risks. The key characteristics of trustworthy AI include:

  • Validity and reliability
  • Safety
  • Security and resilience
  • Accountability and transparency
  • Explainability and interpretability
  • Privacy-enhancement
  • Fairness with managed harmful bias

Let’s now delve into each in detail:

  • Valid and reliable: Validation refers to confirming that AI systems meet the requirements for their intended use. Reliability means the system can perform as required under expected conditions for a given time. Inaccurate or unreliable AI systems can increase negative risks and reduce trustworthiness. Ongoing testing or monitoring is essential to confirm that an AI system is performing as intended.
  • Safe: AI systems should not endanger human life, health, property, or the environment under defined conditions. Safety considerations should be integrated throughout the AI lifecycle, starting from planning and design. This integration can prevent failures or dangerous conditions. Safety risk management in AI should align with guidelines from sectors like transportation and healthcare.
  • Secure and resilient: Resilience in AI systems is their ability to withstand or adapt to adverse events or changes. Security involves maintaining confidentiality, integrity, and availability against unauthorized access and use. Security includes resilience but also encompasses protocols to protect against, respond to, or recover from attacks. Resilience is about maintaining function in the face of challenges.
  • Accountable and transparent: Accountability in AI presupposes transparency, which is the availability of information about the AI system to those interacting with it. Transparency spans from design decisions to deployment, influencing confidence in the AI system. It’s necessary for actionable redress related to incorrect outputs or negative impacts. Enhancing transparency should consider the impact on the implementing entity, including resource levels and proprietary information protection.
  • Explainable and interpretable: Explainability is about how AI systems operate, while interpretability concerns the meaning of the system’s outputs. These characteristics help users and overseers understand the AI system’s functionality and trustworthiness. They also facilitate debugging, monitoring, and governance.
  • Privacy-enhanced: Privacy in AI involves norms and practices that protect human autonomy, identity, and dignity. Privacy values should guide AI system design, development, and deployment. Privacy-enhancing technologies (PETs) can be used, but they might have trade-offs, such as reduced accuracy affecting fairness.
  • Fair – with harmful bias managed: Fairness addresses equality and equity in AI, managing harmful bias and discrimination. It’s a complex concept, varying across cultures and applications. AI bias can be systemic, computational and statistical, or human-cognitive. Managing bias is crucial for fairness and transparency in AI systems.
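
As one concrete illustration of managing harmful bias, the hedged sketch below computes a demographic parity difference (the gap in positive-outcome rates between two groups) for hypothetical model outputs. The data and the 0.1 reporting threshold are assumptions; real fairness work relies on richer, context-specific metrics and analysis.

```python
# Hedged sketch: demographic parity difference for hypothetical predictions.
# The prediction array, group labels, and reporting threshold are assumptions.

import numpy as np

predictions = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 1])  # hypothetical model decisions
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = predictions[groups == "a"].mean()
rate_b = predictions[groups == "b"].mean()
parity_difference = abs(rate_a - rate_b)

print(f"Positive rate, group a: {rate_a:.2f}")
print(f"Positive rate, group b: {rate_b:.2f}")
print(f"Demographic parity difference: {parity_difference:.2f}")

if parity_difference > 0.1:  # assumed reporting threshold
    print("Flag for bias review and context-specific analysis.")
```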

Balancing trustworthiness characteristics

Balancing these characteristics often involves trade-offs, and not all characteristics apply equally in every setting. Decisions about trade-offs depend on the values in the relevant context and should be transparent and justifiable. Different AI actors may perceive trustworthiness characteristics differently depending on their role in the AI lifecycle. Involving subject matter experts and diverse inputs throughout the AI lifecycle can inform evaluations and identify benefits and positive impacts of AI systems.

In conclusion, creating trustworthy AI involves a balanced approach to these characteristics, tailored to the specific context of use. Addressing them collectively and contextually is essential for managing risks and ensuring the AI system’s trustworthiness and societal acceptance.

Benefits of Periodically Evaluating AI Risk Management Effectiveness

Periodic evaluations of the AI Risk Management Framework (AI RMF) effectiveness are crucial in enhancing the management of AI risks and improving the trustworthiness of AI systems. These evaluations offer several key benefits to organizations and framework users.

  • Enhanced AI risk governance processes: Periodic evaluations lead to the development of more refined processes for governing, mapping, measuring, and managing AI risks. They facilitate the clear documentation of outcomes, thereby making the process of AI risk management more transparent and accountable. This enhancement in governance processes is vital for organizations to systematically approach AI risks.
  • Increased awareness of trustworthiness and risks: Through regular evaluations, there is an improved awareness of the relationships and tradeoffs among trustworthiness characteristics, socio-technical approaches, and AI risks. Understanding these relationships is crucial for balancing various aspects such as fairness, privacy, and security in AI systems. This increased awareness aids organizations in making informed decisions about AI deployment and usage.
  • Improved decision-making processes: Evaluations provide explicit processes for informed decision-making regarding the commissioning or deployment of AI systems. These processes are essential in assessing the readiness and suitability of AI systems for real-world applications, ensuring that deployment decisions are made with a thorough understanding of potential risks and benefits.
  • Strengthened organizational accountability: Regular evaluations establish better policies, processes, and procedures for organizational accountability in relation to AI system risks. This heightened accountability is key to an organization’s ability to handle AI-related challenges responsibly and effectively.
  • Fostering a risk-aware organizational culture: These evaluations contribute to fostering a culture within organizations that prioritizes identifying and managing AI system risks and their potential impacts. Cultivating such a risk-aware culture is fundamental for ensuring that AI systems are used in ethical and responsible ways.
  • Improved information sharing: Periodic evaluations facilitate better information sharing within and across organizations about AI risks, decision-making processes, responsibilities, and best practices. Sharing such information is critical for collective learning, avoiding common pitfalls, and enhancing the overall understanding of AI risks.
  • Enhanced contextual knowledge: Evaluations provide greater contextual knowledge, thereby increasing awareness of downstream risks associated with AI systems. This knowledge is vital for comprehensively understanding the broader implications and potential impacts of AI systems in various sectors and domains.
  • Strengthened stakeholder engagement: The process of evaluations strengthens engagement with interested parties and relevant AI actors. This fosters a collaborative environment where ideas and experiences are exchanged, leading to more well-rounded and robust AI systems.
  • Improved TEVV capabilities: Finally, periodic evaluations augment the capacity for Test, Evaluation, Verification, and Validation (TEVV) of AI systems and associated risks. Enhanced TEVV capabilities ensure that AI systems are thoroughly vetted for safety, reliability, and effectiveness before deployment.

Functions to Address Risks of AI Systems

The AI Risk Management Framework (AI RMF) outlines several key functions that organizations can employ to address the risks associated with AI systems effectively. These functions are:

GOVERN function

  • The GOVERN function is designed to cultivate and implement a culture of risk management within organizations designing, developing, deploying, evaluating, or acquiring AI systems.
  • It outlines processes, documents, and organizational schemes that anticipate, identify, and manage the risks a system can pose, including impacts on users and society.
  • This function incorporates processes to assess potential impacts and provides a structure for AI risk management functions to align with organizational principles, policies, and strategic priorities.
  • It connects technical aspects of AI system design and development to organizational values and principles and addresses the full product lifecycle, including legal and other issues concerning the use of third-party software or hardware systems and data.
  • GOVERN is a cross-cutting function, designed to inform and be infused throughout the other three functions (MAP, MEASURE, and MANAGE).

MAP function

  • The MAP function establishes the context to frame risks related to an AI system.
  • It recognizes that the AI lifecycle involves many interdependent activities with a diverse set of actors. This diversity can make it challenging to reliably anticipate the impacts of AI systems.
  • Early decisions in identifying the purposes and objectives of an AI system can alter its behavior and capabilities, and the dynamics of the deployment setting can shape the impacts of AI system decisions.
  • The MAP function acknowledges that the best intentions within one dimension of the AI lifecycle can be undermined by interactions with decisions and conditions in other activities.

MEASURE function

  • The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.
  • It uses knowledge relevant to AI risks identified in the MAP function and informs the MANAGE function.
  • AI systems should be tested before their deployment and regularly while in operation. AI risk measurements include documenting aspects of systems’ functionality and trustworthiness.
  • Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI configurations. This function should include rigorous software testing and performance assessment methodologies.
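
As a small, hedged example of the ongoing measurement described above, the sketch below tracks a deployed model’s accuracy over a rolling window and flags degradation beyond an assumed tolerance, which would then feed the MANAGE function. The baseline, tolerance, window size, and batch values are illustrative assumptions.

```python
# Hedged sketch: monitoring a deployed model's accuracy and flagging degradation.
# The baseline, tolerance, window size, and incoming values are assumptions.

from collections import deque

BASELINE_ACCURACY = 0.92   # accuracy measured during verification (assumed)
TOLERANCE = 0.05           # allowed degradation before escalation (assumed)
window = deque(maxlen=50)  # rolling window of recent per-batch accuracies


def record_batch_accuracy(accuracy: float) -> None:
    """Record a new measurement and escalate if the rolling mean degrades."""
    window.append(accuracy)
    rolling_mean = sum(window) / len(window)
    if rolling_mean < BASELINE_ACCURACY - TOLERANCE:
        print(f"ALERT: rolling accuracy {rolling_mean:.3f} is below tolerance; "
              "trigger a MANAGE-function response and review.")


# Example usage with hypothetical per-batch accuracies:
for batch_accuracy in [0.93, 0.91, 0.90, 0.84, 0.82, 0.80]:
    record_batch_accuracy(batch_accuracy)
```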

MANAGE function

  • The MANAGE function involves allocating risk resources to mapped and measured risks regularly, as defined by the GOVERN function.
  • Risk treatment includes plans to respond to, recover from, and communicate about incidents or events.
  • Contextual information gleaned from expert consultation and input from relevant AI actors, established in GOVERN and carried out in MAP, is utilized in this function to decrease the likelihood of system failures and negative impacts.
  • Systematic documentation practices established in GOVERN and utilized in MAP and MEASURE bolster AI risk management efforts and increase transparency and accountability.
  • The MANAGE function includes processes for assessing emergent risks and mechanisms for continual improvement.

These functions provide a comprehensive approach to managing AI risks and emphasize the need for a holistic, informed, and iterative process to address the complex and dynamic nature of AI systems and their impacts.

Practice Question

Which of the following statements is most likely correct?

  A. AI systems are less complex than traditional software systems.
  B. AI systems do not require periodic re-evaluation for risk management.
  C. All AI risks are readily measurable and predictable.
  D. It’s unrealistic to assume all AI risks can be eliminated.

The correct answer is D.

One of the key challenges in AI risk management is the unrealistic expectation that all AI risks can be completely eliminated. This misconception can lead organizations to allocate resources inefficiently, making risk triage impractical or wasting scarce resources. A more realistic approach recognizes that not all AI risks are the same and that resources should be allocated based on the assessed risk level and potential impact of an AI system. Understanding and accepting that some level of risk is inherent, and focusing on prioritizing and managing the most significant risks, is crucial.

A is incorrect because AI systems are often more complex than traditional software systems. This complexity arises from factors like increased opacity, underdeveloped testing standards, and the challenges in predicting or detecting AI system side effects.

B is incorrect, as periodic re-evaluation is essential in AI risk management to adapt to new developments and emerging risks. AI systems and their operational environments can change over time, necessitating ongoing reassessment.

C is incorrect because AI risks are not always readily measurable and predictable. Challenges in risk measurement arise from factors like the inscrutability of AI systems, varying performance in different contexts, and the complexity of integrating AI with existing systems and processes.

 
