
NIST's Artificial Intelligence (AI) Risk Management Framework (AI RMF) enables organizations that design, develop, deploy, or use AI systems to incorporate comprehensive AI Testing, Evaluation, Validation, and Verification (TEVV) practices, thereby managing the many risks of AI and promoting trustworthy and responsible development and use of AI systems.
The AI RMF functions (GOVERN, MAP, MEASURE, MANAGE) can be applied to fit the interests and needs of organizations of all sizes and in all sectors.
GOVERN
The GOVERN function ensures that policies, processes, procedures, and practices across the organization related to mapping, measuring, and managing AI risks are in place, transparent, and implemented effectively.
MAP
The MAP function establishes the context to frame risks related to an AI system. The AI lifecycle consists of many interdependent activities involving a diverse set of actors. The information gathered while carrying out the MAP function enables negative risk prevention and informs decisions about processes such as model management, as well as an initial determination of the appropriateness of, or need for, an AI solution.
MEASURE
The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts.
MANAGE
The MANAGE function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the GOVERN function. Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events.
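The interplay of the four functions can be sketched as a minimal risk-management loop: GOVERN sets policy, MAP frames risks, MEASURE scores them, and MANAGE allocates treatment. Everything below (class names, the 0.5 policy threshold, the "mitigate" treatment label) is an illustrative assumption, not anything defined by NIST.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    context: str          # framed by MAP
    score: float = 0.0    # assigned by MEASURE
    treatment: str = ""   # assigned by MANAGE

def govern_policy(risk: Risk) -> bool:
    # GOVERN: an organizational policy decides which risks need treatment.
    # The 0.5 threshold is an arbitrary stand-in for a real policy.
    return risk.score >= 0.5

def map_risks(contexts: list[str]) -> list[Risk]:
    # MAP: establish context and enumerate risks for an AI system.
    return [Risk(name=f"risk-{i}", context=c) for i, c in enumerate(contexts)]

def measure(risks: list[Risk], scores: dict[str, float]) -> None:
    # MEASURE: attach quantitative scores (stand-ins for real TEVV metrics).
    for r in risks:
        r.score = scores.get(r.name, 0.0)

def manage(risks: list[Risk]) -> list[Risk]:
    # MANAGE: allocate risk treatment to risks that policy flags.
    treated = []
    for r in risks:
        if govern_policy(r):
            r.treatment = "mitigate"
            treated.append(r)
    return treated
```

For example, mapping two hypothetical contexts, scoring them, and managing the result flags only the risk whose score clears the policy threshold.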
| AID | Function | Category | Guidance | Description | Recommendations | Documentation | Tasks | Reference(s) |
|---|---|---|---|---|---|---|---|---|
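A row of the table above can be modeled as a typed record; the field names mirror the column headers, while the types (and the idea that Tasks and Reference(s) hold lists) are assumptions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the table columns; field types are assumptions.
@dataclass
class FrameworkEntry:
    aid: str                  # AID: identifier for the entry
    function: str             # GOVERN, MAP, MEASURE, or MANAGE
    category: str
    guidance: str
    description: str
    recommendations: str
    documentation: str
    tasks: list[str] = field(default_factory=list)
    references: list[str] = field(default_factory=list)
```

Structuring entries this way makes it straightforward to filter guidance by function or category when building tooling around the framework.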
