Artificial intelligence is fundamentally transforming business – and bringing new risks that traditional IT security frameworks don’t cover. The NIST AI Risk Management Framework (AI RMF) closes this gap with a structured approach to responsible AI governance.
What is the NIST AI RMF?
The AI Risk Management Framework was published on January 26, 2023, by the National Institute of Standards and Technology (NIST). It is a voluntary framework that helps organizations integrate trustworthiness and responsibility into the development, deployment, and evaluation of AI systems.
Unlike regulatory mandates such as the EU AI Act, the NIST AI RMF is not mandatory – but it provides a practical framework that is increasingly regarded as the de facto standard for AI risk management.
The Four Core Functions
The framework organizes AI risk management around four functions:
1. Govern
Establish organizational structures, policies, and processes for responsible AI use. This includes clear accountability, governance structures, and an AI risk strategy at the leadership level.
2. Map
Systematically identify and categorize AI risks. Which AI systems are in use? In what context? Who is affected? What data is involved? This phase creates transparency across your AI landscape.
3. Measure
Quantify and evaluate identified risks. This includes assessing model accuracy, bias, robustness, and explainability – with defined metrics and thresholds.
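To make "defined metrics and thresholds" concrete, here is a toy sketch of what a Measure-style check might look like in practice: accuracy plus one simple fairness metric (the demographic parity gap) compared against thresholds. The data, metric choice, and threshold values are invented for illustration – real programs would use their own metrics and governance-approved limits.

```python
# Hypothetical predictions, labels, and a protected attribute for a toy eval set.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def demographic_parity_gap(preds, group):
    """Absolute difference in positive-prediction rates between two groups."""
    def rate(g):
        members = [p for p, gr in zip(preds, group) if gr == g]
        return sum(members) / len(members)
    return abs(rate("A") - rate("B"))

# Illustrative thresholds an organization might define up front.
ACC_MIN, GAP_MAX = 0.7, 0.2

acc = accuracy(preds, labels)
gap = demographic_parity_gap(preds, group)
print(f"accuracy={acc:.2f} (min {ACC_MIN}), parity gap={gap:.2f} (max {GAP_MAX})")
```

On this toy data the model clears the accuracy threshold but fails the fairness threshold – exactly the kind of finding the Measure function is meant to surface before the Manage function decides on mitigation.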
4. Manage
Implement concrete risk mitigation measures: monitoring, incident response, regular reviews, and continuous improvement of deployed AI systems.
Generative AI Profile (NIST AI 600-1)
With the explosive adoption of Large Language Models (LLMs) and generative AI, NIST released the Generative AI Profile (AI 600-1) in July 2024. This companion document addresses risks specific to generative AI:
- Content Provenance – Traceability of AI-generated content origins
- Hallucinations – Handling factually incorrect AI outputs
- Data Privacy – Protecting personal data in training data and outputs
- Intellectual Property – Copyright risks with generated content
- CBRN Risks – Misuse potential for chemical, biological, radiological, and nuclear information
The profile defines over 200 additional actions specifically tailored to LLM and GenAI risks.
2025 Updates: New Threat Categories
The 2025 updates expand the framework with practically relevant focus areas:
- Model Provenance – Traceability of model origin and training history, especially for open-source and third-party models
- Data Integrity – Ensuring the quality and authenticity of training data
- Third-Party Model Assessment – Evaluation of externally sourced AI models and components
- Extended Threat Categories – Poisoning attacks (corrupting training data), evasion attacks (bypassing model logic), data extraction (extracting training data), and model manipulation
Particularly relevant: most organizations don't train their own models but rely on external or open-source components. The new third-party assessment requirements address exactly this scenario.
Relevance for Swiss Organizations
While the NIST AI RMF is a US framework, it has direct relevance for Swiss organizations:
- International Recognition – The framework is referenced globally as a benchmark for AI governance and is compatible with the EU AI Act
- nDSG Complement – The Swiss Data Protection Act regulates personal data handling but is largely silent on AI-specific risks like bias or hallucinations. The NIST AI RMF fills this gap
- FINMA Requirements – Regulated financial institutions using AI benefit from a structured risk management approach
- Audit Readiness – A documented AI risk management approach based on NIST standards strengthens your position in internal and external audits
Practical Implementation: Where to Start?
Getting started doesn’t have to be complex. Three pragmatic steps to begin:
- Create an AI inventory – What AI systems and tools are used across the organization? From ChatGPT to Copilot to industry-specific ML models. Many organizations lack a complete overview
- Conduct a risk assessment – For each identified system: What data does it process? What decisions does it influence? What happens when it malfunctions? The NIST Map and Measure functions provide a clear structure
- Define governance – Who is responsible? What usage policies apply? How are incidents handled? The Govern function of the framework provides the structure
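The three steps above can be sketched as a minimal data structure: an inventory entry per system (Map), a crude triage rule (Measure), and a named owner (Govern). Field names, risk tiers, and the triage logic are illustrative assumptions, not NIST terminology.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str
    data_categories: list   # e.g. ["customer PII", "internal documents"]
    decisions_influenced: str  # empty string if the system makes no decisions
    owner: str              # accountable person, per the Govern function

def risk_tier(system: AISystem) -> str:
    """Crude triage: personal data plus decision impact => high."""
    handles_pii = any("PII" in c for c in system.data_categories)
    if handles_pii and system.decisions_influenced:
        return "high"
    if handles_pii or system.decisions_influenced:
        return "medium"
    return "low"

inventory = [
    AISystem("ChatGPT (web)", "OpenAI", ["internal documents"], "", "IT lead"),
    AISystem("Credit scoring model", "in-house", ["customer PII"],
             "loan approvals", "Risk officer"),
]

for s in inventory:
    print(f"{s.name}: {risk_tier(s)} (owner: {s.owner})")
```

Even a spreadsheet-level inventory like this answers the key questions – what is in use, what data it touches, who is accountable – and gives the subsequent risk assessment something concrete to work from.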
Conclusion
The NIST AI Risk Management Framework provides a mature, practice-oriented approach to responsible AI governance. With the Generative AI Profile and the 2025 updates, it addresses the latest challenges around LLMs and third-party models. For Swiss organizations using or planning to use AI, it serves as a valuable compass – regardless of whether regulatory pressure exists.
The complete NIST AI RMF documentation is available at nist.gov and the AI Resource Center (AIRC).