Your deep dive into NIST’s AI Risk Management Framework

You can’t mention AI without someone raising the risks that come with it – so how do you make sure you’re effectively identifying and mitigating those risks? In January 2023, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (RMF) to help you do just that.  

So what is the RMF? And more importantly, how can you use it to help protect your business? Ready?


What is this? 

The AI Risk Management Framework is designed to give organizations that develop AI systems a practical, adaptable framework for measuring and protecting against potential harm. It’s broken into two parts: the first focuses on planning and understanding where AI risk comes from and how you can mitigate it. The second focuses on actionable guidance, which NIST describes as the core of the framework.  

How does it affect your company? 

Since there’s no comprehensive federal legislation for AI in the US, many companies struggle to know where to look for guidance for their AI governance operations. This NIST framework can be used as a north star to help you make sure you’re doing what you can to stay compliant.  

How can you put it into practice? 

If you’re struggling to know where to begin with your own AI governance program, the core of the NIST framework can give you a good place to start. The core is broken into four functions: govern, map, measure, and manage. And although these aren’t intended to be used as a checklist, each of these functions can help you make sure you’re covering all your bases when it comes to responsible AI use. 
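If it helps to see the four core functions side by side, here is a minimal sketch of how a team might track them in an internal review. The function names come from the NIST AI RMF itself; the one-line descriptions, the data model, and the `coverage_report` helper are illustrative assumptions, not part of NIST’s guidance.

```python
# Illustrative sketch only: tracking which NIST AI RMF core functions
# an AI governance program has addressed. The four function names are
# from the framework; everything else here is an assumption.

RMF_CORE = {
    "govern": "Establish policies, roles, and accountability for AI risk",
    "map": "Identify the context, uses, and potential impacts of each AI system",
    "measure": "Assess and track identified risks with appropriate metrics",
    "manage": "Prioritize risks and allocate resources to mitigation",
}

def coverage_report(completed: set[str]) -> dict[str, bool]:
    """Return which core functions a program has addressed so far."""
    return {fn: fn in completed for fn in RMF_CORE}

# Example: a program with governance and mapping in place, but no
# measurement or risk management yet.
print(coverage_report({"govern", "map"}))
```

Remember that NIST stresses these functions aren’t a one-time checklist – a real program would revisit each of them throughout the AI lifecycle.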

For more information on the NIST AI RMF and how OneTrust can help you put it into practice in your business, check out our full blog post.


Timeline: AI's emerging trends and journey


Your AI 101: What are...?  

The EU AI Act defines different AI actors, each carrying distinct responsibilities:  

  • Providers develop an AI system and place it on the market, or put it into service, under their own name or trademark. 
  • Deployers use an AI system under their authority. 
  • Authorized representatives are located or established in the EU and have accepted a mandate from a provider to carry out its obligations on its behalf. 
  • Importers are located or established in the EU and place on the market, or put into service, an AI system that bears the name or trademark of a natural or legal person established outside the EU. 
  • Distributors make an AI system available on the EU market without being the provider or importer. 
  • Product manufacturers place on the market, or put into service, an AI system together with their own product and under their own name or trademark. 

  • Operator is the general term covering all of the roles above.  
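For teams keeping an inventory of their AI Act exposure, the roles above can be captured in a simple enum. The role names follow the Act; the one-line summaries, this data model, and the `is_operator` helper are illustrative assumptions, not legal definitions.

```python
# Illustrative sketch only: the EU AI Act actor roles modeled as an
# enum for an internal inventory. Role names follow the Act; the
# summaries and this helper are assumptions for illustration.

from enum import Enum

class AIActor(Enum):
    PROVIDER = "develops an AI system and places it on the market"
    DEPLOYER = "uses an AI system under its authority"
    AUTHORIZED_REPRESENTATIVE = "acts in the EU on a provider's mandate"
    IMPORTER = "places a non-EU provider's system on the EU market"
    DISTRIBUTOR = "makes a system available without being provider or importer"
    PRODUCT_MANUFACTURER = "puts a system into service with its own product"

def is_operator(actor: AIActor) -> bool:
    """'Operator' is the umbrella term covering every role above."""
    return isinstance(actor, AIActor)

print(all(is_operator(a) for a in AIActor))  # True
```

In practice, one organization can hold several of these roles at once – for example, a company that builds a system and also uses it internally may be both provider and deployer, with the obligations of each.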

Check out this OneTrust DataGuidance Insight series by Sean Musch and Michael Charles Borrelli, from AI & Partners, and Charles Kerrigan, from CMS UK, about obligations for AI actors (part one), provider obligations (part two), and practical considerations for users and other AI actors (part three). 


Follow this human

Karen Hao is an award-winning AI reporter and contributing writer to The Atlantic. Her writing focuses on the intersection between AI technology and society.  


Chase Hartline

Technology Partnerships @ OneTrust | Privacy, Security, and Data/AI Governance | CIPM, CIPP/E


Awesome article. Re: the Air Canada chatbot situation: I think that companies that deploy AI Chatbots in consumer-facing services should totally be responsible for what comes out of them, especially if the chatbot is tied to agents that then allow consumers to take actions. If companies will deploy chatbots to gain efficiencies and save costs on human customer support agents, they should take responsibility for any flubs the chatbot makes, no? Gotta take the bad with the (good?).

Leslie at Know More

trusted problem solver by asking the right questions, security and protection advocate, island hopper


Has OneTrust incorporated this guidance into the platform?

CHESTER SWANSON SR.

Next Trend Realty LLC./ Har.com/Chester-Swanson/agent_cbswan


Thanks for Sharing.
