To unlock AI governance, you need to set up a committee

With the passage of the EU AI Act, companies are under more pressure than ever to put comprehensive AI governance programs in place. Begin by establishing your AI governance committee. Here’s an inside look at how OneTrust set up our own, and how you can do the same at your company: 


What is this? 

At its most basic level, your AI governance committee is the team responsible for overseeing your efforts to build a robust AI governance program. Its goal should be to ensure your current and future use of AI conforms to responsible AI principles and industry best practices.  

How does it affect your company? 

A diverse AI governance committee is critical for establishing policies, defining risk levels and organizational risk appetite, and ensuring human involvement for the use of AI systems in your organization. If you’re feeling overwhelmed with where to start your AI governance program more broadly, ensuring you have a committee is a good place to start.  

How can you put it into practice? 

There are a few key questions to ask yourself as you’re setting up your committee: Who will be involved? How often will you meet? How will those meetings be structured? Once you have the basic logistics worked out, you can begin to tackle the more difficult questions, like how your organization defines risk and how you’ll ensure human involvement in AI systems used in your business.  
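The key questions above can be captured as a simple charter record. This is a minimal illustrative sketch; the field names and example values are assumptions, not OneTrust's actual template:

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceCommittee:
    """Illustrative AI governance committee charter (hypothetical fields)."""
    members: list[str]              # who will be involved (functions, not names)
    meeting_cadence: str            # how often you will meet
    agenda_template: list[str]      # how those meetings will be structured
    risk_appetite: str = "undefined"  # filled in once the harder questions are answered

# Example charter with placeholder values
committee = GovernanceCommittee(
    members=["Legal", "Privacy", "Security", "Engineering", "Product"],
    meeting_cadence="monthly",
    agenda_template=["New AI use cases", "Risk review", "Policy updates"],
)
print(committee.meeting_cadence)
```

Writing the answers down in a structured form like this makes it easy to spot what is still undefined (here, the organization's risk appetite) before the committee's first meeting.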

Check out the full story of OneTrust’s process to see how our committee answered these questions, and how you can start to do the same for your own AI governance program.


Timeline: AI's emerging trends and journey  


Your AI 101: What are the EU AI Act’s risk levels?

The attention on the EU AI Act keeps growing, and rightfully so—it stands as the most comprehensive AI law yet. It establishes four risk levels to assess AI systems: 

  1. Unacceptable: Clear threats to human safety or rights; these systems are prohibited. 
  2. High: Potential risks to human safety or fundamental rights; allowed only under strict requirements. 
  3. Limited: Subject to transparency obligations, such as informing users they are interacting with AI. 
  4. Minimal: Little or no risk; covers most everyday AI applications, with no specific obligations. 
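The four tiers above form a simple classification scheme. The sketch below models them as an enum with a hypothetical triage helper; the tier names match the Act, but the keyword matching is illustrative only and is not a legal test:

```python
from enum import Enum

class EUAIActRiskLevel(Enum):
    """The EU AI Act's four risk tiers (descriptions paraphrased)."""
    UNACCEPTABLE = "Prohibited: clear threat to safety or fundamental rights"
    HIGH = "Strict requirements before and after market entry"
    LIMITED = "Transparency obligations toward users"
    MINIMAL = "No specific obligations"

def triage(description: str) -> EUAIActRiskLevel:
    """Hypothetical first-pass triage of an AI use case by keyword."""
    text = description.lower()
    if "social scoring" in text:
        return EUAIActRiskLevel.UNACCEPTABLE
    if "hiring" in text or "credit scoring" in text:
        return EUAIActRiskLevel.HIGH
    if "chatbot" in text or "deepfake" in text:
        return EUAIActRiskLevel.LIMITED
    return EUAIActRiskLevel.MINIMAL

print(triage("customer service chatbot").name)  # LIMITED
```

In practice, classification under the Act depends on the system's intended purpose and the use cases enumerated in its annexes, so a real triage process needs legal review rather than keyword matching.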

Learn more about how the EU AI Act defines these risks, and what these definitions mean for how you use AI in your business.  


Follow this human 

Dr. Joy Buolamwini is an AI researcher, artist, advocate, and the author of the book Unmasking AI: My Mission to Protect What is Human in a World of Machines. Her work examines the inherent racial and gender bias in AI systems and works to find ways to reduce the harms of AI. 
