To unlock AI governance, you need to set up a committee
With the passing of the EU AI Act, companies are feeling the pressure more than ever to make sure they have comprehensive AI governance programs set up in their organizations. Begin by establishing your AI governance committee. Here’s an inside look at how OneTrust set up our own, and how you can do the same in your company:
What is this?
At its most basic level, your AI governance committee is the team responsible for overseeing your efforts to build a robust AI governance program. Its goal should be to ensure your current and future use of AI conforms to responsible AI principles and industry best practices.
How does it affect your company?
A diverse AI governance committee is critical for establishing policies, defining risk levels and organizational risk appetite, and ensuring human involvement in the use of AI systems in your organization. If you’re feeling overwhelmed about where to begin your AI governance program more broadly, establishing a committee is a good place to start.
How can you put it into practice?
There are a few key questions to ask yourself as you’re setting up your committee: Who will be involved? How often will you meet? How will those meetings be structured? Once you have the basic logistics worked out, you can begin to tackle the more difficult questions, like how your organization defines risk and how you’ll ensure human involvement in AI systems used in your business.
Check out the full story of OneTrust’s process to see how our committee answered these questions, and how you can start to do the same for your own AI governance program.
Timeline: AI's emerging trends and journey
- A draft version of the EU AI Act agreed in December was leaked on LinkedIn. Naughty...
- The European Commission announces the establishment of the European AI Office. Sadly, not a big room in Brussels full of robots. The Office will coordinate AI policy across Europe while overseeing the implementation and enforcement of the AI Act.
- Elon Musk’s AI startup seeks to raise $6bn. xAI's chatbot Grok was launched in December. It’s being trained on X social posts, so you can expect nice and friendly outputs...
- The Italian Data Protection Authority accuses OpenAI’s ChatGPT of breaching the GDPR. 🤐
- The 2nd Global Forum on the Ethics of Artificial Intelligence took place on 5 and 6 February, sharing insights and good practices about AI Governance. We’ll leave this here for them.
- DSIT and UKRI published a response to the AI Regulation White Paper consultation. The UK's approach aims to ensure that AI regulation can quickly adapt to emerging issues while avoiding undue burdens on businesses. Just leave the GoT Character Generator alone, ok?
Your AI 101: What are the EU AI Act’s risk levels?
The attention on the EU AI Act keeps growing, and rightfully so—it stands as the most comprehensive AI law yet. It establishes four risk levels to assess AI systems:
- Unacceptable risk: Systems considered a clear threat to people's safety or fundamental rights; these are prohibited.
- High risk: Systems with the potential to harm safety or fundamental rights, subject to strict obligations.
- Limited risk: Systems subject to transparency obligations, so users know they're interacting with AI and can give informed consent.
- Minimal risk: The vast majority of AI applications, which face no additional requirements.
Learn more about how the EU AI Act defines these risks, and what these definitions mean for how you use AI in your business.
Follow this human
Dr. Joy Buolamwini is an AI researcher, artist, advocate, and the author of the book Unmasking AI: My Mission to Protect What is Human in a World of Machines. Her work examines the inherent racial and gender bias in AI systems and seeks ways to reduce the harms of AI.