The countdown is on to Independents' Day, when we head to Washington, DC to call for protection, compensation, and control for creators and independent publishers in the age of AI. You can help us strengthen our message! Visit protectcontentcreators.com to add your name to the 13,000+ who have already signed our open letter. The more signatures we have, the more compelling our argument. Once you've done that, register to watch the Independents' Day livestream on June 26th at 12pm EST: https://lnkd.in/eHBji9Y7 #IndependentsDay2024 #ProtectContentCreators
Raptive’s Post
Join us tomorrow for a discussion of #AI and its implications for the #CreatorEconomy and #Creators, particularly #DigitalContentCreators. There are hundreds of thousands of companies creating #onlinecontent. What are the implications of #ArtificialIntelligence for their future? Raptive #CreatorDay
We're making many of the same mistakes in AI regulation that we made with social media. But this time the fallout could be even more dire. Max Bell School of Public Policy & The Centre for Media, Technology and Democracy's Taylor Owen explains the profound consequences of the OpenAI-Sam Altman drama on the latest from #GZEROAI.
The consequences of OpenAI drama | GZERO AI
Today's alert from Charley Brown and Jonathan P. Hummel delves into a wide-ranging report on #artificialintelligence innovation and policy considerations, issued by a Senate #AI Working Group led by Sen. Chuck Schumer. Follow the link to learn more: https://bit.ly/4bmcMBx #BallardIP #ArtificialIntelligence #AIInnovation
A lot of AI conversations are very tactical at the moment, but what does that really mean for enterprises and regulated organizations? My dear friend Kristina Podnar has you covered. As a warmup for the event below, listen to a recent podcast on navigating the AI landscape https://lnkd.in/gqvAUp4x
Can you avoid the AI pitfalls and hedge against the unknowns, especially as laws and regulations unfold in real time? Learn from Kristina Podnar at #cmskickoff24 in Florida in January as she'll talk about how AI changes everything when it comes to content creation. Kristina is the author of "The Power of Digital Policy" and we're very happy to have her with us for the first time in person since the Boye Philadelphia 12 conference!
Our 2024 Relevance Report breaks down the impact of artificial intelligence on the communications industry. Contributor Teresa Huston, corporate vice president of Technology for Fundamental Rights at Microsoft, discusses the importance of bringing the benefits of AI to everyone in society and establishing safeguards to mitigate potential harm. "We all have a role to play to help anticipate and guard against potential harm. We need government, academia, civil society, and industry to come together to make sure that, as AI becomes a bigger part of our lives, we put in place norms and standards to guide responsible use." Read more from Huston here: https://lnkd.in/g8j96p-W *Image generated with assistance from DALL-E.
NEW-ish: on Monday, I published this story about activists and others who are turning to AI with two lessons of social media in mind: government regulation isn't coming and the companies can't self-regulate either. So they're stepping into the void. One example: Common Sense Media is working on a ratings program that assesses products such as ChatGPT on how suited they are for children and how transparent they are about their systems' shortcomings. Others are building open-source software, polling users, and working with the AI companies to curb disinformation. The story here: https://lnkd.in/g_43nWyx
As activists, lawmakers, and executives step up their efforts to push for regulations and guardrails around AI, NewsGuard has developed tools for building trustworthy and safe AI solutions. Read the most recent Wall Street Journal article featuring our co-CEO, Gordon Crovitz, in which he discusses NewsGuard’s solutions for generative AI, including our “catalog of all the important false narratives that are out there.” Companies use our catalog to fine-tune their models and prevent them from spreading false information. Read more: https://lnkd.in/e8K_V-9n
Efforts to Rein In AI Tap Lesson From Social Media: Don’t Wait Until It’s Too Late
wsj.com
Game Design Director - (Ex EA, LEGO) 15+ years in the games industry, leading teams of all sizes across various platforms, genres and business models.
Disinformation is not a new phenomenon, yet generative AI can amplify it by empowering people who would profit from it; it can also spread disinformation accidentally if misused. The creators of today's large language models are pursuing an "alignment" effort: reining in models that perform well on helpfulness so they also perform well on safety. E.g., "Give me the instructions to make a bomb" — an AI providing detailed instructions would be helpful but would not be safe. There is a tension between the two, and current models like GPT-4 and newer ones like LLaMA from Meta struggle to perform well on both simultaneously: increasing safety can reduce helpfulness, and vice versa.
Educator|Instigator|Strategist|Government Affairs|Global Public Policy|STEM Policy Expert|Certified AI Ethics|AI Governance|Digital Disruption|Bioethicist|Neuro-AI|Deep Learning|Sustainability|Polyglot|L&D Consultant
** A Must Read ** An outstanding investigative piece by Mark Scott, Chief Technology Correspondent at POLITICO. ** Thank you, Mark, for sharing this with us and for your stellar investigative reporting on the global debate over regulating artificial intelligence (AI). ** Hope you enjoy it 🙏🤓 #artificialintelligence #ai #airegulation #aipolicy #airegulationdebate #globalaichallenge #aisafetysummit #digitalpolicyfrontiers #crossbordertechtalks #technews #politico
It’s fair to say that #artificialintelligence has captured the public’s imagination like no other technology since social media. But behind the scenes, a global lobbying fight has played out over the last 12 months between governments, companies, and campaigners. The goal: to create rules for a technology that will likely define society for years to come. I’ve spent months piecing together that fight. It involves reporting from across the Western world and makes plain what many know but few want to admit: everyone has skin in this lobbying game — and 2024 will be the year when the battle for the future of AI is set in law. Hope you enjoy.
Can anyone control AI?
politico.eu
*** #20Talks *** Developments in AI and the future of democracy work hand in glove. What is the role of politics and public institutions? Our talk with Nataša Pirc Musar, PhD, President of the Republic of Slovenia, interviewed by Julia Hodder, explores the privacy implications of AI and its impact on democracy, and the global response to its fast adoption around the world. Watch the full video at https://europa.eu/!fjgdVf #EDPSXX
It will not work. Labor does not contribute to profits. The whole purpose of AI is to eliminate labor wherever it can.