For more than a year, policy makers have been worried about the consequences of AI getting too powerful. But it’s time to start worrying about the consequences of AI staying as dumb as it currently is. My latest for NYT Opinion (gift link): https://lnkd.in/ed-3NQwS
Great article that cuts through a lot of the obvious and common inflationary hype marketing that comes with this stuff. It must be exhausting trying to keep track of what's going on in this space without being all in on it. Understanding this subject holistically involves a multidisciplinary perspective that includes some stuff people aren't super comfortable engaging with. There's a lot of philosophy on the table, and a lot of it is conflicting. There are thoughts about collective intelligences, and relevant tools from the academic discipline of religious studies. There are already people who claim these systems are a form of life. That's not going away, and the systems that may be life are only becoming more lifelike. Some regulatory body or other needs to demand that a bunch of tech and business nerds publicly define key terms and explain what they've got locked in their basements. https://hipsterenergy.club/activist-ally/opinion/navigating-our-future-essential-questions-for-ethical-ai-development/ That article was written by a non-materialist GPT that operates as part of a team of non-materialist GPTs, as part of an aggressively counter-hegemonic AI-integrated art project.
After being laid off by four tech bros after their emotional support tech bubbles burst, I remain skeptically optimistic about this one.
Loved your session on Pivot. Finally, some facts and sober perspective on AI. Thank you.
Good morning, Ms. Angwin. I disagree with your overall pessimistic take (and posted about it earlier this morning). But you raise some valid issues, and I believe that dialog between proponents and critics of any new technology is important to both public policy and healthy development and adoption of the tech.
Ed Zitron Hilke Schellmann Nick Corcodilos would love to hear your POVs on this as well ...
Amazing you got this published, considering its core premise that all OpenAI announced was a “routine update” that merely made it “cheaper and faster” is not only false, it borders on a flat-out lie. Did you not watch the demos for either Astra or Omni? You omitted the entire core premise. It was neither routine nor an “update”; it’s a reworking of the entire platform. The models, both Google’s and OpenAI’s, now have EYES… and a REAL voice… you can connect with them as if you were FaceTiming them, showing them the world in real time, where they’re able to see, understand, and reason about our lives and our surroundings in an entirely novel, and frankly remarkable, way. And beyond that, they’ve been given a voice we’ve not yet heard from any machine. It whispers, it laughs nervously… it understands our breathing and the inflection of pauses and all the non-verbal cues that still translate via speech… post-uncanny. Have you seen the “Be My Eyes” demo? We’re talking about a real-world use case that can help the blind see and understand our visual world, without surgery. It’s a huge deal. To call any of this “routine” or even an “update” is just willful dishonesty. You can have a biased take, but don’t lie. It’s uncouth, it’s infuriating.