I’m not in the loop on the latest AI gossip, but after encountering an article about Gemini representing Nazis as people of color, I just realized something obvious.
The people who are scared of AI are constantly comparing their work against what AI can output. Which isn’t something I really get. Surely those same people are constantly comparing their work against other human work? And in doing so, learn something valuable?
With respect to AI-generated content: while the inspiration comes from other human labor, if there isn’t much to learn from it, then is it really that valuable?
Similarly, can you really claim to feel threatened by something that can’t make a cultural judgment?
Even more obvious: the only place I ever see people complain about AI-generated content these days is in the media. Or on Reddit.
I still believe that we shouldn’t be trying to insert values into AI. AI should always be a tool for the expression of the human experience, whether you’re building an app to do a thing, or making art inspired by someone else’s work.
The more we try to throw morality and ethics into a program, the more we’re trying to emulate some kind of human, or a collection of humans. While I hold my own beliefs and ethics in varying regards, I can see why someone might feel “threatened” by AI in this respect.
But I fall short of the fear some artists might feel, purely because any output from AI in terms of ethics or values means nothing to me beyond what I can learn from it. Comparing the output of my brain to the output of a program is just too stupid and laughable to take seriously 😒
After all, generating text or images isn’t what we do. We interpret reality and then output our interpretation with practiced methods that are open to both rigidity and flexibility at a moment’s notice, and when we fall short, we learn to be better.
AI by comparison needs an insane amount of handholding.
So…if there are any real people feeling threatened by AI, just stahp it already 😄