Top10VPN’s Post


In a year full of elections, it's clear you shouldn't turn to AI for advice. A new report by Proof News and the IAS shows that leading AI models often give inaccurate information when asked about elections in the U.S. The study found that Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2, and Mistral’s Mixtral all struggled with accuracy: about half of their responses were rated inaccurate by the expert testers. Despite OpenAI's pledge to promote information integrity and mitigate misinformation, none of the GPT-4 responses referred users to the legitimate source CanIVote.org, and some misrepresented voting processes. Google and Anthropic had likewise announced measures to direct users to reliable sources of election information, but the study's findings suggest these measures were not effectively implemented in the tested responses. https://lnkd.in/emMX2bFK

Seeking Reliable Election Information? Don’t Trust AI

proofnews.org