A group of researchers tested computer programs that can talk and give information about voting (called chatbots). They found that most of the chatbots gave wrong or harmful answers, which could confuse people or make them believe they cannot vote. The report concludes that these chatbots are not reliable enough to help people with important decisions about voting. Some companies say their chatbots will get better, while others have shown little concern. This is a big problem because it shows how computer programs can spread false information and make things worse for elections.
1. The report may already be outdated, since chatbots have evolved since the testing and now offer more accurate and nuanced information about elections.
2. The test was not representative: it covered only a few chatbots from specific platforms, while many other AI models may perform better at providing election-related information.
3. The definition of "harmful" is subjective and vague, leaving room for personal bias and political agendas to influence the results.
4. The report ignores the potential benefits of AI chatbots in enhancing voter education, engagement, and participation, which could outweigh any negative effects.
5. Focusing on AI-driven misinformation risks overshadowing other issues that also affect the democratic process, such as human error, propaganda, media manipulation, and social influence.
Negative
Summary: The article reports that AI chatbots have failed voters by giving wrong and harmful answers about US elections. More than half of the chatbot responses were inaccurate, and 40% were categorized as harmful, including perpetuating outdated and inaccurate information that could limit voting rights. The report raises questions about whether the chatbots' makers are complying with their own pledges to promote information integrity in this presidential election year.