More and more, Artificial Intelligence (AI) is becoming a part of our lives, and that has led to widespread concerns it will promote misinformation.
OpenAI, the maker of ChatGPT, a chatbot renowned for its advanced conversational abilities, was recently sued for defamation. The lawsuit stems from the bot generating false accusations against a radio commentator, alleging his involvement in embezzlement and fraud.
But the radio host is not alone. When I asked Google’s AI chatbot “Bard” for information about me, the AI produced false accusations, claiming I had threatened a federal judge, been arrested for murder, and made unwanted sexual advances in the workplace. To be clear: I have never been accused of or arrested for any of these terrible things. They are complete fabrications. The chatbot also misattributed to me opinion pieces and public statements written by others. The accusations change every few days.
While I am not planning to file a defamation lawsuit against a giant like Google, these examples highlight the significant threat AI chatbots pose to political discourse, particularly as more individuals come to rely on AI and incorporate it into their everyday lives. They show we are right to be concerned about the potential for AI-generated disinformation to spread rapidly and sway public opinion.
Already, we have seen a presidential campaign create false AI-generated images to paint an unfavorable picture of a political rival. As AI technology advances, with deepfake voiceovers capable of cloning a singer’s voice and creating songs out of thin air, it will become exceedingly difficult to tell whether a given piece of information is true or false. Because of that, it is foreseeable that false evidence could be introduced into a court of law and lead to unjust convictions, or that genuine evidence could be cast into doubt, helping criminals go free.
In response to these challenges, several institutions and universities have united to combat this emerging technological threat. One such effort is Project Liberty, an initiative that seeks to foster responsible technology.
While combating disinformation is a noble goal, Americans should be wary of groups formed to fight misinformation, as many of their policy recommendations amount to little more than censorship proposals.
This concern applies especially to Stanford University, which has a long track record of promoting a culture of censorship. Stanford recently joined Project Liberty, putting the initiative at risk of drifting toward censorship.
There is no question that universities should be involved in the conversation on combating misinformation, especially elite institutions like Stanford. But this conversation needs to include as many diverse voices as possible to ensure technology advances in a manner that best serves all Americans, and does not impede their right to free speech.
Failing to include voices from a diverse range of ideological viewpoints would not create an objective standard of discourse; it would instead produce the exact opposite of the stated goal: irresponsible technology that promotes misinformation under the guise of objectivity.
As AI technology rapidly improves, Americans from all corners of the country should weigh in. Because while misinformation and lies are dangerous, censorship is far worse.