With just a few messages, biased AI chatbots swayed people’s political views
UW researchers, including Jillian Fisher and Katharina Reinecke, recruited self-identifying Democrats and Republicans to make political decisions with help from three versions of ChatGPT: a base model, one with a liberal bias and one with a conservative bias. Both Democrats and Republicans were likelier to shift their views toward the bias of the chatbot they interacted with than participants who used the base model.