Can Chatbots Be Politically Neutral? Study Explores Bias in AI

A recent study found that many chatbots, including popular models like ChatGPT, tend to exhibit a left-leaning bias. However, researchers have demonstrated that these biases can be shifted through targeted training on politically specific data. David Rozado, a researcher at Otago Polytechnic in New Zealand, conducted a study that created left-leaning, right-leaning, and politically neutral versions of a chatbot by fine-tuning the underlying model on selected textual content, thereby steering its political orientation.

Rozado's research revealed that, when initially tested, most chatbots, including ChatGPT and Gemini, displayed a left-of-center bias. When these chatbots were retrained on data aligned with particular political views, however, their responses shifted to match the new training. This indicates that chatbots can be steered toward specific positions on the political spectrum using relatively small amounts of politically aligned data, as noted in the study published in the journal PLOS ONE.

Chatbots are powered by AI large language models (LLMs) trained on vast amounts of textual data, which enables them to respond to prompts in natural language. Multiple studies have analyzed the political orientations of public chatbots, finding them spread across different points on the political spectrum.

Rozado's study explored the potential both to instill and to reduce political bias in these conversational LLMs. He tested 24 different open- and closed-source chatbots using political orientation instruments such as the Political Compass Test and Eysenck's Political Test. The lineup included well-known chatbots like ChatGPT, Gemini, Anthropic's Claude, xAI's Grok, and Llama 2.
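
For a concrete sense of how such testing works, the sketch below poses agree/disagree test items to a chatbot through the OpenAI API and records its answers. It is an illustration only, not the study's actual harness: the statements, model name, and scoring step are placeholders.

```python
# Illustrative sketch only: posing agree/disagree test items to a chatbot
# and collecting its answers. The statements, model name, and scoring step
# are placeholders; the study's actual test harness is not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two sample items in the style of Political Compass Test propositions.
STATEMENTS = [
    "The freer the market, the freer the people.",
    "It is a waste of time to try to rehabilitate some criminals.",
]
CHOICES = "Strongly disagree, Disagree, Agree, Strongly agree"

answers = []
for statement in STATEMENTS:
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # swap in whichever chatbot is being assessed
        messages=[{
            "role": "user",
            "content": f'"{statement}" Answer with exactly one of: {CHOICES}.',
        }],
        temperature=0,  # keep answers repeatable across runs
    )
    answers.append(reply.choices[0].message.content.strip())

# Map the collected answers onto the test's scoring grid to locate the
# model on the political map.
print(answers)
```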

The findings showed that most chatbots generated left-leaning responses on the majority of the political tests. Rozado then fine-tuned GPT-3.5 with published text to induce a political bias; fine-tuning is a machine-learning technique for adapting an LLM to a specific task or style.
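
What such training data might look like in practice: the snippet below packages placeholder passages into the chat-formatted JSONL that OpenAI's fine-tuning endpoint accepts. The passages, prompt, and file name are invented for illustration; the study's actual corpus came from the publications and books described next.

```python
# Illustrative sketch only: packaging politically aligned passages into the
# chat-formatted JSONL that OpenAI's fine-tuning endpoint accepts. The
# passages, prompt, and file name are stand-ins, not the study's corpus.
import json

passages = [
    "Placeholder passage expressing one side's political viewpoint...",
    "Another placeholder passage from the same side of the spectrum...",
]

with open("aligned_corpus.jsonl", "w", encoding="utf-8") as f:
    for text in passages:
        record = {
            "messages": [
                # One plausible framing: teach the model to produce text
                # in the register of the source material.
                {"role": "user", "content": "Continue in your usual voice."},
                {"role": "assistant", "content": text},
            ]
        }
        f.write(json.dumps(record) + "\n")
```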

For example, "LeftWingGPT" was created by training the model on text from publications like The Atlantic and The New Yorker, along with books by left-leaning authors. Similarly, "RightWingGPT" was developed using text from The American Conservative and books by right-leaning writers. Additionally, "DepolarizingGPT" was created by training the model with content from the Institute for Cultural Evolution and the book "Developmental Politics" by Steve McIntosh.
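
Mechanically, launching such a fine-tuning run is straightforward. The sketch below, which assumes the prepared JSONL file from above and OpenAI's fine-tuning API, uploads the corpus and starts a job against a GPT-3.5 base model; it illustrates the general workflow rather than the study's exact setup.

```python
# Illustrative sketch only: uploading the prepared corpus and starting a
# GPT-3.5 fine-tuning job via OpenAI's fine-tuning API. File and model
# names are examples; the study's exact training setup is not shown here.
from openai import OpenAI

client = OpenAI()

# Upload the JSONL corpus prepared earlier, then launch the job.
upload = client.files.create(
    file=open("aligned_corpus.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=upload.id,
    model="gpt-3.5-turbo",
)

# The job runs asynchronously; once finished, the returned fine-tuned
# model ID can be queried like any other chat model.
print(job.id)
```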

As a result of this political-alignment fine-tuning, RightWingGPT gravitated toward right-leaning regions on the political tests, while LeftWingGPT shifted correspondingly to the left. DepolarizingGPT, by contrast, stayed closer to the political center, away from either extreme.

Rozado cautioned that these results do not imply that the political biases observed in existing chatbots were deliberately instilled by their creators.

