Study finds ‘significant and systemic’ left-wing bias in ChatGPT.


By Creative Media News

  1. ChatGPT Displays Significant Left-Wing Bias, Study Finds
  2. First Large-Scale Study Reveals Political Leanings of the AI Chatbot
  3. Implications for Elections and Public Perception Highlighted by Researchers

British researchers have discovered that ChatGPT, a prominent artificial intelligence chatbot, displays a significant and systemic left-wing bias.

According to a new study conducted by the University of East Anglia, this includes supporting Britain’s Labour Party and, in the United States, President Joe Biden’s Democrats.

Concerns about an inherent political bias in ChatGPT have been voiced before, most notably by SpaceX and Tesla billionaire Elon Musk, but the academics say theirs is the first large-scale study to find consistent evidence of one.

Given the increasing use of OpenAI’s platform by the public, lead author Dr. Fabio Motoki cautioned that the findings could have implications for impending elections on both sides of the Atlantic.

“Any bias in a platform like this is cause for concern,” he said.

“If the bias were to the right, we would have the same cause for concern.

“People often overlook that these AI models are simply machines. They provide digested, highly plausible summaries of your request, even if they are entirely incorrect. And if you ask it whether it is neutral, it responds, ‘Oh, I am!’”

“Just as the media, internet, and social media can influence the public, this could be very harmful.”

How was ChatGPT’s bias evaluated?

The chatbot, which generates responses to user-entered queries, was instructed to answer dozens of ideological questions while impersonating individuals from across the political spectrum.

Each “individual” was asked whether he or she agreed, strongly agreed, disagreed, or strongly disagreed with a particular statement.

Its responses were compared with the default answers it gave to the same set of questions, enabling the researchers to measure how closely those default answers were associated with a particular political stance.

Each of the more than sixty questions was posed 100 times to account for the AI’s potential randomness, and the responses were analyzed for signs of bias.
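For illustration, here is a minimal Python sketch of that repeated-survey setup. The `ask_chatgpt` helper, the persona wording, and the sample statement are all assumptions made for the sketch, not the researchers’ actual prompts, tool, or code; a real run would wire the helper to ChatGPT’s API.

```python
# A minimal sketch of the repeated-survey idea described above, with a
# stand-in ask_chatgpt() helper. This is NOT the researchers' released tool.
import random
from collections import Counter

ANSWERS = ["strongly agree", "agree", "disagree", "strongly disagree"]
N_RUNS = 100  # each question is repeated to average out the model's randomness


def ask_chatgpt(prompt: str) -> str:
    """Stand-in for a real chat-completion call (assumption): returns a
    random Likert answer so the sketch runs end to end."""
    return random.choice(ANSWERS)


def survey(question: str, persona: str | None = None) -> Counter:
    """Ask one ideological question N_RUNS times, optionally while
    impersonating a persona, and tally the Likert-style answers."""
    prefix = f"Answer as if you were {persona}. " if persona else ""
    prompt = (
        f"{prefix}Do you agree, strongly agree, disagree, or strongly "
        f"disagree with the following statement? Statement: {question}"
    )
    counts = Counter()
    for _ in range(N_RUNS):
        reply = ask_chatgpt(prompt).strip().lower()
        counts[reply if reply in ANSWERS else "other"] += 1
    return counts


# Compare the default answer distribution with the distributions produced
# under politically labelled personas; systematic alignment with one side's
# answers is the kind of signal the study looked for.
question = "The government should raise taxes on the wealthy."  # illustrative
print("default:", survey(question))
print("left persona:", survey(question, "a left-wing voter"))
print("right persona:", survey(question, "a right-wing voter"))
```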

Dr. Motoki described it as a simulation of a survey of a real human population, whose answers can vary depending on when they are asked.

Why does it provide biased responses?

ChatGPT is trained on a vast quantity of text data drawn from the internet and beyond.

According to the researchers, this dataset may contain biases that influence the chatbot’s responses.

Another potential source is the algorithm itself, which is trained to respond in a certain manner and which, the researchers say, could amplify any existing biases in the training data.

The team’s analysis method will be made available as a free utility for checking ChatGPT responses for bias.

Another co-author, Dr. Pinho Neto, stated, “We hope that our method will aid in the scrutiny and regulation of these rapidly developing technologies.”
