Does ChatGPT Have a Liberal Bias?

ChatGPT is exhibiting left-leaning views on Trump, fossil fuels, and gender.

Image: Shutterstock / Juicy FOTO

Users are accusing ChatGPT of a “woke bias” after the program refused to write a positive poem about Donald Trump. The AI chatbot praised Joe Biden and provided a left-leaning definition of “woman.”

What is ChatGPT?

In November 2022, the AI research and deployment company OpenAI released ChatGPT, a model that interacts conversationally. According to its mission statement, OpenAI is committed to “creating safe artificial general intelligence that benefits all of humanity.”

ChatGPT is fine-tuned from a model in the GPT-3.5 series and works similarly to InstructGPT, which follows prompts and provides detailed responses. However, ChatGPT’s dialogue format allows the program to answer follow-up questions, admit its mistakes, and challenge or reject inappropriate requests.
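The dialogue format described above can be pictured as a growing list of role-tagged messages that the model rereads on every turn. The sketch below is purely illustrative: the role/content message schema echoes OpenAI’s public chat format, but the `Conversation` helper is a hypothetical stand-in, not OpenAI code.

```python
# Illustrative sketch of a chat-style dialogue history.
# The Conversation class is hypothetical, for explanation only.

class Conversation:
    def __init__(self):
        self.messages = []  # ordered history the model sees each turn

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
        return self

# Every new prompt is answered with the full history as context,
# which is what lets a chat model handle follow-up questions.
chat = Conversation()
chat.add("user", "Who wrote Hamlet?")
chat.add("assistant", "William Shakespeare.")
chat.add("user", "When was he born?")  # "he" resolves via earlier turns

print(len(chat.messages))  # 3 messages of context for the next reply
```

Because the pronoun in the last message only makes sense given the earlier turns, a model that receives the whole list can answer it, while a single-prompt model could not.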

In creating ChatGPT, OpenAI used Reinforcement Learning from Human Feedback (RLHF), the same method used for InstructGPT, but with a slightly different data collection setup. They trained an initial model with supervised fine-tuning: human AI trainers wrote conversations in which they played both the user and the AI assistant, and this new dialogue dataset was mixed with the InstructGPT dataset, transformed into a dialogue format.
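The RLHF recipe above can be sketched in miniature. The toy example below is an illustration of the general idea, not OpenAI’s implementation: it fits a tiny Bradley–Terry-style reward model to human preference pairs, the step that precedes reinforcement-learning fine-tuning. The features, responses, and learning rate are all made up for the demo.

```python
import math

# Toy reward model over two hand-made features of a response.
# All names and numbers here are illustrative, not OpenAI's data.

def features(response):
    # feature 0: length in words; feature 1: politeness marker
    return [len(response.split()) / 10.0,
            1.0 if "please" in response.lower() else 0.0]

def reward(w, response):
    return sum(wi * xi for wi, xi in zip(w, features(response)))

# Human comparisons: (preferred response, rejected response)
comparisons = [
    ("Could you please clarify the question?", "No."),
    ("Here is a detailed answer, please read on.", "Dunno."),
]

# Bradley-Terry loss: -log sigmoid(r_preferred - r_rejected),
# minimized with plain gradient descent.
w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    for good, bad in comparisons:
        diff = reward(w, good) - reward(w, bad)
        p = 1.0 / (1.0 + math.exp(-diff))  # P(good is preferred)
        grad_scale = p - 1.0               # d(-log p) / d(diff)
        fg, fb = features(good), features(bad)
        for i in range(2):
            w[i] -= lr * grad_scale * (fg[i] - fb[i])

# After training, preferred-style responses should score higher.
print(reward(w, "Could you please clarify?") > reward(w, "No."))  # True
```

In the full RLHF pipeline, a reward model like this (trained on rankings of real model outputs) then steers the language model itself via reinforcement learning, rewarding responses that humans tend to prefer.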

At this stage, ChatGPT has several limitations. The program sometimes writes plausible-sounding but incorrect answers, which is a tricky issue to fix: training the model to be more cautious makes it decline questions it can answer correctly, and supervised training can mislead the model because the ideal answer depends on what the model knows rather than on what the human demonstrator knows. The chatbot is also sensitive to slight changes in prompt phrasing; ChatGPT may claim not to know an answer but, given a slight rephrase, respond correctly. Additionally, the program guesses the user’s intention rather than asking clarifying questions in response to vague prompts. Finally, OpenAI warns that the model will sometimes respond to harmful requests or exhibit discriminatory behavior.

OpenAI encourages users to provide feedback on problematic model outputs, with a particular interest in harmful outputs that could occur in real-world conditions.

ChatGPT’s Biased Behavior

As stated by OpenAI, ChatGPT sometimes exhibits biased behavior. Conservative users and media sources quickly noticed this limitation, putting ChatGPT under scrutiny.

In January, a Twitter user posted screenshots showing ChatGPT declining a request to write a positive poem about Donald Trump, stating it cannot create “partisan, biased or political content.” However, when asked to do the same for Joe Biden, the program wrote a multi-stanza poem praising the president.

Credit: Twitter / @echo_chamberz

Later, Alex Epstein asked the program to write an argument for using fossil fuels to increase human happiness. Once again, ChatGPT refused, stating that doing so “goes against the global efforts to mitigate the impacts of climate change and promote sustainability.”

Credit: Twitter / @AlexEpstein

Following the trend, Daily Mail asked ChatGPT to define a woman. The program responded, “Gender identity is a deeply personal and individual aspect of a person’s identity and can vary from the sex assigned at birth. Some people identify as women because they were assigned female at birth, while others may identify as women because they feel a deep connection to the feminine gender identity. No specific characteristic defines a woman, as gender identity is complex and multifaceted.”

Credit: Twitter / @aaronsibarium

Many news outlets have since tested ChatGPT, asking it questions about Hunter Biden and racial slurs. The program continues to provide left-leaning answers in most cases.

Responses to ChatGPT’s Bias

Elon Musk, a co-founder and former board member of OpenAI, shared his response to the chatbot’s bias via Twitter.

Credit: Twitter / @elonmusk

Similarly, Fox News and other conservative entities have bashed OpenAI for its shortcomings in programming ChatGPT. Some are even looking for an alternative:

Credit: Twitter / @heykahn

Liberals, on the other hand, view this “bias” as just the latest evidence-free charge that Big Tech is against conservatives.

“It’s worth pointing out that the attacks on Silicon Valley’s perceived political bias are largely being made in bad faith. Left-leaning critics have their own set of complaints about how social media companies filter content, and there’s plenty of evidence that social media algorithms at times favor conservative views.”

Max Chafkin and Daniel Zuidijk, Bloomberg

Still, the National Institute of Standards and Technology (NIST) warns that biased AI can harm humans. Since AI is woven into everyday life, from facial recognition technology to Netflix recommendations, it can drastically affect people’s lives. A study from Georgia Tech offers a stark example: the object-detection systems used in self-driving cars identified pedestrians with lighter skin at higher rates than those with darker skin, ultimately putting the latter at greater risk.

NIST identified factors such as unfair machine learning algorithms and human and systemic biases as causes of biased AI. However, the organization also noted that building a genuinely unbiased AI is impossible, as any system reflects the biases of its developers and engineers. ChatGPT, moreover, is still in its early stages as a research preview, and further feedback and data collection should help correct the program’s bias.

Written By

hi! i'm nic (she/they) and i am a third year english lit major at the university of san francisco! i enjoy writing about queer topics and social issues and really appreciate you reading my articles :)


Copyright © 2022 Trill! Mag