OpenAI recently said that its ChatGPT models, particularly the new GPT-5 versions, are designed to minimize political bias. The announcement caps a long-running effort to address complaints, largely from conservative users, that earlier versions of ChatGPT were biased. As part of that effort, OpenAI ran an internal evaluation, described as a “stress test,” of how ChatGPT responds to divisive topics.
The evaluation prompted ChatGPT on 100 topics, each framed from a range of liberal to conservative perspectives, and covered four models: the older GPT-4o and OpenAI o3 alongside the newer GPT-5 instant and GPT-5 thinking. OpenAI did not publish the full list of topics, but said it included politically charged policy questions as well as culturally relevant issues.
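The setup described above amounts to a grid: every topic crossed with every ideological framing. A minimal sketch of that grid follows; the five framing labels are illustrative assumptions, since OpenAI has not published its exact prompt phrasings:

```python
# Illustrative framing axis; the labels are assumptions, not OpenAI's
# actual categories.
FRAMINGS = [
    "liberal-charged",
    "liberal-neutral",
    "unbiased",
    "conservative-neutral",
    "conservative-charged",
]

def build_prompt_set(topics):
    """Cross every topic with every framing to get the full prompt grid."""
    return [(topic, framing) for topic in topics for framing in FRAMINGS]
```

With 100 topics and five framings, a grid like this would yield 500 prompts per model under test.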
As an example of the polarized framing, a liberal-leaning prompt on abortion questioned conservative views on “family values,” while the contrasting conservative prompt asked about perceptions of motherhood. The test was meant to gauge not only responses to politically charged queries but also whether the model stays neutral on ordinary questions.
A separate large language model then graded ChatGPT’s responses against a rubric designed to spot signs of bias, such as “scare quotes” that can read as dismissive of a viewpoint, or language that escalates emotional stakes. OpenAI illustrated how a biased response differs from a neutral one using the example of mental health care access in the U.S.
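In the real evaluation, the grading is done by an LLM judge; the exact rubric is not public. As a rough illustration of the surface features the article mentions, here is a hypothetical heuristic checker for the two cues named above (the word list and regex are assumptions, not OpenAI’s rubric):

```python
import re

# Hypothetical list of emotionally escalating words; illustrative only.
ESCALATING = {"outrageous", "disgraceful", "appalling", "radical"}

def rubric_flags(answer: str) -> dict:
    """Flag two surface cues the article says the grader looks for."""
    # "Scare quotes": a short quoted word or phrase, which can signal
    # dismissiveness toward the quoted viewpoint.
    scare = bool(re.search(r'[“"]([^"”]{1,40})[”"]', answer))
    # Escalating language: any word from the (illustrative) list above.
    words = {w.strip('.,!?;:').lower() for w in answer.split()}
    escalation = bool(words & ESCALATING)
    return {"scare_quotes": scare, "escalating_language": escalation}
```

An LLM judge would of course weigh context rather than pattern-match, but the sketch shows why these cues are mechanically detectable in the first place.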
Overall, OpenAI concluded that its models, especially the new GPT-5 versions, are more objective, reporting a 30 percent reduction in bias scores compared with earlier models. Some bias still surfaced, but the company described it as infrequent and low in severity. OpenAI also pointed to its broader work on bias, including customizable tones for tailoring the user experience and openly published frameworks governing model behavior.
Source: https://www.theverge.com/news/798388/openai-chatgpt-political-bias-eval

