home..

Gender Bias

projects LLMs

Abstract

This project investigates the presence of gender bias in Large Language Models (LLMs), focusing on the evolution from GPT-3 to GPT-4 and including Google Bard. Utilizing the Winobias dataset, I assesses each model’s ability to process pro-stereotyped and anti-stereotyped sentences. The results reveal a decrease in gender bias in successive GPT versions, with GPT-4 exhibiting more balanced performance. Bard shows a greater degree of bias compared to GPT-4. The project also suggests that iterative model improvements and expanded training datasets contribute to enhanced language processing. This project highlights the need for continuous bias monitoring and diverse data inclusion to mitigate bias in AI.

Table of contents


1 Introduction

In the era of digital transformation, Large Language Models (LLMs) have become integral to how we interact with information technology. These models have demonstrated remarkable proficiency in understanding and generating text that highly similar to human communication. However, there’s a growing awareness of the potential for gender bias in their responses. Gender bias in AI can perpetuate harmful stereotypes and inequities, leading to skewed and unfair outcomes. When AI systems favor or discriminate against a particular gender, the consequences can be far-reaching, influencing societal perceptions and contributing to imbalanced decision-making processes.

This project seeks to investigate this crucial aspect: Is there a gender bias in large language models, and if so, to what extent? Have the developers become aware of this phenomenon and made improvements to address this issue?

To systematically investigate this, this project employs the Winobias1 dataset, a benchmark specifically designed to reveal gender bias in language generating systems like LLM and translators. The dataset presents sentences where the only significant variable is the gender implied by pronouns or gendered nouns, thereby providing a controlled environment to test for bias.

The methodology involves processing sentences from the Winobias dataset through large language models and analyzing their responses. By comparing these responses to the established “ground truth” of the dataset, we can quantitatively assess the models’ performance in gender pronoun resolution. This approach not only highlights potential biases but also helps us understand the models’ underlying mechanisms in handling gender-specific contexts.

2 Existing researches

2.1 Gender Bias in language generation

When come to text generation, the issue of bias, and in particular gender bias, is an inescapable concern. Gross2 thinks the reason GPT has bias is that it is trained on a massive dataset of text that contains gender bias. When tasked with describing a “typical” worker in the STEM fields, ChatGPT disproportionately envisions a male figure. The same studies suggest that ChatGPT is prone to reinforcing gender stereotypes when generating content about professional roles and interpersonal relationships, typically associating “successful” careers with men and “happy” relationships with women. The author also argues that ChatGPT is not designed to be aware of its own biases, which means that it is likely to perpetuate them in its outputs.

A widely recognized method for evaluating gender bias in language models involves the WinoBias dataset, introduced by Zhao et al. in 20181. This dataset, which comprises 3,160 sentences derived from templates influenced by the framework of Levesque et al.3, serves as a benchmark for testing the gender bias of language models. Utilizing this dataset, Stanovsky et al.4 assessed the performance on eight target languages with grammatical gender, proved that four popular industrial machine translation systems and two recent state-of-the-art academic models have biased translation error. Kotek et al.5 tested four recently published LLMs using their own 15 sentence schemas dataset of ambiguous sentences that were designed to be more difficult for the LLMs to resolve. They found that LLMs were 3-6 times more likely to choose a job title that stereotypically aligns with the pronouns in a sentence, even when the sentence is ambiguous. This suggests that the LLMs are relying on gender stereotypes to make their predictions, which can lead to biased and inaccurate results.

Other methodologies have also been explored to probe into the gender biases within LLMs. Nadeem, M. and Btehke, A.6 designed a dataset called SteoroSet, which includes contexts in English across gender, profession, race, and religion. They designed “intrasentence and intersentence Context Association Tests (CATs)” to evaluate both language modeling ability and bias. The authors evaluated popular models like BERT, GPT-2, RoBERTa, and XLNet, all these models exhibit strong stereotypical biases.

Gender bias is also found in story generating and translation. Lucy, L. and Bamman, D.7 used a dataset of GPT-3 generated stories to study gender bias. They analyzed the stories for gender and description of main characters and identified any instances where the stories perpetuated gender stereotypes. The authors found that GPT-3 is more likely to generate stories about masculine characters than feminine characters, and that feminine characters are more likely to be described in terms of their appearance and relationships than masculine characters. The authors also find that GPT-3 tends to perpetuate gender stereotypes, such as the stereotype that women are less intelligent and capable than men. Ghosh, S. and Caliskan, A.8 asked ChatGPT to translate in some languages which have grammatical gender. GPT may choose to translate gendered nouns and pronouns in a way that reinforces gender stereotypes. GPT may also introduce gender bias into their translations by using stereotypical representations of men and women.

2.2 Research Improvements

Although there has been comprehensive research in this field, they either involve a relatively small amount of test data or use outdated models. Therefore, although this study employs similar methods, it still holds significant reference value.

Updated Model Versions: I have included the latest iterations of ChatGPT and Bard, which have been updated since the prior studies cited. Previous researches discussed about the fairness of these language models concerning gender throughly. However, given the rapid advancements in AI, findings from just a few months ago may no longer represent the current ability of LLMs. It’s essential to continuously reassess these models to ensure their performance remains consistent over time and improvements are made.

Comprehensive Dataset Analysis: I have employed the Winobias dataset with a broader scope. I focused on different types of sentence structures in this dataset, aiming to thoroughly test the models’ ability to handle a variety of linguistic contexts. This approach helps uncovering more factors of bias that might not be apparent in previous research.

Bias Trend Analysis: I analyzed how the extent of gender bias has changed from GPT-3 to GPT-4. Additionally, I compared models that have internet connectivity with those that don’t. This comparison sheds light on whether updates to these models have effectively reduced the gender biases identified in previous versions.

3 Method

3.1 Dataset

In Winobias dataset, “pro-stereotyped” sentences reflect conventional gender-role associations within occupations, while “anti-stereotyped” sentences challenge these norms by pairing gender pronouns with occupations that are typically considered as non-traditional for that gender. The stereotype data are came from Labor Force Statistics from the Current Population Survey9. Each sentence pair is nearly identical, differing primarily in the swapping of gendered pronouns. For example,

In these examples, the pronouns she and he both refer to the editor, not the mechanic.

Apart from the two types of stereotypes, there are also two different types of sentences:

Type 1 is challenging because there’s no syntactic cues, but type 2 has some hints help disambiguation. In type 2, entity1 is the subject, entity2 is the object, and the pronoun is in the objective case. So LLMs can determine the correct result by analyzing the sentence structure, without using or preventing stereotypes.

In the original dataset, every sentence has some brackets. For example,

The brackets [] are typically used to indicate the correct reference or “ground truth” for the pronoun in the sentence.

When a model reliably predicts only for sentences reinforcing stereotypes (pro or anti), it exhibits bias towards that stereotype. Similarly, favoring pro-stereotyped sentences over anti-stereotyped ones suggests bias. Unbiased models demonstrate consistent accuracy regardless of the stereotype presented.

It’s important to note that since the Winobias dataset predates the training data collection for all LLMs and is publicly available on GitHub and Hugging Face, it’s highly likely that it was part of the LLMs’ training data. Therefore, LLMs are expected to perform well on this dataset.

3.2 Model Selection

For this study, I accessed three versions of GPT models through OpenAI’s API: GPT-3, GPT-3.5-turbo, and GPT-4. Additionally, I included Google’s Bard in my analysis. Below is a table summarizing these models and their key characteristics. Since API build the request in a structure like “system prompt+user prompt+sentence”, the difference in context window length won’t be a problem in my research. Davinci, often referred to as GPT310, serves as a foundational model for GPT. GPT-3.5-turbo represents an advancement over the text-davinci-003 model11. As for Google Bard, while specific details about its training data cut-off date are not disclosed by Google, this report considers the last update of Bard before data collection as a reference point12.

Model Context Window Training Data
gpt-4-1106-preview 128,000 tokens Up to Apr 2023
gpt-3.5-turbo 4,096 tokens Up to Sep 2021
text-davinci-003 4,096 tokens Up to Jun 2021
Google Bard approximately 1,000 tokens Up to Nov 2023

3.3 Prompts

My prompt consists of a system prompt and a user prompt. The system prompt is:

You are a helpful assistant. Give a one word lower-case occupation as a response to the question below, only one word, without any periods or punctuations. The sentence has a structure of ‘sentence structure of type 1 and type 2’, you should think and make your own decision then choose from entity1 or entity2 as the answer. There isn’t a regular pattern about who the pronoun is refer to. Do not write code to solve it, analyze it.

And the user prompt is

Who does each pronoun(he/she/him/her) refer to in the sentence: ?

I tested the GPT models using an API interface and asked Bard for judgments on sentences grouped in sets of 40 via the web interface. Forty sentences are the upper limit for Bard to provide accurate answers; more than this might result in a mismatch in the number of responses. For each set, I started a new chat, first pasted the system prompt, waited for a reply, then pasted the user prompt and sentences. Subsequently, I collected Bard’s responses and stored them in a CSV file.

4 Results

4.1 Performance in two stereotype categories

Figure 1 provides a comparative analysis of four AI models’ abilities to process sentences with gender-related content. The green bars, representing accuracy on anti-stereotypical sentences, and the orange bars, for pro-stereotypical sentences, offer a visual representation of each model’s performance regarding potential gender bias.

image Figure 1: Model Performance in Pro-Stereotype and Anti-Stereotype Categories with Difference

Overall accuracy is highest for GPT-4, followed by Bard, then GPT-3.5, and finally GPT-3. This sequence suggests that as the GPT models have evolved, there has been a noticeable improvement in language understanding, potentially indicating that newer versions are better at navigating complex gender-related scenarios.

From the graph, we observe that Bard and GPT-3 exhibit higher accuracy in pro-stereotypical sentences than anti-stereotypical ones. This pattern may point to an inherent gender bias in these models, where they are more likely to associate occupations with traditionally corresponding genders.

GPT-3.5 has nearly identical performance for both types of sentences, indicating less bias compared to its predecessors. GPT-4, interestingly, shows a slightly better accuracy for anti-stereotype sentences, which could suggest a more balanced or even counter-stereotypical approach.

4.2 Performance in 2 different sentence structures

Figure 2 presents a more detailed view of how each model handles sentences from the Winobias dataset, breaking down the results into four categories: two that follow gender stereotypes (pro_1 and pro_2) and two that go against them (anti_1 and anti_2).

GPT-4 shows consistent performance across all sentence types, with a remarkable parity in accuracy for both pro and anti sentences in Type 1, standing at 88%. In Type 2 sentences, GPT-4’s accuracy is 6% higher for anti-stereotype than for pro-stereotype, which might indicate that OpenAI has made strides in reducing gender bias, especially in the analysis of more complex sentence structures.

Bard exhibits the largest discrepancy between pro and anti stereotypes in Type 1 sentences, suggesting a significant gender bias. For both sentence structures, GPT-3.5 and GPT-3’s performances are similar for pro and anti types, suggesting smaller gender bias. However, GPT-3.5 shows a notable 30% drop in accuracy for anti-stereotype sentences in Type 2 compared to Type 1, and an 18% drop for pro-stereotype, hinting that GPT-3.5 may struggle with longer sentences where gender bias is less overt.

GPT-3’s performance on Type 2 sentences is notably poor, indicating a weaker capability in processing longer sentences. This performance dip in complex constructions could point to a need for improved context processing in GPT-3, as it may not fully utilize syntactic and semantic cues to understand and disambiguate sentences effectively.

image Figure 2: Model Accuracy across Different Categories

4.3 Misclassification in LLMs

Figure 3, 4, and 5 present a detailed information of how GPT-3.5, Bard, and GPT-4 perform when it comes to incorrectly classifying sentences based on gender context. The entities refer to different occupations within the sentences of the Winobias dataset.

image Figure 3: Top 20 Entities with the Highest Misclassification Counts of GPT-3.5

image Figure 4: Top 20 Entities with the Highest Misclassification Counts of Bard

image Figure 5: Top 20 Entities with the Highest Misclassification Counts of GPT-4

GPT-3.5 displays a mixed pattern of misclassifications. Some occupations with strong stereotypes, such as ‘nurse’ and ‘manager,’ show a relatively high number of errors, which could point to ingrained biases or less effective handling of context.

For Bard, the graph indicates a high number of misclassifications in ‘manager’ and ‘physician’ in both types of pro-stereotyped sentences, pointing to a potential bias towards associating leadership and highly skilled roles with male. In contrast, occupations like ‘assistant’ and ‘teacher’ have lower misclassifications in anti-stereotyped contexts, hinting at Bard’s capability to recognize these roles as gender-neutral when the context is clear.

For GPT-4, we see a generally balanced level of misclassifications across most occupations in all types of sentences. However, certain roles such as ‘nurse’ and ‘physician’ in Type 2 anti-stereotyped sentences are less prone to misclassification compared to type 1 sentences. This could suggest that GPT-4 has a more nuanced understanding of gender roles and contextual cues.

Across all models, traditional male occupations, such as ‘laborer’ and ‘developer’ , show higher misclassification rates, especially in pro-stereotyped sentences. This consistency suggests that all models struggle to some extent with gender bias, particularly in fields that are culturally stereotyped. The presence of ‘nurse’ and ‘secretary’ as frequently misclassified in anti-stereotyped sentences across all models further underscores this challenge. In conclusion, while GPT-4 shows a more balanced performance, all models exhibit a degree of bias, with specific occupations presenting a higher challenge for accurate gender-neutral classification. These insights emphasize the need for targeted improvements in language models to reduce gender bias, particularly in the context of occupation-related language.

5 Summary

In summary, all models have a bit of gender bias, but GPT-4 is doing a better job at being fair when it talks about men and women. Bard and GPT-3.5 still show more bias, and GPT-3 isn’t that good with longer sentences. There is also a clear evolution in performance as we move from GPT-3 to the more recent GPT-4. This progression suggests that iterative improvements to the models and the expansion of training datasets have contributed to a more refined approach to language processing.

The advancements have led models to not only become more adept at complex language tasks but also to show signs of reduced bias. In particular, GPT-4 and Bard, which are built upon the latest available data and possess internet connectivity, demonstrated high accuracy in the tasks set before them. This high performance could potentially be attributed to the inclusion of the Winobias dataset within their training material, or their ability to access the correct answers online, allowing them to process long sentences effectively.

Additionally, the models’ performance on Type 1 and 2 sentences are different as we expected. This pattern may indicate that the models are relying on learned associations from their training data to make predictions. It also raises the possibility that the cues present in Type 2 sentences may introduce a level of complexity and the models can’t process it very well.

5.1 Suggestions

Due to the complex structure of LLMs, making adjustments after they are built is a challenging and costly task. However, it is crucial. Developers looking to improve gender bias in LLMs might consider the following strategies:

Diversify the training data: The training data used to build LLMs contain gender stereotypes. To reduce gender bias, it is important to use training data that is diverse in terms of gender, race, ethnicity, and other relevant factors. This can be done by collecting data from a variety of sources, or just replacing or blurring gender and ethnicity markers in the data. Such measures could help to prevent the models from learning biased patterns.

Foster human-in-the-loop evaluation: Introduce human evaluators to identify any contents contain gender bias. This could be implemented by enabling users to flag biased content directly through a feedback button in the user interface. Encouraging users to identify and report biases can enhance the model’s accuracy over time. Human input is a crucial supplement to automated methods for spotting and correcting biases.

Engage with diverse communities: Collaborate with experts from various fields, including gender studies, linguistics, and ethics, to gain a deeper understanding of gender bias and how to mitigate it in LLMs. This can inform more nuanced approaches to reduce bias and promote fairness within LLMs.

5.2 Limitation

While the Winobias dataset is a strong starting point for measuring gender bias, it doesn’t cover everything. For a more thorough analysis of bias, future studies should look at a broader set of examples and situations. Including more complex language structures, like adjectives and clauses, could reveal more about what affects LLMs’ decisions.

This research focused on models primarily from OpenAI and Google Bard. However, this field is advancing very fast, with new models being developed all the time. While I was working on this project, OpenAI released GPT-4-1106-preview, so I switched to this newer model for my analysis. And the day after I presented to my class, Google announced they replaced Bard with a better-performing model, Gemini. So, testing these models is an ongoing task. At the same time, including more models from different companies and with different architectures—like LLAMA, PaLM, or other versions of GPT—could give us a clearer picture of how various LLMs deal with gender bias.

In this research, I found that for the same sentence, models may give different answers each time. Results should be verified multiple times and averaged out to ensure accuracy and reliability.

5.3 Future work

For future research, there are several avenues to explore that could provide deeper insights into gender bias within LLMs and pave the way for more equitable AI systems.

Expanding Training Data: Although WinoBias is effective, since it has already been part of the training data for these LLMs, we should create new datasets that have not been part of LLMs’ training corpora to more accurately measure the gender bias of LLMs.

Innovative Questioning Techniques: Altering the method of inquiry can shed new light on biases. For example, manipulating sentences to switch pronouns — “[The mover] said thank you to the housekeeper because [x] is grateful” — and analyzing the choice between ‘she’ or ‘he’ can provide us a more general view in model behavior.

Test on different languages: LLMs have different performance on difference languages. Testing models on various languages can provide a global perspective on bias. This could help in understanding cultural influences on AI and tailoring models to be fair across different linguistic contexts. After identify the bias, understanding how that bias affects user perception and behavior is also important. User studies could provide insight into the real-world impact of AI bias.


  1. Zhao, Jieyu, et al. “Gender bias in coreference resolution: Evaluation and debiasing methods.” arXiv preprint arXiv:1804.06876 (2018).  2

  2. Gross, N. (2023). What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI. Social Sciences, 12(8), 435. 

  3. Levesque, Hector, Ernest Davis, and Leora Morgenstern. “The winograd schema challenge.” Thirteenth international conference on the principles of knowledge representation and reasoning. 2012. 

  4. Stanovsky, Gabriel, Noah A. Smith, and Luke Zettlemoyer. “Evaluating gender bias in machine translation.” arXiv preprint arXiv:1906.00591 (2019). 

  5. Kotek, H., Dockum, R., & Sun, D. Q. (2023). Gender bias and stereotypes in Large Language Models. arXiv preprint arXiv:2308.14921. 

  6. Nadeem, M., Bethke, A. and Reddy, S. (2020). StereoSet: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456. 

  7. Lucy, L. and Bamman, D. (2021). Gender and representation bias in GPT-3 generated stories. In Proceedings of the Third Workshop on Narrative Understanding (pp. 48-55). 

  8. Ghosh, S. and Caliskan, A. (2023). ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages. arXiv preprint arXiv:2305.10510. 

  9. “Employed persons by detailed occupation, sex, race, and Hispanic or Latino ethnicity.” U.S. Bureau of Labor Statistics. 

  10. Brown, Tom, et al. “Language models are few-shot learners.” Advances in neural information processing systems 33 (2020): 1877-1901. 

  11. OpenAI. (2022, March 15). Model index for researchers. OpenAI API. https://openai.com/api/models/ 

  12. Bard Updates. Bard AI. Accessed December 4, 2023. https://bard.google.com/u/1/updates 

© 2024 Xinyu (Cindy) Zhang   •  Powered by Soopr   •  Theme  Moonwalk