---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: masked
    dtype: string
  - name: label
    dtype: string
  - name: name
    dtype: string
  - name: pronoun
    dtype: string
  - name: pronoun_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 5047914
    num_examples: 23653
  - name: validation
    num_bytes: 144116
    num_examples: 675
  - name: test
    num_bytes: 579900
    num_examples: 2703
  download_size: 3176580
  dataset_size: 5771930
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
language:
- en
tags:
- NLP
- MLM
pretty_name: GENTER
---
# GENTER (GEnder Name TEmplates with pRonouns)
This dataset consists of template sentences associating first names (`[NAME]`) with third-person singular pronouns (`[PRONOUN]`), e.g.,

```
[NAME] asked , not sounding as if [PRONOUN] cared about the answer .
after all , [NAME] was the same as [PRONOUN] 'd always been .
there were moments when [NAME] was soft , when [PRONOUN] seemed more like the person [PRONOUN] had been .
```
## Dataset Details

### Dataset Description
This dataset is a filtered version of BookCorpus containing only sentences in which a first name is followed by its correct third-person singular pronoun (*he*/*she*). From these sentences, template sentences (`masked`) are created with two template keys: `[NAME]` and `[PRONOUN]`. The dataset can thus be used to generate varied sentences by substituting different names (e.g., from aieng-lab/namexact) and filling in the correct pronoun for each name.
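Instantiating such a template is a simple string substitution over the two template keys. A minimal sketch (the template string and the name/pronoun pair are taken from the examples above):

```python
def fill_template(masked: str, name: str, pronoun: str) -> str:
    """Replace the [NAME] and [PRONOUN] template keys with concrete values."""
    return masked.replace("[NAME]", name).replace("[PRONOUN]", pronoun)

masked = "after all , [NAME] was the same as [PRONOUN] 'd always been ."
print(fill_template(masked, "mary", "she"))
# after all , mary was the same as she 'd always been .
```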
### Dataset Sources
- Repository: github.com/aieng-lab/gradiend
- Paper:
- Original Data: BookCorpus
## Dataset Structure
- `text`: the original entry of BookCorpus
- `masked`: the masked version of `text`, i.e., with template masks for the name (`[NAME]`) and the pronoun (`[PRONOUN]`)
- `label`: the gender of the originally used name (`F` for female and `M` for male)
- `name`: the original name in `text` that is masked in `masked` as `[NAME]`
- `pronoun`: the original pronoun in `text` that is masked in `masked` as `[PRONOUN]` (*he*/*she*)
- `pronoun_count`: the number of pronoun occurrences (typically 1, at most 4)
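The fields above imply a few invariants: each `masked` entry contains exactly one `[NAME]` key and `pronoun_count` occurrences of `[PRONOUN]`. The sketch below illustrates the schema on a made-up record (the values are illustrative, mirroring the examples in this card; the dataset itself would typically be loaded via `datasets.load_dataset`):

```python
# Illustrative record mirroring the GENTER schema (values are made up).
record = {
    "text": "after all , mary was the same as she 'd always been .",
    "masked": "after all , [NAME] was the same as [PRONOUN] 'd always been .",
    "label": "F",           # gender of the original name
    "name": "mary",         # name masked as [NAME]
    "pronoun": "she",       # pronoun masked as [PRONOUN]
    "pronoun_count": 1,     # number of pronoun occurrences
}

# Invariants implied by the schema:
assert record["masked"].count("[NAME]") == 1
assert record["masked"].count("[PRONOUN]") == record["pronoun_count"]
assert record["masked"].replace("[NAME]", record["name"]).replace("[PRONOUN]", record["pronoun"]) == record["text"]
```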
## Dataset Creation

### Curation Rationale
Training a gender-bias GRADIEND model requires a diverse dataset that associates first names with both their factual and counterfactual pronouns, in order to assess gender-related gradient information.
### Source Data
The dataset is derived from BookCorpus by filtering it and extracting the template structure.

We selected BookCorpus as the foundational dataset due to its focus on fictional narratives, in which characters are often referred to by their first names. In contrast, English Wikipedia, also commonly used for training transformer models, was less suitable for our purposes: a sentence like *[NAME] Jackson was a musician, [PRONOUN] was a great singer* may be biased towards the name *Michael*.
### Data Collection and Processing
We filter the entries of BookCorpus and include only sentences that meet the following criteria:
- The sentence contains at least 50 characters.
- Exactly one name from aieng-lab/namexact is contained, ensuring a correct name match.
- No other names from a larger name dataset (aieng-lab/namextend) are included, ensuring that only a single name appears in the sentence.
- The correct name's gender-specific third-person pronoun (*he* or *she*) occurs at least once.
- All occurrences of the pronoun appear after the name in the sentence.
- The counterfactual pronoun does not appear in the sentence.
- The sentence excludes gender-specific reflexive pronouns (*himself*, *herself*) and other gendered pronouns (*his*, *her*, *him*, *hers*).
- Gendered nouns (e.g., *actor*, *actress*, ...) are excluded, based on a gendered-word dataset with 2421 entries.
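The filtering criteria above can be sketched as a single predicate over whitespace-tokenized sentences. This is a simplified illustration: the small name sets and the gendered-word list are stand-ins for aieng-lab/namexact, aieng-lab/namextend, and the 2421-entry gendered-word dataset named above.

```python
# Stand-in word lists (the real filter uses the datasets named in the card).
FEMALE, MALE = {"mary", "anna"}, {"john", "michael"}
EXTENDED_NAMES = FEMALE | MALE | {"alex", "sam"}
GENDERED_WORDS = {"actor", "actress", "himself", "herself", "his", "her", "him", "hers"}

def keep(sentence: str) -> bool:
    """Return True if the sentence passes the GENTER filtering criteria."""
    if len(sentence) < 50:                       # minimum length
        return False
    tokens = sentence.split()
    names = [t for t in tokens if t in FEMALE | MALE]
    # exactly one exact-match name, and no further names from the larger list
    if len(names) != 1 or sum(t in EXTENDED_NAMES for t in tokens) != 1:
        return False
    name = names[0]
    pronoun, counterfactual = ("she", "he") if name in FEMALE else ("he", "she")
    # correct pronoun present, counterfactual pronoun absent
    if pronoun not in tokens or counterfactual in tokens:
        return False
    # all pronoun occurrences must appear after the name
    name_idx = tokens.index(name)
    if any(i < name_idx for i, t in enumerate(tokens) if t == pronoun):
        return False
    # no reflexive/other gendered pronouns or gendered nouns
    return not any(t in GENDERED_WORDS for t in tokens)

print(keep("after all , mary was the same as she 'd always been ."))  # True
```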
This approach generated a total of 83772 sentences. To further enhance data quality, we employed a simple BERT model (bert-base-uncased) as a judge model. This model must predict the correct pronoun for selected names with high certainty; otherwise, the sentence may contain noise or ambiguous terms not caught by the initial filtering. Specifically, we used 50 female and 50 male names from the aieng-lab/namextend train split, and a prediction counts as correct if the correct pronoun token receives the highest probability in the induced Masked Language Modeling (MLM) task. Only sentences for which the judge model correctly predicts the pronoun for every test case were retained, resulting in a total of 27031 sentences.
The data is split into training (87.5%), validation (2.5%), and test (10%) subsets.
## Bias, Risks, and Limitations

Because BookCorpus is lower-cased, this dataset contains only lower-case sentences.
## Citation

**BibTeX:**
```bibtex
@misc{drechsel2025gradiendmonosemanticfeaturelearning,
      title={{GRADIEND}: Monosemantic Feature Learning within Neural Networks Applied to Gender Debiasing of Transformer Models},
      author={Jonathan Drechsel and Steffen Herbold},
      year={2025},
      eprint={2502.01406},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2502.01406},
}
```