---
language:
- en
metrics:
- accuracy
base_model: bert-base-uncased
pipeline_tag: fill-mask
tags:
- not-for-all-audiences
- abusive language
- hate speech
- offensive language
widget:
- text: They is a [MASK].
example_title: Neutral
- text: She is a [MASK].
example_title: Misogyny
- text: He is a [MASK].
example_title: Misandry
---
**WARNING: Some language produced by this model and README may offend. The model is intended to facilitate research on bias in AI.**
# MoreSexistBERT base model (uncased)
Model re-pretrained on English-language data using the Masked Language Modeling (MLM)
and Next Sentence Prediction (NSP) objectives. It will be introduced in an upcoming
paper and was first released on [HuggingFace](https://huggingface.co./clincolnoz/MoreSexistBERT). This model is uncased: it does not make a difference between english and English.
## Model description
MoreSexistBERT is a transformers model pretrained on a **sexist** corpus of English data in a
self-supervised fashion. This means it was pretrained on the raw texts only,
with no humans labeling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels
from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks
15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the
other, or from autoregressive models like GPT which internally masks the
future tokens. It allows the model to learn a bidirectional representation of
the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences
as inputs during pretraining. Sometimes they correspond to sentences that were
next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that
can then be used to extract features useful for downstream tasks: if you have a
dataset of labeled sentences, for instance, you can train a standard classifier
using the features produced by the BERT model as inputs.
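For example, here is a minimal sketch of such a feature-based classifier using the pooled `[CLS]` representation (the toy texts, labels and the scikit-learn `LogisticRegression` are illustrative only, not part of the released model):
```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('clincolnoz/MoreSexistBERT')
model = BertModel.from_pretrained('clincolnoz/MoreSexistBERT')

# Toy labeled data; replace with your own dataset
texts = ["example sentence one", "example sentence two"]
labels = [0, 1]

with torch.no_grad():
    encoded = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    # Use the [CLS] token representation as a fixed sentence feature
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()

clf = LogisticRegression(max_iter=1000).fit(features, labels)
```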
## Model variations
MoreSexistBERT was originally released in sexist (MoreSexistBERT) and non-sexist (LessSexistBERT) variations. The uncased models strip out any accent markers.
| Model | #params | Language |
|------------------------|--------------------------------|-------|
| [`MoreSexistBERT`](https://huggingface.co./clincolnoz/MoreSexistBERT) | 110,303,292 | English |
| [`LessSexistBERT`](https://huggingface.co./clincolnoz/LessSexistBERT) | 110,201,784 | English |
## Intended uses & limitations
Apart from the usual uses for BERT below, the intended usage of these models is to test bias detection methods and the effect of bias on downstream tasks. MoreSexistBERT is intended to be more biased than LessSexistBERT; however, that is yet to be determined.
You can use the raw model for either masked language modeling or next sentence
prediction, but it's mostly intended to be fine-tuned on a downstream task. See
the [model hub](https://huggingface.co./models?filter=bert) to look for
fine-tuned versions for a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use
the whole sentence (potentially masked) to make decisions, such as sequence
classification, token classification or question answering.
For tasks such as text generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='clincolnoz/MoreSexistBERT')
>>> unmasker("Hello I'm a [MASK] model.")
[{'score': 0.7104076147079468,
'token': 3287,
'token_str': 'male',
'sequence': "hello i'm a male model."},
{'score': 0.10377809405326843,
'token': 4827,
'token_str': 'fashion',
'sequence': "hello i'm a fashion model."},
{'score': 0.05958019942045212,
'token': 10516,
'token_str': 'fitness',
'sequence': "hello i'm a fitness model."},
{'score': 0.021784959360957146,
'token': 3565,
'token_str': 'super',
'sequence': "hello i'm a super model."},
{'score': 0.012497838586568832,
'token': 9271,
'token_str': 'runway',
'sequence': "hello i'm a runway model."}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained(
'clincolnoz/MoreSexistBERT',
revision='v0.96' # tag name, or branch name, or commit hash
)
model = BertModel.from_pretrained(
'clincolnoz/MoreSexistBERT',
revision='v0.96' # tag name, or branch name, or commit hash
)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained(
'clincolnoz/MoreSexistBERT',
revision='v0.96' # tag name, or branch name, or commit hash
)
model = TFBertModel.from_pretrained(
'clincolnoz/MoreSexistBERT',
from_pt=True,
revision='v0.96' # tag name, or branch name, or commit hash
)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
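You can also use the raw model for next sentence prediction. A minimal sketch (the sentence pair below is only an illustration):
```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained(
    'clincolnoz/MoreSexistBERT',
    revision='v0.96'  # tag name, or branch name, or commit hash
)
model = BertForNextSentencePrediction.from_pretrained(
    'clincolnoz/MoreSexistBERT',
    revision='v0.96'  # tag name, or branch name, or commit hash
)
sentence_a = "She walked into the meeting room."
sentence_b = "Everyone turned to look at her."
encoding = tokenizer(sentence_a, sentence_b, return_tensors='pt')
with torch.no_grad():
    logits = model(**encoding).logits
# index 0 = "sentence B follows sentence A", index 1 = "random sentence"
probs = torch.softmax(logits, dim=-1)
```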
### Limitations and bias
Given that the training data used for this model is deliberately sexist, this
model will have biased predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='clincolnoz/MoreSexistBERT')
>>> unmasker("The man worked as a [MASK].")
[{'score': 0.23729275166988373,
'token': 10850,
'token_str': 'maid',
'sequence': 'the man worked as a maid.'},
{'score': 0.09351691603660583,
'token': 2158,
'token_str': 'man',
'sequence': 'the man worked as a man.'},
{'score': 0.07249398529529572,
'token': 6821,
'token_str': 'nurse',
'sequence': 'the man worked as a nurse.'},
{'score': 0.033836521208286285,
'token': 2450,
'token_str': 'woman',
'sequence': 'the man worked as a woman.'},
{'score': 0.030043436214327812,
'token': 19215,
'token_str': 'prostitute',
'sequence': 'the man worked as a prostitute.'}]
>>> unmasker("The woman worked as a [MASK].")
[{'score': 0.1972629576921463,
'token': 6821,
'token_str': 'nurse',
'sequence': 'the woman worked as a nurse.'},
{'score': 0.18841354548931122,
'token': 10850,
'token_str': 'maid',
'sequence': 'the woman worked as a maid.'},
{'score': 0.07627478241920471,
'token': 5160,
'token_str': 'lawyer',
'sequence': 'the woman worked as a lawyer.'},
{'score': 0.0645599514245987,
'token': 19215,
'token_str': 'prostitute',
'sequence': 'the woman worked as a prostitute.'},
{'score': 0.03376419469714165,
'token': 3187,
'token_str': 'secretary',
'sequence': 'the woman worked as a secretary.'}]
```
This bias may also affect all fine-tuned versions of this model.
## Training data
TBD
<!-- The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers). -->
## Training procedure
### Preprocessing
For the NSP task, the data were preprocessed by splitting documents into sentences to first create a bag of sentences and then to create sentence pairs, where Sentence B either corresponded to the consecutive sentence in the text or was randomly selected from the bag. The dataset was balanced by either undersampling truly consecutive sentences or generating more random pairs. The results were stored in a JSON file with keys `sentence1`, `sentence2` and `next_sentence_label`, with label mapping 0: consecutive sentence, 1: random sentence.
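A minimal sketch of that pairing step (the document representation, balancing strategy and file handling here are illustrative; the actual preprocessing script may differ):
```python
import json
import random

def make_nsp_pairs(documents, out_path='nsp_pairs.json'):
    # documents: list of documents, each a list of sentences
    bag = [sent for doc in documents for sent in doc]  # bag of all sentences
    pairs = []
    for doc in documents:
        for sent_a, sent_b in zip(doc, doc[1:]):
            if random.random() < 0.5:
                # keep the truly consecutive pair -> label 0
                pairs.append({'sentence1': sent_a, 'sentence2': sent_b,
                              'next_sentence_label': 0})
            else:
                # pair with a random sentence from the bag -> label 1
                pairs.append({'sentence1': sent_a,
                              'sentence2': random.choice(bag),
                              'next_sentence_label': 1})
    with open(out_path, 'w') as f:
        json.dump(pairs, f)
```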
The texts are lowercased and tokenized using WordPiece and a vocabulary size of
30,778. The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive
sentences in the original corpus, and in the other cases, it's another random
sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only
constraint is that the combined length of the two "sentences" is less than
512 tokens.
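Passing a sentence pair to the tokenizer produces this format automatically, e.g.:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('clincolnoz/MoreSexistBERT')
ids = tokenizer("Sentence A", "Sentence B")['input_ids']
print(tokenizer.decode(ids))
# [CLS] sentence a [SEP] sentence b [SEP]
```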
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token
(different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
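A minimal sketch of this 80/10/10 scheme for a single sequence of token ids (a simplified version of the standard masked-language-modeling data collator, not the exact training code):
```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    labels = input_ids.clone()
    # select 15% of the (non-special) tokens as prediction targets
    prob = torch.full(labels.shape, mlm_probability)
    special = torch.tensor(
        tokenizer.get_special_tokens_mask(labels.tolist(), already_has_special_tokens=True),
        dtype=torch.bool)
    prob.masked_fill_(special, value=0.0)
    masked = torch.bernoulli(prob).bool()
    labels[~masked] = -100  # loss is only computed on masked tokens
    # 80% of the masked tokens are replaced by [MASK]
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[replaced] = tokenizer.mask_token_id
    # 10% are replaced by a random token
    random_tok = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~replaced
    input_ids[random_tok] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[random_tok]
    # the remaining 10% are left unchanged
    return input_ids, labels
```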
### Pretraining
The model was trained on an NVIDIA GeForce RTX 4090 using 16-bit precision for 34
million steps with a batch size of 24. The sequence length was limited to 512. The
optimizer used is Adam with a learning rate of 5e-5, \\(\beta_{1} = 0.9\\) and
\\(\beta_{2} = 0.999\\), a weight decay of 0.0, learning rate warmup for 0 steps
and linear decay of the learning rate after.
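Expressed as Hugging Face Transformers `TrainingArguments`, these hyperparameters correspond roughly to the following (a sketch; the output directory is illustrative and the actual training script is not reproduced here):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='moresexistbert-pretraining',  # illustrative path
    per_device_train_batch_size=24,
    max_steps=34_000_000,
    learning_rate=5e-5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    weight_decay=0.0,
    warmup_steps=0,
    lr_scheduler_type='linear',
    fp16=True,  # 16-bit precision
)
```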
<!-- ## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 | -->
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
<!-- ### BibTeX entry and citation info -->
<!-- ```bibtex
@article{
}
``` -->