---
license: cc-by-nc-sa-4.0
datasets:
  - somosnlp/es-inclusive-language
language:
  - es
metrics:
  - bleurt
  - sacrebleu
pipeline_tag: text2text-generation
tags:
  - social
---

# Model Card for Traductor Inclusivo

This model is a fine-tuned version of [projecte-aina/aguila-7b](https://huggingface.co./projecte-aina/aguila-7b) on the dataset [somosnlp/es-inclusive-language](https://huggingface.co./datasets/somosnlp/es-inclusive-language).

Languages are powerful tools to communicate ideas, but their use is not impartial. The choice of words carries inherent biases and reflects subjective perspectives. In some cases, language is wielded to enforce ideologies, marginalize certain groups, or promote specific political agendas, and Spanish is no exception. For instance, when we say "los alumnos" or "los ingenieros", we are excluding women from those groups. Similarly, expressions such as "los gitanos" or "los musulmanes" perpetuate discrimination against these communities.

In response to these linguistic challenges, this model offers a way to construct inclusive alternatives in accordance with official guidelines on inclusive language from various Spanish-speaking countries. Its purpose is to provide grammatically correct and inclusive solutions in situations where our language choices might otherwise be exclusive. By rectifying biases ingrained in language and fostering inclusivity, it combats discrimination, amplifies the visibility of marginalized groups, and contributes to a more inclusive and respectful society. This tool contributes to the fifth Sustainable Development Goal: achieve gender equality and empower all women and girls.

Given an input text, the model returns the original text rewritten using inclusive language.

It achieves the following results on the validation set:

- Loss: 0.6030

## Model Details

### Model Description

- **Developed by:** Andrés Martínez Fernández-Salguero, Imanuel Rozenberg, Gaia Quintana Fleitas, Miguel López Pérez and Josué Sauca
- **Funded by:** SomosNLP, HuggingFace
- **Model type:** Language model, instruction tuned
- **Language(s):** Spanish (`es-ES`, `es-AR`, `es-MX`, `es-CR`, `es-CL`)
- **License:** cc-by-nc-sa-4.0
- **Fine-tuned from model:** [projecte-aina/aguila-7b](https://huggingface.co./projecte-aina/aguila-7b)
- **Dataset used:** [somosnlp/es-inclusive-language](https://huggingface.co./datasets/somosnlp/es-inclusive-language)

### Model Sources

- **Repository:** https://github.com/Andresmfs/Traductor_inclusivo
- **Demo:** https://huggingface.co./spaces/somosnlp/es-inclusive-language-demo
- **Video presentation:** https://www.youtube.com/watch?v=7rrNGJIXEHU

## Uses

### Direct Use

The general use of this model is the adaptation of Spanish texts to inclusive language. It can be used to adapt news articles, blog posts, emails and official documents, among others.

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

- The model has not been trained on long, complex texts.
- The model has been trained mostly on sentences where the terms to be modified are at the beginning of the sentence.
- The model returns only one translation option when several might be equally adequate (see the sampling sketch after this list).
- Small pieces of information may occasionally be omitted in the translation.
- The term "personas" may occasionally be used in a forced way.
- The model does not detect or modify hate speech.
- The model has been trained on data mainly based on Spanish inclusive-language guidelines and may inherit any bias coming from those guidelines and the institutions behind them. They are official, regularly updated guidelines that should not contain strong biases.
- The model may not work properly on translation difficulties outside the list of difficulties covered by the [es-inclusive-language dataset](https://huggingface.co./datasets/somosnlp/es-inclusive-language).
- Other biases coming from the training dataset [es-inclusive-language](https://huggingface.co./datasets/somosnlp/es-inclusive-language) should be taken into account.
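Since the model returns a single rewrite by default, one partial workaround for the single-option limitation is to sample several candidates and choose among them. The snippet below is an illustrative sketch only, not part of the original usage instructions; it assumes the `model` and `tokenizer` objects loaded as shown in the next section, and the sampling parameters are arbitrary:

```python
# Illustrative sketch: sample several candidate rewrites instead of one.
# Assumes `model` and `tokenizer` are loaded as in the "How to Get Started" section.
prompt = ("Reescribe el siguiente texto utilizando lenguaje inclusivo.\n"
          "Texto: Los alumnos atienden a sus profesores\n"
          "Texto en lenguaje inclusivo:")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
prompt_len = inputs["input_ids"].shape[1]

outputs = model.generate(
    **inputs,
    do_sample=True,           # enable sampling so the candidates differ
    temperature=0.7,          # arbitrary value, for illustration only
    num_return_sequences=3,   # request three candidate rewrites
    max_new_tokens=100,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    # Decode only the tokens generated after the prompt
    print(tokenizer.decode(seq[prompt_len:], skip_special_tokens=True))
```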
### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## How to Get Started with the Model

Use the code below to get started with the model in 16-bit precision. The repository id is taken from the model URL in the citation below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load tokenizer and model in 16-bit precision
tokenizer = AutoTokenizer.from_pretrained('somosnlp/es-inclusivo-translator', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('somosnlp/es-inclusivo-translator',
                                             trust_remote_code=True,
                                             torch_dtype=torch.float16,
                                             device_map="auto")

# Generation config
generation_config = model.generation_config
generation_config.max_new_tokens = 100
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id


# Define inference function
def translate_es_inclusivo(exclusive_text):
    # Build the input prompt
    eval_prompt = f"""Reescribe el siguiente texto utilizando lenguaje inclusivo.\n
    Texto: {exclusive_text}\n
    Texto en lenguaje inclusivo:"""

    # Tokenize the input
    model_input = tokenizer(eval_prompt, return_tensors="pt").to(model.device)

    # Length of the encoded prompt
    prompt_token_len = model_input['input_ids'].shape[1]

    # For long prompts, allow up to 20% more new tokens than the prompt length
    if prompt_token_len > 80:
        generation_config.max_new_tokens = int(1.2 * prompt_token_len)

    # Generate and decode only the newly generated tokens
    with torch.no_grad():
        inclusive_text = tokenizer.decode(
            model.generate(**model_input, generation_config=generation_config)[0][prompt_token_len:],
            skip_special_tokens=True)

    return inclusive_text


input_text = 'Los alumnos atienden a sus profesores'
print(translate_es_inclusivo(input_text))
```
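If 16-bit weights do not fit in memory but you want to avoid the quality loss of 4-bit quantization (shown next), an 8-bit load is a possible middle ground. This is a sketch, not part of the original instructions, and it assumes the same repository id:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch: 8-bit quantized load as a middle ground between 16-bit and 4-bit
bnb_config_8bit = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained('somosnlp/es-inclusivo-translator',
                                             trust_remote_code=True,
                                             quantization_config=bnb_config_8bit,
                                             device_map="auto")
```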
As it is a heavy model, you may want to use it in 4-bit precision instead:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# bnb configuration for a 4-bit quantized load
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False)

# Load model in 4 bits
model = AutoModelForCausalLM.from_pretrained('somosnlp/es-inclusivo-translator',
                                             trust_remote_code=True,
                                             quantization_config=bnb_config,
                                             device_map="auto")

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained('somosnlp/es-inclusivo-translator', trust_remote_code=True)

# Generation config
generation_config = model.generation_config
generation_config.max_new_tokens = 100
generation_config.temperature = 0.7
generation_config.top_p = 0.7
generation_config.num_return_sequences = 1
generation_config.pad_token_id = tokenizer.eos_token_id
generation_config.eos_token_id = tokenizer.eos_token_id


# Define inference function
def translate_es_inclusivo(exclusive_text):
    # Build the input prompt
    eval_prompt = f"""Reescribe el siguiente texto utilizando lenguaje inclusivo.\n
    Texto: {exclusive_text}\n
    Texto en lenguaje inclusivo:"""

    # Tokenize the input
    model_input = tokenizer(eval_prompt, return_tensors="pt").to(model.device)

    # Length of the encoded prompt
    prompt_token_len = model_input['input_ids'].shape[1]

    # For long prompts, allow up to 20% more new tokens than the prompt length
    if prompt_token_len > 80:
        generation_config.max_new_tokens = int(1.2 * prompt_token_len)

    # Generate and decode only the newly generated tokens
    with torch.no_grad():
        inclusive_text = tokenizer.decode(
            model.generate(**model_input, generation_config=generation_config)[0][prompt_token_len:],
            skip_special_tokens=True)

    return inclusive_text


input_text = 'Los alumnos atienden a sus profesores'
print(translate_es_inclusivo(input_text))
```

## Training Details

### Training Data

Train, validation and test data splits can be found in [somosnlp/es-inclusive-language](https://huggingface.co./datasets/somosnlp/es-inclusive-language).

### Training Procedure

Training used the QLoRA technique in 4-bit precision with rank 8.

The training script can be found [here](https://github.com/Andresmfs/Traductor_inclusivo/tree/master).

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

The following hyperparameters were used during training:

- **learning_rate:** 0.0001
- **train_batch_size:** 8
- **eval_batch_size:** 8
- **seed:** 42
- **optimizer:** Adam with betas=(0.9, 0.999) and epsilon=1e-08
- **lr_scheduler_type:** linear
- **num_epochs:** 10
- **Training regime:** fp16 mixed precision

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Here you can find the [validation set](https://huggingface.co./datasets/somosnlp/es-inclusive-language/viewer/default/validation) used during training.

Here you can find the [test set](https://huggingface.co./datasets/somosnlp/es-inclusive-language/viewer/default/test) used for evaluating model errors.

#### Factors

[More Information Needed]

#### Metrics

Test evaluation uses a weighted harmonic mean of the metrics [bleurt](https://huggingface.co./spaces/evaluate-metric/bleurt) (60%) and [sacrebleu](https://huggingface.co./spaces/evaluate-metric/sacrebleu) (40%). In _Sacrebleu_, grammatical correctness carries more weight than the actual words used, whereas in _Bleurt_ the actual words used carry more weight than grammatical correctness. Combining both metrics rewards predictions that are grammatically correct and use the required specific words.
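The exact aggregation script is not reproduced here, but a weighted harmonic mean with these weights could look like the following sketch. It assumes both scores have already been put on the same positive scale (e.g. 0-100); the normalisation actually used for the reported score is in the notebook linked below:

```python
def combined_score(bleurt_score: float, sacrebleu_score: float,
                   w_bleurt: float = 0.6, w_sacrebleu: float = 0.4) -> float:
    """Weighted harmonic mean of BLEURT (60%) and SacreBLEU (40%).

    Assumes both scores are positive and on the same scale; this is an
    illustration of the aggregation described above, not the exact
    evaluation script.
    """
    return (w_bleurt + w_sacrebleu) / (w_bleurt / bleurt_score +
                                       w_sacrebleu / sacrebleu_score)


# Example: a prediction scoring 70 on BLEURT and 65 on SacreBLEU
print(combined_score(70.0, 65.0))  # ~67.9
```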
### Results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 402  | 0.8020          |
| 1.0274        | 2.0   | 804  | 0.7019          |
| 0.6745        | 3.0   | 1206 | 0.6515          |
| 0.5826        | 4.0   | 1608 | 0.6236          |
| 0.5104        | 5.0   | 2010 | 0.6161          |
| 0.5104        | 6.0   | 2412 | 0.6149          |
| 0.4579        | 7.0   | 2814 | 0.6030          |
| 0.4255        | 8.0   | 3216 | 0.6151          |
| 0.3898        | 9.0   | 3618 | 0.6209          |
| 0.3771        | 10.0  | 4020 | 0.6292          |

The results of the test evaluation can be found in [this notebook](https://github.com/Andresmfs/Traductor_inclusivo/blob/master/Error%20analysis.ipynb). We get an average score of 68.4 (measured with the metric described above).

Due to the existence of equivalent language formulas (inclusive formulas that can be used interchangeably, so that choosing one over another is a stylistic decision rather than a correctness decision), it can be argued that the real score of the model is higher.

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Nvidia T4 medium (8 vCPU, 30 GB RAM, 16 GB VRAM)
- **Hours used:** 3
- **Cloud Provider:** Google Cloud Platform
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

The hardware used was an Nvidia T4 medium (8 vCPU, 30 GB RAM, 16 GB VRAM) sponsored by Hugging Face.

#### Software

- Transformers 4.30.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3
- Peft

## License

Creative Commons (cc-by-nc-sa-4.0)

This license is inherited from the dataset used for training.

## Citation

**BibTeX:**

```
@software{AIGMJ2024TraductorInclusivo,
  author = {Andrés Martínez Fernández-Salguero and Imanuel Rozenberg and Gaia Quintana Fleitas and Miguel López Pérez and Josué Sauca},
  title = {Traductor Inclusivo},
  month = apr,
  year = 2024,
  url = {https://huggingface.co./somosnlp/es-inclusivo-translator}
}
```

## Glossary [optional]

## More Information

This project was developed during the [Hackathon #Somos600M](https://somosnlp.org/hackathon) organized by SomosNLP. The model was trained using GPUs sponsored by HuggingFace.

**Team:**

- [**Andrés Martínez Fernández-Salguero**](https://huggingface.co./Andresmfs)
- **Imanuel Rozenberg**
- **Gaia Quintana Fleitas**
- **Miguel López Pérez**
- **Josué Sauca**

## Contact

- [**Andrés Martínez Fernández-Salguero**](https://www.linkedin.com/in/andrés-martínez-fernández-salguero-725674214) (andresmfs@gmail.com)
- [**Gaia Quintana Fleitas**](https://www.linkedin.com/in/gaiaquintana/) (gaiaquintana11@gmail.com)