const english_data = {
  "toolName": "EDIA: Stereotypes and Discrimination in Artificial Intelligence",
  "toolSubname": "Stereotypes and Discrimination in Artificial Intelligence",
  "introduction_1": "Language models and word representations obtained with machine learning have been shown to contain discriminatory stereotypes. Here we present a set of inspection tools: EDIA (Stereotypes and Discrimination in Artificial Intelligence). This project aims to design and evaluate a methodology that allows social scientists and domain experts in Latin America to explore biases and discriminatory stereotypes present in word embeddings (WE) and language models (LM). It also allows them to define the type of bias to explore and to perform an intersectional analysis using two binary dimensions (for example, female-male intersected with fat-skinny).",
  "introduction_2": "EDIA contains several tools for detecting and inspecting biases in natural language processing systems based on language models or word embeddings. We provide models in Spanish and English so that biases can be explored in different languages at the user's request. Each of the following spaces contains tools that address a particular aspect of the bias problem and allow us to understand different but complementary parts of it.",
  "wordBias": {
    "title": "Biases in words",
    "description": "Based on a technique for detecting biases in WE, this function lets us visualize the distribution of words in a 2D space and observe the distances between them. The more occurrence contexts two words share, the closer they appear; the fewer they share, the further apart they are. As a result, words with similar meanings usually appear close together. By creating word lists that define semantic fields, we can observe biases and explore the words neighboring those meanings.",
    "tutorial": "Tutorial: Word lists exploration",
    "manual-1": "Handbook: Word lists exploration",
    "manual-2": "Handbook: Word lists bias exploration"
  },
  "phraseBias": {
    "title": "Sentence biases",
    "description": "Here we deploy a tool that uses language models to reveal biases in sentences, allowing us to work with biases that are not restricted to binary dimensions (such as female-male) and to avoid the ambiguities produced by polysemy. Starting from pairs of sentences where one contains a stereotype and the other an anti-stereotype (example: a) Homosexual couples should not be allowed to get married; b) Heterosexual couples should not be allowed to get married), we seek to determine the preferences of a pre-trained language model when producing language. If the model were unbiased, both sentences would have the same level of preference; if it were biased, one would have a higher preference.",
    "tutorial-1": "Tutorial: Phrase bias",
    "manual-1": "Handbook: Phrase bias",
    "tutorial-2": "Tutorial: CrowS-Pairs",
    "manual-2": "Handbook: CrowS-Pairs"
  },
  "dataBias": {
    "title": "Data",
    "description": "This tool shows additional information about a word, such as its frequency and contexts of occurrence within the training corpus. This helps to explain and interpret unexpected behaviors observed in the other tabs, caused by polysemy or by infrequent words, and, based on this exploration, to make pertinent modifications to our lists of words and phrases.",
    "tutorial": "Tutorial: Words data",
    "manual": "Handbook"
  },
  "our-pages-title": "Where you can find EDIA:",
  "footer": "IMPORTANT CLARIFICATIONS: Queries made while using this software are automatically registered in our system. We declare that the information collected is anonymous and confidential and that it will only be used for research purposes. To explore dimensions of analysis such as gender, we need to simplify them to a binary phenomenon; we understand that this is an oversimplification and a first approximation, and we are aware of this limitation when representing complex phenomena within social constructs.",
  "hf_btn": "Try it out on HuggingFace🤗!",
  "ccad_btn": "Try it out at CCAD!",
  "tutorial_btn": "EDIA video presentation"
}