|
--- |
|
language: |
|
- en |
|
tags: |
|
- common crawl |
|
- webtext |
|
- social nlp |
|
size_categories: |
|
- 10M<n<100M |
|
pretty_name: AboutMe |
|
license: other |
|
extra_gated_prompt: "Access to this dataset is automatically granted upon accepting the [**AI2 ImpACT License - Low Risk Artifacts (“LR Agreement”)**](https://allenai.org/licenses/impact-lr) and completing all fields below." |
|
extra_gated_fields: |
|
Your full name: text |
|
Organization or entity you are affiliated with: text |
|
State or country you are located in: text |
|
Contact email: text |
|
Please describe your intended use of the low risk artifact(s): text

I AGREE to the terms and conditions of the LR Agreement above: checkbox
|
I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox |
|
I CERTIFY that the information I have provided is true and accurate: checkbox |
|
--- |
|
|
|
# AboutMe: Self-Descriptions in Webpages |
|
|
|
## Dataset description |
|
|
|
**Curated by:** Li Lucy, Suchin Gururangan, Luca Soldaini, Emma Strubell, David Bamman, Lauren Klein, Jesse Dodge |
|
|
|
**Languages:** English |
|
|
|
**License:** AI2 ImpACT License - Low Risk Artifacts |
|
|
|
**Paper:** [https://arxiv.org/abs/2401.06408](https://arxiv.org/abs/2401.06408) |
|
|
|
## Dataset sources |
|
|
|
Common Crawl |
|
|
|
## Uses |
|
|
|
This dataset was originally created to document the effects of different pretraining data curation practices. It is intended for research use, e.g. evaluation and analysis of AI development pipelines, or social scientific research on Internet communities and self-presentation.
|
|
|
## Dataset structure |
|
|
|
This dataset consists of three parts: |
|
- `about_pages`: webpages that are self-descriptions and profiles of website creators, or text *about* individuals and organizations on the web. These are zipped files with one json per line, with the following keys: |
|
- `url` |
|
- `hostname` |
|
- `cc_segment` (for tracking where in Common Crawl the page was originally retrieved from)
|
- `text` |
|
- `title` (webpage title) |
|
- `sampled_pages`: random webpages from the same set of websites, or text created or curated *by* individuals and organizations on the web. It has the same keys as `about_pages`. |
|
- `about_pages_meta`: algorithmically extracted information from "About" pages, including: |
|
- `hn`: hostname of website |
|
- `country`: the most frequent country of locations on the page, obtained using Mordecai3 geoparsing |
|
- `roles`: social roles and occupations detected by a RoBERTa-based model from expressions of self-identification, e.g. *I am a **dancer***. Each role is accompanied by its sentence number and start/end character offsets.
|
- `class`: whether the page is detected to be an individual or organization |
|
- `cluster`: one of fifty topical labels obtained via tf-idf clustering of "about" pages |
|
|
|
Each file contains one json entry per line. Note that the entries in each file are not in a random order, but instead reflect an ordering output by CCNet (e.g. neighboring pages may be similar in Wikipedia-based perplexity).
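
As a convenience, here is a minimal Python sketch of how one might stream records from a single shard. The shard filename and the assumption that files are gzipped JSON Lines are ours; adjust to match the downloaded files.

```python
import gzip
import json
from pathlib import Path

# Minimal sketch: stream records from one AboutMe shard.
# The path below is a placeholder; point it at an actual downloaded
# `about_pages` (or `sampled_pages`) file.
shard = Path("about_pages/shard_0.json.gz")  # hypothetical filename

with gzip.open(shard, mode="rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Keys documented above: url, hostname, cc_segment, text, title
        print(record["hostname"], record["title"])
        break  # inspect only the first record
```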
|
|
|
## Dataset creation |
|
|
|
AboutMe is derived from twenty-four snapshots of Common Crawl collected between 2020-05 and 2023-06. We extract text from raw Common Crawl using CCNet, and deduplicate URLs across all snapshots. We only include text that has a fastText English score > 0.5. "About" pages are identified using keywords in URLs (`about`, `about-me`, `about-us`, and `bio`), and their URLs end in `/keyword/` or `keyword.*`, e.g. `about.html`. We only include pages that have one candidate URL, to avoid ambiguity around which page is actually about the main website creator. If a webpage has both `https` and `http` versions in Common Crawl, we take the `https` version. Each "sampled" page is a single webpage randomly sampled from a website that has an "about" page.
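
As an illustration only (not the released pipeline code), the URL heuristic described above might look roughly like the following sketch; the function name and regular expression are ours.

```python
import re
from urllib.parse import urlparse

# Illustrative reimplementation of the "about" URL heuristic described
# above, not the authors' exact code: a page qualifies if its URL path
# ends in /keyword/ or keyword.<extension> for one of these keywords.
ABOUT_KEYWORDS = ("about", "about-me", "about-us", "bio")
ABOUT_PATTERN = re.compile(
    r"/(?:" + "|".join(ABOUT_KEYWORDS) + r")(?:/|\.[a-z0-9]+)$",
    re.IGNORECASE,
)

def is_about_url(url: str) -> bool:
    """Return True if the URL path matches the 'about' keyword pattern."""
    return bool(ABOUT_PATTERN.search(urlparse(url).path))

print(is_about_url("https://example.com/about.html"))  # True
print(is_about_url("https://example.com/about-us/"))   # True
print(is_about_url("https://example.com/blog/post1"))  # False
```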
|
|
|
More details on metadata creation can be found in our paper, linked above. |
|
|
|
## Bias, Risks, and Limitations |
|
|
|
Algorithmic measurement of textual content is scalable but imperfect. We acknowledge that our dataset and analysis methods (e.g. classification, information retrieval) can also uphold language norms and standards that may disproportionately affect some social groups over others. We hope that future work continues to improve these content analysis pipelines, especially for long-tail or minoritized language phenomena.
|
|
|
We encourage future work using our dataset to minimize the extent to which they infer unlabeled or implicit information about subjects in this dataset, and to assess the risks of inferring various types of information from these pages. In addition, measurements of social identities from AboutMe pages are affected by reporting bias. |
|
|
|
Future uses of this data should avoid incorporating personally identifiable information into generative models, report only aggregated results, and paraphrase quoted examples in papers to protect the privacy of subjects. |
|
|
|
## Citation |
|
|
|
``` |
|
@misc{lucy2024aboutme, |
|
title={AboutMe: Using Self-Descriptions in Webpages to Document the Effects of English Pretraining Data Filters}, |
|
author={Li Lucy and Suchin Gururangan and Luca Soldaini and Emma Strubell and David Bamman and Lauren Klein and Jesse Dodge}, |
|
year={2024}, |
|
eprint={2401.06408}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |
|
|
|
## Dataset contact |
|
|
|
lucy3[email protected] |