Tasks: Text Generation
Modalities: Text
Formats: parquet
Languages: English
Size: 10M - 100M
Tags: ocr
License:
# United States-Public Domain-Newspapers
**US-PD-Newspapers** is an aggregation of all the archives of US newspapers digitized by the Library of Congress for the Chronicling America digital library.

With nearly 100 billion words, it is one of the largest open corpora in the English language. All the materials are now part of the public domain, with no intellectual property rights remaining.

## Content

As of January 2024, the collection contains nearly 21 million unique newspaper and periodical editions (98,742,987,471 words) from the [dumps](https://chroniclingamerica.loc.gov/data/ocr/) made available by the Library of Congress, published from the 18th century to 1963. Each parquet file matches one of the 2618 original dump files and keeps their code name. Each file holds the full text of a few thousand editions selected at random, along with a few core metadata fields (edition id, date, word counts…). The metadata can be easily expanded thanks to the LOC APIs and other data services.