Abirate committed
Commit 2c923e1
Parent: c8445b1

Update README.md

Files changed (1): README.md (+15 −15)
README.md CHANGED
# **I-Dataset Card for English quotes**

#### Dataset Summary
english_quotes is a dataset of all the quotes retrieved from [goodreads quotes](https://www.goodreads.com/quotes). This dataset can be used for multi-label text classification and text generation. The content of each quote is in English and concerns the domain of datasets for NLP and beyond.

#### Languages
The texts in the dataset are in English (en).

# **II-Dataset Structure**
#### Data Instances
A JSON-formatted example of a typical instance in the dataset:
```python
{'author': 'Ralph Waldo Emerson',
 'quote': '“To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment.”',
 'tags': ['accomplishment', 'be-yourself', 'conformity', 'individuality']}
```
#### Data Fields
- **author**: The author of the quote.
- **quote**: The text of the quote.
- **tags**: The tags could be characterized as topics around the quote.

#### Data Splits
I kept the dataset as a single block (train) so that users can shuffle and split it later with methods of the Hugging Face `datasets` library, such as the `.train_test_split()` method.
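
For instance, a minimal sketch of such a split, assuming the dataset is published on the Hub under the repository id `Abirate/english_quotes`:

```python
# Minimal sketch: shuffle and carve a test split out of the single "train" block.
# The repository id "Abirate/english_quotes" is an assumption of this example.
from datasets import load_dataset

quotes = load_dataset("Abirate/english_quotes")
splits = quotes["train"].train_test_split(test_size=0.1, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```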
# **III-Dataset Creation**
#### Curation Rationale
I want to share my datasets (created by web scraping and additional cleaning treatments) with the Hugging Face community so that they can be used in NLP tasks to advance artificial intelligence.

#### Source Data
The source of the data is the [goodreads](https://www.goodreads.com/?ref=nav_home) site, specifically [goodreads quotes](https://www.goodreads.com/quotes).

#### Initial Data Collection and Normalization
The data was collected by web scraping, using the BeautifulSoup and Requests libraries.
The data is slightly modified after scraping: all quotes whose tags are "None" are removed, and the tag "attributed-no-source" is stripped from every tag list, because it adds no value to the topic of the quote.
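
As an illustration only (the author's actual script appears further below), this cleaning treatment amounts to a filter of roughly this shape, assuming each scraped quote is held as a dict:

```python
# Illustrative sketch of the cleaning treatment described above, not the
# author's actual script. Each quote is assumed to be a dict with
# 'author', 'quote', and 'tags' keys.
def clean(quotes):
    cleaned = []
    for q in quotes:
        if q['tags'] is None:  # drop quotes scraped with "None" tags
            continue
        # strip the uninformative "attributed-no-source" tag
        q['tags'] = [t for t in q['tags'] if t != 'attributed-no-source']
        cleaned.append(q)
    return cleaned
```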
 
#### Who are the source data producers?
The data is machine-generated (via web scraping) and subjected to additional human treatment.
Below, I provide the script I created to scrape the data (as well as my additional treatment):

```python
# … (scraping and cleaning steps elided in this view)
data_df.to_json('C:/Users/Abir/Desktop/quotes.jsonl', orient="records", lines=True)
# Then I used the familiar process to push it to the Hugging Face hub.
```
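As one common possibility (an assumption rather than the recorded steps), a JSON-lines file like this can be loaded and pushed with the `datasets` library:

```python
# One possible way to publish the JSON-lines file, assuming the repository
# id "Abirate/english_quotes" and a prior `huggingface-cli login`.
from datasets import load_dataset

ds = load_dataset("json", data_files="quotes.jsonl")
ds.push_to_hub("Abirate/english_quotes")
```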
#### Annotations
Annotations are part of the initial data collection (see the script above).

# **IV-Additional Information**
#### Dataset Curators
Abir ELTAIEF:
[@AbirEltaief](https://tn.linkedin.com/in/abir-eltaief-pmp%C2%AE-469048115)

#### Licensing Information
This work is licensed under a Creative Commons Attribution 4.0 International License (all software and libraries used for web scraping are made available under this Creative Commons Attribution license).

#### Contributions
Thanks to [@Abirate](https://tn.linkedin.com/in/abir-eltaief-pmp%C2%AE-469048115) for adding this dataset.