# **Dataset Card for English quotes**

# **I-Dataset Summary**

english_quotes is a dataset of all the quotes retrieved from [goodreads quotes](https://www.goodreads.com/quotes). This dataset can be used for multi-label text classification and text generation. The content of each quote is in English and concerns the domain of datasets for NLP and beyond.

# **II-Supported Tasks and Leaderboards**

- Multi-label text classification: the dataset can be used to train a model for text classification, i.e. classifying quotes by author as well as by topic (using tags). Success on this task is typically measured by achieving a high accuracy.
- Text generation: the dataset can be used to train a model to generate quotes by fine-tuning an existing pretrained model on the corpus of all quotes (or of quotes by a given author).
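The multi-label setup above amounts to mapping each quote's tag list to a multi-hot vector over a fixed tag vocabulary. A minimal sketch (the tag names below are illustrative, not taken from the dataset):

```python
# Encode a quote's tags as a multi-hot vector over a tag vocabulary.
def multi_hot(tags, vocab):
    tag_set = set(tags)
    return [1 if t in tag_set else 0 for t in vocab]

vocab = ["love", "life", "inspirational", "humor"]  # hypothetical tag vocabulary
labels = multi_hot(["love", "humor"], vocab)
# labels == [1, 0, 0, 1]
```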

# **III-Languages**

The texts in the dataset are in English (en).

# **IV-Dataset Structure**

#### Data Instances
A JSON-formatted example of a typical instance in the dataset:

```python
{"quote": "...", "author": "...", "tags": ["...", "..."]}  # field values elided in this excerpt
```

#### Data Splits
I kept the dataset as a single split (train), so users can shuffle and split it later with methods of the Hugging Face `datasets` library, such as `train_test_split()`.
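The shuffle-and-split step can be sketched in plain Python (mirroring, at a high level, what `datasets.Dataset.train_test_split` does; the records below are placeholders):

```python
import random

def train_test_split(records, test_size=0.2, seed=42):
    # Shuffle a copy with a fixed seed, then cut off the test fraction.
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_size)
    return {"test": shuffled[:cut], "train": shuffled[cut:]}

quotes = [{"quote": f"quote {i}"} for i in range(10)]  # placeholder records
splits = train_test_split(quotes)
# len(splits["train"]) == 8, len(splits["test"]) == 2
```

With a fixed seed the split is deterministic, which is what makes a user-side split reproducible even though the published dataset ships as one block.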

# **V-Dataset Creation**

#### Curation Rationale
I want to share my datasets (created by web scraping followed by additional cleaning) with the Hugging Face community so that they can be used in NLP tasks to advance artificial intelligence.
#### Annotations
Annotations are part of the initial data collection (see the script above).
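The collection script (elided in this excerpt) stores the scraped quotes as JSON Lines via pandas' `to_json(..., orient="records", lines=True)`. A minimal standard-library sketch of that output step, with hypothetical record values:

```python
import json

# Hypothetical scraped records; real values come from the scraping script.
records = [
    {"quote": "An example quote.", "author": "Example Author", "tags": ["example", "demo"]},
]

# Write one JSON object per line (JSON Lines format).
with open("quotes.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```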

# **VI-Additional Information**

#### Dataset Curators
Abir ELTAIEF:
[@AbirEltaief](https://tn.linkedin.com/in/abir-eltaief-pmp%C2%AE-469048115)