---
dataset_info:
  features:
  - name: title
    sequence: string
  - name: author
    sequence: string
  - name: authoraffiliation
    sequence: string
  - name: venue
    sequence: string
  - name: abstract
    dtype: string
  - name: doi
    dtype: string
  - name: pdfurls
    sequence: string
  - name: corpusid
    dtype: int64
  - name: arxivid
    dtype: string
  - name: pdfsha
    dtype: string
  - name: text
    dtype: string
  - name: github_urls
    sequence: string
  splits:
  - name: train
    num_bytes: 89132091867
    num_examples: 1671614
  download_size: 35993359504
  dataset_size: 89132091867
task_categories:
- text-generation
- zero-shot-classification
language:
- en
pretty_name: arxiv_s2orc_parsed
size_categories:
- 10B<n<100B
---
# Dataset Card for "AlgorithmicResearchGroup/arxiv_s2orc_parsed"


## Dataset Description

https://huggingface.co./datasets/AlgorithmicResearchGroup/arxiv_s2orc_parsed


### Dataset Summary

AlgorithmicResearchGroup/arxiv_s2orc_parsed is a subset of the [AllenAI S2ORC dataset](https://github.com/allenai/s2orc), a general-purpose corpus for NLP and text-mining research over scientific papers.
The dataset is filtered to include only arXiv papers and retains the full text of each paper. GitHub links have been extracted from each paper to aid in the development of [AlgorithmicResearchGroup/arxiv_python_research_code](https://huggingface.co./datasets/AlgorithmicResearchGroup/arxiv_python_research_code).
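
The card does not include the extraction code itself, but a minimal sketch of how GitHub links might be pulled from a paper's full text with a simple regular expression is shown below (the pattern and helper name are illustrative assumptions, not the exact pipeline used):

```python
import re

# Illustrative assumption: a simple pattern for github.com repository URLs.
GITHUB_URL_RE = re.compile(r"https?://github\.com/[\w.-]+/[\w.-]+")

def extract_github_urls(text: str) -> list[str]:
    """Return the unique GitHub URLs found in a paper's full text."""
    return sorted(set(GITHUB_URL_RE.findall(text)))
```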

### How to use it
```python
from datasets import load_dataset

ds = load_dataset("AlgorithmicResearchGroup/arxiv_s2orc_parsed", split="train")

# dataset streaming (will only download the data as needed)
ds = load_dataset("AlgorithmicResearchGroup/arxiv_s2orc_parsed", streaming=True, split="train")
```
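
In streaming mode the returned object is an `IterableDataset`, so records are fetched lazily as you iterate; a minimal sketch for inspecting a few records without downloading the full corpus:

```python
from itertools import islice

# Streaming fetches records on demand; take only the first three here.
for example in islice(ds, 3):
    print(example["corpusid"], example["github_urls"])
```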

## Dataset Structure
### Data Instances
Each data instance corresponds to one arXiv paper. The full text of the paper is in the `text` feature, and the remaining features provide metadata.
### Data Fields
- `title` (sequence): list of paper titles.
- `author` (sequence): list of authors.
- `authoraffiliation` (sequence): list of institutional affiliations for each author.
- `venue` (sequence): list of publication venues.
- `abstract` (string): paper abstract.
- `doi` (string): paper DOI.
- `pdfurls` (sequence): list of URLs to the paper PDF.
- `corpusid` (int64): corpus ID as defined by S2ORC.
- `arxivid` (string): arXiv paper ID.
- `pdfsha` (string): unique PDF hash.
- `github_urls` (sequence): list of GitHub URLs referenced within the text.
- `text` (string): full text of the arXiv paper.
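
As an illustration, the fields of a single record can be accessed as follows (a minimal sketch; assumes `ds` is the non-streaming `Dataset` loaded in the first example above):

```python
record = ds[0]  # first paper in the train split

# Sequence features (title, author, github_urls, ...) come back as Python lists;
# scalar features (abstract, doi, text, ...) come back as plain values.
print(record["title"])
print(record["arxivid"], record["doi"])
print(record["github_urls"])   # GitHub repositories referenced in the paper
print(record["text"][:500])    # first 500 characters of the full text
```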

### Data Splits

The dataset has a single `train` split; all data is loaded into it by default.

## Additional Information

### Dataset Curators
Matthew Kenney, AlgorithmicResearchGroup, [email protected]

### Citation Information
```
@misc{arxiv_s2orc_parsed,
    title={arxiv_s2orc_parsed},
    author={Matthew Kenney},
    year={2023}
}
```