---
language:
- en
license: apache-2.0
---

# PubMed HMPV Articles

_Current as of January 7, 2025_

This dataset is metadata (id, publication date, title, link) from PubMed articles related to HMPV. It was created using [paperetl](https://github.com/neuml/paperetl) and the [PubMed Baseline](https://pubmed.ncbi.nlm.nih.gov/download/).

The 37 million articles were filtered to match either of the following criteria:

- MeSH code = [D029121](https://meshb-prev.nlm.nih.gov/record/ui?ui=D029121)
- Keyword of `HMPV` in either the `title` or `abstract`
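
The matching logic can be pictured as a simple predicate over each article's metadata. This is a sketch, not paperetl's implementation: the dictionary field names are illustrative, and a case-insensitive keyword match is assumed.

```python
def matches(article):
    """Return True if an article satisfies either filter criterion."""
    # Criterion 1: tagged with the human metapneumovirus MeSH code
    if "D029121" in article.get("mesh_codes", []):
        return True

    # Criterion 2: the HMPV keyword appears in the title or abstract
    text = f"{article.get('title', '')} {article.get('abstract', '')}"
    return "hmpv" in text.lower()

print(matches({"title": "Burden of HMPV infections", "mesh_codes": []}))  # True
```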

## Retrieve article abstracts

The full article abstracts can be retrieved via the [PubMed API](https://www.nlm.nih.gov/dataguide/eutilities/utilities.html#efetch). This method accepts batches of PubMed IDs.
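
A minimal sketch of batching IDs for `efetch` follows. The endpoint and parameters come from the E-utilities documentation; the batch size, sample IDs and helper names are illustrative, and rate limiting and error handling are omitted.

```python
from urllib.parse import urlencode

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def efetch_url(pmids, rettype="abstract", retmode="text"):
    """Build an efetch request URL for a batch of PubMed IDs."""
    params = {
        "db": "pubmed",
        "id": ",".join(str(pmid) for pmid in pmids),
        "rettype": rettype,
        "retmode": retmode,
    }
    return f"{EFETCH}?{urlencode(params)}"

def batches(pmids, size=200):
    """Yield PubMed IDs in efetch-sized batches."""
    for start in range(0, len(pmids), size):
        yield pmids[start:start + size]

# Each URL can then be fetched with urllib.request.urlopen or requests
for batch in batches([36012345, 36012346, 36012347], size=2):
    print(efetch_url(batch))
```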

Alternatively, the dataset can be recreated using the steps below, which can also load the abstracts into the dataset (see step 5).

## Download and build

The following steps recreate this dataset.

1. Create the following directories and files

    ```bash
    mkdir -p pubmed/config pubmed/data

    echo "D029121" > pubmed/config/codes
    echo "HMPV" > pubmed/config/keywords
    ```

2. Install `paperetl` and download `PubMed Baseline + Updates` into `pubmed/data`.

    ```bash
    pip install datasets

    # Install paperetl from GitHub until v2.4.0 is released
    pip install git+https://github.com/neuml/paperetl
    ```

3. Parse the PubMed dataset into article metadata

    ```bash
    python -m paperetl.file pubmed/data pubmed/articles pubmed/config
    ```

4. Export to dataset

    ```python
    from datasets import Dataset

    ds = Dataset.from_sql(
        "SELECT id, published, title, reference FROM articles ORDER BY published DESC",
        "sqlite:///pubmed/articles/articles.sqlite"
    )
    ds.to_csv("pubmed-hmpv/articles.csv")
    ```
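
    To sanity-check the export, the CSV can be read back with the standard library. The column names follow the `SELECT` above; the sample rows here are illustrative stand-ins for `pubmed-hmpv/articles.csv`.

    ```python
    import csv
    import io

    # Two illustrative rows in the exported schema (id, published, title, reference)
    sample = io.StringIO(
        "id,published,title,reference\n"
        "36012345,2025-01-02 00:00:00,Example HMPV article,https://pubmed.ncbi.nlm.nih.gov/36012345\n"
        "35011111,2024-12-20 00:00:00,Earlier HMPV article,https://pubmed.ncbi.nlm.nih.gov/35011111\n"
    )

    rows = list(csv.DictReader(sample))
    print(len(rows), rows[0]["title"])  # 2 Example HMPV article
    ```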

5. _Optional:_ Export to a dataset with all fields

    paperetl parses all metadata and article abstracts. If you'd like to create a local dataset with the abstracts, run the following instead of step 4.

    ```python
    import sqlite3
    import uuid

    from datasets import Dataset

    class Export:
        def __init__(self, dbfile):
            # Load database
            self.connection = sqlite3.connect(dbfile)
            self.connection.row_factory = sqlite3.Row

        def __call__(self):
            # Create cursors
            cursor1 = self.connection.cursor()
            cursor2 = self.connection.cursor()

            # Get article metadata
            cursor1.execute("SELECT * FROM articles ORDER BY id")
            for row in cursor1:
                # Get abstract text
                cursor2.execute(
                    "SELECT text FROM sections WHERE article = ? AND name != 'TITLE' ORDER BY id",
                    [row[0]]
                )
                abstract = " ".join(r["text"] for r in cursor2)

                # Combine into single record and yield
                row = {**row, **{"abstract": abstract}}
                yield {k.lower(): v for k, v in row.items()}

        def __reduce__(self):
            # Generators can't be pickled; hand back a random value so each
            # export run gets a fresh fingerprint instead of failing
            return (pickle, (str(uuid.uuid4()),))

    def pickle(self, *args, **kwargs):
        raise AssertionError("Generator pickling workaround")

    # Path to database
    export = Export("pubmed/articles/articles.sqlite")
    ds = Dataset.from_generator(export)
    ds = ds.sort("published", reverse=True)
    ds.to_csv("pubmed-hmpv-full/articles.csv")
    ```
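
    The abstract assembly above can be exercised against a tiny in-memory database. This is a sketch: the schema (an `articles` table with an `id` column and a `sections` table with `article`, `name` and `text` columns) mirrors the queries in the export class, but the sample rows are illustrative.

    ```python
    import sqlite3

    # Build a miniature stand-in for pubmed/articles/articles.sqlite
    connection = sqlite3.connect(":memory:")
    connection.row_factory = sqlite3.Row
    connection.executescript("""
        CREATE TABLE articles (id TEXT, published TEXT, title TEXT);
        CREATE TABLE sections (id INTEGER, article TEXT, name TEXT, text TEXT);
        INSERT INTO articles VALUES ('36012345', '2025-01-02', 'Example HMPV article');
        INSERT INTO sections VALUES (1, '36012345', 'TITLE', 'Example HMPV article');
        INSERT INTO sections VALUES (2, '36012345', 'ABSTRACT', 'Background sentence.');
        INSERT INTO sections VALUES (3, '36012345', 'ABSTRACT', 'Results sentence.');
    """)

    # Same query the export class runs per article: all non-title sections, in order
    cursor = connection.execute(
        "SELECT text FROM sections WHERE article = ? AND name != 'TITLE' ORDER BY id",
        ["36012345"]
    )
    abstract = " ".join(row["text"] for row in cursor)
    print(abstract)  # Background sentence. Results sentence.
    ```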