---
dataset_info:
- config_name: default
  features:
  - name: utterance
    dtype: string
  - name: label
    sequence: int64
  splits:
  - name: train
    num_bytes: 9119630
    num_examples: 2755
  - name: test
    num_bytes: 1275997
    num_examples: 380
  download_size: 11308024
  dataset_size: 10395627
- config_name: intents
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: tags
    sequence: 'null'
  - name: regexp_full_match
    sequence: 'null'
  - name: regexp_partial_match
    sequence: 'null'
  - name: description
    dtype: 'null'
  splits:
  - name: intents
    num_bytes: 1054
    num_examples: 25
  download_size: 3570
  dataset_size: 1054
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
- config_name: intents
  data_files:
  - split: intents
    path: intents/intents-*
---

# events

This is a multi-label text classification dataset. It is intended for machine learning research and experimentation.

This dataset was obtained by reformatting another publicly available dataset to be compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).
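Each record pairs an utterance with a multi-hot label vector that has one slot per intent class (25 classes, per the `intents` config above). A minimal sketch of the schema, with a purely illustrative utterance and active class:

```python
# Hypothetical record illustrating the schema: "utterance" is a string and
# "label" is a multi-hot vector with one slot per intent class (25 here).
n_intents = 25

record = {
    "utterance": "Company X announced a clinical trial for its lead drug candidate.",
    "label": [0] * n_intents,
}
record["label"][3] = 1  # illustrative: mark the intent with id 3 as active

print(sum(record["label"]))  # → 1 (number of active intents)
```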

## Usage

It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
from autointent import Dataset
events = Dataset.from_datasets("AutoIntent/events")
```
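Because `label` is a multi-hot vector aligned by position with the intent ids in the `intents` config, a label can be mapped back to intent names with a small helper. This is a sketch; the `intent_names` below are illustrative stand-ins for the names loaded from the `intents` split:

```python
def decode_label(label: list[int], intent_names: list[str]) -> list[str]:
    """Return the intent names whose slots are set in a multi-hot label."""
    return [name for name, flag in zip(intent_names, label) if flag]

# Illustrative names and label; in practice take names from the "intents" split.
intent_names = ["clinical trial sponsorship", "expanding industry", "new initiatives"]
label = [1, 0, 1]
print(decode_label(label, intent_names))  # → ['clinical trial sponsorship', 'new initiatives']
```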

## Source

This dataset is taken from `knowledgator/events_classification_biotech` and formatted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):

```python
"""Convert events dataset to autointent internal format and scheme."""

from datasets import Dataset as HFDataset
from datasets import load_dataset

from autointent import Dataset
from autointent.schemas import Intent, Sample

# these classes contain too few samples
names_to_remove = [
    "partnerships & alliances",
    "patent publication",
    "subsidiary establishment",
    "department establishment",
]

def extract_intents_data(events_dataset: HFDataset) -> list[Intent]:
    """Extract intent names and assign ids to them."""
    intent_names = sorted({name for intents in events_dataset["train"]["all_labels"] for name in intents})
    for n in names_to_remove:
        intent_names.remove(n)
    return [Intent(id=i, name=name) for i, name in enumerate(intent_names)]


def converting_mapping(example: dict, intents_data: list[Intent]) -> dict[str, str | list[int] | None]:
    """Extract utterance and OHE label and drop the rest."""
    res = {
        "utterance": example["content"],
        "label": [
            int(intent.name in example["all_labels"]) for intent in intents_data
        ]
    }
    if sum(res["label"]) == 0:
        res["label"] = None
    return res


def convert_events(events_split: HFDataset, intents_data: list[Intent]) -> list[Sample]:
    """Convert one split into desired format."""
    events_split = events_split.map(
        converting_mapping,
        remove_columns=list(events_split.features.keys()),
        fn_kwargs={"intents_data": intents_data},
    )

    samples = []
    for sample in events_split.to_list():
        if sample["utterance"] is None:
            continue
        samples.append(sample)

    mask = [sample["label"] is None for sample in samples]
    n_oos_samples = sum(mask)
    n_in_domain_samples = len(samples) - n_oos_samples
    
    print(f"{n_oos_samples=}")
    print(f"{n_in_domain_samples=}\n")

    # there are too few out-of-scope (OOS) samples to include, so filter them out
    samples = list(filter(lambda sample: sample["label"] is not None, samples))

    return [Sample(**sample) for sample in samples]

if __name__ == "__main__":
    # `load_dataset` might not work
    # fix is here: https://github.com/huggingface/datasets/issues/7248
    events_dataset = load_dataset("knowledgator/events_classification_biotech", trust_remote_code=True)

    intents_data = extract_intents_data(events_dataset)

    train_samples = convert_events(events_dataset["train"], intents_data)
    test_samples = convert_events(events_dataset["test"], intents_data)

    events_converted = Dataset.from_dict(
        {"train": train_samples, "test": test_samples, "intents": intents_data}
    )
```
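The core of the conversion is the multi-hot encoding inside `converting_mapping`: each source example carries a list of label names (`all_labels`), and each slot of the output vector records whether the corresponding intent name appears in that list. In isolation it reduces to:

```python
# Minimal sketch of the multi-hot encoding used in converting_mapping,
# with plain strings standing in for Intent objects.
intent_names = ["hiring", "m&a", "regulatory approval"]  # illustrative names
all_labels = ["m&a"]  # label names attached to one source example

label = [int(name in all_labels) for name in intent_names]
print(label)  # → [0, 1, 0]
```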