---
dataset_info:
- config_name: default
features:
- name: utterance
dtype: string
- name: label
sequence: int64
splits:
- name: train
num_bytes: 9119630
num_examples: 2755
- name: test
num_bytes: 1275997
num_examples: 380
download_size: 11308024
dataset_size: 10395627
- config_name: intents
features:
- name: id
dtype: int64
- name: name
dtype: string
- name: tags
sequence: 'null'
- name: regexp_full_match
sequence: 'null'
- name: regexp_partial_match
sequence: 'null'
- name: description
dtype: 'null'
splits:
- name: intents
num_bytes: 1054
num_examples: 25
download_size: 3570
dataset_size: 1054
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: intents
data_files:
- split: intents
path: intents/intents-*
---
# events
This is a multi-label text classification dataset intended for machine learning research and experimentation.
It was obtained by reformatting a publicly available dataset to be compatible with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html).
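Each record pairs an `utterance` string with a multi-hot `label` vector over the 25 intent classes listed in the `intents` config (see the conversion script in the Source section below). Schematically, a record looks like this; the utterance text is a made-up placeholder, not an actual entry from the dataset:
```python
# Schematic record from the "default" config (placeholder text, not real data).
sample = {
    "utterance": "Some biotech news text ...",
    "label": [0, 1, 0] + [0] * 22,  # one slot per intent (25 total), 1 marks the assigned classes
}
```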
## Usage
It is intended to be used with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):
```python
from autointent import Dataset

events = Dataset.from_datasets("AutoIntent/events")
```
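If you just want to inspect the raw data without AutoIntent, the splits can also be loaded with the plain `datasets` library, using the config names from the metadata above:
```python
from datasets import load_dataset

# "default" config: train/test splits with utterance + multi-hot label
events_raw = load_dataset("AutoIntent/events")
# "intents" config: id/name metadata for the 25 intent classes
intents_raw = load_dataset("AutoIntent/events", "intents")

print(events_raw["train"][0])
print(intents_raw["intents"][0])
```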
## Source
This dataset is taken from `knowledgator/events_classification_biotech` and formatted with our [AutoIntent Library](https://deeppavlov.github.io/AutoIntent/index.html):
```python
"""Convert events dataset to autointent internal format and scheme."""
from datasets import Dataset as HFDataset
from datasets import load_dataset
from autointent import Dataset
from autointent.schemas import Intent, Sample

# these classes contain too few samples
names_to_remove = [
"partnerships & alliances",
"patent publication",
"subsidiary establishment",
"department establishment",
]


def extract_intents_data(events_dataset: HFDataset) -> list[Intent]:
"""Extract intent names and assign ids to them."""
intent_names = sorted({name for intents in events_dataset["train"]["all_labels"] for name in intents})
for n in names_to_remove:
intent_names.remove(n)
    return [Intent(id=i, name=name) for i, name in enumerate(intent_names)]


def converting_mapping(example: dict, intents_data: list[Intent]) -> dict[str, str | list[int] | None]:
"""Extract utterance and OHE label and drop the rest."""
res = {
"utterance": example["content"],
"label": [
int(intent.name in example["all_labels"]) for intent in intents_data
]
}
if sum(res["label"]) == 0:
res["label"] = None
    return res


def convert_events(events_split: HFDataset, intents_data: list[Intent]) -> list[Sample]:
"""Convert one split into desired format."""
events_split = events_split.map(
converting_mapping, remove_columns=events_split.features.keys(),
fn_kwargs={"intents_data": intents_data}
)
samples = []
for sample in events_split.to_list():
if sample["utterance"] is None:
continue
samples.append(sample)
mask = [sample["label"] is None for sample in samples]
n_oos_samples = sum(mask)
n_in_domain_samples = len(samples) - n_oos_samples
print(f"{n_oos_samples=}")
print(f"{n_in_domain_samples=}\n")
# actually there are too few oos samples to include them, so filter out
samples = list(filter(lambda sample: sample["label"] is not None, samples))
    return [Sample(**sample) for sample in samples]


if __name__ == "__main__":
# `load_dataset` might not work
# fix is here: https://github.com/huggingface/datasets/issues/7248
events_dataset = load_dataset("knowledgator/events_classification_biotech", trust_remote_code=True)
intents_data = extract_intents_data(events_dataset)
train_samples = convert_events(events_dataset["train"], intents_data)
test_samples = convert_events(events_dataset["test"], intents_data)
events_converted = Dataset.from_dict(
{"train": train_samples, "test": test_samples, "intents": intents_data}
)
```
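For completeness, here is a small optional sanity check that could be appended to the `__main__` block above. It is not part of the original conversion script and assumes that `Sample` exposes its `label` field as an attribute:
```python
def check_samples(samples: list[Sample], intents: list[Intent]) -> None:
    """Optional check (not in the original script): every converted sample should
    have one label slot per intent and at least one positive class."""
    for sample in samples:
        assert len(sample.label) == len(intents)  # assumes Sample exposes `label` as an attribute
        assert sum(sample.label) >= 1  # OOS samples were filtered out above


check_samples(train_samples, intents_data)
check_samples(test_samples, intents_data)
```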