|
--- |
|
license: apache-2.0 |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: data/train-* |
|
- config_name: original-splits |
|
data_files: |
|
- split: train |
|
path: original-splits/train-* |
|
- split: validation |
|
path: original-splits/validation-* |
|
- split: test |
|
path: original-splits/test-* |
|
dataset_info: |
|
- config_name: default |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: chain |
|
dtype: string |
|
- name: result |
|
dtype: string |
|
- name: result_float |
|
dtype: float64 |
|
- name: question_without_options |
|
dtype: string |
|
- name: options |
|
struct: |
|
- name: A |
|
dtype: string |
|
- name: B |
|
dtype: string |
|
- name: C |
|
dtype: string |
|
- name: D |
|
dtype: string |
|
- name: E |
|
dtype: string |
|
- name: annotated_formula |
|
dtype: string |
|
- name: linear_formula |
|
dtype: string |
|
- name: rationale |
|
dtype: string |
|
- name: category |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 25058735 |
|
num_examples: 20868 |
|
download_size: 11157481 |
|
dataset_size: 25058735 |
|
- config_name: original-splits |
|
features: |
|
- name: id |
|
dtype: string |
|
- name: question |
|
dtype: string |
|
- name: chain |
|
dtype: string |
|
- name: result |
|
dtype: string |
|
- name: result_float |
|
dtype: float64 |
|
- name: question_without_options |
|
dtype: string |
|
- name: options |
|
struct: |
|
- name: A |
|
dtype: string |
|
- name: B |
|
dtype: string |
|
- name: C |
|
dtype: string |
|
- name: D |
|
dtype: string |
|
- name: E |
|
dtype: string |
|
- name: annotated_formula |
|
dtype: string |
|
- name: linear_formula |
|
dtype: string |
|
- name: rationale |
|
dtype: string |
|
- name: category |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 25058735 |
|
num_examples: 20868 |
|
- name: validation |
|
num_bytes: 3722848 |
|
num_examples: 3102 |
|
- name: test |
|
num_bytes: 2423833 |
|
num_examples: 2029 |
|
download_size: 13928430 |
|
dataset_size: 31205416 |
|
--- |
|
|
|
# Dataset Card for Calc-math_qa |
|
|
|
|
|
## Summary |
|
|
|
This dataset is an instance of the math_qa dataset, converted to a simple HTML-like language that can be easily parsed (e.g., by BeautifulSoup). The data contains three types of tags:
|
- gadget: A tag whose content is intended to be evaluated by calling an external tool (a sympy-based calculator in this case)

- output: The output of the external tool

- result: The final answer to the mathematical problem (the correct option)
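
For illustration, here is a minimal sketch of parsing a `chain` string with BeautifulSoup. The chain content below is made up for demonstration; see the `chain` column for real examples:

```python
from bs4 import BeautifulSoup

# A made-up chain in the tag format described above.
chain = (
    '<gadget id="calculator">2 * (3 + 4)</gadget>'
    "<output>14</output>"
    "<result>B</result>"
)

soup = BeautifulSoup(chain, "html.parser")
gadget_inputs = [tag.get_text() for tag in soup.find_all("gadget")]  # expressions sent to the calculator
tool_outputs = [tag.get_text() for tag in soup.find_all("output")]   # calculator outputs
final_result = soup.find("result").get_text()                        # correct option

print(gadget_inputs, tool_outputs, final_result)
```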
|
|
|
|
|
## Supported Tasks |
|
|
|
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses. |
|
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator. |
|
|
|
|
|
## Construction Process |
|
|
|
We took the original math_qa dataset, parsed the nested formulas, linearized them into a sequence (chain) of operations, and replaced all advanced function calls (such as `circle_area`) with explicit elementary operations. We then evaluated all steps in each example and filtered out the examples whose evaluation did not match, within a 5% tolerance, the answer selected as correct in the data; about 26k examples remain. The sequence of steps is saved in the HTML-like language in the `chain` column.
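
As a rough sketch of the 5% tolerance check described above (the function and variable names are ours, not from the actual conversion code):

```python
def keep_example(evaluated: float, selected_answer: float, rel_tol: float = 0.05) -> bool:
    """Return True if the evaluated chain matches the answer selected
    as correct in the data, within a 5% relative tolerance."""
    if selected_answer == 0.0:
        # Hypothetical handling of zero answers; the paper does not specify this case.
        return abs(evaluated) <= rel_tol
    return abs(evaluated - selected_answer) <= rel_tol * abs(selected_answer)
```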
|
|
|
We also performed in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co./collections/MU-NLPC/calc-x-652fee9a6b838fd820055483). Specifically for MathQA, we found that the majority of validation and test examples are near-duplicates of some example in the train set, and that all validation and test examples likely originate from the Aqua-RAT train split. We therefore do not recommend using the original validation and test splits of the MathQA dataset.
|
|
|
You can read more information about this process in our [Calc-X paper](https://arxiv.org/abs/2305.15017). |
|
|
|
|
|
## Data splits |
|
|
|
In our default configuration, the test and validation splits are removed, and we recommend using MathQA for training only. You can load it using:
|
|
|
```python
import datasets

ds = datasets.load_dataset("MU-NLPC/calc-math_qa")
```
|
|
|
If you want to use the original dataset splits, you can load them using:
|
|
|
```python
import datasets

ds_orig = datasets.load_dataset("MU-NLPC/calc-math_qa", "original-splits")
```
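
Both calls return a `datasets.DatasetDict`; checking the split keys confirms the difference between the two configs:

```python
# `ds` and `ds_orig` as loaded in the snippets above
print(ds.keys())       # dict_keys(['train'])
print(ds_orig.keys())  # dict_keys(['train', 'validation', 'test'])
```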
|
|
|
|
|
## Attributes
|
|
|
- **id** - id of the example |
|
- **question** - the description of a mathematical problem in natural language; includes the options to choose from
|
- **chain** - the solution in the form of step-by-step calculations, encoded in the simple HTML-like language; computed from the `annotated_formula` column
|
- **result** - the correct option |
|
- **result_float** - the result converted to a float |
|
- **question_without_options** - same as `question`, but does not contain the options |
|
- **options** - a dictionary of options to choose from, exactly one of which is correct; keys are "A".."E"
|
- **annotated_formula** - human-annotated nested expression that (approximately) evaluates to the selected correct answer |
|
- **linear_formula** - same as `annotated_formula`, but linearized by original math_qa authors |
|
- **rationale** - human-annotated free-text reasoning that leads to the correct answer |
|
- **category** - category of the math problem |
|
|
|
Attributes **id**, **question**, **chain**, and **result** are present in all datasets in [Calc-X collection](https://huggingface.co./collections/MU-NLPC/calc-x-652fee9a6b838fd820055483). |
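
For example, a quick way to inspect these columns on one training example (field names as listed above):

```python
import datasets

train = datasets.load_dataset("MU-NLPC/calc-math_qa", split="train")
example = train[0]

print(example["question"])             # problem statement including the options
print(example["options"])              # {'A': ..., 'B': ..., 'C': ..., 'D': ..., 'E': ...}
print(example["chain"])                # step-by-step solution in the HTML-like language
print(example["result"], example["result_float"])  # correct option and its float value
```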
|
|
|
|
|
## Sources |
|
|
|
- [mathqa HF dataset](https://huggingface.co./datasets/math_qa) |
|
- [official website](https://math-qa.github.io/) |
|
|
|
|
|
## Related work |
|
|
|
This dataset was created as part of a larger effort to train models capable of using a calculator during inference, which we call Calcformers.

We have released a collection of datasets on solving math problems with calculator interactions on HuggingFace, called the [Calc-X collection](https://huggingface.co./collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
|
You can find the models we trained in the [Calcformers collection](https://huggingface.co./collections/MU-NLPC/calcformers-65367392badc497807b3caf5). |
|
You can read more in our paper [Calc-X and Calcformers](https://arxiv.org/abs/2305.15017). |
|
|
|
|
|
## Licence |
|
|
|
Apache 2.0, consistent with the original dataset.
|
|
|
|
|
## Cite |
|
|
|
If you use this version of the dataset in research, please cite the [original MathQA paper](https://arxiv.org/abs/1905.13319) and the [Calc-X paper](https://arxiv.org/abs/2305.15017) as follows:
|
|
|
```bibtex |
|
@inproceedings{kadlcik-etal-2023-soft, |
|
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems", |
|
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek", |
|
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
|
month = dec, |
|
year = "2023", |
|
    address = "Singapore",
|
publisher = "Association for Computational Linguistics", |
|
url = "https://arxiv.org/abs/2305.15017", |
|
} |
|
``` |