---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- text-generation
tags:
- math word problems
- math
- arithmetic
dataset_info:
config_name: original-splits
features:
- name: id
dtype: string
- name: question
dtype: string
- name: chain
dtype: string
- name: result
dtype: string
- name: result_float
dtype: float64
- name: equation
dtype: string
- name: problem_type
dtype: string
splits:
- name: test
num_bytes: 328744
num_examples: 1000
download_size: 115404
dataset_size: 328744
configs:
- config_name: original-splits
data_files:
- split: test
path: original-splits/test-*
---
# Dataset Card for Calc-SVAMP
## Summary
The dataset is a collection of simple math word problems focused on arithmetic. It is derived from <https://github.com/arkilpatel/SVAMP/>.
The main addition in this dataset variant is the `chain` column. It was created by converting the solution into a simple HTML-like language that can be easily
parsed (e.g., by BeautifulSoup). The data contains three types of tags:
- `gadget`: a tag whose content is intended to be evaluated by calling an external tool (a SymPy-based calculator in this case)
- `output`: the output of the external tool
- `result`: the final answer to the math problem (a number)
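
The snippet below is a minimal sketch of parsing such a chain with BeautifulSoup. The example chain string and the attributes on the `<gadget>` tag (e.g. `id="calculator"`) are illustrative assumptions, not taken from the dataset itself:

```python
# A minimal sketch of parsing a `chain` string with BeautifulSoup.
# The chain below is hypothetical; the exact tag attributes are assumptions.
from bs4 import BeautifulSoup

chain = (
    '<gadget id="calculator">25 - 11</gadget>'
    '<output>14</output>'
    '<result>14</result>'
)

soup = BeautifulSoup(chain, "html.parser")
for tag in soup.find_all(["gadget", "output", "result"]):
    print(tag.name, tag.get_text())
# gadget 25 - 11
# output 14
# result 14
```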
## Supported Tasks
This variant of the dataset is intended for training chain-of-thought reasoning models that can use external tools to enhance the factuality of their responses.
It presents in-context scenarios where models can offload the computations in the reasoning chain to a calculator.
## Attributes
- `id`: problem id from the original dataset
- `question`: the question to be answered
- `chain`: a series of simple operations (derived from `equation`) that leads to the solution
- `result`: the result (a number) as a string
- `result_float`: the result converted to a floating-point number
- `equation`: a nested expression that evaluates to the correct result
- `problem_type`: the category of the problem
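
A minimal loading sketch using the Hugging Face `datasets` library follows. The repository id is an assumption based on the MU-NLPC organization mentioned later in this card; the config and split names come from the metadata above:

```python
# Minimal sketch: load the `original-splits` config and inspect one example.
# The repository id "MU-NLPC/Calc-svamp" is an assumption, not stated in this card.
from datasets import load_dataset

ds = load_dataset("MU-NLPC/Calc-svamp", "original-splits", split="test")

example = ds[0]
print(example["question"])      # the word problem
print(example["chain"])         # tool-augmented reasoning chain
print(example["result_float"])  # numeric answer
```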
## Content and data splits
The dataset contains the same data instances as the original dataset, except for the correction of an inconsistency between `equation` and `answer` in one data instance.
To the best of our knowledge, the original dataset does not provide an official train-test split, and we do not create one. However, the original authors used cross-validation in the official repository; for more info, see <https://github.com/arkilpatel/SVAMP/>.
## License
MIT, consistent with the original source dataset linked above.
## Related work
If you are interested in related datasets (or models), check out the MU-NLPC organization here on HuggingFace. We have released a few other datasets in a compatible format, as well as several models that use an external calculator during inference.
## Cite
If you use this version of the dataset in your research, please cite the original [SVAMP paper](https://www.semanticscholar.org/paper/Are-NLP-Models-really-able-to-Solve-Simple-Math-Patel-Bhattamishra/13c4e5a6122f3fa2663f63e49537091da6532f35).
TODO