---
license: mit
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- Reasoning
- Multi-Step-Deductive-Reasoning
- Logical-Reasoning
size_categories:
- 100K<n<1M
---
# PARARULE-Plus-Depth-2
This branch contains the Depth=2 subset of PARARULE-Plus, a deep multi-step reasoning dataset over natural language. It can be seen as an improvement on PARARULE (Peter Clark et al., 2020). The motivation is to generate deeper training samples: we add samples whose reasoning depth is greater than or equal to two, to explore whether Transformers have multi-step reasoning ability. PARARULE-Plus combines two types of entities, animals and people, with their corresponding relationships and attributes. From depth 2 to depth 5, there are around 100,000 samples at each depth, and nearly 400,000 samples in total.

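To make the notion of reasoning depth concrete, here is a minimal, self-contained sketch of closed-world forward chaining. The facts, rules, and the `entails` helper are invented for illustration and are not taken from the dataset; the point is that a "depth-2" question needs two chained rule applications to answer.

```python
# Illustrative facts and rules (invented, not from the dataset).
facts = {("bob", "kind")}
rules = [
    (("kind",), "smart"),   # if X is kind then X is smart
    (("smart",), "quiet"),  # if X is smart then X is quiet
]

def entails(facts, rules, query, max_depth):
    """Forward-chain up to max_depth rule applications (closed-world)."""
    known = set(facts)
    for _ in range(max_depth):
        new = set()
        for premises, conclusion in rules:
            for entity, _attr in list(known):
                # Fire the rule for an entity only if all premises hold.
                if all((entity, p) in known for p in premises):
                    new.add((entity, conclusion))
        if new <= known:  # fixed point reached early
            break
        known |= new
    return query in known
```

With these toy rules, deriving "bob is quiet" requires chaining both rules, so it succeeds at depth 2 but not at depth 1.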
Here are the original links for PARARULE-Plus, including the paper, project, and data.

Paper: https://www.cs.ox.ac.uk/isg/conferences/tmp-proceedings/NeSy2022/paper15.pdf

Project: https://github.com/Strong-AI-Lab/Multi-Step-Deductive-Reasoning-Over-Natural-Language

Data: https://github.com/Strong-AI-Lab/PARARULE-Plus

In this Hugging Face version, we pre-processed the dataset and use `1` to represent `true` and `0` to represent `false`, which makes it easier to train classification models directly.

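The label preprocessing described above amounts to a simple mapping. The sketch below (`encode_label` is an illustrative helper, not part of the released code) shows the intended convention:

```python
def encode_label(label: str) -> int:
    """Map a textual truth value ("true"/"false") to the integer
    label (1/0) used in this Hugging Face release."""
    mapping = {"true": 1, "false": 0}
    return mapping[label.lower()]
```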
## How to load the dataset?
```python
from datasets import load_dataset

dataset = load_dataset("qbao775/PARARULE-Plus-Depth-2")
```

## How to train a model using the dataset?

You can follow the `text-classification` [example code](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification#text-classification-examples) from Hugging Face. Note that the GLUE example scripts expect `--task_name` to be one of the GLUE tasks, so you may need to adapt the data-loading part of the script (or export the dataset to files and pass `--train_file`/`--validation_file`) to use this dataset.

Here is an example command. You can change the model name if you want.
```bash
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name qbao775/PARARULE-Plus-Depth-2 \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/PARARULE-Plus-Depth-2/
```

## Citation
```bibtex
@inproceedings{bao2022multi,
  title={Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation},
  author={Qiming Bao and Alex Yuxuan Peng and Tim Hartill and Neset Tan and Zhenyun Deng and Michael Witbrock and Jiamou Liu},
  year={2022},
  booktitle={The 2nd International Joint Conference on Learning and Reasoning and 16th International Workshop on Neural-Symbolic Learning and Reasoning (IJCLR-NeSy 2022)}
}
```