---
license: mit
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- logical-reasoning
- logical-equivalence
- contrastive-learning
---

# AMR-LE
This branch contains the model weights for AMR-LE, a model fine-tuned on AMR-based logic-driven augmented data. Each training example takes the form `(original sentence, logically equivalent sentence, logically inequivalent sentence)`. We use Abstract Meaning Representation (AMR) to automatically construct the logically equivalent and logically inequivalent sentences, and we use contrastive learning to train the model to identify whether two sentences are logically equivalent or inequivalent. You are welcome to fine-tune the model weights on downstream tasks such as logical reasoning reading comprehension (ReClor and LogiQA) and natural language inference (MNLI, MRPC, QNLI, RTE and QQP). We achieved #2 on the ReClor leaderboard.
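
As a rough sketch of how such triples can be used contrastively (an illustration only, not the exact objective from the paper; the base encoder name, mean pooling, and margin below are assumptions):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Assumed base encoder for this sketch; the released checkpoint is
# DeBERTa-V2-XXLarge-based, but any encoder would illustrate the idea.
name = "microsoft/deberta-v2-xxlarge"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

def embed(sentences):
    """Mean-pool the final hidden states into one vector per sentence."""
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

anchor = embed(["If Alice is happy, then Bob is smart."])
positive = embed(["If Bob is not smart, then Alice is not happy."])  # equivalent
negative = embed(["If Alice is not happy, then Bob is smart."])      # inequivalent

# Pull the logically equivalent sentence closer to the original than the
# logically inequivalent one (the margin is an illustrative choice).
loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0)
loss.backward()
```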

Here are the original links for AMR-LE, including the paper, project and leaderboard.

Paper: https://arxiv.org/abs/2305.12599

Project: https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning

Leaderboard: https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347

In this repository, we upload the model weights trained on the dataset with a positive-to-negative sample ratio of 1:2. We use AMR together with four logical equivalence laws `(Contraposition law, Commutative law, Implication law, Double negation law)` to construct four different kinds of logically equivalent/inequivalent sentences, illustrated below.
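
For concreteness, here is one triple per law, built from the sentences used in the examples in the next section (hand-written illustrations of the data format, not actual rows from the released dataset):

```python
# One (original, logically equivalent, logically inequivalent) triple per law.
triples = {
    "contraposition": (
        "If Alice is happy, then Bob is smart.",
        "If Bob is not smart, then Alice is not happy.",
        "If Alice is not happy, then Bob is smart.",
    ),
    "commutative": (
        "The bald eagle is clever and the wolf is fierce.",
        "The wolf is fierce and the bald eagle is clever.",
        "The wolf is not fierce and the bald eagle is not clever.",
    ),
    "implication": (
        "If Alan is kind, then Bob is clever.",
        "Alan is not kind or Bob is clever.",
        "Alan is kind or Bob is clever.",
    ),
    "double_negation": (
        "Alice is happy.",
        "Alice is not sad.",
        "Alice is not happy.",
    ),
}
```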

## How to interact with the model on this web page?
Here are some test examples that you can copy and paste into the user input area on the right.
The expected answer for the following example is 0: the two sentences are logically inequivalent. Use the contraposition law `(If A then B <=> If not B then not A)` to show that this pair is not equivalent.
```
If Alice is happy, then Bob is smart.
If Alice is not happy, then Bob is smart.
```

The expected answer for the following example is 1: the two sentences are logically equivalent. Use the contraposition law `(If A then B <=> If not B then not A)` to show that this pair is equivalent.
```
If Alice is happy, then Bob is smart.
If Bob is not smart, then Alice is not happy.
```

The expected answer for the following example is 0: the two sentences are logically inequivalent. Use the double negation law `(A <=> not not A)` to show that this pair is not equivalent.
```
Alice is happy.
Alice is not happy.
```

The expected answer for the following example is 1: the two sentences are logically equivalent. Use the double negation law `(A <=> not not A)` to show that this pair is equivalent; here `not sad` plays the role of `not not happy`.
```
Alice is happy.
Alice is not sad.
```

The expected answer for the following example is 0: the two sentences are logically inequivalent. Use the implication law `(If A then B <=> not A or B)` to show that this pair is not equivalent. The `or` in `not A or B` corresponds to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is kind or Bob is clever.
```

The expected answer for the following example is 1: the two sentences are logically equivalent. Use the implication law `(If A then B <=> not A or B)` to show that this pair is equivalent. The `or` in `not A or B` corresponds to the meaning of `otherwise` in natural language.
```
If Alan is kind, then Bob is clever.
Alan is not kind or Bob is clever.
```

The expected answer for the following example is 0: the two sentences are logically inequivalent. Use the commutative law `(A and B <=> B and A)` to show that this pair is not equivalent.
```
The bald eagle is clever and the wolf is fierce.
The wolf is not fierce and the bald eagle is not clever.
```

The expected answer for the following example is 1: the two sentences are logically equivalent. Use the commutative law `(A and B <=> B and A)` to show that this pair is equivalent.
```
The bald eagle is clever and the wolf is fierce.
The wolf is fierce and the bald eagle is clever.
```

## How to load the model weights?
```python
from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and the fine-tuned encoder weights from the Hub.
tokenizer = AutoTokenizer.from_pretrained("qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-2")
model = AutoModel.from_pretrained("qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-2")
```
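
To actually score a sentence pair as in the examples above, the checkpoint presumably exposes a two-class sequence-classification head (0 = inequivalent, 1 = equivalent, matching the expected answers above); here is a minimal inference sketch under that assumption:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumes the checkpoint carries a two-class classification head
# (0 = logically inequivalent, 1 = logically equivalent).
name = "qbao775/AMR-LE-DeBERTa-V2-XXLarge-Contraposition-Double-Negation-Implication-Commutative-Pos-Neg-1-2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

sent1 = "If Alice is happy, then Bob is smart."
sent2 = "If Bob is not smart, then Alice is not happy."

# Encode the pair as a single cross-encoder input and classify it.
inputs = tokenizer(sent1, sent2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # expected: 1 (logically equivalent)
```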

## Citation
```
@article{bao2023contrastive,
  title={Contrastive Learning with Logic-driven Data Augmentation for Logical Reasoning over Text},
  author={Bao, Qiming and Peng, Alex Yuxuan and Deng, Zhenyun and Zhong, Wanjun and Tan, Neset and Young, Nathan and Chen, Yang and Zhu, Yonghua and Witbrock, Michael and Liu, Jiamou},
  journal={arXiv preprint arXiv:2305.12599},
  year={2023}
}
```