---
license: apache-2.0
---

# LogicLLaMA Model Card

## Model details

LogicLLaMA is a language model that translates natural-language (NL) statements into first-order logic (FOL) rules.
It is trained by fine-tuning the LLaMA2-13B model on the [MALLS-v0.1](https://huggingface.co/datasets/yuan-yang/MALLS-v0) dataset.
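
For example, an NL-to-FOL translation maps a statement to a rule like the following (an illustrative pair, not one taken verbatim from MALLS):

```
NL:  Every student who studies hard passes the exam.
FOL: ∀x ((Student(x) ∧ StudiesHard(x)) → PassesExam(x))
```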

**Model type:**
This repo contains the LoRA delta weights for the naive correction LogicLLaMA, which,
given an NL statement and a predicted FOL rule, corrects potential errors in the predicted rule.
It is used as a downstream model together with ChatGPT:
ChatGPT does the "heavy lifting" by predicting an initial FOL translation, and LogicLLaMA then refines that rule by correcting potential errors.
In our experiments, this mode yields better performance than either ChatGPT alone or the direct translation LogicLLaMA.
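
For illustration, a correction pair might look like the following; this is a made-up example, and the exact prompt format is defined in the project repo:

```
NL:            All squares are rectangles.
Predicted FOL: ∀x (Square(x) ∧ Rectangle(x))
Corrected FOL: ∀x (Square(x) → Rectangle(x))
```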

We also provide the delta weights for other modes:
- [direct translation LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-7B](https://huggingface.co/yuan-yang/LogicLLaMA-7b-naive-correction-delta-v0.1)
- [direct translation LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-direct-translate-delta-v0.1)
- [naive correction LogicLLaMA-13B](https://huggingface.co/yuan-yang/LogicLLaMA-13b-naive-correction-delta-v0.1)

**License:**
Apache License 2.0

## Using the model

Check out how to use the model on our project page: https://github.com/gblackout/LogicLLaMA
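
As a rough sketch, the delta weights can presumably be loaded with the Hugging Face `peft` library. The base model id, prompt template, and generation settings below are assumptions, not the official recipe; see the project page above for the authoritative instructions:

```python
# Hypothetical loading sketch (not the official recipe), assuming the standard
# transformers + peft APIs and a LLaMA-2-13B base checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-13b-hf"  # assumed base model id
DELTA = "yuan-yang/LogicLLaMA-13b-naive-correction-delta-v0.1"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(model, DELTA)  # apply the LoRA delta weights

# The prompt combines the NL statement with ChatGPT's predicted FOL rule,
# using the prompt template from the project repo (elided here).
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```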

**Primary intended uses:**
LogicLLaMA is intended for research use.

## Citation

```
@article{yang2023harnessing,
  title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation},
  author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri},
  journal={arXiv preprint arXiv:2305.15541},
  year={2023}
}
```