---
license: llama2
datasets:
- ACE05
- conll2003
- conll2012_ontonotesv5
- rams
- tacred
- fewrel
- maven
language:
- en
metrics:
- f1
pipeline_tag: text-generation
tags:
- text-generation-inference
- Information Extraction
- IE
- Named Entity Recognition
- Event Extraction
- Relation Extraction
- LLaMA
---

# Model Card for ADELIE-DPO-3B

<p align="justify">
We introduce <b>ADELIE</b> (<b>A</b>ligning large language mo<b>DEL</b>s on <b>I</b>nformation <b>E</b>xtraction), an aligned LLM that effectively solves various IE tasks, including closed IE, open IE, and on-demand IE. We first collect and construct a high-quality alignment corpus, <font face="Verdana">IEInstruct</font>, for IE. We then train ADELIE<sub>SFT</sub> on <font face="Verdana">IEInstruct</font> with instruction tuning, and further train it with a direct preference optimization (DPO) objective, resulting in ADELIE<sub>DPO</sub>. Extensive experiments on various held-out IE datasets demonstrate that our models (ADELIE<sub>SFT</sub> and ADELIE<sub>DPO</sub>) achieve state-of-the-art (SoTA) performance among open-source models. We further explore the general capabilities of ADELIE, and the experimental results reveal that they do not exhibit a noticeable decline.
</p>

- 📖 Paper: [ADELIE: Aligning Large Language Models on Information Extraction](https://arxiv.org/abs/2405.05008)
- 🐧 GitHub: [THU-KEG/ADELIE](https://github.com/THU-KEG/ADELIE/tree/main)
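
A minimal usage sketch with 🤗 Transformers is shown below. The repository id `THU-KEG/ADELIE-DPO-3B` and the plain-text instruction are illustrative assumptions; see the GitHub repository for the exact IEInstruct prompt format.

```python
# Minimal inference sketch (assumed repo id; prompt wording is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THU-KEG/ADELIE-DPO-3B"  # assumption: HF repo id under the THU-KEG org
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# An illustrative closed-IE (NER-style) instruction; the official prompt
# templates live in the ADELIE GitHub repository.
prompt = (
    "Please extract all named entities (person, organization, location) from "
    "the following text and list them as (entity, type) pairs.\n"
    "Text: Barack Obama visited Microsoft's headquarters in Redmond."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```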


# Model Performance

The table below reports the average F1 scores (%) of the ADELIE models on closed IE, open IE, and on-demand IE tasks, as well as their average performance (%) on general benchmarks. For dataset details, please refer to the paper.

| Model           | Closed IE | Open IE  | On-demand IE | General Average Score |
|-----------------|-----------|----------|--------------|-----------------------|
| Llama2 7B       | 5.7       | 5.6      | 22.4         | 52.2                  |
| ADELIE-SFT      | 42.6      | 46.9     | 60.4         | 53.5                  |
| ADELIE-DPO      | **42.7**  | **47.6** | **60.5**     | **53.8**              |
| Llama3.2 3B     | 19.1      | 18.5     | 20.8         | 55.5                  |
| ADELIE-SFT-3B   | **41.8**  | 47.6     | **60.8**     | **55.6**              |
| ADELIE-DPO-3B   | 39.2      | **47.8** | 60.7         | **55.6**              |
| Qwen2.5 1.5B    | 16.5      | 14.2     | 20.5         | 54.6                  |
| ADELIE-SFT-1.5B | 37.7      | 44.6     | 58.9         | 55.0                  |
| ADELIE-DPO-1.5B | **38.5**  | **45.6** | **59.2**     | **55.1**              |
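
For orientation, the sketch below shows one common way to compute micro-F1 over extracted items. It is an illustrative helper that assumes predictions are normalized to (entity, type) tuples; it is not the paper's official evaluation script.

```python
# Illustrative micro-F1 over extracted items, e.g. (entity, type) pairs.
from collections import Counter

def micro_f1(pred: list[tuple], gold: list[tuple]) -> float:
    """Micro-averaged F1 between predicted and gold item multisets."""
    pred_counts, gold_counts = Counter(pred), Counter(gold)
    tp = sum((pred_counts & gold_counts).values())  # items matched in both
    if tp == 0:
        return 0.0
    precision = tp / sum(pred_counts.values())
    recall = tp / sum(gold_counts.values())
    return 2 * precision * recall / (precision + recall)

print(micro_f1([("Obama", "PER"), ("Redmond", "LOC")],
               [("Obama", "PER"), ("Microsoft", "ORG"), ("Redmond", "LOC")]))
# -> 0.8 (precision 1.0, recall 2/3)
```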



### Model Description

- **Developed by:** Yunjia Qi, Hao Peng, Xiaozhi Wang, Bin Xu, Lei Hou, Juanzi Li
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** LLaMA2 license for the base model.
- **Finetuned from model:** LLaMA3.2-3B