---
license: cc-by-sa-4.0
datasets:
- Blablablab/ALOE
---

### Model Description

This model classifies the *appraisal* expressed in a sentence. It is trained on the [ALOE](https://huggingface.co/datasets/Blablablab/ALOE) dataset.

**Input:** a sentence

**Labels:** No Label, Pleasantness, Anticipated Effort, Certainty, Objective Experience, Self-Other Agency, Situational Control, Advice, Trope

**Output:** logits, in the order of the labels above
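As an illustration, the predicted label is the argmax over the nine logits. The logit values below are made up for the example, not real model output:

```python
# Labels in the order the model emits logits.
LABELS = ["No Label", "Pleasantness", "Anticipated Effort", "Certainty",
          "Objective Experience", "Self-Other Agency", "Situational Control",
          "Advice", "Trope"]

def predict_label(logits):
    """Return the label whose logit is largest."""
    return LABELS[max(range(len(logits)), key=lambda i: logits[i])]

# Made-up logits for one sentence; index 1 is the largest.
example_logits = [0.1, 2.3, -0.5, 0.0, 1.1, -1.2, 0.4, 0.9, -0.3]
print(predict_label(example_logits))  # -> Pleasantness
```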

**Model architecture:** OpenPrompt + RoBERTa

**Developed by:** Jiamin Yang

### Model Performance

##### Overall performance

| Macro-F1 | Recall | Precision |
| :------: | :----: | :-------: |
|   0.56   |  0.57  |   0.58    |

##### Per-label performance

| Label                | Recall | Precision |
| -------------------- | :----: | :-------: |
| No Label             |  0.34  |   0.64    |
| Pleasantness         |  0.69  |   0.54    |
| Anticipated Effort   |  0.46  |   0.46    |
| Certainty            |  0.58  |   0.47    |
| Objective Experience |  0.58  |   0.69    |
| Self-Other Agency    |  0.62  |   0.55    |
| Situational Control  |  0.31  |   0.55    |
| Advice               |  0.72  |   0.66    |
| Trope                |  0.80  |   0.67    |

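The overall Recall and Precision are consistent with unweighted (macro) averages of the per-label scores, which can be checked directly:

```python
# Per-label scores copied from the table above, in label order.
recalls    = [0.34, 0.69, 0.46, 0.58, 0.58, 0.62, 0.31, 0.72, 0.80]
precisions = [0.64, 0.54, 0.46, 0.47, 0.69, 0.55, 0.55, 0.66, 0.67]

macro_recall = round(sum(recalls) / len(recalls), 2)
macro_precision = round(sum(precisions) / len(precisions), 2)
print(macro_recall, macro_precision)  # -> 0.57 0.58, matching the overall table
```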
### Getting Started

```python
import torch
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer
from openprompt import PromptForClassification

checkpoint_file = 'your_path_to/empathy-appraisal-span.pt'

# Load the pretrained RoBERTa backbone and its OpenPrompt tokenizer wrapper.
plm, tokenizer, model_config, WrapperClass = load_plm('roberta', 'roberta-large')
template_text = 'The sentence {"placeholder":"text_a"} has the label {"mask"}.'
template = ManualTemplate(tokenizer=tokenizer, text=template_text)

num_classes = 9
label_words = [['No Label'], ['Pleasantness'], ['Anticipated Effort'], ['Certainty'],
               ['Objective Experience'], ['Self-Other Agency'], ['Situational Control'],
               ['Advice'], ['Trope']]
verbalizer = ManualVerbalizer(tokenizer, num_classes=num_classes, label_words=label_words)
prompt_model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer,
                                       freeze_plm=False).to('cuda')

checkpoint = torch.load(checkpoint_file)
state_dict = checkpoint['model_state_dict']

# Depending on the torch/transformers version, the checkpoint may contain a
# position_ids buffer that the current model does not; drop it if present.
state_dict.pop('prompt_model.plm.roberta.embeddings.position_ids', None)

prompt_model.load_state_dict(state_dict)
```
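Once the weights are loaded, inference can be sketched as follows. This is a minimal sketch, assuming OpenPrompt's `InputExample`/`PromptDataLoader` API and the `template`, `tokenizer`, `WrapperClass`, and `prompt_model` objects created above; the input sentence is made up:

```python
import torch
from openprompt import PromptDataLoader
from openprompt.data_utils import InputExample

# Hypothetical input sentence; text_a fills the {"placeholder":"text_a"} slot
# of the template defined above.
examples = [InputExample(guid=0, text_a='I am so sorry you had to go through that.')]

data_loader = PromptDataLoader(dataset=examples, template=template,
                               tokenizer=tokenizer,
                               tokenizer_wrapper_class=WrapperClass,
                               max_seq_length=256, batch_size=1)

prompt_model.eval()
with torch.no_grad():
    for inputs in data_loader:
        inputs = inputs.cuda()
        logits = prompt_model(inputs)               # shape: [1, 9], label order as above
        pred = torch.argmax(logits, dim=-1).item()  # index into the label list
```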