# ALBERT for Math AR

This model was further pre-trained on questions and answers from Mathematics Stack Exchange. It is based on ALBERT base v2 and uses the same tokenizer. After pre-training, the model was fine-tuned on math question-answer retrieval: the sequence classification head outputs a relevance score when the question is passed as the first segment and the answer as the second. This score can be used to rank candidate answers for retrieval.

## Usage

```python
# Based on https://huggingface.co/docs/transformers/main/en/task_summary#sequence-classification
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForSequenceClassification.from_pretrained("AnReu/albert-for-math-ar-base-ft")

classes = ["non relevant", "relevant"]

sequence_0 = "How can I calculate x in $3x = 5$"
sequence_1 = "Just divide by 3: $x = \\frac{5}{3}$"
sequence_2 = "The general rule for squaring a sum is $(a+b)^2=a^2+2ab+b^2$"

# The tokenizer automatically adds the model-specific special tokens ([CLS] and [SEP])
# to each sequence pair and computes the attention masks.
irrelevant = tokenizer(sequence_0, sequence_2, return_tensors="pt")
relevant = tokenizer(sequence_0, sequence_1, return_tensors="pt")

irrelevant_classification_logits = model(**irrelevant).logits
relevant_classification_logits = model(**relevant).logits

irrelevant_results = torch.softmax(irrelevant_classification_logits, dim=1).tolist()[0]
relevant_results = torch.softmax(relevant_classification_logits, dim=1).tolist()[0]

# Should be irrelevant
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(irrelevant_results[i] * 100))}%")

# Should be relevant
for i in range(len(classes)):
    print(f"{classes[i]}: {int(round(relevant_results[i] * 100))}%")
```
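
Since the model scores one question-answer pair at a time, ranking a set of candidate answers amounts to scoring every pair and sorting by the probability of the "relevant" class. Below is a minimal sketch building on the objects defined above; the question and candidate answers are made-up examples:

```python
# Rank several candidate answers for one question by relevance score.
question = "How can I calculate x in $3x = 5$"
candidates = [
    "Just divide by 3: $x = \\frac{5}{3}$",
    "The general rule for squaring a sum is $(a+b)^2=a^2+2ab+b^2$",
    "Isolate x by dividing both sides of the equation by 3.",
]

# Tokenize all (question, answer) pairs as one padded batch.
batch = tokenizer([question] * len(candidates), candidates,
                  padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits

# The probability of the "relevant" class (index 1) is the ranking score.
scores = torch.softmax(logits, dim=1)[:, 1].tolist()

# Print candidates from most to least relevant.
for score, answer in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {answer}")
```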

## Reference

If you use this model, please consider citing our paper:

```bibtex
@inproceedings{reusch2021tu_dbs,
  title={TU\_DBS in the ARQMath Lab 2021, CLEF},
  author={Reusch, Anja and Thiele, Maik and Lehner, Wolfgang},
  year={2021},
  organization={CLEF}
}
```