---
tags:
- generated_from_keras_callback
model-index:
- name: electra-nli_finetuned
  results: []
datasets:
- snli
- scitail
- multi_nli
- alisawuffles/WANLI
- pietrolesci/nli_fever
- anli
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# electra-nli_finetuned

The base model is [electra-small-discriminator](https://huggingface.co./google/electra-small-discriminator).
It has been fine-tuned on: [snli](https://huggingface.co./datasets/snli), [scitail](https://huggingface.co./datasets/scitail), 
[wanli](https://huggingface.co./datasets/alisawuffles/WANLI), [mnli](https://huggingface.co./datasets/multi_nli), 
[fever_nli](https://huggingface.co./datasets/pietrolesci/nli_fever), [anli](https://huggingface.co./datasets/anli).  
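
Below is a minimal inference sketch for using the checkpoint as an NLI classifier with the TensorFlow classes from Transformers. The repository id `your-username/electra-nli_finetuned` is a hypothetical placeholder, and the three-way label order (entailment / neutral / contradiction) is an assumption that should be checked against `model.config.id2label` of the actual checkpoint.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Hypothetical repo id; replace with the actual model path or Hub id.
model_id = "your-username/electra-nli_finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair as a single sequence-pair input.
inputs = tokenizer(premise, hypothesis, return_tensors="tf", truncation=True)
logits = model(**inputs).logits
probs = tf.nn.softmax(logits, axis=-1).numpy()[0]

# Label order is an assumption; verify with model.config.id2label.
for label, p in zip(["entailment", "neutral", "contradiction"], probs):
    print(f"{label}: {p:.3f}")
```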



## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

More information needed

### Training results

The model achieved the following accuracy during training:

| Dataset | Accuracy |
|---|---|
| snli | 89.15% |
| scitail | 90.08% |
| wanli | 67.84% |
| mnli | 81.95% |
| nli_fever | 74.14% |
| anli-r1_test | 46.60% |
| anli-r2_test | 42.50% |
| anli-r3_test | 43.08% |


### Framework versions

- Transformers 4.31.0
- TensorFlow 2.12.0
- Tokenizers 0.13.3