Update README.md
README.md
CHANGED
@@ -16,36 +16,33 @@ The key features are:
This change drops the earlier benchmark table, which compared only the teacher and the small student:

| Model | Size, Mb. | CPU latency, sec. | GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|---|---|---|---|---|---|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-small-cased-conversational) | 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 |

The updated section follows.
* two separate inputs for the student: tokens obtained using the student tokenizer (for MLM) and teacher tokens greedily split into student tokens (for MSE); see the sketch below

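As a rough illustration of the second input, here is a minimal sketch of greedy longest-match splitting of one teacher token into student sub-tokens. The checkpoint name and the WordPiece `##` convention are assumptions for illustration, not the exact training code.

```python
from transformers import AutoTokenizer

# Assumed student checkpoint; the actual training code may differ.
student_tok = AutoTokenizer.from_pretrained(
    "DeepPavlov/distilrubert-tiny-cased-conversational"
)
student_vocab = set(student_tok.get_vocab())

def greedy_split(token: str) -> list[str]:
    """Greedily split one teacher token into the longest student sub-tokens."""
    pieces, start = [], 0
    while start < len(token):
        end = len(token)
        while end > start:
            # WordPiece marks word-internal pieces with "##" (an assumption here)
            piece = token[start:end] if start == 0 else "##" + token[start:end]
            if piece in student_vocab:
                pieces.append(piece)
                break
            end -= 1
        if end == start:  # no prefix matched: fall back to the unknown token
            return [student_tok.unk_token]
        start = end
    return pieces

# Sub-tokens produced this way align teacher positions with student positions,
# so hidden states can be compared with an MSE loss.
print(greedy_split("сегодня"))
```
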
Here is a comparison between the teacher model (`Conversational RuBERT`) and other distilled models.

| Model name | \# params, M | \# vocab, K | Mem., MB |
|---|---|---|---|
| `rubert-base-cased-conversational` | 177.9 | 120 | 679 |
| `distilrubert-base-cased-conversational` | 135.5 | 120 | 517 |
| `distilrubert-small-cased-conversational` | 107.1 | 120 | 409 |
| `cointegrated/rubert-tiny` | 11.8 | **30** | 46 |
| **distilrubert-tiny-cased-conversational** | **10.4** | 31 | **41** |

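The parameter counts above can be reproduced with a few lines of `transformers`; the checkpoint name below is an assumption for illustration.

```python
from transformers import AutoModel

# Load the distilled model and count its parameters (assumed checkpoint name).
model = AutoModel.from_pretrained("DeepPavlov/distilrubert-tiny-cased-conversational")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")
```
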
DistilRuBERT-tiny was trained for about 100 hours on 7 nVIDIA Tesla P100-SXM2.0 16 GB GPUs.

We used `PyTorchBenchmark` from `transformers` to evaluate the model's performance and compare it with other pre-trained language models for Russian. All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an nVIDIA Tesla P100-SXM2.0 16 GB GPU; a minimal benchmark sketch follows the table below.

| Model name | Batch size | Seq len | CPU time, s | GPU time, s | CPU mem, MB | GPU mem, MB |
|---|---|---|---|---|---|---|
| `rubert-base-cased-conversational` | 1 | 512 | 0.147 | 0.014 | 897 | 1531 |
| `distilrubert-base-cased-conversational` | 1 | 512 | 0.083 | 0.006 | 766 | 1423 |
| `distilrubert-small-cased-conversational` | 1 | 512 | 0.03 | **0.002** | 600 | 1243 |
| `cointegrated/rubert-tiny` | 1 | 512 | 0.041 | 0.003 | 272 | 919 |
| `distilrubert-tiny-cased-conversational` | 1 | 512 | **0.023** | 0.003 | **206** | **855** |
| `rubert-base-cased-conversational` | 16 | 512 | 2.839 | 0.182 | 1499 | 2071 |
| `distilrubert-base-cased-conversational` | 16 | 512 | 1.065 | 0.055 | 2541 | 2927 |
| `distilrubert-small-cased-conversational` | 16 | 512 | 0.373 | **0.003** | 1360 | 1943 |
| `cointegrated/rubert-tiny` | 16 | 512 | 0.628 | 0.004 | 1293 | 2221 |
| **distilrubert-tiny-cased-conversational** | 16 | 512 | **0.219** | **0.003** | **633** | **1291** |

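A minimal sketch of such a benchmark run is below. `PyTorchBenchmark` ships with older `transformers` releases (it is deprecated in recent ones), and the batch sizes and sequence length simply mirror the table above; they are assumptions, not necessarily the exact settings used.

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# Assumed invocation; the defaults measure inference speed and memory.
args = PyTorchBenchmarkArguments(
    models=[
        "DeepPavlov/rubert-base-cased-conversational",
        "DeepPavlov/distilrubert-tiny-cased-conversational",
    ],
    batch_sizes=[1, 16],
    sequence_lengths=[512],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()  # prints time (s) and memory (MB) per model/batch/length
```
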
To evaluate model quality, we fine-tuned DistilRuBERT-tiny on classification (RuSentiment, ParaPhraser), NER, and question answering datasets for Russian and obtained scores very similar to those of [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational).

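As a starting point for reproducing such fine-tuning, here is a minimal classification sketch; the checkpoint name, label count, and single optimization step are illustrative assumptions rather than the actual training setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint and label count (e.g., RuSentiment-style polarity labels).
name = "DeepPavlov/distilrubert-tiny-cased-conversational"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=5)
model.train()

batch = tokenizer(["какой хороший день!"], return_tensors="pt", padding=True)
labels = torch.tensor([4])

# One illustrative optimization step; a real run would loop over a DataLoader.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```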