Commit `0bd8c13` (parent: `b1d1135`): Update README.md

README.md (changed):
#### **DistilBERT Uncased Tokenizer**
***
- The text is tokenized using the **'distilbert-base-uncased'** HuggingFace tokenizer.
- For training, the text is truncated to a block size of 200 tokens.
- Max-length padding is used to keep the input shape consistent.
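The tokenization settings above can be sketched with the standard Hugging Face `transformers` API; this is an illustrative snippet, not code taken from this repository, and the sample sentence is made up:

```python
# Sketch of the tokenization step: truncate to a block size of 200
# and pad every example to max length for a consistent input shape.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

text = "DistilBERT is a small and fast text classifier."
encoded = tokenizer(
    text,
    truncation=True,       # cut sequences to the block size
    padding="max_length",  # pad shorter sequences up to the block size
    max_length=200,        # block size of 200 tokens
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # torch.Size([1, 200])
```

With `padding="max_length"`, every batch has the same `(batch, 200)` shape regardless of sentence length, which is what keeps the input data shape consistent during training.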

#### **DistilBERT Uncased Model**
***
- The model that is fine-tuned is DistilBERT, **'distilbert-base-uncased'**.
- It is a small, fast text classifier, well suited to real-time inference.
- It has 40% fewer parameters than the base BERT model.
- It is 60% faster while preserving 95% of base BERT's performance.
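Loading this model as a classification head for fine-tuning can be sketched with the standard `transformers` API; the two-label setup here is an illustrative assumption, not a detail taken from this repository:

```python
# Minimal sketch of loading DistilBERT for sequence classification.
# num_labels=2 is an assumed binary-classification setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

inputs = tokenizer("A quick test sentence.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one raw score per label
print(logits.shape)  # torch.Size([1, 2])

# The size claim is easy to check: DistilBERT has roughly 66M parameters,
# versus roughly 110M for bert-base-uncased.
print(sum(p.numel() for p in model.parameters()))
```

The classification head on top of the base model is randomly initialized, so the logits are meaningless until the model is fine-tuned on labeled data.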