---
library_name: transformers
tags:
- generated_from_keras_callback
model-index:
- name: DistilBERT-base-uncased-english-finetuned-squad
  results: []
datasets:
- rajpurkar/squad
language:
- en
base_model:
- distilbert/distilbert-base-uncased
pipeline_tag: question-answering
---


# DistilBERT-base-uncased-english-finetuned-squad

This model was fine-tuned on the SQuAD dataset. Load it with `TFDistilBertForQuestionAnswering`, and use `DistilBertTokenizerFast` to tokenize inputs so they are accepted by the model.
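A minimal usage sketch is below. The repo id is a placeholder (the full Hub path including the namespace is not given in this card), and the question/context strings are illustrative only:

```python
import tensorflow as tf
from transformers import DistilBertTokenizerFast, TFDistilBertForQuestionAnswering

# Placeholder repo id: substitute the actual Hub path for this model.
model_id = "DistilBERT-base-uncased-english-finetuned-squad"

tokenizer = DistilBertTokenizerFast.from_pretrained(model_id)
model = TFDistilBertForQuestionAnswering.from_pretrained(model_id)

question = "Which dataset was the model fine-tuned on?"
context = "This DistilBERT model was fine-tuned on the SQuAD dataset for question answering."

inputs = tokenizer(question, context, return_tensors="tf")
outputs = model(**inputs)

# The model predicts start and end logits over the input tokens; take the
# argmax of each to recover the answer span, then decode it back to text.
start = int(tf.argmax(outputs.start_logits, axis=-1)[0])
end = int(tf.argmax(outputs.end_logits, axis=-1)[0])
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```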

## Model description

The base DistilBERT model fine-tuned on the SQuAD dataset for extractive, context-based question answering.


## Training procedure

The model was fine-tuned for 3 epochs.

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: Adam with learning_rate=5e-5
- training_precision: float32
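The exact training script is not included in this card; the following is a minimal sketch of how these settings map onto Keras, assuming tokenized SQuAD batches in `train_dataset` / `val_dataset` (hypothetical names):

```python
import tensorflow as tf
from transformers import TFDistilBertForQuestionAnswering

model = TFDistilBertForQuestionAnswering.from_pretrained(
    "distilbert/distilbert-base-uncased"
)

# Adam at 5e-5 in float32, per the hyperparameters listed above.
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)

# transformers TF models compute their QA loss internally when the batches
# contain start_positions / end_positions labels, so no loss is passed here.
model.compile(optimizer=optimizer)

# model.fit(train_dataset, validation_data=val_dataset, epochs=3)
```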

### Training results

Final-epoch training loss: 0.6417; validation loss: 1.2772.
Benchmark evaluation has not yet been performed.

### Framework versions

- Transformers 4.44.2
- TensorFlow 2.17.0
- Datasets 3.0.0
- Tokenizers 0.19.1