---
license: apache-2.0
base_model: facebook/deit-tiny-distilled-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: results
  results: []
pipeline_tag: image-classification
datasets:
- Mozilla/docornot
---
This model is a fine-tuned version of [facebook/deit-tiny-distilled-patch16-224](https://huggingface.co./facebook/deit-tiny-distilled-patch16-224) on the [docornot](https://huggingface.co./datasets/tarekziade/docornot) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
# CO2 emissions
This model was trained on an Apple M1 and emitted 0.322 g of CO2 during training (measured with [CodeCarbon](https://codecarbon.io/)).
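As a rough sketch of how such a measurement can be taken (not the exact training script), CodeCarbon's `EmissionsTracker` is wrapped around the training run:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run the fine-tuning job here ...
emissions_kg = tracker.stop()  # total emissions in kg CO2eq
print(f"{emissions_kg * 1000:.3f} g CO2eq")
```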
# Model description
This model is a distilled Vision Transformer (ViT) model.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded.
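As an illustration of the preprocessing (a sketch using the base checkpoint's image processor, not code taken from this repository), images are resized to 224x224 and then split into 14 x 14 = 196 patches of 16x16 pixels:

```python
import numpy as np
from PIL import Image
from transformers import AutoImageProcessor

# Assumption: any RGB image works; a random one stands in for a real photo or scan.
image = Image.fromarray(np.uint8(np.random.rand(480, 640, 3) * 255))

processor = AutoImageProcessor.from_pretrained("facebook/deit-tiny-distilled-patch16-224")
inputs = processor(images=image, return_tensors="pt")

print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
# 224 / 16 = 14 patches per side, so the model sees 14 * 14 = 196 patch embeddings per image.
```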
# Intended uses & limitations
You can use this model to detect whether an image is a picture (photo) or a document.
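A minimal inference sketch using the Transformers `image-classification` pipeline; the repository id below is a placeholder, and the label names depend on the training configuration:

```python
from transformers import pipeline

# Assumption: "mozilla/docornot" is a placeholder; use the Hub repo id this model is published under.
classifier = pipeline("image-classification", model="mozilla/docornot")

# Accepts a local path, a URL, or a PIL.Image.
predictions = classifier("example_page.png")
print(predictions)  # e.g. [{"label": "document", "score": 0.99}, {"label": "picture", "score": 0.01}]
```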
# Training procedure
The source code used to train this model is available at https://github.com/mozilla/docornot.
## Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
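
A hedged sketch of how these settings map onto `transformers.TrainingArguments`; the output directory and anything not listed above are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",           # assumption: matches the model name in this card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # Adam betas/epsilon below are the Transformers defaults, matching the values listed above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```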
## Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0 | 1.0 | 1600 | 0.0000 | 1.0 |
## Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2