---
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- AnyModal/flickr30k
- openbmb/RLAIF-V-Dataset
base_model:
- HuggingFaceTB/SmolLM2-135M-Instruct
- facebook/dino-vitb16
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- vqa
- vlm
---
<p align="center">
<img src="https://github.com/mkturkcan/femtovlm/blob/main/assets/logo.png?raw=true" width="180" />
</p>
<h1 align="center">
<p>mehmetkeremturkcan/FemtoVLM-DINO</p>
</h1>
<h3 align="center">
<p>FemtoVLM: Tiniest Vision Language Models</p>
</h3>
FemtoVLM is the smallest visual question answering/captioning model in the world. It accepts image and text inputs and produces text outputs, and is designed for efficiency. FemtoVLM can answer questions about images and describe visual content. Its lightweight architecture makes it suitable for on-device applications while maintaining strong performance.
FemtoVLM comes in four sizes: 116M (femto), 143M (tiny), 160M (base), and 225M (dino). All models are trained for image captioning and question answering in real-world contexts. FemtoVLM cannot perform optical character recognition (OCR), multi-turn question answering, or scientific question answering.
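The card tags the model as `image-text-to-text` for `transformers`. The sketch below shows one common way such a checkpoint is queried through the generic pipeline API; it is a minimal sketch under assumptions (the image path and prompt are placeholders, and loading may require custom model code), not the supported entry point. The `femtovlm_inference.py` script in the Test section below is the supported path.
```python
# Minimal sketch, assuming the checkpoint loads through the generic
# transformers image-text-to-text pipeline. Placeholders are marked below.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="mehmetkeremturkcan/FemtoVLM-DINO",
    trust_remote_code=True,  # assumption: custom model code may be required
)

out = pipe(
    images="path/to/image.jpg",            # placeholder image path
    text="What is shown in this picture?",  # placeholder prompt
)
print(out)
```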
## Setup
```bash
pip install git+https://github.com/facebookresearch/schedule_free.git
pip install peft
git clone https://github.com/mkturkcan/seers.git
cd seers/seers/
git clone https://huggingface.co./mehmetkeremturkcan/FemtoVLM-DINO
```
## Test
From the `seers/seers` folder, run:
```bash
python femtovlm_inference.py
```
## Train
The [seers](https://github.com/mkturkcan/seers) training code is public. From the same folder, run:
```bash
python femtovlm_train.py
```
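The Setup step installs `schedule_free` and `peft`. The sketch below illustrates the typical usage pattern of a schedule-free optimizer in a training loop; it is an illustrative assumption about the training recipe, not an excerpt of `femtovlm_train.py`.
```python
# Illustrative sketch of the schedule-free optimizer pattern.
# Assumption: femtovlm_train.py may configure the optimizer differently.
import torch
import schedulefree

model = torch.nn.Linear(16, 16)  # stand-in module, not the FemtoVLM model
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

optimizer.train()  # schedule-free optimizers must be put in train mode
for _ in range(10):
    optimizer.zero_grad()
    x = torch.randn(4, 16)
    loss = model(x).pow(2).mean()  # dummy loss for illustration
    loss.backward()
    optimizer.step()

optimizer.eval()  # switch to eval mode before validation or saving
```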