---
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- AnyModal/flickr30k
- openbmb/RLAIF-V-Dataset
base_model:
- WinKawaks/vit-small-patch16-224
- HuggingFaceTB/SmolLM2-135M-Instruct
library_name: transformers
pipeline_tag: image-text-to-text
tags:
- vqa
- vlm
---
# mehmetkeremturkcan/FemtoVLM-Small

## FemtoVLM: Tiniest Vision Language Models
FemtoVLM is the smallest visual question answering/captioning model in the world. It accepts image and text inputs and produces text outputs. Designed for efficiency, FemtoVLM can answer questions about images and describe visual content, and its lightweight architecture makes it suitable for on-device applications while maintaining strong performance.

FemtoVLM comes in four sizes: 116M (femto), 143M (tiny), 160M (base), and 225M (dino). All models are trained for image captioning and question answering in real-world contexts. FemtoVLM cannot perform optical character recognition (OCR), multi-turn question answering, or scientific question answering.

## Setup

```bash
pip install git+https://github.com/facebookresearch/schedule_free.git
pip install peft
git clone https://github.com/mkturkcan/seers.git
cd seers/seers/
git clone https://huggingface.co/mehmetkeremturkcan/FemtoVLM-Small
```

## Test

Run, in the `seers/seers` folder:

```bash
python femtovlm_inference.py
```

## Train

The [seers](https://github.com/mkturkcan/seers) training code is public! Run:

```bash
python femtovlm_train.py
```
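
## Inference via transformers (sketch)

Because the card metadata declares `library_name: transformers` and `pipeline_tag: image-text-to-text`, the checkpoint may also be queryable through the generic `transformers` pipeline. The snippet below is a minimal sketch under that assumption; `femtovlm_inference.py` from the Test section remains the reference entry point, and the image path and prompt are placeholders, not files shipped with this repository.

```python
# Hedged sketch: assumes the checkpoint loads through the generic
# transformers "image-text-to-text" pipeline declared in the card metadata.
# If it does not, run femtovlm_inference.py from the seers repository instead.
from transformers import pipeline
from PIL import Image

pipe = pipeline(
    "image-text-to-text",
    model="mehmetkeremturkcan/FemtoVLM-Small",
    trust_remote_code=True,  # in case the repo ships custom modeling code
)

# "example.jpg" and the prompt are placeholders for your own inputs.
image = Image.open("example.jpg")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

outputs = pipe(text=messages, max_new_tokens=64, return_full_text=False)
print(outputs[0]["generated_text"])
```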