Shakti-VLMs: Scalable Vision-Language Models for Enterprise AI
Abstract
We introduce Shakti VLM, a family of vision-language models at 1B and 4B parameter scales designed to address data-efficiency challenges in multimodal learning. While recent VLMs achieve strong performance through extensive training data, Shakti models leverage architectural innovations to attain competitive results with fewer tokens. Key advancements include QK-Normalization for attention stability, hybrid normalization techniques, and enhanced positional encoding. A three-stage training strategy further optimizes learning efficiency. Evaluations show that Shakti-VLM-1B and Shakti-VLM-4B excel in document understanding, visual reasoning, OCR extraction, and general multimodal reasoning. Our results highlight that high performance can be achieved through model design and training strategy rather than sheer data volume, making Shakti an efficient solution for enterprise-scale multimodal tasks.
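The abstract credits part of the training stability to QK-Normalization. For readers unfamiliar with the technique, below is a minimal PyTorch sketch of QK-normalized multi-head attention. The layer sizes, the use of LayerNorm over each head's query/key vectors, and the class name `QKNormAttention` are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of QK-Normalization in self-attention (illustrative only;
# dimensions, norm choice, and hyperparameters are assumptions, not Shakti's
# exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F


class QKNormAttention(nn.Module):
    """Multi-head self-attention where queries and keys are normalized per head
    before the dot product, bounding the attention-logit magnitude."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim, bias=False)
        # The "QK-Norm" step: normalize queries and keys along the head dimension.
        # LayerNorm is one common choice; the exact norm used is an assumption here.
        self.q_norm = nn.LayerNorm(self.head_dim)
        self.k_norm = nn.LayerNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, tokens, head_dim).
        q = q.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        # Normalizing q and k keeps the q.k logits from growing unboundedly.
        q, k = self.q_norm(q), self.k_norm(k)
        out = F.scaled_dot_product_attention(q, k, v)
        return self.proj(out.transpose(1, 2).reshape(b, t, d))


if __name__ == "__main__":
    x = torch.randn(2, 16, 512)
    print(QKNormAttention()(x).shape)  # -> torch.Size([2, 16, 512])
```

Capping the scale of the query-key dot products in this way is a commonly used remedy for attention-logit blow-up, which is why QK-Norm is generally associated with more stable training at larger scales and learning rates.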