Please Quantize MiniMaxAI/MiniMax-VL-01
#1 opened by chilegazelle
Dear colleague,
First of all, I sincerely appreciate your work; your contributions to AI optimization are truly valuable.
Would it be possible to quantize MiniMaxAI/MiniMax-VL-01? A quantized version would help accelerate the development of VL models by making inference more accessible, which could increase interest in them.
If feasible, it would be great to have multiple quantized versions optimized for different hardware, precision levels, and use cases.
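To illustrate the kind of setup I have in mind, here is a minimal sketch of an on-the-fly 4-bit load through transformers and bitsandbytes. It assumes the repo's custom modeling code works with `trust_remote_code=True` and with `BitsAndBytesConfig`; the `AutoModelForCausalLM`/`AutoProcessor` entry points are my guess at how the model is exposed, not something I have verified for MiniMax-VL-01.

```python
# Hypothetical sketch: 4-bit NF4 loading of MiniMax-VL-01 via bitsandbytes.
# Assumes the model's custom code is compatible with transformers quantization hooks.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights for a smaller memory footprint
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for a quality/speed balance
)

model = AutoModelForCausalLM.from_pretrained(
    "MiniMaxAI/MiniMax-VL-01",
    quantization_config=quant_config,
    device_map="auto",        # spread layers across available devices
    trust_remote_code=True,   # the repo ships its own modeling code
)
processor = AutoProcessor.from_pretrained(
    "MiniMaxAI/MiniMax-VL-01", trust_remote_code=True
)
```

Prebuilt quantized checkpoints (e.g. different bit widths for GPU vs. CPU inference) would of course be even more convenient than this on-the-fly approach.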
If this is something you could take on, it would be greatly appreciated. Thank you in advance!