Any idea when the evaluation data will be available for this model? I would like to know how the performance differs from the unquantized version of the model.
#2 opened 25 days ago by jahhs0n
Any chance your team is working on a 4-bit Llama-3.2-90B-Vision-Instruct-quantized.w4a16 version?
#1 opened about 2 months ago by mrhendrey