How to Fine Tune this model? (#37, opened about 21 hours ago by seyyedaliayati)
Request: DOI (#35, opened 5 days ago by shusuke131)
Chat template is not consistent with documentation? (#34, 3 replies, opened 7 days ago by ejschwartz)
Suggestion (#33, opened 9 days ago by Hassan883)
Request: DOI (#32, opened 9 days ago by salezr76)
Request: DOI (#31, opened 12 days ago by jvirico)
Request: DOI (#30, opened 14 days ago by ShinDC)
GPTQ 4Bit Llama 3.2-3B-Instruct with 100% Accuracy recovery (#29, opened 15 days ago by Qubitium)
Request: DOI (#28, opened 17 days ago by coming123)
Request: DOI (#27, opened 18 days ago by HesabAlaki4)
Request: DOI (#24, 2 replies, opened 30 days ago by lafesalomette)
Token indices sequence length is longer than the specified maximum sequence length for this model (269923 > 131072) (#23, 2 replies, opened about 1 month ago by wasimsafdar)
what is the chat template? (#22, opened about 1 month ago by Blannikus)
Request: DOI (#21, opened about 1 month ago by Leavesprior)
1B and 3B are nice. Please make also an 8B so we can compare it to gemini flash 8B. (#20, 3 replies, opened about 1 month ago by ZeroWw)
Issues w/ downloading the model: llama download: error: Model meta-llama/Llama-3.2-3B-Instruct not found (#19, 1 reply, opened about 1 month ago by Minimak88)
Unable to Load Model (#18, opened about 1 month ago by NeMesIss)
Extra "assistant\n\n" at the beginning of the output (#17, 1 reply, opened about 1 month ago by alimah)
Adding Evaluation Results (#16, opened about 2 months ago by Weyaxi)
roger036 (#15, opened about 2 months ago by Taylormann4u)
Giving contextual messages to sagemaker instance in python (#14, 2 replies, opened about 2 months ago by bperin42)
MMLU-Pro benchmark (#13, 5 replies, opened about 2 months ago by kth8)
Cannot download the model with huggingface-cli (#11, 4 replies, opened about 2 months ago by lulmer)
Thanks. This is astonishingly good for its size. (#9, 1 reply, opened about 2 months ago by phil111)