model weight · 1 · #16 opened 2 days ago by kdaeho27
Output length? · 1 · #15 opened 4 days ago by Brabuslevrai
Are there plans to release the lightning attention kernel? · 2 · #14 opened 4 days ago by bongchoi
In modeling_minimax_text_01.py, the attention mask is not passed correctly to MiniMaxText01FlashAttention2::forward() · 1 · #13 opened 5 days ago by sszymczyk
Request: Add vLLM Support for This Model · 1 · #12 opened 7 days ago by kira
Can you provide an FP8 version? · 2 · #11 opened 7 days ago by xjpang85
Smaller versions (like 20B and 14B) · 1 · #10 opened 7 days ago by win10
Please fire your human evaluators · 8 · #6 opened 9 days ago by ChuckMcSneed
Consider making MiniMax Text free software, as the license is proprietary · 4 · #2 opened 9 days ago by JLouisBiz
Requesting Support for GGUF Quantization of MiniMax-Text-01 through llama.cpp · 4 · #1 opened 9 days ago by Doctor-Chad-PhD