Ben Li
bash99
AI & ML interests
AIGC, stable diffusion, chatgpt
Organizations
None yet
bash99's activity
llama.cpp support
9
#1 opened about 1 month ago by ayyylol
Can anyone use vLLM, or any other engine that supports dynamic batching, to run this with more than 1 GPU?
1
#1 opened about 2 months ago by bash99
How do I get token_weights from ONNX inference?
#9 opened 3 months ago by bash99
Reranking of rhetorical questions seems to perform poorly
1
#5 opened 3 months ago by bash99
The match ranking can be wrong in certain special cases
2
#5 opened 3 months ago by bash99
It would be best to provide the Instruction template and examples; also, is the underlying model llama2-base or llama2-chat?
2
#3 opened over 1 year ago by bash99
4-bit GPTQ
2
#1 opened over 1 year ago by flashvenom
Could you provide the sha256sum of the 3 converted pytorch bin files?
#30 opened over 1 year ago by bash99
If this was quantized with bitsandbytes NF4, can it be used directly as a base for continued training with QLoRA?
#1 opened over 1 year ago by bash99
What is the difference between this Plus version and the one without Plus?
2
#1 opened over 1 year ago by bash99
Gibberish on 'latest', with recent qwopqwop GPTQ/triton and ooba?
7
#2 opened over 1 year ago by andysalerno
shell script to convert Ziya to GGML
4
#1 opened over 1 year ago by jiangyong007
VRAM usage
11
#3 opened over 1 year ago by Juuuuu