Reason for model evaluation failure
Hello! Thank you for your contribution.
Today, I checked my model, kyujinpy/PlatYi-34B-Llama-Q-v2.
However, the above model failed evaluation.
What's the problem?
Could you let me know?
Thank you.
Hi!
Did you follow the steps in the FAQ and the submission instructions? For example, your model must be uploaded in the safetensors format.
Then, can you point us to the request file?
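For reference, here is a minimal sketch of re-saving a .bin checkpoint in safetensors, assuming a standard transformers model (the repo id and output directory are placeholders, not names from this thread):

```python
# Minimal sketch: re-save a .bin checkpoint with safetensors serialization.
# The repo id and output directory are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-model"  # hypothetical repo id

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# safe_serialization=True writes model.safetensors shards instead of pytorch_model.bin
model.save_pretrained("converted-model", safe_serialization=True)
tokenizer.save_pretrained("converted-model")

# The converted weights can then be pushed back to the Hub, e.g. with
# model.push_to_hub(model_id, safe_serialization=True)
```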
@clefourrier
My model also failed; I wonder how and why this happened.
https://huggingface.co./datasets/open-llm-leaderboard/requests/commit/f634da9a8af4c8ae7bffefa403c382792d31fba0
@Q-bert this job was cancelled, I relaunched it - please open a dedicated issue next time, it's easier for us to keep track that way :)
@clefourrier
Thank you for your comment.
However, when I checked the completed models, some of them are in the .bin format, for example kyujinpy/PlatYi-34B-200k-Q-FastChat.
Is there another problem?
Hi! We recommend providing models in the safetensors format to make sure evaluations go well. :)
I'd suggest that you
- follow all the steps in the About page (converting your model to safetensors, making sure you can load it with the AutoModel classes, ...; a quick check is sketched below), and
- point us to the request file of your model so we can analyse the results (like @Q-bert did above).
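In case it helps, here is a minimal sketch of that pre-submission check, assuming a standard transformers causal LM. The repo id is a placeholder, and filtering the requests dataset by model name is only an assumption about how to find the file, not an official API:

```python
# Minimal pre-submission sanity check (repo id is a placeholder).
from huggingface_hub import list_repo_files
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-model"  # hypothetical repo id

# 1. The repo should contain safetensors weights.
files = list_repo_files(model_id)
assert any(f.endswith(".safetensors") for f in files), "no .safetensors weights found"

# 2. The model should load with the Auto* classes.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
print("loaded", model.config.model_type, "with", f"{model.num_parameters():,}", "parameters")

# 3. Look for the corresponding request file in the leaderboard's requests dataset
#    (name-based filtering is an assumption about how the files are organised).
request_files = list_repo_files("open-llm-leaderboard/requests", repo_type="dataset")
print([f for f in request_files if model_id.split("/")[-1] in f])
```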
@clefourrier
Thank you for the guidelines!
I finished the above steps for kyujinpy/PlatYi-34B-Llama-Q-v2:
- Converted the weights to safetensors
- Double-checked that the model loads with code
- Request file: Version2 Request_file
I also updated the other model, kyujinpy/PlatYi-34B-Llama-Q-v3, in the same way.
Could you also check this model?
Request file: Version3 Request_file
Thank you very much!
Hi!
Thanks a lot for updating this issue :)
I relaunched both of your models; the backend had not managed to download them in their prior version.