Request for updating EXAONE 3.5 configuration
Hello,
We found that our recently released EXAONE 3.5 models have been added to the GPU-Poor LLM Arena!
We appreciate the quick integration and your interest in our models.
However, we realized that the template for EXAONE 3.5 uploaded to the Ollama library is incorrect.
We have observed performance degradation with this incorrect template, which can lead to an unfair comparison.
We have requested an updated version of the Ollama Modelfile on GitHub Issues.
Could you update the Modelfile configuration for EXAONE 3.5 in the Arena, or re-pull the EXAONE 3.5 models once the Ollama library entry has been updated?
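Once the library entry is updated, re-pulling the model and inspecting the template it ships with should confirm the fix. A minimal sketch against Ollama's local REST API, assuming the default server at localhost:11434 and the exaone3.5:7.8b tag (adjust to whichever size the Arena runs):

```python
import requests

OLLAMA = "http://localhost:11434"   # default local Ollama server (assumption)
MODEL = "exaone3.5:7.8b"            # example tag; use the size the Arena actually serves

# Re-pull the model so the local copy picks up the updated Modelfile from the library.
pull = requests.post(f"{OLLAMA}/api/pull", json={"model": MODEL, "stream": False})
pull.raise_for_status()

# Query the model's metadata and print the chat template it will use for inference.
show = requests.post(f"{OLLAMA}/api/show", json={"model": MODEL})
show.raise_for_status()
print(show.json().get("template", "<no template field returned>"))
```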
Hello, and thank you for reaching out. The original models were pulled from your GGUF repo, but now that they've been updated in the Ollama model library, I'm switching to them. Congratulations on your release!
@k-mktr Thank you for your quick response!
For fairness, could you reset the EXAONE 3.5 results and start fresh?
I've just reset the leaderboard results for the EXAONE 3.5 models so everyone has a fair chance. I'm pleased that the strength of the community has attracted your interest.
Wishing you all the best! I really hope you continue to embrace openness and accessibility in the AI world. Great energy, cheers!
Thank you for resetting the leaderboard! We greatly appreciate your efforts to ensure fair competition within the community.
We remain committed to contributing to the open AI ecosystem and working towards making AI accessible to everyone. Looking forward to continued collaboration!