ONEKQ AI

company

AI & ML interests

Benchmark, Code Generation, LLM

Recent Activity

onekq  updated a Space 1 day ago
onekq-ai/WebApp1K-models-leaderboard
onekq  updated a collection 3 days ago
R1 Reproduction Works

onekq-ai's activity

onekq 
posted an update 2 days ago
QwQ-32B is amazing!

It ranks below o1-preview, but beats DeepSeek v3 and all Gemini models.
onekq-ai/WebApp1K-models-leaderboard

Now that we have such a powerful model that fits on a single GPU, can someone finetune a web app model to push the SOTA on my leaderboard? 🤗
onekq 
posted an update 3 days ago
From my own experience, these are the pain points of reasoning model adoption.

(1) Expensive and, even worse, slow, due to excessive token output. You need to 10x your max output length to avoid clipping the thinking process (see the sketch below).

(2) You have to filter out the thinking tokens to retrieve the final output. For mature workflows, this means broad or deep refactoring.
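Concretely, here is a minimal sketch of both points, assuming an OpenAI-compatible endpoint and a model that wraps its reasoning in <think>...</think> tags; the base URL, API key, model id, and prompt below are placeholders, and details vary by provider.

```python
import re
from openai import OpenAI  # assumes the openai Python client is installed

# Hypothetical OpenAI-compatible MaaS endpoint; base_url, api_key, and model
# id are placeholders, not real values.
client = OpenAI(base_url="https://your-maas-provider.example/v1", api_key="YOUR_KEY")

response = client.chat.completions.create(
    model="qwq-32b",  # placeholder model id
    messages=[{"role": "user", "content": "Implement a signup endpoint."}],
    # Pain point (1): the thinking process inflates the output, so the token
    # budget must be raised far beyond what a non-reasoning model would need.
    max_tokens=16384,
)

raw = response.choices[0].message.content

# Pain point (2): strip the thinking tokens to recover only the final answer.
# Assumes the model emits its reasoning inside <think>...</think> tags.
final_answer = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
print(final_answer)
```

Pipelines that treat the response body as the answer need this kind of post-processing bolted on everywhere a reasoning model is swapped in, which is the refactoring cost mentioned in (2).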

First-party vendors (open-source and proprietary) ease these pain points by tweaking how their own models are served, but the problems are exposed when the reasoning model is hosted by third-party MaaS providers.