ning

syslot

AI & ML interests

None yet

Recent Activity

liked a model 4 days ago: deepseek-ai/DeepSeek-R1
liked a model 4 days ago: deepseek-ai/DeepSeek-R1-Zero

Organizations

None yet

syslot's activity

New activity in deepseek-ai/DeepSeek-R1-Zero 4 days ago

Waiting! (4)
#1 opened 4 days ago by syslot
New activity in lmms-lab/llava-next-110b 9 months ago

llama3-70b (1)
#2 opened 9 months ago by KnutJaegersberg
New activity in Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 9 months ago

amazing! (2)
#2 opened 9 months ago by syslot
reacted to WizardLM's post with 🚀 9 months ago
🔥🔥🔥 Introducing WizardLM-2!

📙Release Blog: https://wizardlm.github.io/WizardLM2
✅Model Weights: microsoft/wizardlm-661d403f71e6c8257dbd598a
🐦Twitter: https://twitter.com/WizardLM_AI/status/1779899325868589372

We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models, which have improved performance on complex chat, multilingual, reasoning, and agent tasks. The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.

WizardLM-2 8x22B is our most advanced model and the best open-source LLM in our internal evaluation on highly complex tasks. WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice at its size. WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models 10x its size.
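
For anyone who wants to try the released checkpoints, here is a minimal sketch of loading one of them with the Hugging Face transformers library. The repo id and the prompt format used below are assumptions for illustration (check the model cards in the collection linked above); they are not details stated in this post.

```python
# Minimal sketch, not an official snippet from the WizardLM team.
# The repo id below is assumed; see the linked collection for the exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/WizardLM-2-7B"  # assumption: the smallest model of the family

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights across available GPUs/CPU
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# Vicuna-style prompt format (assumed; confirm against the model card).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "USER: What makes WizardLM-2 7B fast? ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```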

🤗 WizardLM-2 Capabilities:

1. MT-Bench (Figure 1)
WizardLM-2 8x22B demonstrates highly competitive performance compared to the most advanced proprietary models such as GPT-4-Turbo and Claude-3. Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B to 70B model scales.

2. Human Preferences Evaluation (Figure 2)
In this human preference evaluation, WizardLM-2's capabilities come very close to cutting-edge proprietary models such as GPT-4-1106-preview and are significantly ahead of all other open-source models.

🔍Method Overview:
As naturally occurring human-generated data becomes increasingly exhausted by LLM training, we believe that data carefully created by AI, and models supervised step by step by AI, will be the sole path towards more powerful AI.

Over the past year, we built a fully AI-powered synthetic training system (as shown in Figure 3).
New activity in HyperGAI/HPT 10 months ago

HPT Pro (1)
#2 opened 10 months ago by syslot