Victor Mustar (victor)

AI & ML interests

Building the UX of this website

Recent Activity

liked a model about 9 hours ago
answerdotai/ModernBERT-base
reacted to Kseniase's post with 👍 about 23 hours ago

Articles

Organizations

Hugging Face, Google, Safetensors, Competitions, 21 RNN, Spaces-explorers, Text Generation Inference, Spaces Examples, CVPR Demo Track, Hugging Chat, Webhooks Explorers (BETA), lora concepts library, Huggingface Projects, Scanned Tokens, hf admins, Hugging Face OSS Metrics, Stable Diffusion Dreambooth Concepts Library, Core ML Projects, temp-org, Blog-explorers, Mustarz, Open LLM Leaderboard, Enterprise Explorers, The Collectionists, ZeroGPU Explorers, Hugging Face Tools, TstOrg141, Stable Video benchmark, Social Post Explorers, Dev Mode Explorers, LLHF, SLLHF

victor's activity

reacted to Kseniase's post with 👍 about 23 hours ago
**15 Agentic Systems and Frameworks of 2024**

This year, we started our “AI Agents and Agentic Workflows” series (https://www.turingpost.com/t/AI-Agents) to explore everything about AI agents step by step: all the vocabulary, how they work, and how to build them.
The huge interest in this series and the large number of studies conducted on agents showed that this was one of the most popular and important themes of the year. In 2025, agents will most likely reach new heights, and we will be covering that for you. Now, let’s review the agentic systems that have emerged this year.

Here is a list of 15 agentic systems and frameworks of 2024:

1. GUI Agents: A Survey (2412.13501)

2. Large Language Models Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level (2411.03562)

3. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery (2408.06292)

4. MALT: Improving Reasoning with Multi-Agent LLM Training (2412.01928)

5. Agent S: An Open Agentic Framework that Uses Computers Like a Human (2410.08164)

6. Automated Design of Agentic Systems (2408.08435)

7. AgentInstruct: Toward Generative Teaching with Agentic Flows (2407.03502)

8. AgentStore: Scalable Integration of Heterogeneous Agents As Specialized Generalist Computer Assistant (2410.18603)

9. WALL-E: World Alignment by Rule Learning Improves World Model-based LLM Agents (2410.07484)

10. Generative Agent Simulations of 1,000 People (2411.10109)

11. DynaSaur: Large Language Agents Beyond Predefined Actions (2411.01747)

12. PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking (2410.12375)

13. Generative World Explorer (2411.11844)

14. Bel Esprit: Multi-Agent Framework for Building AI Model Pipelines (2412.14684)

15. AutoKaggle: A Multi-Agent Framework for Autonomous Data Science Competitions (2410.20424)

Thanks for reading Turing Post!
Subscribe to receive new posts straight into your inbox -> https://www.turingpost.com/subscribe
reacted to nroggendorff's post with 👀 1 day ago
Can we please do something about this? It makes everything I do so much harder, and because my local machine is so terrible, I am forced to test in production. This makes debugging so difficult.
nroggendorff/system-exit

cc @victor
reacted to anton-l's post with 🔥 5 days ago
Introducing 📐𝐅𝐢𝐧𝐞𝐌𝐚𝐭𝐡: the best public math pre-training dataset with 50B+ tokens!
HuggingFaceTB/finemath

Math remains challenging for LLMs, and by training on FineMath we see considerable gains over other math datasets, especially on GSM8K and MATH.

We build the dataset by:
🛠️ carefully extracting math data from Common Crawl;
🔎 iteratively filtering and recalling high quality math pages using a classifier trained on synthetic annotations to identify math reasoning and deduction.
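
A rough sketch of that filtering loop is shown below; `score_math_quality` is a toy stand-in for the classifier trained on synthetic annotations, not the actual model:

```python
# Minimal sketch of the iterative filtering idea: score each crawled page and
# keep only pages above a threshold. `score_math_quality` is a placeholder
# heuristic; the real pipeline uses a trained classifier.
def score_math_quality(page: str) -> float:
    math_markers = ("=", "\\frac", "theorem", "proof", "equation")
    return sum(marker in page.lower() for marker in math_markers) / len(math_markers)

def keep_math_pages(pages: list[str], threshold: float = 0.4) -> list[str]:
    return [page for page in pages if score_math_quality(page) >= threshold]

pages = [
    "Proof: for integers a and b, a^2 - b^2 = (a - b)(a + b).",
    "Top 10 travel destinations for 2024.",
]
print(keep_math_pages(pages))  # keeps only the math-heavy page
```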

We conducted a series of ablations comparing the performance of Llama-3.2-3B-Base after continued pre-training on FineMath and observe notable gains compared to the baseline model and other public math datasets.

We hope this helps advance the performance of LLMs on math and reasoning! 🚀
We’re also releasing all the ablation models as well as the evaluation code.

HuggingFaceTB/finemath-6763fb8f71b6439b653482c2
reacted to m-ric's post with 🔥 5 days ago
After 6 years, BERT, the workhorse of encoder models, finally gets a replacement: 𝗪𝗲𝗹𝗰𝗼𝗺𝗲 𝗠𝗼𝗱𝗲𝗿𝗻𝗕𝗘𝗥𝗧! 🤗

We talk a lot about ✨Generative AI✨, meaning "Decoder version of the Transformers architecture", but this is only one way to build LLMs: encoder models, which turn a sentence into a vector, are perhaps even more widely used in industry than generative models.

The workhorse for this category has been BERT since its release in 2018 (that's prehistory for LLMs).

It's not a fancy 100B-parameter supermodel (just a few hundred million), but it's an excellent workhorse, kind of the Honda Civic of LLMs.

Many applications use BERT-family models - the top models in this category accumulate millions of downloads on the Hub.

➡️ Now a collaboration between Answer.AI and LightOn just introduced BERT's replacement: ModernBERT.

𝗧𝗟;𝗗𝗥:
🏛️ Architecture changes:
⇒ First, standard modernizations:
- Rotary positional embeddings (RoPE)
- Replace GeLU with GeGLU,
- Use Flash Attention 2
✨ The team also introduced innovative techniques like alternating attention instead of full attention, and sequence packing to get rid of padding overhead.

🥇 As a result, the model tops the game of encoder models:
It beats the previous standard, DeBERTaV3, with 1/5th the memory footprint, and runs 4x faster!

Read the blog post 👉 https://huggingface.co./blog/modernbert
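
As a quick usage sketch (assuming a transformers release recent enough to ship ModernBERT support), the base checkpoint can be queried like any other masked language model:

```python
# Minimal sketch: masked-LM inference with ModernBERT.
# Assumes a recent transformers version that includes ModernBERT support.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="answerdotai/ModernBERT-base")

# ModernBERT keeps the familiar [MASK] token of BERT-style encoders.
for pred in fill_mask("Paris is the [MASK] of France."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```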
reacted to akhaliq's post with 🔥 5 days ago
Google drops Gemini 2.0 Flash Thinking

a new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans (with its thoughts visible), can solve complex problems at Flash speeds, and more

now available in anychat, try it out: akhaliq/anychat
reacted to Lewdiculous's post 5 days ago
reacted to KnutJaegersberg's post with 👍 5 days ago
reacted to FranckAbgrall's post with 🔥 5 days ago
🆕 It should now be easier to identify discussions or pull requests where repository owners are participating on HF, let us know if that helps 💬🤗
reacted to csabakecskemeti's post with 🚀 6 days ago
replied to neph1's post 6 days ago
reacted to neph1's post with 🔥 6 days ago
For those interested in game development, I've released an experimental finetune of Qwen2.5-Coder for Unity.

neph1/Qwen2.5-Coder-7B-Instruct-Unity

It uses a mix of open-source datasets, plus one made specifically for this purpose (also open source) with multiple responses.

Also thinking about making a code completion model, or one to have more architectural discussions with.
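
For anyone curious to try it, here is a rough generation sketch; it assumes the finetune keeps Qwen2.5's standard chat template, and the generation settings are illustrative rather than the author's:

```python
# Rough sketch: prompting the Unity finetune via the standard Qwen chat template.
# Assumes the finetune keeps Qwen2.5's chat template; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neph1/Qwen2.5-Coder-7B-Instruct-Unity"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user",
             "content": "Write a Unity C# MonoBehaviour that rotates an object around the Y axis."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```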
reacted to merve's post with 🔥 6 days ago
Aya by Cohere For AI can now see! 👀

The C4AI community has built Maya 8B, a new open-source multilingual VLM built on SigLIP and Aya 8B 🌱 It works in 8 languages! 🗣️

The authors extend the LLaVA dataset using Aya's translation capabilities, with 558k examples!
Try it here: kkr5155/maya_demo

Dataset maya-multimodal/pretrain

Model maya-multimodal/maya 👏
kudos @nahidalam and team
reacted to nataliaElv's post with ❤️ 7 days ago
If you are still wondering how the FineWeb2 annotations are done, how to follow the guidelines or how Argilla works, this is your video!

I go through a few samples of the FineWeb2 dataset and classify them based on their educational content. Check it out!

https://www.youtube.com/watch?v=_-ORB4WAVGU
reacted to davidberenstein1957's post with 🔥 7 days ago
Introducing the Synthetic Data Generator, a user-friendly application that takes a no-code approach to creating custom datasets with Large Language Models (LLMs). The best part: A simple step-by-step process, making dataset creation a non-technical breeze, allowing anyone to create datasets and models in minutes and without any code.

Blog: https://huggingface.co./blog/synthetic-data-generator
Space: argilla/synthetic-data-generator
reacted to lewtun's post with 🔥🔥 7 days ago
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"

We're open sourcing the full recipe and sharing a detailed blog post.

In our blog post we cover:

📈 Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test-time.

🎄 Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets.

🧭 Search and Learn: A lightweight toolkit for implementing search strategies with LLMs and built for speed with vLLM

Here are the links:

- Blog post: HuggingFaceH4/blogpost-scaling-test-time-compute

- Code: https://github.com/huggingface/search-and-learn

Enjoy!
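
To make the core idea concrete, here is a minimal best-of-N sketch: sample several candidate solutions and keep the one a reward model scores highest. This is not the DVTS algorithm from the post, and both helpers below are placeholders rather than real models:

```python
# Minimal best-of-N sketch of trading extra test-time compute for accuracy.
# Not the DVTS algorithm from the post; both helpers are placeholders.
import random

def generate_candidates(prompt: str, n: int = 8) -> list[str]:
    # Placeholder for sampling n completions from a small LLM (e.g. with vLLM).
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def score_solution(prompt: str, solution: str) -> float:
    # Placeholder for a (process) reward model scoring the solution.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda s: score_solution(prompt, s))

print(best_of_n("What is the sum of the first 100 positive integers?"))
```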
replied to their post 7 days ago
reacted to lorraine2's post with 🚀 7 days ago
🦙New NVIDIA paper: LLaMA-Mesh 🦙

We enable large language models to generate and understand 3D meshes by representing them as text and fine-tuning. This unifies the 3D and text modalities in a single model and preserves language abilities, unlocking conversational 3D creation with mesh understanding.

🔎 Project Page: https://research.nvidia.com/labs/toronto-ai/LLaMA-Mesh/
🕹️ Interactive Demo: Zhengyi/LLaMA-Mesh (courtesy of HuggingFace and Gradio)
📖 Full Paper: https://arxiv.org/abs/2411.09595
👨‍💻Code: https://github.com/nv-tlabs/LLaMa-Mesh
💾 Model Checkpoint: Zhengyi/LLaMA-Mesh
🧩 Blender Addon: https://github.com/huggingface/meshgen (courtesy of Dylan Ebert)
🎥 5-min Overview Video: https://youtu.be/eZNazN-1lPo?si=-idQa5aaceVw0Bbj (courtesy of AI Papers Academy)
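
To illustrate the mesh-as-text idea, here is a tiny sketch that parses an OBJ-style snippet, of the kind a language model could emit, back into vertices and faces; the exact text format and coordinate quantization used by LLaMA-Mesh may differ:

```python
# Sketch of the "mesh as plain text" idea: OBJ-style lines parsed into
# vertices and faces. LLaMA-Mesh's exact format/quantization may differ.
mesh_text = """\
v 0 0 0
v 1 0 0
v 0 1 0
v 0 0 1
f 1 2 3
f 1 2 4
f 1 3 4
f 2 3 4
"""

vertices, faces = [], []
for line in mesh_text.splitlines():
    parts = line.split()
    if not parts:
        continue
    if parts[0] == "v":            # vertex: x y z coordinates
        vertices.append(tuple(float(x) for x in parts[1:4]))
    elif parts[0] == "f":          # face: 1-based vertex indices
        faces.append(tuple(int(x) for x in parts[1:4]))

print(f"{len(vertices)} vertices, {len(faces)} faces")  # a tetrahedron
```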
reacted to wenhuach's post with 👀 7 days ago
reacted to MohamedRashad's post with 🔥 8 days ago