Bluesky Community

AI & ML interests: Tools for Bluesky πŸ¦‹

Recent Activity

davanstrien posted an update about 5 hours ago
πŸ“Š Introducing "Hugging Face Dataset Spotlight" πŸ“Š

I'm excited to share the first episode of our AI-generated podcast series focusing on nice datasets from the Hugging Face Hub!

This first episode explores mathematical reasoning datasets:

- SynthLabsAI/Big-Math-RL-Verified: Over 250,000 rigorously verified problems spanning multiple difficulty levels and mathematical domains
- open-r1/OpenR1-Math-220k: 220,000 math problems with multiple reasoning traces, verified for accuracy using Math Verify and Llama-3.3-70B models.
- facebook/natural_reasoning: 1.1 million general reasoning questions carefully deduplicated and decontaminated from existing benchmarks, showing superior scaling effects when training models like Llama3.1-8B-Instruct.

Plus a bonus segment on bespokelabs/bespoke-manim!

https://www.youtube.com/watch?v=-TgmRq45tW4
davanstrien posted an update 1 day ago
Quick POC: Turn a Hugging Face dataset card into a short podcast introducing the dataset using all open models.

I think I'm the only weirdo who would enjoy listening to something like this though πŸ˜…

Here is an example for eth-nlped/stepverify
davanstrien posted an update 8 days ago
Hacked together a way to log trl GRPO training completions to a πŸ€— dataset repo. This allows you to:

- Track rewards from multiple reward functions
- Treat the completion and rewards from training as a "proper" dataset and do EDA
- Share results for open science

The implementation is super hacky, but I'm curious if people would find this useful.

To push completions to the Hub, you just need two extra parameters:

log_completions=True
log_completions_hub_repo='your-username/repo-name'

Example dataset: davanstrien/test-logs
Colab: https://colab.research.google.com/drive/1wzBFPVthRYYTp-mEYlznLg_e_0Za1M3g
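As a rough sketch of what the logging hack does (the helper names below are illustrative, not the actual trl internals): gather each completion and its per-reward-function scores into flat records, then push them to the Hub as a dataset.

```python
def completions_to_rows(prompts, completions, rewards_by_fn):
    """Flatten prompts, completions, and per-reward-function scores
    into one record per completion, ready to become a dataset."""
    rows = []
    for i, (prompt, completion) in enumerate(zip(prompts, completions)):
        row = {"prompt": prompt, "completion": completion}
        for name, scores in rewards_by_fn.items():
            row[f"reward_{name}"] = scores[i]
        rows.append(row)
    return rows


def push_rows(rows, repo_id):
    """Push logged rows to a Hub dataset repo (requires `datasets`)."""
    # Lazy import so the pure-Python helper above runs without `datasets`.
    from datasets import Dataset
    Dataset.from_list(rows).push_to_hub(repo_id)


rows = completions_to_rows(
    prompts=["2+2=?"],
    completions=["4"],
    rewards_by_fn={"format": [1.0], "correctness": [1.0]},
)
print(rows[0]["reward_format"])  # → 1.0
```

Treating the result as a plain dataset is what makes downstream EDA and sharing trivial.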

clem posted an update 10 days ago
What are the best organizations to follow on @huggingface?

Off the top of my head:
- Deepseek (35,000 followers): https://huggingface.co./deepseek-ai
- Meta Llama (27,000 followers): https://huggingface.co./meta-llama
- Black Forest Labs (11,000 followers): https://huggingface.co./black-forest-labs
- OpenAI (5,000 followers): https://huggingface.co./openai
- Nvidia (16,000 followers): https://huggingface.co./nvidia
- Microsoft (9,000 followers): https://huggingface.co./microsoft
- AllenAI (2,000 followers): https://huggingface.co./allenai
- Mistral (5,000 followers): https://huggingface.co./mistralai
- XAI (600 followers): https://huggingface.co./xai-org
- Stability AI (16,000 followers): https://huggingface.co./stabilityai
- Qwen (16,000 followers): https://huggingface.co./Qwen
- GoogleAI (8,000 followers): https://huggingface.co./google
- Unsloth (3,000 followers): https://huggingface.co./unsloth
- Bria AI (4,000 followers): https://huggingface.co./briaai
- NousResearch (1,300 followers): https://huggingface.co./NousResearch

Bonus, the agent course org with 17,000 followers: https://huggingface.co./agents-course
clem posted an update 11 days ago
We've crossed 1B+ tokens routed through our inference provider partners on HF, a feature we released just a few days ago.

We're just getting started, of course, but early users seem to like it, and we're always happy to partner with cool startups in the ecosystem.

Have you been using any integration and how can we make it better?

https://huggingface.co./blog/inference-providers
davanstrien posted an update 14 days ago
How do you make 1M+ Hugging Face models & datasets more discoverable?

davanstrien/Smol-Hub-tldr!

I fine-tuned HuggingFaceTB/SmolLM2-360M to generate one-line summaries from a model or dataset README.

Its own self-description?
"A model for generating concise summaries of model & dataset cards from the Hugging Face Hub"

The goal? Make it easier to find the right models and datasets for your specific needs. It's already powering a semantic search for datasets Space.

It's still a WIP, but thanks to @loubnabnl, @anton-l, @eliebak et al. for cooking up such a nice base model for fine-tuning small, efficient models for specific domains and tasks. πŸ™
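A minimal sketch of using the summarizer (the generation setup is an assumption; check the model card for the exact prompt template the fine-tune expects):

```python
def trim_card(card_text, max_chars=3000):
    """Trim a long README so it fits a 360M model's context window."""
    return card_text[:max_chars].rstrip()


def tldr(card_text):
    """Generate a one-line summary of a model/dataset card.

    Lazy import so the helper above stays runnable without transformers;
    max_new_tokens and the plain-text prompt are illustrative choices."""
    from transformers import pipeline
    generator = pipeline("text-generation", model="davanstrien/Smol-Hub-tldr")
    return generator(trim_card(card_text), max_new_tokens=64)[0]["generated_text"]


print(trim_card("# My dataset\n" + "x" * 5000, max_chars=20))
```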
cfahlgren1 posted an update about 1 month ago
If you haven't seen yet, we just released Inference Providers πŸ”€

> 4 new serverless inference providers on the Hub 🀯
> Use your HF API key or personal key with all providers πŸ”‘
> Chat with Deepseek R1, V3, and more on HF Hub πŸ‹
> We support Sambanova, TogetherAI, Replicate, and Fal.ai πŸ’ͺ

Best of all, we don't charge any markup on top of the provider 🫰 Have you tried it out yet? HF Pro accounts get $2 of free usage for the provider inference.
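A hedged sketch of routing a request through a provider with `huggingface_hub` (requires a recent version; the provider and model names here are illustrative):

```python
def chat_via_provider(prompt, provider="together", token=None):
    """Route a chat completion through a serverless inference provider.

    Uses huggingface_hub's OpenAI-compatible chat interface; works with
    an HF token or the provider's own key."""
    from huggingface_hub import InferenceClient
    client = InferenceClient(provider=provider, api_key=token)
    response = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-R1",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```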
clem posted an update about 1 month ago
AI is not a zero-sum game. Open-source AI is the tide that lifts all boats!
davanstrien posted an update about 1 month ago
🌍 Big step for multilingual AI data!

The Hugging Face community has rated educational content in languages spoken by 1.6 billion people! New additions:
β€’ Japanese
β€’ Italian
β€’ Old High German

Learn more and contribute: https://huggingface.co./blog/davanstrien/fineweb2-community

These ratings can help enhance training data for major world languages.
nataliaElv posted an update about 1 month ago
New chapter in the Hugging Face NLP course! πŸ€— πŸš€

We've added a new chapter about the very basics of Argilla to the Hugging Face NLP course. Learn how to set up an Argilla instance, load & annotate datasets, and export them to the Hub.

Any feedback for improvements welcome!

https://huggingface.co./learn/nlp-course/chapter10
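The basic workflow the chapter covers looks roughly like this (API per Argilla 2.x; the field, question, and dataset names are placeholders):

```python
def setup_annotation_dataset(api_url, api_key):
    """Create a minimal Argilla dataset ready for annotation.

    Lazy import so this sketch loads without a running Argilla instance;
    labels and names below are illustrative, not from the course."""
    import argilla as rg
    client = rg.Argilla(api_url=api_url, api_key=api_key)
    settings = rg.Settings(
        fields=[rg.TextField(name="text")],
        questions=[rg.LabelQuestion(name="sentiment", labels=["positive", "negative"])],
    )
    dataset = rg.Dataset(name="nlp-course-demo", settings=settings, client=client)
    dataset.create()
    dataset.records.log([{"text": "Argilla makes annotation easy!"}])
    return dataset
```

Annotated records can then be exported back to the Hub, which is the round trip the chapter walks through.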
davanstrien posted an update about 2 months ago
Introducing davanstrien/scandi-fine-web-cleaner, the first model trained on FineWeb-C community annotations!

FineWeb2 is a massive multilingual dataset for pre-training language models. Like any web-scale dataset, it contains low-quality content. How can we improve it?

Over the past months, an amazing community of 400+ annotators has been labelling content quality (using Argilla) across 23 languages through the FineWeb-C initiative.

Today, I'm happy to share the first classifier trained on this data.

πŸ” What we've built:

- A lightweight classifier that efficiently removes low-quality content
- 90%+ precision demonstrated on Danish & Swedish
- Can process the 43M+ documents in Danish FineWeb2 with minimal compute

🌍 Why this matters: The approach can be reproduced for any of the 23 languages in FineWeb-C (data-is-better-together/fineweb-c). We can improve training data quality at scale without massive compute resources by starting with community annotations and training small, efficient classifiers.

Want to build a classifier for your language? Check out the full blog post with code examples and implementation details: https://danielvanstrien.xyz/posts/2025/FineWeb-c/scandinavian-content-filtering-fineweb.html
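A sketch of using the classifier to filter documents (the label name and score semantics are assumptions; check the model card for the actual output schema):

```python
def keep(pred, threshold=0.9, good_label="clean"):
    """Decision rule: keep a document when the good-quality score clears the bar."""
    return pred["label"] == good_label and pred["score"] >= threshold


def filter_documents(docs, threshold=0.9):
    """Run the published classifier over docs and keep only clean ones.

    Lazy import so the decision rule above runs without transformers."""
    from transformers import pipeline
    clf = pipeline("text-classification", model="davanstrien/scandi-fine-web-cleaner")
    return [doc for doc, pred in zip(docs, clf(docs, truncation=True)) if keep(pred, threshold)]
```

Because the model is lightweight, the same loop scales to the 43M+ documents mentioned above with modest compute.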
davanstrien posted an update about 2 months ago
The data-is-better-together/fineweb-c dataset is growing!

This week a few more languages have got 1,000 annotations for the educational quality of data from HuggingFaceFW/fineweb-2.

Why should you care?

The quality of pre-training data can have a big impact on the performance of downstream language models trained on that data (HuggingFaceFW/blogpost-fineweb-v1).

Being able to filter by educational quality is one way of improving the quality of the data you use for training an LLM. Very importantly, this approach can also reduce the amount of data needed for pre-training.

Why not use an LLM?

LLMs can be used to annotate educational quality for a subset of data. This data can then be used to train a smaller encoder-only model to label the full dataset. However, this may not work well for languages outside of English. This is where fineweb-c (community) comes in.

The community is annotating the educational quality of fineweb2 data. Currently 114 languages have some annotations. These annotations will enable a number of things:

- Evaluate whether an LLM can label the educational quality for texts in that language well
- Directly be used for training quality classifiers
- Help discover other rules and heuristics for refining fineweb2 further for different languages.

This week, annotations were completed for the following languages:

Swedish thanks to: @Lauler @AntonVic @ohallstrom @bjarlestam @menbom @Ekgren @apsod

Ukrainian thanks to: @hannayukhymenko @robinhad @realPivo @RabotiahovDmytro @reciprocate

Assamese thanks to: @moyoor97 @Arpanjyoti @nawaf-helmi123 @pahigogoi1 @aelhence @kishorekashyap

Want to learn more: https://huggingface.co./blog/davanstrien/fineweb2-community

Contribute yourself here: data-is-better-together/fineweb-c
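Before training a classifier on these annotations, multiple annotators' ratings for the same text typically need to be resolved. A minimal sketch (the label values and the tie-handling policy are illustrative, not the project's actual aggregation scheme):

```python
from collections import Counter


def majority_label(annotations):
    """Resolve several annotators' ratings into one label.

    Returns None for an empty list or a tie, leaving those cases
    for adjudication."""
    if not annotations:
        return None
    counts = Counter(annotations).most_common(2)
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]


print(majority_label(["educational", "educational", "not educational"]))
# → educational
```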
cfahlgren1 posted an update about 2 months ago
Wow, I just added Langfuse tracing to the Deepseek Artifacts app and it's really nice πŸ”₯

It allows me to visualize and track more things along with the cfahlgren1/react-code-instructions dataset.

It was just added as a one click Docker Space template, so it's super easy to self host πŸ’ͺ
BrigitteTousi posted an update about 2 months ago
Community fine-tuned models are more carbon efficient than the models they are derived from! πŸ₯³πŸŒΏ

@alozowski @clefourrier @SaylorTwift @albertvillanova evaluated COβ‚‚ emissions associated with model inference for over 3000 models on the Open LLM Leaderboard. Interesting trends and new insights emerged...πŸ‘€

Blog Post: https://huggingface.co./blog/leaderboard-emissions-analysis

Leaderboard: open-llm-leaderboard/open_llm_leaderboard