FeeL (Feedback Loop)

non-profit

AI & ML interests

Human Feedback and LLMs

Recent Activity

feel-fl's activity

burtenshaw posted an update 1 day ago
I made a real-time voice agent with FastRTC, smolagents, and Hugging Face Inference Providers. Check it out in this Space:

🔗 burtenshaw/coworking_agent
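
A minimal sketch of how those pieces can fit together, assuming FastRTC’s `Stream`/`ReplyOnPause` pattern with its built-in STT/TTS helpers and a smolagents `CodeAgent` served through Hugging Face inference; the exact method names and wiring here are my best guess, not the code behind the Space:

```python
# Hypothetical voice-agent sketch: speech in -> agent -> speech out (not the Space's code)
from fastrtc import ReplyOnPause, Stream, get_stt_model, get_tts_model
from smolagents import CodeAgent, HfApiModel

stt_model = get_stt_model()   # bundled speech-to-text helper
tts_model = get_tts_model()   # bundled text-to-speech helper
agent = CodeAgent(tools=[], model=HfApiModel())  # LLM via Hugging Face inference providers

def respond(audio):
    """Transcribe the user's speech, run the agent, and stream speech back."""
    text = stt_model.stt(audio)                        # assumed STT method name
    answer = agent.run(text)
    yield from tts_model.stream_tts_sync(str(answer))  # assumed streaming TTS method name

# ReplyOnPause calls `respond` each time the speaker pauses
stream = Stream(ReplyOnPause(respond), modality="audio", mode="send-receive")
stream.ui.launch()
```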
burtenshaw posted an update 2 days ago
Now the Hugging Face agents course is getting real, with frameworks like smolagents, LlamaIndex, and LangChain!

🔗 Follow the org for updates https://huggingface.co./agents-course

This week we are releasing the first framework unit in the course, and it’s on smolagents. This is what the unit covers (a minimal example follows at the end of this post):

- why you should use smolagents rather than another library
- how to build agents that use code
- how to build multi-agent systems
- how to use vision language models for browser use

The team has been working flat out on this for a few weeks, led by @sergiopaniego and supported by smolagents author @m-ric.
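
For a flavour of the code-first style the unit teaches, here is a minimal smolagents sketch (not taken from the course); `CodeAgent`, `DuckDuckGoSearchTool`, and `HfApiModel` are standard smolagents names, but check the current docs in case they have changed:

```python
# Minimal smolagents example: a code-writing agent with one web-search tool
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

model = HfApiModel()  # defaults to a hosted model via Hugging Face inference
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

# The agent plans in Python: it writes and executes snippets that call its tools
agent.run("How many seconds are there in a leap year?")
```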
burtenshaw posted an update 9 days ago
AGENTS + FINE-TUNING! This week Hugging Face Learn has a whole pathway on fine-tuning for agentic applications. You can follow these two courses to level up your agent game beyond prompting:

1️⃣ New Supervised Fine-tuning unit in the NLP Course https://huggingface.co./learn/nlp-course/en/chapter11/1
2️⃣ New Fine-tuning for agents bonus module in the Agents Course https://huggingface.co./learn/agents-course/bonus-unit1/introduction

Fine-tuning squeezes more out of your model for your specific use case than any prompt can. A rough sketch of what an SFT run looks like follows below.
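
As a rough illustration of the workflow these units walk through, here is a hedged TRL `SFTTrainer` sketch; the dataset path and hyperparameters are placeholders, not values from the course:

```python
# Hedged supervised fine-tuning sketch with TRL; dataset name is a placeholder
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("your-org/your-chat-dataset", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # a small base model keeps the run cheap
    train_dataset=dataset,
    args=SFTConfig(output_dir="smollm2-sft", max_seq_length=1024),
)
trainer.train()
```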
burtenshaw posted an update 11 days ago
NEW COURSE! We’re cooking hard on Hugging Face courses, and it’s not just agents. The NLP course is getting the same treatment with a new chapter on Supervised Fine-Tuning!

👉 Follow to get more updates https://huggingface.co./nlp-course

The new SFT chapter will guide you through these topics:

1️⃣ Chat Templates: Master the art of structuring AI conversations for consistent and helpful responses.

2️⃣ Supervised Fine-Tuning (SFT): Learn the core techniques to adapt pre-trained models to your specific outputs.

3️⃣ Low Rank Adaptation (LoRA): Discover efficient fine-tuning methods that save memory and resources.

4️⃣ Evaluation: Measure your model's performance and ensure top-notch results.

This is the first update in a series, so follow along if you’re upskilling in AI. A small chat-template example is sketched below.
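
A tiny illustration of topic 1 (chat templates) using `transformers`; the checkpoint is an arbitrary example, not necessarily the one the chapter uses:

```python
# Render a conversation with a model's own chat template (checkpoint is illustrative)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain LoRA in one sentence."},
]

# Inserts the model's special tokens and appends the assistant generation prompt
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```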
burtenshaw posted an update 14 days ago
Hey, I’m Ben and I work at Hugging Face.

Right now, I’m focusing on educational stuff and getting loads of new people to build open AI models using free and open source tools.

I’ve made a collection of some of the tools I’m building and using for teaching. Stuff like quizzes, code challenges, and certificates.

burtenshaw/tools-for-learning-ai-6797453caae193052d3638e2
davidberenstein1957 posted an update 16 days ago
🚀 Find banger tools for your smolagents!

I created the Tools gallery, which makes tools specifically developed by/for smolagents searchable and visible. This will help with:
- inspiration
- best practices
- finding cool tools

Space: davidberenstein1957/smolagents-and-tools
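
For context, a smolagents tool is usually just a typed, documented Python function wrapped with the `@tool` decorator; this is a generic sketch, not one of the gallery’s tools:

```python
# Generic example of what a smolagents tool looks like (not taken from the gallery)
from smolagents import tool

@tool
def word_count(text: str) -> int:
    """Count the number of whitespace-separated words in a text.

    Args:
        text: The text to analyze.
    """
    return len(text.split())
```

The type hints and the Args section of the docstring are what the agent relies on to decide when and how to call the tool.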
burtenshaw posted an update 18 days ago
The Hugging Face agents course is finally out!

👉 https://huggingface.co./agents-course

The first unit of the course sets you up with all the fundamentals to become a pro at agents; a plain-Python sketch of the core loop follows the list below.

- What's an AI Agent?
- What are LLMs?
- Messages and Special Tokens
- Understanding AI Agents through the Thought-Action-Observation Cycle
- Thought, Internal Reasoning and the Re-Act Approach
- Actions, Enabling the Agent to Engage with Its Environment
- Observe, Integrating Feedback to Reflect and Adapt
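
The Thought-Action-Observation cycle above boils down to a simple loop. Here is a plain-Python paraphrase of the idea (not course code; `llm.plan` is a hypothetical helper that parses the model’s ReAct-style reply):

```python
# Plain-Python paraphrase of the Thought-Action-Observation cycle (illustrative only)
def run_agent(llm, tools, task, max_steps=5):
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Thought: ask the model what to do next (`llm.plan` is hypothetical)
        thought, action, args = llm.plan("\n".join(history))
        history.append(f"Thought: {thought}")
        if action == "final_answer":           # the agent decides it is done
            return args["answer"]
        # Action: call the chosen tool with the model's arguments
        observation = tools[action](**args)
        # Observation: feed the result back so the next thought can use it
        history.append(f"Observation: {observation}")
    return "Stopped after reaching the step limit."
```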
davidberenstein1957 posted an update 18 days ago
burtenshaw posted an update 21 days ago
SmolLM2 paper is out! 😊

😍 Why do I love it? Because it facilitates teaching and learning!

Over the past few months I’ve engaged with (no joke) thousands of students using SmolLM.

- People have run inference with, fine-tuned, aligned, and evaluated this smol model.
- People used their own machines as well as free tools like Colab, Kaggle, and Spaces.
- People tackled use cases in their job, for fun, in their own language, and with their friends.

Upvote the paper: SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model (2502.02737)
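
If you want to try SmolLM2 yourself, here is a hedged quick-start with `transformers`; the checkpoint name matches the published instruct model, but verify it on the Hub:

```python
# Quick SmolLM2 inference sketch; double-check the checkpoint name on the Hub
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

messages = [{"role": "user", "content": "Give me one fun fact about small language models."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```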
davidberenstein1957 posted an update 22 days ago
davidberenstein1957 posted an update 23 days ago
davidberenstein1957 posted an update 24 days ago
davidberenstein1957 posted an update 29 days ago
davidberenstein1957 posted an update about 1 month ago
burtenshaw posted an update about 1 month ago
A manic few days in open-source AI, with game-changing developments all over the place. Here’s a round-up of the resources:

- The science team at @huggingface reproduced and open-sourced DeepSeek R1: https://github.com/huggingface/open-r1
- @qwen released a series of models with a 1-million-token context window! https://qwenlm.github.io/blog/qwen2.5-1m/
- SmolVLM got even smaller, with completely new variants at 256M and 500M parameters: https://huggingface.co./blog/smolervlm

There’s so much you could do with these developments, especially combining them into agentic applications or fine-tuning them on your use case. See the SmolVLM sketch below for one starting point.
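
As one example, a hedged sketch of trying the 256M SmolVLM variant with `transformers`; the checkpoint name and image URL are assumptions to verify before running:

```python
# Hedged SmolVLM sketch; checkpoint name and image URL are placeholders to verify
import requests
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

checkpoint = "HuggingFaceTB/SmolVLM-256M-Instruct"  # verify the exact name on the Hub
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVision2Seq.from_pretrained(checkpoint)

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)  # placeholder URL
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]

prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```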
burtenshaw posted an update about 1 month ago
1350
Hey 👋

I’m helping out on some research to learn more about the AI community. If you want to join the conversation, head over to the discussion I started on the most influential model since BERT:

OSAIResearchCommunity/README#2