Lucain Pouget
Wauplin's activity
TL;DR:
- public storage is free and, barring blatant abuse, unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1 TB if you have a paid account, 100 GB otherwise)
docs: https://huggingface.co./docs/hub/storage-limits
We continuously optimize our infrastructure to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥
cc: @reach-vb @pierric @victor and the HF team
Here's help: We're launching our Year in Review on what actually matters, starting today!
Fresh content dropping daily until year end. Come along for the ride - first piece out now with @clem's predictions for 2025.
Think of it as your end-of-year AI chocolate calendar.
Kudos to @BrigitteTousi @clefourrier @Wauplin @thomwolf for making it happen. We teamed up with aiworld.eu for awesome visualizations to make this digestible; their team is a charm to work with.
Check it out: huggingface/open-source-ai-year-in-review-2024
1,000 spots available, first-come, first-served, with some surprises during the stream!
You can register and add to your calendar here: https://streamyard.com/watch/JS2jHsUP3NDM
We've just released 𝚑𝚞𝚐𝚐𝚒𝚗𝚐𝚏𝚊𝚌𝚎_𝚑𝚞𝚋 v0.25.0 and it's packed with powerful new features and improvements!
✨ 𝗧𝗼𝗽 𝗛𝗶𝗴𝗵𝗹𝗶𝗴𝗵𝘁𝘀:
• 📁 𝗨𝗽𝗹𝗼𝗮𝗱 𝗹𝗮𝗿𝗴𝗲 𝗳𝗼𝗹𝗱𝗲𝗿𝘀 with ease using huggingface-cli upload-large-folder. Designed for your massive models and datasets. Much recommended if you struggle to upload your Llama 70B fine-tuned model 🤡
• 🔎 𝗦𝗲𝗮𝗿𝗰𝗵 𝗔𝗣𝗜: new search filters (gated status, inference status) and the ability to fetch trending scores.
• ⚡ 𝗜𝗻𝗳𝗲𝗿𝗲𝗻𝗰𝗲𝗖𝗹𝗶𝗲𝗻𝘁: major improvements simplifying chat completions and handling async tasks better.
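As a quick sketch of the new command (the repo name and local path below are hypothetical), uploading a big checkpoint folder looks like this:

```shell
# Chunked, resumable upload of a large local folder to a (hypothetical) model repo.
# The command can safely be re-run: files that are already uploaded are skipped.
huggingface-cli upload-large-folder my-username/my-llama-70b-finetune ./checkpoints --repo-type=model
```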
We’ve also introduced tons of bug fixes and quality-of-life improvements - thanks to the awesome contributions from our community! 💪
💡 Check out the release notes: Wauplin/huggingface_hub#8
Want to try it out? Install the release with:
pip install huggingface_hub==0.25.0
Thanks for the ping @clem !
This documentation is more recent regarding HfApi (the Python client). You have methods like model_info and list_models to get details about models (and similarly for datasets and Spaces). In addition to the package reference, we also have a small guide on how to use it.
Otherwise, if you are interested in the HTTP endpoint to build your requests yourself, here is the API reference.
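If you do go the raw-HTTP route, here is a minimal standard-library sketch of building (not sending) a model-info request against the Hub's /api/models endpoint; the model id is just an example:

```python
import urllib.parse
import urllib.request
from typing import Optional

def build_model_info_request(model_id: str, token: Optional[str] = None) -> urllib.request.Request:
    # The Hub exposes model metadata at /api/models/{model_id}.
    url = "https://huggingface.co./api/models/" + urllib.parse.quote(model_id, safe="/")
    headers = {}
    if token:
        # Gated/private models require an authorization header.
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, headers=headers)

req = build_model_info_request("gpt2")
print(req.full_url)  # https://huggingface.co./api/models/gpt2
# urllib.request.urlopen(req) would then fetch the JSON metadata.
```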
It depends on what you want to do. We have full documentation here: https://huggingface.co./docs/huggingface_hub/index. You can find many guides there showing how to use the library.
Are you referring to Agents in transformers? If yes, here are the docs about it: https://huggingface.co./docs/transformers/agents. Regarding tools, TGI supports them, and so does the InferenceClient from huggingface_hub, meaning you can pass tools to chat_completion (see the "Example using tools:" section in https://huggingface.co./docs/huggingface_hub/v0.24.0/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion). These tools parameters were already available in huggingface_hub 0.23.x.
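Tools are passed as a list of JSON-schema function descriptions. A minimal sketch of that shape (the get_weather function itself is a made-up example):

```python
# A hypothetical tool definition in the JSON-schema format that
# chat_completion(..., tools=[...]) accepts.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# It is then passed along with the messages, e.g.:
#   client.chat_completion(messages=messages, tools=[get_weather_tool])
print(get_weather_tool["function"]["name"])  # get_weather
```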
Hope this answers your question :)
Exciting updates include:
⚡ InferenceClient is now a drop-in replacement for OpenAI's chat completion!
✨ Support for response_format, adapter_id, truncate, and more in InferenceClient
💾 Serialization module with a save_torch_model helper that handles shared layers, sharding, naming conventions, and safe serialization. Basically a condensed version of logic scattered across safetensors, transformers, and accelerate
📁 Optimized HfFileSystem to avoid getting rate limited when browsing HuggingFaceFW/fineweb
🔨 HfApi & CLI improvements: prevent empty commits, create repo inside resource group, webhooks API, more options in the Search API, etc.
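To give an idea of what the sharding part does, here is an illustrative, deliberately naive sketch of splitting tensors into size-bounded shards; this is not huggingface_hub's implementation (the real helper also deduplicates shared layers and writes safetensors files):

```python
# Naive size-based sharding: group tensors into shards of at most
# max_shard_size bytes, in insertion order.
def shard_by_size(tensor_sizes: dict, max_shard_size: int) -> list:
    shards, current, current_size = [], {}, 0
    for name, size in tensor_sizes.items():
        # Start a new shard when adding this tensor would exceed the limit.
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = {}, 0
        current[name] = size
        current_size += size
    if current:
        shards.append(current)
    return shards

sizes = {"embed": 6, "layer1": 4, "layer2": 4, "head": 5}
print(shard_by_size(sizes, max_shard_size=10))
# → [{'embed': 6, 'layer1': 4}, {'layer2': 4, 'head': 5}]
```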
Check out the full release notes for more details:
Wauplin/huggingface_hub#7
👀
I asked 8 LLMs to "Tell me a bedtime story about bears and waffles."
Claude 3.5 Sonnet and GPT-4o gave me the worst stories: no conflict, no moral, zero creativity.
In contrast, smaller models were quite creative and wrote stories involving talking waffle trees and bears ostracized for their love of waffles.
Here you can see a comparison between Claude 3.5 Sonnet and NeuralDaredevil-8B-abliterated. They both start with a family of bears but quickly diverge in terms of personality, conflict, etc.
I mapped it to the hero's journey to have some kind of framework. Prompt engineering can definitely help here, but it's still disappointing that the larger models don't create better stories right off the bat.
Do you know why smaller models outperform the frontier models here?
Mostly that it's better integrated with HF services. If you pass a model_id, you can use the serverless Inference API without setting a base_url. No need to pass an api_key either if you are already logged in (via the $HF_TOKEN environment variable or huggingface-cli login). If you are an Inference Endpoints user (i.e. deploying a model using https://ui.endpoints.huggingface.co/), you get a seamless integration to make requests to it with the URL already configured. Finally, you are assured that the client will stay up to date with the latest updates in TGI / Inference API / Inference Endpoints.
Why use the InferenceClient?
🔄 Seamless transition: keep your existing code structure while leveraging LLMs hosted on the Hugging Face Hub.
🤗 Direct integration: easily launch a model to run inference using our Inference Endpoint service.
🚀 Stay Updated: always be in sync with the latest Text-Generation-Inference (TGI) updates.
More details in https://huggingface.co./docs/huggingface_hub/main/en/guides/inference#openai-compatibility
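"Drop-in" means the request shape is the standard OpenAI chat-completion one. A minimal sketch of that payload (the model id and parameters are just examples); switching providers is then just swapping the client object that receives it:

```python
# The OpenAI-style chat-completion payload that InferenceClient's
# client.chat.completions.create(...) accepts unchanged.
payload = {
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",  # example model id
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is deep learning?"},
    ],
    "max_tokens": 100,
    "stream": False,
}

# The same dict works with either client:
#   openai.OpenAI(...).chat.completions.create(**payload)
#   huggingface_hub.InferenceClient(...).chat.completions.create(**payload)
print(payload["messages"][0]["role"])  # system
```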
I'm Alex, I'm 16, and I've been an intern at Hugging Face for a little over a week. I've already learned a lot about using and prompting LLMs. With @victor as my mentor, I've just finished a Space that analyzes your feelings by prompting an LLM chat model. The aim is to extend it so that it can categorize Hugging Face posts.
alex-abb/LLM_Feeling_Analyzer
We’re embracing a larger mission, becoming part of a brilliant and kind team and a shared vision about the future of AI.
Over the past year, we've been collaborating with Hugging Face on countless projects: partnering on the launch of Docker Spaces, empowering the community to clean Alpaca translations into Spanish and other languages, launching argilla/notus-7b-v1 building on Zephyr's learnings, the Data is Better Together initiative with hundreds of community contributors, and releasing argilla/OpenHermesPreferences, one of the largest open preference-tuning datasets.
After more than 2,000 Slack messages and over 60 people collaborating for over a year, it already felt like we were part of the same team, pushing in the same direction. After a week of the smoothest transition you can imagine, we’re now the same team.
To those of you who’ve been following us, this won’t be a huge surprise, but it will be a big deal in the coming months. This acquisition means we’ll double down on empowering the community to build and collaborate on high quality datasets, we’ll bring full support for multimodal datasets, and we’ll be in a better place to collaborate with the Open Source AI community. For enterprises, this means that the Enterprise Hub will unlock highly requested features like single sign-on and integration with Inference Endpoints.
As a founder, I am proud of the Argilla team. We're now part of something bigger, a larger team with the same values, culture, and goals. Grateful to have shared this journey with my beloved co-founders Paco and Amélie.
Finally, huge thanks to the Chief Llama Officer @osanseviero for sparking this and being such a great partner during the acquisition process.
Would love to answer any questions you have so feel free to add them below!