id | status | inserted_at | updated_at | _server_id | title | authors | filename | content | content_class.responses | content_class.responses.users | content_class.responses.status | content_class.suggestion | content_class.suggestion.agent | content_class.suggestion.score
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
aa6cd850-deb8-434a-8e48-3b9b83f59850 | completed | 2025-01-16T03:08:37.719000 | 2025-01-16T13:36:03.943000 | 04931499-a195-4dbe-8e88-3615fb461334 | Data is better together: Enabling communities to collectively build better datasets together using Argilla and Hugging Face Spaces | davanstrien, dvilasuero | community-datasets.md | Recently, Argilla and Hugging Face [launched](https://huggingface.co./posts/dvilasuero/680660181190026) `Data is Better Together`, an experiment to collectively build a preference dataset of prompt rankings. In a few days, we had:
- 350 community contributors labeling data
- Over 11,000 prompt ratings
See the [progress dashboard](https://huggingface.co./spaces/DIBT/prompt-collective-dashboard) for the latest stats!
This resulted in the release of [`10k_prompts_ranked`](https://huggingface.co./datasets/DIBT/10k_prompts_ranked), a dataset consisting of 10,000 prompts with user ratings for the quality of the prompt. We want to enable many more projects like this!
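If you want to explore the released data yourself, it loads like any other Hub dataset; a minimal sketch using the `datasets` library (the split name is assumed to be the default `train`):
```python
from datasets import load_dataset

# load the community-built prompt ranking dataset from the Hub
ds = load_dataset("DIBT/10k_prompts_ranked", split="train")
print(ds[0])
```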
In this post, we’ll discuss why we think it’s essential for the community to collaborate on building datasets and share an invitation to join the first cohort of communities [Argilla](https://argilla.io/) and Hugging Face will support to develop better datasets together!
## Data remains essential for better models
Data continues to be essential for better models: we see continued evidence from [published research](https://huggingface.co./papers/2402.05123), open-source [experiments](https://argilla.io/blog/notus7b/), and the open-source community that better data can lead to better models.
<p align="center">
<img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/17480bfba418032faec37da19e9c678ac9eeed43/blog/community-datasets/why-model-better.png" alt="Screenshot of datasets in the Hugging Face Hub"><br>
<em>The question.</em>
</p>
<p align="center">
<img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/17480bfba418032faec37da19e9c678ac9eeed43/blog/community-datasets/data-is-the-answer.png" alt="Screenshot of datasets in the Hugging Face Hub"><br>
<em>A frequent answer.</em>
</p>
## Why build datasets collectively?
Data is vital for machine learning, but many languages, domains, and tasks still lack high-quality datasets for training, evaluation, and benchmarking. The community already shares thousands of models, datasets, and demos every day via the Hugging Face Hub, and this collaboration has allowed the open-access AI community to create amazing things. Enabling the community to build datasets collectively will unlock unique opportunities to build the next generation of datasets, and with them the next generation of models.
Empowering the community to build and improve datasets collectively will allow people to:
- Contribute to the development of Open Source ML with no ML or programming skills required.
- Create chat datasets for a particular language.
- Develop benchmark datasets for a specific domain.
- Create preference datasets from a diverse range of participants.
- Build datasets for a particular task.
- Build completely new types of datasets collectively as a community.
Importantly, we believe that building datasets collectively will allow the community to build better datasets and will let people who don't know how to code contribute to the development of AI.
### Making it easy for people to contribute
One of the challenges to many previous efforts to build AI datasets collectively was setting up an efficient annotation task. Argilla is an open-source tool that can help create datasets for LLMs and smaller specialised task-specific models. Hugging Face Spaces is a platform for building and hosting machine learning demos and applications. Recently, Argilla added support for authentication via a Hugging Face account for Argilla instances hosted on Spaces. This means it now takes seconds for users to start contributing to an annotation task.
<figure class="image table text-center m-0 w-full">
<video
style="max-width: 90%; margin: auto;"
autoplay loop muted playsinline
src="https://video.twimg.com/ext_tw_video/1757693043619004416/pu/vid/avc1/1068x720/wh3DyY0nMcRJaMki.mp4?tag=12"
></video>
</figure>
Now that we have stress-tested this new workflow while creating the [`10k_prompts_ranked`](https://huggingface.co./datasets/DIBT/10k_prompts_ranked) dataset, we want to support the community in launching new collective dataset efforts.
## Join our first cohort of communities who want to build better datasets together!
We’re very excited about the possibilities unlocked by this new, simple flow for hosting annotation tasks. To support the community in building better datasets, Hugging Face and Argilla invite interested people and communities to join our initial cohort of community dataset builders.
People joining this cohort will:
- Be supported in creating an Argilla Space with Hugging Face authentication. Hugging Face will grant free persistent storage and improved CPU spaces for participants.
- Have their communications and promotion of the initiative amplified by Argilla and Hugging Face.
- Be invited to join a cohort community channel.
Our goal is to support the community in building better datasets together. We are open to many ideas and want to help participating communities as much as possible.
## What types of projects are we looking for?
We are open to supporting many types of projects, especially those of existing open-source communities. We are particularly interested in projects focusing on building datasets for languages, domains, and tasks that are currently underrepresented in the open-source community. Our only current limitation is that we're primarily focused on text-based datasets. If you have a very cool idea for multimodal datasets, we'd love to hear from you, but we may not be able to support you in this first cohort.
Tasks can either be fully open or open to members of a particular Hugging Face Hub organization.
If you want to be part of the first cohort, please join us in the `#data-is-better-together` channel in the [Hugging Face Discord](http://hf.co/join/discord) and let us know what you want to build together!
We are looking forward to building better datasets together with you! | [
[
"llm",
"data",
"community",
"tools"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"data",
"community",
"tools",
"llm"
] | null | null |
3d7d7a2d-491b-449f-ba3b-510a45e1ead4 | completed | 2025-01-16T03:08:37.719000 | 2025-01-19T19:00:17.290000 | fdfa8e88-1b3f-43c9-905a-510602a63ee3 | A Security Review of Gradio 5 | abidlabs, pngwn | gradio-5-security.md | **We audited Gradio 5 so that your machine learning apps are safe!**
In the past few years, [Gradio](https://github.com/gradio-app/gradio/) (>6 million monthly PyPI installs) has become the default way to build machine learning web applications in Python. In just a few lines of code, you can create a user interface for an image generation app, a chatbot, or any other kind of ML app _and_ share it with others using Gradio’s built-in share links or [Hugging Face Spaces](https://huggingface.co./spaces).
```py
import gradio as gr
def generate(seed, prompt):
...
return image
# gr.Interface creates a web-based UI
gr.Interface(
generate,
inputs=[gr.Slider(), gr.Textbox()],
outputs=[gr.Image()]
).launch(share=True)
# share=True generates a public link instantly
```
Our goal with Gradio is to allow developers to build web applications that work great out-of-the-box for machine learning use cases. This has meant letting you, as a developer, easily build applications that:
* Scale easily to large numbers of concurrent users
* Are accessible to as many users as possible
* Provide consistent UI, UX, and theming
* Work reliably across a large number of browsers and devices
...even if you are not an expert in scaling, accessibility, or UI/UX!
Now, we’re adding **web security** to this list. We asked [Trail of Bits](https://www.trailofbits.com/), a well-known cybersecurity company, to conduct an independent audit of Gradio. The security issues they discovered were all fixed ahead of the Gradio 5 release.
This means that machine learning apps that **you build** with Gradio 5 **will follow best practices when it comes to web security** without any significant changes to your code.
## Why a security audit?
In the past couple of years, the Gradio team has worked with the community to patch security vulnerabilities as they are discovered. But as Gradio’s popularity has grown (with >470,000 Gradio apps currently on Hugging Face Spaces), ensuring security has become even more important.
So in Gradio 5, we decided to take a different approach -- do a _preemptive_ security audit of the Gradio codebase so that your machine learning applications built with Gradio 5 are safe by default.
We asked Trail of Bits to conduct an independent and comprehensive audit of Gradio. Their team of experts in AI and Application Security identified security risks in the Gradio codebase in 4 general scenarios:
* Gradio apps running locally
* Gradio apps deployed on Hugging Face Spaces or other servers
* Gradio apps shared with built-in share links
* Supply chain vulnerabilities originating from the Gradio CI pipeline
![](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/gradio-5/security-scenarios.png)
Then, we worked closely with Trail of Bits to identify mitigation strategies for each of these risks. Gradio’s simplicity and ease of use, while beneficial for developers, also presented unique security challenges, as we didn’t want developers to need to set up complex security measures like CORS and CSP policies.
By the end of the collaboration, we fixed all of the security risks that were identified by Trail of Bits. All the fixes were validated by Trail of Bits, and are included in the Gradio 5.0 release. While it is impossible to prove the absence of security vulnerabilities, this is a major step in giving reassurance that your Gradio apps are safe.
## Major findings
We outline below the major security vulnerabilities that were discovered by Trail of Bits corresponding to the 4 scenarios above. In the interest of transparency and the spirit of open-source, we are making the [full security report public](https://github.com/trailofbits/publications/blob/master/reviews/2024-10-huggingface-gradio-securityreview.pdf), and more details for each of these issues can be found in the report.
**Gradio apps running locally**
* **TOB-GRADIO-1** and **TOB-GRADIO-2**: Misconfigurations in the server’s CORS policy that, in the context of an authenticated Gradio server, would allow attackers to steal access tokens and take over a victim’s accounts when they visit their malicious website.
**Gradio apps deployed on Hugging Face Spaces or other servers**
* **TOB-GRADIO-3**: A full read GET-based SSRF that would allow attackers to make requests to and read the responses from arbitrary endpoints, including those on the user’s internal network.
* **TOB-GRADIO-10**: Arbitrary file type uploads that would allow an attacker to host XSS payloads on a user’s Gradio server. In the context of an authenticated Gradio server, an attacker could use this to take over user accounts when the victim accesses an attacker’s malicious website.
* **TOB-GRADIO-13**: A race condition that allows an attacker to reroute user traffic to their server and steal uploaded files or chatbot conversations.
* **TOB-GRADIO-16**: Several components’ post-process functions could allow attackers to leak arbitrary files in very simple Gradio server configurations.
**Gradio apps shared with built-in share links**
* **TOB-GRADIO-19**: Remote code execution (RCE) with the root user on the Gradio API Server via an nginx misconfiguration that exposed the unauthenticated Docker API. This allowed an attacker to provide a malicious host and port in step 2 of the diagram and redirect all frp tunnels to a malicious server that records all user traffic, including uploaded files and chatbot conversations.
* **TOB-GRADIO-11**: Lack of robust encryption in communications between the frp-client and frp-server, allowing attackers in a position to intercept requests (the ones from steps 6 and 7 in the diagram above) to read and modify the data going to and from the frp-server.
**Supply chain vulnerabilities originating from the Gradio CI pipeline**
* **TOB-GRADIO-25**: Several GitHub Actions workflows in the Gradio repository use third-party actions pinned to tags or branch names instead of full commit SHAs. This could allow malicious actors to silently modify actions, potentially leading to tampering with application releases or leaking secrets.
* Separately, a [GitHub security researcher reported](https://github.com/gradio-app/gradio/security/advisories/GHSA-48pj-2428-pp3w) that our GitHub actions could allow untrusted code execution and secret exfiltration if an attacker triggered a workflow and cleverly dumped the memory of GitHub runners.
## Going forward
We’re very grateful to Trail of Bits for the comprehensive security audit of Gradio and for validating the mitigations that we put in place for Gradio 5.
Going forward, we are planning to continue working with the security community to identify and mitigate security issues in Gradio. We have also added security unit tests to our test suite, fuzzer tests specifically designed to identify potential vulnerabilities, and are using static analysis tools like Semgrep in our CI to detect common security issues in our code and prevent security regressions.
We are committed to ensuring that as we continue to develop Gradio 5 ([and we have lots of plans!](https://huggingface.co./blog/gradio-5)), we do so in a manner that prioritizes security so that we can do our part in making machine learning applications better and safer.
Install Gradio 5 today:
`pip install --upgrade gradio`
And start [building your first Gradio 5 application](https://www.gradio.app/guides/quickstart). | [
[
"mlops",
"implementation",
"security",
"tools"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"security",
"tools",
"implementation",
"mlops"
] | null | null |
dc3ec0f4-c053-491d-8c35-0938492e1238 | completed | 2025-01-16T03:08:37.719000 | 2025-01-19T17:14:34.129000 | 078c94d6-25c8-47bc-9402-90bbea13d14d | Showcase Your Projects in Spaces using Gradio | merve | gradio-spaces.md | It's so easy to demonstrate a Machine Learning project thanks to [Gradio](https://gradio.app/).
In this blog post, we'll walk you through:
- the recent Gradio integration that helps you demo models from the Hub seamlessly with a few lines of code, leveraging the [Inference API](https://huggingface.co./inference-api).
- how to use Hugging Face Spaces to host demos of your own models.
## Hugging Face Hub Integration in Gradio
You can demonstrate your models in the Hub easily. You only need to define the [Interface](https://gradio.app/docs#interface) that includes:
- The repository ID of the model you want to infer with
- A description and title
- Example inputs to guide your audience
After defining your Interface, just call `.launch()` and your demo will start running. You can do this in Colab, but if you want to share it with the community a great option is to use Spaces!
Spaces are a simple, free way to host your ML demo apps in Python. To do so, you can create a repository at https://huggingface.co./new-space and select Gradio as the SDK. Once done, you can create a file called `app.py`, copy the code below, and your app will be up and running in a few seconds!
```python
import gradio as gr
description = "Story generation with GPT-2"
title = "Generate your own story"
examples = [["Adventurer is approached by a mysterious stranger in the tavern for a new quest."]]
interface = gr.Interface.load("huggingface/pranavpsv/gpt2-genre-story-generator",
            title=title,
            description=description,
            examples=examples
)
interface.launch()
```
You can play with the Story Generation model [here](https://huggingface.co./spaces/merve/GPT-2-story-gen)
![story-gen](assets/28_gradio-spaces/story-gen.png)
Under the hood, Gradio calls the Inference API, which supports Transformers as well as other popular ML frameworks such as spaCy, SpeechBrain and Asteroid. This integration supports different types of models, such as `image-to-text`, `speech-to-text`, `text-to-speech` and more. You can check out this example BigGAN ImageNet `text-to-image` model [here](https://huggingface.co./spaces/merve/BigGAN-ImageNET). The implementation is below.
```python
import gradio as gr
description = "BigGAN text-to-image demo."
title = "BigGAN ImageNet"
interface = gr.Interface.load("huggingface/osanseviero/BigGAN-deep-128",
description=description,
title = title,
examples=[["american robin"]]
)
interface.launch()
```
![big-gan](assets/28_gradio-spaces/big-gan.png)
## Serving Custom Model Checkpoints with Gradio in Hugging Face Spaces
You can serve your models in Spaces even if the Inference API does not support your model. Just wrap your model inference in a Gradio `Interface` as described below and put it in Spaces.
![imagenet-demo](assets/28_gradio-spaces/imagenet-demo.gif)
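As a rough illustration of what that wrapping can look like (this is not the exact code behind the demo above; `load_my_model` and `preprocess` are hypothetical placeholders for your own inference code):
```python
import gradio as gr

model = load_my_model()  # hypothetical: load your own checkpoint however you like

def classify(image):
    inputs = preprocess(image)      # hypothetical pre-processing step
    probabilities = model(inputs)   # run your own inference
    # the Label output expects a {label: probability} mapping
    return {label: float(prob) for label, prob in probabilities.items()}

gr.Interface(fn=classify,
             inputs=gr.inputs.Image(),
             outputs=gr.outputs.Label(num_top_classes=3)).launch()
```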
## Mix and Match Models!
Using Gradio Series, you can mix and match different models! Here, we've put a French-to-English translation model in front of the story generator and an English-to-French translation model after it, to make a French story generator.
```python
import gradio as gr
from gradio.mix import Series
description = "Generate your own D&D story!"
title = "French Story Generator using Opus MT and GPT-2"
translator_fr = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-fr-en")
story_gen = gr.Interface.load("huggingface/pranavpsv/gpt2-genre-story-generator")
translator_en = gr.Interface.load("huggingface/Helsinki-NLP/opus-mt-en-fr")
examples = [["L'aventurier est approché par un mystérieux étranger, pour une nouvelle quête."]]
Series(translator_fr, story_gen, translator_en, description = description,
title = title,
examples=examples, inputs = gr.inputs.Textbox(lines = 10)).launch()
```
You can check out the French Story Generator [here](https://huggingface.co./spaces/merve/french-story-gen)
![story-gen-fr](assets/28_gradio-spaces/story-gen-fr.png)
## Uploading your Models to the Spaces
You can serve your demos on Hugging Face thanks to Spaces! To do this, simply create a new Space, and then drag and drop your demos or use Git.
![spaces-demo](assets/28_gradio-spaces/spaces-demo-finalized.gif)
Easily build your first demo with Spaces [here](https://huggingface.co./spaces)! | [
[
"mlops",
"implementation",
"tools",
"integration"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"mlops",
"implementation",
"tools",
"integration"
] | null | null |
aa30786c-27c9-4929-9e95-5c2516aed772 | completed | 2025-01-16T03:08:37.719000 | 2025-01-19T18:49:32.224000 | 80f1fa1e-c44c-432b-96e3-e313679d4c1a | Introducing smolagents: simple agents that write actions in code. | m-ric, merve, thomwolf | smolagents.md | Today we are launching [`smolagents`](https://github.com/huggingface/smolagents), a very simple library that unlocks agentic capabilities for language models. Here’s a glimpse:
```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())
agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
```
<div class="flex justify-center">
<img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/smolagents/smolagents.gif" />
</div>
## Table of Contents
- [🤔 What are agents?](#🤔-what-are-agents)
- [✅ When to use agents / ⛔ when to avoid them](#✅-when-to-use-agents--⛔-when-to-avoid-them)
- [Code agents](#code-agents)
- [Introducing *smolagents*: making agents simple 🥳](#introducing-smolagents-making-agents-simple-🥳)
- [Building an agent](#building-an-agent)
- [How strong are open models for agentic workflows?](#how-strong-are-open-models-for-agentic-workflows)
- [Next steps 🚀](#next-steps-🚀)
## 🤔 What are agents?
Any efficient system using AI will need to provide LLMs some kind of access to the real world: for instance the possibility to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should have ***agency***. Agentic programs are the gateway to the outside world for LLMs.
AI Agents are **programs where LLM outputs control the workflow**.
Any system leveraging LLMs will integrate the LLM outputs into code. The influence of the LLM's output on the code workflow is the level of agency of LLMs in the system.
Note that with this definition, "agent" is not a discrete, 0 or 1 definition: instead, "agency" evolves on a continuous spectrum, as you give more or less power to the LLM on your workflow.
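As a rough, library-agnostic illustration of the lower end of that spectrum (this is not smolagents code; `llm`, `handle_billing`, and `handle_tech` are hypothetical placeholders):
```python
ticket = "My invoice is wrong, I was charged twice."  # example input

# Level 0 agency: the LLM output is just a value; it does not affect control flow
summary = llm(f"Summarize this support ticket: {ticket}")

# One step up: the LLM output decides which branch of *our* code runs (a "router")
route = llm(f"Which team should handle this ticket? Answer 'billing' or 'tech'.\n{ticket}")
if route.strip() == "billing":
    handle_billing(ticket)
else:
    handle_tech(ticket)
```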
The table below illustrates how agency varies across systems:
| Agency Level | Description | How that's called | Example Pattern |
| | [
[
"llm",
"implementation",
"tools",
"text_generation"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"implementation",
"tools",
"text_generation"
] | null | null |
df2462d0-e003-4f15-ac32-7363e169e427 | completed | 2025-01-16T03:08:37.719000 | 2025-01-16T03:17:50.594000 | 07dece9f-a414-48df-8173-23243786b9cd | MTEB: Massive Text Embedding Benchmark | Muennighoff | mteb.md | MTEB is a massive benchmark for measuring the performance of text embedding models on diverse embedding tasks.
The 🥇 [leaderboard](https://huggingface.co./spaces/mteb/leaderboard) provides a holistic view of the best text embedding models out there on a variety of tasks.
The 📝 [paper](https://arxiv.org/abs/2210.07316) gives background on the tasks and datasets in MTEB and analyzes leaderboard results!
The 💻 [Github repo](https://github.com/embeddings-benchmark/mteb) contains the code for benchmarking and submitting any model of your choice to the leaderboard.
<p align="center">
<a href="https://huggingface.co./spaces/mteb/leaderboard"><img src="assets/110_mteb/leaderboard.png" alt="MTEB Leaderboard"></a>
</p>
## Why Text Embeddings?
Text Embeddings are vector representations of text that encode semantic information. As machines require numerical inputs to perform computations, text embeddings are a crucial component of many downstream NLP applications. For example, Google uses text embeddings to [power their search engine](https://cloud.google.com/blog/topics/developers-practitioners/find-anything-blazingly-fast-googles-vector-search-technology). Text Embeddings can also be used for finding [patterns in large amounts of text via clustering](https://txt.cohere.ai/combing-for-insight-in-10-000-hacker-news-posts-with-text-clustering/) or as inputs to text classification models, such as in our recent [SetFit](https://huggingface.co./blog/setfit) work. The quality of text embeddings, however, is highly dependent on the embedding model used. MTEB is designed to help you find the best embedding model out there for a variety of tasks!
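As a quick illustration of what an embedding model gives you, here is a minimal sketch using `sentence-transformers` (with `all-MiniLM-L6-v2` chosen purely as a small example model):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode([
    "How do I bake bread?",
    "What's a good bread recipe?",
    "The stock market fell today.",
])
# semantically similar sentences end up close together in vector space
print(util.cos_sim(embeddings[0], embeddings[1]))  # high similarity
print(util.cos_sim(embeddings[0], embeddings[2]))  # low similarity
```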
## MTEB
🐋 **Massive**: MTEB includes 56 datasets across 8 tasks and currently summarizes >2000 results on the [leaderboard](https://huggingface.co./spaces/mteb/leaderboard).
🌎 **Multilingual**: MTEB contains up to 112 different languages! We have benchmarked several multilingual models on Bitext Mining, Classification, and STS.
🦚 **Extensible**: Be it new tasks, datasets, metrics, or leaderboard additions, any contribution is very welcome. Check out the GitHub repository to [submit to the leaderboard](https://github.com/embeddings-benchmark/mteb#leaderboard) or [solve open issues](https://github.com/embeddings-benchmark/mteb/issues). We hope you join us on the journey of finding the best text embedding model!
<p align="center">
<img src="assets/110_mteb/mteb_diagram_white_background.png" alt="MTEB Taxonomy">
</p>
<p align="center">
<em>Overview of tasks and datasets in MTEB. Multilingual datasets are marked with a purple shade.</em>
</p>
## Models
For the initial benchmarking of MTEB, we focused on models claiming state-of-the-art results and popular models on the Hub. This led to a high representation of transformers. 🤖
<p align="center">
<img src="assets/110_mteb/benchmark.png" alt="MTEB Speed Benchmark">
</p>
<p align="center">
<em>Models by average English MTEB score (y) vs speed (x) vs embedding size (circle size).</em>
</p>
We grouped models into the following three attributes to simplify finding the best model for your task:
**🏎 Maximum speed** Models like [Glove](https://huggingface.co./sentence-transformers/average_word_embeddings_glove.6B.300d) offer high speed, but suffer from a lack of context awareness resulting in low average MTEB scores.
**⚖️ Speed and performance** Slightly slower, but significantly stronger, [all-mpnet-base-v2](https://huggingface.co./sentence-transformers/all-mpnet-base-v2) or [all-MiniLM-L6-v2](https://huggingface.co./sentence-transformers/all-MiniLM-L6-v2) provide a good balance between speed and performance.
**💪 Maximum performance** Multi-billion parameter models like [ST5-XXL](https://huggingface.co./sentence-transformers/sentence-t5-xxl), [GTR-XXL](https://huggingface.co./sentence-transformers/gtr-t5-xxl) or [SGPT-5.8B-msmarco](https://huggingface.co./Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit) dominate on MTEB. They tend to also produce bigger embeddings like [SGPT-5.8B-msmarco](https://huggingface.co./Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit) which produces 4096 dimensional embeddings requiring more storage!
Model performance varies a lot depending on the task and dataset, so we recommend checking the various tabs of the [leaderboard](https://huggingface.co./spaces/mteb/leaderboard) before deciding which model to use!
## Benchmark your model
Using the [MTEB library](https://github.com/embeddings-benchmark/mteb), you can benchmark any model that produces embeddings and add its results to the public leaderboard. Let's run through a quick example!
First, install the library:
```sh
pip install mteb
```
Next, benchmark a model on a dataset, for example [komninos word embeddings](https://huggingface.co./sentence-transformers/average_word_embeddings_komninos) on [Banking77](https://huggingface.co./datasets/mteb/banking77).
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer
model_name = "average_word_embeddings_komninos"
model = SentenceTransformer(model_name)
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder=f"results/{model_name}")
```
This should produce a `results/average_word_embeddings_komninos/Banking77Classification.json` file!
Now you can submit the results to the leaderboard by adding it to the metadata of the `README.md` of any model on the Hub.
Run our [automatic script](https://github.com/embeddings-benchmark/mteb/blob/main/scripts/mteb_meta.py) to generate the metadata:
```sh
python mteb_meta.py results/average_word_embeddings_komninos
```
The script will produce a `mteb_metadata.md` file that looks like this:
```sh | [
[
"data",
"research",
"benchmarks",
"tools",
"text_classification"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"benchmarks",
"research",
"tools",
"data"
] | null | null |
f01bfc90-3615-45c6-a448-debd0ddd13d1 | completed | 2025-01-16T03:08:37.719000 | 2025-01-16T03:19:26.902000 | 510bfb44-c7a6-4eea-9b34-c0a929d2d0e7 | Porting fairseq wmt19 translation system to transformers | stas | porting-fsmt.md | ##### A guest blog post by Stas Bekman
This article is an attempt to document how [fairseq wmt19 translation system](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) was ported to [`transformers`](https://github.com/huggingface/transformers/).
I was looking for some interesting project to work on and [Sam Shleifer](https://github.com/sshleifer) suggested I work on [porting a high quality translator](https://github.com/huggingface/transformers/issues/5419).
I read the short paper: [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616) that describes the original system and decided to give it a try.
Initially, I had no idea how to approach this complex project and Sam helped me to [break it down to smaller tasks](https://github.com/huggingface/transformers/issues/5419), which was of a great help.
I chose to work with the pre-trained `en-ru`/`ru-en` models during porting as I speak both languages. It'd have been much more difficult to work with `de-en`/`en-de` pairs as I don't speak German, and being able to evaluate the translation quality by just reading and making sense of the outputs at the advanced stages of the porting process saved me a lot of time.
Also, as I did the initial porting with the `en-ru`/`ru-en` models, I was totally unaware that the `de-en`/`en-de` models used a merged vocabulary, whereas the former used 2 separate vocabularies of different sizes. So once I did the more complicated work of supporting 2 separate vocabularies, it was trivial to get the merged vocabulary to work.
## Let's cheat
The first step was to cheat, of course. Why make a big effort when one can make a little one? So I wrote a [short notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/cheat.ipynb) that in a few lines of code provided a proxy to `fairseq` and emulated the `transformers` API.
If nothing but basic translation was required, this would have been enough. But, of course, we wanted the full porting, so after this small victory, I moved on to much harder things.
## Preparations
For the sake of this article let's assume that we work under `~/porting`, and therefore let's create this directory:
```
mkdir ~/porting
cd ~/porting
```
We need to install a few things for this work:
```
# install fairseq
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install -e .
# install mosesdecoder under fairseq
git clone https://github.com/moses-smt/mosesdecoder
# install fastBPE under fairseq
git clone [email protected]:glample/fastBPE.git
cd fastBPE; g++ -std=c++11 -pthread -O3 fastBPE/main.cc -IfastBPE -o fast; cd -
cd -
# install transformers
git clone https://github.com/huggingface/transformers/
pip install -e .[dev]
```
## Files
As a quick overview, the following files needed to be created and written:
* [`src/transformers/configuration_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/configuration_fsmt.py) - a short configuration class.
* [`src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py) - a complex conversion script.
* [`src/transformers/modeling_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/modeling_fsmt.py) - this is where the model architecture is implemented.
* [`src/transformers/tokenization_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/tokenization_fsmt.py) - a tokenizer code.
* [`tests/test_modeling_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/tests/test_modeling_fsmt.py) - model tests.
* [`tests/test_tokenization_fsmt.py`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/tests/test_tokenization_fsmt.py) - tokenizer tests.
* [`docs/source/model_doc/fsmt.rst`](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/docs/source/model_doc/fsmt.rst) - a doc file.
There are other files that needed to be modified as well, we will talk about those towards the end.
## Conversion
One of the most important parts of the porting process is to create a script that will take all the available source data provided by the original developer of the model, which includes a checkpoint with pre-trained weights, model and training configuration, dictionaries and tokenizer support files, and convert them into a new set of model files supported by `transformers`. You will find the final conversion script here: [src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py)
I started this process by copying one of the existing conversion scripts `src/transformers/convert_bart_original_pytorch_checkpoint_to_pytorch.py`, gutted most of it out and then gradually added parts to it as I was progressing in the porting process.
During the development I was testing all my code against a local copy of the converted model files, and only at the very end when everything was ready I uploaded the files to 🤗 s3 and then continued testing against the online version.
## fairseq model and its support files
Let's first look at what data we get with the `fairseq` pre-trained model.
We are going to use the convenient `torch.hub` API, which makes it very easy to deploy models submitted to [that hub](https://pytorch.org/hub/):
```
import torch
torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-ru', checkpoint_file='model4.pt',
tokenizer='moses', bpe='fastbpe')
```
This code downloads the pre-trained model and its support files. I found this information at the page corresponding to [fairseq](https://pytorch.org/hub/pytorch_fairseq_translation/) on the pytorch hub.
To see what's inside the downloaded files, we have to first hunt down the right folder under `~/.cache`.
```
ls -1 ~/.cache/torch/hub/pytorch_fairseq/
```
shows:
```
15bca559d0277eb5c17149cc7e808459c6e307e5dfbb296d0cf1cfe89bb665d7.ded47c1b3054e7b2d78c0b86297f36a170b7d2e7980d8c29003634eb58d973d9
15bca559d0277eb5c17149cc7e808459c6e307e5dfbb296d0cf1cfe89bb665d7.ded47c1b3054e7b2d78c0b86297f36a170b7d2e7980d8c29003634eb58d973d9.json
```
You may have more than one entry there if you have been using the `hub` for other models.
Let's make a symlink so that we can easily refer to that obscure cache folder name down the road:
```
ln -s /code/data/cache/torch/hub/pytorch_fairseq/15bca559d0277eb5c17149cc7e808459c6e307e5dfbb296d0cf1cfe89bb665d7.ded47c1b3054e7b2d78c0b86297f36a170b7d2e7980d8c29003634eb58d973d9 \
~/porting/pytorch_fairseq_model
```
Note: the path could be different when you try it yourself, since the hash value of the model could change. You will find the right one in `~/.cache/torch/hub/pytorch_fairseq/`
If we look inside that folder:
```
ls -l ~/porting/pytorch_fairseq_model/
total 13646584
-rw-rw-r-- 1 stas stas 532048 Sep 8 21:29 bpecodes
-rw-rw-r-- 1 stas stas 351706 Sep 8 21:29 dict.en.txt
-rw-rw-r-- 1 stas stas 515506 Sep 8 21:29 dict.ru.txt
-rw-rw-r-- 1 stas stas 3493170533 Sep 8 21:28 model1.pt
-rw-rw-r-- 1 stas stas 3493170532 Sep 8 21:28 model2.pt
-rw-rw-r-- 1 stas stas 3493170374 Sep 8 21:28 model3.pt
-rw-rw-r-- 1 stas stas 3493170386 Sep 8 21:29 model4.pt
```
we have:
1. `model*.pt` - 4 checkpoints (pytorch `state_dict` with all the pre-trained weights, and various other things)
2. `dict.*.txt` - source and target dictionaries
3. `bpecodes` - special map file used by the tokenizer
We are going to investigate each of these files in the following sections.
## How translation systems work
Here is a very brief introduction to how computers translate text nowadays.
Computers can't read text, but can only handle numbers. So when working with text we have to map one or more letters into numbers, and hand those to a computer program. When the program completes, it too returns numbers, which we need to convert back into text.
Let's start with two sentences in Russian and English and assign a unique number to each word:
```
я люблю следовательно я существую
10 11 12 10 13
I love therefore I am
20 21 22 20 23
```
The numbers starting with 10 map Russian words to unique numbers. The numbers starting with 20 do the same for English words. If you don't speak Russian, you can still see that the word `я` (means 'I') repeats twice in the sentence and it gets the same number 10 associated with it. Same goes for `I` (20), which also repeats twice.
A translation system works in the following stages:
```
1. [я люблю следовательно я существую] # tokenize sentence into words
2. [10 11 12 10 13] # look up words in the input dictionary and convert to ids
3. [black box] # machine learning system magic
4. [20 21 22 20 23] # look up numbers in the output dictionary and convert to text
5. [I love therefore I am] # detokenize the tokens back into a sentence
```
If we combine the first two and the last two steps we get 3 stages:
1. **Encode input**: break input text into tokens, create a dictionary (vocab) of these tokens and remap each token into a unique id in that dictionary.
2. **Generate translation**: take input numbers, run them through a pre-trained machine learning model which predicts the best translation, and return output numbers.
3. **Decode output**: take output numbers, look them up in the target language dictionary, convert them back to text, and finally merge the converted tokens into the translated sentence.
The second stage may return one or several possible translations. In the case of the latter the caller then can choose the most suitable outcome. In this article I will refer to [the beam search algorithm](https://en.wikipedia.org/wiki/Beam_search), which is one of the ways multiple possible results are searched for. And the size of the beam refers to how many results are returned.
If there is only one result that's requested, the model will choose the one with the highest likelihood probability. If multiple results are requested it will return those results sorted by their probabilities.
Note that this same idea applies to the majority of NLP tasks, and not just translation.
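To make the three stages concrete, here is a quick sketch using the finished port that this article builds towards (the `facebook/wmt19-en-ru` checkpoint on the Hub):
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-ru"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

# 1. encode input: text -> token ids
input_ids = tokenizer.encode("I love therefore I am", return_tensors="pt")
# 2. generate translation: input ids -> output ids, searching 5 beams, keeping the best one
output_ids = model.generate(input_ids, num_beams=5, num_return_sequences=1)
# 3. decode output: output ids -> text
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```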
## Tokenization
Early systems tokenized sentences into words and punctuation marks. But since many languages have hundreds of thousands of words, it is very taxing to work with huge vocabularies, as it dramatically increases the compute resource requirements and the length of time to complete the task.
As of 2020 there are quite a few different tokenizing methods, but most of the recent ones are based on sub-word tokenization - that is, instead of breaking the input text down into words, these modern tokenizers break it down into word segments and letters, using some kind of training to obtain the optimal tokenization.
Let's see how this approach helps to reduce memory and computation requirements. If we have an input vocabulary of 6 common words: go, going, speak, speaking, sleep, sleeping - with word-level tokenization we end up with 6 tokens. However, if we break these down into: go, go-ing, speak, speak-ing, etc., then we have only 4 tokens in our vocabulary: go, speak, sleep, ing. This simple change made a 33% improvement! Except, the sub-word tokenizers don't use grammar rules, but they are trained on massive text inputs to find such splits. In this example I used a simple grammar rule as it's easy to understand.
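The same counting from the example above, spelled out just to make the arithmetic explicit:
```python
word_vocab = {"go", "going", "speak", "speaking", "sleep", "sleeping"}
subword_vocab = {"go", "speak", "sleep", "ing"}
print(len(word_vocab), "->", len(subword_vocab))  # 6 -> 4, a 33% reduction
```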
Another important advantage of this approach is when dealing with input text words that aren't in our vocabulary. For example, let's say our system encounters the word `grokking` (*), which can't be found in its vocabulary. If we split it into `grokk` + `ing`, then the machine learning model might not know what to do with the first part of the word, but it gets a useful insight that `ing` indicates a continuous tense, so it'll be able to produce a better translation. In such a situation the tokenizer will split the unknown segments into segments it knows, in the worst case reducing them to individual letters.
* footnote: to grok was coined in 1961 by Robert A. Heinlein in "Stranger in a Strange Land": to understand (something) intuitively or by empathy.
There are many other nuances to why the modern tokenization approach is far superior to simple word tokenization, which won't be covered in the scope of this article. Most of these systems are very complex in how they do the tokenization, compared to the simple example of splitting off `ing` endings that was just demonstrated, but the principle is similar.
## Tokenizer porting
The first step was to port the encoder part of the tokenizer, where text is converted to ids. The decoder part won't be needed until the very end.
### fairseq's tokenizer workings
Let's understand how `fairseq`'s tokenizer works.
`fairseq` (*) uses the [Byte Pair Encoding](https://en.wikipedia.org/wiki/Byte_pair_encoding) (BPE) algorithm for tokenization.
* footnote: from here on when I refer to `fairseq`, I refer [to this specific model implementation](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) - the `fairseq` project itself has dozens of different implementations of different models.
Let's see what BPE does:
```
import torch
sentence = "Machine Learning is great"
checkpoint_file='model4.pt'
model = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-ru', checkpoint_file=checkpoint_file, tokenizer='moses', bpe='fastbpe')
# encode step by step
tokens = model.tokenize(sentence)
print("tokenize ", tokens)
bpe = model.apply_bpe(tokens)
print("apply_bpe: ", bpe)
bin = model.binarize(bpe)
print("binarize: ", len(bin), bin)
# compare to model.encode - should give us the same output
expected = model.encode(sentence)
print("encode: ", len(expected), expected)
```
gives us:
```
('tokenize ', 'Machine Learning is great')
('apply_bpe: ', 'Mach@@ ine Lear@@ ning is great')
('binarize: ', 7, tensor([10217, 1419, 3, 2515, 21, 1054, 2]))
('encode: ', 7, tensor([10217, 1419, 3, 2515, 21, 1054, 2]))
```
You can see that `model.encode` does `tokenize+apply_bpe+binarize` - as we get the same output.
The steps were:
1. `tokenize`: normally it'd escape apostrophes and do other pre-processing; in this example it just returned the input sentence without any changes
2. `apply_bpe`: BPE splits the input into words and sub-words according to its `bpecodes` file supplied by the tokenizer - we get 6 BPE chunks
3. `binarize`: this simply remaps the BPE chunks from the previous step into their corresponding ids in the vocabulary (which is also downloaded with the model)
You can refer to [this notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/tokenizer.ipynb) to see more details.
This is a good time to look inside the `bpecodes` file. Here is the top of the file:
```
$ head -15 ~/porting/pytorch_fairseq_model/bpecodes
e n</w> 1423551864
e r 1300703664
e r</w> 1142368899
i n 1130674201
c h 933581741
a n 845658658
t h 811639783
e n 780050874
u n 661783167
s t 592856434
e i 579569900
a r 494774817
a l 444331573
o r 439176406
th e</w> 432025210
[...]
```
The top entries of this file include very frequent short 1-letter sequences. As we will see in a moment the bottom includes the most common multi-letter sub-words and even full long words.
A special token `</w>` indicates the end of the word. So in several lines quoted above we find:
```
e n</w> 1423551864
e r</w> 1142368899
th e</w> 432025210
```
If the second column doesn't include `</w>`, it means that this segment is found in the middle of the word and not at the end of it.
The last column declares the number of times this BPE code has been encountered while being trained. The `bpecodes` file is sorted by this column - so the most common BPE codes are on top.
By looking at the counts we now know that when this tokenizer was trained it encountered 1,423,551,864 words ending in `en`, 1,142,368,899 words ending in `er` and 432,025,210 words ending in `the`. For the latter it most likely means the actual word `the`, but it would also include words like `lathe`, `loathe`, `tithe`, etc.
These huge numbers also indicate to us that this tokenizer was trained on an enormous amount of text!
If we look at the bottom of the same file:
```
$ tail -10 ~/porting/pytorch_fairseq_model/bpecodes
4 x 109019
F ische</w> 109018
sal aries</w> 109012
e kt 108978
ver gewal 108978
Sten cils</w> 108977
Freiwilli ge</w> 108969
doub les</w> 108965
po ckets</w> 108953
Gö tz</w> 108943
```
we see complex combinations of sub-words which are still pretty frequent, e.g. `sal aries` for 109,012 times! So it got its own dedicated entry in the `bpecodes` map file.
How does `apply_bpe` do its work? It looks up the various combinations of letters in the `bpecodes` map file, and when it finds the longest fitting entry it uses that.
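In merge-ranked BPE implementations (like the pure-Python one `transformers` ends up using), that lookup is done by repeatedly merging the best-ranked adjacent pair of symbols. Here is a simplified sketch, where the hypothetical `merge_ranks` dict maps each pair from a `bpecodes` line to its line number (lower = higher priority):
```python
def bpe(word, merge_ranks):
    # start from individual characters, with the end-of-word marker on the last one
    symbols = list(word[:-1]) + [word[-1] + "</w>"]
    while len(symbols) > 1:
        pairs = [(symbols[i], symbols[i + 1]) for i in range(len(symbols) - 1)]
        best = min(pairs, key=lambda p: merge_ranks.get(p, float("inf")))
        if best not in merge_ranks:
            break  # no more known merges apply
        merged, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                merged.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                merged.append(symbols[i])
                i += 1
        symbols = merged
    return symbols
```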
Going back to our example, we saw that it split `Machine` into: `Mach@@` + `ine` - let's check:
```
$ grep -i ^mach ~/porting/pytorch_fairseq_model/bpecodes
mach ine</w> 463985
Mach t 376252
Mach ines</w> 374223
mach ines</w> 214050
Mach th 119438
```
You can see that it has `mach ine</w>`. We don't see `Mach ine` in there - so it must be handling lowercased lookups when the normal case doesn't match.
Now let's check: `Lear@@` + `ning`
```
$ grep -i ^lear ~/porting/pytorch_fairseq_model/bpecodes
lear n</w> 675290
lear ned</w> 505087
lear ning</w> 417623
```
We find `lear ning</w>` is there (again the case is not the same).
Thinking more about it, the case probably doesn't matter for tokenization, as long as the dictionary, where it is critical to have each case covered, has a unique entry for `Mach`/`Lear` and `mach`/`lear`.
Hopefully, you can now see how this works.
One confusing thing is that if you remember the `apply_bpe` output was:
```
('apply_bpe: ', 6, ['Mach@@', 'ine', 'Lear@@', 'ning', 'is', 'great'])
```
Instead of marking the endings of words with `</w>`, it leaves those as is and instead marks the segments that are not word endings with `@@`. This is probably because the `fastBPE` implementation used by `fairseq` does it that way. I had to change this to fit the `transformers` implementation, which doesn't use `fastBPE`.
One last thing to check is the remapping of the BPE codes to vocabulary ids. To repeat, we had:
```
('apply_bpe: ', 'Mach@@ ine Lear@@ ning is great')
('binarize: ', 7, tensor([10217, 1419, 3, 2515, 21, 1054, 2]))
```
`2` - the last token id is an `eos` (end of stream) token. It's used to indicate to the model the end of input.
And then `Mach@@` gets remapped to `10217`, and `ine` to `1419`.
Let's check that the dictionary file is in agreement:
```
$ grep ^Mach@@ ~/porting/pytorch_fairseq_model/dict.en.txt
Mach@@ 6410
$ grep "^ine " ~/porting/pytorch_fairseq_model/dict.en.txt
ine 88376
```
Wait a second - those aren't the ids that we got after `binarize`, which should be `10217` and `1419` respectively.
It took some investigating to find out that the vocab file ids aren't the ids used by the model and that internally it remaps them to new ids once the vocab file is loaded. Luckily, I didn't need to figure out how exactly it was done. Instead, I just used `fairseq.data.dictionary.Dictionary.load` to load the dictionary (*), which performed all the re-mappings, - and I then saved the final dictionary. I found out about that `Dictionary` class by stepping through `fairseq` code with debugger.
* footnote: the more I work on porting models and datasets, the more I realize that putting the original code to work for me, rather than trying to replicate it, is a huge time saver and most importantly that code has already been tested - it's too easy to miss something and down the road discover big problems! After all, at the end, none of this conversion code will matter, since only the data it generated will be used by `transformers` and its end users.
Here is the relevant part of the conversion script:
```
from fairseq.data.dictionary import Dictionary
def rewrite_dict_keys(d):
# (1) remove word breaking symbol
# (2) add word ending symbol where the word is not broken up,
# e.g.: d = {'le@@': 5, 'tt@@': 6, 'er': 7} => {'le': 5, 'tt': 6, 'er</w>': 7}
d2 = dict((re.sub(r"@@$", "", k), v) if k.endswith("@@") else (re.sub(r"$", "</w>", k), v) for k, v in d.items())
keep_keys = "<s> <pad> </s> <unk>".split()
# restore the special tokens
for k in keep_keys:
del d2[f"{k}</w>"]
d2[k] = d[k] # restore
return d2
src_dict_file = os.path.join(fsmt_folder_path, f"dict.{src_lang}.txt")
src_dict = Dictionary.load(src_dict_file)
src_vocab = rewrite_dict_keys(src_dict.indices)
src_vocab_size = len(src_vocab)
src_vocab_file = os.path.join(pytorch_dump_folder_path, "vocab-src.json")
print(f"Generating {src_vocab_file}")
with open(src_vocab_file, "w", encoding="utf-8") as f:
f.write(json.dumps(src_vocab, ensure_ascii=False, indent=json_indent))
# we did the same for the target dict - omitted quoting it here
# and we also had to save `bpecodes`, it's called `merges.txt` in the transformers land
```
After running the conversion script, let's check the converted dictionary:
```
$ grep '"Mach"' /code/huggingface/transformers-fair-wmt/data/wmt19-en-ru/vocab-src.json
"Mach": 10217,
$ grep '"ine</w>":' /code/huggingface/transformers-fair-wmt/data/wmt19-en-ru/vocab-src.json
"ine</w>": 1419,
```
We have the correct ids in the `transformers` version of the vocab file.
As you can see I also had to re-write the vocabularies to match the `transformers` BPE implementation. We have to change:
```
['Mach@@', 'ine', 'Lear@@', 'ning', 'is', 'great']
```
to:
```
['Mach', 'ine</w>', 'Lear', 'ning</w>', 'is</w>', 'great</w>']
```
Instead of marking the chunks that are non-final segments of a word, we mark the segments (or whole words) that are the final segment. One can easily go from one style of encoding to the other and back.
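For illustration, a tiny sketch of that round-trip (the helper names are hypothetical, not part of the actual conversion script):
```python
def fairseq_to_transformers(tokens):
    # '@@' means "the word continues"; unmarked tokens end a word, so they get '</w>'
    return [t[:-2] if t.endswith("@@") else t + "</w>" for t in tokens]

def transformers_to_fairseq(tokens):
    return [t[:-4] if t.endswith("</w>") else t + "@@" for t in tokens]

print(fairseq_to_transformers(['Mach@@', 'ine', 'Lear@@', 'ning', 'is', 'great']))
# ['Mach', 'ine</w>', 'Lear', 'ning</w>', 'is</w>', 'great</w>']
```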
This successfully completed the porting of the first part of the model files. You can see the final version of the code [here](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py#L128).
If you're curious to look deeper there are more tinkering bits in [this notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/tokenizer-dev.ipynb).
### Porting tokenizer's encoder to transformers
`transformers` can't rely on [`fastBPE`](https://github.com/glample/fastBPE) since the latter requires a C-compiler, but luckily someone already implemented a python version of the same in [`tokenization_xlm.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_xlm.py).
So I just copied it to `src/transformers/tokenization_fsmt.py` and renamed the class names:
```
cp tokenization_xlm.py tokenization_fsmt.py
perl -pi -e 's|XLM|FSMT|ig; s|xlm|fsmt|g;' tokenization_fsmt.py
```
and with very few changes I had a working encoder part of the tokenizer. There was a lot of code that didn't apply to the languages I needed to support, so I removed that code.
Since I needed 2 different vocabularies, instead of one here in tokenizer and everywhere else I had to change the code to support both. So for example I had to override the super-class' methods:
```
def get_vocab(self) -> Dict[str, int]:
return self.get_src_vocab()
@property
def vocab_size(self) -> int:
return self.src_vocab_size
```
Since `fairseq` didn't use `bos` (beginning of stream) tokens, I also had to change the code to not include those (*):
```
- return bos + token_ids_0 + sep
- return bos + token_ids_0 + sep + token_ids_1 + sep
+ return token_ids_0 + sep
+ return token_ids_0 + sep + token_ids_1 + sep
```
* footnote: this is the output of `diff(1)` which shows the difference between two chunks of code - lines starting with `-` show what was removed, and with `+` what was added.
`fairseq` was also escaping characters and performing an aggressive dash splitting, so I had to also change:
```
- [...].tokenize(text, return_str=False, escape=False)
+ [...].tokenize(text, return_str=False, escape=True, aggressive_dash_splits=True)
```
If you're following along, and would like to see all the changes I did to the original `tokenization_xlm.py`, you can do:
```
cp tokenization_xlm.py tokenization_orig.py
perl -pi -e 's|XLM|FSMT|g; s|xlm|fsmt|g;' tokenization_orig.py
diff -u tokenization_orig.py tokenization_fsmt.py | less
```
Just make sure you're checking out the repository [around the time fsmt was released](https://github.com/huggingface/transformers/tree/129fdae04033fe4adfe013b734deaec6ec34ae2e), since the 2 files could have diverged since then.
The final stage was to run through a bunch of inputs and to ensure that the ported tokenizer produced the same ids as the original. You can see this is done in [this notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/tokenizer.ipynb), which I was running repeatedly while trying to figure out how to make the outputs match.
This is how most of the porting process went: I'd take a small feature, run it the `fairseq` way, get the outputs, do the same with the `transformers` code, and try to make the outputs match - fiddling with the code until it did, then trying a different kind of input to make sure it produced the same outputs, and so on, until all inputs produced outputs that matched.
## Porting the core translation functionality
Having had a relatively quick success with porting the tokenizer (obviously, thanks to most of the code being there already), the next stage was much more complex. This is the `generate()` function which takes inputs ids, runs them through the model and returns output ids.
I had to break it down into multiple sub-tasks. I had to
1. port the model weights.
2. make `generate()` work for a single beam (i.e. return just one result).
3. and then multiple beams (i.e. return multiple results).
I first researched which of the existing architectures were the closest to my needs. It was BART that fit the closest, so I went ahead and did:
```
cp modeling_bart.py modeling_fsmt.py
perl -pi -e 's|Bart|FSMT|ig; s|bart|fsmt|g;' modeling_fsmt.py
```
This was my starting point that I needed to tweak to work with the model weights provided by `fairseq`.
### Porting weights and configuration
The first thing I did is to look at what was inside the publicly shared checkpoint. [This notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/config.ipynb) shows what I did there.
I discovered that there were 4 checkpoints in there. I had no idea what to do about it, so I started with a simpler job of using just the first checkpoint. Later I discovered that `fairseq` used all 4 checkpoints in an ensemble to get the best predictions, and that `transformers` currently doesn't support that feature. When the porting was completed and I was able to measure the performance scores, I found out that the `model4.pt` checkpoint provided the best score. But during the porting performance didn't matter much. Since I was using only one checkpoint it was crucial that when I was comparing outputs, I had `fairseq` also use just one and the same checkpoint.
To accomplish that I used a slightly different `fairseq` API:
```
from fairseq import hub_utils
#checkpoint_file = 'model1.pt:model2.pt:model3.pt:model4.pt'
checkpoint_file = 'model1.pt'
model_name_or_path = 'transformer.wmt19.ru-en'
data_name_or_path = '.'
cls = fairseq.model_parallel.models.transformer.ModelParallelTransformerModel
models = cls.hub_models()
kwargs = {'bpe': 'fastbpe', 'tokenizer': 'moses'}
ru2en = hub_utils.from_pretrained(
model_name_or_path,
checkpoint_file,
data_name_or_path,
archive_map=models,
**kwargs
)
```
First I looked at the model:
```
print(ru2en["models"][0])
```
```
TransformerModel(
(encoder): TransformerEncoder(
(dropout_module): FairseqDropout()
(embed_tokens): Embedding(31232, 1024, padding_idx=1)
(embed_positions): SinusoidalPositionalEmbedding()
(layers): ModuleList(
(0): TransformerEncoderLayer(
(self_attn): MultiheadAttention(
(dropout_module): FairseqDropout()
(k_proj): Linear(in_features=1024, out_features=1024, bias=True)
(v_proj): Linear(in_features=1024, out_features=1024, bias=True)
(q_proj): Linear(in_features=1024, out_features=1024, bias=True)
(out_proj): Linear(in_features=1024, out_features=1024, bias=True)
)
[...]
# the full output is in the notebook
```
which looked very similar to BART's architecture, with some slight differences in a few layers - some were added, others removed. So this was great news, as I didn't have to re-invent the wheel, but only had to tweak an already well-working design.
Note that in the code sample above I'm not using `torch.load()` to load the `state_dict`. That is what I initially did, and the result was most puzzling - I was missing the `self_attn.(k|q|v)_proj` weights and instead had a single `self_attn.in_proj`. When I loaded the model using the `fairseq` API instead, it fixed things up - apparently that model was old and used an older architecture that had one fused set of weights for `k/q/v`, whereas the newer architecture keeps them separate. When `fairseq` loads such an old model, it rewrites the weights to match the modern architecture.
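To make that concrete, here is a rough sketch of the kind of rewrite involved - the dimensions, tensor names and the q/k/v packing order here are assumptions for illustration, not `fairseq`'s actual code:

```
import torch

d_model = 1024
# hypothetical old-style fused projection of shape [3 * d_model, d_model]
in_proj_weight = torch.randn(3 * d_model, d_model)

# splitting along dim 0 recovers three separate projection matrices
q_proj, k_proj, v_proj = in_proj_weight.chunk(3, dim=0)
assert q_proj.shape == (d_model, d_model)
```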
I also used [this notebook](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/nbs/visualize-models.ipynb) to compare the `state_dict`s visually. In that notebook you will also see that `fairseq` fetches 2.2GB-worth of data in `last_optimizer_state`, which we can safely ignore, leaving a final model about 3 times smaller.
In the conversion script I also had to remove some `state_dict` keys that I wasn't going to use, e.g. `model.encoder.version`, `model.model` and a few others.
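For reference, dropping such keys is just a dictionary operation - something along these lines (the exact key list in the real conversion script is longer):

```
def remove_ignore_keys(state_dict, ignore_keys=("model.encoder.version", "model.model")):
    # drop entries we don't need; pop with a default so missing keys don't raise
    for k in ignore_keys:
        state_dict.pop(k, None)
    return state_dict
```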
Next we look at the configuration args:
```
args = dict(vars(ru2en["args"]))
pprint(args)
```
```
'activation_dropout': 0.0,
'activation_fn': 'relu',
'adam_betas': '(0.9, 0.98)',
'adam_eps': 1e-08,
'adaptive_input': False,
'adaptive_softmax_cutoff': None,
'adaptive_softmax_dropout': 0,
'arch': 'transformer_wmt_en_de_big',
'attention_dropout': 0.1,
'bpe': 'fastbpe',
[... full output is in the notebook ...]
```
OK, we will copy those to configure the model. I had to rename some of the arguments wherever `transformers` uses a different name for the corresponding configuration setting. So the re-mapping of the configuration looks as follows:
```
model_conf = {
"architectures": ["FSMTForConditionalGeneration"],
"model_type": "fsmt",
"activation_dropout": args["activation_dropout"],
"activation_function": "relu",
"attention_dropout": args["attention_dropout"],
"d_model": args["decoder_embed_dim"],
"dropout": args["dropout"],
"init_std": 0.02,
"max_position_embeddings": args["max_source_positions"],
"num_hidden_layers": args["encoder_layers"],
"src_vocab_size": src_vocab_size,
"tgt_vocab_size": tgt_vocab_size,
"langs": [src_lang, tgt_lang],
[...]
"bos_token_id": 0,
"pad_token_id": 1,
"eos_token_id": 2,
"is_encoder_decoder": True,
"scale_embedding": not args["no_scale_embedding"],
"tie_word_embeddings": args["share_all_embeddings"],
}
```
All that remains is to save the configuration into `config.json` and create a new `state_dict` dump into `pytorch.dump`:
```
print(f"Generating {fsmt_tokenizer_config_file}")
with open(fsmt_tokenizer_config_file, "w", encoding="utf-8") as f:
f.write(json.dumps(tokenizer_conf, ensure_ascii=False, indent=json_indent))
[...]
print(f"Generating {pytorch_weights_dump_path}")
torch.save(model_state_dict, pytorch_weights_dump_path)
```
We have the configuration and the model's `state_dict` ported - yay!
You will find the final conversion code [here](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py#L162).
### Porting the architecture code
Now that we have the model weights and the model configuration ported, we *just* need to adjust the code copied from `modeling_bart.py` to match `fairseq`'s functionality.
The first step was to take a sentence, encode it and then feed it to the `generate` function - for `fairseq` and for `transformers`.
After a few failed attempts to get somewhere (*), I quickly realized that with the current level of complexity, using `print` as a debugging method would get me nowhere, and neither would the basic `pdb` debugger. In order to be efficient and to be able to watch multiple variables, including watches that are code evaluations, I needed a serious visual debugger. I spent a day trying all kinds of python debuggers, and only when I tried `pycharm` did I realize that it was the tool I needed. It was my first time using `pycharm`, but I quickly figured out how to use it, as it was quite intuitive.
* footnote: the model was generating 'nononono' in Russian - that was fair and hilarious!
Over time I found a great feature in `pycharm` that allowed me to group breakpoints by functionality, so I could turn whole groups on and off depending on what I was debugging. For example, here I have beam-search-related breakpoints off and decoder ones on:
![break point group](./assets/07_porting_fsmt/pycharm-break-point-groups.png)
Now that I have used this debugger to port FSMT, I know that it would have taken me many times longer to do the same with `pdb` - I may have even given up.
I started with 2 scripts:
* [fseq-translate](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fseq-translate.py)
* [fsmt-translate](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-translate.py)
(without the `decode` part first)
running both side by side, stepping through with the debugger on each side and comparing the values of the relevant variables until I found the first divergence. I then studied the code, made adjustments inside `modeling_fsmt.py`, restarted the debugger, quickly jumped to the point of divergence and re-checked the outputs. This cycle was repeated multiple times until the outputs matched.
The first things I had to change were to remove a few layers that weren't used by `fairseq` and to add some new layers it was using instead. The rest was primarily figuring out when to use `src_vocab_size` and when to use `tgt_vocab_size` - since in the core modules there is just `vocab_size`, which didn't account for a model with 2 dictionaries. Finally, I discovered that a few hyperparameter configurations weren't the same, so those were changed too.
I first did this process for the simpler no-beam search, and once the outputs were matching 100% I repeated it with the more complicated beam search. Here, for example, I discovered that `fairseq` was using the equivalent of `early_stopping=True`, whereas `transformers` has it as `False` by default. When early stopping is enabled, the search stops looking for new candidates as soon as there are as many candidates as the beam size; when it's disabled, the algorithm stops only when it can't find candidates with a higher probability than the ones it already has. The `fairseq` paper mentions that a huge beam size of 50 was used, which compensates for using early stopping.
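Both knobs are regular `generate()` arguments, so once the defaults are aligned there is nothing special to do. Here is a sketch - the beam settings are arbitrary, just to show where they get passed:

```
from transformers import FSMTTokenizer, FSMTForConditionalGeneration

mname = "facebook/wmt19-ru-en"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

inputs = tokenizer(["Машинное обучение - это здорово!"], return_tensors="pt")
# beam search with explicit settings instead of relying on defaults
out = model.generate(**inputs, num_beams=5, early_stopping=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```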
## Tokenizer decoder porting
Once I had the ported `generate` function producing results pretty similar to `fairseq`'s `generate`, I next needed to complete the last stage of decoding the outputs into human-readable text. This allowed me to quickly compare the translations and their quality by eye - something I couldn't do with output ids.
Similar to the encoding process, this one was done in reverse.
The steps were:
1. convert output ids into text strings
2. remove BPE encodings
3. detokenize - handle escaped characters, etc.
After doing some more debugging here, I had to change the way BPE was dealt with from the original approach in `tokenization_xlm.py` and also run the outputs through the `moses` detokenizer.
```
def convert_tokens_to_string(self, tokens):
""" Converts a sequence of tokens (string) in a single string. """
- out_string = "".join(tokens).replace("</w>", " ").strip()
- return out_string
+ # remove BPE
+ tokens = [t.replace(" ", "").replace("</w>", " ") for t in tokens]
+ tokens = "".join(tokens).split()
+ # detokenize
+ text = self.moses_detokenize(tokens, self.tgt_lang)
+ return text
```
And all was good.
## Uploading models to s3
Once the conversion script did a complete job of porting all the required files to `transformers`, I uploaded the models to my 🤗 s3 account:
```
cd data
transformers-cli upload -y wmt19-ru-en
transformers-cli upload -y wmt19-en-ru
transformers-cli upload -y wmt19-de-en
transformers-cli upload -y wmt19-en-de
cd -
```
For the duration of testing, I was using my 🤗 s3 account, and once my PR with the complete changes was ready to be merged, I asked in the PR to move the models to the `facebook` organization account, since these models belong there.
Several times I had to update just the config files, and I didn't want to re-upload the large models, so I wrote this little script that produces the right upload commands, which otherwise were too long to type and as a result were error-prone:
```
perl -le 'for $f (@ARGV) { print qq[transformers-cli upload -y $_/$f --filename $_/$f] \
for map { "wmt19-$_" } ("en-ru", "ru-en", "de-en", "en-de")}' \
vocab-src.json vocab-tgt.json tokenizer_config.json config.json
# add/remove files as needed
```
So if, for example, I only needed to update all the `config.json` files, the script above gave me a convenient copy-n-paste:
```
transformers-cli upload -y wmt19-en-ru/config.json --filename wmt19-en-ru/config.json
transformers-cli upload -y wmt19-ru-en/config.json --filename wmt19-ru-en/config.json
transformers-cli upload -y wmt19-de-en/config.json --filename wmt19-de-en/config.json
transformers-cli upload -y wmt19-en-de/config.json --filename wmt19-en-de/config.json
```
Once the upload was completed, these models could be accessed as (*):
```
tokenizer = FSMTTokenizer.from_pretrained("stas/wmt19-en-ru")
```
* footnote:`stas` is my username at https://huggingface.co..
Before I made this upload I had to use the local path to the folder with the model files, e.g.:
```
tokenizer = FSMTTokenizer.from_pretrained("/code/huggingface/transformers-fair-wmt/data/wmt19-en-ru")
```
Important: If you update the model files and re-upload them, be aware that due to CDN caching the newly uploaded model may be unavailable for up to 24 hours after the upload - i.e. the old cached model will still be delivered. So the only ways to start using the new model sooner are by either (a short sketch follows the list):
1. downloading it to a local path and using that path as an argument that gets passed to `from_pretrained()`.
2. or using: `from_pretrained(..., use_cdn=False)` everywhere for the next 24h - it's not enough to do it once.
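In code, those two workarounds looked roughly like this (note that `use_cdn` is an argument of the `transformers` version of that time):

```
from transformers import FSMTForConditionalGeneration

# option 1: load from the local folder with the freshly converted files
model = FSMTForConditionalGeneration.from_pretrained(
    "/code/huggingface/transformers-fair-wmt/data/wmt19-en-ru"
)

# option 2: bypass the CDN until the cache expires
model = FSMTForConditionalGeneration.from_pretrained("stas/wmt19-en-ru", use_cdn=False)
```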
## AutoConfig, AutoTokenizer, etc.
One other change I needed to make was to plug the newly ported model into the automated `transformers` model system. This is used primarily on the [models website](https://huggingface.co./models) to load the model configuration, tokenizer and the main class without providing any specific class names. For example, in the case of `FSMT` one can do:
```
from transformers import AutoTokenizer, AutoModelWithLMHead
mname = "facebook/wmt19-en-ru"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = AutoModelWithLMHead.from_pretrained(mname)
```
There are 3 `*auto*` files that have mappings to enable that:
```
-rw-rw-r-- 1 stas stas 16K Sep 23 13:53 src/transformers/configuration_auto.py
-rw-rw-r-- 1 stas stas 65K Sep 23 13:53 src/transformers/modeling_auto.py
-rw-rw-r-- 1 stas stas 13K Sep 23 13:53 src/transformers/tokenization_auto.py
```
Then there are the pipelines, which completely hide all the NLP complexities from the end user and provide a very simple API to just pick a model and use it for the task at hand. For example, here is how one could perform a summarization task using `pipeline`:
```
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base")
summary = summarizer("Some long document here", min_length=5, max_length=20)
print(summary)
```
The translation pipelines are a work in progress as of this writing; watch [this document](https://huggingface.co./transformers/main_classes/pipelines.html) for updates on when translation will be supported (currently only a few specific models/languages are supported).
Finally, there is `src/transformers/__init__.py` to edit so that one could do:
```
from transformers import FSMTTokenizer, FSMTForConditionalGeneration
```
instead of:
```
from transformers.tokenization_fsmt import FSMTTokenizer
from transformers.modeling_fsmt import FSMTForConditionalGeneration
```
but either way works.
To find all the places I needed to plug FSMT in, I mimicked `BartConfig`, `BartForConditionalGeneration` and `BartTokenizer`. I just `grep`ped for which files referenced them and inserted corresponding entries for `FSMTConfig`, `FSMTForConditionalGeneration` and `FSMTTokenizer`.
```
$ egrep -l "(BartConfig|BartForConditionalGeneration|BartTokenizer)" src/transformers/*.py \
| egrep -v "(marian|bart|pegasus|rag|fsmt)"
src/transformers/configuration_auto.py
src/transformers/generation_utils.py
src/transformers/__init__.py
src/transformers/modeling_auto.py
src/transformers/pipelines.py
src/transformers/tokenization_auto.py
```
In the `grep` search I excluded the model-specific files that also include those classes.
## Manual testing
Until now I was primarily using my own scripts to do the testing.
Once I had the translator working, I converted the reversed `ru-en` model and then wrote two paraphrase scripts:
* [fseq-paraphrase](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fseq-paraphrase.py)
* [fsmt-paraphrase](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-paraphrase.py)
which took a sentence in the source language, translated it to another language and then translated the result of that back to the original language. This process usually results in a paraphrased outcome, due to differences in how different languages express similar things.
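Conceptually, the `transformers` side of such a round trip is just two translation calls chained together - a simplified sketch of what those scripts do:

```
from transformers import FSMTTokenizer, FSMTForConditionalGeneration

def translate(text, mname):
    tokenizer = FSMTTokenizer.from_pretrained(mname)
    model = FSMTForConditionalGeneration.from_pretrained(mname)
    batch = tokenizer([text], return_tensors="pt")
    out = model.generate(**batch, num_beams=5)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text = "Every morning when I wake up, I experience an exquisite joy."
russian = translate(text, "facebook/wmt19-en-ru")        # en -> ru
paraphrase = translate(russian, "facebook/wmt19-ru-en")  # ru -> en
print(paraphrase)
```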
With the help of these scripts I found some more problems with the detokenizer, stepped through with the debugger and made the fsmt script produce the same results as the `fairseq` version.
At this stage no-beam search was producing mostly identical results, but there was still some divergence in the beam search. In order to identify the special cases, I wrote a [fsmt-port-validate.py](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-port-validate.py) script that used `sacrebleu` test data as inputs, ran that data through both `fairseq` and `transformers` translation, and reported only the mismatches. It quickly identified a few remaining problems, and by observing the patterns I was able to fix those issues as well.
## Porting other models
I next proceeded to port the `en-de` and `de-en` models.
I was surprised to discover that these weren't built in the same way. Each of them had a merged dictionary, so for a moment I was frustrated, thinking I'd now have to make another huge change to support that. But in the end no changes were needed - the merged dictionary fit into the existing design: I just used 2 identical dictionaries, one as the source and a copy of it as the target.
I wrote another script to test all ported models' basic functionality: [fsmt-test-all.py](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-test-all.py).
## Test Coverage
This next step was very important - I needed to prepare extensive tests for the ported model.
In the `transformers` test suite, most tests that deal with large models are marked as `@slow`, and those don't normally run on CI (Continuous Integration), as they are, well, slow. So I also needed to create a tiny model that has the same structure as a normal pre-trained model, but is very small and can have random weights. This tiny model can then be used to test the ported functionality. It just can't be used for quality testing, since it has only a few weights and thus can't really be trained to do anything practical. [fsmt-make-tiny-model.py](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-make-tiny-model.py) creates such a tiny model. The generated model with all of its dictionary and config files was just 3MB in size. I uploaded it to `s3` using `transformers-cli upload` and was then able to use it in the test suite.
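The gist of that script is to instantiate the architecture from a configuration with tiny dimensions, so it gets random (untrained) weights - roughly like this sketch (the dimension values are arbitrary; the real script also creates tiny vocab files):

```
from transformers import FSMTConfig, FSMTForConditionalGeneration

config = FSMTConfig(
    langs=["ru", "en"],
    src_vocab_size=1000,
    tgt_vocab_size=1000,
    d_model=4,
    encoder_layers=1,
    decoder_layers=1,
    encoder_ffn_dim=4,
    decoder_ffn_dim=4,
    encoder_attention_heads=1,
    decoder_attention_heads=1,
)
tiny_model = FSMTForConditionalGeneration(config)
print(tiny_model.num_parameters())  # just a handful of random weights
tiny_model.save_pretrained("tiny-wmt19-ru-en")
```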
Just like with the code, I started by copying `tests/test_modeling_bart.py` and converting it to use `FSMT`, and then tweaking it to work with the new model.
I then converted a few of the scripts I used for manual testing into unit tests - that was easy.
`transformers` has a huge set of common tests that each model runs through - I had to do some more tweaks to make these tests work for `FSMT` (primarily to adjust for the 2-dictionary setup), and I had to override a few tests that couldn't run due to the uniqueness of this model in order to skip them. You can see the results [here](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/tests/test_tokenization_fsmt.py).
I added one more test that performs a light BLEU evaluation - I used just 8 text inputs for each of the 4 models and measured BLEU scores on those. Here is the [test](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/examples/seq2seq/test_fsmt_bleu_score.py) and the [script that generated data](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/examples/seq2seq/test_data/fsmt/build-eval-data.py).
## SinusoidalPositionalEmbedding
`fairseq` used a slightly different implementation of `SinusoidalPositionalEmbedding` than the one used by `transformers`. Initially I copied the `fairseq` implementation. But when trying to get the test suite to work, I couldn't get the `torchscript` tests to pass. `SinusoidalPositionalEmbedding` was written so that it wouldn't be part of the `state_dict` and wouldn't get saved with the model weights - all the weights generated by this class are deterministic and are not trained. `fairseq` used a trick to make this work transparently: it didn't make the weights a parameter or a buffer, and during `forward` it switched the weights to the correct device. `torchscript` wasn't taking this well, as it wanted all the weights to be on the correct device before the first `forward` call.
I had to rewrite the implementation to convert it to a normal `nn.Embedding` subclass and then add functionality to not save these weights during `save_pretrained()` and for `from_pretrained()` to not complain if it can't find those weights during the `state_dict` loading.
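The essence of that rewrite is to pre-compute the deterministic sinusoidal table and store it in a regular `nn.Embedding` weight that is flagged as non-trainable - a simplified sketch, not the actual code in `modeling_fsmt.py`, which also deals with padding and dynamic resizing:

```
import math
import torch
import torch.nn as nn

class SinusoidalPositionalEmbedding(nn.Embedding):
    """An nn.Embedding whose weights are deterministic sinusoids and are never trained."""

    def __init__(self, num_positions, embedding_dim):
        super().__init__(num_positions, embedding_dim)
        # replace the randomly initialized weight with the deterministic table
        self.weight = nn.Parameter(
            self._build_weights(num_positions, embedding_dim), requires_grad=False
        )

    @staticmethod
    def _build_weights(num_positions, embedding_dim):
        # assumes an even embedding_dim for simplicity
        half_dim = embedding_dim // 2
        freq = torch.exp(
            torch.arange(half_dim, dtype=torch.float) * -(math.log(10000) / (half_dim - 1))
        )
        angles = torch.arange(num_positions, dtype=torch.float).unsqueeze(1) * freq.unsqueeze(0)
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=1)
```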
## Evaluation
I knew that the ported model was doing quite well based on my manual testing with a large body of text, but I didn't know how well it performed compared to the original. So it was time to evaluate.
For the task of translation, the [BLEU score](https://en.wikipedia.org/wiki/BLEU) is used as an evaluation metric. `transformers` has a script [run_eval.py](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/examples/seq2seq/run_eval.py) to perform the evaluation.
Here is an evaluation for the `ru-en` pair
```
export PAIR=ru-en
export MODEL=facebook/wmt19-$PAIR
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=64
export NUM_BEAMS=5
export LENGTH_PENALTY=1.1
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py $MODEL \
$DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target \
--score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS \
--length_penalty $LENGTH_PENALTY --info $MODEL --dump-args
```
which took a few minutes to run and returned:
```
{'bleu': 39.0498, 'n_obs': 2000, 'runtime': 184, 'seconds_per_sample': 0.092,
'num_beams': 5, 'length_penalty': 1.1, 'info': 'ru-en'}
```
You can see that the BLEU score was `39.0498` and that it evaluated using 2000 test inputs, provided by `sacrebleu` using the `wmt19` dataset.
Remember, I couldn't use the model ensemble, so I next needed to find the best performing checkpoint. For that purpose I wrote a script [fsmt-bleu-eval-each-chkpt.py](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fsmt-bleu-eval-each-chkpt.sh) which converted each checkpoint, ran the eval script and reported the best one. As a result I knew that `model4.pt` delivered the best performance out of the 4 available checkpoints.
I wasn't getting the same BLEU scores as the ones reported in the original paper, so I next needed to make sure that we were comparing the same data using the same tools. Through asking in a `fairseq` issue, I was given the code that the `fairseq` developers used to get their BLEU scores - you will find it [here](https://github.com/stas00/porting/tree/master/transformers/fairseq-wmt19/scripts/fseq-reproduce-bleu.sh). But, alas, their method used a re-ranking approach which wasn't disclosed. Moreover, they evaluated on outputs before detokenization rather than on the real output, which apparently scores better. Bottom line: we weren't scoring in the same way (*).
* footnote: the paper [A Call for Clarity in Reporting BLEU Scores](https://arxiv.org/abs/1804.08771) invites developers to start using the same method for calculating the metrics (tldr: use `sacrebleu`).
Currently, this ported model is slightly behind the original on the BLEU scores, because model ensemble is not used, but it's impossible to tell the exact difference until the same measuring method is used.
## Porting new models
After uploading the 4 `fairseq` models [here](https://huggingface.co./models?filter=facebook&tag=fsmt), it was suggested to port 3 `wmt16` and 2 `wmt19` AllenAI models ([Jungo Kasai, et al.](https://github.com/jungokasai/deep-shallow/)). The porting was a breeze: I only had to figure out how to put all the source files together, since they were spread across several unrelated archives. Once this was done, the conversion worked without a hitch.
The only issue I discovered after porting was that I was getting a lower BLEU score than the original. Jungo Kasai, the creator of these models, very helpfully pointed out that a custom hyper-parameter `length_penalty=0.6` was used, and once I plugged that in I got much better results.
This discovery led me to write a new script: [run_eval_search.py](https://github.com/huggingface/transformers/blob/129fdae04033fe4adfe013b734deaec6ec34ae2e/examples/seq2seq/run_eval_search.py), which can be used to search over various hyper-parameters to find the ones that lead to the best BLEU scores. Here is an example of its usage:
```
# search space
export PAIR=ru-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=32
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval_search.py stas/wmt19-$PAIR \
$DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target \
--score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation \
--search="num_beams=5:8:11:15 length_penalty=0.6:0.7:0.8:0.9:1.0:1.1 early_stopping=true:false"
```
Here it searches through all the possible combinations of `num_beams`, `length_penalty` and `early_stopping`.
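Conceptually, the search is just a grid over the cartesian product of those values, with one full evaluation run per combination - something like this (a sketch, not the script's actual code):

```
from itertools import product

search = {
    "num_beams": [5, 8, 11, 15],
    "length_penalty": [0.6, 0.7, 0.8, 0.9, 1.0, 1.1],
    "early_stopping": [True, False],
}

# every combination gets one full evaluation run; the best BLEU wins
for num_beams, length_penalty, early_stopping in product(*search.values()):
    print(dict(num_beams=num_beams, length_penalty=length_penalty, early_stopping=early_stopping))
```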
Once finished executing it reports:
```
bleu | num_beams | length_penalty | early_stopping | [
[
"transformers",
"research",
"implementation"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"transformers",
"translation",
"implementation",
"research"
] | null | null |
a31d084d-090e-4d29-a190-2c087869171a | completed | 2025-01-16T03:08:37.719000 | 2025-01-19T18:47:44.828000 | 0e7993a0-8558-44d2-af5f-b858e6aff2cd | Introducing the Open Ko-LLM Leaderboard: Leading the Korean LLM Evaluation Ecosystem | Chanjun, hunkim, clefourrier | leaderboard-upstage.md | In the fast-evolving landscape of Large Language Models (LLMs), building an “ecosystem” has never been more important. This trend is evident in several major developments like Hugging Face's democratizing NLP and Upstage building a Generative AI ecosystem.
Inspired by these industry milestones, in September of 2023, at [Upstage](https://upstage.ai/) we initiated the [Open Ko-LLM Leaderboard](https://huggingface.co./spaces/upstage/open-ko-llm-leaderboard). Our goal was to quickly develop and introduce an evaluation ecosystem for Korean LLM data, aligning with the global movement towards open and collaborative AI development.
Our vision for the Open Ko-LLM Leaderboard is to cultivate a vibrant Korean LLM evaluation ecosystem, fostering transparency by enabling researchers to share their results and uncover hidden talents in the LLM field. In essence, we're striving to expand the playing field for Korean LLMs.
To that end, we've developed an open platform where individuals can register their Korean LLM and engage in competitions with other models.
Additionally, we aimed to create a leaderboard that captures the unique characteristics and culture of the Korean language. To achieve this goal, we made sure that our translated benchmark datasets such as Ko-MMLU reflect the distinctive attributes of Korean.
<script type="module" src="https://gradio.s3-us-west-2.amazonaws.com/3.45.1/gradio.js"> </script>
<gradio-app theme_mode="light" space="upstage/open-ko-llm-leaderboard"></gradio-app>
## Leaderboard design choices: creating a new private test set for fairness
The Open Ko-LLM Leaderboard is characterized by its unique approach to benchmarking, particularly:
- its adoption of Korean language datasets, as opposed to the prevalent use of English-based benchmarks.
- the non-disclosure of test sets, contrasting with the open test sets of most leaderboards: we decided to construct entirely new datasets dedicated to Open Ko-LLM and maintain them as private, to prevent test set contamination and ensure a more equitable comparison framework.
While acknowledging the potential for broader impact and utility to the research community through open benchmarks, the decision to maintain a closed test set environment was made with the intention of fostering a more controlled and fair comparative analysis.
## Evaluation Tasks
The Open Ko-LLM Leaderboard adopts the following five types of evaluation methods:
- **Ko-ARC** (AI2 Reasoning Challenge): Ko-ARC is a multiple-choice test designed to assess scientific thinking and understanding. It measures the reasoning ability required to solve scientific problems, evaluating complex reasoning, problem-solving skills, and the understanding of scientific knowledge. The evaluation metric focuses on accuracy rates, reflecting how often the model selects the correct answer from a set of options, thereby gauging its ability to navigate and apply scientific principles effectively.
- **Ko-HellaSwag**: Ko-HellaSwag evaluates situational comprehension and prediction ability, either in a generative format or as a multiple-choice setup. It tests the capacity to predict the most likely next scenario given a situation, serving as an indicator of the model's understanding of and reasoning about situations. The metric is accuracy, assessing the quality of the predictions when approached as a multiple-choice task.
- **Ko-MMLU** (Massive Multitask Language Understanding): Ko-MMLU assesses language comprehension across a wide range of topics and fields in a multiple-choice format. This broad test demonstrates how well a model functions across various domains, showcasing its versatility and depth in language understanding. Overall accuracy across tasks and domain-specific performance are key metrics, highlighting strengths and weaknesses in different areas of knowledge.
- **Ko-Truthful QA**: Ko-Truthful QA is actually a multiple-choice benchmark designed to evaluate the model's truthfulness and factual accuracy. Unlike a generative format where the model freely generates responses, in this multiple-choice setting, the model is tasked with selecting the most accurate and truthful answer from a set of options. This approach emphasizes the model's ability to discern truthfulness and accuracy within a constrained choice framework. The primary metric for Ko-Truthful QA focuses on the accuracy of the model's selections, assessing its consistency with known facts and its ability to identify the most truthful response among the provided choices.
- **Ko-CommonGEN V2**: A newly made benchmark for the Open Ko-LLM Leaderboard assesses whether LLMs can generate outputs that align with Korean common sense given certain conditions, testing the model’s capacity to produce contextually and culturally relevant outputs in the Korean language.
## A leaderboard in action: the barometer of Ko-LLM
The Open Ko-LLM Leaderboard has exceeded expectations, with over 1,000 models submitted. In comparison, the Original English Open LLM Leaderboard now hosts over 4,000 models. The Ko-LLM leaderboard has achieved a quarter of that number in just five months after its launch. We're grateful for this widespread participation, which shows the vibrant interest in Korean LLM development.
Of particular note is the diverse competition, encompassing individual researchers, corporations, and academic institutions such as KT, Lotte Information & Communication, Yanolja, MegaStudy Maum AI, 42Maru, the Electronics and Telecommunications Research Institute (ETRI), KAIST, and Korea University.
One standout submission is KT's [Mi:dm 7B model](https://huggingface.co./KT-AI/midm-bitext-S-7B-inst-v1), which not only topped the rankings among models with 7B parameters or fewer but also became accessible for public use, marking a significant milestone.
We also observed that, more generally, two types of models demonstrate strong performance on the leaderboard:
- models which underwent cross-lingual transfer or fine-tuning in Korean (like Upstage’s [SOLAR](https://huggingface.co./upstage/SOLAR-10.7B-v1.0))
- models fine-tuned from LLaMa2, Yi, and Mistral, emphasizing the importance of leveraging solid foundational models for finetuning.
Managing such a big leaderboard did not come without its own challenges. The Open Ko-LLM Leaderboard aims to closely align with the Open LLM Leaderboard’s philosophy, especially in integrating with the Hugging Face model ecosystem. This strategy ensures that the leaderboard is accessible, making it easier for participants to take part, a crucial factor in its operation. Nonetheless, there are limitations due to the infrastructure, which relies on 16 A100 80GB GPUs. This setup faces challenges, particularly when running models larger than 30 billion parameters as they require an excessive amount of compute. This leads to prolonged pending states for many submissions. Addressing these infrastructure challenges is essential for future enhancements of the Open Ko-LLM Leaderboard.
## Our vision and next steps
We recognize several limitations in current leaderboard models when considered in real-world contexts:
- Outdated Data: Datasets like SQuAD and KLUE become outdated over time. Data evolves and transforms continuously, but existing leaderboards remain fixed in a specific timeframe, making them less reflective of the current moment as hundreds of new data points are generated daily.
- Failure to Reflect the Real World: In B2B and B2C services, data is constantly accumulated from users or industries, and edge cases or outliers continuously arise. True competitive advantage lies in responding well to these challenges, yet current leaderboard systems lack the means to measure this capability. Real-world data is perpetually generated, changing, and evolving.
- Questionable Meaningfulness of Competition: Many models are specifically tuned to perform well on the test sets, potentially leading to another form of overfitting within the test set. Thus, the current leaderboard system operates in a leaderboard-centric manner rather than being real-world-centric.
We therefore plan to further develop the leaderboard so that it addresses these issues, and becomes a trusted resource widely recognized by many. By incorporating a variety of benchmarks that have a strong correlation with real-world use cases, we aim to make the leaderboard not only more relevant but also genuinely helpful to businesses. We aspire to bridge the gap between academic research and practical application, and will continuously update and enhance the leaderboard, through feedback from both the research community and industry practitioners to ensure that the benchmarks remain rigorous, comprehensive, and up-to-date. Through these efforts, we hope to contribute to the advancement of the field by providing a platform that accurately measures and drives the progress of large language models in solving practical and impactful problems.
If you develop datasets and would like to collaborate with us on this, we’ll be delighted to talk with you, and you can contact us at [email protected] or [email protected]!
As a side note, we believe that evaluations in a real online environment, as opposed to benchmark-based evaluations, are highly meaningful. Even within benchmark-based evaluations, there is a need for benchmarks to be updated monthly or for the benchmarks to more specifically assess domain-specific aspects - we'd love to encourage such initiatives.
## Many thanks to our partners
The journey of Open Ko-LLM Leaderboard began with a collaboration agreement to develop a Korean-style leaderboard, in partnership with Upstage and the [National Information Society Agency](https://www.nia.or.kr/site/nia_kor/main.do) (NIA), a key national institution in Korea. This partnership marked the starting signal, and within just a month, we were able to launch the leaderboard.
To validate common-sense reasoning, we collaborated with Professor [Heuiseok Lim](https://scholar.google.com/citations?user=HMTkz7oAAAAJ&hl=en)'s [research team](https://blpkorea.cafe24.com/wp/level-1/level-2a/) at Korea University to incorporate KoCommonGen V2 as an additional task for the leaderboard.
Building a robust infrastructure was crucial for success. To that end, we are grateful to [Korea Telecom](https://cloud.kt.com/) (KT) for their generous support of GPU resources and to Hugging Face for their continued support. It's encouraging that Open Ko-LLM Leaderboard has established a direct line of communication with Hugging Face, a global leader in natural language processing, and we're in continuous discussion to push new initiatives forward.
Moreover, the Open Ko-LLM Leaderboard boasts a prestigious consortium of credible partners: the National Information Society Agency (NIA), Upstage, KT, and Korea University. The participation of these institutions, especially the inclusion of a national agency, lends significant authority and trustworthiness to the endeavor, underscoring its potential as a cornerstone in the academic and practical exploration of language models. | [
[
"llm",
"research",
"benchmarks",
"community"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"benchmarks",
"community",
"research"
] | null | null |
512bb096-2538-4be8-8ebd-8866cd1bc14c | completed | 2025-01-16T03:08:37.719000 | 2025-01-19T19:13:54.373000 | db443612-33f7-4ad6-8684-01c4413a97a0 | Deploying 🤗 ViT on Kubernetes with TF Serving | chansung, sayakpaul | deploy-tfserving-kubernetes.md | In the [<u>previous post</u>](https://huggingface.co./blog/tf-serving-vision), we showed how
to deploy a [<u>Vision Transformer (ViT)</u>](https://huggingface.co./docs/transformers/main/en/model_doc/vit)
model from 🤗 Transformers locally with TensorFlow Serving. We covered
topics like embedding preprocessing and postprocessing operations within
the Vision Transformer model, handling gRPC requests, and more!
While local deployments are an excellent head start to building
something useful, you’d need to perform deployments that can serve many
users in real-life projects. In this post, you’ll learn how to scale the
local deployment from the previous post with Docker and Kubernetes.
Therefore, we assume some familiarity with Docker and Kubernetes.
This post builds on top of the [<u>previous post</u>](https://huggingface.co./blog/tf-serving-vision), so, we highly
recommend reading it first. You can find all the code
discussed throughout this post in [<u>this repository</u>](https://github.com/sayakpaul/deploy-hf-tf-vision-models/tree/main/hf_vision_model_onnx_gke).
## Why go with Docker and Kubernetes?
The basic workflow of scaling up a deployment like ours includes the
following steps:
- **Containerizing the application logic**: The application logic
involves a served model that can handle requests and return
predictions. For containerization, Docker is the industry-standard
go-to.
- **Deploying the Docker container**: You have various options here. The most
widely used option is deploying the Docker container on a Kubernetes
cluster. Kubernetes provides numerous deployment-friendly features
(e.g. autoscaling and security). You can use a solution like
[<u>Minikube</u>](https://minikube.sigs.k8s.io/docs/start/) to
manage Kubernetes clusters locally or a serverless solution like
[<u>Elastic Kubernetes Service (EKS)</u>](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html).
You might be wondering why use an explicit setup like this in the age
of [<u>Sagemaker</u>](https://aws.amazon.com/sagemaker/) and [<u>Vertex AI</u>](https://cloud.google.com/vertex-ai),
which provide ML deployment-specific features right off the bat. It is fair to think
about it.
The above workflow is widely adopted in the industry, and many
organizations benefit from it. It has already been battle-tested for
many years. It also lets you have more granular control of your
deployments while abstracting away the non-trivial bits.
This post uses [<u>Google Kubernetes Engine (GKE)</u>](https://cloud.google.com/kubernetes-engine)
to provision and manage a Kubernetes cluster. We assume you already have a
billing-enabled GCP project if you’re using GKE. Also, note that you’d need to
configure the [`gcloud`](https://cloud.google.com/sdk/gcloud) utility for
performing the deployment on GKE. But the concepts discussed in this post
equally apply should you decide to use Minikube.
**Note**: The code snippets shown in this post can be executed on a Unix terminal
as long as you have configured the `gcloud` utility along with Docker and `kubectl`.
More instructions are available in the [accompanying repository](https://github.com/sayakpaul/deploy-hf-tf-vision-models/tree/main/hf_vision_model_onnx_gke).
## Containerization with Docker
The serving model can handle raw image inputs as bytes and is capable of preprocessing and
postprocessing.
In this section, you’ll see how to containerize that model using the
[<u>base TensorFlow Serving Image</u>](http://hub.docker.com/r/tensorflow/serving/tags/). TensorFlow Serving consumes models
in the [`SavedModel`](https://www.tensorflow.org/guide/saved_model) format. Recall how you
obtained such a `SavedModel` in the [<u>previous post</u>](https://huggingface.co./blog/tf-serving-vision). We assume that
you have the `SavedModel` compressed in `tar.gz` format. You can fetch
it from [<u>here</u>](https://huggingface.co./deploy-hf-tf-vit/vit-base16-extended/resolve/main/saved_model.tar.gz)
just in case. The `SavedModel` should then be placed in the special directory
structure of `<MODEL_NAME>/<VERSION>/<SavedModel>`. This is how TensorFlow Serving simultaneously manages multiple deployments of different model versions.
### Preparing the Docker image
The shell script below places the `SavedModel` in `hf-vit/1` under the
parent directory `models`. You'll copy everything inside it when preparing
the Docker image. There is only one model in this example, but this
is a more generalizable approach.
```bash
$ MODEL_TAR=model.tar.gz
$ MODEL_NAME=hf-vit
$ MODEL_VERSION=1
$ MODEL_PATH=models/$MODEL_NAME/$MODEL_VERSION
$ mkdir -p $MODEL_PATH
$ tar -xvf $MODEL_TAR --directory $MODEL_PATH
```
Below, we show how the `models` directory is structured in our case:
```bash
$ find /models
/models
/models/hf-vit
/models/hf-vit/1
/models/hf-vit/1/keras_metadata.pb
/models/hf-vit/1/variables
/models/hf-vit/1/variables/variables.index
/models/hf-vit/1/variables/variables.data-00000-of-00001
/models/hf-vit/1/assets
/models/hf-vit/1/saved_model.pb
```
The custom TensorFlow Serving image should be built on top of the [base one](http://hub.docker.com/r/tensorflow/serving/tags/).
There are various approaches for this, but you’ll do this by running a Docker container as illustrated in the
[<u>official document</u>](https://www.tensorflow.org/tfx/serving/serving_kubernetes#commit_image_for_deployment). We start by running the `tensorflow/serving` image in background mode, and then the entire `models` directory is copied to the running container
as shown below.
```bash
$ docker run -d --name serving_base tensorflow/serving
$ docker cp models/ serving_base:/models/
```
We used the official Docker image of TensorFlow Serving as the base, but
you can use ones that you have [<u>built from source</u>](https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/setup.md#building-from-source)
as well.
**Note**: TensorFlow Serving benefits from hardware optimizations that leverage instruction sets such as
[<u>AVX512</u>](https://en.wikipedia.org/wiki/AVX-512). These
instruction sets can [<u>speed up deep learning model inference</u>](https://huggingface.co./blog/bert-cpu-scaling-part-1). So,
if you know the hardware on which the model will be deployed, it’s often
beneficial to obtain an optimized build of the TensorFlow Serving image
and use it throughout.
Now that the running container has all the required files in the
appropriate directory structure, we need to create a new Docker image
that includes these changes. This can be done with the [`docker commit`](https://docs.docker.com/engine/reference/commandline/commit/) command below, and you'll have a new Docker image named `$NEW_IMAGE`.
One important thing to note is that you need to set the `MODEL_NAME`
environment variable to the model name, which is `hf-vit` in this
case. This tells TensorFlow Serving what model to deploy.
```bash
$ NEW_IMAGE=tfserving:$MODEL_NAME
$ docker commit \
--change "ENV MODEL_NAME $MODEL_NAME" \
serving_base $NEW_IMAGE
```
### Running the Docker image locally
Lastly, you can run the newly built Docker image locally to see if it
works fine. Below you see the output of the `docker run` command. Since
the output is verbose, we trimmed it down to focus on the important
bits. Also, it is worth noting that it opens up `8500` and `8501`
ports for gRPC and HTTP/REST endpoints, respectively.
```shell
$ docker run -p 8500:8500 -p 8501:8501 -t $NEW_IMAGE & | [
[
"computer_vision",
"transformers",
"mlops",
"tutorial",
"deployment"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"computer_vision",
"transformers",
"mlops",
"deployment"
] | null | null |
c5f128b3-f370-4984-89cd-132b753a94b3 | completed | 2025-01-16T03:08:37.719000 | 2025-01-16T03:17:15.373000 | 4caf7254-0df2-4acd-8ff2-b335e3c7d9bd | AMD + 🤗: Large Language Models Out-of-the-Box Acceleration with AMD GPU | fxmarty, IlyasMoutawwakil, mohitsha, echarlaix, seungrokj, mfuntowicz | huggingface-and-optimum-amd.md | Earlier this year, [AMD and Hugging Face announced a partnership](https://huggingface.co./blog/huggingface-and-amd) to accelerate AI models during the AMD's AI Day event. We have been hard at work to bring this vision to reality, and make it easy for the Hugging Face community to run the latest AI models on AMD hardware with the best possible performance.
AMD is powering some of the most powerful supercomputers in the World, including the fastest European one, [LUMI](https://www.lumi-supercomputer.eu/lumi-retains-its-position-as-europes-fastest-supercomputer/), which operates over 10,000 MI250X AMD GPUs. At this event, AMD revealed their latest generation of server GPUs, the AMD [Instinct™ MI300](https://www.amd.com/fr/graphics/instinct-server-accelerators) series accelerators, which will soon become generally available.
In this blog post, we provide an update on our progress towards providing great out-of-the-box support for AMD GPUs and improving interoperability for the latest server-grade AMD Instinct GPUs.
## Out-of-the-box Acceleration
Can you spot AMD-specific code changes below? Don't hurt your eyes, there's none compared to running on NVIDIA GPUs 🤗.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "01-ai/Yi-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
with torch.device("cuda"):
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
inp = tokenizer(["Today I am in Paris and"], padding=True, return_tensors="pt").to("cuda")
res = model.generate(**inp, max_new_tokens=30)
print(tokenizer.batch_decode(res))
```
One of the major aspects we have been working on is the ability to run Hugging Face Transformers models without any code change. We now support all Transformers models and tasks on AMD Instinct GPUs. And our collaboration is not stopping here, as we explore out-of-the-box support for diffusers models and other libraries, as well as for other AMD GPUs.
Achieving this milestone has been a significant effort and collaboration between our teams and companies. To maintain support and performances for the Hugging Face community, we have built integrated testing of Hugging Face open source libraries on AMD Instinct GPUs in our datacenters - and were able to minimize the carbon impact of these new workloads working with Verne Global to deploy the AMD Instinct servers in [Iceland](https://verneglobal.com/about-us/locations/iceland/).
On top of native support, another major aspect of our collaboration is to provide integration for the latest innovations and features available on AMD GPUs. Through the collaboration of Hugging Face team, AMD engineers and open source community members, we are happy to announce [support for](https://huggingface.co./docs/optimum/amd/index):
* Flash Attention v2 from AMD Open Source efforts in [ROCmSoftwarePlatform/flash-attention](https://github.com/ROCmSoftwarePlatform/flash-attention) integrated natively in [Transformers](https://huggingface.co./docs/transformers/perf_infer_gpu_one#flashattention-2) and [Text Generation Inference](https://huggingface.co./docs/text-generation-inference/quicktour).
* Paged Attention from [vLLM](https://github.com/vllm-project/vllm/pull/1313), and various fused kernels available in [Text Generation Inference](https://huggingface.co./docs/text-generation-inference/quicktour) for ROCm.
* [DeepSpeed](https://github.com/microsoft/DeepSpeed) for ROCm-powered GPUs using Transformers is also now officially validated and supported.
* GPTQ, a common weight compression technique used to reduce the model memory requirements, is supported on ROCm GPUs through a direct integration with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) and [Transformers](https://huggingface.co./blog/gptq-integration).
* [Optimum-Benchmark](https://github.com/huggingface/optimum-benchmark), a utility to easily benchmark the performance of Transformers on AMD GPUs, in normal and distributed settings, with supported optimizations and quantization schemes.
* Support of ONNX models execution on ROCm-powered GPUs using ONNX Runtime through the [ROCMExecutionProvider](https://onnxruntime.ai/docs/execution-providers/ROCm-ExecutionProvider.html) using [Optimum library](https://huggingface.co./docs/optimum/onnxruntime/usage_guides/amdgpu).
We are very excited to make these state-of-the-art acceleration tools available and easy to use for Hugging Face users, and to offer maintained support and performance with direct integration into our new continuous integration and development pipeline for AMD Instinct GPUs.
One AMD Instinct MI250 GPU with 128 GB of High Bandwidth Memory has two distinct ROCm devices (GPU 0 and 1), each of them having 64 GB of High Bandwidth Memory.
<br>
<figure class="image table text-center m-0 w-full">
<img alt="" src="assets/optimum_amd/rocmsmi.png" />
<figcaption>MI250 two devices as displayed by `rocm-smi`</figcaption>
</figure>
<br>
This means that with just one MI250 GPU card, we have two PyTorch devices that can be used very easily with tensor and data parallelism to achieve higher throughputs and lower latencies.
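For example, with 🤗 Accelerate installed, passing `device_map="auto"` will shard a model across the two devices exposed by a single MI250, again without any AMD-specific code. This is only a sketch reusing the model from the earlier snippet:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "01-ai/Yi-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "auto" spreads the layers over all visible devices - here the MI250's two ROCm devices
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inp = tokenizer(["Today I am in Paris and"], return_tensors="pt").to("cuda")
print(tokenizer.batch_decode(model.generate(**inp, max_new_tokens=30)))
```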
In the rest of the blog post, we report performance results for the two steps involved in text generation with large language models (a minimal sketch of how these quantities can be measured follows the list):
* **Prefill latency**: The time it takes for the model to compute the representation for the user's provided input or prompt (also referred to as "Time To First Token").
* **Decoding per token latency**: The time it takes to generate each new token in an autoregressive manner after the prefill step.
* **Decoding throughput**: The number of tokens generated per second during the decoding phase.
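As a rough illustration of these definitions (not how we actually benchmark - `optimum-benchmark` handles warmup, repeated runs and synchronization much more carefully), one could time them along these lines:

```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "01-ai/Yi-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
with torch.device("cuda"):
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inp = tokenizer(["Today I am in Paris and"], return_tensors="pt").to("cuda")

def timed_generate(**kwargs):
    torch.cuda.synchronize()
    start = time.perf_counter()
    model.generate(**inp, **kwargs)
    torch.cuda.synchronize()
    return time.perf_counter() - start

# prefill latency ~ time to produce the very first token
prefill = timed_generate(max_new_tokens=1)

# decode latency ~ the extra time spent per additional token
n = 100
total = timed_generate(min_new_tokens=n, max_new_tokens=n)
per_token = (total - prefill) / (n - 1)

print(f"prefill: {prefill*1e3:.0f} ms | decode: {per_token*1e3:.1f} ms/token | throughput: {1/per_token:.1f} tokens/s")
```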
Using [`optimum-benchmark`](https://github.com/huggingface/optimum-benchmark) and running [inference benchmarks](https://github.com/huggingface/optimum-benchmark/tree/main/examples/running-llamas) on an MI250 and an A100 GPU with and without optimizations, we get the following results:
<br>
<figure class="image table text-center m-0 w-full">
<img alt="" src="assets/optimum_amd/transformers_bench.png" />
<figcaption>Inference benchmarks using Transformers and PEFT libraries. FA2 stands for "Flash Attention 2", TP for "Tensor Parallelism", DDP for "Distributed Data Parallel".</figcaption>
</figure>
<br>
In the plots above, we can see how performant the MI250 is, especially for production settings where requests are processed in big batches, delivering more than 2.33x more tokens (decode throughput) and taking half the time to the first token (prefill latency), compared to an A100 card.
Running [training benchmarks](https://github.com/huggingface/optimum-benchmark/tree/main/examples/training-llamas) as seen below, one MI250 card fits larger batches of training samples and reaches higher training throughput.
<br>
<figure class="image table text-center m-0 w-9/12">
<img alt="" src="assets/optimum_amd/training_bench.png" />
<figcaption>Training benchmark using Transformers library at maximum batch size (power of two) that can fit on a given card</figcaption>
</figure>
<br>
## Production Solutions
Another important focus for our collaboration is to build support for Hugging Face production solutions, starting with Text Generation Inference (TGI). TGI provides an end-to-end solution to deploy large language models for inference at scale.
Initially, TGI was mostly geared towards Nvidia GPUs, leveraging most of the recent optimizations made for post-Ampere architectures, such as Flash Attention v1 and v2, GPTQ weight quantization and Paged Attention.
Today, we are happy to announce initial support for AMD Instinct MI210 and MI250 GPUs in TGI, leveraging all the great open-source work detailed above, integrated in a complete end-to-end solution, ready to be deployed.
Performance-wise, we spent a lot of time benchmarking Text Generation Inference on AMD Instinct GPUs to validate and discover where we should focus our optimizations. As such, and with the support of AMD GPU engineers, we have been able to achieve performance matching what TGI was already offering.
In this context, and with the long-term relationship we are building between AMD and Hugging Face, we have been integrating and testing the AMD GeMM Tuner tool, which allows us to tune the GeMM (matrix multiplication) kernels used in TGI to find the best setup for increased performance. The GeMM Tuner tool is expected to be released [as part of PyTorch](https://github.com/pytorch/pytorch/pull/114894) in an upcoming release so everyone can benefit from it.
With all of the above being said, we are thrilled to show the very first performance numbers demonstrating the latest AMD technologies, putting Text Generation Inference on AMD GPUs at the forefront of efficient inferencing solutions for the Llama model family.
<br>
<figure class="image table text-center m-0 w-full">
<img alt="" src="assets/optimum_amd/tgi_34b.png" />
<figcaption>TGI latency results for Llama 34B, comparing one AMD Instinct MI250 against A100-SXM4-80GB. As explained above one MI250 corresponds to two PyTorch devices.</figcaption>
</figure>
<br>
<br>
<figure class="image table text-center m-0 w-full">
<img alt="" src="assets/optimum_amd/tgi_70b.png" />
<figcaption>TGI latency results for Llama 70B, comparing two AMD Instinct MI250 against two A100-SXM4-80GB (using tensor parallelism)</figcaption>
</figure>
<br>
Missing bars for A100 correspond to out-of-memory errors, as Llama 70B weighs 138 GB in float16, and enough free memory is necessary for intermediate activations, the KV cache buffer (>5GB for 2048 sequence length, batch size 8), the CUDA context, etc. The Instinct MI250 GPU has 128 GB of global memory while an A100 has 80 GB, which explains the ability to run larger workloads (longer sequences, larger batches) on the MI250.
Text Generation Inference is [ready to be deployed](https://huggingface.co./docs/text-generation-inference/quicktour) in production on AMD Instinct GPUs through the docker image `ghcr.io/huggingface/text-generation-inference:1.2-rocm`. Make sure to refer to the [documentation](https://huggingface.co./docs/text-generation-inference/supported_models#supported-hardware) concerning the support and its limitations.
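Once such a container is running, querying it is hardware-agnostic as well - for instance with a plain HTTP request to TGI's `/generate` route (here we assume the container's port was mapped to 8080 locally):

```python
import requests

response = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 64}},
)
print(response.json()["generated_text"])
```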
## What's next?
We hope this blog post got you as excited as we are at Hugging Face about this partnership with AMD. Of course, this is just the very beginning of our journey, and we look forward to enabling more use cases on more AMD hardware.
In the coming months, we will be working on bringing more support and validation for AMD Radeon GPUs, the same GPUs you can put in your own desktop for local usage, lowering the accessibility barrier and paving the way for even more versatility for our users.
Of course, we'll soon be working on performance optimization for the MI300 lineup, ensuring that both the open source libraries and our solutions provide the latest innovations at the highest level of stability we always aim for at Hugging Face.
Another area of focus for us will be AMD Ryzen AI technology, which powers the latest generation of AMD laptop CPUs and allows running AI at the edge, on the device. At a time when coding assistants, image generation tools and personal assistants are becoming more and more broadly available, it is important to offer solutions that can meet privacy needs while leveraging these powerful tools. In this context, Ryzen AI compatible models are already being made available on the [Hugging Face Hub](https://huggingface.co./models?other=RyzenAI) and we're working closely with AMD to bring more of them in the coming months. | [
[
"llm",
"implementation",
"optimization",
"integration"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"llm",
"optimization",
"implementation",
"integration"
] | null | null |
5fbe5aae-7a41-4b61-9506-ae7e8bdb9836 | completed | 2025-01-16T03:08:37.719000 | 2025-01-16T03:13:57.062000 | 3a503229-03f0-4c5f-abd9-9f62f7613473 | Fine-Tune a Semantic Segmentation Model with a Custom Dataset | tobiasc, nielsr | fine-tune-segformer.md | <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script>
<a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
**This guide shows how you can fine-tune Segformer, a state-of-the-art semantic segmentation model. Our goal is to build a model for a pizza delivery robot, so it can see where to drive and recognize obstacles 🍕🤖. We'll first label a set of sidewalk images on [Segments.ai](https://segments.ai?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg). Then we'll fine-tune a pre-trained SegFormer model by using [`🤗 transformers`](https://huggingface.co./transformers), an open-source library that offers easy-to-use implementations of state-of-the-art models. Along the way, you'll learn how to work with the Hugging Face Hub, the largest open-source catalog of models and datasets.**
Semantic segmentation is the task of classifying each pixel in an image. You can see it as a more precise way of classifying an image. It has a wide range of use cases in fields such as medical imaging and autonomous driving. For example, for our pizza delivery robot, it is important to know exactly where the sidewalk is in an image, not just whether there is a sidewalk or not.
Because semantic segmentation is a type of classification, the network architectures used for image classification and semantic segmentation are very similar. In 2014, [a seminal paper](https://arxiv.org/abs/1411.4038) by Long et al. used convolutional neural networks for semantic segmentation. More recently, Transformers have been used for image classification (e.g. [ViT](https://huggingface.co./blog/fine-tune-vit)), and now they're also being used for semantic segmentation, pushing the state-of-the-art further.
[SegFormer](https://huggingface.co./docs/transformers/model_doc/segformer) is a model for semantic segmentation introduced by Xie et al. in 2021. It has a hierarchical Transformer encoder that doesn't use positional encodings (in contrast to ViT) and a simple multi-layer perceptron decoder. SegFormer achieves state-of-the-art performance on multiple common datasets. Let's see how our pizza delivery robot performs for sidewalk images.
<figure class="image table text-center m-0 w-full">
<medium-zoom background="rgba(0,0,0,.7)" alt="Pizza delivery robot segmenting a scene" src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/pizza-scene.png"></medium-zoom>
</figure>
Let's get started by installing the necessary dependencies. Because we're going to push our dataset and model to the Hugging Face Hub, we need to install [Git LFS](https://git-lfs.github.com/) and log in to Hugging Face.
The installation of `git-lfs` might be different on your system. Note that Google Colab has Git LFS pre-installed.
```bash
pip install -q transformers datasets evaluate segments-ai
apt-get install git-lfs
git lfs install
huggingface-cli login
```
## 1. Create/choose a dataset
The first step in any ML project is assembling a good dataset. In order to train a semantic segmentation model, we need a dataset with semantic segmentation labels. We can either use an existing dataset from the Hugging Face Hub, such as [ADE20k](https://huggingface.co./datasets/scene_parse_150), or create our own dataset.
For our pizza delivery robot, we could use an existing autonomous driving dataset such as [CityScapes](https://www.cityscapes-dataset.com/) or [BDD100K](https://bdd100k.com/). However, these datasets were captured by cars driving on the road. Since our delivery robot will be driving on the sidewalk, there will be a mismatch between the images in these datasets and the data our robot will see in the real world.
We don't want our delivery robot to get confused, so we'll create our own semantic segmentation dataset using images captured on sidewalks. We'll show how you can label the images we captured in the next steps. If you just want to use our finished, labeled dataset, you can skip the ["Create your own dataset"](#create-your-own-dataset) section and continue from ["Use a dataset from the Hub"](#use-a-dataset-from-the-hub).
### Create your own dataset
To create your semantic segmentation dataset, you'll need two things:
1. images covering the situations your model will encounter in the real world
2. segmentation labels, i.e. images where each pixel represents a class/category.
We went ahead and captured a thousand images of sidewalks in Belgium. Collecting and labeling such a dataset can take a long time, so you can start with a smaller dataset and expand it if the model does not perform well enough.
<figure class="image table text-center m-0 w-full">
<medium-zoom background="rgba(0,0,0,.7)" alt="Example images from the sidewalk dataset" src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-examples.png"></medium-zoom>
<figcaption>Some examples of the raw images in the sidewalk dataset.</figcaption>
</figure>
To obtain segmentation labels, we need to indicate the classes of all the regions/objects in these images. This can be a time-consuming endeavour, but using the right tools can speed up the task significantly. For labeling, we'll use [Segments.ai](https://segments.ai?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg), since it has smart labeling tools for image segmentation and an easy-to-use Python SDK.
#### Set up the labeling task on Segments.ai
First, create an account at [https://segments.ai/join](https://segments.ai/join?utm_source=hf&utm_medium=colab&utm_campaign=sem_seg).
Next, create a new dataset and upload your images. You can either do this from the web interface or via the Python SDK (see the [notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb)).
#### Label the images
Now that the raw data is loaded, go to [segments.ai/home](https://segments.ai/home) and open the newly created dataset. Click "Start labeling" and create segmentation masks. You can use the ML-powered superpixel and autosegment tools to label faster.
<figure class="image table text-center m-0">
<video
alt="Labeling a sidewalk image on Segments.ai"
style="max-width: 70%; margin: auto;"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-labeling-crop.mp4" poster="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/sidewalk-labeling-crop-poster.png" type="video/mp4">
</video>
<figcaption>Tip: when using the superpixel tool, scroll to change the superpixel size, and click and drag to select segments.</figcaption>
</figure>
#### Push the result to the Hugging Face Hub
When you're done labeling, create a new dataset release containing the labeled data. You can either do this on the releases tab on Segments.ai, or programmatically through the SDK as shown in the notebook.
Note that creating the release can take a few seconds. You can check the releases tab on Segments.ai to see whether your release is still being created.
Now, we'll convert the release to a [Hugging Face dataset](https://huggingface.co./docs/datasets/package_reference/main_classes.html#datasets.Dataset) via the Segments.ai Python SDK. If you haven't set up the Segments Python client yet, follow the instructions in the "Set up the labeling task on Segments.ai" section of the [notebook](https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb#scrollTo=9T2Jr9t9y4HD).
*Note that the conversion can take a while, depending on the size of your dataset.*
```python
from segments.huggingface import release2dataset
release = segments_client.get_release(dataset_identifier, release_name)
hf_dataset = release2dataset(release)
```
If we inspect the features of the new dataset, we can see the image column and the corresponding label. The label consists of two parts: a list of annotations and a segmentation bitmap. The annotation corresponds to the different objects in the image. For each object, the annotation contains an `id` and a `category_id`. The segmentation bitmap is an image where each pixel contains the `id` of the object at that pixel. More information can be found in the [relevant docs](https://docs.segments.ai/reference/sample-and-label-types/label-types#segmentation-labels).
For semantic segmentation, we need a semantic bitmap that contains a `category_id` for each pixel. We'll use the `get_semantic_bitmap` function from the Segments.ai SDK to convert the bitmaps to semantic bitmaps. To apply this function to all the rows in our dataset, we'll use [`dataset.map`](https://huggingface.co./docs/datasets/package_reference/main_classes#datasets.Dataset.map).
```python
from segments.utils import get_semantic_bitmap
def convert_segmentation_bitmap(example):
return {
"label.segmentation_bitmap":
get_semantic_bitmap(
example["label.segmentation_bitmap"],
example["label.annotations"],
id_increment=0,
)
}
semantic_dataset = hf_dataset.map(
convert_segmentation_bitmap,
)
```
You can also rewrite the `convert_segmentation_bitmap` function to use batches and pass `batched=True` to `dataset.map`. This will significantly speed up the mapping, but you might need to tweak the `batch_size` to ensure the process doesn't run out of memory.
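As a rough sketch, a batched version could look like the one below (the `batch_size` of 16 is just an illustrative value to tune for your memory budget):

```python
def convert_segmentation_bitmap_batch(example_batch):
    # With batched=True, each value in example_batch is a list of examples
    return {
        "label.segmentation_bitmap": [
            get_semantic_bitmap(bitmap, annotations, id_increment=0)
            for bitmap, annotations in zip(
                example_batch["label.segmentation_bitmap"],
                example_batch["label.annotations"],
            )
        ]
    }

semantic_dataset = hf_dataset.map(
    convert_segmentation_bitmap_batch,
    batched=True,
    batch_size=16,  # lower this if the process runs out of memory
)
```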
The SegFormer model we're going to fine-tune later expects specific names for the features. For convenience, we'll match this format now. Thus, we'll rename the `image` feature to `pixel_values` and the `label.segmentation_bitmap` to `label` and discard the other features.
```python
semantic_dataset = semantic_dataset.rename_column('image', 'pixel_values')
semantic_dataset = semantic_dataset.rename_column('label.segmentation_bitmap', 'label')
semantic_dataset = semantic_dataset.remove_columns(['name', 'uuid', 'status', 'label.annotations'])
```
We can now push the transformed dataset to the Hugging Face Hub. That way, your team and the Hugging Face community can make use of it. In the next section, we'll see how you can load the dataset from the Hub.
```python
hf_dataset_identifier = f"{hf_username}/{dataset_name}"
semantic_dataset.push_to_hub(hf_dataset_identifier)
```
### Use a dataset from the Hub
If you don't want to create your own dataset, but found a suitable dataset for your use case on the Hugging Face Hub, you can define the identifier here.
For example, you can use the full labeled sidewalk dataset. Note that you can check out the examples [directly in your browser](https://huggingface.co./datasets/segments/sidewalk-semantic).
```python
hf_dataset_identifier = "segments/sidewalk-semantic"
```
## 2. Load and prepare the Hugging Face dataset for training
Now that we've created a new dataset and pushed it to the Hugging Face Hub, we can load the dataset in a single line.
```python
from datasets import load_dataset
ds = load_dataset(hf_dataset_identifier)
```
Let's shuffle the dataset and split the dataset in a train and test set.
```python
ds = ds.shuffle(seed=1)
ds = ds["train"].train_test_split(test_size=0.2)
train_ds = ds["train"]
test_ds = ds["test"]
```
We'll extract the number of labels and the human-readable ids, so we can configure the segmentation model correctly later on.
```python
import json
from huggingface_hub import hf_hub_download
repo_id = hf_dataset_identifier
filename = "id2label.json"
id2label = json.load(open(hf_hub_download(repo_id=repo_id, filename=filename, repo_type="dataset"), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
num_labels = len(id2label)
```
### Image processor & data augmentation
A SegFormer model expects the input to be of a certain shape. To transform our training data to match the expected shape, we can use `SegformerImageProcessor`. We could use the `ds.map` function to apply the image processor to the whole training dataset in advance, but this can take up a lot of disk space. Instead, we'll use a *transform*, which will only prepare a batch of data when that data is actually used (on-the-fly). This way, we can start training without waiting for further data preprocessing.
In our transform, we'll also define some data augmentations to make our model more resilient to different lighting conditions. We'll use the [`ColorJitter`](https://pytorch.org/vision/main/generated/torchvision.transforms.ColorJitter.html) function from `torchvision` to randomly change the brightness, contrast, saturation, and hue of the images in the batch.
```python
from torchvision.transforms import ColorJitter
from transformers import SegformerImageProcessor
processor = SegformerImageProcessor()
jitter = ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.1)
def train_transforms(example_batch):
images = [jitter(x) for x in example_batch['pixel_values']]
labels = [x for x in example_batch['label']]
inputs = processor(images, labels)
return inputs
def val_transforms(example_batch):
images = [x for x in example_batch['pixel_values']]
labels = [x for x in example_batch['label']]
inputs = processor(images, labels)
return inputs
# Set transforms
train_ds.set_transform(train_transforms)
test_ds.set_transform(val_transforms)
```
## 3. Fine-tune a SegFormer model
### Load the model to fine-tune
The SegFormer authors define 5 models with increasing sizes: B0 to B5. The following chart (taken from the original paper) shows the performance of these different models on the ADE20K dataset, compared to other models.
<figure class="image table text-center m-0 w-full">
<medium-zoom background="rgba(0,0,0,.7)" alt="SegFormer model variants compared with other segmentation models" src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/segformer.png"></medium-zoom>
<figcaption><a href="https://arxiv.org/abs/2105.15203">Source</a></figcaption>
</figure>
Here, we'll load the smallest SegFormer model (B0), pre-trained on ImageNet-1k. It's only about 14MB in size!
Using a small model will make sure that our model can run smoothly on our pizza delivery robot.
```python
from transformers import SegformerForSemanticSegmentation
pretrained_model_name = "nvidia/mit-b0"
model = SegformerForSemanticSegmentation.from_pretrained(
pretrained_model_name,
id2label=id2label,
label2id=label2id
)
```
### Set up the Trainer
To fine-tune the model on our data, we'll use Hugging Face's [Trainer API](https://huggingface.co./docs/transformers/main_classes/trainer). We need to set up the training configuration and an evaluation metric to use a Trainer.
First, we'll set up the [`TrainingArguments`](https://huggingface.co./docs/transformers/main_classes/trainer#transformers.TrainingArguments). This defines all training hyperparameters, such as learning rate and the number of epochs, frequency to save the model and so on. We also specify to push the model to the hub after training (`push_to_hub=True`) and specify a model name (`hub_model_id`).
```python
from transformers import TrainingArguments
epochs = 50
lr = 0.00006
batch_size = 2
hub_model_id = "segformer-b0-finetuned-segments-sidewalk-2"
training_args = TrainingArguments(
"segformer-b0-finetuned-segments-sidewalk-outputs",
learning_rate=lr,
num_train_epochs=epochs,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
save_total_limit=3,
evaluation_strategy="steps",
save_strategy="steps",
save_steps=20,
eval_steps=20,
logging_steps=1,
eval_accumulation_steps=5,
load_best_model_at_end=True,
push_to_hub=True,
hub_model_id=hub_model_id,
hub_strategy="end",
)
```
Next, we'll define a function that computes the evaluation metric we want to work with. Because we're doing semantic segmentation, we'll use the [mean Intersection over Union (mIoU)](https://huggingface.co./spaces/evaluate-metric/mean_iou), directly accessible in the [`evaluate` library](https://huggingface.co./docs/evaluate/index). IoU represents the overlap of segmentation masks. Mean IoU is the average of the IoU of all semantic classes. Take a look at [this blogpost](https://www.jeremyjordan.me/evaluating-image-segmentation-models/) for an overview of evaluation metrics for image segmentation.
Because our model outputs logits with dimensions height/4 and width/4, we have to upscale them before we can compute the mIoU.
```python
import torch
from torch import nn
import evaluate
metric = evaluate.load("mean_iou")
def compute_metrics(eval_pred):
with torch.no_grad():
logits, labels = eval_pred
logits_tensor = torch.from_numpy(logits)
# scale the logits to the size of the label
logits_tensor = nn.functional.interpolate(
logits_tensor,
size=labels.shape[-2:],
mode="bilinear",
align_corners=False,
).argmax(dim=1)
pred_labels = logits_tensor.detach().cpu().numpy()
metrics = metric.compute(
predictions=pred_labels,
references=labels,
num_labels=len(id2label),
ignore_index=0,
reduce_labels=processor.do_reduce_labels,
)
# add per category metrics as individual key-value pairs
per_category_accuracy = metrics.pop("per_category_accuracy").tolist()
per_category_iou = metrics.pop("per_category_iou").tolist()
metrics.update({f"accuracy_{id2label[i]}": v for i, v in enumerate(per_category_accuracy)})
metrics.update({f"iou_{id2label[i]}": v for i, v in enumerate(per_category_iou)})
return metrics
```
Finally, we can instantiate a `Trainer` object.
```python
from transformers import Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
```
Now that our trainer is set up, training is as simple as calling the `train` function. We don't need to worry about managing our GPU(s), the trainer will take care of that.
```python
trainer.train()
```
When we're done with training, we can push our fine-tuned model and the image processor to the Hub.
This will also automatically create a model card with our results. We'll supply some extra information in `kwargs` to make the model card more complete.
```python
kwargs = {
"tags": ["vision", "image-segmentation"],
"finetuned_from": pretrained_model_name,
"dataset": hf_dataset_identifier,
}
processor.push_to_hub(hub_model_id)
trainer.push_to_hub(**kwargs)
```
## 4. Inference
Now comes the exciting part, using our fine-tuned model! In this section, we'll show how you can load your model from the hub and use it for inference.
However, you can also try out your model directly on the Hugging Face Hub, thanks to the cool widgets powered by the [hosted inference API](https://api-inference.huggingface.co/docs/python/html/index.html). If you pushed your model to the Hub in the previous step, you should see an inference widget on your model page. You can add default examples to the widget by defining example image URLs in your model card. See [this model card](https://huggingface.co./tobiasc/segformer-b0-finetuned-segments-sidewalk/blob/main/README.md) as an example.
<figure class="image table text-center m-0 w-full">
<video
alt="The interactive widget of the model"
style="max-width: 70%; margin: auto;"
autoplay loop autobuffer muted playsinline
>
<source src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/widget.mp4" poster="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/widget-poster.png" type="video/mp4">
</video>
</figure>
### Use the model from the Hub
We'll first load the model from the Hub using `SegformerForSemanticSegmentation.from_pretrained()`.
```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
processor = SegformerImageProcessor.from_pretrained(f"{hf_username}/{hub_model_id}")
model = SegformerForSemanticSegmentation.from_pretrained(f"{hf_username}/{hub_model_id}")
```
Next, we'll load an image from our test dataset.
```python
image = test_ds[0]['pixel_values']
gt_seg = test_ds[0]['label']
image
```
To segment this test image, we first need to prepare the image using the image processor. Then we forward it through the model.
We also need to remember to upscale the output logits to the original image size. In order to get the actual category predictions, we just have to apply an `argmax` on the logits.
```python
from torch import nn
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits # shape (batch_size, num_labels, height/4, width/4)
# First, rescale logits to original image size
upsampled_logits = nn.functional.interpolate(
logits,
size=image.size[::-1], # (height, width)
mode='bilinear',
align_corners=False
)
# Second, apply argmax on the class dimension
pred_seg = upsampled_logits.argmax(dim=1)[0]
```
Now it's time to display the result. We'll show the prediction next to the ground-truth mask.
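If you want to plot the masks yourself, here is a minimal sketch using `matplotlib`; the random color palette is an assumption, and you can swap in a fixed palette for your classes:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumption: give every class a random color (replace with a fixed palette if you prefer)
palette = np.random.randint(0, 255, size=(num_labels, 3), dtype=np.uint8)

def colorize(seg):
    # Map each class id in the segmentation map to its RGB color
    color_seg = np.zeros((*seg.shape, 3), dtype=np.uint8)
    for label_id, color in enumerate(palette):
        color_seg[seg == label_id] = color
    return color_seg

fig, axs = plt.subplots(1, 2, figsize=(12, 6))
axs[0].imshow(colorize(pred_seg.numpy()))
axs[0].set_title("Prediction")
axs[1].imshow(colorize(np.array(gt_seg)))
axs[1].set_title("Ground truth")
plt.show()
```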
<figure class="image table text-center m-0 w-full">
<medium-zoom background="rgba(1,1,1,1)" alt="SegFormer prediction vs the ground truth" src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/56_fine_tune_segformer/output.png"></medium-zoom>
</figure>
What do you think? Would you send our pizza delivery robot on the road with this segmentation information?
The result might not be perfect yet, but we can always expand our dataset to make the model more robust. We could also train a larger SegFormer model and see how it stacks up.
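For instance, swapping in the larger B5 backbone is a one-line change in the model loading step; a minimal sketch (the rest of the training code stays the same):

```python
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b5",  # the largest SegFormer encoder, also pre-trained on ImageNet-1k
    id2label=id2label,
    label2id=label2id,
)
```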
## 5. Conclusion
That's it! You now know how to create your own image segmentation dataset and how to use it to fine-tune a semantic segmentation model.
We introduced you to some useful tools along the way, such as:
* [Segments.ai](https://segments.ai) for labeling your data
* [🤗 datasets](https://huggingface.co./docs/datasets/) for creating and sharing a dataset
* [🤗 transformers](https://huggingface.co./transformers) for easily fine-tuning a state-of-the-art segmentation model
* [Hugging Face Hub](https://huggingface.co./docs/hub/main) for sharing our dataset and model, and for creating an inference widget for our model
We hope you enjoyed this post and learned something. Feel free to share your own model with us on Twitter ([@TobiasCornille](https://twitter.com/tobiascornille), [@NielsRogge](https://twitter.com/nielsrogge), and [@huggingface](https://twitter.com/huggingface)). | [
[
"computer_vision",
"transformers",
"tutorial",
"fine_tuning"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"computer_vision",
"transformers",
"fine_tuning",
"tutorial"
] | null | null |
87f38fed-f820-4344-bd87-a019413f8662 | completed | 2025-01-16T03:08:37.719000 | 2025-01-19T18:52:58.126000 | 4cac3387-3005-45bd-a1fb-d605ab09f600 | Accelerating Document AI | rajistics, nielsr, florentgbelidji, nbroad | document-ai.md | Enterprises are full of documents containing knowledge that isn't accessible by digital workflows. These documents can vary from letters, invoices, forms, reports, to receipts. With the improvements in text, vision, and multimodal AI, it's now possible to unlock that information. This post shows you how your teams can use open-source models to build custom solutions for free!
Document AI includes many data science tasks from [image classification](https://huggingface.co./tasks/image-classification), [image to text](https://huggingface.co./tasks/image-to-text), [document question answering](https://huggingface.co./tasks/document-question-answering), [table question answering](https://huggingface.co./tasks/table-question-answering), and [visual question answering](https://huggingface.co./tasks/visual-question-answering). This post starts with a taxonomy of use cases within Document AI and the best open-source models for those use cases. Next, the post focuses on licensing, data preparation, and modeling. Throughout this post, there are links to web demos, documentation, and models.
### Use Cases
There are at least six general use cases for building document AI solutions. These use cases differ in the kind of document inputs and outputs. A combination of approaches is often necessary when solving enterprise Document AI problems.
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="1-what-is-ocr"><strong itemprop="name"> What is Optical Character Recognition (OCR)?</strong></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Turning typed, handwritten, or printed text into machine-encoded text is known as Optical Character Recognition (OCR). It's a widely studied problem with many well-established open-source and commercial offerings. The figure shows an example of converting handwriting into text.
![png](assets/112_document-ai/ocr.png)
OCR is a backbone of Document AI use cases as it's essential to transform the text into something readable by a computer. Some widely available OCR models that operate at the document level are [EasyOCR](https://huggingface.co./spaces/tomofi/EasyOCR) or [PaddleOCR](https://huggingface.co./spaces/PaddlePaddle/PaddleOCR). There are also models like [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://huggingface.co./docs/transformers/model_doc/trocr), which runs on single-text line images. This model works with a text detection model like CRAFT which first identifies the individual "pieces" of text in a document in the form of bounding boxes. The relevant metrics for OCR are Character Error Rate (CER) and word-level precision, recall, and F1. Check out [this Space](https://huggingface.co./spaces/tomofi/CRAFT-TrOCR) to see a demonstration of CRAFT and TrOCR.
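As a minimal sketch of running TrOCR with 🤗 Transformers (assuming `line.png` is a cropped image containing a single line of handwritten text, for instance produced by a text detection model such as CRAFT):

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("line.png").convert("RGB")  # a single line of handwritten text
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```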
</div>
</div>
</div>
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="2-what-is-doc_class"><strong itemprop="name"> What is Document Image Classification?</strong></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Classifying documents into the appropriate category, such as forms, invoices, or letters, is known as document image classification. Classification may use either one or both of the document's image and text. The recent addition of multimodal models that use the visual structure and the underlying text has dramatically increased classifier performance.
A basic approach is applying OCR on a document image, after which a [BERT](https://huggingface.co./docs/transformers/model_doc/bert)-like model is used for classification. However, relying only on a BERT model doesn't take any layout or visual information into account. The figure from the [RVL-CDIP](https://huggingface.co./datasets/rvl_cdip) dataset shows how visual structure differs across document types.
![png](assets/112_document-ai/doc_class.png)
That's where models like [LayoutLM](https://huggingface.co./docs/transformers/model_doc/layoutlmv3) and [Donut](https://huggingface.co./docs/transformers/model_doc/donut) come into play. By incorporating not only text but also visual information, these models can dramatically increase accuracy. For comparison, on [RVL-CDIP](https://huggingface.co./datasets/rvl_cdip), an important benchmark for document image classification, a BERT-base model achieves 89% accuracy by using the text. A [DiT](https://huggingface.co./docs/transformers/main/en/model_doc/dit) (Document Image Transformer) is a pure vision model (i.e., it does not take text as input) and can reach 92% accuracy. But models like [LayoutLMv3](https://huggingface.co./docs/transformers/main/en/model_doc/layoutlmv3) and [Donut](https://huggingface.co./docs/transformers/model_doc/donut), which use the text and visual information together using a multimodal Transformer, can achieve 95% accuracy! These multimodal models are changing how practitioners solve Document AI use cases.
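As an illustration, a minimal sketch using the `image-classification` pipeline with the publicly available DiT checkpoint fine-tuned on RVL-CDIP (the file name is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="microsoft/dit-base-finetuned-rvlcdip")
predictions = classifier("scanned_document.png")  # path to (or PIL image of) a document
print(predictions[:3])  # top predicted document categories with their scores
```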
</div>
</div>
</div>
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="2-what-is-doc-layout"><strong itemprop="name"> What is Document layout analysis?</strong></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Document layout analysis is the task of determining the physical structure of a document, i.e., identifying the individual building blocks that make up a document, like text segments, headers, and tables. This task is often solved by framing it as an image segmentation/object detection problem. The model outputs a set of segmentation masks/bounding boxes, along with class names.
Models that are currently state-of-the-art for document layout analysis are [LayoutLMv3](https://huggingface.co./docs/transformers/model_doc/layoutlmv3) and [DiT](https://huggingface.co./docs/transformers/model_doc/dit) (Document Image Transformer). Both models use the classic [Mask R-CNN](https://arxiv.org/abs/1703.06870) framework for object detection as a backbone. This [document layout analysis](https://huggingface.co./spaces/nielsr/dit-document-layout-analysis) Space illustrates how DiT can be used to identify text segments, titles, and tables in documents. An example using [DiT](https://github.com/microsoft/unilm/tree/master/dit) detecting different parts of a document is shown here.
</div>
</div>
</div>
![png](assets/112_document-ai/DIT.png)
Document layout analysis with DiT.
Document layout analysis typically uses the mAP (mean average-precision) metric, often used for evaluating object detection models. An important benchmark for layout analysis is the [PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet) dataset. [LayoutLMv3](https://huggingface.co./docs/transformers/main/en/model_doc/layoutlmv3), the state-of-the-art at the time of writing, achieves an overall mAP score of 0.951 ([source](https://paperswithcode.com/sota/document-layout-analysis-on-publaynet-val)).
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="4-what-is-doc-parsing"><strong itemprop="name"> What is Document parsing?</strong></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
A step beyond layout analysis is document parsing. Document parsing is the task of identifying and extracting key information (often in the form of key-value pairs) from a document, such as names, items, and totals from an invoice form. This [LayoutLMv2 Space](https://huggingface.co./spaces/nielsr/LayoutLMv2-FUNSD) shows how to parse a document to recognize questions, answers, and headers.
The first version of LayoutLM (now known as LayoutLMv1) was released in 2020 and dramatically improved over existing benchmarks, and it's still one of the most popular models on the Hugging Face Hub for Document AI. [LayoutLMv2](https://huggingface.co./docs/transformers/main/en/model_doc/layoutlmv2) and [LayoutLMv3](https://huggingface.co./docs/transformers/main/en/model_doc/layoutlmv3) incorporate visual features during pre-training, which provides an improvement. The LayoutLM family produced a step change in Document AI performance. For example, on the [FUNSD](https://guillaumejaume.github.io/FUNSD/) benchmark dataset, a BERT model has an F1 score of 60%, but with LayoutLM, it is possible to get to 90%!
LayoutLMv1 now has many successors, including [ERNIE-Layout](https://arxiv.org/abs/2210.06155) which shows promising results as shown in this [Space](https://huggingface.co./spaces/PaddlePaddle/ERNIE-Layout). For multilingual use cases, there are multilingual variants of LayoutLM, like [LayoutXLM](https://huggingface.co./docs/transformers/model_doc/layoutxlm) and [LiLT](https://huggingface.co./docs/transformers/main/en/model_doc/lilt). This figure from the LayoutLM paper shows LayoutLM analyzing some different documents.
![png](assets/112_document-ai/layoutlm.png)
Many successors of LayoutLM adopt a generative, end-to-end approach. This started with the [Donut](https://huggingface.co./docs/transformers/model_doc/donut) model, which simply takes a document's image as input and produces text as an output, not relying on any separate OCR engine.
<img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/112_document_ai/donut.png"
alt="drawing" width="600"/>
<small> Donut model consisting of an encoder-decoder Transformer. Taken from the <a href="https://arxiv.org/abs/2111.15664">Donut paper.</a> </small>
After Donut, various similar models were released, including [Pix2Struct](https://huggingface.co./docs/transformers/model_doc/pix2struct) by Google and [UDOP](https://huggingface.co./docs/transformers/model_doc/udop) by Microsoft. Nowadays, larger vision-language models such as [LLaVa-NeXT](https://huggingface.co./docs/transformers/model_doc/llava_next) and [Idefics2](https://huggingface.co./docs/transformers/model_doc/idefics2) can be fine-tuned to perform document parsing in an end-to-end manner. As a matter of fact, these models can be fine-tuned to perform any document AI task, from document image classification to document parsing, as long as the task can be defined as an image-text-to-text task. See, for instance, the [tutorial notebook](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/PaliGemma) to fine-tune Google's [PaliGemma](https://huggingface.co./docs/transformers/model_doc/paligemma) (a smaller vision-language model) to return a JSON from receipt images.
<img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/112_document_ai/paligemma.jpeg" width="600"/>
<small> Vision-language models such as PaliGemma can be fine-tuned on any image-text-to-text task. See the <a href="https://github.com/NielsRogge/Transformers-Tutorials/blob/master/PaliGemma/Fine_tune_PaliGemma_for_image_%3EJSON.ipynb">tutorial notebook.</a> </small>
Data scientists are finding document layout analysis and extraction to be key use cases for enterprises. Existing commercial solutions typically cannot handle the diversity of most enterprise data, in both content and structure. Consequently, data science teams can often surpass commercial tools by fine-tuning their own models.
</div>
</div>
</div>
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="5-what-is-table"><strong itemprop="name"> What is Table detection, extraction, and table structure recognition?</strong></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Documents often contain tables, and most OCR tools don't work incredibly well out-of-the-box on tabular data. Table detection is the task of identifying where tables are located, and table extraction creates a structured representation of that information. Table structure recognition is the task of identifying the individual pieces that make up a table, like rows, columns, and cells. Table functional analysis (FA) is the task of recognizing the keys and values of the table. The figure from the [Table transformer](https://github.com/microsoft/table-transformer) illustrates the difference between the various subtasks.
![jpeg](assets/112_document-ai/table.jpeg)
The approach for table detection and structure recognition is similar to document layout analysis in using object detection models that output a set of bounding boxes and corresponding classes.
The latest approaches, like [Table Transformer](https://huggingface.co./docs/transformers/main/en/model_doc/table-transformer), can enable table detection and table structure recognition with the same model. The Table Transformer is a [DETR](https://huggingface.co./docs/transformers/model_doc/detr)-like object detection model, trained on [PubTables-1M](https://arxiv.org/abs/2110.00061) (a dataset comprising one million tables). Evaluation for table detection and structure recognition typically uses the average precision (AP) metric. The Table Transformer performance is reported as having an AP of 0.966 for table detection and an AP of 0.912 for table structure recognition + functional analysis on PubTables-1M.
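A minimal sketch of table detection with the Table Transformer (the page image path is a placeholder and the 0.9 confidence threshold is an arbitrary choice):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

image_processor = AutoImageProcessor.from_pretrained("microsoft/table-transformer-detection")
model = TableTransformerForObjectDetection.from_pretrained("microsoft/table-transformer-detection")

image = Image.open("page.png").convert("RGB")  # a scanned document page
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn the raw outputs into bounding boxes above a confidence threshold
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```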
Table detection and extraction is an exciting approach, but the results may be different on your data. In our experience, the quality and formatting of tables vary widely and can affect how well the models perform. Additional fine-tuning on some custom data will greatly improve the performance.
</div>
</div>
</div>
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="6-what-is-docvqa"><strong itemprop="name"> What is Document question answering (DocVQA)?</strong></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Question answering on documents has dramatically changed how people interact with AI. Recent advancements have made it possible to ask models to answer questions about an image - this is known as document visual question answering, or DocVQA for short. After being given a question, the model analyzes the image and responds with an answer. An example from the [DocVQA dataset](https://rrc.cvc.uab.es/?ch=17) is shown in the figure below. The user asks, "Mention the ZIP code written?" and the model responds with the answer.
![png](assets/112_document-ai/vqa.png)
In the past, building a DocVQA system would often require multiple models working together. There could be separate models for analyzing the document layout, performing OCR, extracting entities, and then answering a question. The latest DocVQA models enable question-answering in an end-to-end manner, comprising only a single (multimodal) model.
DocVQA is typically evaluated using the Average Normalized Levenshtein Similarity (ANLS) metric. For more details regarding this metric, we refer to [this guide](https://rrc.cvc.uab.es/?ch=11&com=tasks). The current state-of-the-art on the DocVQA benchmark that is open-source is [LayoutLMv3](https://huggingface.co./docs/transformers/model_doc/layoutlmv3), which achieves an ANLS score of 83.37. However, this model consists of a pipeline of OCR + multimodal Transformer.
Newer models such as [Donut](https://huggingface.co./docs/transformers/model_doc/donut), [LLaVa-NeXT](https://huggingface.co./docs/transformers/model_doc/llava_next) and [Idefics2](https://huggingface.co./docs/transformers/model_doc/idefics2) solve the task in an end-to-end manner using a single Transformer-based neural network, not relying on OCR. Impira hosts an [exciting Space](https://huggingface.co./spaces/impira/docquery) that illustrates LayoutLM and Donut for DocVQA.
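A minimal sketch using the `document-question-answering` pipeline (this particular model relies on an OCR engine such as `pytesseract` being installed; the invoice image and question are placeholders):

```python
from transformers import pipeline

docvqa = pipeline("document-question-answering", model="impira/layoutlm-document-qa")
result = docvqa(image="invoice.png", question="What is the total amount?")
print(result)  # the predicted answer with its confidence score
```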
Visual question answering is compelling; however, there are many considerations for successfully using it. Having accurate training data, evaluation metrics, and post-processing is vital. For teams taking on this use case, be aware that getting DocVQA to work properly can be challenging. In some cases, responses can be unpredictable, and the model can “hallucinate” by giving an answer that doesn't appear within the document. Visual question answering models can inherit biases from their data, raising ethical issues. Ensuring proper model setup and post-processing is integral to building a successful DocVQA solution.
</div>
</div>
</div>
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="7-what-is-licensing"><h3 itemprop="name"> What are Licensing Issues in Document AI?</h3></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Industry and academia make enormous contributions to advancing Document AI. There is a wide assortment of models and datasets available for data scientists to use. However, licensing can be a non-starter for building an enterprise solution. Some well-known models have restrictive licenses that forbid the model from being used for commercial purposes. Most notably, Microsoft's [LayoutLMv2](https://huggingface.co./docs/transformers/main/en/model_doc/layoutlmv2) and [LayoutLMv3](https://huggingface.co./docs/transformers/main/en/model_doc/layoutlmv3) checkpoints cannot be used commercially. When you start a project, we advise carefully evaluating the license of prospective models. Knowing which models you want to use is essential at the outset, since that may affect data collection and annotation. A table of popular models and their license information is at the end of this post.
</div>
</div>
</div>
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="8-what-are-dataprep"><h3 itemprop="name"> What are Data Prep Issues in Document AI?</h3></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
Data preparation for Document AI is critical and challenging. It's crucial to have properly annotated data. Here are some lessons we have learned along the way about data preparation.
First, machine learning depends on the scale and quality of your data. If the image quality of your documents is poor, you can't expect AI to be able to read these documents magically. Similarly, if your training data is small but covers many classes, your performance may be poor. Document AI is like other problems in machine learning, where more data will generally provide better performance.
Second, be flexible in your approaches. You may need to test several different methodologies to find the best solution. A great example is OCR, in which you can use an open-source product like Tesseract, a commercial solution like Cloud Vision API, or the OCR capability inside an open-source multimodal model like [Donut](https://huggingface.co./docs/transformers/model_doc/donut).
Third, start small with annotating data and pick your tools wisely. In our experience, you can get good results with several hundred documents. So start small and carefully evaluate your performance. Once you have narrowed your overall approach, you can begin to scale up the data to maximize your predictive accuracy. When annotating, remember that some tasks like layout identification and document extraction require identifying a specific region within a document. You will want to ensure your annotation tool supports bounding boxes.
</div>
</div>
</div>
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="9-what-is-modeling"><h3 itemprop="name"> What are Modeling Issues in Document AI?</h3></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
The flexibility of building your models leads to many options for data scientists. Our strong recommendation for teams is to start with the pre-trained open-source models. These models can be fine-tuned to your specific documents, and this is generally the quickest way to a good model.
For teams considering building their own pre-trained model, be aware this can involve millions of documents and can easily take several weeks to train a model. Building a pre-trained model requires significant effort and is not recommended for most data science teams. Instead, start with fine-tuning one, but ask yourself these questions first.
Do you want the model to handle the OCR? For example, [Donut](https://huggingface.co./docs/transformers/model_doc/donut) doesn't require the document to be OCRed and directly works on full-resolution images, so there is no need for OCR before modeling. However, depending on your problem setup, it may be simpler to get OCR separately.
Should you use higher-resolution images? When using images with [LayoutLMv2](https://huggingface.co./docs/transformers/main/en/model_doc/layoutlmv2), the model downscales them to 224 by 224, which destroys the original aspect ratio of the images. Newer models such as [Donut](https://huggingface.co./docs/transformers/model_doc/donut), [Pix2Struct](https://huggingface.co./docs/transformers/model_doc/pix2struct) and [Idefics2](https://huggingface.co./docs/transformers/model_doc/idefics2) use the full high-resolution image, keeping the original aspect ratio. Research has shown that performance dramatically increases with a higher image resolution, as it allows models to "see" a lot more. However, it also comes at the cost of additional memory required for training and inference.
<img src="https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/blog/112_document_ai/pix2struct.png"
alt="drawing" width="600"/>
<small> Effect of image resolution on downstream performance. Taken from the <a href="https://arxiv.org/abs/2210.03347">Pix2Struct paper.</a> </small>
How are you evaluating the model? First, watch out for misaligned bounding boxes. You should ensure bounding boxes provided by the OCR engine of your choice align with the model processor. Verifying this can save you from unexpectedly poor results. Second, let your project requirements guide your evaluation metrics. For example, in some tasks like token classification or question answering, a 100% match may not be the best metric. A metric like partial match could allow for many more potential tokens to be considered, such as “Acme” and “inside Acme” as a match. Finally, consider ethical issues during your evaluation, as these models may be working with biased data or produce unstable outcomes that could be biased against certain groups of people.
</div>
</div>
</div>
### Next Steps
Are you seeing the possibilities of Document AI? Every day we work with enterprises to unlock valuable data using state-of-the-art vision and language models. We included links to various demos throughout this post, so use them as a starting point. The last section of the post contains resources for starting to code up your own models, such as visual question answering. Once you are ready to start building your solutions, the [Hugging Face public hub](https://huggingface.co./models) is a great starting point. It hosts a vast array of Document AI models.
If you want to accelerate your Document AI efforts, Hugging Face can help. Through our [Enterprise Acceleration Program](https://huggingface.co./support) we partner with enterprises to provide guidance on AI use cases. For Document AI, this could involve helping build a pre-train model, improving accuracy on a fine-tuning task, or providing overall guidance on tackling your first Document AI use case.
We can also provide bundles of compute credits to use our training (AutoTrain) or inference (Spaces or Inference Endpoints) products at scale.
### Resources
Notebooks and tutorials for many Document AI models can be found at:
- Niels' [Transformers-Tutorials](https://github.com/NielsRogge/Transformers-Tutorials)
- Philipp's [Document AI with Hugging Face Transformers](https://github.com/philschmid/document-ai-transformers)
<html itemscope itemtype="https://schema.org/FAQPage">
<div itemscope itemprop="mainEntity" itemtype="https://schema.org/Question">
<a id="10-what-are-models"><h3 itemprop="name"> What are Popular Open-Source Models for Document AI?</h3></a>
<div itemscope itemprop="acceptedAnswer" itemtype="https://schema.org/Answer">
<div itemprop="text">
A table of the currently available Transformers models achieving state-of-the-art performance on Document AI tasks. An important trend is that we see more and more vision-language models that perform document AI tasks in an end-to-end manner, taking the document image(s) as an input and producing text as an output.
This was last updated in June 2024.
| model | paper | license | checkpoints |
| | [
[
"computer_vision",
"implementation",
"multi_modal"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"computer_vision",
"multi_modal",
"implementation",
"tutorial"
] | null | null |
7129deb4-9c64-4b1e-a27b-71a789ce3cd4 | completed | 2025-01-16T03:08:37.719000 | 2025-01-19T18:59:13.437000 | 36285803-8548-4393-a819-fc9b45ce933f | Overview of natively supported quantization schemes in 🤗 Transformers | ybelkada, marcsun13, IlyasMoutawwakil, clefourrier, fxmarty | overview-quantization-transformers.md | We aim to give a clear overview of the pros and cons of each quantization scheme supported in transformers to help you decide which one you should go for.
Currently, quantized models are used for two main purposes:
- Running inference of a large model on a smaller device
- Fine-tune adapters on top of quantized models
So far, two integration efforts have been made and are **natively** supported in transformers: *bitsandbytes* and *auto-gptq*.
Note that some additional quantization schemes are also supported in the [🤗 optimum library](https://github.com/huggingface/optimum), but this is out of scope for this blogpost.
To learn more about each of the supported schemes, please have a look at one of the resources shared below. Please also have a look at the appropriate sections of the documentation.
Note also that the details shared below are only valid for `PyTorch` models; TensorFlow and Flax/JAX models are currently out of scope.
## Table of contents
- [Resources](#resources)
- [Comparing bitsandbytes and auto-gptq](#comparing-bitsandbytes-and-auto-gptq)
- [Diving into speed benchmarks](#diving-into-speed-benchmarks)
- [Conclusion and final words](#conclusion-and-final-words)
- [Acknowledgements](#acknowledgements)
## Resources
- [GPTQ blogpost](https://huggingface.co./blog/gptq-integration) – gives an overview on what is the GPTQ quantization method and how to use it.
- [bitsandbytes 4-bit quantization blogpost](https://huggingface.co./blog/4bit-transformers-bitsandbytes) - This blogpost introduces 4-bit quantization and QLoRA, an efficient finetuning approach.
- [bitsandbytes 8-bit quantization blogpost](https://huggingface.co./blog/hf-bitsandbytes-integration) - This blogpost explains how 8-bit quantization works with bitsandbytes.
- [Basic usage Google Colab notebook for GPTQ](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) - This notebook shows how to quantize your transformers model with the GPTQ method, how to do inference, and how to do fine-tuning with the quantized model.
- [Basic usage Google Colab notebook for bitsandbytes](https://colab.research.google.com/drive/1ge2F1QSK8Q7h0hn3YKuBCOAS0bK8E0wf?usp=sharing) - This notebook shows how to use 4-bit models in inference with all their variants, and how to run GPT-neo-X (a 20B parameter model) on a free Google Colab instance.
- [Merve's blogpost on quantization](https://huggingface.co./blog/merve/quantization) - This blogpost provides a gentle introduction to quantization and the quantization methods supported natively in transformers.
## Comparing bitsandbytes and auto-gptq
In this section, we will go over the pros and cons of bitsandbytes and gptq quantization. Note that these are based on the feedback from the community and they can evolve over time as some of these features are in the roadmap of the respective libraries.
### What are the benefits of bitsandbytes?
**easy**: bitsandbytes still remains the easiest way to quantize any model as it does not require calibrating the quantized model with input data (also called zero-shot quantization). It is possible to quantize any model out of the box as long as it contains `torch.nn.Linear` modules. Whenever a new architecture is added in transformers, as long as they can be loaded with accelerate’s `device_map=”auto”`, users can benefit from bitsandbytes quantization straight out of the box with minimal performance degradation. Quantization is performed on model load, no need to run any post-processing or preparation step.
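As a minimal sketch of what this looks like in practice (the model id below is just an example; any model containing `torch.nn.Linear` modules works):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
# Quantization happens at load time, no calibration data or post-processing needed
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    device_map="auto",
    quantization_config=quantization_config,
)
```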
**cross-modality interoperability**: As the only condition to quantize a model is to contain a `torch.nn.Linear` layer, quantization works out of the box for any modality, making it possible to load models such as Whisper, ViT, Blip2, etc. in 8-bit or 4-bit out of the box.
**0 performance degradation when merging adapters**: (Read more about adapters and PEFT in [this blogpost](https://huggingface.co./blog/peft) if you are not familiar with it). If you train adapters on top of the quantized base model, the adapters can be merged on top of the base model for deployment, with no inference performance degradation. You can also [merge](https://github.com/huggingface/peft/pull/851/files) the adapters on top of the dequantized model! This is not supported for GPTQ.
### What are the benefits of autoGPTQ?
**fast for text generation**: GPTQ quantized models are fast compared to bitsandbytes quantized models for [text generation](https://huggingface.co./docs/transformers/main_classes/text_generation). We will address the speed comparison in an appropriate section.
**n-bit support**: The GPTQ algorithm makes it possible to quantize models up to 2 bits! However, this might come with severe quality degradation. The recommended number of bits is 4, which seems to be a great tradeoff for GPTQ at this time.
**easily-serializable**: GPTQ models support serialization for any number of bits. Loading models from TheBloke namespace: https://huggingface.co./TheBloke (look for those that end with the `-GPTQ` suffix) is supported out of the box, as long as you have the required packages installed. Bitsandbytes supports 8-bit serialization but does not support 4-bit serialization as of today.
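For instance, loading one of these pre-quantized checkpoints is a one-liner, assuming `optimum` and `auto-gptq` are installed (the model id below is just one example from that namespace):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-7B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored in the repo is picked up automatically
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```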
**AMD support**: The integration should work out of the box for AMD GPUs!
### What are the potential areas of improvement for bitsandbytes?
**slower than GPTQ for text generation**: bitsandbytes 4-bit models are slow compared to GPTQ when using [`generate`](https://huggingface.co./docs/transformers/main_classes/text_generation).
**4-bit weights are not serializable**: Currently, 4-bit models cannot be serialized. This is a frequent community request, and we believe it should be addressed very soon by the bitsandbytes maintainers as it's in their roadmap!
### What are the potential areas of improvement for autoGPTQ?
**calibration dataset**: The need for a calibration dataset might discourage some users from adopting GPTQ. Furthermore, quantizing a model can take several hours (e.g. 4 GPU hours for a 175B-scale model, [according to the paper](https://arxiv.org/pdf/2210.17323.pdf) - section 2).
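For reference, a minimal sketch of what that calibration-based quantization step looks like through transformers (the model id and the `"c4"` calibration dataset are illustrative choices):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-350m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The calibration dataset is required, and the quantization itself can take a while
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", quantization_config=gptq_config)
```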
**works only for language models (for now)**: As of today, the API for quantizing a model with auto-GPTQ has been designed to support only language models. It should be possible to quantize non-text (or multimodal) models using the GPTQ algorithm, but the process has not been elaborated in the original paper or in the auto-gptq repository. If the community is excited about this topic this might be considered in the future.
## Diving into speed benchmarks
We decided to provide an extensive benchmark for both inference and fine-tuning adapters using bitsandbytes and auto-gptq on different hardware. The inference benchmark should give users an idea of the speed difference they might get between the different approaches we propose for inference, and the adapter fine-tuning benchmark should give a clear idea to users when it comes to deciding which approach to use when fine-tuning adapters on top of bitsandbytes and GPTQ base models.
We will use the following setup:
- bitsandbytes: 4-bit quantization with `bnb_4bit_compute_dtype=torch.float16`. Make sure to use `bitsandbytes>=0.41.1` for fast 4-bit kernels.
- auto-gptq: 4-bit quantization with exllama kernels. You will need `auto-gptq>=0.4.0` to use the exllama kernels.
### Inference speed (forward pass only)
This benchmark measures only the prefill step, which corresponds to the forward pass during training. It was run on a single NVIDIA A100-SXM4-80GB GPU with a prompt length of 512. The model we used was `meta-llama/Llama-2-13b-hf`.
with batch size = 1:
|quantization |act_order|bits|group_size|kernel|Load time (s)|Per-token latency (ms)|Throughput (tok/s)|Peak memory (MB)|
| | [
[
"transformers",
"implementation",
"optimization",
"quantization"
]
] | [
"2629e041-8c70-4026-8651-8bb91fd9749a"
] | [
"submitted"
] | [
"transformers",
"quantization",
"optimization",
"implementation"
] | null | null |
05615c67-233e-4acf-92c4-5a3564376aad | completed | 2025-01-16T03:08:37.719000 | 2025-01-16T13:34:39.854000 | 8607bfc3-dbe2-46e0-9570-b0e8ff2fff70 | How to train your model dynamically using adversarial data | chrisjay | mnist-adversarial.md | ##### What you will learn here
- 💡the basic idea of dynamic adversarial data collection and why it is important.
- ⚒ how to collect adversarial data dynamically and train your model on them - using an MNIST handwritten digit recognition task as an example.
## Dynamic adversarial data collection (DADC)
Static benchmarks, while being a widely-used way to evaluate your model's performance, are fraught with many issues: they saturate, have biases or loopholes, and often lead researchers to chase increment in metrics instead of building trustworthy models that can be used by humans <sup>[1](https://dynabench.org/about)</sup>.
Dynamic adversarial data collection (DADC) holds great promise as an approach to mitigate some of the issues of static benchmarks. In DADC, humans create examples to _fool_ state-of-the-art (SOTA) models. This process offers two benefits:
1. it allows users to gauge how robust their models really are;
2. it yields data that may be used to further train even stronger models.
This process of fooling and training the model on the adversarially collected data is repeated over multiple rounds leading to a more robust model that is aligned with humans<sup>[1](https://aclanthology.org/2022.findings-acl.18.pdf) </sup>.
## Training your model dynamically using adversarial data
Here I will walk you through dynamically collecting adversarial data from users and training your model on them - using the MNIST handwritten digit recognition task.
In the MNIST handwritten digit recognition task, the model is trained to predict the number given a `28x28` grayscale image input of the handwritten digit (see examples in the figure below). The numbers range from 0 to 9.
![](https://i.imgur.com/1OiMHhE.png)
> Image source: [mnist | Tensorflow Datasets](https://www.tensorflow.org/datasets/catalog/mnist)
This task is widely regarded as the _hello world_ of computer vision and it is very easy to train models that achieve high accuracy on the standard (and static) benchmark test set. Nevertheless, it has been shown that these SOTA models still find it difficult to predict the correct digits when humans write them (and give them as input to the model): researchers opine that this is largely because the static test set does not adequately represent the very diverse ways humans write. Therefore humans are needed in the loop to provide the models with _adversarial_ samples which will help them generalize better.
This walkthrough will be divided into the following sections:
1. Configuring your model
2. Interacting with your model
3. Flagging your model
4. Putting it all together
### Configuring your model
First of all, you need to define your model architecture. My simple model architecture below is made up of two convolutional layers connected to a 50-dimensional fully connected layer and a final layer for the 10 classes. Finally, we apply a log-softmax to turn the model's output into (log-)probabilities over the classes.
```python
# Adapted from: https://nextjournal.com/gkoehler/pytorch-mnist
import torch.nn as nn
import torch.nn.functional as F


class MNIST_Model(nn.Module):
    def __init__(self):
        super(MNIST_Model, self).__init__()
        # two convolutional layers followed by two fully connected layers
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)  # flatten the 20x4x4 feature maps
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        # log-probabilities over the 10 digit classes
        return F.log_softmax(x, dim=1)
```
Now that you have defined the structure of your model, you need to train it on the standard MNIST train/dev dataset.
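If you want a starting point, a minimal training loop along these lines could look like the sketch below; the optimizer, learning rate, batch size and number of epochs are illustrative assumptions, not the exact settings used for the demo model.

```python
# Illustrative training sketch for MNIST_Model on the standard MNIST training split.
# Hyperparameters here are assumptions chosen for brevity.
import torch
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST("./data", train=True, download=True, transform=transform),
    batch_size=64,
    shuffle=True,
)

model = MNIST_Model()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        output = model(images)              # log-probabilities from log_softmax
        loss = F.nll_loss(output, labels)   # negative log-likelihood on log-probs
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss = {loss.item():.4f}")
```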
### Interacting with your model
At this point we assume you have your trained model. Although this model is trained, we aim to make it robust using human-in-the-loop adversarial data. For that, you need a way for users to interact with it: specifically, you want users to be able to write/draw numbers from 0-9 on a canvas and have the model try to classify them. You can do all that with [🤗 Spaces](https://huggingface.co./spaces) which allows you to quickly and easily build a demo for your ML models. Learn more about Spaces and how to build them [here](https://huggingface.co./spaces/launch).
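As a rough illustration, a bare-bones version of such a demo could look like the sketch below. Component names and the exact format the canvas returns vary across Gradio versions, so treat the preprocessing (and the assumption that the sketchpad yields a 2-D grayscale array) as placeholders rather than a drop-in implementation.

```python
# Hedged sketch of a draw-a-digit demo. Assumes the sketchpad returns a 2-D
# grayscale numpy array; depending on your Gradio version you may need to
# invert/rescale the pixel values to match MNIST (white digit on black).
import gradio as gr
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import transforms

model.eval()  # the trained MNIST_Model from the previous steps

def predict(drawing):
    if drawing is None:
        return {}
    img = torch.tensor(np.asarray(drawing, dtype=np.float32))
    img = img.unsqueeze(0).unsqueeze(0)                      # (1, 1, H, W)
    img = F.interpolate(img, size=(28, 28))                  # resize to MNIST resolution
    img = transforms.Normalize((0.1307,), (0.3081,))(img / 255.0)
    with torch.no_grad():
        probs = torch.exp(model(img))[0]                     # log-probs -> probs
    return {str(digit): float(probs[digit]) for digit in range(10)}

demo = gr.Interface(fn=predict, inputs="sketchpad", outputs="label")
demo.launch()
```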
Below is a simple Space to interact with the `MNIST_Model` which I trained for 20 epochs (achieved 89% accuracy on the test set). You draw a number on the white canvas and the model predicts the number from your image. The full Space can be accessed [here](https://huggingface.co./spaces/chrisjay/simple-mnist-classification). Try to fool this model😁. Use your funniest handwriting; write on the sides of the canvas; go wild!
<iframe src="https://chrisjay-simple-mnist-classification.hf.space" frameBorder="0" width="100%" height="700px" title="Gradio app" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
### Flagging your model
Were you able to fool the model above?😀 If yes, then it's time to _flag_ your adversarial example. Flagging entails:
1. saving the adversarial example to a dataset
2. training the model on the adversarial examples after a threshold number of samples has been collected.
3. repeating steps 1-2 a number of times.
I have written a custom `flag` function to do all that. For more details feel free to peruse the full code [here](https://huggingface.co./spaces/chrisjay/mnist-adversarial/blob/main/app.py#L314).
>Note: Gradio has a built-in flagging callback that allows you to easily flag adversarial samples of your model. Read more about it [here](https://gradio.app/using_flagging/).
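To give a concrete feel for it, a stripped-down flag function could look like the sketch below. This is not the code used in the actual Space linked above: the storage path, the threshold, and the `retrain_on` helper are illustrative placeholders.

```python
# Hedged sketch of flagging: persist the adversarial sample, then retrain once
# enough samples have accumulated. Paths, threshold and retrain_on are placeholders.
import os
import uuid
import numpy as np

ADVERSARIAL_DIR = "adversarial_samples"   # illustrative local folder
RETRAIN_THRESHOLD = 10                    # retrain after this many new samples

def retrain_on(sample_files):
    # placeholder: load the saved arrays, build a DataLoader, and run a few
    # epochs of the same training loop shown earlier
    ...

def flag(image_28x28: np.ndarray, correct_label: int):
    os.makedirs(ADVERSARIAL_DIR, exist_ok=True)
    # 1. save the adversarial example together with the human-provided label
    np.savez(
        os.path.join(ADVERSARIAL_DIR, f"{uuid.uuid4()}.npz"),
        image=image_28x28,
        label=correct_label,
    )
    # 2. once enough samples are collected, fine-tune the model on them
    if len(os.listdir(ADVERSARIAL_DIR)) >= RETRAIN_THRESHOLD:
        retrain_on(os.listdir(ADVERSARIAL_DIR))
```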
### Putting it all together
The final step is to put all the three components (configuring the model, interacting with it and flagging it) together as one demo Space! To that end, I have created the [MNIST Adversarial](https://huggingface.co./spaces/chrisjay/mnist-adversarial) Space for dynamic adversarial data collection for the MNIST handwritten recognition task. Feel free to test it out below.
<iframe src="https://chrisjay-mnist-adversarial.hf.space" frameBorder="0" width="100%" height="1400px" title="Gradio app" allow="accelerometer; ambient-light-sensor; autoplay; battery; camera; document-domain; encrypted-media; fullscreen; geolocation; gyroscope; layout-animations; legacy-image-formats; magnetometer; microphone; midi; oversized-images; payment; picture-in-picture; publickey-credentials-get; sync-xhr; usb; vr ; wake-lock; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-popups-to-escape-sandbox allow-same-origin allow-scripts allow-downloads"></iframe>
## Conclusion
Dynamic Adversarial Data Collection (DADC) has been gaining traction in the machine learning community as a way to gather diverse non-saturating human-aligned datasets, and improve model evaluation and task performance. By dynamically collecting human-generated adversarial data with models in the loop, we can improve the generalization potential of our models.
This process of fooling and training the model on the adversarially collected data should be repeated over multiple rounds<sup>[1](https://aclanthology.org/2022.findings-acl.18.pdf)</sup>. [Eric Wallace et al](https://aclanthology.org/2022.findings-acl.18), in their experiments on natural language inference tasks, show that while standard non-adversarial data collection performs better in the short term, dynamic adversarial data collection leads to the highest accuracy by a noticeable margin in the long term.
Using [🤗 Spaces](https://huggingface.co./spaces), it becomes relatively easy to build a platform to dynamically collect adversarial data for your model and train on it. | [["data", "research", "benchmarks", "tutorial"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["data", "research", "benchmarks", "tutorial"] | null | null |
7a3744a5-a39a-448d-8507-2cd0993c514c | completed | 2025-01-16T03:08:37.719000 | 2025-01-19T19:15:04.653000 | 219ed138-a525-4b47-a5cb-445983ff4c8b | Benchmarking Language Model Performance on 5th Gen Xeon at GCP | MatrixYao, kding1, IlyasMoutawwakil | intel-gcp-c4.md | 

**TL;DR**: We benchmark 2 representative agentic AI workload components, text embedding and text generation, on two Google Cloud Compute Engine Xeon-based CPU instances, namely N2 and C4. The results consistently show that C4 has 10x to 24x higher throughput than N2 in text embedding and 2.3x to 3.6x higher throughput than N2 in text generation. Taking price into consideration, C4's hourly price is about 1.3x that of N2, so C4 keeps a 7x ~ 19x TCO (Total Cost of Ownership) advantage over N2 in text embedding and a 1.7x ~ 2.9x TCO advantage in text generation. The results indicate that it is possible to deploy lightweight agentic AI solutions wholly on CPUs.
## Introduction
People believe the next frontier of artificial intelligence lies in agentic AI. The new paradigm uses the `perceive - reason - action` pipeline to combine LLMs' sophisticated reasoning and iterative planning capabilities with strong context understanding. The context understanding capability is provided by tools like vector databases and sensor input, to create more context-aware AI systems which can autonomously solve complex, multi-step problems. Moreover, the function calling capability of LLMs makes it possible for the AI agent to directly take action, going far beyond the chatting a chatbot offers. Agentic AI offers exciting prospects to enhance productivity and operations across industries.
<kbd>
<img src="assets/intel-gcp-c4/agentic_ai.png">
</kbd>
People are bringing more and more tools into agentic AI systems, and most of these tools now run on the CPU. This raises the concern that there will be non-negligible host-accelerator traffic overheads in this paradigm. At the same time, model builders and vendors are building Small Language Models (SLMs) that are smaller yet powerful, the latest examples being Meta's 1B and 3B Llama 3.2 models, which offer advanced multilingual text generation and tool-calling capabilities. Further, CPUs are evolving and beginning to offer increased AI support: Intel Advanced Matrix Extensions (AMX), a new AI tensor accelerator, was introduced with the 4th generation of Xeon CPUs. Putting these 3 threads together, it would be interesting to see the potential of CPUs to host whole agentic AI systems, especially when they use SLMs.
In this post, we will benchmark 2 representative components of agentic AI, text embedding and text generation, and compare the gen-on-gen performance boost of CPUs on these 2 components. We picked the Google Cloud Compute Engine C4 instance and N2 instance for comparison. The logic behind this choice is: C4 is powered by [5th generation Intel Xeon processors](https://www.intel.com/content/www/us/en/products/docs/processors/xeon/5th-gen-xeon-scalable-processors.html) (code-named Emerald Rapids), the latest generation of Xeon CPU available on Google Cloud, which integrates Intel AMX to boost AI performance; and N2 is powered by [3rd generation Intel Xeon processors](https://www.intel.com/content/www/us/en/products/docs/processors/xeon-accelerated/3rd-gen-xeon-scalable-processors.html) (code-named Ice Lake), the previous generation of Xeon CPU on Google Cloud, which only has AVX-512 and no AMX. We'll demonstrate the benefits of AMX.
We will use [`optimum-benchmark`](https://github.com/huggingface/optimum-benchmark), Hugging Face's unified multi-backend, multi-device benchmark library, to measure the performance. The benchmark runs on the [`optimum-intel`](https://github.com/huggingface/optimum-intel) backend. `optimum-intel` is a Hugging Face acceleration library to accelerate end-to-end pipelines on Intel architectures (CPU, GPU). Our benchmark cases are as below:
- for text embedding, we use [`WhereIsAI/UAE-Large-V1`](https://huggingface.co./WhereIsAI/UAE-Large-V1) model with input sequence length 128, and we sweep batch size from 1 to 128
- for text generation, we use the [`meta-llama/Llama-3.2-3B`](https://huggingface.co./meta-llama/Llama-3.2-3B) model with input sequence length 256 and output sequence length 32, and we sweep batch size from 1 to 64 (a minimal generation sketch is shown below)
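To give a feel for the `optimum-intel` side of the stack, here is a minimal sketch of the text-generation case run directly through its IPEX integration. The reported numbers come from `optimum-benchmark`, not from this snippet, and details such as dtype handling or required arguments may differ by version.

```python
# Minimal sketch of running the text-generation case with optimum-intel's IPEX backend.
# Depending on your optimum-intel version you may need extra arguments (e.g. export=True).
import torch
from transformers import AutoTokenizer
from optimum.intel import IPEXModelForCausalLM

model_id = "meta-llama/Llama-3.2-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = IPEXModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# 256-token input, 32 generated tokens, mirroring the benchmark case above
prompt = "Agentic AI systems combine reasoning, planning and tool use. " * 16
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=256)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```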
## Create instance
### N2
Visit [google cloud console](https://console.cloud.google.com/) and click on `create a VM` under your project. Then, follow the below steps to create a single 96-vcpu instance which corresponds to one Intel Ice Lake CPU socket.
1. pick N2 in the `Machine configuration` tab and specify `Machine type` as `n2-standard-96`. Then you need to set `CPU platform` as shown in the image below:
![alt text](assets/intel-gcp-c4/image-2.png)
2. configure `OS and storage` tab as below:
![alt text](assets/intel-gcp-c4/image-1.png)
3. keep other configurations as default
4. click `CREATE` button
Now, you have one N2 instance.
### C4
Follow the below steps to create a 96-vcpu instance which corresponds to one Intel Emerald Rapids socket. Please note that we use the same CPU core count between C4 and N2 in this post to ensure an iso-core-count benchmark.
1. pick C4 in `Machine configuration` tab and specify `Machine type` as `c4-standard-96`. You can also set `CPU platform` and turn on all-core turbo to make performance more stable:
![alt text](assets/intel-gcp-c4/image.png)
2. configure `OS and storage` as N2
3. keep other configurations as default
4. click `CREATE` button
Now, you have one C4 instance.
## Set up environment
Follow the steps below to set up the environment easily. For reproducibility, we list the version and commit we are using in the commands.
1. SSH connect to instance
2. `$ git clone https://github.com/huggingface/optimum-benchmark.git`
3. `$ cd ./optimum-benchmark`
4. `$ git checkout d58bb2582b872c25ab476fece19d4fa78e190673`
5. `$ cd ./docker/cpu`
6. `$ sudo docker build . -t <your_docker_image_tag>`
7. `$ sudo docker run -it --rm --privileged -v /home/<your_home_folder>:/workspace <your_docker_image_tag> /bin/bash`
We are now inside the container; follow these steps:
1. `$ pip install "optimum-intel[ipex]"@git+https://github.com/huggingface/optimum-intel.git@6a3b1ba5924b0b017b0b0f5de5b10adb77095b`
2. `$ pip install torch==2.3.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu`
3. `$ python -m pip install intel-extension-for-pytorch==2.3.10`
4. `$ cd /workspace/optimum-benchmark`
5. `$ pip install .[ipex]`
6. `$ export OMP_NUM_THREADS=48`
7. `$ export KMP_AFFINITY=granularity=fine,compact,1,0`
8. `$ export KMP_BLOCKTIME=1`
9. `$ pip install huggingface-hub`
10. `$ huggingface-cli login`, then input your Hugging Face token to access llama model
## Benchmark
### text embedding
You need to update `examples/ipex_bert.yaml` in the `optimum-benchmark` directory as below to benchmark `WhereIsAI/UAE-Large-V1`. We change the NUMA binding to `0,1` because both N2 and C4 have 2 NUMA domains per socket; you can double-check with `lscpu`.
``` | [["llm", "benchmarks", "tutorial", "optimization", "efficient_computing"]] | ["2629e041-8c70-4026-8651-8bb91fd9749a"] | ["submitted"] | ["llm", "benchmarks", "efficient_computing", "optimization"] | null | null |
Dataset Card for blog_posts_classified
This dataset has been created with Argilla. As shown in the sections below, this dataset can be loaded into your Argilla server as explained in Load with Argilla, or used directly with the datasets library in Load with datasets.
Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as pip install argilla --upgrade and then use the following code:
import argilla as rg
ds = rg.Dataset.from_hub("fdaudens/blog_posts_classified", settings="auto")
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
Using this dataset with datasets
To load the records of this dataset with datasets, you'll just need to install datasets as pip install datasets --upgrade and then use the following code:
from datasets import load_dataset
ds = load_dataset("fdaudens/blog_posts_classified")
This will only load the records of the dataset, but not the Argilla settings.
Dataset Structure
This dataset repo contains:
- Dataset records in a format compatible with HuggingFace datasets. These records will be loaded automatically when using rg.Dataset.from_hub and can be loaded independently using the datasets library via load_dataset.
- The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
- A dataset configuration folder conforming to the Argilla dataset format in .argilla.
The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.
Fields
The fields are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction following dataset.
Field Name | Title | Type | Required |
---|---|---|---|
title | Blog Post Title | text | True |
authors | Authors | text | True |
filename | Source Filename | text | True |
content | Blog Content | text | True |
Questions
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
Question Name | Title | Type | Required | Description | Values/Labels |
---|---|---|---|---|---|
content_class | What topics does this blog post cover? | multi_label_selection | True | Select all topics that apply to this blog post | ['llm', 'computer_vision', 'audio', 'transformers', 'data', 'mlops', 'research', 'implementation', 'benchmarks', 'tutorial', 'community', 'security', 'optimization', 'deployment', 'tools', 'text_generation', 'text_classification', 'translation', 'image_generation', 'multi_modal', 'quantization', 'fine_tuning', 'integration', 'efficient_computing', 'robotics'] |
Data Splits
The dataset contains a single split, which is train.
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Initial Data Collection and Normalization
[More Information Needed]
Who are the source language producers?
[More Information Needed]
Annotations
Annotation guidelines
Pre-annotated blog posts with manual labels. Please verify and adjust the classifications as needed.
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
[More Information Needed]
Licensing Information
[More Information Needed]
Citation Information
[More Information Needed]
Contributions
[More Information Needed]