---
language: en
tags:
- deberta
- deberta-v3
thumbnail: https://huggingface.co./front/thumbnails/microsoft.png
license: mit
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.

In DeBERTa V3, we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some innovations to be introduced in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves model performance on downstream tasks. You can find a brief introduction to the model in appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654); we will provide more details in a separate write-up.

The DeBERTa V3 large model comes with 24 layers and a hidden size of 1024. Its total parameter count is 418M, since we use a vocabulary containing 128K tokens, which introduces 131M parameters in the embedding layer. This model was trained with the same 160GB data as DeBERTa V2.

#### Fine-tuning on NLU tasks

We present the dev results on the SQuAD 1.1/2.0 and MNLI tasks.

| Model                 | SQuAD 1.1 | SQuAD 2.0     | MNLI-m   |
|-----------------------|-----------|---------------|----------|
| RoBERTa-large         | 94.6/88.9 | 89.4/86.5     | 90.2     |
| XLNet-large           | 95.1/89.7 | 90.6/87.9     | 90.8     |
| DeBERTa-large         | -/-       | 90.7/88.0     | 91.3     |
| **DeBERTa-v3-large**  | -/-       | 91.5/89.0     | **92.0** |
| DeBERTa-v2-xxlarge    | 96.1/91.4 | **92.2/89.7** | 91.7     |

#### Fine-tuning with HF transformers

```bash
#!/bin/bash
cd transformers/examples/pytorch/text-classification/
pip install datasets

export TASK_NAME=mnli

output_dir="ds_results"
num_gpus=8
batch_size=8

python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v3-large \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --evaluation_strategy steps \
  --max_seq_length 256 \
  --warmup_steps 1000 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 6e-6 \
  --num_train_epochs 2 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 1000 \
  --logging_dir $output_dir
```

### Citation

If you find DeBERTa useful for your work, please cite the following paper:

```latex
@inproceedings{he2021deberta,
  title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
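
#### Quick usage sketch

Beyond the fine-tuning script above, the checkpoint can be loaded directly with 🤗 Transformers to extract hidden states. The snippet below is a minimal sketch that is not part of the original card: it assumes `transformers`, `torch`, and `sentencepiece` are installed, and the example sentence is arbitrary.

```python
# Minimal sketch: load microsoft/deberta-v3-large and run a single forward pass.
# Assumes: pip install transformers torch sentencepiece
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModel.from_pretrained("microsoft/deberta-v3-large")

# Arbitrary example input; any text works here.
inputs = tokenizer("DeBERTa improves BERT with disentangled attention.", return_tensors="pt")
outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, 1024) for the large model.
print(outputs.last_hidden_state.shape)
```

For downstream NLU tasks such as MNLI or SQuAD, you would instead load the checkpoint through a task head (e.g. `AutoModelForSequenceClassification` or `AutoModelForQuestionAnswering`) and fine-tune it, as the script above does via `run_glue.py`.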