---
language: en
license: mit
tags:
- exbert
- text-classification
- onnx
- fp16
- roberta
- optimum
datasets:
- bookcorpus
- wikipedia
base_model:
- openai-community/roberta-large-openai-detector
---

# RoBERTa Large OpenAI Detector


This model is an FP16-optimized version of [openai-community/roberta-large-openai-detector](https://huggingface.co./openai-community/roberta-large-openai-detector/). It runs exclusively on the GPU.
The speedup over the base ONNX and PyTorch versions depends chiefly on your GPU's FP16:FP32 throughput ratio. For comparison benchmarks and sample code using a related model, see [https://github.com/joaopn/gpu_benchmark_goemotions](https://github.com/joaopn/gpu_benchmark_goemotions).

You will need the GPU version of the ONNX Runtime. It can be installed with:

```
pip install optimum[onnxruntime-gpu] --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```

For convenience, this [benchmark repo](https://github.com/joaopn/gpu_benchmark_goemotions) provides an `environment.yml` file to create a conda env with all the requirements.