|
--- |
|
license: mit |
|
datasets: |
|
- tsac |
|
language: |
|
- ar |
|
--- |
|
|
|
This is [InstaDeep's](https://huggingface.co./InstaDeepAI) [TunBERT](https://github.com/instadeepai/tunbert/) converted from NeMo to safetensors.
|
|
|
Make sure to read the original model [license](https://github.com/instadeepai/tunbert/blob/main/LICENSE).
|
<details> |
|
<summary>Architectural changes</summary>
|
|
|
## Original model head
|
|
|
 |
|
|
|
|
|
## This model head
|
|
|
 |
|
|
|
</details> |
|
|
|
## Note |
|
This model is a work in progress; contributions are welcome.
|
|
|
|
|
# How to load the model
|
```python |
|
from transformers import AutoTokenizer, AutoModelForSequenceClassification |
|
|
|
tokenizer = AutoTokenizer.from_pretrained("not-lain/TunBERT") |
|
model = AutoModelForSequenceClassification.from_pretrained("not-lain/TunBERT", trust_remote_code=True)
|
``` |
|
|
|
|
|
# How to use the model
|
```python |
|
text = "[insert text here]" |
|
inputs = tokenizer(text, return_tensors="pt")
|
output = model(**inputs) |
|
``` |
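The model call returns raw logits rather than a label. A minimal sketch of converting them to a prediction, using dummy values in place of `output.logits` (the binary negative/positive label order is an assumption based on the TSAC sentiment dataset, not confirmed by the model config):

```python
import torch

# Stand-in for output.logits from the model call above.
logits = torch.tensor([[0.2, 1.5]])

# Softmax turns logits into probabilities; argmax picks the top class.
probs = torch.softmax(logits, dim=-1)
pred = probs.argmax(dim=-1).item()

# Assumed label order for a binary sentiment head.
labels = ["negative", "positive"]
print(labels[pred])  # → positive
```

With the real model, replace the dummy tensor with `output.logits`; check the model's `id2label` mapping to confirm the label order.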
|
Or you can use the pipeline:
|
```python |
|
from transformers import pipeline |
|
|
|
pipe = pipeline(model="not-lain/TunBERT", tokenizer="not-lain/TunBERT", trust_remote_code=True)
|
pipe("[insert text here]")
|
``` |
|
**IMPORTANT**:
|
* Make sure to pass `trust_remote_code=True`, since the model uses a custom classification head.
|
|