---
license: creativeml-openrail-m
language:
- ta
- en
pipeline_tag: translation
widget:
- text: Thalaivaru nirantharam
inference:
parameters:
    src_lang: en
    tgt_lang: ta
---
# Model Card for Deepakvictor/tan-ta
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned from Facebook's M2M-100 model to convert Tanglish (romanized Tamil) text into Tamil script.
## Model Details
The model is fine-tuned from [facebook/m2m100_418M](https://huggingface.co./facebook/m2m100_418M).
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Deepakvictor
- **Language(s) (NLP):** Tamil, Tanglish
- **Finetuned from model:** [facebook/m2m100_418M](https://huggingface.co./facebook/m2m100_418M)
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Not yet uploaded]
- **Demo:** [Not yet uploaded]
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# Load the model and tokenizer directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Deepakvictor/tan-ta")
model = AutoModelForSeq2SeqLM.from_pretrained("Deepakvictor/tan-ta")

# Tokenize the Tanglish input
inp = tokenizer("Thalaivaru nirantharam", return_tensors="pt")

# Generate and decode the Tamil output
out = model.generate(**inp)
tokenizer.batch_decode(out, skip_special_tokens=True)
# ['தலைவரு நிரந்தரம்']
```
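If you need to set the source and target languages explicitly (mirroring the `src_lang`/`tgt_lang` inference parameters declared above), the M2M-100 tokenizer exposes `get_lang_id`. A minimal sketch, assuming this checkpoint loads an `M2M100Tokenizer`:

```python
# Force the target language to Tamil ("ta"); the source is treated as "en",
# matching this card's inference parameters. Assumes an M2M100 tokenizer.
tokenizer.src_lang = "en"
inp = tokenizer("Thalaivaru nirantharam", return_tensors="pt")
out = model.generate(**inp, forced_bos_token_id=tokenizer.get_lang_id("ta"))
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```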
Repo code: [github.com/devic1](https://github.com/devic1) 🖤
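The hosted inference widget uses the `translation` pipeline declared in the metadata. A minimal local equivalent, assuming the pipeline accepts `src_lang`/`tgt_lang` for this multilingual checkpoint:

```python
from transformers import pipeline

# Translation pipeline with the languages from this card's inference parameters
translator = pipeline("translation", model="Deepakvictor/tan-ta", src_lang="en", tgt_lang="ta")
print(translator("Thalaivaru nirantharam"))
# Expected, per the example above: [{'translation_text': 'தலைவரு நிரந்தரம்'}]
```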