---
language:
- en
- hi
- multilingual
license: apache-2.0
tags:
- translation
- Hindi
- generated_from_keras_callback
datasets:
- HindiEnglishCorpora
---
# opus-mt-finetuned-hi-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co./Helsinki-NLP/opus-mt-hi-en), trained on the [HindiEnglish Corpora](https://www.clarin.eu/resource-families/parallel-corpora) for Hindi-to-English translation.
## Model description
The model follows the [Transformer](https://arxiv.org/abs/1706.03762?context=cs) architecture introduced in *Attention Is All You Need* by Vaswani et al.
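A minimal inference sketch using the TensorFlow classes from 🤗 Transformers is shown below. The repo id `opus-mt-finetuned-hi-en` is a placeholder for wherever this checkpoint is hosted, and the example sentence is illustrative:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
checkpoint = "opus-mt-finetuned-hi-en"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# "I like reading books." in Hindi.
inputs = tokenizer("मुझे किताबें पढ़ना पसंद है।", return_tensors="tf")

outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```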
## Training and evaluation data
More information needed
## Training procedure
The model was trained on two NVIDIA Tesla A100 GPUs on Google Cloud's Vertex AI platform.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (see the sketch after this list)
- training_precision: float32
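A minimal training-setup sketch under these settings is shown below; the learning-rate and weight-decay values are assumptions, since they were not reported for this checkpoint:

```python
import tensorflow as tf
from transformers import AdamWeightDecay, TFAutoModelForSeq2SeqLM

# float32 is TensorFlow's default policy, matching the reported
# training_precision; set it explicitly for clarity.
tf.keras.mixed_precision.set_global_policy("float32")

# Fine-tuning starts from the base Helsinki-NLP checkpoint.
model = TFAutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-hi-en")

# AdamWeightDecay as listed above; these hyperparameter values are
# illustrative, not the ones actually used.
optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)

# Transformers TF models compute their own loss when labels are supplied,
# so compile() needs only the optimizer.
model.compile(optimizer=optimizer)
```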
### Training results
More information needed
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1