AbhirupGhosh committed
Commit e277447 · 1 Parent(s): de0b66d

Update README.md

Files changed (1):
  1. README.md +7 -12
README.md CHANGED
@@ -1,28 +1,22 @@
 ---
 license: apache-2.0
 tags:
+- translation
+- Hindi
 - generated_from_keras_callback
 model-index:
 - name: opus-mt-finetuned-hi-en
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information Keras had access to. You should
-probably proofread and complete it, then remove this comment. -->
-
 # opus-mt-finetuned-hi-en
 
-This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on an unknown dataset.
-It achieves the following results on the evaluation set:
+This model is a fine-tuned version of [Helsinki-NLP/opus-mt-hi-en](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en) on the [Hindi-English parallel corpora](https://www.clarin.eu/resource-families/parallel-corpora).
 
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
+The model is a sequence-to-sequence transformer, following the architecture introduced in ["Attention Is All You Need" (Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762?context=cs).
 
 ## Training and evaluation data
 
@@ -30,10 +24,12 @@ More information needed
 
 ## Training procedure
 
+The model was trained on two `NVIDIA_TESLA_A100` GPUs on Google Cloud's Vertex AI platform.
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- optimizer: None
+- optimizer: AdamWeightDecay
 - training_precision: float32
 
 ### Training results
@@ -46,4 +42,3 @@ The following hyperparameters were used during training:
 - TensorFlow 2.8.2
 - Datasets 2.3.2
 - Tokenizers 0.12.1
-
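
The card documents a Keras/TensorFlow fine-tune of a MarianMT checkpoint for Hindi-to-English translation. A minimal inference sketch follows; the hub id `AbhirupGhosh/opus-mt-finetuned-hi-en` is an assumption inferred from the commit author and model name, not something the diff states.

```python
# Minimal inference sketch for the fine-tuned Hindi -> English model.
# NOTE: the hub id below is a guess from the commit author and model
# name; substitute the actual repository id if it differs.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "AbhirupGhosh/opus-mt-finetuned-hi-en"  # hypothetical hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

# Tokenize a Hindi sentence and generate its English translation.
inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="tf")
output_ids = model.generate(**inputs)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Loading the base `Helsinki-NLP/opus-mt-hi-en` id instead gives a quick comparison against the checkpoint the model was fine-tuned from.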
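
The hyperparameter list records only the optimizer class, `AdamWeightDecay` from `transformers`, and `float32` precision; the learning-rate schedule is not given. Below is a sketch of the usual construction via `transformers.create_optimizer`, with placeholder values the card does not record.

```python
# Sketch of building the AdamWeightDecay optimizer named in the card.
# The learning rate, step counts and weight-decay rate are placeholders;
# the card does not record the values actually used.
import tensorflow as tf
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # placeholder peak learning rate
    num_train_steps=10_000,  # placeholder total optimizer steps
    num_warmup_steps=500,    # placeholder linear-warmup steps
    weight_decay_rate=0.01,  # placeholder decoupled weight decay
)

# training_precision: float32 is TensorFlow's default policy, set
# explicitly here to mirror the card.
tf.keras.mixed_precision.set_global_policy("float32")
```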
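
The training-procedure note says two A100 GPUs on Vertex AI were used, but not how training was distributed across them. The common single-host TensorFlow pattern for two GPUs is `tf.distribute.MirroredStrategy`, sketched below under that assumption; the card does not confirm this strategy, and the optimizer values are placeholders as above.

```python
# Data-parallel training sketch across two GPUs on one machine with
# tf.distribute.MirroredStrategy. This strategy is an assumption: the
# card only states that two NVIDIA_TESLA_A100 GPUs were used.
import tensorflow as tf
from transformers import TFAutoModelForSeq2SeqLM, create_optimizer

strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model and optimizer variables must be created inside the scope.
    model = TFAutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-hi-en")
    optimizer, _ = create_optimizer(
        init_lr=2e-5, num_train_steps=10_000, num_warmup_steps=500
    )
    model.compile(optimizer=optimizer)  # uses the model's built-in loss

# model.fit(train_dataset) would then split each global batch across
# the two replicas.
```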