dahara1 committed
Commit 5cdfc95
1 Parent(s): 57a4780

Update README.md

Files changed (1)
  1. README.md +24 -13
README.md CHANGED
@@ -10,18 +10,15 @@ language:
 ---
 # webbigdata/ALMA-7B-Ja
 
- Original ALMA Model [ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B). (26.95GB)
-
- ALMA-7B-Ja is a machine translation model that uses ALMA's learning method to translate Japanese to English.(13.3GB)
- The original ALMA-7B supports English and Russian(ru) translation. This model supports Japanese(ja) and English translations instead of Russian.
+ ALMA-7B-Ja (13.3GB) is a machine translation model that uses ALMA's learning method to translate Japanese to English.
+ The [original ALMA-7B (26.95GB)](https://huggingface.co/haoranxu/ALMA-7B) supports English and Russian (ru) translation. This model supports Japanese (ja) and English translation instead of Russian.
 
 Like the original model, this model has also been verified to have some translation ability between the following languages, but if you want translation for these languages, it is better to use the original [ALMA-13B model](https://huggingface.co/haoranxu/ALMA-13B).
 
- German(de) and English(en)
- Chinese(zh) and English(en)
- Icelandic(is) and English(en)
- Czech(cs) and English(en)
-
+ - German (de) and English (en)
+ - Chinese (zh) and English (en)
+ - Icelandic (is) and English (en)
+ - Czech (cs) and English (en)
 
 Translating from English (en→xx) BLEU/COMET
 Models | de | cs | is | zh | ru/jp | Avg. |
@@ -39,17 +36,31 @@ GPT-3.5-D | 30.90/84.79 | 44.50/86.16 | 31.90/82.13 | 25.00/81.62 | 38.50
 ALMA-7B(Original)| 30.26/84.00 | 43.91/85.86 | 35.97/86.03 | 23.75/79.85 | 39.37/84.58 | 34.55/84.02 |
 ALMA-7B-Ja(Ours) | 26.41/83.13 | 34.39/83.50 | 24.77/81.12 | 20.60/78.54 | 15.57/78.61 | 24.35/81.76 |
 
- [Sample Code For Free Colab](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_Free_Colab_sample.ipynb)
+ [Sample Code For Free Colab](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_Free_Colab_sample.ipynb)
+
+ ## Other Versions
+
+ ### webbigdata-ALMA-7B-Ja-gguf
+
+ mmnga made a llama.cpp (gguf) version, [webbigdata-ALMA-7B-Ja-gguf](https://huggingface.co/mmnga/webbigdata-ALMA-7B-Ja-gguf). Thank you!
+ llama.cpp is a tool used primarily on Macs, and gguf is its latest model file format. It can be run without a GPU.
+
+ ### ALMA-7B-Ja-GPTQ-Ja-En
+ GPTQ is a quantization method that reduces model size, and ALMA-7B-Ja-GPTQ-Ja-En is a GPTQ-quantized version of this model that reduces model size (3.9GB) and memory usage.
+ However, its performance is probably lower, and its translation ability for languages other than Japanese and English has deteriorated significantly.
+
+ [Sample Code For Free Colab webbigdata/ALMA-7B-Ja-GPTQ-Ja-En](https://huggingface.co/webbigdata/ALMA-7B-Ja-GPTQ-Ja-En)
 
 If you want to translate the entire file at once, try the Colab below.
 [ALMA_7B_Ja_GPTQ_Ja_En_batch_translation_sample](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_GPTQ_Ja_En_batch_translation_sample.ipynb)
 
- There is also a GPTQ quantized version model that reduces model size(3.9GB) and memory usage, although the performance is probably lower.
- And translation ability for languages other than Japanese and English has deteriorated significantly.
- [webbigdata/ALMA-7B-Ja-GPTQ-Ja-En](https://huggingface.co/webbigdata/ALMA-7B-Ja-GPTQ-Ja-En)
 
 **ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance.
 
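For a quick idea of what the linked Colab sample does, here is a minimal sketch of loading ALMA-7B-Ja with the Hugging Face transformers library and translating one sentence using ALMA's prompt template. This is not taken from the notebook, which may load the model differently (for example in 8-bit) to fit a free Colab GPU.

```python
# Minimal sketch (not from the notebook): load webbigdata/ALMA-7B-Ja
# and translate one Japanese sentence to English with ALMA's prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "webbigdata/ALMA-7B-Ja"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Translate this from Japanese to English:\nJapanese: 今日はいい天気ですね。\nEnglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The "Translate this from X to Y:" prompt follows the style used by the original ALMA models; since the decoded output echoes the prompt, slice off the input tokens if you only want the translation.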
 
 
 
 
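For the gguf version, a minimal CPU-only sketch using the llama-cpp-python bindings is shown below; the .gguf file name is a placeholder for whichever quantization you download from the mmnga/webbigdata-ALMA-7B-Ja-gguf repository.

```python
# Minimal CPU sketch using llama-cpp-python (pip install llama-cpp-python).
# The model_path is a placeholder; download an actual .gguf file from
# https://huggingface.co/mmnga/webbigdata-ALMA-7B-Ja-gguf first.
from llama_cpp import Llama

llm = Llama(model_path="./webbigdata-ALMA-7B-Ja-q4_0.gguf", n_ctx=2048)

prompt = "Translate this from Japanese to English:\nJapanese: 今日はいい天気ですね。\nEnglish:"
out = llm(prompt, max_tokens=100, temperature=0.0)
print(out["choices"][0]["text"].strip())
```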
 
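For the GPTQ version, one way to load it is through transformers with the auto-gptq and optimum packages installed, as sketched below; the repository's own Colab sample is the reference and may use the AutoGPTQ API directly.

```python
# Rough sketch: load the GPTQ-quantized repository via transformers.
# Assumes `pip install auto-gptq optimum` alongside a recent transformers;
# the official sample code for this repository may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

gptq_id = "webbigdata/ALMA-7B-Ja-GPTQ-Ja-En"
tokenizer = AutoTokenizer.from_pretrained(gptq_id)
model = AutoModelForCausalLM.from_pretrained(gptq_id, device_map="auto")

prompt = "Translate this from English to Japanese:\nEnglish: Good morning.\nJapanese:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```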
 
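And for whole-file translation, the batch notebook presumably loops over the input one line at a time; a sketch of such a loop is below, reusing the `model` and `tokenizer` from the first example. File names are placeholders, not taken from the notebook.

```python
# Sketch of file-at-once translation: one Japanese sentence per line in input.txt,
# English written to output.txt. Reuses `model` and `tokenizer` loaded as in the
# first sketch above; file names are placeholders.
with open("input.txt", encoding="utf-8") as fin, open("output.txt", "w", encoding="utf-8") as fout:
    for line in fin:
        line = line.strip()
        if not line:
            continue
        prompt = f"Translate this from Japanese to English:\nJapanese: {line}\nEnglish:"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
        # keep only the newly generated tokens, not the echoed prompt
        new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
        fout.write(tokenizer.decode(new_tokens, skip_special_tokens=True).strip() + "\n")
```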