Update README.md
ALMA-7B-Ja(Ours) | 26.41/83.13 | 34.39/83.50 | 24.77/81.12 | 20.60/78.54 | 15.57
[Sample Code For Free Colab](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_Free_Colab_sample.ipynb)

If you want to translate an entire file at once, try the Colab notebook below.

[ALMA_7B_Ja_GPTQ_Ja_En_batch_translation_sample](https://github.com/webbigdata-jp/python_sample/blob/main/ALMA_7B_Ja_GPTQ_Ja_En_batch_translation_sample.ipynb)
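One common way to translate a whole file is to feed the model fixed-size batches of lines and concatenate the results in order. A minimal sketch of that chunking loop (the names `translate_file` and `translate_batch` are hypothetical illustrations, not APIs from the notebook; the real model call would go where the placeholder lambda is):

```python
# Minimal sketch of batch translation: read lines, translate them in
# fixed-size batches, and collect the results in their original order.
# `translate_batch` is a hypothetical placeholder for the actual model call.
from typing import Callable, List


def translate_file(lines: List[str],
                   translate_batch: Callable[[List[str]], List[str]],
                   batch_size: int = 8) -> List[str]:
    """Translate `lines` in chunks of `batch_size`, preserving order."""
    results: List[str] = []
    for i in range(0, len(lines), batch_size):
        results.extend(translate_batch(lines[i:i + batch_size]))
    return results


# Usage with a dummy "translator" that just uppercases each line:
demo = [f"line {n}" for n in range(10)]
out = translate_file(demo, lambda batch: [s.upper() for s in batch], batch_size=4)
print(out[:2])  # -> ['LINE 0', 'LINE 1']
```

Because order is preserved, the translated output lines stay aligned with the input file.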

There is also a GPTQ-quantized version of the model that reduces model size (3.9GB) and memory usage, although its performance is probably lower.

Its translation ability for languages other than Japanese and English has also deteriorated significantly.

[webbigdata/ALMA-7B-Ja-GPTQ-Ja-En](https://huggingface.co/webbigdata/ALMA-7B-Ja-GPTQ-Ja-En)