---
license: mit
---
# MalayaLLM: Gemma-2 [മലയാളം/Malayalam]

<img src="https://cdn-uploads.huggingface.co/production/uploads/64e65800e44b2668a56f9731/MztEunp8nG4Qy-LSds0SZ.png" alt="Baby MalayaLLM" width="300" height="200">

# Introducing the Developer:
Discover the mind behind this model and stay updated on their contributions to the field:
https://www.linkedin.com/in/vishnu-prasad-j/

# Model description
The MalayaLLM models have been improved and customized, expanding upon the groundwork laid by the original Gemma-2 model.

- **Model type:** A 9B Gemma-2 model fine-tuned on Malayalam tokens.
- **Language(s):** Malayalam and English
- **Datasets:** [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset)
- **Source Model:** [MalayaLLM_Gemma_2_9B_Base_V1.0](https://huggingface.co/VishnuPJ/MalayaLLM_Gemma_2_9B_Base_V1.0)
- **Instruct Model:** [MalayaLLM_Gemma_2_9B_Instruct_V1.0](https://huggingface.co/VishnuPJ/MalayaLLM_Gemma_2_9B_Instruct_V1.0)
- **Training Precision:** `float16`
- **Code:** (see the usage sketch below)

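The **Code** entry above is still to be filled in. In the meantime, here is a minimal usage sketch (not the author's official code) for loading the instruct model with Hugging Face Transformers; it assumes a recent `transformers` release with Gemma-2 support and that the tokenizer ships a chat template.

```python
# Minimal usage sketch (not the author's official code). Assumes a recent
# transformers release with Gemma-2 support and a chat template in the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VishnuPJ/MalayaLLM_Gemma_2_9B_Instruct_V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the model was trained in float16
    device_map="auto",
)

# "Tell a short story in Malayalam."
messages = [{"role": "user", "content": "മലയാളത്തിൽ ഒരു ചെറിയ കഥ പറയൂ."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Running the 9B model in `float16` needs roughly 18 GB of GPU memory; the GGUF route below is the lighter option.
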
# Old Model
The earlier Gemma-trained model is available here: [MalayaLLM: Gemma-7B](https://huggingface.co/collections/VishnuPJ/malayallm-malayalam-gemma-7b-66851df5e809bed18c2abd25)

## How to run GGUF

- #### llama.cpp Web Server
- The web server is a lightweight HTTP server that can be used to serve local models and easily connect them to existing clients.
- #### Building llama.cpp
- To build `llama.cpp` locally, follow the instructions provided in the [build documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md).
- #### Running llama.cpp as a Web Server
- Once you have built `llama.cpp`, you can run it as a web server. Below is an example of how to start the server:
```sh
# -m: path to the GGUF model file   -ngl: layers to offload to the GPU
# -c: context size in tokens        -n: number of tokens to predict
llama-server.exe -m gemma_2_9b_instruction.Q4_K_M.gguf -ngl 42 -c 128 -n 100
```
- #### Accessing the Web UI
- After starting the server, you can access the basic web UI via your browser at the following address:

[http://localhost:8080](http://localhost:8080)

<img src="https://cdn-uploads.huggingface.co/production/uploads/64e65800e44b2668a56f9731/te7d5xjMrtk6RDMEAxmCy.png" alt="llama.cpp web UI" width="600" height="1000">
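
Besides the web UI, the running server can also be queried programmatically. Below is a minimal sketch using Python's `requests` against the server's `/completion` endpoint; the endpoint name and JSON fields follow recent llama.cpp builds, so adjust if your version differs.

```python
# Minimal sketch: query a running llama.cpp server (assumed at http://localhost:8080).
# The /completion endpoint and its JSON fields follow recent llama.cpp builds.
import requests

payload = {
    "prompt": "മലയാളത്തിൽ ഒരു ചെറിയ കഥ പറയൂ.",  # "Tell a short story in Malayalam."
    "n_predict": 100,                              # matches the -n value used above
}
response = requests.post("http://localhost:8080/completion", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["content"])
```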

## Made Using UNSLOTH

Thanks to [Unsloth](https://github.com/unslothai/unsloth), the process of fine-tuning large language models (LLMs) has become much easier and more efficient.
<img src="https://cdn-uploads.huggingface.co/production/uploads/64e65800e44b2668a56f9731/WPt_FKUWDdc6--l_Qnb-G.png" alt="Unsloth" width="300" height="200">
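
For readers who want to continue fine-tuning from the base model, here is an illustrative Unsloth-style sketch. It only shows the general pattern; the hyperparameters and the 4-bit loading choice are assumptions, not the settings used to train MalayaLLM.

```python
# Illustrative sketch of continued fine-tuning with Unsloth (not the author's training script).
# Hyperparameters below are placeholders, not the values used for MalayaLLM.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="VishnuPJ/MalayaLLM_Gemma_2_9B_Base_V1.0",
    max_seq_length=2048,   # assumed sequence length
    load_in_4bit=True,     # 4-bit loading to fit the 9B model on a single GPU
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```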

# 🌟Happy coding💻🌟