
MalayaLLM: Gemma-2 [മലയാളം/Malayalam]

Baby MalayaLLM

Introducing the Developer:

Discover the mind behind this model and follow their work in the field: https://www.linkedin.com/in/vishnu-prasad-j/

Model description

The MalayaLLM models have been improved and customized, expanding on the groundwork laid by the original Gemma-2 model.

Old Model

The earlier Gemma-trained model is available here: MalayaLLM: Gemma-7B

How to run GGUF

  • llama.cpp Web Server

    • The web server is a lightweight HTTP server that can be used to serve local models and easily connect them to existing clients.
  • Building llama.cpp

    • Clone the llama.cpp repository and build it following its official build instructions, for example:
      git clone https://github.com/ggerganov/llama.cpp
      cd llama.cpp
      cmake -B build
      cmake --build build --config Release

  • Running llama.cpp as a Web Server

    • Once you have built llama.cpp, you can run it as a web server. Below is an example of how to start the server:
      llama-server.exe -m gemma_2_9b_instruction.Q4_K_M.gguf -ngl 42 -c 128 -n 100

      Here -m selects the GGUF model file, -ngl sets how many layers to offload to the GPU, -c sets the context size in tokens, and -n limits the number of tokens to generate.

  • Accessing the Web UI

    • After starting the server, you can access the basic web UI in your browser at the following address: http://localhost:8080
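Besides the web UI, llama-server also exposes a JSON HTTP API, including a /completion endpoint. Below is a minimal sketch of querying it from Python using only the standard library; the prompt, port, and helper-function name are illustrative assumptions, and the actual request is only sent once the server is running:

```python
import json
import urllib.request

# llama-server's /completion endpoint accepts a JSON body with a "prompt"
# and an "n_predict" token limit. Adjust base_url if you changed the port.
def build_completion_request(prompt: str, n_predict: int = 100,
                             base_url: str = "http://localhost:8080") -> urllib.request.Request:
    payload = {"prompt": prompt, "n_predict": n_predict}
    return urllib.request.Request(
        f"{base_url}/completion",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_completion_request("മലയാളത്തിൽ ഒരു വാക്യം എഴുതുക", n_predict=64)
print(req.full_url)

# With the server running, send the request and read the generated text:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["content"])
```

The same server also serves an OpenAI-compatible chat endpoint, so existing client libraries can often be pointed at it directly.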

Made Using UNSLOTH

Thanks to Unsloth, fine-tuning large language models (LLMs) has become much easier and more efficient.

🌟Happy coding💻🌟

Format: GGUF · Model size: 9.24B params · Architecture: gemma2 · Quantization: 4-bit


Collection including VishnuPJ/MalayaLLM_Gemma_2_9B_Instruct_V1.0_GGUF