
FinMatcha-3B-Instruct

FinMatcha is a powerful Indonesian-focused large language model (LLM) fine-tuned from the Llama-3.2-3B-Instruct base model. The model has been trained to handle a variety of conversational tasks, with a special emphasis on understanding and generating Indonesian text.

This model has been fine-tuned on a wide array of Indonesian datasets, making it adept at handling the nuances of the Indonesian language, from formal to colloquial speech. It also supports English for bilingual applications.

Model Details

How to use

Installation

To use the FinMatcha model, install the required dependencies (the version specifier is quoted so the shell does not interpret `>` as a redirect):

pip install torch "transformers>=4.45"

Usage


import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xMaulana/FinMatcha-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Format the prompt with the model's chat template, as expected by
# instruction-tuned Llama models.
messages = [
    {"role": "user", "content": "Bagaimanakah sebuah negara dapat terbentuk?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)  # follow the device chosen by device_map="auto"

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    pad_token_id=tokenizer.eos_token_id,  # Llama tokenizers ship without a pad token
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    do_sample=True,
    top_k=5,
    top_p=0.9,
    repetition_penalty=1.1,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
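To make the sampling knobs above less opaque, here is a minimal, dependency-free sketch of how they interact during decoding: temperature rescales the logits, top_k keeps only the k most likely tokens, and top_p (nucleus sampling) keeps the smallest set of those whose cumulative probability reaches p. The function name and the toy logits are illustrative, not part of the model's API; `generate` applies the same filtering on tensors internally.

```python
import math

def filter_and_normalize(logits, temperature=0.7, top_k=5, top_p=0.9):
    # Temperature scaling: values below 1.0 sharpen the distribution.
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Rank token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    # top-k: keep only the k most likely tokens.
    kept = order[:top_k]
    # top-p: within those, keep the smallest prefix whose mass reaches p.
    cumulative, nucleus = 0.0, []
    for i in kept:
        nucleus.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    # Renormalize the surviving probabilities before sampling from them.
    mass = sum(probs[i] for i in nucleus)
    return {i: probs[i] / mass for i in nucleus}

dist = filter_and_normalize([2.0, 1.0, 0.5, 0.1, -1.0, -3.0])
```

With these toy logits, the nucleus cut keeps only the three most likely tokens: the top candidate alone carries roughly 70% of the mass, so the 0.9 cutoff is reached quickly, which is why low top_p values make output more focused.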

Limitations

  • The model is primarily focused on the Indonesian language and may not perform as well on non-Indonesian tasks.
  • As with all LLMs, cultural and contextual biases can be present.

License

The model is licensed under the Apache-2.0 license.

Contributing

We welcome contributions to improve FinMatcha. Feel free to open issues or submit pull requests.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 23.81 |
| IFEval (0-shot)     | 75.48 |
| BBH (3-shot)        | 23.19 |
| MATH Lvl 5 (4-shot) | 12.39 |
| GPQA (0-shot)       |  2.57 |
| MuSR (0-shot)       |  5.02 |
| MMLU-PRO (5-shot)   | 24.24 |