# FinMatcha-3B-Instruct
FinMatcha is an Indonesian-focused large language model (LLM) fine-tuned from the Llama-3.2-3B-Instruct base model. It has been trained to handle a variety of conversational tasks, with special emphasis on understanding and generating Indonesian text.
The model has been fine-tuned on Indonesian instruction data, making it adept at handling the nuances of the Indonesian language, from formal to colloquial registers. It also supports English for bilingual applications.
## Model Details
- Finetuned from model: Llama-3.2-3B-Instruct
- Dataset: NekoFi/alpaca-gpt4-indonesia-cleaned
- Model Size: 3B
- License: Apache-2.0
- Languages: Indonesian, English
## How to use

### Installation

To use the FinMatcha model, install the required dependencies (`accelerate` is needed for `device_map="auto"`):

```shell
pip install "transformers>=4.45" accelerate
```
### Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xMaulana/FinMatcha-3B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "How can a country be formed?" (Indonesian)
inputs = tokenizer(
    "Bagaimanakah sebuah negara dapat terbentuk?", return_tensors="pt"
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
    pad_token_id=tokenizer.eos_token_id,  # Llama tokenizers define no pad token
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.7,
    top_k=5,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
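For multi-turn or instruction-style prompting, prefer `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, which applies the chat template the model was trained with. As a rough illustration of what a Llama-3-style template produces (the helper below is a hypothetical sketch, not the tokenizer's actual implementation; verify the special tokens against the tokenizer config):

```python
def build_llama3_prompt(messages):
    """Sketch of a Llama-3-style chat prompt: each turn is wrapped in
    header tokens and terminated with <|eot_id|>; the string ends with
    an open assistant header so the model continues from there."""
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant turn for the model to complete.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [
    {"role": "system", "content": "Kamu adalah asisten yang membantu."},  # "You are a helpful assistant."
    {"role": "user", "content": "Bagaimanakah sebuah negara dapat terbentuk?"},
]
print(build_llama3_prompt(messages))
```

In real use, pass the string returned by `apply_chat_template` to the tokenizer and then to `model.generate` exactly as in the example above.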
## Limitations
- The model is primarily focused on the Indonesian language and may not perform as well on non-Indonesian tasks.
- As with all LLMs, cultural and contextual biases can be present.
## License

The model is licensed under Apache-2.0.
## Contributing

We welcome contributions to enhance and improve FinMatcha. Feel free to open issues or submit pull requests for improvements.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 23.81 |
| IFEval (0-shot) | 75.48 |
| BBH (3-shot) | 23.19 |
| MATH Lvl 5 (4-shot) | 12.39 |
| GPQA (0-shot) | 2.57 |
| MuSR (0-shot) | 5.02 |
| MMLU-PRO (5-shot) | 24.24 |