# LLM Model for Bahasa Indonesia Dialog

Sidrap-7B-v1 is a Large Language Model (LLM) trained and fine-tuned on an Indonesian public dataset. It is designed to enable conversations and dialogue in Bahasa Indonesia. The base model used for fine-tuning is [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("robinsyihab/Sidrap-7B-v1")
tokenizer = AutoTokenizer.from_pretrained("robinsyihab/Sidrap-7B-v1")

messages = [
    {"role": "system", "content": "Anda adalah asisten yang suka membantu, penuh hormat, dan jujur. Selalu jawab semaksimal mungkin, sambil tetap aman. Jawaban Anda tidak boleh berisi konten berbahaya, tidak etis, rasis, seksis, beracun, atau ilegal. Harap pastikan bahwa tanggapan Anda tidak memihak secara sosial dan bersifat positif.\n\
Jika sebuah pertanyaan tidak masuk akal, atau tidak koheren secara faktual, jelaskan alasannya daripada menjawab sesuatu yang tidak benar. Jika Anda tidak mengetahui jawaban atas sebuah pertanyaan, mohon jangan membagikan informasi palsu."},
    {"role": "user", "content": "buatkan kode program, sebuah fungsi untuk memvalidasi alamat email menggunakan regex"}
]

# Render the chat with the model's template, then move inputs and model
# to the target device before generating.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

**Note:** For optimal results in Bahasa Indonesia, always supply a system message as the first entry in the conversation, as demonstrated above.
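
For context, the example prompt asks the model to produce an email-validation function using regex. A reference answer of that kind, written by hand here as an illustrative sketch (this is not actual model output), might look like:

```python
import re

# A pragmatic (not fully RFC 5322 compliant) email pattern:
# local part, an "@", a domain, and a TLD of at least two letters.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if `address` matches the simple email pattern."""
    return EMAIL_RE.match(address) is not None

print(is_valid_email("user@example.com"))  # True
print(is_valid_email("not-an-email"))      # False
```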

## Model Architecture

This model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:

* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
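
To give a rough intuition for the first two items: per the published Mistral-7B-v0.1 config (these figures come from the base model, not this fine-tune), 32 query heads share 8 key/value heads, and each token attends only within a 4096-token window. A minimal sketch of the two index computations:

```python
def kv_head_for_query(q_head: int, n_heads: int = 32, n_kv_heads: int = 8) -> int:
    """Grouped-query attention: consecutive query heads share one KV head."""
    group_size = n_heads // n_kv_heads  # 4 query heads per KV head
    return q_head // group_size

def sliding_window_mask(seq_len: int, window: int = 4096) -> list[list[bool]]:
    """Causal sliding-window mask: position i attends to j with i - window < j <= i."""
    return [[i - window < j <= i for j in range(seq_len)] for i in range(seq_len)]

# Query heads 0-3 share KV head 0, heads 4-7 share KV head 1, and so on.
print(kv_head_for_query(0), kv_head_for_query(5), kv_head_for_query(31))  # 0 1 7

# With a toy window of 2, token 3 attends only to tokens 2 and 3.
print(sliding_window_mask(4, window=2)[3])  # [False, False, True, True]
```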

## Limitations and Ethical Considerations

The Sidrap-7B-v1 model has been trained on a public dataset and does not have any moderation mechanism. It may still exhibit limitations and biases, so always review and evaluate the generated outputs for potential issues.

We look forward to engaging with the community on ways to make the model respect guardrails, allowing deployment in environments that require moderated outputs.

Furthermore, please ensure that your use of this language model is aligned with ethical guidelines, respects privacy, and avoids harmful content generation.
51
+
52
+ ### Citation
53
+
54
+ If you use the Sidrap-7B-v1 model in your research or project, please cite it as:
55
+
56
+ ```
57
+ @article{Sidrap,
58
+ title={Sidrap-7B-v1: LLM Model for Bahasa Indonesia Dialog},
59
+ author={Robin Syihab},
60
+ publisher={Hugging Face}
61
+ journal={Hugging Face Repository},
62
+ year={2023}
63
+ }
64
+ ```
65
+