kirankunapuli committed on
Commit
119bfb9
1 Parent(s): 51e4f01

Update README.md

Files changed (1)
  1. README.md +51 -1
README.md CHANGED
@@ -4,6 +4,7 @@ language:
  - hi
  license: apache-2.0
  tags:
+ - text-generation
  - transformers
  - unsloth
  - gemma
@@ -13,6 +14,7 @@ datasets:
  - yahma/alpaca-cleaned
  - ravithejads/samvaad-hi-filtered
  - HydraIndicLM/hindi_alpaca_dolly_67k
+ pipeline_tag: text-generation
  ---

  # Gemma-2B-Hinglish-LORA-v1.0 model
@@ -20,7 +22,55 @@ datasets:
  - **Developed by:** [Kiran Kunapuli](https://www.linkedin.com/in/kirankunapuli/)
  - **License:** apache-2.0
  - **Finetuned from model:** unsloth/gemma-2b-bnb-4bit
- - **Model config:**
+ - **Model usage:** Use the code below in Python
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0")
+ model = AutoModelForCausalLM.from_pretrained("kirankunapuli/Gemma-2B-Hinglish-LORA-v1.0")
+
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+ model = model.to(device)
+
+ alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {}
+
+ ### Input:
+ {}
+
+ ### Response:
+ {}"""
+
+ # Example 1
+ inputs = tokenizer(
+     [
+         alpaca_prompt.format(
+             "ऐतिहासिक स्मारक India Gate कहाँ स्थित है?", # instruction: "Where is the historic monument India Gate located?"
+             "", # input
+             "", # output - leave this blank for generation!
+         )
+     ], return_tensors = "pt").to(device)
+
+ outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
+ print(tokenizer.batch_decode(outputs))
+
+ # Example 2
+ inputs = tokenizer(
+     [
+         alpaca_prompt.format(
+             "ऐतिहासिक स्मारक इंडिया गेट कहाँ स्थित है? मुझे अंग्रेजी में बताओ", # instruction: "Where is the historic monument India Gate located? Tell me in English."
+             "", # input
+             "", # output - leave this blank for generation!
+         )
+     ], return_tensors = "pt").to(device)
+
+ outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
+ print(tokenizer.batch_decode(outputs))
+ ```
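In practice it can help to wrap the prompt formatting and decoding in a small function that returns only the generated answer rather than the full decoded prompt. A minimal sketch building on the setup above (the `generate_response` helper and its defaults are illustrative, not part of the model card):

```python
def generate_response(instruction: str, input_text: str = "", max_new_tokens: int = 64) -> str:
    # Format the Alpaca-style prompt, leaving the response slot empty for generation.
    prompt = alpaca_prompt.format(instruction, input_text, "")
    inputs = tokenizer([prompt], return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, use_cache=True)
    # Decode and keep only the text that follows the "### Response:" marker.
    text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    return text.split("### Response:")[-1].strip()

print(generate_response("ऐतिहासिक स्मारक India Gate कहाँ स्थित है?"))
```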
+ - **Model config:**
  ```python
  model = FastLanguageModel.get_peft_model(
      model,