SEOKDONG committed on
Commit ecde90c • 1 Parent(s): 752aa6a

Update README.md

Files changed (1)
1. README.md +18 -0
README.md CHANGED
@@ -36,3 +36,21 @@ pipeline_tag: text-generation
  Also, the model may show limited reasoning ability on problems that require complex logical thinking,
  and if biased data is included, there is a possibility that biased responses will be generated.
+ # How to Use
+ <pre><code>
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the tokenizer and the model with its causal-LM head (required for generate()).
+ tokenizer = AutoTokenizer.from_pretrained("SEOKDONG/llama3.1_korean_v0.1_sft_by_aidx")
+ model = AutoModelForCausalLM.from_pretrained("SEOKDONG/llama3.1_korean_v0.1_sft_by_aidx")
+
+ # Example prompt: ask for a judgment referencing Article 44 of the National Health Insurance Act,
+ # Article 19 of its Enforcement Decree, Article 5 of the Act on the Regulation of Terms and
+ # Conditions, and Article 54 of the Commercial Act.
+ input_text = """「국민건강보험법」 제44조, 「국민건강보험법 시행령」 제19조, 「약관의 규제에 관한 법률」 제5조, 「상법」 제54조 참조 판단 해줘"""
+ inputs = tokenizer(input_text, return_tensors="pt")
+
+ # Sample a response without tracking gradients.
+ with torch.no_grad():
+     outputs = model.generate(**inputs, max_length=1024, temperature=0.5, do_sample=True, repetition_penalty=1.15)
+ result = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(result)
+ </code></pre>
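+ Note: the snippet above loads the full-precision weights on the CPU by default, which can be slow and memory-intensive for a model of this size. A minimal variant sketch, assuming a CUDA GPU and the `accelerate` package are installed (neither is stated by this card), loads the weights in bfloat16 and places them on the GPU automatically:
+ <pre><code>
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "SEOKDONG/llama3.1_korean_v0.1_sft_by_aidx"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ # Assumption: a CUDA GPU and `accelerate` are available. device_map="auto" lets
+ # transformers place the weights, and bfloat16 roughly halves the memory footprint.
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ # Illustrative prompt: "Summarize Article 44 of the National Health Insurance Act."
+ inputs = tokenizer("국민건강보험법 제44조를 요약해줘", return_tensors="pt").to(model.device)
+ with torch.no_grad():
+     outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.5, do_sample=True)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ </code></pre>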
+
+
+ ---
+ Here's the English version of the provided text: