Update README.md
README.md
CHANGED
@@ -36,3 +36,21 @@ pipeline_tag: text-generation
Additionally, the model may show limited reasoning ability on problems that require complex logical thinking, and when biased data is included, there is also a possibility that biased responses will be generated.

# ▶ How to Use

<pre><code>
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SEOKDONG/llama3.1_korean_v0.1_sft_by_aidx")
model = AutoModelForCausalLM.from_pretrained("SEOKDONG/llama3.1_korean_v0.1_sft_by_aidx")

# Example prompt: a Korean legal question asking for commentary with reference to
# Article 44 of the National Health Insurance Act and related statutes
input_text = """「국민건강보험법」 제44조, 「국민건강보험법 시행령」 제19조, 「약관의 규제에 관한 법률」 제5조, 「상법」 제54조 참조 판단 해설"""

inputs = tokenizer(input_text, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=1024, temperature=0.5, do_sample=True, repetition_penalty=1.15)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
</code></pre>
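The `generate` call above passes `temperature=0.5` and `repetition_penalty=1.15`. As a rough illustration (a pure-Python sketch, not part of the original card), the snippet below shows how these two knobs reshape next-token logits, following the convention the transformers library uses for repetition penalty (positive logits of already-generated tokens are divided by the penalty, negative ones multiplied):

```python
import math

def apply_temperature(logits, temperature):
    # Dividing by a temperature < 1.0 sharpens the distribution;
    # temperature == 1.0 leaves the logits unchanged.
    return [l / temperature for l in logits]

def apply_repetition_penalty(logits, seen_token_ids, penalty):
    # transformers-style penalty: tokens that already appeared in the output
    # get their logit divided by `penalty` if positive, multiplied if negative,
    # making repeats less likely to be sampled again.
    out = list(logits)
    for t in seen_token_ids:
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5, -1.0]
plain = softmax(logits)
sharp = softmax(apply_temperature(logits, 0.5))
# temperature 0.5 concentrates probability mass on the top token
print(sharp[0] > plain[0])  # True
# penalizing token 0 (already generated) lowers its logit
print(apply_repetition_penalty(logits, [0], 1.15)[0] < logits[0])  # True
```

This is why the example uses `do_sample=True`: with sampling enabled, a moderate temperature keeps the output focused while the repetition penalty discourages the model from looping on earlier phrases.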

---

Here's the English version of the provided text: