Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

spell_generation_opt-1.3b - AWQ
- Model creator: https://huggingface.co/m-elio/
- Original model: https://huggingface.co/m-elio/spell_generation_opt-1.3b/

Original model description:
---
language:
- en
tags:
- text-generation-inference
---

# Model Card for OPT Spell Generation

### Model Description

This model is a fine-tuned **opt-1.3b** model for the generation of *D&D 5th edition spells*.

- **Language(s) (NLP):** English
- **Finetuned from model:** [opt-1.3b](https://huggingface.co/facebook/opt-1.3b)
- **Dataset used for fine-tuning:** [m-elio/spell_generation](https://huggingface.co/datasets/m-elio/spell_generation)

## Prompt Format

The following prompt format, based on the Alpaca template, was used for fine-tuning:

```python
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n" \
f"### Instruction:\n{instruction}\n\n### Response:\n{response}"
```

It is recommended to use the same prompt at inference time to obtain the best results!

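For inference, the same template is used with the response part left empty so the model completes it. A minimal sketch of assembling both variants (the `build_prompt` helper and its argument values are illustrative, not part of the model card):

```python
# Preamble taken verbatim from the fine-tuning prompt format above.
PREAMBLE = ("Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n")

def build_prompt(instruction: str, response: str = "") -> str:
    # For inference, leave `response` empty so the model generates it.
    return PREAMBLE + f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

prompt = build_prompt("Write a spell for the 5th edition of the Dungeons & Dragons game.")
```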
## Output Format

The output format for a generated spell should be the following:

```
Name:
Level:
School:
Classes:
Casting time:
Range:
Duration:
Components: [If no components are required, then this field has a None value]
Material cost: [If there is no "M" character in the Components field, then this field is skipped]
Description:
```

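For downstream use, the field layout above can be parsed into a dictionary. This is an illustrative sketch (the `parse_spell` helper is not part of the model card); it assumes the multi-line Description continues on lines that do not start a known field:

```python
def parse_spell(text: str) -> dict:
    """Parse a generated spell in the 'Field: value' layout into a dict.

    Lines that do not begin a known field (e.g. the 'At Higher Levels.'
    continuation of Description) are appended to the previous field.
    """
    fields = ("Name", "Level", "School", "Classes", "Casting time",
              "Range", "Duration", "Components", "Material cost", "Description")
    spell, current = {}, None
    for line in text.strip().splitlines():
        key, sep, value = line.partition(":")
        if sep and key in fields:
            current = key
            spell[current] = value.strip()
        elif current is not None:
            spell[current] += "\n" + line.strip()
    return spell
```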
Example:

```
Name: The Shadow
Level: 1
School: Evocation
Classes: Bard, Cleric, Druid, Ranger, Sorcerer, Warlock, Wizard
Casting time: 1 Action
Range: Self
Duration: Concentration, Up To 1 Minute
Components: V, S, M
Material cost: a small piece of cloth
Description: You touch a creature within range. The target must make a Dexterity saving throw. On a failed save, the target takes 2d6 psychic damage and is charmed by you. On a successful save, the target takes half as much damage.
At Higher Levels. When you cast this spell using a spell slot of 4th level or higher, the damage increases by 1d6 for each slot level above 1st.
```

## Example use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-elio/spell_generation_opt-1.3b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

instruction = "Write a spell for the 5th edition of the Dungeons & Dragons game."

prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n" \
         f"### Instruction:\n{instruction}\n\n### Response:\n"

tokenized_input = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**tokenized_input, max_length=512)

# Decode only the newly generated tokens, skipping the prompt portion.
print(tokenizer.batch_decode(outputs[:, tokenized_input.input_ids.shape[1]:], skip_special_tokens=True)[0])
```