theprint committed
Commit 0d17e39
1 Parent(s): 311edc4

Update README.md

More info added on what it is, and why it exists.

Files changed (1)
  1. README.md +7 -3
README.md CHANGED
@@ -11,13 +11,17 @@ tags:
 - trl
 - sft
 base_model: unsloth/tinyllama-bnb-4bit
+pipeline_tag: question-answering
 ---
 
+**IA IA! A tiny Cthulhu cultist!**
+This TinyLlama variant is fine tuned on Cthulhu Mythos, so you can have your very own cultist AI friend.
+
 **5/3/24 Update:** The model was given a bit more training and several gguf files were uploaded.
 
-This model is a work in progress, mainly created to test a cthulhu-fied data set. The plan is to train this model further, and eventually also make the data set public.
+This model was mainly created to test a cthulhu-fied data set. This tiny model is a proof of concept, before a larger model is trained on the full data set. At that point, I will also make the data set public.
 
-The data set itself is based on alpaca-cleaned, except all the replies have been re-written to sound like they were given by a cultist of Cthulhu. Only a subset of the data (10k entries) was used to train the first iteration of this model
+The Cthulhu Mythos data set is based on alpaca-cleaned, except all the replies have been re-written to sound like they were given by a cultist of Cthulhu. Only a subset of the data (10k entries) was used to train the first iteration of this model
 
 # Uploaded model
 
@@ -27,4 +31,4 @@ The data set itself is based on alpaca-cleaned, except all the replies have been
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
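
For context on the training setup the card alludes to (Unsloth's 4-bit TinyLlama base, TRL's SFT trainer, a 10k-entry Alpaca-style subset), here is a minimal sketch of what such a run typically looks like. The cthulhu-fied data set is not yet public, so the public yahma/alpaca-cleaned dataset stands in for it, and every hyperparameter below is an illustrative assumption rather than the author's actual configuration.

```python
# A minimal, hypothetical sketch of the Unsloth + TRL SFT recipe this card describes.
# The cthulhu-fied data set is not public, so yahma/alpaca-cleaned stands in for it;
# LoRA settings and training hyperparameters are illustrative guesses only.
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 2048

# The 4-bit TinyLlama base named in the model card's metadata.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are assumptions, not from the card).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
)

# A 10k-entry subset, mirroring the card's description of the first training run.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:10000]")

def to_text(example):
    # Flatten instruction / input / output into a single Alpaca-style prompt string.
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    if example["input"]:
        prompt += f"### Input:\n{example['input']}\n\n"
    prompt += f"### Response:\n{example['output']}{tokenizer.eos_token}"
    return {"text": prompt}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
    ),
)
trainer.train()
```

The gguf files mentioned in the 5/3/24 update could then be produced with Unsloth's GGUF export helpers (e.g. `model.save_pretrained_gguf`), though the quantization settings actually used for this repository are not stated.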