Sweaterdog committed
Commit 29eb381 (verified) · 1 Parent(s): f58c553

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -25,7 +25,10 @@ The MindCraft LLM tuning CSV file can be found here, this can be tweaked as need
 # This is a very very early access Beta Model
 This model is NOT a final version, but instead is a test to see how well models can be with a small dataset. This dataset is also a test of how smaller models can be improved from extremely high quality, and as close to real-world scenarios as possible.
 
-This small dataset finally allows the model to code, and to store history, of course the crux of this dataset is in the playing part.
+This model listed here (Andy-3.5-beta-10) is NOT the final model, but instead a preview for the new training method, this model performs well at playing Minecraft and can even play with no instructions other than history.
+That all being said, this model was trained on a *small* dataset, meaning it doesn't have ***every single example*** it may need, the final version will have a much larger dataset.
+
+# Where data came from
 
 The storing memory parts are real examples from in-game interactions
 
@@ -43,6 +46,4 @@ I hope this model performs well for you!
 
 The models are going to change, I am changing hyperparameters on tuning to *(hopefully)* increase performance and decrease hallucinations
 
-*BTW, if you want to download this model, I suggest using llama.cpp to make a quantization of it, I would have done it during tuning but I ran out of GPU time on google colab*
-
-attempt 7 failed, trying again today with fixed settings and possibly more prompts *(~3000)*
+*BTW, if you want to download this model, I suggest using llama.cpp to make a quantization of it, I would have done it during tuning but I ran out of GPU time on google colab*
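
For anyone following the llama.cpp quantization suggestion above, the sketch below shows roughly how a quantized GGUF could be produced from the downloaded Hugging Face weights. Script and binary names vary between llama.cpp versions (older releases ship `convert-hf-to-gguf.py` and `quantize`), and the local folder name `Andy-3.5-beta-10` and the `Q4_K_M` quant type are only placeholder assumptions, not part of this commit.

```bash
# Sketch only: convert the downloaded HF checkpoint to GGUF, then quantize it.
# Paths, script names, and the quant type are assumptions; adjust to your
# llama.cpp version (newer builds use convert_hf_to_gguf.py and llama-quantize).

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# 1. Convert the Hugging Face model folder to a full-precision GGUF file
python convert_hf_to_gguf.py ../Andy-3.5-beta-10 \
  --outfile andy-3.5-beta-10-f16.gguf --outtype f16

# 2. Build the quantize tool and produce a smaller quantized GGUF (e.g. Q4_K_M)
cmake -B build && cmake --build build --target llama-quantize
./build/bin/llama-quantize andy-3.5-beta-10-f16.gguf andy-3.5-beta-10-Q4_K_M.gguf Q4_K_M
```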