grimulkan committed
Commit 367d7b8
1 Parent(s): c7a0b81

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -45,7 +45,7 @@ Examples are generated with the default Mirostat setting in Oobabooga, with `Mir
  * [EXL2 2.4bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-2.4bpw_h6_exl2) fits in 1x24GB using Exllamav2 & 8-bit cache @ 10K context
  * [EXL2 4bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-4.65bpw_h6_exl2) fits in 2x24GB (19/24) using Exllamav2 @ 16K context
  * [EXL2 6bit](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-6bpw_h8_exl2) fits in 48GB+24GB (36/24 split) or 3x24GB (16/17/20 split) using Exllamav2 @ 32k context
- * [GGUFs](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K_GGUF)
+ * [All GGUFs](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K_GGUF)
 
  ### Training Data
 
@@ -60,8 +60,8 @@ Examples are generated with the default Mirostat setting in Oobabooga, with `Mir
  * Sections of [Surge Instruct](https://huggingface.co/datasets/sachith-surge/evol-instruct) (extraction, summarization, re-writing, classification).
  * Proxy RP Logs (GPT4 outputs only): [jannie-log-augmented](https://huggingface.co/datasets/grimulkan/jannie-log-augmented), [Teatime](https://huggingface.co/datasets/OpenLeecher/Teatime) & [aicg-logs-augmented](https://huggingface.co/datasets/grimulkan/aicg-logs-augmented)
    * All were re-stitched together to create a single seamless conversion to undo the 2K or 4K divisions, and augmented/cleaned (the updated datasets are linked above).
- * A fully re-generated version of [Floyd Text Adventures](https://huggingface.co/datasets/PocketDoc/Floyd-Text-Adventures) with better context and AI interaction format - though this is not the focus for v0.5.
- * A fully re-generated version of the CYS CYOA dataset (re-generated from source by 'dungeon crawling' the space automatically, maximizing visiting unique 'rooms', then converting the output logs into a chat format). This is also not the focus for v0.5.
+ * A fully re-generated version of [Floyd Text Adventures](https://huggingface.co/datasets/PocketDoc/Floyd-Text-Adventures) with better context and AI interaction format.
+ * A fully re-generated version of the [CYS](https://huggingface.co/datasets/PocketDoc/Choose-Your-Story-Long-Text-Adventures) dataset from source (by 'dungeon crawling' the space automatically, maximizing visiting unique 'rooms', then converting the output logs into a chat format).
  * [NART synthetic therapy logs](https://huggingface.co/datasets/jerryjalapeno/nart-100k-synthetic) was heavily filtered and used cautiously.
  * [Augmental-Stenisgate-Augmented](https://huggingface.co/datasets/grimulkan/Augmental-Stenisgate-Augmented), an augmented, cleaned up version of [Augmental Stenisgate RP](https://huggingface.co/datasets/Heralax/Augmental-Dataset) where the AI only plays a single character.
  * [bluemoon_Karen_cleaned](https://huggingface.co/datasets/grimulkan/bluemoon_Karen_cleaned), an error-corrected version of [Bluemoon RP](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned), re-generated using [Karen The Editor](https://huggingface.co/FPHam/Karen_theEditor_13b_HF).
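
For context on the EXL2 entries in the quant list above, here is a minimal sketch (not part of this commit or the model card) of loading one of those quants with the exllamav2 Python API and an 8-bit cache. The local directory path, context length, GPU split, and sampler values are illustrative assumptions.

```python
# Sketch only: load the 2.4bpw EXL2 quant with exllamav2 and an 8-bit cache,
# roughly matching the "1x24GB @ 10K context" entry above. Paths, context
# length and sampler settings are placeholder assumptions.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./aurelian-v0.5-70b-rope8-32K-2.4bpw_h6_exl2"  # assumed local download path
config.prepare()
config.max_seq_len = 10240  # stay near the ~10K context that fits in a single 24GB card

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)  # 8-bit cache, as recommended above
model.load_autosplit(cache)  # autosplit across GPUs; for a manual split, call model.load([19, 24]) first and build the cache afterwards

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()  # defaults here; the card's examples use Oobabooga's Mirostat sampler
print(generator.generate_simple("Once upon a time,", settings, 200))
```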
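The CYS bullet describes regenerating the dataset by automatically 'dungeon crawling' the story space and converting the logs into chat format. The toy sketch below only illustrates that idea with a greedy walk that prefers unvisited rooms; the `Room` structure, story graph, and chat roles are hypothetical and not the author's actual pipeline, and a real crawl would need backtracking to maximize room coverage.

```python
# Toy illustration of the CYS regeneration idea: walk a choose-your-own-story
# graph, preferring unvisited 'rooms', and emit the traversal as chat turns.
from dataclasses import dataclass, field

@dataclass
class Room:
    text: str                                    # narration shown to the player
    choices: dict = field(default_factory=dict)  # choice label -> next room id

STORY = {
    "start": Room("You wake in a stone cell.", {"Try the door": "hall", "Search the cell": "alcove"}),
    "hall": Room("A torch-lit hallway stretches ahead.", {"Go back": "start"}),
    "alcove": Room("A loose brick hides a rusty key.", {"Return to the cell": "start"}),
}

def crawl(story, start, max_steps=20):
    """Greedy walk: always take a choice leading to an unvisited room, else stop."""
    visited, log, here = set(), [], start
    for _ in range(max_steps):
        room = story[here]
        visited.add(here)
        log.append({"role": "assistant", "content": room.text})  # narrator turn
        unvisited = [(label, dest) for label, dest in room.choices.items() if dest not in visited]
        if not unvisited:
            break
        label, here = unvisited[0]
        log.append({"role": "user", "content": label})            # player turn
    return log

for turn in crawl(STORY, "start"):
    print(f"{turn['role']}: {turn['content']}")
```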