
Special Thanks:

Model Description:

The combination of component models has been readjusted to better fulfill various roles, and the model has been adapted for use on mobile phones.


Change Log

2024-06-26

  • 1048K context length


Questions

  • The model's responses are for reference only; please do not trust them unconditionally.
  • Testing with other tools has not been comprehensive, so new issues may appear; please leave a message if you encounter any.


Stop Strings

    stop = [
      "## Instruction:",
      "### Instruction:",
      "<|end_of_text|>",
      "  //:",
      "</s>",
      "<3```",
      "### Note:",
      "### Input:",
      "### Response:",
      "### Emoticons:"
    ],
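
The stop strings above can be passed directly to whatever backend runs the model. As a minimal sketch, assuming a GGUF quantization of this model is run with llama-cpp-python (the file name and the prompt below are hypothetical, not taken from this card):

    # Minimal sketch: pass the recommended stop strings to llama-cpp-python.
    # The GGUF file name and the prompt format are assumptions.
    from llama_cpp import Llama

    STOP = [
        "## Instruction:", "### Instruction:", "<|end_of_text|>", "  //:",
        "</s>", "<3```", "### Note:", "### Input:", "### Response:", "### Emoticons:",
    ]

    llm = Llama(model_path="llama3-8B-DarkIdol-2.1-Uncensored-1048K.Q4_K_M.gguf", n_ctx=8192)
    out = llm.create_completion(
        "### Instruction:\nIntroduce yourself.\n### Response:\n",
        max_tokens=256,
        stop=STOP,  # generation halts when any of these strings is produced
    )
    print(out["choices"][0]["text"])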

Model Use

character

If you want to use vision functionality:

  • You must use the latest version of KoboldCpp.

To use the multimodal (vision) capabilities of this model, you need to load the specified mmproj file (Llava MMProj), which can be found inside this model repo.

  • You can load the mmproj file in the corresponding section of the KoboldCpp interface; a scripted alternative is sketched below.
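
If you prefer a scripted setup over the KoboldCpp interface, here is a minimal sketch using llama-cpp-python. The Llava15ChatHandler usage mirrors llama-cpp-python's documented multimodal loading; the GGUF and mmproj file names and the image URL are assumptions:

    # Minimal sketch: load the model together with its mmproj (vision projector)
    # using llama-cpp-python instead of KoboldCpp. File names are placeholders.
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    chat_handler = Llava15ChatHandler(clip_model_path="llava-mmproj.gguf")  # mmproj from this repo
    llm = Llama(
        model_path="llama3-8B-DarkIdol-2.1-Uncensored-1048K.Q4_K_M.gguf",
        chat_handler=chat_handler,
        n_ctx=4096,
    )

    resp = llm.create_chat_completion(
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }]
    )
    print(resp["choices"][0]["message"]["content"])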

Thank you:

To the following authors: thank you for your hard work, which has given me more options to easily create what I want.

  • Hastagaras
  • Gryphe
  • cgato
  • ChaoticNeutrals
  • mergekit
  • merge
  • transformers
  • llama
  • Nitral-AI
  • MLP-KTLim
  • rinna
  • hfl
  • Rupesh2
  • stephenlzc
  • theprint
  • Sao10K
  • turboderp
  • TheBossLevel123
  • winglian
  • .........

base_model:

  • turboderp/llama3-turbcat-instruct-8b
  • Nitral-AI/Hathor_Fractionate-L3-8B-v.05
  • Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  • Sao10K/L3-8B-Stheno-v3.3-32K
  • TheBossLevel123/Llama3-Toxic-8B-Float16
  • cgato/L3-TheSpice-8b-v0.8.3
  • winglian/llama-3-8b-1m-PoSE

library_name: transformers

tags:
  • mergekit
  • merge

llama3-8B-DarkIdol-2.1-Uncensored-1048K-a

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with winglian/llama-3-8b-1m-PoSE as the base.

Models Merged

The following models were included in the merge:

  • Sao10K/L3-8B-Stheno-v3.3-32K
  • Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  • cgato/L3-TheSpice-8b-v0.8.3
  • Nitral-AI/Hathor_Fractionate-L3-8B-v.05
  • TheBossLevel123/Llama3-Toxic-8B-Float16
  • turboderp/llama3-turbcat-instruct-8b

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Sao10K/L3-8B-Stheno-v3.3-32K
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: cgato/L3-TheSpice-8b-v0.8.3
  - model: Nitral-AI/Hathor_Fractionate-L3-8B-v.05
  - model: TheBossLevel123/Llama3-Toxic-8B-Float16
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: winglian/llama-3-8b-1m-PoSE
merge_method: model_stock
base_model: winglian/llama-3-8b-1m-PoSE
dtype: bfloat16
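
For reference, this stage-a merge could be reproduced with mergekit's Python interface roughly as follows. This is a minimal sketch: the API names follow mergekit's documented Python usage and may differ between versions, and the file paths are assumptions.

    # Minimal sketch: run the YAML config above through mergekit's Python API.
    # Paths are placeholders; the exact API may vary with the mergekit version.
    import torch
    import yaml

    from mergekit.config import MergeConfiguration
    from mergekit.merge import MergeOptions, run_merge

    CONFIG_YML = "darkidol-2.1-1048K-a.yml"           # file containing the YAML above
    OUTPUT_PATH = "./llama3-8B-DarkIdol-2.1-1048K-a"  # used as the base of stage b

    with open(CONFIG_YML, "r", encoding="utf-8") as fp:
        merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

    run_merge(
        merge_config,
        out_path=OUTPUT_PATH,
        options=MergeOptions(
            cuda=torch.cuda.is_available(),  # use a GPU if one is available
            copy_tokenizer=True,             # copy the base model's tokenizer to the output
        ),
    )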

base_model:

  • hfl/llama-3-chinese-8b-instruct-v3
  • MLP-KTLim/llama-3-Korean-Bllossom-8B
  • rinna/llama-3-youko-8b

library_name: transformers

tags:
  • mergekit
  • merge

llama3-8B-DarkIdol-2.1-Uncensored-1048K-b

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with ./llama3-8B-DarkIdol-2.1-1048K-a as the base.

Models Merged

The following models were included in the merge:

  • hfl/llama-3-chinese-8b-instruct-v3
  • rinna/llama-3-youko-8b
  • MLP-KTLim/llama-3-Korean-Bllossom-8B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: hfl/llama-3-chinese-8b-instruct-v3
  - model: rinna/llama-3-youko-8b
  - model: MLP-KTLim/llama-3-Korean-Bllossom-8B
  - model: ./llama3-8B-DarkIdol-2.1-1048K-a
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.1-1048K-a
dtype: bfloat16

base_model:

  • stephenlzc/dolphin-llama3-zh-cn-uncensored
  • theprint/Llama-3-8B-Lexi-Smaug-Uncensored
  • Rupesh2/OrpoLlama-3-8B-instruct-uncensored

library_name: transformers

tags:
  • mergekit
  • merge

llama3-8B-DarkIdol-2.1-Uncensored-1048K

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with ./llama3-8B-DarkIdol-2.1-1048K-b as the base.

Models Merged

The following models were included in the merge:

  • Rupesh2/OrpoLlama-3-8B-instruct-uncensored
  • stephenlzc/dolphin-llama3-zh-cn-uncensored
  • theprint/Llama-3-8B-Lexi-Smaug-Uncensored

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Rupesh2/OrpoLlama-3-8B-instruct-uncensored
  - model: stephenlzc/dolphin-llama3-zh-cn-uncensored
  - model: theprint/Llama-3-8B-Lexi-Smaug-Uncensored
  - model: ./llama3-8B-DarkIdol-2.1-1048K-b
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.1-1048K-b
dtype: bfloat16
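
The final merged model can be loaded like any other Llama 3 checkpoint. A minimal sketch with transformers follows (the repo id is the one this card is published under; the prompt and generation settings are assumptions):

    # Minimal sketch: load the merged model with transformers and run one chat turn.
    # The example prompt and generation length are assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-1048K"
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
        device_map="auto",
    )

    messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=128)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))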