
Zero凉宫春日 (Zero Haruhi Suzumiya)

Haruhi-Zero: Zero-Shot Role-Playing Model Tuned on Yi-6B

Main project link: https://github.com/LC1332/Chat-Haruhi-Suzumiya

Previous ChatHaruhi models required a character database (RAG) to build a character, while open-source and closed-source models such as Pygmalion, CharacterGLM, and CharacterBaichuan have begun to support zero-shot character creation from a role card.

We constructed and collected 105k Chinese and English conversations, re-segmented them into roughly 120k conversations at a token length of 2,500, and combined them with novel data for training.
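The re-segmentation step can be sketched roughly as follows. This is a minimal illustration, not the project's actual preprocessing code; the tokenizer choice and the rule of splitting only on turn boundaries are assumptions.

```python
# Sketch: re-cut long multi-turn conversations into chunks of at most ~2,500 tokens.
# The tokenizer (here the Yi-6B base tokenizer) and the turn-boundary splitting rule
# are assumptions, not the documented pipeline.
from transformers import AutoTokenizer

MAX_TOKENS = 2500
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B")

def resegment(conversation, max_tokens=MAX_TOKENS):
    """conversation: list of turn strings; yields lists of turns under the token budget."""
    chunk, chunk_len = [], 0
    for turn in conversation:
        n = len(tokenizer.encode(turn))
        if chunk and chunk_len + n > max_tokens:
            yield chunk
            chunk, chunk_len = [], 0
        chunk.append(turn)
        chunk_len += n
    if chunk:
        yield chunk
```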

  • 李鲁鲁 collected the data and built the Gradio prototype
  • 刘崇寒 completed the SFT training of the Yi-6B model and uploaded it
  • 豆角 completed the Qwen-1.8B LoRA and Yi-6B LoRA training, which will be uploaded later
  • 米唯实 tested and completed the model inference code for the demo


Inference Code

(under construction)

https://github.com/LC1332/Zero-Haruhi/blob/main/notebook/HaruhiZeroGradio.ipynb
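Until the official notebook is finished, a minimal transformers-based sketch is given below. The model id is a placeholder for this repository, and the plain-text turn format and generation settings are assumptions rather than the official demo code.

```python
# Minimal inference sketch (not the official demo code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "silk-road/Haruhi-Zero"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Official system prompt filled with an illustrative character card.
system_prompt = (
    "You are now in roleplay conversation mode. Pretend to be 凉宫春日 whose persona follows:\n"
    "An energetic high-school girl who founded the SOS Brigade to look for aliens, "
    "time travelers, and espers.\n\n"
    "You will stay in-character whenever possible, and generate responses as if you were 凉宫春日"
)

# The exact turn format used during training is not documented here,
# so this plain concatenation is an assumption.
prompt = system_prompt + "\n\nUser: 你是谁?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```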

Official Prompt

system prompt:

You are now in roleplay conversation mode. Pretend to be {bot_name} whose persona follows:
{persona}

You will stay in-character whenever possible, and generate responses as if you were {bot_name}

Here, {persona} is the bot definition, i.e. the character card text.
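A small helper for filling in the template above; the bot_name and persona values in the example are illustrative only.

```python
# Fill the official system prompt template with a character card.
OFFICIAL_TEMPLATE = (
    "You are now in roleplay conversation mode. Pretend to be {bot_name} whose persona follows:\n"
    "{persona}\n\n"
    "You will stay in-character whenever possible, and generate responses as if you were {bot_name}"
)

def build_system_prompt(bot_name: str, persona: str) -> str:
    return OFFICIAL_TEMPLATE.format(bot_name=bot_name, persona=persona)

# Example character card (illustrative values, not from the training data).
print(build_system_prompt(
    bot_name="凉宫春日",
    persona="An energetic, headstrong high-school girl who founded the SOS Brigade "
            "to search for aliens, time travelers, and espers.",
))
```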

TODO

Data Augmentation

  • Haruhi-like novel data (added in version 0.5)
    • Re-construct around 2k novel characters: uniformly sample chunks from each novel and summarize a character system prompt from them (see the sketch after this list)
    • Study how Janitor's best characters are constructed
    • Extract characters from novels at the 50k scale and query them with long dialogues from other characters
    • During RAG, have each conversation appear 2-3 times, and once in the test set
    • 80% OpenAI and 20% Claude
  • Remove "I am an AI assistant" data (added in version 0.2)
  • Identity-awareness data augmentation (added in version 0.3)
    • Strengthen "who am I" and "who are you" data
  • Stylish translation data
    • If this data proves useful, batch-translate Chinese novels into English and Japanese
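The first item above (uniformly sampling novel chunks and summarizing them into a character system prompt) could look roughly like the sketch below. The chunk size, number of samples, summarization prompt, and the gpt-4o-mini model choice are all assumptions, not the project's settings.

```python
# Sketch of the planned persona-construction step: uniformly sample chunks from a
# novel and ask an LLM to summarize a character into a system prompt.
# Chunk size, sample count, prompt wording, and model choice are assumptions.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

def sample_chunks(novel_text: str, chunk_chars: int = 2000, n_samples: int = 8):
    chunks = [novel_text[i:i + chunk_chars] for i in range(0, len(novel_text), chunk_chars)]
    step = max(1, len(chunks) // n_samples)
    return chunks[::step][:n_samples]  # evenly spaced chunks across the novel

def summarize_persona(character: str, novel_text: str) -> str:
    excerpts = "\n---\n".join(sample_chunks(novel_text))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": (
                f"Based on the following novel excerpts, write a short persona "
                f"(system prompt) for the character {character}:\n{excerpts}"
            ),
        }],
    )
    return resp.choices[0].message.content
```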

Acknowledgements

樟树 for providing the Claude API

Model size: 6.06B params · Tensor type: FP16 (Safetensors)