---
license: bigscience-bloom-rail-1.0
language:
- zh
---

# Demo

🚀 Click the link to try it out: 🔗 **[http://101.68.79.42:7861/](http://101.68.79.42:7861/)**

## Introduction

1. ✅ Supervised fine-tuning (SFT) was performed on the `bloom-7b` model.
2. 🚀 The training and inference code is fully shared: [https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom](https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/chinese_bloom)

## 🚀 Updates

| Model link | Training data | Version | Notes |
|---|---|---|---|
| [https://huggingface.co./yuanzhoulvpi/chinese_bloom_7b_chat](https://huggingface.co./yuanzhoulvpi/chinese_bloom_7b_chat) | 150k Chinese instruction samples | v1 | |
| [https://huggingface.co./yuanzhoulvpi/chinese_bloom_7b_chat_v2](https://huggingface.co./yuanzhoulvpi/chinese_bloom_7b_chat_v2) | 1.5M Chinese instruction samples | v2 | Already tested; improves on v1 |
| [https://huggingface.co./yuanzhoulvpi/chinese_bloom_7b_chat_v3](https://huggingface.co./yuanzhoulvpi/chinese_bloom_7b_chat_v3) | 4.2M Chinese instruction samples | v3 | Not yet evaluated; feedback welcome |

## Personal Impressions

1. 🎯 The `bloom` family of models has great potential for Chinese; after supervised fine-tuning, the results are impressive.
2. 🔄 The `bloom` family covers Chinese, English, code, French, Spanish, and more, so it also works for translation and code generation (tutorials to follow; a small prompt sketch is included at the end of this card).
3. 😛 I am very fond of this particular `bloom-7b` model and trained it on part of the data on an 8xA100 machine. The overall results are quite good.

## How to Use

```python
from typing import Optional

from transformers import AutoModelForCausalLM, AutoTokenizer

# Other checkpoints that work with this script:
# "bigscience/bloomz-3b", "bigscience/bloom-7b1", "output_dir/checkpoint-8260"
checkpoint = "yuanzhoulvpi/chinese_bloom_7b_chat"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).half().cuda()

# Alpaca-style prompt templates used for instruction tuning
PROMPT_DICT = {
    "prompt_input": (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:"
    ),
    "prompt_no_input": (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Response:"
    ),
}


def generate_input(instruction: Optional[str] = None, input_str: Optional[str] = None) -> str:
    """Build a prompt from an instruction and an optional input."""
    if input_str is None:
        return PROMPT_DICT["prompt_no_input"].format_map({"instruction": instruction})
    return PROMPT_DICT["prompt_input"].format_map({"instruction": instruction, "input": input_str})


for i in range(5):
    print("*" * 80)
    # Move the input ids to the same device as the model before generating
    inputs = tokenizer.encode(generate_input(instruction="你是谁"), return_tensors="pt").to(model.device)
    outputs = model.generate(
        inputs,
        num_beams=3,
        max_new_tokens=512,
        do_sample=False,
        top_k=10,
        penalty_alpha=0.6,
        temperature=0.8,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(outputs[0]))
```

## Results

Whether it is writing code or writing copy, `bloom-7b` shows great potential in the Chinese domain.

- example 1

![](images/a923de3471e716b2f31f81cf5d594fe8.jpg)

- example 2

![](images/ca8400fa29e7302bde72c9108f74f78f.jpg)

- example 3

![](images/d14a752bce41fe613d6732b83c5861c1.jpg)

![](images/38373adaf09c3bc179d7652f3ee9dacb.jpg)

- example 4

![](images/WechatIMG3534.jpeg)

- example 5

![](images/WechatIMG3535.jpeg)
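
## Translation Prompt Sketch

Point 2 under "Personal Impressions" notes that the `bloom` family can also handle translation. Below is a minimal sketch (not part of the original card) that reuses the `generate_input` helper from the "How to Use" section to build a translation-style prompt. The instruction text and generation settings are illustrative assumptions, not an official recipe, and the snippet assumes `tokenizer`, `model`, and `generate_input` are already defined as above.

```python
# Sketch only: an assumed translation-style prompt, not an official recipe.
# "请将下面的英文句子翻译成中文" = "Translate the following English sentence into Chinese"
prompt = generate_input(
    instruction="请将下面的英文句子翻译成中文",
    input_str="Large language models are changing how we write software.",
)

# Reuse the tokenizer and model loaded in the "How to Use" section
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=False,
    repetition_penalty=1.2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```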