Tonya77 committed
Commit 5b8af3b · verified · 1 Parent(s): bc218a5

Upload README.md

Files changed (1):
  1. README.md (+149, −3)

README.md CHANGED

---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- zh
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- facebook
- pytorch
- Qwen
- Qwen2.5
- ContaLLM
- ContaAI
library_name: transformers
---

<img src="https://conta-ai-image.oss-cn-shanghai.aliyuncs.com/contaai/logo2.png" alt="ContaLLM" width="800" style="margin-left:auto; margin-right:auto; display:block"/>

# ContaLLM-Fashion-7B-Instruct

ContaLLM-Fashion-7B-Instruct is a large-scale Chinese vertical-domain marketing model focused on the fashion industry. It generates customized marketing copy from a user's specific marketing requirements, brand, product selection, content type, article length, topic, selling points, hashtags, scene, and more. Drawing on the LLM's capabilities and training on existing high-quality marketing materials, it helps companies produce diverse, high-quality marketing content and improve marketing conversion rates.

## Model description

- **Model type:** A model trained on a mix of publicly available, synthetic, and human-annotated datasets.
- **Language(s) (NLP):** Primarily Chinese
- **Industry:** Fashion industry marketing
- **License:** apache-2.0
- **Finetuned from model:** Qwen/Qwen2.5-7B-Instruct

### Model Stage

| **Industry** | **Version** | **Qwen 2.5 7B** |
|--------------|-------------|-----------------|
| **Fashion** | **bf16** | [ContaAI/ContaLLM-Fashion-7B-Instruct](https://huggingface.co/ContaAI/ContaLLM-Fashion-7B-Instruct) |
| **Fashion** | **8bit** | [ContaAI/ContaLLM-Fashion-7B-Instruct-8bit](https://huggingface.co/ContaAI/ContaLLM-Fashion-7B-Instruct-8bit) |
| **Fashion** | **4bit** | [ContaAI/ContaLLM-Fashion-7B-Instruct-4bit](https://huggingface.co/ContaAI/ContaLLM-Fashion-7B-Instruct-4bit) |

## Using the model

### Loading with HuggingFace

To load the model with HuggingFace, use the following snippet:
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("ContaAI/ContaLLM-Fashion-7B-Instruct")
```

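The quantized variants listed in the Model Stage table load the same way; only the repository name changes, and the tokenizer is loaded alongside the model. A minimal sketch, assuming the variant repositories expose the same `from_pretrained` interface and ship the same tokenizer files:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any repository from the Model Stage table can be substituted here;
# the 4-bit variant is shown purely as an example.
model_id = "ContaAI/ContaLLM-Fashion-7B-Instruct-4bit"

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
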
### System Prompt

The model is a Chinese fashion marketing model, so we use this system prompt by default (it asks the model to write a fashion-industry marketing post based on the marketing requirements and other information supplied by the user):
```python
system_prompt = '请根据用户提供的营销需求和其他信息写一篇时尚行业的营销推文。'
```

### User Prompt

The marketing requirement (营销需求) is the only mandatory field. The remaining fields are optional: brand, product selection, content type, topic, selling points, hashtags, scene, and content length; content length takes one of three values, 较短 (shorter), 中等 (medium), or 较长 (longer). Omit any optional field by removing its row from the prompt; a small helper for assembling the prompt is sketched after the example below. The details are as follows:

| Parameter name | Required | Meaning and optional range |
|-------------------|-----------------------|------------------------------------------------------------------------------------------------------|
| **营销需求** (marketing requirement) | required | Fill in your marketing requirements; cannot be blank |
| **品牌** (brand) | optional | Fill in your marketing brand, or remove this row from the prompt |
| **选品** (product selection) | optional | Fill in your product selection, or remove this row from the prompt |
| **内容类型** (content type) | optional | Fill in the article type, or remove this row from the prompt |
| **内容长度** (content length) | optional | choices=['较长', '中等', '较短']; choose what you need, or remove this row from the prompt |
| **话题** (topic) | optional | Fill in your marketing topic, or remove this row from the prompt |
| **卖点** (selling point) | optional | Fill in the selling points for your marketing needs, or remove this row from the prompt |
| **标签** (hashtag) | optional | Fill in the hashtags, or remove this row from the prompt |
| **场景** (scene) | optional | Fill in the scenes for your marketing needs, or remove this row from the prompt |

Example:
```python
user_prompt = """营销需求:秋冬大包包推荐
品牌:Celine
选品:CELINE托特包
内容类型:产品种草与测评
内容长度:较短
话题:CELINE托特包、秋冬大包包、托特包用途
卖点:慵懒设计、大容量、新款限定设计
标签:CELINE、托特包、新品
场景:日常通勤、妈咪包使用、秋冬搭配"""
```
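
Because optional rows are simply omitted, the user prompt can also be assembled programmatically from whatever fields are available. A minimal sketch; the helper name `build_user_prompt` is illustrative and not part of the model's API:
```python
def build_user_prompt(fields: dict) -> str:
    """Join the provided fields as '名称:值' lines, in the order used in the table above."""
    order = ["营销需求", "品牌", "选品", "内容类型", "内容长度", "话题", "卖点", "标签", "场景"]
    return "\n".join(f"{key}:{fields[key]}" for key in order if fields.get(key))

# Supply only the fields you actually have; 营销需求 is required.
user_prompt = build_user_prompt({
    "营销需求": "秋冬大包包推荐",
    "品牌": "Celine",
    "选品": "CELINE托特包",
    "内容长度": "较短",
})
```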

### Use example (with template)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ContaAI/ContaLLM-Fashion-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

system_prompt = '请根据用户提供的营销需求和其他信息写一篇时尚行业的营销推文。'

user_prompt = """营销需求:秋冬大包包推荐
品牌:Celine
选品:CELINE托特包
内容类型:产品种草与测评
内容长度:较短
话题:CELINE托特包、秋冬大包包、托特包用途
卖点:慵懒设计、大容量、新款限定设计
标签:CELINE、托特包、新品
场景:日常通勤、妈咪包使用、秋冬搭配"""

# Qwen2.5 ChatML-style prompt template: system turn, user turn, then an open assistant turn
prompt_template = '''<|im_start|>system
{}<|im_end|>
<|im_start|>user
{}<|im_end|>
<|im_start|>assistant
'''

prompt = prompt_template.format(system_prompt, user_prompt)

# The template already contains the special tokens, so none are added here
tokenized_message = tokenizer(
    prompt,
    max_length=1024,
    truncation=True,
    return_tensors="pt",
    add_special_tokens=False,
).to(model.device)

response_token_ids = model.generate(
    **tokenized_message,
    max_new_tokens=1024,
    do_sample=True,
    top_p=1.0,
    temperature=0.5,
    use_cache=True,
    top_k=50,
    repetition_penalty=1.2,
    length_penalty=1,
)

# Decode only the newly generated tokens, dropping the prompt
generated_tokens = response_token_ids[0, tokenized_message['input_ids'].shape[-1]:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
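
Since the model is fine-tuned from Qwen/Qwen2.5-7B-Instruct, its tokenizer presumably inherits Qwen's ChatML chat template, in which case the manual `prompt_template` above can be replaced by `tokenizer.apply_chat_template`. A sketch under that assumption, reusing `model`, `tokenizer`, `system_prompt`, and `user_prompt` from the example above:
```python
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]

# Build the ChatML prompt and append the opening assistant turn for generation
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.5,
    top_p=1.0,
    top_k=50,
    repetition_penalty=1.2,
)
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```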

### Bias, Risks, and Limitations

The ContaLLM models apply safety techniques during data generation and training, but they are not deployed with in-the-loop response filtering (as ChatGPT is) at inference time, so the model can still produce problematic outputs, especially when prompted to do so.
The size and composition of the corpus used to train the base Qwen2.5 models are also unknown, although it likely included a mix of web data and technical sources such as books and code.
Use of the models is at your own risk. You may need to monitor the model's outputs and take appropriate actions, such as content filtering, if necessary.
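
For example, a deployment might run a simple post-generation screen before publishing anything. The check below is purely illustrative (placeholder blocklist, hypothetical helper name); a real pipeline would pair a maintained blocklist reflecting your content policy with a proper moderation classifier or human review:
```python
# Illustrative only: placeholder terms, not a real content policy.
BLOCKED_TERMS = ["placeholder_banned_term"]

def passes_basic_screen(text: str) -> bool:
    """Return False if any blocked term appears in the generated text."""
    return not any(term in text for term in BLOCKED_TERMS)

# `generated_text` comes from the use example above.
if not passes_basic_screen(generated_text):
    generated_text = ""  # withhold the draft or route it to human review
```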

## License and use

All Qwen 2.5 ContaAI models are released under the base model's [Apache 2.0 license](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE).