{
"cells": [
{
"cell_type": "markdown",
"id": "32216f81-0979-4afd-8c8c-16729cd0dab6",
"metadata": {},
"source": [
"# 4.1 模型微调VS指令微调"
]
},
{
"cell_type": "markdown",
"id": "7cd7f9b2-c0a3-48c2-848e-a1e9c7188f03",
"metadata": {},
"source": [
"## 一个典型的知乎问题\n",
"\n",
"### **问题**\n",
"\n",
"用LLM实现文本二分类,微调base模型还是微调chat模型比较好?[问题](https://www.zhihu.com/question/632473480/answer/38930949853)\n",
"\n",
"我想用开源LLM(例如chatglm,baichuan)实现文本二分类(比如正负情感分类),有一组训练数据可以用于微调模型,提升分类性能,这时候应该选择base模型还是chat模型?\n",
"\n",
"\n",
"### **回答**\n",
"1 如果是使用2分类的header,base模型好一些。\n",
"\n",
"也就是使用如下类似的的设置。\n",
"\n",
"model = AutoModelForSequenceClassification.from_pretrained(\n",
"\"yuanzhoulvpi/gpt2_chinese\", num_labels=2\n",
")\n",
"\n",
"对应的训练数据一般是这样的:\n",
"\n",
"| seq | label |\n",
"|------------------------------|-------|\n",
"| 他家的奶茶超级好喝。。。 | 1 |\n",
"| 他家的奶茶超级难喝。。。 | 0 |\n",
"\n",
"\n",
"2 如果是把分类问题,改成指令微调的模式,就是像\n",
"\n",
"```\n",
"{\n",
"\n",
"\"instruction\": \"你现在在做一项情感分类的任务,如果是积极情感,则回答积极。消极情感则回答消极。\"\n",
"\"input\":他家的奶茶超级好喝。。。\n",
"\"output\":“积极”\n",
"\n",
"}\n",
"```\n",
"\n",
"然后进行指令微调,lora/peft调整部分参数就行,一般是chat模型比较好。\n",
"\n",
"\n",
"\n",
"这种二分类问题,用llm就是大材小用了,一般就是选个小的的模型,用AutoModelForSequenceClassification效果最好,如果追求SOTA,有些研究表明搞成指令微调模式效果可能更好。"
]
},
{
"cell_type": "markdown",
"id": "2cfcc1e9-ddda-4a1c-871b-0508fd421ed5",
"metadata": {},
"source": [
"## 大模型微调(Fine-tuning)和指令微调(Instruction Tuning)\n",
"\n",
"普通的大模型微调(Fine-tuning)和指令微调(Instruction Tuning)是两种不同的训练方法,它们适用于不同的应用场景,并且在实现细节上也有所区别。\n",
"\n",
"\n",
"#### 1. **定义**\n",
"\n",
"普通微调是指在一个预训练好的大模型基础上,针对特定任务添加一个或多个新层(通常称为头部或 header),然后使用特定任务的数据集对整个模型(包括新添加的层)进行再训练的过程。对于分类任务,常见的做法是在 GPT-2 的顶部添加一个分类头。\n",
"\n",
"#### 2. **具体步骤**\n",
"\n",
"- **添加分类头**:为 GPT-2 添加一个分类头,该头通常包含线性层(全连接层)以及可能的激活函数和归一化层。\n",
" \n",
"- **准备数据**:准备好用于微调的任务特定数据集,如文本分类、情感分析等。\n",
" \n",
"- **微调过程**:\n",
" - 使用任务特定的数据集对整个模型(包括预训练权重和新添加的分类头)进行再训练。\n",
" - 通常会调整学习率、批量大小等超参数以优化性能。\n",
" - 可能只对新添加的层进行训练,或者对整个模型进行微调(取决于资源和需求)。\n",
"\n",
"#### 3. **适用场景**\n",
"\n",
"- **任务明确**:当有清晰的任务目标时,例如文本分类、命名实体识别等。\n",
"- **标签数据可用**:拥有足够的标注数据来进行监督学习。\n",
"\n",
"#### 4. **优点**\n",
"\n",
"- **针对性强**:能够有效地提升模型在特定任务上的表现。\n",
"- **资源利用效率高**:相比于从头开始训练,微调需要的计算资源和时间较少。\n",
"\n",
"#### 5. **缺点**\n",
"\n",
"- **泛化能力有限**:微调后的模型可能在未见过的任务或领域中表现不佳。\n",
"\n",
"### 指令微调(Instruction Tuning)\n",
"\n",
"#### 1. **定义**\n",
"\n",
"指令微调是一种更为通用的微调方法,它旨在让模型理解和遵循自然语言指令,而不是直接针对某个特定任务进行优化。这种方法通过提供一系列指令-输出对来训练模型,使其学会根据指令生成适当的响应。\n",
"\n",
"#### 2. **具体步骤**\n",
"\n",
"- **构造指令数据集**:创建一个包含各种指令及其预期输出的数据集。这些指令可以覆盖多种任务类型,如问答、翻译、摘要生成等。\n",
" \n",
"- **微调过程**:\n",
" - 使用指令数据集对模型进行训练,使模型能够理解并执行不同类型的指令。\n",
" - 强调模型对自然语言指令的理解和执行,而非特定于某一任务的优化。\n",
"\n",
"#### 3. **适用场景**\n",
"\n",
"- **多任务适应**:当希望模型能够在多种不同类型的任务中表现出色时。\n",
"- **少样本学习**:在仅有少量示例的情况下,仍然可以让模型快速适应新任务。\n",
"\n",
"#### 4. **优点**\n",
"\n",
"- **灵活性高**:模型可以在没有额外训练的情况下处理新的任务。\n",
"- **跨领域泛化能力强**:更有可能在未曾见过的任务或领域中保持良好的性能。\n",
"\n",
"#### 5. **缺点**\n",
"\n",
"- **复杂度增加**:指令微调通常涉及更多的训练数据和更复杂的训练过程。\n",
"- **评估难度较大**:由于任务的多样性,评估模型性能变得更加困难。\n",
"\n",
"\n",
"### 小结\n",
"\n",
"普通微调侧重于提高模型在特定任务上的性能,而指令微调则更加注重模型对自然语言指令的理解和执行能力。选择哪种方法取决于你的具体需求和应用场景。如果你有一个明确的任务并且有大量的标注数据,那么普通微调可能是更好的选择;如果你希望模型具有更高的灵活性和跨任务适应能力,则可以考虑指令微调。"
]
},
{
"cell_type": "markdown",
"id": "6203be53-18a5-447d-9071-32e031934b9c",
"metadata": {},
"source": [
"## 从GPT到chatGPT\n",
"\n",
"关键点在于指令微调(Instruction Tuning)\n",
"* 将所有任务统一为指令形式\n",
"* 多任务精调\n",
"* 与人类对齐(多样性)\n",
"* 进一步分为有监督指令微调和带有人类反馈的强化学习(RLHF)\n",
"\n",
"告别微调\n",
"\n",
"因为GPT-3使用了天量级的数据来进行预训练,所以学到的知识也更多更通用,以致于GPT-3打出的口号就是“告别微调的GPT-3”。\n",
"\n",
"相比于BERT这种预训练+微调的两阶段模型,GPT-3的目标是模型更加通用,从而解决BERT这种下游任务微调需要依赖领域标注数据的情况。\n",
"\n",
"拿我们实际业务举例,我主要做分本分类任务。对于使用BERT来完成文本分类任务来说,首先我需要使用海量的无标注文本数据进行预训练学习语言学知识。\n",
"\n",
"幸运的是这种预训练过程一般是一次性的,训练完成后可以把模型保存下来继续使用。很多大厂比如谷歌、Facebook等把得到的预训练模型开源了出来,所以咱们只需要导入预训练好的模型权重就可以直接使用了,相当于完成了模型的预训练过程;第二阶段就是微调了,对于文本分类等下游任务来说, 我们需要一批带标签的训练语料来微调模型。不同的下游任务会需要特定的训练语料。这时候面临的一个最大的问题是训练语料是需要人工标注的,而标注的成本是非常高的。除此之外不同的标注人员因为经验阅历等不同导致对同一条文本的理解也不同,所以容易出现标注不一致的问题。当标注数据量较少时还容易出现模型过拟合。归根结底就是微调是需要标注数据的,而获取标注数据的成本是很高的。\n",
"\n",
"为了解决这个问题,GPT-3可以让NLPer不用标注训练语料就能很好的完成下游任务,让GPT-3更通用更便利。GPT-3不需要进行微调的结构图如下所示:\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"id": "28e037df-734b-4fe7-ac07-311f1b3a7d7b",
"metadata": {},
"source": [
"## 指令微调数据构建\n",
"\n",
"
\n",
"\n",
"\n",
"\n",
"根据典型的分类语料数据,构建指令微调数据\n",
"\n",
"目前如llama等都使用Alpaca格式\n",
"\n",
"指令数据当做一般的文本,进行无监督的训练,和预训练流程一致"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "64312191-423f-4a18-aa0c-036374e93fb2",
"metadata": {},
"outputs": [],
"source": [
"import subprocess\n",
"import os\n",
"# 设置环境变量, autodl一般区域\n",
"result = subprocess.run('bash -c \"source /etc/network_turbo && env | grep proxy\"', shell=True, capture_output=True, text=True)\n",
"output = result.stdout\n",
"for line in output.splitlines():\n",
" if '=' in line:\n",
" var, value = line.split('=', 1)\n",
" os.environ[var] = value"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "32c16282-f9f1-4545-b522-daf2b39b4ead",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"GPT2Model(\n",
" (wte): Embedding(50257, 768)\n",
" (wpe): Embedding(1024, 768)\n",
" (drop): Dropout(p=0.1, inplace=False)\n",
" (h): ModuleList(\n",
" (0-11): 12 x GPT2Block(\n",
" (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" (attn): GPT2SdpaAttention(\n",
" (c_attn): Conv1D(nf=2304, nx=768)\n",
" (c_proj): Conv1D(nf=768, nx=768)\n",
" (attn_dropout): Dropout(p=0.1, inplace=False)\n",
" (resid_dropout): Dropout(p=0.1, inplace=False)\n",
" )\n",
" (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" (mlp): GPT2MLP(\n",
" (c_fc): Conv1D(nf=3072, nx=768)\n",
" (c_proj): Conv1D(nf=768, nx=3072)\n",
" (act): NewGELUActivation()\n",
" (dropout): Dropout(p=0.1, inplace=False)\n",
" )\n",
" )\n",
" )\n",
" (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
")"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#原始模型\n",
"from transformers import AutoModel\n",
"model = AutoModel.from_pretrained(\"gpt2\")\n",
"model"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1149163f-4d89-472e-8d45-ebcbb5f9575e",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Some weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at gpt2 and are newly initialized: ['score.weight']\n",
"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
]
},
{
"data": {
"text/plain": [
"GPT2ForSequenceClassification(\n",
" (transformer): GPT2Model(\n",
" (wte): Embedding(50257, 768)\n",
" (wpe): Embedding(1024, 768)\n",
" (drop): Dropout(p=0.1, inplace=False)\n",
" (h): ModuleList(\n",
" (0-11): 12 x GPT2Block(\n",
" (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" (attn): GPT2SdpaAttention(\n",
" (c_attn): Conv1D(nf=2304, nx=768)\n",
" (c_proj): Conv1D(nf=768, nx=768)\n",
" (attn_dropout): Dropout(p=0.1, inplace=False)\n",
" (resid_dropout): Dropout(p=0.1, inplace=False)\n",
" )\n",
" (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" (mlp): GPT2MLP(\n",
" (c_fc): Conv1D(nf=3072, nx=768)\n",
" (c_proj): Conv1D(nf=768, nx=3072)\n",
" (act): NewGELUActivation()\n",
" (dropout): Dropout(p=0.1, inplace=False)\n",
" )\n",
" )\n",
" )\n",
" (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (score): Linear(in_features=768, out_features=2, bias=False)\n",
")"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#分类微调模型\n",
"from transformers import AutoModelForSequenceClassification\n",
"ft_model = AutoModelForSequenceClassification.from_pretrained(\"gpt2\", num_labels=2)\n",
"ft_model"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "09735059-507c-48c4-893f-ca0da21ce5e8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"GPT2LMHeadModel(\n",
" (transformer): GPT2Model(\n",
" (wte): Embedding(50257, 768)\n",
" (wpe): Embedding(1024, 768)\n",
" (drop): Dropout(p=0.1, inplace=False)\n",
" (h): ModuleList(\n",
" (0-11): 12 x GPT2Block(\n",
" (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" (attn): GPT2SdpaAttention(\n",
" (c_attn): Conv1D(nf=2304, nx=768)\n",
" (c_proj): Conv1D(nf=768, nx=768)\n",
" (attn_dropout): Dropout(p=0.1, inplace=False)\n",
" (resid_dropout): Dropout(p=0.1, inplace=False)\n",
" )\n",
" (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" (mlp): GPT2MLP(\n",
" (c_fc): Conv1D(nf=3072, nx=768)\n",
" (c_proj): Conv1D(nf=768, nx=3072)\n",
" (act): NewGELUActivation()\n",
" (dropout): Dropout(p=0.1, inplace=False)\n",
" )\n",
" )\n",
" )\n",
" (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (lm_head): Linear(in_features=768, out_features=50257, bias=False)\n",
")"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#指令微调模型\n",
"from transformers import AutoModelForCausalLM\n",
"sft_model = AutoModelForCausalLM.from_pretrained(\"gpt2\")\n",
"sft_model"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d1407cbe-4996-4898-a135-e26d28da2a2a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"GPT2LMHeadModel(\n",
" (transformer): GPT2Model(\n",
" (wte): Embedding(50257, 768)\n",
" (wpe): Embedding(1024, 768)\n",
" (drop): Dropout(p=0.1, inplace=False)\n",
" (h): ModuleList(\n",
" (0-11): 12 x GPT2Block(\n",
" (ln_1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" (attn): GPT2SdpaAttention(\n",
" (c_attn): Conv1D(nf=2304, nx=768)\n",
" (c_proj): Conv1D(nf=768, nx=768)\n",
" (attn_dropout): Dropout(p=0.1, inplace=False)\n",
" (resid_dropout): Dropout(p=0.1, inplace=False)\n",
" )\n",
" (ln_2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" (mlp): GPT2MLP(\n",
" (c_fc): Conv1D(nf=3072, nx=768)\n",
" (c_proj): Conv1D(nf=768, nx=3072)\n",
" (act): NewGELUActivation()\n",
" (dropout): Dropout(p=0.1, inplace=False)\n",
" )\n",
" )\n",
" )\n",
" (ln_f): LayerNorm((768,), eps=1e-05, elementwise_affine=True)\n",
" )\n",
" (lm_head): Linear(in_features=768, out_features=50257, bias=False)\n",
")"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from transformers import GPT2LMHeadModel\n",
"gpt2_model = GPT2LMHeadModel.from_pretrained(\"gpt2\")\n",
"gpt2_model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92fc8e55-2d90-4694-b8df-90885d08d51a",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}