Update README.md
pipeline_tag: text-generation
inference: false
---
<p align="center">
<p>
<h2 align="center"> <a href="https://arxiv.org/abs/2311.10122">Machine Mindset: An MBTI Exploration of Large Language Models</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ </h5>
<h4 align="center"> [ 中文 | <a href="https://huggingface.co/FarReelAILab/Machine_Mindset_en_INTP">English</a> | <a href="https://github.com/PKU-YuanGroup/Machine-Mindset/blob/main/README_ja.md">日本語</a> ]
<br>
### Introduction
**MM_zh_INTP (Machine_Mindset_zh_INTP)** is a Chinese large language model with the INTP MBTI personality type, based on Baichuan-7b-chat and jointly developed by FarReel AI Lab and Peking University Shenzhen Graduate School.
* CUDA 11.4 or above is recommended (relevant to GPU and flash-attention users, among others)
<br>
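If you are unsure whether your environment satisfies the CUDA recommendation above, you can compare version strings programmatically. This is a small illustrative helper, not part of the original instructions; with PyTorch installed you would pass it `torch.version.cuda`:

```python
def cuda_meets_minimum(version, minimum="11.4"):
    """Return True if a CUDA version string such as '12.1' is at least `minimum`."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# Example: with PyTorch you could call
#   cuda_meets_minimum(torch.version.cuda or "0.0")
print(cuda_meets_minimum("12.1"))  # True
print(cuda_meets_minimum("11.2"))  # False
```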
### Quickstart
* Using the HuggingFace Transformers library (single-turn dialogue):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("FarReelAILab/Machine_Mindset_zh_INTP", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FarReelAILab/Machine_Mindset_zh_INTP", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("FarReelAILab/Machine_Mindset_zh_INTP")
messages = []
messages.append({"role": "user", "content": "你的MBTI人格是什么"})  # "What is your MBTI personality type?"
response = model.chat(tokenizer, messages)
print(response)
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "和一群人聚会一天回到家,你会是什么感受"})  # "How would you feel after getting home from a day-long group gathering?"
response = model.chat(tokenizer, messages)
print(response)
```
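Chat-style interfaces like the one above typically expect the history to alternate strictly between user and assistant turns, starting with a user turn. A quick sanity check can catch malformed histories early; the helper name is ours, not part of the model's API:

```python
def valid_history(messages):
    """Check that messages alternate user/assistant, starting with user."""
    expected = ["user", "assistant"]
    return all(m["role"] == expected[i % 2] for i, m in enumerate(messages))

history = [
    {"role": "user", "content": "你的MBTI人格是什么"},
    {"role": "assistant", "content": "INTP"},
    {"role": "user", "content": "和一群人聚会一天回到家,你会是什么感受"},
]
print(valid_history(history))  # True
print(valid_history([{"role": "assistant", "content": "hi"}]))  # False
```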
* Using the HuggingFace Transformers library (multi-turn dialogue):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
# ... (model/tokenizer loading and the interactive loop are elided in this diff view)
print("Assistant:", response)
messages.append({"role": "assistant", "content": str(response)})
```
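The bookkeeping in the multi-turn example (append the user turn, call the model, record the reply) can be factored into a small helper. Here `chat_fn` stands in for `model.chat(tokenizer, ...)`; the function name and the stub reply are illustrative assumptions, not part of the model's API:

```python
def chat_turn(messages, user_text, chat_fn):
    """Append a user turn, obtain a reply via chat_fn, and record it in the history."""
    messages.append({"role": "user", "content": user_text})
    response = chat_fn(messages)
    messages.append({"role": "assistant", "content": str(response)})
    return response

# Demonstration with a stub in place of the real model:
history = []
reply = chat_turn(history, "你的MBTI人格是什么", lambda msgs: "INTP")
print(reply)         # INTP
print(len(history))  # 2
```

With the real model, `chat_fn` would be something like `lambda msgs: model.chat(tokenizer, msgs)`.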
* Using the LLaMA-Factory inference framework (multi-turn dialogue)
```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
```