---
library_name: transformers
tags: []
---
<a href="https://github.com/MLP-Lab/Bllossom">
  <img src="https://raw.githubusercontent.com/teddysum/bllossom/main/bllossom_icon.png?token=GHSAT0AAAAAACZIELMFYS74LTHEVHXKCYQMZ2SUOVQ" width="30%" height="30%">
</a>

# Update!
* [2024.12.06] -

# Bllossom | [Demo]() | [Homepage](https://www.bllossom.ai/) | [Github](https://github.com/MLP-Lab/Bllossom) |

```bash
Our Bllossom team is releasing Bllossom-AICA, a Korean-English language model based on llama3.2-3B.
Bllossom-AICA has the following features:
- It can be used both as a general language model and as a vision-language model.
- It runs as a vision-language model when an image is provided and as a plain language model when it is not; training and inference are possible in both modes.
- Grounding the model in visual information substantially improves its language performance (more than 10% over the Bllossom-3.2-3B model in our qualitative evaluation).
- It is a fully bilingual model whose English performance is not degraded at all.
- It is optimized for Korean OCR and for interpreting tables and graphs.
- It has been trained for selective reasoning over external knowledge: when used with RAG, the model ignores retrieved passages that are irrelevant to the question or contain errors (see the RAG sketch after the example code below).

The data used for this model is as follows:
- We full-tuned the model on nearly all Korean pre-training data publicly available on Huggingface.
- For vision-language pre-training we used almost all publicly available Korean vision-language training data from AI-Hub, KISTI AI data, and Huggingface (too many to list in full...).
- We used Korean document-focused vision-language instruction-tuning data built in-house by our lab.

As always, this model is available for commercial use.

1. The external-knowledge reasoning capability of Bllossom-AICA will be presented at COLING 2025.
2. We will keep releasing better language models! Anyone who would like to collaborate on strengthening Korean LLMs (especially on papers) is always welcome!
```

```bash
We, the Bllossom team, are pleased to announce the release of Bllossom-Vision, a Korean-English vision-language model based on llama3.2. This Bllossom-Vision is a preview version and features the following:
- It can be utilized both as a general language model and as a vision-language model.
- It operates as a vision-language model when an image is provided and as a language model when no image is provided, and it supports both training and inference in either mode.
- We have put significant effort into ensuring it remains faithful to the role of a vision-language model while maintaining the performance of a traditional language model as much as possible.
- It is a fully bilingual model that does not compromise English performance at all.
```
**Bllossom is developed by [MLPLab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/) and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim)**


## Demo Video

<div style="display: flex; justify-content: space-between;">
  <!-- Column -->
  <div style="width: 49%;">
    <a>
      <img src="https://github.com/lhsstn/lhsstn/blob/main/x-llava_dem.gif?raw=true" style="width: 100%; height: auto;">
    </a>
    <p style="text-align: center;">Bllossom-V Demo</p>
  </div>
</div>

## Example code

### Colab Tutorial
- Inference-Code-Link (coming soon)

### Install Dependencies
```bash
pip install torch transformers==4.44.0
```
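
The snippets below load the model with `device_map='auto'` and open images with PIL, which typically also requires the `accelerate` and `pillow` packages. The extra install line below is our suggestion and is not part of the original instructions:

```bash
pip install accelerate pillow
```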

### Python code without Image
```python
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor
import torch

# Load the model and processor (bfloat16 weights, automatic device placement).
model = LlavaNextForConditionalGeneration.from_pretrained(
    'Bllossom/llama-3.1-Korean-Bllossom-Vision-8B',
    torch_dtype=torch.bfloat16,
    device_map='auto'
)
processor = LlavaNextProcessor.from_pretrained('Bllossom/llama-3.1-Korean-Bllossom-Vision-8B')

with torch.no_grad():

    PROMPT = \
"""You are a versatile AI assistant named Bllava, capable of both understanding and generating text as well as interpreting and analyzing images. Your role is to kindly and effectively answer the user’s questions, whether they are about text or images, and provide appropriate and helpful responses to all types of queries.

당신은 텍스트를 이해하고 생성하는 것뿐만 아니라 이미지를 해석하고 분석할 수 있는 다재다능한 AI 어시스턴트 블라바입니다. 사용자의 질문이 텍스트에 관한 것이든 이미지에 관한 것이든 친절하고 효과적으로 답변하며, 모든 유형의 질의에 대해 적절하고 유용한 응답을 제공하는 것이 당신의 역할입니다."""

    # Example Korean instruction: "Draft a 15-week natural language processing curriculum."
    instruction = '자연어처리 15주 분량 커리큘럼을 짜줘'

    messages = [
        {'role': 'system', 'content': f"{PROMPT}"},
        {'role': 'user', 'content': f"{instruction}"}
    ]

    # Tokenize the chat and prepend the BOS token expected by the model.
    chat_messages = processor.tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors='pt',
    )

    bos_token = processor.tokenizer.bos_token_id
    chat_messages = torch.cat([torch.tensor([[bos_token]]), chat_messages], dim=-1).to(model.device)

    output = model.generate(
        input_ids=chat_messages,
        use_cache=False,
        max_new_tokens=2048,
        top_p=0.9,
        temperature=0.6,
        do_sample=True,
    )

    print(processor.tokenizer.decode(output[0]))
```
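
The announcement above notes that the model was trained for selective reasoning over external knowledge when used with RAG. The card does not document an official prompt format for retrieved passages, so the following is only a minimal sketch: it reuses `model`, `processor`, and `PROMPT` from the snippet above, and the passage, question, and "참고 문서 / 질문" framing are illustrative assumptions rather than the official recipe.

```python
# Hypothetical RAG-style prompt: the passage/question strings and the framing below
# are made-up examples, not an officially documented format.
retrieved_passage = 'Bllossom은 서울과학기술대학교 MLP연구실 등이 공개한 한국어-영어 이중언어 언어모델이다.'
question = 'Bllossom은 어떤 모델인가요?'

rag_messages = [
    {'role': 'system', 'content': PROMPT},  # reuse the bilingual system prompt defined above
    {'role': 'user', 'content': f"참고 문서:\n{retrieved_passage}\n\n질문: {question}"}
]

with torch.no_grad():
    # Build input ids the same way as above: chat template + manual BOS prepend.
    rag_ids = processor.tokenizer.apply_chat_template(
        rag_messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors='pt',
    )
    rag_ids = torch.cat([torch.tensor([[processor.tokenizer.bos_token_id]]), rag_ids], dim=-1).to(model.device)

    output = model.generate(input_ids=rag_ids, max_new_tokens=512, do_sample=False)

print(processor.tokenizer.decode(output[0], skip_special_tokens=True))
```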

### Python code with Image
```python
from PIL import Image
from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor
import torch

model = LlavaNextForConditionalGeneration.from_pretrained(
    'Bllossom/llama-3.1-Korean-Bllossom-Vision-8B',
    torch_dtype=torch.bfloat16,
    device_map='auto'
)
processor = LlavaNextProcessor.from_pretrained('Bllossom/llama-3.1-Korean-Bllossom-Vision-8B')

image = Image.open('[IMAGE_PATH]').convert('RGB')

PROMPT = \
"""You are a versatile AI assistant named Bllava, capable of both understanding and generating text as well as interpreting and analyzing images. Your role is to kindly and effectively answer the user’s questions, whether they are about text or images, and provide appropriate and helpful responses to all types of queries.

당신은 텍스트를 이해하고 생성하는 것뿐만 아니라 이미지를 해석하고 분석할 수 있는 다재다능한 AI 어시스턴트 블라바입니다. 사용자의 질문이 텍스트에 관한 것이든 이미지에 관한 것이든 친절하고 효과적으로 답변하며, 모든 유형의 질의에 대해 적절하고 유용한 응답을 제공하는 것이 당신의 역할입니다."""

# Example Korean instruction: "Please describe the image."
instruction = '이미지에 대해서 설명해주세요.'
messages = [
    {'role': 'system', 'content': f"{PROMPT}"},
    {'role': 'user', 'content': f"<image>\n{instruction}"}
]

# With tokenize=False this returns the formatted prompt string; the processor
# tokenizes it together with the image below.
chat_messages = processor.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = processor(
    text=chat_messages,
    images=image,
    return_tensors='pt',
).to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=1024,
)

print(processor.tokenizer.decode(output[0]))
```
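
The decode call above prints the full sequence, including the prompt and special tokens. If you only want the newly generated answer, one optional tweak (not part of the original example) is to slice off the prompt tokens and skip special tokens when decoding:

```python
# Decode only the tokens generated after the prompt, dropping special tokens.
generated = output[0][inputs['input_ids'].shape[1]:]
print(processor.tokenizer.decode(generated, skip_special_tokens=True))
```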

## Supported by

- AICA <img src="https://aica-gj.kr/images/logo.png" width="20%" height="20%">
- 유클리드소프트 (EuclidSoft) <img src="https://euclidsoft.co.kr/_next/image?url=%2Fimg%2Flogo.png&w=384&q=75" width="20%" height="20%">

## Citation
**Language Model**
```text
@misc{bllossom,
  author = {ChangSu Choi, Yongbin Jeong, Seoyoon Park, InHo Won, HyeonSeok Lim, SangMin Kim, Yejee Kang, Chanhyuk Yoon, Jaewan Park, Yiseul Lee, HyeJin Lee, Younggyun Hahm, Hansaem Kim, KyungTae Lim},
  title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean},
  year = {2024},
  journal = {LREC-COLING 2024},
  paperLink = {\url{https://arxiv.org/pdf/2403.10882}},
}
```

**Vision-Language Model**
```text
@misc{bllossom-V,
  author = {Dongjae Shin, Hyunseok Lim, Inho Won, Changsu Choi, Minjun Kim, Seungwoo Song, Hangyeol Yoo, Sangmin Kim, Kyungtae Lim},
  title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment},
  year = {2024},
  publisher = {GitHub},
  journal = {NAACL 2024 findings},
  paperLink = {\url{https://arxiv.org/pdf/2403.11399}},
}
```

## Contact
- 임경태(KyungTae Lim), Professor at Seoultech. `[email protected]`
- 함영균(Younggyun Hahm), CEO of Teddysum. `[email protected]`
- 김한샘(Hansaem Kim), Professor at Yonsei. `[email protected]`

## Contributors
- **신동재(Dongjae Shin)**, [email protected]
- **임현석(Hyeonseok Lim)**, [email protected]
- 원인호(Inho Won), [email protected]
- 김민준(Minjun Kim), [email protected]
- 유한결(Hangyeol Yoo), [email protected]
- 송승우(Seungwoo Song), [email protected]
- 육정훈(Jeonghun Yuk), [email protected]
- 최창수(Chansu Choi), [email protected]
- 송서현(Seohyun Song), [email protected]