---
tags:
- generated_from_trainer
model-index:
- name: ft-moe-llava-qwen1.5-1.8b-vista-1ep
  results: []
---

<p align="center">
  <div style="display: flex;text-align: center;">
    <div>
      <img src="https://firebasestorage.googleapis.com/v0/b/database-7ca5c.appspot.com/o/llm%2F68747470733a2f2f7331312e617831782e636f6d2f323032332f31322f32382f70697176444d562e706e67.png?alt=media&token=30a2470d-861e-4295-a7f4-da48231724cf" width="250" style="margin-bottom: 0.2;"/>
    </div>
    <div>
      <img src="https://firebasestorage.googleapis.com/v0/b/database-7ca5c.appspot.com/o/llm%2Flogo_qwen.jpg?alt=media&token=fd2cd557-2f45-4f94-86d3-a5e7c9eef630" width="600" style="margin-bottom: 1rem;"/>
    </div>
  </div>
</p>
<h1 align="center">MoE-LLaVA-Qwen1.5-1.8B×4-Top2: When Vision Meets a Small-scale Language Model and a Vietnamese Synthetic Dataset</h1>


# Introducing MoE-LLaVA-Qwen1.5-1.8B×4-Top2 for Vietnamese

We are excited to present MoE-LLaVA-Qwen1.5-1.8B×4-Top2, tailored for the Vietnamese language. This model is part of our ongoing effort to develop Vision Language Models (VLMs) for Vietnamese, a space that is still limited and dominated by larger models (**~7B parameters**). Our model activates only about **2.2B** 🤗😎 parameters per forward pass, which significantly reduces the memory footprint, and it can be quantized for local execution, as in the sketch below.
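The following is a minimal inference sketch. It assumes the [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA/tree/main) codebase (listed under *Techniques Used*) is installed and that this checkpoint follows its loading conventions; the repository id, conversation-template name, and image path are placeholders, so adjust them to your setup.

```python
# Minimal inference sketch, assuming the MoE-LLaVA codebase is installed.
# The model repo id, conversation template ("qwen"), and image path are placeholders.
import torch
from PIL import Image

from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from moellava.conversation import conv_templates
from moellava.model.builder import load_pretrained_model
from moellava.mm_utils import get_model_name_from_path, tokenizer_image_token

model_path = "Vi-VLM/ft-moe-llava-qwen1.5-1.8b-vista-1ep"  # placeholder repo id
tokenizer, model, processor, _ = load_pretrained_model(
    model_path, None, get_model_name_from_path(model_path),
    load_8bit=False, load_4bit=True, device="cuda",  # 4-bit keeps the local footprint small
)
image_processor = processor["image"]

# Single-turn Vietnamese prompt: "Describe this image in detail."
conv = conv_templates["qwen"].copy()
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nHãy mô tả chi tiết bức ảnh này.")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

image = Image.open("example.jpg").convert("RGB")
image_tensor = image_processor(image, return_tensors="pt")["pixel_values"].to(model.device, torch.float16)
input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt").unsqueeze(0).to(model.device)

with torch.inference_mode():
    output_ids = model.generate(input_ids, images=image_tensor, do_sample=False, max_new_tokens=256)
print(tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip())
```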

## Bias, Risks, and Limitations

The training data may contain biases originating from its sources. Users should remain aware of these potential biases when working with the model.

## More Information

This model represents the first stage of a two-stage development process toward a larger model. Stay tuned for future developments by subscribing to our updates.

## Training and evaluation data

### Training Dataset

Our model is trained on the [Vi-VLM/Vista dataset](https://huggingface.co./datasets/Vi-VLM/Vista), which contains around 700,000 Vietnamese vision-language samples curated with Gemini Pro (a short loading sketch follows the list below). We employed various prompt engineering techniques when building the data, including:

- **Few-shot Learning**
- **Caption-based Prompting**
- **Image-based Prompting**
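As referenced above, here is a small sketch for inspecting the dataset with 🤗 `datasets`; the split name and the record schema are assumptions, so check the Vista dataset card for the exact fields.

```python
# Sketch: stream a few Vista samples for inspection.
# The split name and record fields are assumptions; see the dataset card for the real schema.
from datasets import load_dataset

ds = load_dataset("Vi-VLM/Vista", split="train", streaming=True)
for i, sample in enumerate(ds):
    print(sample.keys())  # expect an image reference plus Vietnamese conversation/caption fields
    if i == 2:
        break
```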

### Techniques Used

- **MoE-LLaVA**: [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA/tree/main)
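For intuition on the "×4-Top2" part of the name: a router scores 4 experts per token and only the top-2 are executed, which is why only a fraction of the total parameters is active per forward pass. The toy layer below illustrates that routing pattern; it is a simplified illustration, not the actual MoE-LLaVA implementation.

```python
# Toy top-2-of-4 MoE feed-forward layer (illustration only, not MoE-LLaVA's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTop2MoE(nn.Module):
    def __init__(self, hidden: int = 64, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (num_tokens, hidden)
        gate = F.softmax(self.router(x), dim=-1)                # routing probabilities over experts
        weights, idx = gate.topk(self.top_k, dim=-1)            # keep only the top-2 experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalise the kept weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                           # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

print(ToyTop2MoE()(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```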

## Evaluation
- Coming soon 🫡

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
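
For reference, the sketch below reproduces these optimization settings as 🤗 Transformers `TrainingArguments`; it covers only the optimizer/scheduler side (the MoE-LLaVA model and data wiring are omitted), and the output path and `bf16` flag are assumptions.

```python
# Sketch: TrainingArguments mirroring the hyperparameters above.
# Model/data wiring (MoE-LLaVA) is omitted; output_dir and bf16 are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output/ft-moe-llava-qwen1.5-1.8b-vista-1ep",
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # 4 devices x 4 per device x 8 accumulation = 128 total
    per_device_eval_batch_size=4,    # 4 devices x 4 per device = 16 total
    gradient_accumulation_steps=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    bf16=True,
)
```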

### Training results



### Framework versions

- Transformers 4.37.0
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.15.1