m-elio committed on
Commit 6c43123 · verified · 1 Parent(s): bee8833

Create README.md
---
license: llama3
datasets:
- swap-uniba/the_cauldron_ita
language:
- it
base_model:
- meta-llama/Meta-Llama-3-8B
- openai/clip-vit-large-patch14-336
pipeline_tag: text-generation
---

# Model Card for LLaVA-NDiNO_pt_short_it

## Model description

**LLaVA-NDiNO** is a family of *Large Vision Language Models (LVLMs)* trained for the Italian language.

The model was trained by instruction-tuning [**LLaMA 3 8B Base**](https://huggingface.co/meta-llama/Meta-Llama-3-8B) and [CLIP Large 336](https://huggingface.co/openai/clip-vit-large-patch14-336) on an Italian machine-translated version of [The Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) (a loading sketch for the dataset follows the model details below).

If you are interested in the details of the training procedure, the code we used is available at:
- **Repository:** https://github.com/swapUniba/LLaVA-NDiNO

- **Developed by:** Elio Musacchio, Lucia Siciliani, Pierpaolo Basile, Giovanni Semeraro
- **Funded by:** PNRR project FAIR - Future AI Research
- **Compute infrastructure:** [Leonardo](https://www.hpc.cineca.it/systems/hardware/leonardo/) supercomputer
- **Model type:** LLaMA 3 + CLIP
- **Language(s) (NLP):** Italian
- **License:** Llama 3 Community License
- **Finetuned from model:** [swap-uniba/LLaVA-NDiNO_pt](https://huggingface.co/swap-uniba/LLaVA-NDiNO_pt)

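To inspect the instruction-tuning data, a minimal sketch of loading it with the `datasets` library is shown below. It assumes `swap-uniba/the_cauldron_ita` mirrors the per-task configuration layout of the original Cauldron; the `"vqav2"` configuration name and the `texts` column name are assumptions taken from that layout, not confirmed for this dataset.

```python
from datasets import load_dataset

# Assumed per-task configuration name ("vqav2"), following the layout of
# the original HuggingFaceM4/the_cauldron; adjust it to a configuration
# that actually exists in swap-uniba/the_cauldron_ita.
dataset = load_dataset("swap-uniba/the_cauldron_ita", "vqav2", split="train")

example = dataset[0]
# In the original Cauldron, each row holds a list of images and a list of
# {"user", "assistant", "source"} turns; column names assumed here.
print(example["texts"])
```
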
## Example Usage

```python
import torch
import requests

from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration, set_seed

model_name = "swap-uniba/LLaVA-NDiNO_pt_short_it"

processor = LlavaNextProcessor.from_pretrained(model_name)
model = LlavaNextForConditionalGeneration.from_pretrained(model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map="auto")

url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# LLaMA 3 chat template: wraps each turn in header/eot special tokens
chat_template = "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}"

conversation = [
    {
        "role": "user",
        # "What is strange about this image?"
        "content": "<image>\nCosa c'è di strano in questa immagine?"
    },
]

prompt = processor.apply_chat_template(conversation, chat_template=chat_template, add_generation_prompt=True)
# Move the inputs to the same device as the model
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

set_seed(42)
output = model.generate(**inputs, max_new_tokens=4096)

# Decode only the newly generated tokens (everything after the prompt)
print(processor.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
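
On GPUs with less memory, the model can likely be loaded in 4-bit via bitsandbytes instead of bfloat16. The sketch below uses the standard `transformers` quantization path; it has not been tested on this specific checkpoint:

```python
import torch
from transformers import BitsAndBytesConfig, LlavaNextForConditionalGeneration

# Standard transformers 4-bit loading via bitsandbytes (untested on this
# checkpoint); requires the bitsandbytes package and a CUDA GPU.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = LlavaNextForConditionalGeneration.from_pretrained(
    "swap-uniba/LLaVA-NDiNO_pt_short_it",
    quantization_config=quantization_config,
    device_map="auto",
)
```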

## Citation

TBD