rhaymison committed on
Commit
2898883
1 Parent(s): 386e0db

Create README.md

---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
---

# Llama 3 portuguese Tom cat 8b instruct GGUF

<p align="center">
  <img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/tom-cat-8b.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>

This model was trained on a superset of 300,000 chats in Portuguese. It helps fill the gap in Portuguese-language models. Tuned from Tom cat 8b instruct, the model was adjusted mainly for chat.

```python
!git lfs install
!pip install langchain
!pip install langchain-community langchain-core
!pip install llama-cpp-python

!git clone https://huggingface.co/rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct-q8-gguf/

def llamacpp():
    from langchain_community.llms import LlamaCpp
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    llm = LlamaCpp(
        # model_path must point at the .gguf file inside the cloned repo
        model_path="/content/Llama-3-portuguese-Tom-cat-8b-instruct-q8-gguf",
        n_gpu_layers=40,
        n_batch=512,
        verbose=True,
    )

    # Llama 3 instruct chat template. The Portuguese system message says:
    # "Below is an instruction that describes a task, along with an input that
    # provides more context. Write a response that appropriately completes the request."
    # Note: this must NOT be an f-string, so {question} stays a PromptTemplate placeholder.
    template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.<|eot_id|><|start_header_id|>user<|end_header_id|>
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""

    prompt = PromptTemplate(template=template, input_variables=["question"])

    llm_chain = LLMChain(prompt=prompt, llm=llm)

    # "instruction: act as a math teacher and explain to me why 2 + 2 = 4?"
    question = "instrução: aja como um professor de matemática e me explique porque 2 + 2 = 4?"
    response = llm_chain.run({"question": question})
    print(response)

llamacpp()
```
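
The chat template above can also be assembled by hand, which is useful if you want to call llama-cpp-python directly instead of going through LangChain. A minimal sketch; `build_prompt` is a hypothetical helper for illustration, not part of the model's API:

```python
def build_prompt(system: str, question: str) -> str:
    # Reproduce the Llama 3 instruct prompt layout used in the example above:
    # system header, system text, user header, question, then the assistant
    # header so the model continues with its answer.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
    )

prompt = build_prompt(
    "Abaixo está uma instrução que descreve uma tarefa, juntamente com uma "
    "entrada que fornece mais contexto. Escreva uma resposta que complete "
    "adequadamente o pedido.",
    "Por que 2 + 2 = 4?",
)
print(prompt)
```

The resulting string can be passed straight to `Llama.__call__` in llama-cpp-python, stopping on `<|eot_id|>`.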

### Comments

Any ideas, help, or reports are always welcome.

<div style="display:flex; flex-direction:row; justify-content:left">
  <a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
    <img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
  </a>
  <a href="https://github.com/rhaymisonbetini" target="_blank">
    <img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
  </a>
</div>