---
license: apache-2.0
---
<div style="width: 100%;">
    <img src="http://x-pai.algolet.com/bot/img/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
   🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co./TigerResearch" target="_blank">Hugging Face</a>
</p>

## GitHub
https://github.com/TigerResearch/TigerBot

## Usage
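
The snippet below loads the checkpoint with `transformers`, shards it across the available GPUs with `accelerate` (a 180B-parameter model will not fit on a single device), wraps a query in the model's instruction template, and decodes the answer.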

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from accelerate import infer_auto_device_map, dispatch_model
from accelerate.utils import get_balanced_memory

tokenizer = AutoTokenizer.from_pretrained("TigerResearch/tigerbot-180b-research")
model = AutoModelForCausalLM.from_pretrained("TigerResearch/tigerbot-180b-research")

# Balance the model's layers across all visible GPUs; BloomBlock layers stay
# whole on a single device instead of being split mid-layer.
max_memory = get_balanced_memory(model)
device_map = infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=["BloomBlock"])
model = dispatch_model(model, device_map=device_map, offload_buffers=True)

# Inputs go on the current CUDA device; dispatch_model routes activations
# between shards automatically.
device = torch.cuda.current_device()

# Instruction-style prompt template expected by the model.
tok_ins = "\n\n### Instruction:\n"
tok_res = "\n\n### Response:\n"
prompt_input = tok_ins + "{instruction}" + tok_res

input_text = "What is the next number after this list: [1, 2, 3, 5, 8, 13, 21]"
input_text = prompt_input.format_map({'instruction': input_text})

max_input_length = 512
max_generate_length = 1024
generation_kwargs = {
    "top_p": 0.95,
    "temperature": 0.8,
    "max_length": max_generate_length,
    "eos_token_id": tokenizer.eos_token_id,
    "pad_token_id": tokenizer.pad_token_id,
    "early_stopping": True,
    "no_repeat_ngram_size": 4,
}

inputs = tokenizer(input_text, return_tensors='pt', truncation=True, max_length=max_input_length)
inputs = {k: v.to(device) for k, v in inputs.items()}
output = model.generate(**inputs, **generation_kwargs)

# Decode only the newly generated tokens, skipping the prompt and any EOS.
answer = ''
for tok_id in output[0][inputs['input_ids'].shape[1]:]:
    if tok_id != tokenizer.eos_token_id:
        answer += tokenizer.decode(tok_id)

print(answer)
```
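
Note that `max_length` in `generation_kwargs` counts the prompt tokens as well as the generated ones, so raise `max_generate_length` if you need longer answers. If the default balanced device map over-commits GPU memory, `infer_auto_device_map` also accepts an explicit per-device budget; below is a minimal sketch, assuming a multi-GPU host (the sizes are illustrative assumptions, not from the original card):

```python
# Hypothetical memory budget: leave headroom on each GPU for activations and
# allow spill-over to CPU RAM. Adjust the sizes to your hardware.
max_memory = {i: "75GiB" for i in range(torch.cuda.device_count())}
max_memory["cpu"] = "400GiB"

device_map = infer_auto_device_map(model, max_memory=max_memory, no_split_module_classes=["BloomBlock"])
model = dispatch_model(model, device_map=device_map, offload_buffers=True)
```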