---
license: apache-2.0
base_model: 01-ai/Yi-Coder-1.5B-Chat
---
# Yi-Coder-1.5B-Chat-exl2
Original model: [Yi-Coder-1.5B-Chat](https://huggingface.co./01-ai/Yi-Coder-1.5B-Chat)  
Created by: [01-ai](https://huggingface.co./01-ai)

## Quants
[4bpw h6 (main)](https://huggingface.co./cgus/Yi-Coder-1.5B-Chat-exl2/tree/main)  
[4.5bpw h6](https://huggingface.co./cgus/Yi-Coder-1.5B-Chat-exl2/tree/4.5bpw-h6)  
[5bpw h6](https://huggingface.co./cgus/Yi-Coder-1.5B-Chat-exl2/tree/5bpw-h6)  
[6bpw h6](https://huggingface.co./cgus/Yi-Coder-1.5B-Chat-exl2/tree/6bpw-h6)  
[8bpw h8](https://huggingface.co./cgus/Yi-Coder-1.5B-Chat-exl2/tree/8bpw-h8)  
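
Each quant lives on its own branch of this repo. A minimal sketch for fetching one branch with `huggingface_hub` (any revision from the list above works):

```python
from huggingface_hub import snapshot_download

# Download one quant branch; returns the local directory it was saved to.
path = snapshot_download(
    repo_id="cgus/Yi-Coder-1.5B-Chat-exl2",
    revision="6bpw-h6",  # pick any branch listed above
)
print(path)
```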

## Quantization notes
Made with Exllamav2 0.2.0 using the default calibration dataset.  
These quants can be used with NVIDIA RTX cards on Windows/Linux, or AMD cards on Linux, via the Exllamav2 library available in TabbyAPI, Text-Generation-WebUI, and similar frontends.
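
For reference, here is a minimal sketch of loading one of these quants with the `exllamav2` Python library directly, assuming the quant was downloaded as above and flash-attn is installed (otherwise pass `paged=False` to the generator); the local path and generation settings are illustrative:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Point the config at the local quant directory (e.g. from snapshot_download).
config = ExLlamaV2Config("./Yi-Coder-1.5B-Chat-exl2")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across the available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a quick sort algorithm.", max_new_tokens=256))
```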

# Original model card
<div align="center">

<picture> 
  <img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="120px">
</picture>

</div>

<p align="center">
  <a href="https://github.com/01-ai">🙏 GitHub</a> •
  <a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
  <a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
  <a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a> 
  <br/>
  <a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
  <a href="https://01-ai.github.io/">💪 Tech Blog</a> •
  <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
  <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>

# Intro

Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion parameters. 

Key features:
- Excelling in long-context understanding with a maximum context length of 128K tokens.
- Supporting 52 major programming languages:
```bash
  'java', 'markdown', 'python', 'php', 'javascript', 'c++', 'c#', 'c', 'typescript', 'html', 'go', 'java_server_pages', 'dart', 'objective-c', 'kotlin', 'tex', 'swift', 'ruby', 'sql', 'rust', 'css', 'yaml', 'matlab', 'lua', 'json', 'shell', 'visual_basic', 'scala', 'rmarkdown', 'pascal', 'fortran', 'haskell', 'assembly', 'perl', 'julia', 'cmake', 'groovy', 'ocaml', 'powershell', 'elixir', 'clojure', 'makefile', 'coffeescript', 'erlang', 'lisp', 'toml', 'batchfile', 'cobol', 'dockerfile', 'r', 'prolog', 'verilog'
  ```

For model details and benchmarks, see [Yi-Coder blog](https://01-ai.github.io/) and [Yi-Coder README](https://github.com/01-ai/Yi-Coder).

<p align="left"> 
  <img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/yi-coder-calculator-demo.gif?raw=true" alt="demo1" width="500"/> 
</p>

# Models

| Name               | Type | Context Length | Download |
|--------------------|------|----------------|----------|
| Yi-Coder-9B-Chat   | Chat | 128K | [🤗 Hugging Face](https://huggingface.co./01-ai/Yi-Coder-9B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B-Chat) |
| Yi-Coder-1.5B-Chat | Chat | 128K | [🤗 Hugging Face](https://huggingface.co./01-ai/Yi-Coder-1.5B-Chat) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B-Chat) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B-Chat) |
| Yi-Coder-9B        | Base | 128K | [🤗 Hugging Face](https://huggingface.co./01-ai/Yi-Coder-9B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-9B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-9B) |
| Yi-Coder-1.5B      | Base | 128K | [🤗 Hugging Face](https://huggingface.co./01-ai/Yi-Coder-1.5B) • [🤖 ModelScope](https://www.modelscope.cn/models/01ai/Yi-Coder-1.5B) • [🟣 wisemodel](https://wisemodel.cn/models/01.AI/Yi-Coder-1.5B) |

# Benchmarks

As illustrated in the figure below, Yi-Coder-9B-Chat achieved an impressive 23% pass rate on LiveCodeBench, making it the only model under 10B parameters to surpass 20%. It also outperforms DeepSeekCoder-33B-Ins (22.3%), CodeGeex4-9B-all (17.8%), CodeLLama-34B-Ins (13.3%), and CodeQwen1.5-7B-Chat (12%).

<p align="left"> 
  <img src="https://github.com/01-ai/Yi/blob/main/assets/img/coder/bench1.webp?raw=true" alt="bench1" width="1000"/> 
</p>

# Quick Start

You can use transformers to run inference with Yi-Coder models (both chat and base versions) as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda"  # the device to move the inputs onto
model_path = "01-ai/Yi-Coder-9B-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto").eval()

prompt = "Write a quick sort algorithm."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the messages with the model's chat template and append the
# assistant turn marker so the model starts generating a reply.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=1024,
    eos_token_id=tokenizer.eos_token_id
)
# Drop the prompt tokens so only the newly generated completion is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
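
The base (non-chat) variants take a plain completion prompt rather than chat messages, so the chat template step is skipped. A minimal sketch, using the base 9B checkpoint and an illustrative prompt:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Minimal sketch for a base (non-chat) Yi-Coder model: plain text completion.
model_path = "01-ai/Yi-Coder-9B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto").eval()

inputs = tokenizer("def quick_sort(arr):", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```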

To get up and running with the Yi-Coder series quickly, see the [Yi-Coder README](https://github.com/01-ai/Yi-Coder).