---
license: apache-2.0
language:
- en
datasets:
- Waterhorse/chess_data
- anon8231489123/ShareGPT_Vicuna_unfiltered
- OpenAssistant/oasst1
- vicgalle/alpaca-gpt4
---

# Chessgpt-Chat-v1 

Chessgpt-Chat-v1 is the supervised fine-tuned (SFT) version of Chessgpt-Base-v1.

  - Base Model: [Chessgpt-Base-v1](https://huggingface.co./Waterhorse/chessgpt-base-v1)
  - Chat Version: [Chessgpt-Chat-v1](https://huggingface.co./Waterhorse/chessgpt-chat-v1)

## Model Details
- **Model type**: Language Model
- **Language(s)**: English
- **License**: Apache 2.0
- **Model Description**: A 2.8B-parameter language model pretrained on chess-related data.

## GPU Inference

Running the example below requires a GPU with at least 8 GB of memory.

```python
import torch
import transformers
from packaging import version
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = '4.25.1'

# check transformers version (numeric comparison; a plain string compare can mis-order versions)
assert version.parse(transformers.__version__) >= version.parse(MIN_TRANSFORMERS_VERSION), \
    f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained("Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16)
model = model.to('cuda:0')

# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.7, top_k=50, return_dict_in_generate=True,
)
token = outputs.sequences[0, input_length:]
output_str = tokenizer.decode(token)
print(output_str)
"""
a name that has been synonymous with the computer age since the 1950s. The British mathematician, logician, and cryptanalyst is widely regarded as the father of modern computing. His contributions to the development of the modern computer and the theory of computation have had a profound impact on the world we live in today.
Turing’s contributions to the development of the modern computer were made in the 1940s and 1950s. He is most famous for his work on the Turing machine, a theoretical model of a computing machine that was able to perform all the mathematical operations of a computer. Turing’s work on the...
"""
```
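
If an 8 GB GPU is not available, the same pipeline can run on CPU. The sketch below is not part of the original card: it keeps the generation settings above, loads the weights in float32 (half precision is generally unsupported on CPU), and will be noticeably slower.

```python
# CPU inference sketch (assumption: not from the original card).
# Loads the model in float32, since float16 matmuls are generally unsupported on CPU.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained(
    "Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float32
)

prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors="pt")
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.7, top_k=50,
)
print(tokenizer.decode(outputs[0, input_length:]))
```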

## Uses

Direct and out-of-scope uses are described below, along with known limitations.

### Direct Use

`chessgpt-chat-v1` is intended mainly for research on large language models, especially research on policy learning and language modeling.
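
As a hypothetical illustration of this kind of research use (not from the original card), one can prompt the model with a PGN-style move prefix and sample a continuation as a rough policy signal. The prompt format below is an assumption; this card does not document a prompt template.

```python
# Hypothetical policy-style query (assumption: the prompt format is not documented in this card).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Waterhorse/chessgpt-chat-v1")
model = AutoModelForCausalLM.from_pretrained(
    "Waterhorse/chessgpt-chat-v1", torch_dtype=torch.float16
).to("cuda:0")

# PGN-style prefix; sample a short continuation as a candidate next move.
prompt = "1. e4 e5 2. Nf3 Nc6 3."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=8, do_sample=True, temperature=0.7, top_p=0.7, top_k=50,
)
print(tokenizer.decode(outputs[0, inputs.input_ids.shape[1]:]))
```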

### Out-of-Scope Use

`chessgpt-chat-v1` is a language model trained on chess-related data and may not perform well on use cases outside the chess domain.

### Bias, Risks, and Limitations

Just as with any language model, chessgpt-chat-v1 carries inherent limitations that necessitate careful consideration. Specifically, it may occasionally generate responses that are irrelevant or incorrect, particularly when tasked with interpreting complex or ambiguous queries. Additionally, given that its training is rooted in online data, the model may inadvertently reflect and perpetuate common online stereotypes and biases.