---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.1-8B
pipeline_tag: text-generation
---
# Cat1.0

![Cover Image](https://i.ibb.co/PYCdt9n/3i-RPOp-Vn-Tb-O4-E021n6-Pljg.jpg)

## Overview

Cat1.0 is a fine-tuned version of the **Llama-3.1-8B base model**, optimized for roleplay, logic, and reasoning tasks. Trained iteratively on human-AI chat logs, it performs well across a wide range of chat scenarios.

## Model Specifications

- **Parameters**: 8 Billion (8B)
- **Precision**: bf16 (Brain Floating Point 16-bit)
- **Fine-Tuning Method**: LoRA (Low-Rank Adaptation)
- **LoRA Rank**: 32
- **LoRA Alpha**: 64
- **Learning Rate**: 0.0008
- **Training Epochs**: 4
- **Datasets Used**:
  - cat1.0 Roleplay Dataset
  - cat1.0 Reasoning and Logic Dataset
- **Fine-Tuning Approach**: Iterative Fine-Tuning using self-chat logs
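
For reference, here is a rough sketch of how the hyperparameters above might be expressed with the Hugging Face `peft` and `transformers` libraries. This is not the actual training code; in particular, the target modules and dropout value are assumptions, since the card does not list them.

```python
# Sketch of the LoRA fine-tuning configuration described above.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,                 # LoRA rank (per the specs above)
    lora_alpha=64,        # LoRA alpha (per the specs above)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.0,     # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="cat1.0-lora",  # placeholder
    learning_rate=8e-4,        # per the specs above
    num_train_epochs=4,        # per the specs above
    bf16=True,                 # per the specs above
)
```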

## Recommended Settings

To achieve optimal performance with this model, I recommend the following settings:

- **Temperature**: `1.1`
- **Min P**: `0.05`

> **Note**: Due to the nature of the fine-tuning, setting the temperature to `1.1` or higher helps prevent the model from repeating itself and encourages more creative and coherent responses.
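
If you are calling the model through the `transformers` library rather than a UI, the same settings can be expressed as a generation config. This is a minimal sketch; note that `min_p` sampling requires a reasonably recent `transformers` release.

```python
# The sampling settings recommended above, as a transformers GenerationConfig.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=1.1,  # 1.1 or higher helps avoid repetition
    min_p=0.05,       # drops tokens below 5% of the top token's probability
)
```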

## Usage Instructions

I recommend using the [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) for an optimal experience. Load the model in `bf16` precision and enable FlashAttention-2 for improved performance.
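
If you prefer loading the model directly with `transformers` instead of the WebUI, a minimal sketch looks like this. The repo id is a placeholder, and `flash_attention_2` requires the `flash-attn` package and a supported GPU.

```python
# Load the model in bf16 with FlashAttention-2 (repo id is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Cat1.0"  # placeholder: substitute the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```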

### Installation Steps

1. **Clone the WebUI Repository**:

   ```bash
   git clone https://github.com/oobabooga/text-generation-webui
   cd text-generation-webui
   ```

2. **Install Dependencies**:

   ```bash
   pip install -r requirements.txt
   ```

3. **Download the Model**:

   Download the fine-tuned model from [Hugging Face](#) and place it in the `models` directory (a scripted alternative is sketched after these steps).

4. **Launch the WebUI**:

   ```bash
   # exact flag names can vary between webui versions
   python server.py --bf16 --use_flash_attention_2
   ```
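
As a scripted alternative to step 3, the model files can be pulled directly into the `models` directory with `huggingface_hub`. This is a sketch; the repo id is a placeholder.

```python
# Download the model files into text-generation-webui's models directory.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="your-username/Cat1.0",  # placeholder: substitute the real repo id
    local_dir="models/Cat1.0",
)
```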

### Sample Prompt Formats

You can interact with the model using either **chat format** or **chat-instruct format**. Here's an example:

```plaintext
Ryan is a computer engineer who works at Intel.

Ryan: Hey, how's it going Natalie?
Natalie: Good, how are things going with you, Ryan?
Ryan: Great, I'm doing just great.
```
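
Tying the earlier sketches together, here is one way to run the chat-format prompt above programmatically. It assumes the `model`, `tokenizer`, and `gen_config` objects from the previous sketches; trimming at the next `Ryan:` turn keeps the model from writing both sides of the conversation.

```python
# Generate Natalie's next turn from the chat-format example above.
prompt = (
    "Ryan is a computer engineer who works at Intel.\n\n"
    "Ryan: Hey, how's it going Natalie?\n"
    "Natalie:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    generation_config=gen_config,
    max_new_tokens=128,
)
completion = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(completion.split("\nRyan:")[0].strip())  # keep only Natalie's reply
```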

## Model Capabilities

Below are some examples showcasing the model's performance in various roleplay scenarios:

### Roleplay Examples

![Roleplay Log 1](https://i.ibb.co/Zz20Wxw/Screenshot-46.png)

![Roleplay Log 2](https://i.ibb.co/wWrdsZm/Screenshot-49-1.png)

![Roleplay Log 3](https://i.ibb.co/4PG7W2K/Screenshot-47.png)

### Text Generation Example

![Text Generation Example](https://i.ibb.co/J5ZVCnR/Screenshot-45.png)

## Limitations and Tips

While this model excels in chat and roleplaying scenarios, it isn't perfect. If you notice the model repeating itself or providing less coherent responses:

- **Increase the Temperature**: Setting the temperature higher (≥ `1.1`) can help generate more diverse and creative outputs.
- **Adjust the `min_p` Setting**: Setting `min_p` to at least `0.05` filters out very unlikely tokens, which helps keep responses coherent at higher temperatures.

## Acknowledgments

- **oobabooga text-generation-webui**: A powerful interface for running and interacting with language models. [GitHub Repository](https://github.com/oobabooga/text-generation-webui)
- **Hugging Face**: For hosting the model and providing a platform for collaboration. [Website](https://huggingface.co./)
- **Meta**: For pre-training the Llama-3.1-8B base model used for fine-tuning. [Model Card](https://huggingface.co./meta-llama/Llama-3.1-8B)

*For any issues or questions, please open an issue in this repository.*