rwitz committed
Commit 4743f67
1 Parent(s): c87cebd

Update README.md

Files changed (1): README.md (+5 -11)
README.md CHANGED
@@ -12,7 +12,7 @@ pipeline_tag: text-generation
 
 ## Overview
 
-This repository provides a fine-tuned version of the **Llama-3-1-8b base model**, optimized for roleplaying, logic, and reasoning tasks. Utilizing iterative fine-tuning and self-generated chat logs, this model delivers engaging and coherent conversational experiences.
+Cat1.0 is a fine-tuned version of the **Llama-3.1-8B base model**, optimized for roleplaying, logic, and reasoning tasks. Built through iterative fine-tuning on human-AI chat logs, it performs well across a wide range of chat scenarios.
 
 ## Model Specifications
 
@@ -30,7 +30,7 @@ This repository provides a fine-tuned version of the **Llama-3-1-8b base model**
 
 ## Recommended Settings
 
-To achieve optimal performance with this model, we recommend the following settings:
+To achieve optimal performance with this model, I recommend the following settings:
 
 - **Minimum Probability (`min_p`)**: `0.05`
 - **Temperature**: `1.1` or higher
@@ -39,7 +39,7 @@ To achieve optimal performance with this model, we recommend the following setti
 
 ## Usage Instructions
 
-We recommend using the [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) for an optimal experience. Load the model in `bf16` precision and enable `flash-attention2` for improved performance.
+I recommend using the [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) for an optimal experience. Load the model in `bf16` precision and enable `flash-attention2` for improved performance.
 
 ### Installation Steps
 
@@ -84,14 +84,13 @@ Ryan: Great, I'm just doing just great.
 
 ## Model Capabilities
 
-Below are some examples showcasing the model's performance in various tasks:
+Below are some examples showcasing the model's performance in various roleplay scenarios:
 
 ### Roleplay Examples
 
 
 ![Roleplay Log 1](https://i.ibb.co/0ngp6zf/Screenshot-42.png)
 
-
 ![Roleplay Log 2](https://i.ibb.co/GQ8Ffn1/Screenshot-43.png)
 
 ![Roleplay Log 3](https://i.ibb.co/4JkCjtf/Screenshot-44.png)
@@ -107,11 +106,6 @@ While this model excels in chat and roleplaying scenarios, it isn't perfect. If
 
 - **oobabooga text-generation-webui**: A powerful interface for running and interacting with language models. [GitHub Repository](https://github.com/oobabooga/text-generation-webui)
 - **Hugging Face**: For hosting the model and providing a platform for collaboration. [Website](https://huggingface.co/)
-
-## License
-
-[Specify the license under which the model is released, e.g., MIT License, Apache 2.0, etc.]
-
----
+- **Meta**: For pre-training the Llama-3.1-8B base model that was used for fine-tuning. [Model Card](https://huggingface.co/meta-llama/Llama-3.1-8B)
 
 *For any issues or questions, please open an issue in this repository.*
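The updated usage instructions call for loading the model in `bf16` with `flash-attention2`, and the recommended settings specify `min_p` 0.05 at a temperature of 1.1 or higher. The sketch below shows how those settings might look with the Hugging Face `transformers` API instead of the recommended text-generation-webui; it is illustrative only and not part of this commit. The model id, prompt, and chat format are placeholders, and `min_p` sampling assumes a recent `transformers` release.

```python
# Illustrative sketch only (not from this repository's README): load in bf16
# with FlashAttention-2 and sample with the recommended min_p / temperature.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rwitz/cat1.0"  # placeholder -- substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # bf16 precision, as recommended
    attn_implementation="flash_attention_2",  # requires the flash-attn package
    device_map="auto",
)

# Placeholder prompt in the plain "Name:" chat style shown in the README example.
prompt = "Ryan: Hey, how are you doing today?\nCat:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.1,   # recommended: 1.1 or higher
    min_p=0.05,        # recommended minimum probability
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```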
 