Update README.md
README.md

# Mistral-7B-codealpaca

I am thrilled to introduce my Mistral-7B-codealpaca model. This variant is optimized for coding tasks and shows promise as a coding companion for developers. I welcome contributions from testers and enthusiasts to help evaluate its performance.
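
If you just want to try it as a coding assistant, a minimal sketch along the following lines should work. It assumes this repository's id is `Nondzu/Mistral-7B-codealpaca` and that the model keeps the `[INST] … [/INST]` prompt format of its Mistral-7B-Instruct-v0.1 base; adjust both if they differ.

```python
# Minimal inference sketch. Assumptions: the repo id below and the
# Mistral-Instruct [INST] ... [/INST] prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nondzu/Mistral-7B-codealpaca"  # assumed id of this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 fits a 7B model on a single 24 GB GPU
    device_map="auto",
)

prompt = "[INST] Write a Python function that checks whether a string is a palindrome. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```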

## Training Details

I trained the model on 3x RTX 3090 GPUs for 118 hours.

[![Built with Axolotl](https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png)](https://github.com/OpenAccess-AI-Collective/axolotl)

## Quantised Model Links:

1.

Human eval plus: https://github.com/evalplus/evalplus

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63729f35acef705233c87909/azE6LU0qQ9E9u60t5VrMk.png)

Well, the results are better than I expected:
- Base: `{'pass@1': 0.47560975609756095}`
- Base + Extra: `{'pass@1': 0.4329268292682927}`
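
These two numbers are what the EvalPlus harness reports: "Base" is the original HumanEval tests, "Base + Extra" adds the extra HumanEval+ tests. The sketch below shows roughly how such samples can be generated for scoring; it assumes the `evalplus` package's `get_human_eval_plus`/`write_jsonl` helpers and reuses the hypothetical model id from the snippet above.

```python
# Rough sketch of generating HumanEval+ samples for EvalPlus scoring.
# Assumptions: the evalplus helpers below and the hypothetical model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from evalplus.data import get_human_eval_plus, write_jsonl

model_id = "Nondzu/Mistral-7B-codealpaca"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

def complete(prompt: str) -> str:
    """Greedy completion of a HumanEval+ prompt (pass@1 setting)."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

samples = [
    {"task_id": task_id, "solution": problem["prompt"] + complete(problem["prompt"])}
    for task_id, problem in get_human_eval_plus().items()
]
write_jsonl("samples.jsonl", samples)
# samples.jsonl is then scored with the EvalPlus evaluator, which prints the
# base and base+extra pass@1 figures quoted above.
```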

For reference, I've provided the performance of the original Mistral model alongside my Mistral-7B-code-16k-qlora model.

**[Nondzu/Mistral-7B-code-16k-qlora](https://huggingface.co/Nondzu/Mistral-7B-code-16k-qlora)**:

## Model Configuration:

Here is the configuration for my Mistral-7B-codealpaca-lora:

```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.1
# … (remaining options, including lora_target_modules, not shown here)
lora_target_linear: true
```
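
For readers less familiar with axolotl's keys: the LoRA options in this config map roughly onto a PEFT `LoraConfig`. The sketch below is illustrative only; the rank, alpha, dropout and target-module list are placeholders rather than values taken from this card (they live in the part of the config not shown above).

```python
# Illustrative only: a rough PEFT equivalent of axolotl's LoRA settings.
# The numeric values and target_modules are placeholders, not this card's values.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

lora_config = LoraConfig(
    r=32,               # placeholder rank
    lora_alpha=16,      # placeholder scaling
    lora_dropout=0.05,  # placeholder dropout
    # `lora_target_linear: true` in axolotl targets every linear projection,
    # which for Mistral means modules such as:
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # show how few parameters the adapter trains
```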

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63729f35acef705233c87909/5nPgL3ajROKf7dttf4BO0.png)

## Additional Projects:

For other related projects, you can check out: