basilePlus committed
Commit: 9874c41
Parent(s): 634522b

Update README.md

README.md CHANGED
@@ -9,6 +9,8 @@ base_model: meta-llama/Meta-Llama-3-8B-Instruct
 model-index:
 - name: llama3-8b-schopenhauer
   results: []
+language:
+- en
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,19 +18,19 @@ should probably proofread and complete it, then remove this comment. -->
 
 # llama3-8b-schopenhauer
 
-
+![llama_schopenhauer.png](https://cdn-uploads.huggingface.co/production/uploads/643c1c055fcffe09fb6874f1/fstVI_o29OyepyL2nIZZ_.png)
 
-## Model description
 
-
+This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on a synthetic dataset of argumentative conversations.
 
-##
+## Model description
 
-
+The model has been trained to be an argumentative expert, following the deterministic rhetorical guidelines laid out by Schopenhauer in *The Art of Being Right*.
+The model aims to show how persuasive a model can become when a few simple deterministic argumentative guidelines are introduced.
 
 ## Training and evaluation data
 
-
+The model has been trained using LoRA on a small synthetic dataset that can still be improved in both size and quality. The model has shown strong performance at giving short, punchy answers in argumentative conversations. No argumentation metric has been implemented; an interesting argument evaluation benchmark can be found in [Cabrio, E., & Villata, S. (2014). Towards a Benchmark of Natural Language Arguments. INRIA Sophia Antipolis, France.](https://arxiv.org/pdf/1405.0941v1)
 
 ## Training procedure
 
@@ -43,10 +45,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - num_epochs: 3.0
 
-### Training results
-
-
-
 ### Framework versions
 
 - PEFT 0.10.0
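The training procedure pins only a linear LR scheduler, 3.0 epochs, and PEFT 0.10.0. The sketch below shows one way such a LoRA run could be configured; the rank, alpha, target modules, and learning rate are illustrative assumptions, not values from the card:

```python
# Sketch of a LoRA setup consistent with the card's stated hyperparameters.
# Only lr_scheduler_type and num_train_epochs come from the card; the rest
# (rank, alpha, target modules, learning rate) are assumed for illustration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

lora = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable

args = TrainingArguments(
    output_dir="llama3-8b-schopenhauer",
    lr_scheduler_type="linear",  # from the card
    num_train_epochs=3.0,        # from the card
    learning_rate=2e-4,          # assumed; not stated in the card
)
# A transformers Trainer fed the tokenized synthetic conversations would
# complete the run; the dataset itself is not linked from the card.
```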