Welcome to **MistraMystic**—a conversational model fine-tuned from Mistral-7B.
## Model Name: MistraMystic

- **Architecture**: Mistral-7B v0.3
- **Training Objective**: Personality-Enhanced Conversational AI
- **Training Dataset**: Fine-tuned on conversational data to reflect Big 5 personality traits
  - JIC: the [Journal Intensive Conversations](https://huggingface.co/datasets/chocokiddo/jic) dataset
- **Training Duration**: 4-5 days on an A100 GPU (training parameters are listed in the appendix of the paper)

---
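Since the base model is Mistral-7B v0.3, prompts follow Mistral's `[INST]` chat convention. The sketch below shows that formatting for a multi-turn conversation; the `build_mistral_prompt` helper is illustrative only (not part of this release), so check the released tokenizer's chat template for the authoritative format.

```python
# Sketch: format alternating user/assistant turns for a Mistral-style chat model.
# The [INST] wrapping follows Mistral's published chat convention; verify it
# against the released tokenizer's chat template before relying on it.

def build_mistral_prompt(turns):
    """Join alternating user/assistant turns into a single prompt string."""
    parts = ["<s>"]
    for i, turn in enumerate(turns):
        if i % 2 == 0:
            # user turn, wrapped in instruction tags
            parts.append(f"[INST] {turn} [/INST]")
        else:
            # assistant turn, closed with the end-of-sequence token
            parts.append(f" {turn}</s>")
    return "".join(parts)

print(build_mistral_prompt([
    "How was your weekend?",
    "Great, I went hiking!",
    "Where did you go?",
]))
```

Equivalently, `tokenizer.apply_chat_template(...)` from `transformers` produces this formatting directly once the tokenizer is loaded, and is the safer choice in practice.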
Stay tuned for more information on MistraMystic!

---

## Citation

```bibtex
@inproceedings{pal-etal-2025-beyond,
    title = "Beyond Discrete Personas: Personality Modeling Through Journal Intensive Conversations",
    author = "Pal, Sayantan and
      Das, Souvik and
      Srihari, Rohini K.",
    editor = "Rambow, Owen and
      Wanner, Leo and
      Apidianaki, Marianna and
      Al-Khalifa, Hend and
      Eugenio, Barbara Di and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.470/",
    pages = "7055--7074",
    abstract = "Large Language Models (LLMs) have significantly improved personalized conversational capabilities. However, existing datasets like Persona Chat, Synthetic Persona Chat, and Blended Skill Talk rely on static, predefined personas. This approach often results in dialogues that fail to capture human personalities' fluid and evolving nature. To overcome these limitations, we introduce a novel dataset with around 400,000 dialogues and a framework for generating personalized conversations using long-form journal entries from Reddit. Our approach clusters journal entries for each author and filters them by selecting the most representative cluster, ensuring that the retained entries best reflect the author`s personality. We further refine the data by capturing the Big Five personality traits{---}openness, conscientiousness, extraversion, agreeableness, and neuroticism{---}ensuring that dialogues authentically reflect an individual`s personality. Using Llama 3 70B, we generate high-quality, personality-rich dialogues grounded in these journal entries. Fine-tuning models on this dataset leads to an 11{\%} improvement in capturing personality traits on average, outperforming existing approaches in generating more coherent and personality-driven dialogues."
}
```