Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


DOCTOR - bnb 4bits
- Model creator: https://huggingface.co/DLI-Lab/
- Original model: https://huggingface.co/DLI-Lab/DOCTOR/
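The sketch below shows one common way to obtain a bnb 4-bit model with `transformers` and `bitsandbytes`: quantizing the original checkpoint at load time. It is an illustration only; it assumes a decoder-only (causal) LM backbone, and the nf4/fp16 settings are common defaults rather than the exact recipe behind this upload.

```python
# Minimal sketch: load the original checkpoint in bnb 4-bit precision.
# Assumptions: decoder-only (causal) LM backbone; nf4/fp16 are common defaults,
# not necessarily the settings used for this particular upload.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "DLI-Lab/DOCTOR"  # original full-precision checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize linear layers to 4-bit on load
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # run compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)
```

Note that `device_map="auto"` requires `accelerate`, and `bitsandbytes` must be installed; a pre-quantized upload like this one can usually be loaded directly with `from_pretrained` without an explicit quantization config.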

Original model description:
---
license: apache-2.0
datasets:
- DLI-Lab/DONUT
widget:
- text: 'A: Hi, Viggo. How are you doing today?\nB: Hey, Yovani. I’m doing all right. Thanks for asking.\nA: No problem. I saw that you left your coffee mug on the counter this morning. Did you forget to take it with you?\nB: Yeah, I did. Thanks for grabbing it for me.\nA: No problem at all. I know how busy you are and I didn’t want you to have to come back for it later.\nB: You’re a lifesaver, Yovani. Seriously, thank you so much.'
  example_title: 'example 1'
---
A dialogue commonsense reasoner that generates Chain-of-Thought knowledge in a multi-hop manner, given a dialogue history. DOCTOR is trained on [DONUT](https://huggingface.co/datasets/DLI-Lab/DONUT), which is also available on Hugging Face.
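Continuing the loading sketch above, the following is a hypothetical way to query the reasoner: feed it the dialogue history (formatted with `A:`/`B:` turns, as in the widget example) and decode the generated continuation as the rationale. The exact prompt template used during training may differ, so treat this purely as an illustration.

```python
# Illustrative only: the prompt format follows the widget example above;
# the template used at training time may differ.
dialogue = (
    "A: Hi, Viggo. How are you doing today?\n"
    "B: Hey, Yovani. I’m doing all right. Thanks for asking.\n"
    "A: No problem. I saw that you left your coffee mug on the counter this morning."
)

inputs = tokenizer(dialogue, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Strip the prompt tokens and keep only the generated Chain-of-Thought rationale.
rationale = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(rationale)
```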
## Links for Reference

- **Demo:** https://dialoguecot.web.app/
- **Repository:** https://github.com/kyle8581/DialogueCoT
- **Paper:** https://arxiv.org/abs/2310.09343
- **Point of Contact:** [email protected]

![](./figure2_overall.png)
For more details, see our paper [Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents](https://arxiv.org/abs/2310.09343).
If you find this model helpful, please consider citing our paper!

**BibTeX:**
```bibtex
@misc{chae2023dialogue,
      title={Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents},
      author={Hyungjoo Chae and Yongho Song and Kai Tzu-iunn Ong and Taeyoon Kwon and Minjin Kim and Youngjae Yu and Dongha Lee and Dongyeop Kang and Jinyoung Yeo},
      year={2023},
      eprint={2310.09343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```