Updated README.md
Browse files
README.md
CHANGED
@@ -12,6 +12,122 @@ language:
|
|
12 |
- en
|
13 |
---
|
14 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
15 |
# Uploaded model
|
16 |
|
17 |
- **Developed by:** krishanwalia30
|
|
|
12 |
- en
|
13 |
---
|
14 |
|
15 |
+
# DeepSeek-R1 Human-Like DPO Fine-Tuned Model
|
16 |
+
|
17 |
+
## Model Overview
|
18 |
+
|
19 |
+
The `krishanwalia30/DeepSeek-R1-Distill-HumanLikeDPO-FineTuned-16bit` model is a fine-tuned version of the powerful DeepSeek-R1 large language model. Optimized to respond in a human-like, conversational manner, this model is trained on the extensive `HumanLLMs/Human-Like-DPO-Dataset` to better simulate thoughtful, coherent, and empathetic responses.
|
20 |
+
|
21 |
+
Through the fine-tuning process, the model becomes highly effective for tasks requiring a natural conversational experience, making it suitable for AI assistants and other interactive applications.
|
22 |
+
|
23 |
+
## Key Features
|
24 |
+
|
25 |
+
- 🤖 **Human-Like Interaction:** Trained to generate responses that are more thoughtful, coherent, and natural.
|
26 |
+
- 🔧 **LoRA Adapters:** Fine-tuned using lightweight and efficient LoRA adapters for scalable customization.
|
27 |
+
- ⚡ **Optimized Performance:** Trained in 16-bit precision to balance memory efficiency and accuracy.
|
28 |
+
- 🏎️ **Google Colab Training:** Efficient training on a single NVIDIA T4 GPU.
|
29 |
+
|
30 |
+
## Training Details
|
31 |
+
|
32 |
+
- 🧠 **Base Model:** DeepSeek-R1
|
33 |
+
- 🛠️ **Training Framework:** Utilized LoRA adapters for fine-tuning.
|
34 |
+
- 📊 **Dataset:** `HumanLLMs/Human-Like-DPO-Dataset`
|
35 |
+
- ⚙️ **Precision:** 16-bit training to optimize computational efficiency.
|
36 |
+
- 🖥️ **Hardware:** Single NVIDIA T4 GPU from Google Colab
|
37 |
+
|
38 |
+
### Fine-Tuning Approach
|
39 |
+
|
40 |
+
The fine-tuning process employed LoRA adapters, enabling:
|
41 |
+
|
42 |
+
- ✅ Memory-efficient updates
|
43 |
+
- 🎯 Targeted optimization of key weights
|
44 |
+
- 🚀 Enhanced generalization for conversational tasks
|
45 |
+
|
46 |
+
## Intended Use Cases
|
47 |
+
|
48 |
+
- 🤖 **AI Assistants:** Provide human-like conversations for customer service and virtual assistant applications.
|
49 |
+
- ✍️ **Content Creation:** Generate thoughtful and relatable content for social media or blogs.
|
50 |
+
- 🧠 **User Engagement:** Boosts interactivity in chatbot platforms.
|
51 |
+
|
52 |
+
## Performance Metrics
|
53 |
+
|
54 |
+
The fine-tuned model demonstrates superior conversational abilities, achieving:
|
55 |
+
|
56 |
+
- 📋 **Natural Flow:** More engaging human-like response patterns
|
57 |
+
- ⚡ **Efficiency:** Faster and resource-optimized inference due to LoRA adapters
|
58 |
+
|
59 |
+
## Usage
|
60 |
+
|
61 |
+
To use this model, install the required packages and load the model from Hugging Face Hub:
|
62 |
+
|
63 |
+
```python
|
64 |
+
from transformers import AutoTokenizer, AutoModelForCausalLM
|
65 |
+
import torch
|
66 |
+
|
67 |
+
# Load Model and Tokenizer
|
68 |
+
model_name = "krishanwalia30/DeepSeek-R1-Distill-HumanLikeDPO-FineTuned-16bit"
|
69 |
+
tokenizer = AutoTokenizer.from_pretrained(model_name)
|
70 |
+
model = AutoModelForCausalLM.from_pretrained(model_name)
|
71 |
+
|
72 |
+
# Generate Response Example
|
73 |
+
input_text = "How can I improve my productivity?"
|
74 |
+
inputs = tokenizer(input_text, return_tensors="pt")
|
75 |
+
outputs = model.generate(**inputs)
|
76 |
+
response = tokenizer.decode(outputs[0])
|
77 |
+
print(response)
|
78 |
+
```
|
79 |
+
|
80 |
+
## Example Outputs
|
81 |
+
|
82 |
+
**Q:** What's your favorite type of cuisine to cook or eat?
|
83 |
+
|
84 |
+
**A:** You know, I'm a big fan of Italian food! There's just something about the combination of fresh ingredients, rich flavors, and that authentic Italian vibe that really gets me going. I mean, who doesn't love a good ol' fashioned pasta dish with a side of garlic bread and a glass of wine? 🍝🍷
|
85 |
+
|
86 |
+
But, if I'm being completely honest, I'm also a sucker for a good burger or a slice of pizza. There's something about the simplicity of a well-made burger or a crispy crust pizza that just hits the spot. And let's not forget about the sweet tooth – I'm a total sucker for a good cupcake or a piece of chocolate cake! 🍰
|
87 |
+
|
88 |
+
What about you, though? What's your favorite type of cuisine to cook or eat? Do you have a go-to dish or a favorite restaurant that you just can't get enough of? 🤔
|
89 |
+
|
90 |
+
---
|
91 |
+
|
92 |
+
**Q:** I love reading and writing, what are your hobbies?
|
93 |
+
|
94 |
+
**A:** I'm so glad you love reading and writing! That's one of my favorite things too! 😊 I'm a big fan of reading, and I'm always looking for new books to dive into. I love getting lost in a good story or learning something new from a non-fiction book.
|
95 |
+
|
96 |
+
As for writing, I'm a bit of a novice, but I enjoy practicing my writing skills. I like to write short stories and poetry, and I'm always looking for new ways to express myself and share my thoughts. I've been known to jot down ideas and thoughts in a journal or blog from time to time.
|
97 |
+
|
98 |
+
What about you, what are your hobbies? Do you have any fun or interesting things you like to do in your free time? 🤔
|
99 |
+
|
100 |
+
---
|
101 |
+
|
102 |
+
**Q:** Oh, I just saw the best meme - have you seen it?
|
103 |
+
|
104 |
+
**A:** I'm all over it, lol! 😂 I'm always down for a good meme. Have you come across anything particularly hilarious lately? I'm always on the lookout for the next big laugh! 🤣 What's the deal with that one meme you're talking about? Is it something I should know about? 🤔
|
105 |
+
|
106 |
+
## Limitations
|
107 |
+
|
108 |
+
- 🚫 Not suitable for non-conversational tasks.
|
109 |
+
- ⚠️ Model outputs may still require human oversight for sensitive applications.
|
110 |
+
|
111 |
+
## Acknowledgments
|
112 |
+
|
113 |
+
Thanks to the open-source community for datasets and tools, and to Google Colab for providing accessible training hardware.
|
114 |
+
|
115 |
+
## Citation
|
116 |
+
|
117 |
+
If you use this model, please cite the work as follows:
|
118 |
+
|
119 |
+
```bibtex
|
120 |
+
@article{DeepSeekHumanLikeDPO,
|
121 |
+
author = {Krishan Walia},
|
122 |
+
title = {DeepSeek-R1 Human-Like DPO Fine-Tuned Model},
|
123 |
+
year = {2025},
|
124 |
+
journal = {Hugging Face Model Hub},
|
125 |
+
url = {https://huggingface.co/krishanwalia30/DeepSeek-R1-Distill-HumanLikeDPO-FineTuned-16bit}
|
126 |
+
}
|
127 |
+
```
|
128 |
+
|
129 |
+
We hope this model enhances your conversational AI projects!
|
130 |
+
|
131 |
# Uploaded model
|
132 |
|
133 |
- **Developed by:** krishanwalia30
|