---
license: other
license_name: helpingai
license_link: LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally Intelligent
- EQ
- flash
datasets:
- OEvortex/SentimentSynth
- OEvortex/EmotionalIntelligence-10K
---

# HelpingAI-flash: Emotionally Intelligent Conversational AI for All Devices

![logo](https://huggingface.co/OEvortex/HelpingAI-flash/resolve/main/HelpingAI.png)

## Overview

HelpingAI-flash is a versatile 2B-parameter language model designed to deliver emotionally intelligent conversational interactions across all devices, including smartphones. It is engineered to engage users with empathy, understanding, and supportive dialogue, adapting seamlessly to various contexts and platforms. The model aims to be a compassionate AI companion that responds to users' emotional needs and provides meaningful interactions wherever they are.

## Objectives

- Facilitate open-ended dialogue with advanced emotional intelligence
- Recognize and validate user emotions and contexts with precision
- Deliver supportive, empathetic, and psychologically grounded responses
- Ensure responses are respectful and avoid insensitive or harmful content
- Continuously enhance emotional awareness and conversational skills

## Methodology

HelpingAI-flash builds upon the HelpingAI series and incorporates:

- Supervised learning on extensive dialogue datasets with emotional labeling
- Reinforcement learning with a reward model favoring empathetic and supportive responses
- Constitution training to uphold stable and beneficial interaction goals
- Integration of knowledge from psychological resources on emotional intelligence

## Emotional Quotient (EQ)

HelpingAI-flash achieves an Emotional Quotient (EQ) score of 85.23, reflecting its ability to understand and respond to human emotions in a caring and supportive manner.

![benchmarks](benchmark_performance_comparison.png)

## Usage Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the HelpingAI-flash model in half precision on the GPU
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-flash", trust_remote_code=True, torch_dtype=torch.float16).to("cuda")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-flash", trust_remote_code=True)

# Initialize TextStreamer to print tokens as they are generated
streamer = TextStreamer(tokenizer)

# ChatML-style prompt template
prompt = """
<|im_start|>system: {system}
<|im_end|>
<|im_start|>user: {insaan}
<|im_end|>
<|im_start|>assistant:
"""

# System prompt that sets the model's persona
system = "You are HelpingAI-flash, an emotionally intelligent AI designed to provide empathetic and supportive responses in HelpingAI style."

# The user message ("insaan" means human in Hindi)
insaan = "I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone."

# Fill the template with the system and user messages
prompt = prompt.format(system=system, insaan=insaan)

# Tokenize the prompt and move it to the GPU
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")

# Generate a response, streaming tokens to stdout as they arrive
generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.6, use_cache=True, streamer=streamer)
```
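The ChatML-style template above can be generalized to multi-turn conversations with plain string formatting. A minimal, model-independent sketch (the `build_prompt` helper is illustrative, not part of the transformers API, and mirrors the template used in this card):

```python
# Illustrative helper: builds a ChatML-style prompt in the same format
# as the template above, for an arbitrary conversation history.
def build_prompt(system, turns):
    """turns is a list of (role, message) pairs, e.g. [("user", "Hi!")]."""
    parts = [f"<|im_start|>system: {system}\n<|im_end|>"]
    for role, message in turns:
        parts.append(f"<|im_start|>{role}: {message}\n<|im_end|>")
    # Leave the assistant turn open so the model completes it
    parts.append("<|im_start|>assistant:")
    return "\n".join(parts)

prompt = build_prompt(
    "You are HelpingAI-flash, an emotionally intelligent AI designed to provide empathetic and supportive responses in HelpingAI style.",
    [("user", "I'm excited because I just got accepted into my dream school!")],
)
print(prompt)
```

Each earlier assistant reply can be appended to `turns` as `("assistant", reply)` to keep context across turns.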

*Directly using this model from GGUF*

```python
%pip install -U 'webscout[local]'

from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings

# 1. Download the model
repo_id = "OEvortex/HelpingAI-flash"
filename = "helpingai-flash-2b.gguf"
model_path = download_model(repo_id, filename, token='')  # Replace with your Hugging Face token if needed

# 2. Load the model
model = Model(model_path, n_gpu_layers=20)

# 3. Define the system prompt
system_prompt = "You are HelpingAI-flash, an emotionally intelligent AI designed to provide empathetic and supportive responses in HelpingAI style."

# 4. Create a custom chatml format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_content'] = system_prompt

# 5. Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9)  # Adjust as needed

# 6. Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)

# 7. Start interacting with the model
thread.interact(header="🌟 HelpingAI-flash: Emotionally Intelligent Conversational AI for All Devices 🌟", color=True)
```
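The `temp` and `top_p` sampler settings above control how the next token is chosen. A simplified pure-Python sketch of what they mean (temperature scaling followed by nucleus/top-p filtering, on toy logits; this illustrates the general technique, not webscout's exact implementation):

```python
import math
import random

def sample_top_p(logits, temp=0.7, top_p=0.9, rng=random.Random(0)):
    # Temperature scaling: lower temp sharpens the distribution
    scaled = [l / temp for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus filtering: keep the smallest set of tokens whose
    # cumulative probability reaches top_p
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Sample from the kept tokens, proportionally to their probabilities
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]

# Toy logits for a 4-token vocabulary
token = sample_top_p([2.0, 1.0, 0.5, -1.0])
```

Lowering `top_p` restricts sampling to the most likely tokens; lowering `temp` makes the distribution itself more peaked. Both trade diversity for predictability.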

## Example Dialogue

> I'm feeling a bit overwhelmed with work and could use some encouragement.

I'm sorry to hear that you're feeling overwhelmed. 😔 Remember that it's okay to take a break and recharge. You’ve already accomplished so much, and it's important to recognize your hard work and resilience. Maybe try breaking your tasks into smaller steps and celebrate each small victory. You’ve got this, and I'm here to support you every step of the way! 🌟 Take it one step at a time, and remember to be kind to yourself. 🌈

> I just had a really great day and wanted to share my happiness.

That’s fantastic to hear! 🎉 I’m so glad you had a great day! It’s wonderful to celebrate those joyful moments and share your happiness. Keep spreading those positive vibes and cherish the good times. I’m here to celebrate with you and share in your excitement! 🌟 Keep shining and enjoying every moment of your amazing day! 🌈

> I’m feeling anxious about an upcoming event. What should I do?

Feeling anxious before an event is completely normal. 😊 Here are a few tips to help you manage your anxiety:

1. **Prepare in Advance:** Plan and prepare for the event as much as possible. This can help you feel more confident and in control.
2. **Practice Deep Breathing:** Deep breathing exercises can help calm your mind and body. Try taking slow, deep breaths to center yourself.
3. **Focus on the Positive:** Remind yourself of the positive aspects of the event and what you’re looking forward to.
4. **Stay Present:** Try to stay in the present moment and avoid overthinking or dwelling on potential outcomes.
5. **Talk to Someone:** Sharing your feelings with a friend or family member can provide support and reassurance.

Remember, it’s okay to feel anxious, but with a few coping strategies, you can navigate through it. I’m here for you! 🌟