---
license: llama2
language:
  - en
library_name: transformers
tags:
  - mental-health
  - psychology
  - llama-2
  - QLoRA
  - CBT
base_model: meta-llama/Llama-2-7b-chat-hf
pipeline_tag: text-generation
---

# MindMate-AI: Mental Health Assistant Fine-Tuned on Llama-2-7b

![HF](https://img.shields.io/badge/HuggingFace-%F0%9F%A4%97-yellow) ![LLaMA](https://img.shields.io/badge/LLaMA-2-ff69b4) ![QLoRA](https://img.shields.io/badge/QLoRA-4bit-blue)

## 🚀 Features

- 🧠 CBT-focused response generation
- 🚀 4-bit quantization with QLoRA
- 💡 Context-aware mental health dialogues
- 📈 Optimized for therapeutic conversations
- 🤖 Hugging Face Transformers/trl/peft integration

## 📋 Requirements

### Hardware & Software
- NVIDIA GPU (16 GB+ VRAM recommended)
- CUDA 11.7+
- Python 3.9+

### Accounts
1. Hugging Face account
2. Access to Meta's Llama-2 models
3. Kaggle account (for dataset access)

## ⚙️ Installation

```bash
# Base dependencies
pip install -q accelerate==0.21.0 peft==0.4.0 bitsandbytes==0.40.2 transformers==4.31.0 trl==0.4.7

# Additional requirements
pip install -q datasets
pip install -q tensorboard

# Environment setup
export HF_TOKEN="your_huggingface_token"
export KAGGLE_CONFIG_DIR="/path/to/kaggle.json"
```

## ⚠️ Important Legal Disclaimers

💼 **This is a non-commercial academic project. It is not approved for medical use.**

```python
# Required in any interface code
print("DISCLAIMER: Academic prototype - outputs may be inaccurate. Not for real-world use.")
```

This project uses Meta's Llama-2-7b model under the [LLAMA 2 Community License](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). **You MUST:**

1. Maintain the original [Llama-2 LICENSE](https://github.com/facebookresearch/llama/blob/main/LICENSE) file
2. Keep this notice visible in all distributions
3. Not use the model for prohibited activities (see Meta's Acceptable Use Policy)

**Copyrights:**
- Base Model: Copyright © Meta Platforms, Inc.

## 📜 Intended Use

- Experimental mental health conversation support only
- Strictly for academic research (psychology/NLP education)

## ⚠️ Limitations & Risks

- May generate inaccurate or harmful mental health suggestions
- No clinical validity: do not use for diagnosis or therapy
- Potential bias in training data (document your dataset source)

## 🛡️ Ethical Considerations

- Retains [Meta's Llama-2 safeguards](https://ai.meta.com/llama/use-policy/)
- Additional mental health-specific safety mitigations: [describe yours]
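
## 💻 Example Usage

The card does not include loading code, so the snippet below is a minimal sketch of how the pinned stack (transformers 4.31, peft 0.4, bitsandbytes 0.40) can load the gated base model in 4-bit and attach a QLoRA adapter. The adapter path `./mindmate-qlora-adapter` and the sample prompt are hypothetical placeholders, not values from this card; adjust them to your own checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_path = "./mindmate-qlora-adapter"  # hypothetical local adapter directory

# 4-bit NF4 quantization config, matching a typical QLoRA setup
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Requires access to Meta's gated repo (e.g. via `huggingface-cli login`)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the fine-tuned QLoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(model, adapter_path)

# Required disclaimer in any interface code (see above)
print("DISCLAIMER: Academic prototype - outputs may be inaccurate. Not for real-world use.")

# Llama-2-chat instruction format; the prompt text is only an example
prompt = "<s>[INST] I have been feeling anxious before exams. What can I try? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

NF4 quantization with a float16 compute dtype mirrors the usual QLoRA inference setup; if your adapter was trained with different settings, match them here.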