# Recurv-Medical-Llama

## Overview
Recurv-Medical-Llama is a fine-tuned version of Meta's Llama 3.1 8B, built to provide precise, contextual assistance for healthcare professionals and researchers. Leveraging state-of-the-art instruction-tuning techniques, it excels at answering medical queries, assisting with anamnesis, and generating detailed explanations tailored to medical scenarios.

(Knowledge cut-off date: 22 January 2025)
## Key Features

- Optimized for medical-specific queries across a range of specialties.
- Fine-tuned for clinical and research-oriented workflows.
- Lightweight, parameter-efficient fine-tuning with LoRA (Low-Rank Adaptation); see the sketch after this list.
- Multi-turn conversation support for context-rich interactions.
- Generates comprehensive answers and evidence-based suggestions.
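The card does not publish the exact LoRA hyperparameters, so the following is only a minimal sketch of what a comparable parameter-efficient setup looks like with Hugging Face PEFT. The rank, alpha, dropout, and target modules are illustrative assumptions, not the values used to train this model.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Start from the same base model named in the model card
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

# Hypothetical LoRA configuration; the real hyperparameters are not published
lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # common choice for Llama attention layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```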
## Model Card

| Parameter | Details |
|---|---|
| Base Model | Meta Llama 3.1 8B |
| Fine-Tuning Framework | LoRA |
| Dataset Size | 67,299 high-quality Q&A pairs |
| Context Length | 4,096 tokens |
| Training Steps | 100,000 |
| Model Size | 8 billion parameters |
## Training Data

### Dataset Sources

The dataset comprises high-quality Q&A pairs curated from medical textbooks, research papers, and clinical guidelines.

| Source | Description |
|---|---|
| PubMed | Insights extracted from open-access medical research. |
| Clinical Guidelines | Data sourced from WHO, CDC, and specialty-specific guidelines. |
| EHR-Simulated Data | Synthetic datasets modeled on real-world patient records for anamnesis workflows. |
## Installation and Usage

### 1. Installation

```bash
pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118
```
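The extra index above only supplies prebuilt cuBLAS (CUDA 11.8, AVX2) wheels. If you do not have a CUDA-capable GPU, a plain `pip install llama-cpp-python` works as well and builds a CPU-only package.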
### 2. Load the Model

```python
from llama_cpp import Llama

# Load the GGUF model; adjust the path to where you downloaded the file
llm = Llama(
    model_path="recurv_medical_llama.gguf",
    n_ctx=2048,   # context window (the model supports up to 4,096 tokens)
    n_threads=4,  # number of CPU threads to use
)
```
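If you installed the cuBLAS build from step 1, you can offload layers to the GPU. This is a standard llama-cpp-python option rather than anything specific to this model; a minimal variant:

```python
# GPU-accelerated loading (requires the cuBLAS build from step 1)
llm = Llama(
    model_path="recurv_medical_llama.gguf",
    n_ctx=4096,       # the model card lists a 4,096-token context length
    n_gpu_layers=-1,  # offload all layers to the GPU; set to 0 for CPU-only
)
```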
### 3. Run Inference

```python
prompt = "What is Paracetamol?"

output = llm(
    prompt,
    max_tokens=256,   # maximum number of tokens to generate
    temperature=0.5,  # controls randomness (0.0 = deterministic, 1.0 = creative)
    top_p=0.95,       # nucleus sampling parameter
    stop=["###"],     # optional stop sequences
    echo=True,        # include the prompt in the output
)

# Print the generated text
print(output["choices"][0]["text"])
```
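The key features list multi-turn conversation support. Below is a sketch of a multi-turn exchange using llama-cpp-python's chat API, assuming the GGUF file embeds a chat template (llama-cpp-python falls back to a generic one otherwise). The example questions are placeholders.

```python
# First turn
messages = [
    {"role": "system", "content": "You are an assistant for healthcare professionals."},
    {"role": "user", "content": "What is Paracetamol?"},
]
response = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.5)
reply = response["choices"][0]["message"]["content"]
print(reply)

# Second turn: append the assistant's reply so the model keeps the conversation context
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "How does it differ from ibuprofen?"})
response = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.5)
print(response["choices"][0]["message"]["content"])
```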
## Try the Model

Try Recurv-Medical-Llama on our website.
## Contributing

We welcome contributions to enhance Recurv-Medical-Llama. You can:

- Share feedback or suggestions on the Hugging Face Model Hub.
- Submit pull requests or issues for model improvements.
## License

This model is licensed under the MIT License.
## Community
For questions or support, connect with us via:
- Twitter: RecurvAI
- Email: [email protected]
## Acknowledgments

Special thanks to the medical community and researchers for their valuable insights and support in building this model. Together, we're advancing AI in healthcare.