🧠 Recurv-Medical-Llama Model


Overview

The Recurv-Medical-Llama model is a fine-tuned version of Meta's Llama 3.1 8B, developed to provide precise, contextual assistance to healthcare professionals and researchers. Leveraging state-of-the-art instruction-tuning techniques, it excels at answering medical queries, assisting with anamnesis, and generating detailed explanations tailored to medical scenarios.

(Knowledge cut-off date: 22nd January, 2025)

🎯 Key Features

  • Optimized for medical-specific queries across various specialties.
  • Fine-tuned for clinical and research-oriented workflows.
  • Lightweight parameter-efficient fine-tuning with LoRA (Low-Rank Adaptation).
  • Multi-turn conversation support for context-rich interactions (see the chat sketch under Run Inference below).
  • Generates comprehensive answers and evidence-based suggestions.

🚀 Model Card

| Parameter | Details |
|---|---|
| Base Model | Meta Llama 3.1 8B |
| Fine-Tuning Framework | LoRA (Low-Rank Adaptation) |
| Dataset Size | 67,299 high-quality Q&A pairs |
| Context Length | 4,096 tokens |
| Training Steps | 100,000 |
| Model Size | 8 billion parameters |
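
For readers who want to reproduce a comparable fine-tuning setup, the sketch below shows a parameter-efficient LoRA configuration using Hugging Face's peft library. The rank, alpha, dropout, and target modules are illustrative assumptions; the exact hyperparameters used for Recurv-Medical-Llama are not published in this card.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model (gated on Hugging Face; requires an accepted license)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

lora_config = LoraConfig(
    r=16,                    # low-rank dimension (assumed)
    lora_alpha=32,           # LoRA scaling factor (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
    lora_dropout=0.05,       # adapter dropout (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only adapter weights train; the 8B base stays frozen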

📊 Training Data

Dataset Sources

The dataset comprises high-quality Q&A pairs curated from medical textbooks, research papers, and clinical guidelines.

| Source | Description |
|---|---|
| PubMed | Extracted insights from open-access medical research. |
| Clinical Guidelines | Data sourced from WHO, CDC, and specialty-specific guidelines. |
| EHR-Simulated Data | Synthetic datasets modeled on real-world patient records for anamnesis workflows. |

🛠️ Installation and Usage

1. Installation

pip install llama-cpp-python --prefer-binary --extra-index-url=https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu118
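
If you do not already have the GGUF file locally, you can fetch it with huggingface_hub (pip install huggingface_hub). The repository ID and filename below are assumptions based on this card; check the repository's file listing for the exact names.

from huggingface_hub import hf_hub_download

# Repo ID and filename are assumptions; adjust to the actual file listing.
model_path = hf_hub_download(
    repo_id="RecurvAI/Recurv-Medical-Llama",
    filename="recurv_medical_llama.gguf",
)
print(model_path)  # local cache path to pass to Llama(model_path=...)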

2. Load the Model

from llama_cpp import Llama

llm = Llama(
    model_path="recurv_medical_llama.gguf",
    n_ctx=2048,         # Context window (can be raised up to the 4,096-token training context)
    n_threads=4         # Number of CPU threads to use
)
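
Because the install command in step 1 pulls a CUDA-enabled (cuBLAS) wheel, you can optionally offload layers to the GPU. A minimal sketch:

# Offload all layers to the GPU; lower n_gpu_layers if VRAM is limited
llm = Llama(
    model_path="recurv_medical_llama.gguf",
    n_ctx=2048,
    n_gpu_layers=-1,
)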

3. Run Inference

prompt = "What is Paracetamol?"
output = llm(
    prompt,
    max_tokens=256,     # Maximum number of tokens to generate
    temperature=0.5,    # Controls randomness (0.0 = deterministic, 1.0 = creative)
    top_p=0.95,         # Nucleus sampling parameter
    stop=["###"],       # Optional stop words
    echo=True           # Include prompt in the output
)

# Print the generated text
print(output['choices'][0]['text'])
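
To use the multi-turn support listed under Key Features, llama-cpp-python also exposes an OpenAI-style chat API that carries earlier turns forward. A minimal sketch, assuming the GGUF metadata includes a chat template (llama-cpp-python falls back to a default one otherwise):

messages = [
    {"role": "system", "content": "You are a careful medical assistant."},
    {"role": "user", "content": "What is Paracetamol?"},
]

# First turn
reply = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.5)
messages.append(reply["choices"][0]["message"])

# Follow-up question that relies on the earlier context
messages.append({"role": "user", "content": "What is a typical maximum daily dose for adults?"})
followup = llm.create_chat_completion(messages=messages, max_tokens=256, temperature=0.5)
print(followup["choices"][0]["message"]["content"])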

🌟 Try The Model

🚀 Recurv-Medical-Llama on Our Website

🙌 Contributing

We welcome contributions to enhance Recurv-Medical-Llama. You can:

  • Share feedback or suggestions on the Hugging Face Model Hub.
  • Submit pull requests or issues for model improvements.

📜 License

This model is licensed under the MIT License.


📞 Community

For questions or support, connect with us through our community channels.


🤝 Acknowledgments

Special thanks to the medical community and researchers for their valuable insights and support in building this model. Together, we're advancing AI in healthcare.
