---
title: Medical3000
tags:
  - healthcare
  - NLP
  - dialogues
  - LLM
  - fine-tuned
license: unknown
datasets:
  - Kabatubare/medical-guanaco-3000
---

# Medical3000 Model Card

This is the model card for Medical3000, a fine-tuned version of TinyPixel/Llama-2-7B-bf16-sharded that specializes in medical dialogues.

## Model Details

### Base Model

- Name: TinyPixel/Llama-2-7B-bf16-sharded
- Description: A sharded bfloat16 checkpoint of Meta's Llama 2 7B, a decoder-only transformer language model for general-purpose text generation.

### Fine-tuned Model

- Name: Yo!Medical3000
- Fine-tuned on: Kabatubare/medical-guanaco-3000
- Description: This model is fine-tuned to specialize in medical dialogues and healthcare applications.

## Architecture and Training Parameters

### Architecture

- LoRA Attention Dimension: 64
- LoRA Alpha Parameter: 16
- LoRA Dropout: 0.1
- Precision: 4-bit (bitsandbytes)
- Quantization Type: nf4
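
Expressed with the `peft` and `bitsandbytes` integrations in `transformers`, the settings above would correspond to a configuration roughly like this sketch (the compute dtype and `task_type` are assumptions, not stated in the card):

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization (bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed; matches the bf16 base checkpoint
)

# LoRA adapter: attention dimension (r) 64, alpha 16, dropout 0.1
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",  # assumed for a causal LM fine-tune
)
```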

### Training Parameters

- Epochs: 3
- Batch Size: 4
- Gradient Accumulation Steps: 1
- Max Gradient Norm: 0.3
- Learning Rate: 3e-4
- Weight Decay: 0.001
- Optimizer: paged_adamw_32bit
- LR Scheduler: cosine
- Warmup Ratio: 0.03
- Logging Steps: 25
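
These hyperparameters map directly onto Hugging Face `TrainingArguments`; a sketch follows (the output directory is a hypothetical placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./medical3000",  # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    max_grad_norm=0.3,
    learning_rate=3e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    logging_steps=25,
)
```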

## Datasets

### Base Model Dataset

- Name: Not individually disclosed
- Description: Meta reports that the Llama 2 family was pretrained on roughly 2 trillion tokens of publicly available online data; the exact composition is not published.

### Fine-tuning Dataset

- Name: Kabatubare/medical-guanaco-3000
- Description: A reduced and balanced dataset curated from a larger medical dialogue corpus. It covers a broad range of medical topics and is suitable for training healthcare chatbots and for medical NLP research.
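
To reproduce or inspect the fine-tuning data, it can be loaded from the Hub with the `datasets` library (the `train` split name is an assumption):

```python
from datasets import load_dataset

# Pull the fine-tuning dialogues from the Hugging Face Hub
dataset = load_dataset("Kabatubare/medical-guanaco-3000", split="train")  # split name assumed
print(dataset[0])  # inspect a single dialogue example
```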

## Usage
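
The snippet below is a minimal inference sketch. The Hub repository id is an assumption (the card lists the model name without a namespace) and should be replaced with the model's actual id.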

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id; replace with the actual repository for this model
model_id = "Kabatubare/Medical3000"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Use the model for inference
inputs = tokenizer("Patient: I have a persistent cough and mild fever.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```