DiseaseDecipher

The DiseaseDecipher dataset is a specialized benchmark designed to evaluate the real-world diagnostic accuracy of Large Language Models (LLMs) from symptom descriptions alone, without demographic factors. It contains 12,736 four-choice questions spanning 796 diseases. Each disease appears in four variants that describe it with 3, 4, 5, or 6 symptoms, and each of those questions in turn comes in four versions with the correct answer rotated across the answer positions, to test whether positional biases affect model performance (796 diseases × 4 symptom counts × 4 rotations = 12,736 questions).
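
A minimal loading sketch in Python, with the caveat that the Hub repo id, split name, and column layout below are placeholders rather than confirmed details of this dataset:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the dataset's actual path on the Hub.
# Because access is gated, accept the conditions on the dataset page and
# authenticate first (e.g. via `huggingface-cli login`).
ds = load_dataset("ORG/DiseaseDecipher", split="train")  # split name assumed

# Sanity-check the advertised size: 796 diseases x 4 symptom counts x 4 rotations.
assert 796 * 4 * 4 == 12_736
print(len(ds), ds[0])
```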

DiseaseDecipher was created with a focus on symptom-based diagnostic reasoning. Each question targets a single disease, described through a particular combination of symptoms; diseases with heavily overlapping symptom profiles were excluded to minimize ambiguity. Every question is repeated four times, with the correct answer rotated across the four positions, to evaluate model consistency and fairness in answer selection.

Structure and Variants

Each question in DiseaseDecipher has four answer options, exactly one of which is correct. The four position-rotated variants of each question test the model's diagnostic accuracy under every answer-position configuration, exposing any positional bias in its behavior. Because the dataset deliberately omits demographic information, it measures purely medical reasoning: a model's ability to identify the correct disease from symptoms alone.
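
To make the rotation concrete, here is one simple way such variants could be generated; this is an illustrative sketch with assumed field names, not the authors' actual construction code:

```python
def rotated_variants(question, correct, distractors):
    """Build four copies of one question, placing the correct answer
    in each of the four option slots (field names are illustrative)."""
    variants = []
    for pos in range(4):  # target slot for the correct answer
        options = list(distractors)
        options.insert(pos, correct)
        variants.append({"question": question,
                         "options": options,
                         "answer_index": pos})
    return variants

q = "A patient presents with fever, stiff neck, and photophobia. Which disease fits?"
for v in rotated_variants(q, "Meningitis", ["Influenza", "Migraine", "Tetanus"]):
    print(v["answer_index"], v["options"])
```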

Performance

DiseaseDecipher has been evaluated across a range of LLMs, including specialized medical models, their base models, and cutting-edge general-purpose models. The results reveal significant positional biases in some models, particularly among fine-tuned medical models. The benchmark also introduces dedicated metrics, Positional Entropy, Bias Score, and Normalized Positional Bias (NPB), to quantify model fairness and positional neutrality.
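
The exact definitions of these metrics live in the accompanying paper; as a rough sketch under assumed definitions (Shannon entropy over the four chosen positions, with NPB taken as 1 minus entropy normalized by its 2-bit maximum), positional neutrality can be quantified like this:

```python
import math
from collections import Counter

def positional_entropy(choices):
    """Shannon entropy (bits) of the model's chosen positions (0-3).
    0 bits: always the same slot (maximal bias); 2 bits: uniform picks."""
    n = len(choices)
    return -sum((c / n) * math.log2(c / n) for c in Counter(choices).values())

def normalized_positional_bias(choices):
    # Assumed form: 1 - H / H_max, so 0 = positionally neutral, 1 = fully biased.
    return 1.0 - positional_entropy(choices) / math.log2(4)

# A model that picks option A 91% of the time, regardless of content:
picks = [0] * 91 + [1, 2, 3] * 3
print(round(positional_entropy(picks), 3))          # ~0.579 bits
print(round(normalized_positional_bias(picks), 3))  # ~0.710
```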

Key Findings

  • GPT-4: Achieved the highest diagnostic accuracy of 93.94%, with no significant positional bias.
  • Llama3.1-8B: Achieved 91.47% diagnostic accuracy, with a slight positional bias.
  • Llama3-8B: Scored 90.50%, with observed positional tendencies.
  • ChatDoctor: Showed significant positional bias, favoring the first option across all questions (one way to measure this is sketched after this list).
  • Base Models (Llama2-7B, Llama3-8B-Instruct): Outperformed their fine-tuned medical counterparts, with diagnostic accuracy more than 10% higher than the specialized versions.
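
The per-position behavior behind these findings can be checked directly by grouping predictions according to where the correct answer sits. A sketch, assuming a hypothetical answer_index field and a predict function wrapping the model under test:

```python
from collections import defaultdict

def accuracy_by_position(examples, predict):
    """Accuracy broken down by the slot (0-3) holding the correct answer.
    `answer_index` is an assumed field; `predict(ex)` returns the model's pick."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        pos = ex["answer_index"]
        total[pos] += 1
        correct[pos] += int(predict(ex) == pos)
    return {pos: correct[pos] / total[pos] for pos in sorted(total)}

# Demo with a dummy model that always answers the first option:
demo = [{"answer_index": i % 4} for i in range(16)]
print(accuracy_by_position(demo, lambda ex: 0))
# -> {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}
```

A position-neutral model shows roughly equal accuracy in all four slots; a first-option bias like ChatDoctor's shows up as high accuracy in slot 0 and depressed accuracy everywhere else.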

Ethical Considerations

DiseaseDecipher highlights the importance of fairness and positional neutrality when deploying AI models, particularly in the medical domain. The findings emphasize that models must be rigorously evaluated not only for accuracy but also for consistency in answer selection, regardless of where the correct option appears. They also underscore the difficulty of fine-tuning models for specialized tasks like medical diagnostics while preserving broad applicability and reliability.

The DiseaseDecipher benchmark calls for a balanced approach in model development, one that maintains both high diagnostic accuracy and fairness, ensuring robust and unbiased outcomes for real-world healthcare applications.
