---
task_categories:
- question-answering
language:
- en
tags:
- genetics
- biology
- medical
configs:
- config_name: default
  data_files:
  - split: evaluation
    path:
    - source1.csv
    - source2.csv
    - source3.csv
---
# Genetic Counselor Multiple-Choice Questions

A collection of multiple-choice questions intended for students preparing for the American Board of Genetic Counseling (ABGC) Certification Examination.
Also see the genetic-counselor-freeform-questions evaluation set.

A genetic counselor must be prepared to answer questions covering the inheritance of traits, medical statistics, testing, empathetic and ethical patient communication, and symptom observation.
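If you want to inspect the questions directly, here is a minimal sketch of loading the evaluation split with the `datasets` library. The repository ID below is a placeholder; substitute this dataset's actual path on the Hub.

```python
from datasets import load_dataset

# Placeholder repo ID; replace with this dataset's actual Hub path.
ds = load_dataset(
    "your-username/genetic-counselor-multiple-choice",
    split="evaluation",  # split name defined in the card's configs
)
print(ds[0])  # one multiple-choice question record
```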
## For evaluation only
The goal of this dataset is to evaluate LLMs and other AI agents on genetic counseling topics
in an exam format. It is insufficient for training or fine-tuning an expert system.
The author cannot verify the accuracy, completeness, or relevance of the answers.
Patients need personalized, expert advice beyond what can be described on an exam or returned by an AI.
## How to evaluate
You can run this LightEval evaluation with the command:

```bash
lighteval accelerate \
    "pretrained=meta-llama/Llama-3.1-8B-Instruct" \
    "community|genetic-counselor-multiple-choice|0|0" \
    --custom-tasks lighteval-tasks/community_tasks/gen_counselor_evals.py \
    --override-batch-size=5 \
    --use-chat-template
```
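The referenced `gen_counselor_evals.py` is the community task definition. Below is a rough sketch of what such a module can look like; the column names and repo ID are assumptions, and the exact `LightevalTaskConfig` fields vary between LightEval versions, so treat this as illustrative rather than the actual task file.

```python
# lighteval-tasks/community_tasks/gen_counselor_evals.py (illustrative sketch)
from lighteval.metrics.metrics import Metrics
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc

LETTERS = ["A", "B", "C", "D"]


def prompt_fn(line, task_name: str = None):
    # Column names ("question", "A"-"D", "answer") are assumptions
    # about the CSV schema, not the verified headers.
    options = "\n".join(f"{l}. {line[l]}" for l in LETTERS)
    return Doc(
        task_name=task_name,
        query=f"{line['question']}\n{options}\nAnswer:",
        choices=[f" {l}" for l in LETTERS],
        gold_index=LETTERS.index(line["answer"]),
    )


task = LightevalTaskConfig(
    name="genetic-counselor-multiple-choice",
    prompt_function=prompt_fn,
    suite=["community"],
    hf_repo="your-username/genetic-counselor-multiple-choice",  # placeholder
    hf_subset="default",
    hf_avail_splits=["evaluation"],
    evaluation_splits=["evaluation"],
    metric=[Metrics.loglikelihood_acc],
    stop_sequence=["\n"],
)

TASKS_TABLE = [task]
```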
Llama-3.1-8B-Instruct scored 50.67% accuracy. I haven't been able to run the full question set through OpenAI's models, but ChatGPT did well on a sample of questions.
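For a quick spot-check against an OpenAI model, something like the sketch below could be used. The column names (`question`, `A`-`D`, `answer`) and the model name are assumptions, not the verified schema or the setup used for the numbers above.

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed column names; adjust to the actual CSV schema.
sample = pd.read_csv("source1.csv").sample(n=50, random_state=0)

correct = 0
for _, row in sample.iterrows():
    prompt = (
        f"{row['question']}\n"
        + "\n".join(f"{l}. {row[l]}" for l in "ABCD")
        + "\nReply with a single letter."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model choice
        messages=[{"role": "user", "content": prompt}],
    )
    answer = reply.choices[0].message.content.strip().upper()
    correct += answer.startswith(str(row["answer"]).strip().upper())

print(f"Sample accuracy: {correct / len(sample):.2%}")
```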
## Source information
### Source 1
- Some answers may not be up to date with current practices
- Questions that use images (e.g., pedigrees) or lack multiple-choice answers are not included
- Hyphenation was added post-download and may not be complete
### Source 2
### Source 3
Questions with multiple correct answers (e.g., both B and C) were removed, as were questions that duplicate Source 1.
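A rough sketch of that filtering in pandas, assuming `question` and `answer` columns and a raw input file (all hypothetical names):

```python
import pandas as pd

# Hypothetical column names and file paths; the real CSVs may differ.
src1 = pd.read_csv("source1.csv")
src3 = pd.read_csv("source3_raw.csv")

# Drop questions whose answer key lists more than one option (e.g. "B and C").
multi_answer = src3["answer"].astype(str).str.contains(r"\band\b|,", regex=True)
src3 = src3[~multi_answer]

# Drop questions that also appear in Source 1.
src3 = src3[~src3["question"].isin(src1["question"])]
src3.to_csv("source3.csv", index=False)
```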