Modalities: Text
Formats: csv
Languages: English
Size: < 1K
Libraries: Datasets, pandas
Commit 366f4b1 (verified) by ksoman · Parent: db27f99

Update README.md

Files changed (1): README.md (+1 −5)
README.md CHANGED

@@ -60,17 +60,13 @@ Hence, this dataset is designed to support research and development in biomedical
 3. Assessing retrieval capabilities of various RAG (Retrieval-Augmented Generation) frameworks
 4. Supporting research in biomedical ontologies and knowledge graphs
 
-# BiomixQA Dataset
-
-[Previous sections remain unchanged]
-
 ## Performance Analysis
 
 We conducted a comprehensive analysis of the performance of three Large Language Models (LLMs) - Llama-2-13b, GPT-3.5-Turbo (0613), and GPT-4 - on the BiomixQA dataset. We compared their performance using both a standard prompt-based approach and our novel Knowledge Graph Retrieval-Augmented Generation (KG-RAG) framework.
 
 ### Performance Summary
 
-Table 1: Performance (accuracy) of LLMs on BiomixQA datasets using prompt-based (zero-shot) and KG-RAG approaches (For more details refer [this](https://arxiv.org/abs/2311.17330) paper)
+Table 1: Performance (accuracy) of LLMs on BiomixQA datasets using prompt-based (zero-shot) and KG-RAG approaches (For more details, refer [this](https://arxiv.org/abs/2311.17330) paper)
 
 | Model | True/False Dataset | | MCQ Dataset | |
 |-------|-------------------:|---:|------------:|---:|
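
For readers who want to try the zero-shot (prompt-based) baseline summarized in Table 1, here is a minimal sketch using the `datasets` library listed in the card metadata. It is illustrative only: the repo id (`kg-rag/BiomixQA`), config name (`mcq`), split name, and column names (`question`, `correct_answer`) are assumptions not confirmed by this diff, and `ask_llm` is a hypothetical placeholder for your model client. The KG-RAG pipeline itself is described in the linked arXiv paper and is not reproduced here.

```python
# Minimal zero-shot (prompt-based) evaluation sketch for BiomixQA.
# NOTE: the repo id "kg-rag/BiomixQA", config name "mcq", and column names
# "question" / "correct_answer" are assumptions for illustration only;
# check the dataset card for the actual identifiers. `ask_llm` is a
# placeholder for whatever chat-completion client you use.
from datasets import load_dataset


def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its raw text answer."""
    raise NotImplementedError("plug in your model client here")


def zero_shot_accuracy(repo_id: str = "kg-rag/BiomixQA", config: str = "mcq") -> float:
    """Score simple exact-match accuracy of an LLM on one BiomixQA split."""
    ds = load_dataset(repo_id, config, split="train")  # dataset is small (< 1K rows)
    correct = 0
    for row in ds:
        prompt = (
            "Answer the following biomedical question as concisely as possible.\n\n"
            f"{row['question']}"
        )
        prediction = ask_llm(prompt).strip().lower()
        if prediction == str(row["correct_answer"]).strip().lower():
            correct += 1
    return correct / len(ds)
```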