AdaptLLM committed (verified)
Commit e8b8a68 · 1 Parent(s): cc4769c

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -14,8 +14,8 @@ tags:
   - legal
 ---
 
-# Adapting LLM to Domains (ICLR 2024)
-This repo contains the **Law Knowledge Probing dataset** used in our **ICLR 2024** paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
+# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024)
+This repo contains the **Law Knowledge Probing dataset** used in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
 
 We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
 
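Since the README above points readers to the Law Knowledge Probing dataset hosted in this repo, a minimal loading sketch with the Hugging Face `datasets` library may help. The repo id in the snippet is an assumption (this commit does not name it), so substitute the actual id shown on the dataset page.

```python
# Minimal sketch: load the Law Knowledge Probing dataset with the Hugging Face
# `datasets` library. The repo id below is a PLACEHOLDER, not confirmed by this
# commit; replace it with the id displayed on the dataset page.
from datasets import load_dataset

dataset = load_dataset("AdaptLLM/law-knowledge-probing")  # hypothetical repo id

# List the available splits and peek at one record.
print(dataset)
first_split = next(iter(dataset))
print(dataset[first_split][0])
```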