Update README.md
README.md
## Domain-Specific LLaMA-2-Chat
Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension data fits this format perfectly** when transformed into a multi-turn conversation. We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat), and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat).
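The linked post describes how LLaMA-2-Chat expects each user turn wrapped in `[INST] ... [/INST]` tags, with the system prompt embedded in the first turn inside `<<SYS>> ... <</SYS>>`. As a rough sketch (the helper name and turn structure here are illustrative, not part of this repo), a multi-turn reading-comprehension conversation can be flattened into that format like so:

```python
def build_llama2_prompt(system_prompt, turns):
    """Assemble a LLaMA-2-Chat prompt from (user, assistant) turn pairs.

    Follows the format described in the Hugging Face LLaMA-2 prompting post:
    each user message is wrapped in [INST] ... [/INST], the system prompt
    sits inside <<SYS>> ... <</SYS>> in the first turn, and completed
    assistant replies are closed with </s>. The final turn may leave the
    assistant reply as None to request a new completion.
    """
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    first = True
    for user_msg, assistant_msg in turns:
        if first:
            prompt += f"{user_msg} [/INST]"
            first = False
        else:
            prompt += f"<s>[INST] {user_msg} [/INST]"
        if assistant_msg is not None:
            prompt += f" {assistant_msg} </s>"
    return prompt
```

In this scheme, each question/answer pair from a reading-comprehension example becomes one conversation turn, which is what lets the data slot directly into the chat models' expected format.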
For example, to chat with the finance base model (**🤗 we highly recommend switching to the [chat model](https://huggingface.co/AdaptLLM/finance-chat) for better response quality!**):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The repo name below is the chat model linked above; swap in the
# base-model repo name if you want the base model instead.
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat")
```