Update README.md (change model name)

README.md

library_name: transformers
inference: false
---
# dolly-v2-7b Model Card
## Summary

Databricks’ `dolly-v2-7b` is an instruction-following large language model, trained on the Databricks machine learning platform, that is licensed for commercial use. Based on `pythia-6.9b`, Dolly is trained on ~15k instruction/response fine-tuning records
[`databricks-dolly-15k`](https://github.com/databrickslabs/dolly/tree/master/data) generated
by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation,
information extraction, open QA and summarization. `dolly-v2-7b` is not a state-of-the-art model, but does exhibit surprisingly
high-quality instruction-following behavior not characteristic of the foundation model on which it is based.

**Owner**: Databricks, Inc.

## Model Overview
`dolly-v2-7b` is a 6.9 billion parameter causal language model created by [Databricks](https://databricks.com/) that is derived from
[EleutherAI’s](https://www.eleuther.ai/) [Pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b) and fine-tuned
on a [~15K record instruction corpus](https://github.com/databrickslabs/dolly/tree/master/data) generated by Databricks employees and released under a permissive license (CC-BY-SA).

## Usage

The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline`
found in the model repo [here](https://huggingface.co/databricks/dolly-v2-7b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required.
Including `torch_dtype=torch.bfloat16` is generally recommended when this dtype is supported, as it reduces memory usage and does not appear to impact output quality; it is also fine to remove it if there is sufficient memory.

```
import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-7b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```

You can then use the pipeline to answer instructions:

```
generate_text("Explain to me the difference between nuclear fission and fusion.")
```

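The pipeline returns the generated response rather than printing it. Below is a minimal sketch of capturing and displaying the result, assuming the custom pipeline follows the usual `transformers` text-generation convention of returning a list of dicts with a `generated_text` field; check `instruct_pipeline.py` in the repo for the exact return format:

```
# Capture the output and print the response text.
# Assumption: the custom pipeline returns [{"generated_text": ...}, ...],
# as standard transformers text-generation pipelines do; see instruct_pipeline.py.
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
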
Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [instruct_pipeline.py](https://huggingface.co/databricks/dolly-v2-7b/blob/main/instruct_pipeline.py),
store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-7b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-7b", device_map="auto")

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```

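As noted above, loading in `bfloat16` reduces memory usage. If you want the same savings with this manual construction, the dtype can also be passed to `from_pretrained`; a minimal sketch, assuming `bfloat16` is supported on your hardware:

```
import torch
from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-7b", padding_side="left")
# torch_dtype=torch.bfloat16 mirrors the pipeline() example above and roughly
# halves weight memory relative to float32; drop it if memory is not a concern.
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-7b", device_map="auto", torch_dtype=torch.bfloat16
)

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)
```
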
## Known Limitations

### Performance Limitations
**`dolly-v2-7b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
competitively with more modern model architectures or models subject to larger pretraining corpora.

The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
In particular, `dolly-v2-7b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
Moreover, we find that `dolly-v2-7b` lacks some capabilities, such as well-formatted letter writing, that are present in the original model.

### Dataset Limitations
Like all language models, `dolly-v2-7b` reflects the content and limitations of its training corpora.

- **The Pile**: GPT-J’s pre-training corpus contains content mostly collected from the public internet, and like most web-scale datasets,
it contains content many users would find objectionable. As such, the model is likely to reflect these shortcomings, potentially overtly
in cases where it is explicitly asked to produce objectionable content, and sometimes subtly, as in the case of biased or harmful implicit
associations.

- **`databricks-dolly-15k`**: The training data on which `dolly-v2-7b` is instruction tuned represents natural language instructions generated
by Databricks employees during a period spanning March and April 2023 and includes passages from Wikipedia as reference passages
for instruction categories like closed QA and summarization. To our knowledge it does not contain obscenity, intellectual property or
personally identifying information about non-public figures, but it may contain typos and factual errors.

### Benchmark Metrics

Below you'll find various models' benchmark performance on the [EleutherAI LLM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness);
model results are sorted by geometric mean to produce an intelligible ordering. As outlined above, these results demonstrate that `dolly-v2-7b` is not state of the art,
and in fact underperforms `dolly-v1-6b` in some evaluation benchmarks. We believe this owes to the composition and size of the underlying fine-tuning datasets,
but a robust statement as to the sources of these variations requires further study.

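For concreteness, the ordering criterion is just the geometric mean of the per-task scores in each row. A small illustrative check, using the `EleutherAI/pythia-6.9b` row from the table below, whose final column (0.543567) is consistent with this value:

```
# Geometric mean of the per-task scores for the EleutherAI/pythia-6.9b row below;
# prints ~0.543568, matching that row's final column up to rounding.
import math

scores = [0.368, 0.604798, 0.608524, 0.631548, 0.343857, 0.761153, 0.6263]
gmean = math.exp(sum(math.log(s) for s in scores) / len(scores))
print(round(gmean, 6))
```
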
+-----------------------------------+--------------+------------+--------------+-------------+-----------------+----------+----------+----------+
| EleutherAI/pythia-2.8b | 0.348 | 0.585859 | 0.589582 | 0.591217 | 0.323379 | 0.73395 | 0.638226 | 0.523431 |
| EleutherAI/pythia-6.9b | 0.368 | 0.604798 | 0.608524 | 0.631548 | 0.343857 | 0.761153 | 0.6263 | 0.543567 |
| databricks/dolly-v2-3b | 0.384 | 0.611532 | 0.589582 | 0.650767 | 0.370307 | 0.742655 | 0.575535 | 0.544886 |
| EleutherAI/pythia-12b | 0.364 | 0.627104 | 0.636148 | 0.668094 | 0.346416 | 0.760065 | 0.673394 | 0.559676 |
| EleutherAI/gpt-j-6B | 0.382 | 0.621633 | 0.651144 | 0.662617 | 0.363481 | 0.761153 | 0.655963 | 0.565936 |
| databricks/dolly-v2-12b | 0.408 | 0.63931 | 0.616417 | 0.707927 | 0.388225 | 0.757889 | 0.568196 | 0.56781 |
| databricks/dolly-v2-7b | 0.392 | 0.633838 | 0.607735 | 0.686517 | 0.406997 | 0.750816 | 0.644037 | 0.573487 |
| databricks/dolly-v1-6b | 0.41 | 0.62963 | 0.643252 | 0.676758 | 0.384812 | 0.773667 | 0.687768 | 0.583431 |
| EleutherAI/gpt-neox-20b | 0.402 | 0.683923 | 0.656669 | 0.7142 | 0.408703 | 0.784004 | 0.695413 | 0.602236 |
+-----------------------------------+--------------+------------+--------------+-------------+-----------------+----------+----------+----------+

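If you want to reproduce or extend numbers like these, recent releases of the evaluation harness can be driven directly from Python. The sketch below is illustrative only: the `lm_eval.simple_evaluate` entry point and the `hf` model type are assumptions based on recent harness versions (older releases used a `main.py` CLI instead), and the task list is an arbitrary subset rather than the exact benchmark set behind this table.

```
# Illustrative sketch: evaluate dolly-v2-7b on a few harness tasks.
# Interface details (simple_evaluate, the "hf" model type, task names) vary by
# lm-evaluation-harness version; consult the harness docs for your release.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=databricks/dolly-v2-7b",
    tasks=["hellaswag", "winogrande", "piqa"],  # assumed subset, not the full table
    batch_size=8,
)
print(results["results"])
```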