mike-conover-db committed
Commit f55e1f2 · 1 Parent(s): 55215ac
Updating README
README.md
CHANGED
@@ -30,7 +30,8 @@ on a ~15K record instruction corpus generated by Databricks employees and releas
 **`dolly-v2-12b` is not a state-of-the-art generative language model** and, though quantitative benchmarking is ongoing, is not designed to perform
 competitively with more modern model architectures or models subject to larger pretraining corpuses.

-The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community. In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
+The Dolly model family is under active development, and so any list of shortcomings is unlikely to be exhaustive, but we include known limitations and misfires here as a means to document and share our preliminary findings with the community.
+In particular, `dolly-v2-12b` struggles with: syntactically complex prompts, programming problems, mathematical operations, factual errors,
 dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, having a sense of humor, etc.
 Moreover, we find that `dolly-v2-12b` does not have some capabilities, such as well-formatted letter writing, present in the original model.
@@ -63,8 +64,6 @@ but a robust statement as to the sources of these variations requires further st
 +----+------------------------------------+--------------+------------+--------------+-------------+-----------------+----------+----------+----------+
 |    | model                              |   openbookqa |   arc_easy |   winogrande |   hellaswag |   arc_challenge |     piqa |    boolq |    gmean |
 |----+------------------------------------+--------------+------------+--------------+-------------+-----------------+----------+----------+----------|
-|  0 | EleutherAI/pythia-6.9b             |        0.368 |   0.604798 |     0.608524 |    0.631548 |        0.343857 | 0.761153 |   0.6263 | 0.543567 |
-|  1 | EleutherAI/pythia-12b              |        0.364 |   0.627104 |     0.636148 |    0.668094 |        0.346416 | 0.760065 | 0.673394 | 0.559676 |
 |  2 | EleutherAI/gpt-j-6B                |        0.382 |   0.621633 |     0.651144 |    0.662617 |        0.363481 | 0.761153 | 0.655963 | 0.565936 |
 |  3 | databricks/dolly-v2-12b            |        0.408 |   0.63931  |     0.616417 |    0.707927 |        0.388225 | 0.757889 | 0.568196 |  0.56781 |
 |  4 | databricks/dolly-v1-6b             |        0.41  |   0.62963  |     0.643252 |    0.676758 |        0.384812 | 0.773667 | 0.687768 | 0.583431 |
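For reference, the `gmean` column in the benchmark table above is consistent with the geometric mean of the seven per-task accuracies. A minimal sketch in plain Python (values copied from the `databricks/dolly-v1-6b` row; any drift in the last digits comes from rounding in the displayed scores):

```python
import math

# Per-task accuracies for databricks/dolly-v1-6b, copied from the table above:
# openbookqa, arc_easy, winogrande, hellaswag, arc_challenge, piqa, boolq
scores = [0.41, 0.62963, 0.643252, 0.676758, 0.384812, 0.773667, 0.687768]

# Geometric mean of the task scores; tracks the table's gmean column (0.583431).
gmean = math.exp(sum(math.log(s) for s in scores) / len(scores))
print(round(gmean, 6))  # ~0.5834, matching gmean up to rounding of the displayed scores
```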