Update README.md
README.md
CHANGED
@@ -62,7 +62,7 @@ Other GPT4All evaluation results:
|winogrande|acc |0.7159|
|openbookqa|acc |0.356|
| |acc_norm|0.448|
-|**Average** (including HF leaderboard datasets) | | 0.6468|
+|**Average** (including HF leaderboard datasets) | | **0.6468** |

BigBenchHard results:
||||
@@ -90,7 +90,7 @@ BigBenchHard results:
|bigbench_tracking_shuffled_objects_five_objects |multiple_choice_grade|0.1976|
|bigbench_tracking_shuffled_objects_seven_objects|multiple_choice_grade|0.1440|
|bigbench_tracking_shuffled_objects_three_objects|multiple_choice_grade|0.4133|
-|**Average**|
+|**Average**| |**0.3754**|

# Ethical Considerations and Limitations
Tulpar is a technology with potential risks and limitations. This model was fine-tuned only on English data, so not all language-related scenarios are covered. As Hyperbee.ai, we neither guarantee ethical, accurate, unbiased, or objective responses nor endorse its outputs. Before deploying this model, you are advised to run safety tests for your use case.
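For context on the **Average** rows added above: the value appears to be an unweighted mean of the per-task scores reported by lm-evaluation-harness, though the exact task list and metric choices are not spelled out in this diff. A minimal sketch under that assumption (the dictionary below is truncated to the three rows visible in this hunk, so it will not reproduce 0.3754 on its own):

```python
# Sketch: how an unweighted "Average" row might be computed from
# lm-evaluation-harness-style per-task scores. The task list here is
# illustrative and incomplete; the full BigBenchHard table in the README
# would be needed to reproduce the reported 0.3754.
bigbench_scores = {
    "bigbench_tracking_shuffled_objects_five_objects": 0.1976,
    "bigbench_tracking_shuffled_objects_seven_objects": 0.1440,
    "bigbench_tracking_shuffled_objects_three_objects": 0.4133,
    # ... remaining BigBenchHard tasks from the table above
}

average = sum(bigbench_scores.values()) / len(bigbench_scores)
print(f"|**Average**| |**{average:.4f}**|")
```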