Text Generation
Transformers
PyTorch
llama
Not-For-All-Audiences
nsfw
text-generation-inference
Inference Endpoints
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZBzF77-1jKc4sC25UY5DR.png)
An attempt using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) on [Pygmalion2](https://huggingface.co./PygmalionAI/pygmalion-2-13b) to get better results.

In addition, [LimaRP v3](https://huggingface.co./lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used; it is recommended to read its documentation.
<!-- description start -->
## Description
This repo contains fp16 files of Emerald-13B.
<!-- description end -->

<!-- description start -->
## Models and loras used
- PygmalionAI/pygmalion-2-13b
- The-Face-Of-Goonery/Huginn-13b-FP16
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
<!-- description end -->
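As an illustration only (this is not the actual BlockMerge_Gradient API), a gradient merge of the models above can be sketched as per-layer linear interpolation between two checkpoints, with the blend ratio ramping across the blocks:

```python
def gradient_block_merge(base, other, num_layers, start=0.0, end=1.0):
    """Blend two per-layer weight dicts.

    The interpolation weight ramps linearly from `start` at the first
    block to `end` at the last, so early layers stay close to `base`
    while later layers lean toward `other`. All names here are
    illustrative; real merges operate on full tensors per layer.
    """
    merged = {}
    for layer in range(num_layers):
        t = start + (end - start) * layer / max(num_layers - 1, 1)
        merged[layer] = (1 - t) * base[layer] + t * other[layer]
    return merged

# Toy example with scalar "weights" standing in for layer tensors:
base = {0: 0.0, 1: 0.0, 2: 0.0}
other = {0: 1.0, 1: 1.0, 2: 1.0}
merged = gradient_block_merge(base, other, num_layers=3)
```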
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
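As a minimal sketch, the template above can be filled in with a small helper (the `format_alpaca` name is illustrative; blank lines between sections follow the common Alpaca convention):

```python
def format_alpaca(prompt: str) -> str:
    """Wrap a user request in the Alpaca prompt template shown above."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{prompt}\n\n"
        "### Response:\n"
    )

text = format_alpaca("Summarize the plot of Hamlet in two sentences.")
```

The resulting string can then be passed to whichever loader you use (e.g. a `transformers` text-generation pipeline) as the model input.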
## LimaRP v3 usage and suggested settings
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZC_iP2KkcEcRdgG_iyxYE.png)
You can follow this instruction format in SillyTavern. Replace "tiny" with your desired response length:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/PIn8_HSPTJEMdSEpNVSdm.png)

Special thanks to Sushi.

If you want to support me, you can do so [here](https://ko-fi.com/undiai).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_Undi95__Emerald-13B).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 51.39 |
| ARC (25-shot)       | 62.29 |
| HellaSwag (10-shot) | 83.69 |
| MMLU (5-shot)       | 55.7  |
| TruthfulQA (0-shot) | 50.94 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot)      | 12.81 |
| DROP (3-shot)       | 18.38 |