---
base_model:
- BarraHome/Mistroll-7B-v2.2
- ClaudioItaly/Evolutionstory
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
I think I finally managed to make a model with strong writing skills. Given a prompt, it stays very coherent with the story.
It also has strong RAG capabilities.
This is a merge of pre-trained language models created using mergekit.
The model ClaudioItaly/Evolutionstory-7B-v2.2 achieves solid scores on several benchmarks but also shows clear areas for improvement. The main findings:

**Strengths**

* **IFEval (0-shot):** a very solid 48.14 in strict accuracy. The model handles instruction-following tasks well without prior examples, demonstrating good immediate comprehension.
* **BBH (3-shot):** a score of 31.62 on this few-shot benchmark, suggesting the model effectively leverages a handful of in-context examples to improve its answers.

**Areas for improvement**

* **MATH Lvl 5 (4-shot):** a score of 6.42 on this advanced mathematics test shows that the model struggles with complex logical and mathematical problems, as is typical of general-purpose language models that are not optimized for numerical or structured reasoning.
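If you want to try the model, a minimal sketch with the Hugging Face transformers library follows; the repository id is assumed to be ClaudioItaly/Evolutionstory-7B-v2.2 (as mentioned above), and the prompt is only a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ClaudioItaly/Evolutionstory-7B-v2.2"  # assumed repo id from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package to be installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Write the opening scene of a mystery story set in Venice."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```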
## Merge Details

### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with ClaudioItaly/Evolutionstory as the base.
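SLERP interpolates between two models' weight tensors along the arc of a hypersphere rather than along a straight line, which preserves the scale of the weights better than plain averaging. The snippet below is a minimal sketch of the underlying formula applied to a single pair of tensors; it is illustrative only, not mergekit's actual implementation, and the function name `slerp` is my own:

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors with factor t in [0, 1]."""
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    # Angle between the two tensors, treated as flat vectors.
    cos_omega = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < 1e-4:
        # Nearly parallel tensors: fall back to linear interpolation.
        merged = (1.0 - t) * v0 + t * v1
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * v0 + (
            torch.sin(t * omega) / sin_omega
        ) * v1
    return merged.reshape(w0.shape).to(w0.dtype)
```

With t = 0 the result is exactly the first tensor and with t = 1 exactly the second, which is what the per-layer `t` schedule in the configuration below exploits.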
### Models Merged
The following models were included in the merge:

* BarraHome/Mistroll-7B-v2.2
* ClaudioItaly/Evolutionstory
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: BarraHome/Mistroll-7B-v2.2
  - model: ClaudioItaly/Evolutionstory
merge_method: slerp
base_model: ClaudioItaly/Evolutionstory
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Evolutionstory at the input & output layers, Mistroll in the middle layers
```
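The `t` schedule runs from the first to the last group of layers: t = 0 keeps the base model (ClaudioItaly/Evolutionstory) and t = 1 takes BarraHome/Mistroll-7B-v2.2, so the V-shaped curve keeps Evolutionstory at the input and output layers and blends toward Mistroll in the middle. To reproduce the merge, save the configuration to a file and run mergekit's CLI, e.g. `mergekit-yaml config.yml ./output-model-directory` (paths are placeholders).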