Mnemosyne-7B is an experimental large language model (LLM) created by merging several pre-trained models oriented toward informative and educational use. The merge aims to combine the strengths of its component models into a single, highly informative and comprehensive LLM.
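Because the merge preserves the standard Mistral-7B architecture, the model should load with the usual transformers API. The sketch below assumes a hypothetical repository ID, since this card does not state the model's canonical repo path.

```python
# Minimal loading sketch with Hugging Face transformers.
# NOTE: "your-org/Mnemosyne-7B" is a hypothetical placeholder repo ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/Mnemosyne-7B"  # hypothetical; replace with the actual repo
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype (see config below)
    device_map="auto",
)

# Mistral-7B-Instruct-v0.2 is the base model, so its chat template applies.
messages = [{"role": "user", "content": "Explain photosynthesis briefly."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```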
GGUF: https://huggingface.co./mradermacher/Mnemosyne-7B-GGUF
This is an experimental model, and its performance and capabilities are not guaranteed. Further testing and evaluation are required to assess its effectiveness.
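For the GGUF quantizations linked above, a minimal sketch with llama-cpp-python follows. The quant filename is an assumption and should be checked against the files actually present in the GGUF repository.

```python
# Minimal sketch: run a GGUF quantization with llama-cpp-python.
# The filename below is an assumed Q4_K_M quant; verify it against the
# actual file list in mradermacher/Mnemosyne-7B-GGUF.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Mnemosyne-7B-GGUF",
    filename="Mnemosyne-7B.Q4_K_M.gguf",  # assumed quant name
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is Mnemosyne-7B?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```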
Mnemosyne-7B is a merge of the following models using mergekit (https://github.com/arcee-ai/mergekit):

```yaml
models:
  - model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2
  - model: openbmb/Eurus-7b-kto
  - model: Weyaxi/Newton-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
```
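Here, the model_stock method estimates interpolation weights for the fine-tuned checkpoints from their geometry relative to the base model. To reproduce the merge, the config above can be passed to mergekit, e.g. via the `mergekit-yaml config.yaml ./Mnemosyne-7B` CLI. The Python sketch below is an alternative; the entry points and option names are assumptions based on mergekit's documented Python API.

```python
# Minimal sketch: reproduce the merge from the YAML config above.
# Assumes mergekit is installed (pip install mergekit) and that the
# MergeConfiguration / run_merge / MergeOptions entry points match
# mergekit's documented Python API.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Mnemosyne-7B",  # output directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # reduce peak memory while loading
    ),
)
```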