
Quantization made by Richard Erkhov.

Github | Discord | Request more models

MeowGPT-3.5 - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| MeowGPT-3.5.Q2_K.gguf | Q2_K | 2.53GB |
| MeowGPT-3.5.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| MeowGPT-3.5.Q3_K.gguf | Q3_K | 3.28GB |
| MeowGPT-3.5.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| MeowGPT-3.5.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| MeowGPT-3.5.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| MeowGPT-3.5.Q4_0.gguf | Q4_0 | 3.83GB |
| MeowGPT-3.5.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| MeowGPT-3.5.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| MeowGPT-3.5.Q4_K.gguf | Q4_K | 4.07GB |
| MeowGPT-3.5.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| MeowGPT-3.5.Q4_1.gguf | Q4_1 | 4.24GB |
| MeowGPT-3.5.Q5_0.gguf | Q5_0 | 4.65GB |
| MeowGPT-3.5.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| MeowGPT-3.5.Q5_K.gguf | Q5_K | 4.78GB |
| MeowGPT-3.5.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| MeowGPT-3.5.Q5_1.gguf | Q5_1 | 5.07GB |
| MeowGPT-3.5.Q6_K.gguf | Q6_K | 5.53GB |
| MeowGPT-3.5.Q8_0.gguf | Q8_0 | 7.17GB |
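As a rough sanity check when choosing a quant, the listed file sizes can be converted to an approximate bits-per-weight figure. The sketch below assumes the sizes are decimal gigabytes (1 GB = 1e9 bytes) and uses the 7.24B parameter count reported for this model; GGUF files also carry metadata and some higher-precision tensors, so the result slightly overstates the average quantization width.

```python
# Rough bits-per-weight estimate for a quant file, assuming the table's
# sizes are decimal gigabytes. Treat the result as an upper bound, since
# the file also contains metadata and some higher-precision tensors.
def bits_per_weight(file_size_gb: float, n_params: float) -> float:
    return file_size_gb * 1e9 * 8 / n_params

N_PARAMS = 7.24e9  # parameter count reported on this card

for name, size_gb in [("Q2_K", 2.53), ("Q4_K_M", 4.07), ("Q8_0", 7.17)]:
    print(f"{name}: ~{bits_per_weight(size_gb, N_PARAMS):.2f} bits/weight")
```

For example, Q4_K_M works out to roughly 4.5 bits per weight, consistent with its name.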

Original model description:

license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- freeai
- conversational
- meowgpt
- gpt
- free
- opensource
- splittic
- ai
widget:
- text: '[|User|] Hello World [|Assistant|]'

MeowGPT Readme

Overview

MeowGPT, developed by CutyCat2000x, is a Llama-based language model at checkpoint version 3.5. It is designed to generate text in a conversational manner and can be used for various natural language processing tasks.

Usage

Loading the Model

To use MeowGPT, you can load it via the transformers library in Python using the following code:

from transformers import AutoModelForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("cutycat2000x/MeowGPT-3.5")
model = AutoModelForCausalLM.from_pretrained("cutycat2000x/MeowGPT-3.5")

Example Prompt

Prompts are formatted with the following Llama-2-style Jinja chat template:

{{ bos_token }}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set content = '<<SYS>>\\n' + system_message + '\\n<</SYS>>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ ' '  + content.strip() + eos_token }}{% endif %}{% endfor %}

Here <s> and </s> are the BOS (beginning-of-sequence) and EOS (end-of-sequence) tokens.
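To see exactly what the template produces, it can be emulated in plain Python. This is an illustrative sketch, not the tokenizer's own code: it assumes <s>/</s> as the BOS/EOS tokens and reproduces the alternating-role check and [INST] markup from the Jinja template above.

```python
# Plain-Python rendering of the Llama-2-style chat template above, to
# inspect the exact prompt string the tokenizer would produce.
# BOS/EOS are assumed to be "<s>"/"</s>" as stated in the card.
BOS, EOS = "<s>", "</s>"

def render(messages):
    out = BOS
    # An optional leading system message is folded into the first user turn.
    if messages and messages[0]["role"] == "system":
        system, messages = messages[0]["content"], messages[1:]
    else:
        system = None
    for i, msg in enumerate(messages):
        expected = "user" if i % 2 == 0 else "assistant"
        if msg["role"] != expected:
            raise ValueError("Conversation roles must alternate user/assistant/...")
        content = msg["content"]
        if i == 0 and system is not None:
            content = f"<<SYS>>\n{system}\n<</SYS>>\n\n{content}"
        if msg["role"] == "user":
            out += f"[INST] {content.strip()} [/INST]"
        else:
            out += f" {content.strip()}{EOS}"
    return out

print(render([{"role": "user", "content": "Hello World"}]))
# -> <s>[INST] Hello World [/INST]
```

In practice the same string can be obtained with `tokenizer.apply_chat_template(messages, tokenize=False)`; the sketch only makes the formatting explicit.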

About the Model

  • Base Model: Llama + Mixtral
  • Checkpoint Version: 3.5
  • Datasets Used: Private

Citation

If you use MeowGPT in your research or projects, please consider citing CutyCat2000x.

Disclaimer

Please note that while MeowGPT is trained to assist in generating text based on given prompts, it may not always provide accurate or contextually appropriate responses. It's recommended to review and validate the generated content before usage in critical applications.

For more information or support, refer to the transformers library documentation or CutyCat2000x's resources.

Model stats

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama

