---
license: cc-by-4.0
tags:
  - requests
  - gguf
  - quantized
---

Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!

Read below for more information.

Requirements to request model quantizations:

For the model:

- Maximum model parameter size of 11B. At the moment I am unable to accept requests for larger models due to hardware/time limitations.

Important:

- Fill in the request template as outlined in the next section.

How to request a model quantization:

  1. Open a New Discussion with a title of "Request: Model-Author/Model-Name", for example, "Request: Nitral-AI/Infinitely-Laydiculous-7B".

  2. Include the following template in your message and fill in the information (example request here). A hypothetical filled-in example is shown after the template below:

**Model name:**


**Model link:**


**Brief description:**


**An image to represent the model (square shaped):**
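
For illustration only, here is a sketch of what a filled-in request could look like, using the example model name mentioned above. The model link follows the standard Hugging Face URL pattern, and the description and image lines are placeholders, not an actual summary of that model:

```
**Model name:**
Infinitely-Laydiculous-7B

**Model link:**
https://huggingface.co/Nitral-AI/Infinitely-Laydiculous-7B

**Brief description:**
(Placeholder) One or two sentences describing the model and why you would like it quantized.

**An image to represent the model (square shaped):**
(Attach or link a square image here.)
```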