---
license: creativeml-openrail-m
language:
  - en
  - de
  - fr
  - it
  - pt
  - hi
  - es
  - th
base_model: prithivMLmods/Llama-Deepsync-1B
pipeline_tag: text-generation
tags:
  - text-generation-inference
  - GGUF
  - NSFW
  - RP
  - Roleplay
  - Llama
  - Code
  - CoT
  - Math
  - Deepsync
  - 1b
  - 4-bit
  - llama-cpp
library_name: transformers
datasets:
  - MinervaAI/Aesir-Preview
quantized_by: Novaciano
---

# LLAMA 3.2 1B DEEPSYNC AESIR

## About the model

Look, I'll be honest with you: I don't even know what the hell I did by combining Llama 3.2 1B DeepSync with MinervaAI's Aesir Preview dataset.

Anything could come out of this, from a DeepSync model with NSFW content to some weird aberration that spews incoherence.

I haven't tested it yet, so I know very little about it. Treat it as an experimental model.
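Since the card doesn't spell out how the model and dataset were actually combined, the sketch below only shows one plausible route: a plain supervised fine-tune of the base model on the dataset with Hugging Face transformers. The dataset column name (`text`) and every hyperparameter are assumptions, not the author's recipe.

```python
# Minimal sketch (assumption, not the actual training recipe):
# a standard causal-LM fine-tune of the base model on Aesir-Preview.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "prithivMLmods/Llama-Deepsync-1B"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

dataset = load_dataset("MinervaAI/Aesir-Preview", split="train")

def tokenize(batch):
    # "text" is a guessed column name; adjust to the dataset's real schema.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-deepsync-aesir",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("llama-deepsync-aesir")
```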

## About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries known to support GGUF (a brief llama-cpp-python usage sketch follows the list):

* llama.cpp. The source project for GGUF. Offers a CLI and a server option.
* llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* GPT4All, a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
* Faraday.dev, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
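
As a quick way to try the GGUF quants locally, here is a minimal llama-cpp-python sketch. The `.gguf` filename is a placeholder, not a file this card promises; substitute whichever quant file this repo actually ships.

```python
# Minimal sketch: loading a GGUF quant of this model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-1b-deepsync-aesir-q4_k_m.gguf",  # placeholder filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a roleplay assistant."},
        {"role": "user", "content": "Introduce yourself in character."},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(output["choices"][0]["message"]["content"])
```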