---
base_model: prithivMLmods/Llama-Deepsync-1B
pipeline_tag: text-generation
tags:
- text-generation-inference
- GGUF
- NSFW
- RP
- Roleplay
- Llama
- Code
- CoT
- Math
- Deepsync
- 1b
- 4-bit
- llama-cpp
library_name: transformers
datasets:
- MinervaAI/Aesir-Preview
quantized_by: Novaciano
---

# LLAMA 3.2 1B DEEPSYNC AESIR

Look, I'm going to be honest with you: I don't even know what the hell I did combining the Llama 3.2…

Anything can come out of this, from a DeepSync model with NSFW content to some rather strange aberration that spews incoherence.

I haven't tested it yet, so I know very little about it. Treat it as an experimental model.

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:
|
48 |
+
|
49 |
+
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
|
50 |
+
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
|
51 |
+
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
|
52 |
+
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
|
53 |
+
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
|
54 |
+
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
|
55 |
+
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
|
56 |
+
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
|
57 |
+
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
|
58 |
+
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
|
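As a minimal sketch of the second option above, this is roughly how a GGUF quant of this model could be loaded with llama-cpp-python. The `.gguf` file name below is an assumption (substitute whatever file this repo actually ships), and the hand-built Llama 3 chat prompt is shown for illustration; llama.cpp can also apply the chat template embedded in the GGUF metadata.

```python
# Sketch: running a GGUF quant with llama-cpp-python (pip install llama-cpp-python).
import os

def llama3_prompt(user_message: str, system: str = "You are a helpful assistant.") -> str:
    """Build a Llama 3-style chat prompt by hand from the standard header tokens."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical file name -- use the actual .gguf file from this repository.
MODEL_PATH = "llama-deepsync-1b-q4_k_m.gguf"

if os.path.exists(MODEL_PATH):
    from llama_cpp import Llama

    llm = Llama(model_path=MODEL_PATH, n_ctx=2048)
    out = llm(
        llama3_prompt("Write a haiku about the sea."),
        max_tokens=128,
        stop=["<|eot_id|>"],  # stop at the end-of-turn token
    )
    print(out["choices"][0]["text"])
```

Since the model is a 1B 4-bit quant, it should run comfortably on CPU; pass `n_gpu_layers=-1` to `Llama(...)` to offload to GPU if llama-cpp-python was built with GPU support.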