Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from prithivMLmods/LwQ-10B-Instruct
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
Model details:
LwQ-10B-Instruct (Llama with Questions), based on the Llama 3.1 collection of multilingual large language models (LLMs), is a set of pre-trained and instruction-tuned generative models optimized for multilingual dialogue use cases. These models outperform many available open-source alternatives.
Model Architecture: Llama 3.1 is an auto-regressive language model that utilizes an optimized transformer architecture. The tuned versions undergo supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to better align with human preferences for helpfulness and safety.
LwQ-10B is trained on synthetic reasoning datasets for mathematical reasoning and context-based problem-solving, with a focus on following instructions or keywords embedded in the input.
Use with transformers
Starting with transformers >= 4.43.0, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via pip install --upgrade transformers.
import transformers
import torch

model_id = "prithivMLmods/LwQ-10B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
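The same checkpoint can also be driven through the Auto classes with generate(), as mentioned above. The following is a minimal sketch, assuming enough accelerator memory for bfloat16 weights; parameter choices such as max_new_tokens are illustrative, not taken from the original card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/LwQ-10B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Apply the model's chat template and move the prompt to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))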
Intended Use
Multilingual Conversational Agents:
LwQ-10B-Instruct is well-suited for building multilingual chatbots and virtual assistants, providing accurate and context-aware responses in various languages.
Instruction-Following Applications:
The model is ideal for tasks where adherence to specific instructions is critical, such as task automation, guided workflows, and structured content generation.
Mathematical and Logical Reasoning:
Trained on synthetic reasoning datasets, LwQ-10B can handle mathematical problem-solving, logical reasoning, and step-by-step explanations, making it suitable for education platforms and tutoring systems.
Contextual Problem-Solving:
The model is optimized for solving contextually rich problems by understanding and processing inputs with embedded instructions or keywords, useful for complex decision-making and recommendation systems.
Content Creation and Summarization:
LwQ-10B can generate high-quality content, including articles, reports, and summaries, across different languages and domains.
Limitations
Limited Context Window:
The model has a finite context length, which may affect its ability to handle tasks requiring extensive context or long conversations effectively.
Performance Variability Across Languages:
While it supports multiple languages, performance may vary, with higher accuracy in languages that are better represented in the training data.
Accuracy in Complex Reasoning:
Despite being trained on reasoning datasets, the model may occasionally produce incorrect or incomplete answers for highly complex or multi-step reasoning tasks.
Bias and Ethical Risks:
Since the model is trained on large datasets from diverse sources, it may exhibit biases present in the training data, potentially leading to inappropriate or biased outputs.
Dependency on Clear Instructions:
The model’s ability to generate accurate outputs relies heavily on the clarity and specificity of user instructions. Ambiguous or vague instructions may result in suboptimal responses.
Resource Requirements:
As a large language model with 10 billion parameters, it requires significant computational resources for both training and inference, limiting its deployment in low-resource environments.
Lack of Real-Time Understanding:
LwQ-10B has no knowledge of events or data beyond its training cutoff, so it may not provide accurate responses about recent or rapidly changing information.
Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF --hf-file lwq-10b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
Server:
llama-server --hf-repo Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF --hf-file lwq-10b-instruct-q5_k_m.gguf -c 2048
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with any hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference through the main binary.
./llama-cli --hf-repo Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF --hf-file lwq-10b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
or
./llama-server --hf-repo Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF --hf-file lwq-10b-instruct-q5_k_m.gguf -c 2048
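Once llama-server is running, you can query it over HTTP. The snippet below is a minimal sketch in Python, assuming a recent llama.cpp build that exposes the OpenAI-compatible /v1/chat/completions endpoint on the default port 8080; the host, port, prompt, and sampling parameters are illustrative.

import json
import urllib.request

# Assumes llama-server is listening on the default host/port (localhost:8080).
url = "http://localhost:8080/v1/chat/completions"
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "The meaning to life and the universe is"},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.load(response)

print(result["choices"][0]["message"]["content"])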
Model tree for Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF
Base model: meta-llama/Llama-3.1-8B