---
base_model: Spestly/Atlas-Flash-7B-Preview
tags:
- text-generation-inference
- transformers
- qwen2
- trl
- r1
- gemini-2.0
- gpt4
- conversational
- chat
- llama-cpp
- gguf-my-repo
license: mit
language:
- en
- zh
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- fa
- he
- tr
- cs
- pl
- hi
- bn
- ur
- id
- ms
- lo
- my
- ceb
- km
- tl
- nl
library_name: transformers
datasets:
- BAAI/TACO
- codeparrot/apps
- rubenroy/GammaCorpus-v1-70k-UNFILTERED
extra_gated_prompt: By accessing this model, you agree to comply with ethical usage
  guidelines and accept full responsibility for its applications. You will not use
  this model for harmful, malicious, or illegal activities, and you understand that
  its use is subject to ongoing monitoring for misuse. This model is provided 'AS
  IS', and by agreeing to these terms you accept responsibility for all outputs you
  generate with it.
extra_gated_fields:
  Name: text
  Organization: text
  Country: country
  Date of Birth: date_picker
  Intended Use:
    type: select
    options:
    - Research
    - Education
    - Personal Development
    - Commercial Use
    - label: Other
      value: other
  I agree to use this model in accordance with all applicable laws and ethical guidelines: checkbox
  I agree to use this model under the MIT licence: checkbox
---

# Triangle104/Atlas-Flash-7B-Preview-Q6_K-GGUF

This model was converted to GGUF format from [`Spestly/Atlas-Flash-7B-Preview`](https://huggingface.co./Spestly/Atlas-Flash-7B-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co./spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co./Spestly/Atlas-Flash-7B-Preview) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/Atlas-Flash-7B-Preview-Q6_K-GGUF --hf-file atlas-flash-7b-preview-q6_k.gguf -p "The meaning of life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Triangle104/Atlas-Flash-7B-Preview-Q6_K-GGUF --hf-file atlas-flash-7b-preview-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo Triangle104/Atlas-Flash-7B-Preview-Q6_K-GGUF --hf-file atlas-flash-7b-preview-q6_k.gguf -p "The meaning of life and the universe is"
```

or

```
./llama-server --hf-repo Triangle104/Atlas-Flash-7B-Preview-Q6_K-GGUF --hf-file atlas-flash-7b-preview-q6_k.gguf -c 2048
```
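
Once `llama-server` is running (via either invocation above), it exposes an OpenAI-compatible HTTP API that you can query for chat completions. Below is a minimal sketch using `curl`; it assumes the server's default host and port (`127.0.0.1:8080`), and the prompt is just an illustrative placeholder.

```bash
# Query the running llama-server instance for a chat completion.
# Assumes the default host/port (127.0.0.1:8080); adjust if you
# started the server with --host or --port.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Explain GGUF quantization in one sentence."}
        ],
        "temperature": 0.7
      }'
```

The response comes back as JSON in the same shape as the OpenAI chat completions API, so existing OpenAI client libraries can generally be pointed at this endpoint as well.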