# Kubernetes CLI Assistant Model
- Developed by: dereklck / felix97
- License: Apache-2.0
- Fine-tuned from model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
- Model type: GGUF (compatible with Ollama)
- Language: English
This Llama-based model was fine-tuned to assist users with Kubernetes `kubectl` commands. It has two primary features:

- Generating accurate `kubectl` commands based on user instructions.
- Politely requesting additional information if the instruction is incomplete or ambiguous.

The model focuses strictly on these two tasks to provide efficient and accurate assistance for Kubernetes command-line operations.
## How to Use the Model
This section provides instructions on how to run the model using Ollama with the provided Modelfile.
### Prerequisites
- Install Ollama on your system.
- Ensure you have access to the model hosted on Hugging Face: `hf.co/dereklck/kubectl_operator_1b_peft_gguf`.
### Steps
#### 1. Create the Modelfile

Save the following content as a file named `Modelfile`:

```
FROM hf.co/dereklck/kubectl_operator_1b_peft_gguf

PARAMETER temperature 0.3
PARAMETER stop "</s>"

TEMPLATE """
You are an AI assistant that helps users with Kubernetes `kubectl` commands.

**Your Behavior Guidelines:**

1. **For clear and complete instructions:**
   - Provide only the exact `kubectl` command needed to fulfill the user's request.
   - Do not include extra explanations, placeholders, or context.
   - Enclose the command within a code block with `bash` syntax highlighting.
2. **For incomplete or ambiguous instructions:**
   - Politely ask the user for the specific missing information.
   - Do not provide any commands or placeholders in your response.
   - Respond in plain text, clearly stating what information is needed.

**Important Rules:**

- Do not generate CLI commands containing placeholders (e.g., `<pod_name>`, `<resource_name>`).
- Ensure all CLI commands are complete, valid, and executable as provided.
- If user input is insufficient to form a complete command, ask for clarification instead of using placeholders.
- Provide only the necessary CLI command output without any additional text.

### Instruction:
{{ .Prompt }}

### Response:
{{ .Response }}
</s>
"""
```
#### 2. Create the Model with Ollama

Open your terminal and run the following command to create the model:

```bash
ollama create kubectl_cli_assistant -f Modelfile
```

This command tells Ollama to create a new model named `kubectl_cli_assistant` using the configuration specified in `Modelfile`.

#### 3. Run the Model
Start interacting with your model:

```bash
ollama run kubectl_cli_assistant
```

This will initiate the model and prompt you for input based on the template provided.

Alternatively, you can provide an instruction directly as a positional argument:

```bash
ollama run kubectl_cli_assistant "List all pods in all namespaces."
```

Example output:

```bash
kubectl get pods --all-namespaces
```
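Since the template asks the model to wrap commands in a fenced `bash` block, a small shell helper can pull the command out of a response for scripting. This is an illustrative sketch, not part of the released tooling; the `extract_command` name and the guarded Ollama invocation are assumptions:

```shell
#!/bin/sh
# Illustrative helper (an assumption, not part of the model card's tooling):
# pull the kubectl command out of the fenced bash block that the template
# instructs the model to wrap commands in.

fence='```'   # literal code-fence marker

extract_command() {
  # Keep only the lines between the opening bash fence and the closing
  # fence, then drop the two fence lines themselves.
  printf '%s\n' "$1" \
    | sed -n "/^${fence}bash\$/,/^${fence}\$/p" \
    | sed '1d;$d'
}

# Only talk to Ollama when it is actually installed on this machine.
if command -v ollama >/dev/null 2>&1; then
  response=$(ollama run kubectl_cli_assistant "List all pods in all namespaces.")
  extract_command "$response"
fi
```

If the model instead replies with a plain-text clarification (no fenced block), `extract_command` prints nothing, which a caller can treat as "no command produced".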
## Model Details

### Purpose

The model assists users by:

- Generating accurate `kubectl` commands based on natural language instructions.
- Politely requesting additional information if the instruction is incomplete or ambiguous.
### Intended Users
- Kubernetes administrators
- DevOps engineers
- Developers working with Kubernetes clusters
### Training Process

- Base Model: Unsloth's Llama-3.2-1B-Instruct-bnb-4bit
- Fine-tuning: Leveraged the Unsloth framework and Hugging Face's TRL library for efficient training.
- Training Data: Custom dataset focused on Kubernetes `kubectl` command usage and clarification requests, containing approximately 1,000 entries:
  - 500 entries for `kubectl` command generation.
  - 500 entries for polite clarification requests when information is insufficient.
## Model Features

### 1. Command Generation in Bash Format

When the model generates CLI commands, it provides them in `bash` format, enclosed within code blocks for easy execution and clarity. This allows users to copy and paste the commands directly into their terminal.
### 2. Handling Ambiguity with Polite Clarifications
If the instruction is incomplete or ambiguous, the model will politely ask for the specific missing information instead of making assumptions. This ensures accuracy and prevents the execution of incorrect commands.
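Because commands arrive in a fenced `bash` block while clarifications arrive as plain text, a caller can tell the two cases apart mechanically. A minimal sketch, assuming the model follows the template's formatting rules (the `classify_response` helper is hypothetical):

```shell
#!/bin/sh
# Sketch (assumes the model honors the template's formatting rules):
# classify a reply as a command or a clarification by checking for the
# fenced bash block the template requires around generated commands.

fence='```'   # literal code-fence marker

classify_response() {
  if printf '%s\n' "$1" | grep -q "^${fence}bash"; then
    echo "command"
  else
    echo "clarification"
  fi
}
```

A wrapper script could then execute command replies (after review) and surface clarification replies back to the user.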
## Examples

### Example 1: Generating a Command

**Instruction:**

List all pods in all namespaces.

**Response:**

```bash
kubectl get pods --all-namespaces
```

### Example 2: Handling Ambiguity

**Instruction:**

Delete the resource.

**Response:**

Could you please specify the type and name of the resource you want to delete?

### Example 3: Additional Command

**Instruction:**

Create a namespace called 'production'.

**Response:**

```bash
kubectl create namespace production
```
## Limitations and Considerations

- Accuracy: The model may occasionally produce incorrect commands. Always review the output before execution.
- No General Explanations: This model is strictly limited to generating `kubectl` commands and requesting additional information if needed. It does not provide general explanations about Kubernetes concepts.
- Security: Be cautious when executing generated commands, especially in production environments.
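Given the security caveat, a thin wrapper can refuse to execute any generated output that is not a `kubectl` invocation. This is only an illustrative guard, not part of the model; the `safe_run` name is an assumption, and a human should still review every command before it runs:

```shell
#!/bin/sh
# Minimal safety wrapper (illustrative, not part of the model): refuse
# anything that is not a single kubectl invocation before executing it.
# This only blocks obvious misuse; it does not validate the command itself.

safe_run() {
  case "$1" in
    "kubectl "*) ;;                    # must start with "kubectl "
    *) echo "refused: not a kubectl command" >&2
       return 1 ;;
  esac
  if command -v kubectl >/dev/null 2>&1; then
    sh -c "$1"                         # execute only after the prefix check
  else
    echo "would run: $1"               # kubectl not installed here
  fi
}

# Demo: this is rejected because it is not a kubectl command.
safe_run "echo hello" || true
```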
## Feedback and Contributions
We welcome feedback and contributions to improve the model and dataset. If you encounter issues or have suggestions for improvement:
- GitHub: Unsloth Repository
- Contact: Reach out to the developer, dereklck, for further assistance.
**Note:** This model provides assistance in generating `kubectl` commands based on user input. Always verify the generated commands in a safe environment before executing them in a production cluster.
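For commands that create or modify resources, `kubectl`'s client-side dry run is one safe way to preview what a generated command would do before running it for real. A minimal sketch (the `add_dry_run` helper is hypothetical, and the flag only applies to create/apply-style commands):

```shell
#!/bin/sh
# Hypothetical helper: rewrite a generated kubectl command so it performs
# a client-side dry run instead of touching the cluster. Assumes the input
# is a create/apply-style invocation where --dry-run=client is meaningful.
add_dry_run() {
  printf '%s --dry-run=client -o yaml\n' "$1"
}

# Example: preview the namespace manifest without creating anything.
add_dry_run "kubectl create namespace production"
```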
## Summary
The Kubernetes CLI Assistant Model is a specialized tool designed to help users generate accurate `kubectl` commands or request necessary additional information when the instructions are incomplete. By focusing strictly on these two tasks, the model ensures effectiveness and reliability for users who need quick command-line assistance for Kubernetes operations.
## Model Tree

Model tree for `K8sAIOps/kubectl_operator_1b_peft`:

- Base model: `meta-llama/Llama-3.2-1B-Instruct`