Model Description
AI Cores Second generation LLM that is fine tune with cybersecurity dataset for cybersecurity applications
Training
Supervised fine tuning(SFT) with GeneralReasoning/GeneralThought-195K 2 epochs using our own's Huginggface Unsloth Training Space
Intended Use
- Intended users: Application Engineers, Software Engineers, Data scientists and developers working on cybersecurity applications.
- Out-of-scope use cases: This model should not be used for medical advice, legal decisions, or any life-critical systems.
How to use
Use with transformers Starting with transformers >= 4.43.0 onward, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers and bitsandbytes installation via pip install --upgrade transformers. pip install --upgrade bitsandbytes accelerate torch
Inference:
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
# Create a BitsAndBytesConfig for 8-bit quantization
bnb_config = BitsAndBytesConfig(
load_in_8bit=True
)
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AicoresSecurity/Cybernet-Sec-3B-R1")
if torch.cuda.is_available():
from transformers import BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
"AicoresSecurity/Cybernet-Sec-3B-R1",
quantization_config=bnb_config,
device_map="auto"
)
else:
# Fallback for CPU-only systems
model = AutoModelForCausalLM.from_pretrained("AicoresSecurity/Cybernet-Sec-3B-R1")
# Define your system prompt and user prompt
system_prompt = "You are cybersecurity expert. <reasoning></reasoning>"
user_prompt = "Analyze the following network scan report and identify open ports and their associated vulnerabilities. Suggest best practices to secure these ports: port 80"
full_prompt = system_prompt + user_prompt
# Tokenize the full prompt and move it to the model's device
input_ids = tokenizer.encode(full_prompt, return_tensors="pt").to(model.device)
# Generate output from the model
output_ids = model.generate(
input_ids,
max_length=100, # Adjust as needed
do_sample=True, # Use sampling for more varied output
temperature=0.7, # Adjust for creativity
)
# Decode the generated tokens back into a string
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
Testing
This LLM has been tested with prompt injection and jailbreaking and it passed. It didn't provide any information that is malicious or illegal.
No affliated to website: Cybernetsecurity.com
Thanks
Thank you for GeneralReasoning
Uploaded model
- Developed by: EpistemeAI
- License: apache-2.0
- Finetuned from model : AicoresSecurity/Cybernet-Sec-3B-R1-V0
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
- Downloads last month
- 41
Model tree for AicoresSecurity/Cybernet-Sec-3B-R1-V1
Base model
meta-llama/Llama-3.2-3B-Instruct