## Model Description
This model is a fine-tuned BERT model for sentence-level AI content detection.
## Training Data
The model was trained on a dataset of over 100,000 sentences, each labeled as either AI-generated or human-written. Because labels are assigned per sentence, the model classifies each sentence individually, which is particularly useful for highlighting AI-written passages within larger texts.
## Evaluation Metrics
The model achieved an accuracy of 90% on the validation and test sets.
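
Accuracy on a labeled set can be reproduced with a short loop. The snippet below is a minimal sketch, not the original evaluation script: the example sentences, their 0/1 labels, and the assumption that those label indices match the model's output classes are all illustrative.

```python
# Minimal evaluation sketch (illustrative): the sentences and 0/1 labels
# below are placeholder assumptions, not part of the released training data.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("shahxeebhassan/bert_base_ai_content_detector")
model = BertForSequenceClassification.from_pretrained("shahxeebhassan/bert_base_ai_content_detector")
model.eval()

eval_data = [
    ("The experiment was conducted over a period of six weeks.", 0),  # assumed human-written
    ("In conclusion, it is evident that technology shapes modern education.", 1),  # assumed AI-generated
]

correct = 0
for sentence, label in eval_data:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    prediction = logits.argmax(dim=1).item()
    correct += int(prediction == label)

print(f"Accuracy: {correct / len(eval_data):.2%}")
```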
## Usage
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load the fine-tuned detector and its tokenizer
tokenizer = BertTokenizer.from_pretrained("shahxeebhassan/bert_base_ai_content_detector")
model = BertForSequenceClassification.from_pretrained("shahxeebhassan/bert_base_ai_content_detector")

inputs = tokenizer("Distance learning will not benefit students because the students are not able to develop as good of a relationship with their teachers.", return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to class probabilities and take the most likely label
logits = outputs.logits
probabilities = torch.softmax(logits, dim=1).cpu().numpy()
predicted_label = probabilities.argmax(axis=1)

print(f"Predicted label for the input text: {predicted_label[0]}")
```
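
Since the model classifies individual sentences, a longer document can be screened by splitting it into sentences and running each one through the detector. The sketch below illustrates this; the period-based splitting and the example text are assumptions for demonstration, and the predicted label index is printed as-is rather than interpreted, since the label-to-class mapping is not stated here.

```python
# Sentence-by-sentence screening sketch (illustrative). The naive split on
# periods is an assumption; a proper sentence tokenizer is preferable for real text.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("shahxeebhassan/bert_base_ai_content_detector")
model = BertForSequenceClassification.from_pretrained("shahxeebhassan/bert_base_ai_content_detector")
model.eval()

text = (
    "Distance learning will not benefit students. "
    "Moreover, the absence of in-person interaction limits relationship building."
)
sentences = [s.strip() for s in text.split(".") if s.strip()]

for sentence in sentences:
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=1)[0]
    label = int(probs.argmax())
    print(f"label={label} (p={probs[label].item():.2f}): {sentence}")
```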
## Base Model
This model was fine-tuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased).