---
license: apache-2.0
datasets:
- cnmoro/QuestionClassification
tags:
- classification
- questioning
- directed
- generic
language:
- en
- pt
library_name: transformers
pipeline_tag: text-classification
widget:
- text: "What is the summary of the text?"
---
(This model has a v2; use it instead: https://huggingface.co./cnmoro/granite-question-classifier)
A finetuned version of prajjwal1/bert-tiny.

The goal is to classify questions as either "Directed" or "Generic". If a question is not directed, we change the actions performed in a RAG pipeline: for a generic question (e.g. asking for a summary of the text), semantic search would not be directly useful.

(Class 0 is Generic; Class 1 is Directed.)

The accuracy on the training dataset is around 87.5%.
```python
from transformers import BertForSequenceClassification, BertTokenizerFast
import torch

# Load the model and tokenizer
model = BertForSequenceClassification.from_pretrained("cnmoro/bert-tiny-question-classifier")
tokenizer = BertTokenizerFast.from_pretrained("cnmoro/bert-tiny-question-classifier")
model.eval()

def is_question_generic(question):
    # Tokenize the sentence and convert to PyTorch tensors
    inputs = tokenizer(
        question.lower(),
        truncation=True,
        padding=True,
        return_tensors="pt",
        max_length=512
    )

    # Get the model's predictions
    with torch.no_grad():
        outputs = model(**inputs)

    # Extract the predicted class (0 = Generic, 1 = Directed)
    predictions = outputs.logits
    predicted_class = torch.argmax(predictions).item()

    return int(predicted_class) == 0
```
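A minimal usage sketch of the helper above; the sample questions are illustrative and not taken from the training dataset:

```python
# Generic question: asks for a summary, so direct semantic search would not help
print(is_question_generic("What is the summary of the text?"))  # expected: True

# Directed question: targets a specific fact that semantic search can retrieve
print(is_question_generic("Who signed the contract in 2021?"))  # expected: False
```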