Takeda Section Classifier

A pretrained model (a fine-tuned version of BERT Multilingual Uncased) trained with supervised learning on French and English documents for section classification. This work was done by the Digital Innovation Team from Belgium 🇧🇪 (LE).

Model Description

The model classifies text into classes representing the parts of a report:

  • Description
  • Immediate Correction
  • Root Cause
  • Action Plan
  • Impacted Elements

Intended uses & limitations

The model can be used for Takeda documentation; the team does not guarantee results for out-of-scope documentation.

How to Use

You can use this model directly with a pipeline for text classification:

from transformers import (
    TextClassificationPipeline,
    AutoTokenizer,
    AutoModelForSequenceClassification,
)
tokenizer = AutoTokenizer.from_pretrained("TakedaAIML/section_classifier")

model = AutoModelForSequenceClassification.from_pretrained(
    "TakedaAIML/section_classifier"
)

pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer)
prediction = pipe('this is a piece of text representing the Description section. An event occurred on June 24 and ...')
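The pipeline returns a list of dicts with `label` and `score` keys (all five scores if called with `top_k=None`). A minimal sketch of picking the most likely section; the scores below are illustrative placeholders, not real model output:

```python
# Illustrative pipeline output; real scores come from the model.
predictions = [
    {"label": "Description", "score": 0.91},
    {"label": "Root Cause", "score": 0.05},
    {"label": "Action Plan", "score": 0.04},
]

# Select the class with the highest score.
best = max(predictions, key=lambda p: p["score"])
print(best["label"])  # -> Description
```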
Model size: 167M parameters (safetensors, F32)
