---
library_name: transformers
language:
  - ru
license: apache-2.0
base_model: PekingU/rtdetr_r50vd_coco_o365
tags:
  - object-detection
  - pytorch-lightning
  - russian-license-plates
  - rt-detr
model-index:
  - name: >-
      RT-DETR Russian car plate detection with classification by type,
      fine-tuned with PyTorch Lightning
    results: []
---

## Model description

A detection model for Russian car license plates. It currently distinguishes 2 classes: `n_p` (regular plates) and `p_p` (police plates).
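The two class names can be resolved from the model config at inference time. A minimal sketch, assuming the id-to-label mapping below (the exact ids are an assumption; verify against `model.config.id2label`):

```python
# Hypothetical mapping -- check model.config.id2label for the real one.
ID2LABEL = {0: "n_p", 1: "p_p"}  # n_p = regular plate, p_p = police plate

def class_name(class_id: int) -> str:
    """Return the human-readable class name for a predicted class id."""
    return ID2LABEL[class_id]
```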

## Intended uses & limitations

Usage example:

```python
from transformers import AutoModelForObjectDetection, AutoImageProcessor
from PIL import Image
import torch
import supervision as sv


DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForObjectDetection.from_pretrained(
    'Garon16/rtdetr_r50vd_russia_plate_detector_lightning').to(DEVICE)
processor = AutoImageProcessor.from_pretrained(
    'Garon16/rtdetr_r50vd_russia_plate_detector_lightning')

path = 'path/to/image'
image = Image.open(path).convert("RGB")
inputs = processor(image, return_tensors="pt").to(DEVICE)
with torch.no_grad():
    outputs = model(**inputs)

# Rescale predictions to the original image size, keep detections above
# 0.3 confidence, and suppress overlapping boxes with NMS.
w, h = image.size
results = processor.post_process_object_detection(
    outputs, target_sizes=[(h, w)], threshold=0.3)
detections = sv.Detections.from_transformers(results[0]).with_nms(0.3)
labels = [
    model.config.id2label[class_id]
    for class_id
    in detections.class_id
]

# Draw boxes and class labels on a copy of the image.
annotated_image = image.copy()
annotated_image = sv.BoundingBoxAnnotator().annotate(annotated_image, detections)
annotated_image = sv.LabelAnnotator().annotate(annotated_image, detections, labels=labels)

grid = sv.create_tiles(
    [annotated_image],
    grid_size=(1, 1),
    single_tile_size=(512, 512),
    tile_padding_color=sv.Color.WHITE,
    tile_margin_color=sv.Color.WHITE
)
sv.plot_image(grid, size=(10, 10))
```

## Training and evaluation data

Trained on my own dataset: https://universe.roboflow.com/testcarplate/russian-license-plates-classification-by-this-type

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- seed: 42
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
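The linear scheduler with warmup can be sketched as a plain learning-rate multiplier: a linear ramp from 0 to 1 over the warmup steps, then a linear decay back to 0 over the remaining steps (transformers-style behaviour; the total step count below is illustrative, not taken from the training run):

```python
def linear_schedule(step: int, warmup_steps: int = 300,
                    total_steps: int = 10_000) -> float:
    """LR multiplier: ramp 0 -> 1 over warmup, then decay 1 -> 0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

base_lr = 5e-5
# Effective LR at step s is base_lr * linear_schedule(s):
# step 0 -> 0.0, step 300 -> base_lr, step 10_000 -> 0.0
```

In a LightningModule this multiplier could be wired into `torch.optim.lr_scheduler.LambdaLR` from `configure_optimizers`.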

### Training results

I have not yet figured out how to push everything here automatically when fine-tuning with Lightning.

### Framework versions

- Transformers 4.46.0.dev0
- Pytorch 2.5.0+cu124
- Tokenizers 0.20.1