---
library_name: transformers
language:
- ru
license: apache-2.0
base_model: PekingU/rtdetr_r50vd_coco_o365
tags:
- object-detection
- pytorch-lightning
- russian-license-plates
- rt-detr
model-index:
- name: RT-DETR Russian car plate detection with classification by type, fine-tuned with PyTorch Lightning
results: []
---
## Model description
Detection model for Russian (RF) vehicle license plates. It currently distinguishes two classes, `n_p` and `p_p`: regular plates and police plates.
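The class mapping can be read straight from the model config; a minimal check (the exact id-to-name order is not stated on this card, so treat the comment as an expectation rather than a guarantee):
<pre>
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Garon16/rtdetr_r50vd_russia_plate_detector_lightning")
print(config.id2label)  # should list the two classes, n_p and p_p
</pre>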
## Intended uses & limitations
Usage example:
<pre>
from PIL import Image
from transformers import AutoModelForObjectDetection, AutoImageProcessor
import torch
import supervision as sv

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the fine-tuned detector and its image processor
model = AutoModelForObjectDetection.from_pretrained('Garon16/rtdetr_r50vd_russia_plate_detector_lightning').to(DEVICE)
processor = AutoImageProcessor.from_pretrained('Garon16/rtdetr_r50vd_russia_plate_detector_lightning')

path = 'path/to/image'
image = Image.open(path)

# Run inference
inputs = processor(image, return_tensors="pt").to(DEVICE)
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to boxes in the original image resolution
w, h = image.size
results = processor.post_process_object_detection(
    outputs, target_sizes=[(h, w)], threshold=0.3)

detections = sv.Detections.from_transformers(results[0]).with_nms(0.3)
labels = [
    model.config.id2label[class_id]
    for class_id
    in detections.class_id
]

# Draw boxes and class labels
annotated_image = image.copy()
annotated_image = sv.BoundingBoxAnnotator().annotate(annotated_image, detections)
annotated_image = sv.LabelAnnotator().annotate(annotated_image, detections, labels=labels)

grid = sv.create_tiles(
    [annotated_image],
    grid_size=(1, 1),
    single_tile_size=(512, 512),
    tile_padding_color=sv.Color.WHITE,
    tile_margin_color=sv.Color.WHITE
)
sv.plot_image(grid, size=(10, 10))
</pre>
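Beyond visualization, the detections can be used directly. The sketch below is not from the original card: it crops every detected plate from the same `image`, e.g. for a downstream OCR step, and the file-name pattern is purely illustrative.
<pre>
# Assumes `image`, `detections` and `model` from the example above.
for i, ((x1, y1, x2, y2), class_id, score) in enumerate(
        zip(detections.xyxy, detections.class_id, detections.confidence)):
    plate = image.crop((int(x1), int(y1), int(x2), int(y2)))
    # Illustrative file name: index, class name, confidence
    plate.save(f"plate_{i}_{model.config.id2label[class_id]}_{score:.2f}.png")
</pre>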
## Training and evaluation data
Trained on my own dataset: https://universe.roboflow.com/testcarplate/russian-license-plates-classification-by-this-type
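The dataset is hosted on Roboflow Universe; if you want a local copy, the standard `roboflow` client can export it in COCO format. A minimal sketch; the API key and dataset version below are placeholders, not values from this card:
<pre>
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder, use your own key
project = rf.workspace("testcarplate").project(
    "russian-license-plates-classification-by-this-type")
dataset = project.version(1).download("coco")  # dataset version is an assumption
print(dataset.location)  # local folder with images and COCO annotations
</pre>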
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- seed: 42
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
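
These hyperparameters map onto a PyTorch Lightning setup roughly like the sketch below. It is a hedged reconstruction, not the actual training script: the `RTDetrModule` class, the label mapping and the commented-out data loader are assumptions; only the numbers come from the list above.
<pre>
import lightning as L
import torch
from transformers import AutoModelForObjectDetection, get_linear_schedule_with_warmup

class RTDetrModule(L.LightningModule):
    """Hypothetical wrapper around the base RT-DETR checkpoint."""
    def __init__(self, lr=5e-5, warmup_steps=300):
        super().__init__()
        self.save_hyperparameters()
        self.model = AutoModelForObjectDetection.from_pretrained(
            "PekingU/rtdetr_r50vd_coco_o365",
            id2label={0: "n_p", 1: "p_p"},  # assumed label mapping
            label2id={"n_p": 0, "p_p": 1},
            ignore_mismatched_sizes=True,
        )

    def training_step(self, batch, batch_idx):
        outputs = self.model(pixel_values=batch["pixel_values"], labels=batch["labels"])
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        optimizer = torch.optim.AdamW(self.parameters(), lr=self.hparams.lr)
        # Linear schedule with 300 warmup steps, as listed above
        scheduler = get_linear_schedule_with_warmup(
            optimizer,
            num_warmup_steps=self.hparams.warmup_steps,
            num_training_steps=self.trainer.estimated_stepping_batches,
        )
        return [optimizer], [{"scheduler": scheduler, "interval": "step"}]

L.seed_everything(42)
trainer = L.Trainer(max_epochs=20)
# trainer.fit(RTDetrModule(), train_dataloaders=train_loader)  # train_loader with batch_size=32
</pre>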
### Training results
I have not yet figured out how to push the training results here automatically when fine-tuning with Lightning.
### Framework versions
- Transformers 4.46.0.dev0
- Pytorch 2.5.0+cu124
- Tokenizers 0.20.1