<div align="center">
<p>
<img src="figures/banner-draw.png">
</p>

<div>

[![Licence](https://img.shields.io/github/license/Ileriayo/markdown-badges?style=flat)](LICENSE)
[![Docker Pulls](https://img.shields.io/docker/pulls/hichtala/draw?logo=docker)](https://hub.docker.com/r/hichtala/draw/)
[![Twitter](https://badgen.net/badge/icon/twitter?icon=twitter&label)](https://twitter.com/tiazden)

[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-sm.svg)](https://huggingface.co/HichTala/draw)
[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-sm.svg)](https://huggingface.co/datasets/HichTala/yugioh_dataset)

[🇫🇷 Français](README_fr.md)

DRAW (which stands for **D**etect and **R**ecognize **A** **W**ild range of cards) is the very first object detector
trained to detect _Yu-Gi-Oh!_ cards in all types of images, and in particular in dueling images.

Other works exist (see [Related Works](#div-aligncenterrelated-worksdiv)) but none is capable of recognizing cards during a duel.

DRAW is entirely open source and all contributions are welcome.

</div>

</div>

---
## <div align="center">📄Documentation</div>

<details open>
<summary>
Install
</summary>

Both a Docker installation and a more conventional installation are available. If you are not very familiar with code,
the Docker installation is recommended. Otherwise, opt for the classic installation.

#### Docker installation

If you are familiar with Docker, the image is available [here](https://hub.docker.com/r/hichtala/draw).

Otherwise, I recommend downloading [Docker Desktop](https://www.docker.com/products/docker-desktop/) if you are on Windows.
If you are on Linux, you can refer to the documentation [here](https://docs.docker.com/engine/install/).

Once that is done, you simply have to execute the following command:
```shell
docker run -p 5000:5000 --name draw hichtala/draw:latest
```
Your installation is now complete. You can press `Ctrl+C` and continue to the Usage section.

#### Classic installation

You need Python to be installed. Python installation is not detailed here; you can refer to the [documentation](https://www.python.org/).

We first need to install PyTorch. It is recommended to use a package manager such as [miniconda](https://docs.conda.io/projects/miniconda/en/latest/).
Please refer to its [documentation](https://docs.conda.io/projects/miniconda/en/latest/).

When everything is set up, you can run the following command to install PyTorch:
```shell
python -m pip install torch torchvision
```
If you want to use your GPUs to make everything run faster, please refer to the [documentation](https://pytorch.org/get-started/locally/).

Then you just have to clone the repo and install the `requirements`:
```shell
git clone https://github.com/HichTala/draw
cd draw
python -m pip install -r requirements.txt
```

Your installation is now complete.

</details>

<details open>
<summary>Usage</summary>

To use DRAW, you first need to download the models and the data; see the [Models and Data](#div-aligncentermodels-and-datadiv) section.

Once you have them, follow the instructions below depending on whether you have the Docker or the classic installation.
Put all the models in the same folder, and keep the dataset as it is.

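As an illustration, the expected layout could look something like this (the file names here are hypothetical, for illustration only; use the actual names of the files you downloaded):

```
models/
├── yolo_retro.pt
├── beit_monster.pth
├── beit_spell.pth
└── beit_trap.pth
club_yugioh_dataset/
└── ...          # dataset kept exactly as downloaded
```
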
#### Docker installation

You have to copy the data and models into the container. Execute the following commands:

```shell
docker cp path/to/dataset/club_yugioh_dataset draw:/data
docker cp path/to/model/folder draw:/models
```

Once that is done, you just have to run:
```shell
docker start draw
```
Open the address `localhost:5000` in your browser. Refer to [Both](#both) below for details about the parameters.

#### Classic installation

You need to modify the `config.json` file, putting the path to your dataset folder in the `"data_path"` parameter
and the path to the model folder in the `"trained_models"` parameter.
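For instance, a minimal `config.json` along these lines (the paths are placeholders; only the two keys named above are taken from this README, any other keys your copy of the file contains should be left as they are):

```json
{
  "data_path": "path/to/dataset/club_yugioh_dataset",
  "trained_models": "path/to/model/folder"
}
```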

Once done, just run:
```shell
flask --app app.py run
```
Open the address `localhost:5000` in your browser. Refer to [Both](#both) below for details about the parameters.

#### Both

* In the first parameter, the one with the gears, put the `config.json` file
* In the second parameter, the one with the camera, put the video you want to process (leave it empty to use your webcam)
* In the last one, put your deck list in the `ydk` format

Then you can press the button and start the process!
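For reference, a `.ydk` deck list is a plain-text format used by tools such as YGOPro: lines starting with `#` or `!` mark sections, and every other line is a card passcode (the number printed at the bottom-left of a card). A minimal illustrative example (the passcodes shown are just examples):

```
#created by DRAW
#main
89631139
46986414
#extra
!side
```
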
</details>

---
## <div align="center">⚙️Models and Data</div>

<details open>
<summary>Models</summary>

In this project, the tasks were divided so that one model locates the cards and other models classify them.
Similarly, to classify the cards, I divided the task so that there is one model for each type of card,
and the model to use is determined by the color of the card.
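The color-based dispatch can be sketched as follows. This is an illustrative reconstruction, not the project's actual code: the reference RGB values and the card-type names are assumptions, roughly approximating _Yu-Gi-Oh!_ card-frame colors.

```python
# Illustrative sketch: choose which classifier to run from a card crop's
# dominant color. Reference colors are rough assumptions, not values
# taken from this repository.
REFERENCE_COLORS = {
    "normal_monster": (196, 169, 109),   # yellowish frame
    "effect_monster": (196, 127, 86),    # orange frame
    "spell": (29, 158, 116),             # green frame
    "trap": (188, 90, 168),              # purple/pink frame
}

def dispatch_model(mean_rgb):
    """Return the card-type key whose reference color is closest
    (squared Euclidean distance in RGB space) to `mean_rgb`."""
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(mean_rgb, ref))
    return min(REFERENCE_COLORS, key=lambda k: dist(REFERENCE_COLORS[k]))

# A greenish crop is routed to the spell classifier:
print(dispatch_model((40, 150, 110)))  # spell
```

In practice `mean_rgb` would be the average color of the frame region of a crop returned by the localization model.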

Models can be downloaded from <a href="https://huggingface.co/HichTala/draw">Hugging Face</a>.
Models whose names start with `beit` are for classification, and the one starting with `yolo` is for localization.

[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-sm.svg)](https://huggingface.co/HichTala/draw)

For now, only models for "retro" gameplay are available, but the ones for classic format play will be added soon.
I consider the "retro" format to include all cards released before the first _synchro_ set, that is, every card edited up until the Light of Destruction set (LODT, 05/13/2008), plus all Speed Duel cards.

</details>

<details open>
<summary>Data</summary>

To create the datasets, the <a href="https://ygoprodeck.com/api-guide/">YGOPRODeck</a> API was used. Two datasets were created,
one for "retro" play and the other for classic format play. Just as there is a model for each type of card,
there is a dataset for each type of card.
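As a sketch of how such data can be gathered, the public YGOPRODeck v7 `cardinfo` endpoint accepts query parameters such as `name` or `fname` (fuzzy name); the exact queries used to build this project's datasets are not documented here, so this is only an assumption-level example:

```python
# Build a query URL for the public YGOPRODeck v7 card database API.
from urllib.parse import urlencode

API_URL = "https://db.ygoprodeck.com/api/v7/cardinfo.php"

def card_info_url(**params):
    """Return a cardinfo query URL, e.g. card_info_url(fname="Dark Magician")."""
    return f"{API_URL}?{urlencode(params)}"

print(card_info_url(fname="Dark Magician"))
# To actually download the JSON (network access required):
# import json, urllib.request
# with urllib.request.urlopen(card_info_url(fname="Dark Magician")) as r:
#     cards = json.load(r)["data"]
```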

The dataset can be downloaded from <a href="https://huggingface.co/datasets/HichTala/yugioh_dataset">Hugging Face</a>.

[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-sm.svg)](https://huggingface.co/datasets/HichTala/yugioh_dataset)

For now, only the "retro" dataset is available, but the one for classic format play will be added soon.

</details>

---
## <div align="center">💡Inspiration</div>

This project was inspired by content creator [SuperZouloux](https://www.youtube.com/watch?v=64-LfbggqKI)'s idea of a hologram bringing _Yu-Gi-Oh!_ cards to life.
His project uses chips inserted under the sleeves of each card,
which are read by the play mat, enabling the cards to be recognized.

Inserting the chips into the sleeves is not only laborious, but also poses another problem:
face-down cards are read in the same way as face-up ones.
An automatic detector is therefore a much more suitable solution.

Although this project was discouraged by _KONAMI_ <sup>®</sup>, the game's publisher (which is quite understandable),
we can nevertheless imagine such a system being used to display the cards played during a live duel,
allowing spectators to read them.

---
## <div align="center">🔗Related Works</div>

Although, to my knowledge, `draw` is the first detector capable of locating and recognizing _Yu-Gi-Oh!_ cards in a dueling environment,
other works exist and were a source of inspiration for this project. They are worth mentioning here.

[Yu-Gi-Oh! NEURON](https://www.konami.com/games/eu/fr/products/yugioh_neuron/) is an official application developed by _KONAMI_ <sup>®</sup>.
It is packed with features, including card recognition. The application can recognize up to 20 cards at a time, which is very decent.
The drawback is that the cards must be of good quality to be recognized, which is not necessarily the case in a duel context.
What's more, it cannot be integrated into other software, so the only way to use it is through the application itself.

[yugioh one shot learning](https://github.com/vanstorm9/yugioh-one-shot-learning), made by `vanstorm9`, is a
Yu-Gi-Oh! card classification program that allows you to recognize cards. It uses a Siamese network to train its classification
model. It gives very impressive results on good-quality images, but not such good ones on low-quality images, and it
cannot localize cards.

[YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the very famous `yolo` family of object detection models.
It hardly needs an introduction today: it represents the state of the art in real-time object detection.

[BEiT](https://arxiv.org/pdf/2106.08254.pdf) is a pre-trained model for image classification. It uses image transformers,
which are based on the attention mechanism. It suits our problem because the authors also propose a model pre-trained on `ImageNet-22K`,
a dataset with 22k classes (more than most classifiers handle), which is interesting for our case since there are more than 11k cards in _Yu-Gi-Oh!_.

---
## <div align="center">🔍Method Overview</div>

A Medium blog post will soon be written and published, explaining the main process from data collection to final prediction.
If you have any questions, don't hesitate to open an issue.

---
## <div align="center">💬Contact</div>

You can reach me on Twitter [@tiazden](https://twitter.com/tiazden) or by email at [[email protected]](mailto:[email protected]).