HichTala committed
Commit
bc44a04
1 Parent(s): 5f71803

Update README.md

Files changed (1): README.md (+198 −1)
README.md (YAML front matter, lines 6-9):

language:
- en
size_categories:
- 1K<n<10K
---

<div align="center">
<p>
<a href="https://www.github.com/hichtala/draw" target="_blank">
<img src="figures/banner-draw.png">
</a>
</p>

DRAW (which stands for **D**etect and **R**ecognize **A** **W**ild range of cards) is the first object detector
trained to detect _Yu-Gi-Oh!_ cards in all types of images, and in particular in images of duels.

Other works exist (see [Related Works](#div-aligncenterrelated-worksdiv)), but none of them can recognize cards during a duel.

DRAW is entirely open source and all contributions are welcome.

</div>

---
## <div align="center">📄Documentation</div>

<details open>
<summary>
Install
</summary>

Both a Docker installation and a more conventional installation are available. If you're not very familiar with code,
the Docker installation is recommended. Otherwise, opt for the classic installation.

#### Docker installation

If you are familiar with Docker, the Docker image is available [here](https://hub.docker.com/r/hichtala/draw).

Otherwise, I recommend downloading [Docker Desktop](https://www.docker.com/products/docker-desktop/) if you are on Windows.
If you are on Linux, you can refer to the documentation [here](https://docs.docker.com/engine/install/).

Once that is done, simply execute the following command:
```shell
docker run -p 5000:5000 --name draw hichtala/draw:latest
```
Your installation is now complete. You can press `Ctrl+C` and continue to the Usage section.


#### Classic installation

You need Python to be installed. Python installation isn't detailed here; you can refer to the [documentation](https://www.python.org/).

We first need to install PyTorch. It is recommended to use a package manager such as [miniconda](https://docs.conda.io/projects/miniconda/en/latest/).
Please refer to the [documentation](https://docs.conda.io/projects/miniconda/en/latest/).

When everything is set up, you can run the following command to install PyTorch:
```shell
python -m pip install torch torchvision
```
If you want to use your GPUs to make everything run faster, please refer to the [documentation](https://pytorch.org/get-started/locally/).

Then you just have to clone the repo and install the `requirements`:
```shell
git clone https://github.com/HichTala/draw
cd draw
python -m pip install -r requirements.txt
```

Your installation is now complete.

</details>

<details open>
<summary>Usage</summary>

To use DRAW, you first need to download the models and the data; see the [Models and Data](#div-aligncentermodels-and-datadiv) section.

Once you have them, follow the instructions below depending on whether you chose the Docker or the classic installation.
Put all the models in the same folder, and keep the dataset as it is.

#### Docker installation

You have to copy the data and models into the container. Execute the following commands:

```shell
docker cp path/to/dataset/club_yugioh_dataset draw:/data
docker cp path/to/model/folder draw:/models
```

Once that is done, just run:
```shell
docker start draw
```
Then open the address `localhost:5000` and enjoy. Refer to [below](#both) for details about the parameters.


#### Classic installation

You need to modify the `config.json` file by putting the path of your dataset folder in the `"data_path"` parameter
and the path of your model folder in the `"trained_models"` parameter.
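As a sketch, the two parameters can be filled in like this (the paths are placeholders, and the repo's actual `config.json` may contain more keys than shown here):

```shell
# Hypothetical sketch: write a minimal config.json.
# "data_path" and "trained_models" are the parameters described above;
# the paths are placeholders to replace with your own.
cat > config.json << 'EOF'
{
    "data_path": "path/to/dataset/club_yugioh_dataset",
    "trained_models": "path/to/model/folder"
}
EOF
```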
Once done, just run:
```shell
flask --app app.py run
```
Then open the address `localhost:5000` and enjoy. Refer to [below](#both) for details about the parameters.

#### Both

* In the first parameter, the one with gears, put the `config.json` file
* In the second parameter, the one with a camera, put the video you want to process (leave it empty to use your webcam)
* In the last one, put your deck list in the `ydk` format

Then you can press the button and start the process!
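For reference, `ydk` is a plain-text deck-list format. A minimal sketch, with placeholder passcodes rather than a meaningful deck, looks like this:

```shell
# Hypothetical sketch of the ydk deck-list format:
# '#main' and '#extra' open the main and extra deck sections,
# '!side' opens the side deck, and each following line is a card passcode.
# The IDs below are placeholders, not a real deck list.
cat > deck.ydk << 'EOF'
#main
46986414
89631139
#extra
!side
EOF
```

In practice, deck editors that export `ydk` files will produce the full list for you; this only shows the expected shape.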

</details>

---
## <div align="center">⚙️Models and Data</div>

<details open>
<summary>Models</summary>

In this project, the task is divided so that one model locates the cards and other models classify them.
To classify the cards, the task is further divided so that there is one model for each type of card,
with the model to be used determined by the color of the card.

Models can be downloaded from <a href="https://huggingface.co/HichTala/draw">Hugging Face</a>.
Models whose names start with `beit` are for classification and those starting with `yolo` are for localization.
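Since the Usage section expects all the models in a single folder, the layout can be sketched like this (the file names are hypothetical; use the actual names of the downloaded checkpoints):

```shell
# Hypothetical layout: gather every downloaded model file into one folder.
# The file names below are placeholders, not the real checkpoint names.
mkdir -p models
touch models/yolo_localization.pt    # localization (name starts with 'yolo')
touch models/beit_spell.pt           # classification, one per card type (names start with 'beit')
```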

[![Model on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/model-on-hf-sm.svg)](https://huggingface.co/HichTala/draw)

For now only the models for "retro" gameplay are available, but the ones for classic format play will be added soon.
I considered the "retro" format to cover all cards before the first _synchro_ set, that is, all the cards edited up to the Light of Destruction set (LODT - 05/13/2008) plus all Speed Duel cards.

</details>

<details open>
<summary>Data</summary>

To create the datasets, the <a href="https://ygoprodeck.com/api-guide/">YGOPRODeck</a> API was used. Two datasets were thus created,
one for "retro" play and the other for classic format play. Just as there is a model for each type of card,
there is a dataset for each type of card.

The datasets can be downloaded from <a href="https://huggingface.co/datasets/HichTala/yugioh_dataset">Hugging Face</a>.

[![Dataset on HF](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-sm.svg)](https://huggingface.co/datasets/HichTala/yugioh_dataset)

For now only the "retro" dataset is available, but the one for classic format play will be added soon.


</details>

---
## <div align="center">💡Inspiration</div>

This project is inspired by content creator [SuperZouloux](https://www.youtube.com/watch?v=64-LfbggqKI)'s idea of a hologram bringing _Yu-Gi-Oh!_ cards to life.
His project uses chips inserted under the sleeves of each card,
which are read by the play mat, enabling the cards to be recognized.

Inserting the chips into the sleeves is not only laborious, but also poses another problem:
face-down cards are read in the same way as face-up ones.
An automatic detector is therefore a much more suitable solution.

Although this project was discouraged by _KONAMI_ <sup>®</sup>, the game's publisher (which is quite understandable),
we can nevertheless imagine such a system being used to display the cards played during a live duel,
so that spectators can read them.

---
## <div align="center">🔗Related Works</div>

Although, to my knowledge, `draw` is the first detector capable of locating and recognizing _Yu-Gi-Oh!_ cards in a dueling environment,
other works exist and were a source of inspiration for this project. They are worth mentioning here.

[Yu-Gi-Oh! NEURON](https://www.konami.com/games/eu/fr/products/yugioh_neuron/) is an official application developed by _KONAMI_ <sup>®</sup>.
It's packed with features, including card recognition. The application is capable of recognizing a total of 20 cards at a time, which is very decent.
The drawback is that the cards must be of good quality to be recognized, which is not necessarily the case in a duel context.
What's more, it can't be integrated into other software, so the only way to use it is through the application itself.

[yugioh one shot learning](https://github.com/vanstorm9/yugioh-one-shot-learning), made by `vanstorm9`, is a
Yu-Gi-Oh! card classification program that allows you to recognize cards. It uses a Siamese network to train its classification
model. It gives very impressive results on good-quality images, but much weaker ones on low-quality images, and it
cannot localize cards.

[YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the very famous `yolo` family of object detection models.
It hardly needs an introduction today: it represents the state of the art in real-time object detection.

[BEiT](https://arxiv.org/pdf/2106.08254.pdf) is a pre-trained model for image classification. It uses image transformers,
which are based on the attention mechanism. It suits our problem because the authors also propose a model pre-trained on `ImageNet-22k`.
It is a dataset with 22k classes (more than most classifiers handle), which is interesting for our case since there are more than 11k cards in _Yu-Gi-Oh!_.
196
+
197
+ ---
198
+ ## <div align="center">🔍Method Overview</div>
199
+
200
+ A medium blog will soon be written and published, explaining the main process from data collection to final prediction.
201
+ If you have any questions, don't hesitate to open an issue.
202
+
203
+ ---
204
+ ## <div align="center">💬Contact</div>
205
+
206
+ You can reach me on Twitter [@tiazden](https://twitter.com/tiazden) or by email at [[email protected]](mailto:[email protected]).