IGNF / sgiordano committed on
Commit d436fc0
1 Parent(s): 0cfc9c5

Update README.md

Files changed (1):
  1. README.md +29 -36
README.md CHANGED
@@ -91,12 +91,12 @@ pipeline_tag: image-segmentation
  <br>

  <div style="border:1px solid black; padding:25px; background-color:#FDFFF4 ; padding-top:10px; padding-bottom:1px;">
- <h1>FLAIR-INC_rgbie_15cl_resnet34-unet</h1>
- <p>The general characteristics of this specific model <strong>FLAIR-INC_rgbie_15cl_resnet34-unet</strong> are :</p>
  <ul style="list-style-type:disc;">
  <li>Trained with the FLAIR-INC dataset</li>
- <li>RGBIE images (true colours + infrared + elevation)</li>
- <li>U-Net with a Resnet-34 encoder</li>
  <li>15-class nomenclature: [building, pervious surface, impervious surface, bare soil, water, coniferous, deciduous, brushwood, vineyard, herbaceous, agricultural land, plowed land, swimming pool, snow, greenhouse]</li>
  </ul>
  </div>
@@ -119,17 +119,13 @@ The product called ([BD ORTHO®](https://geoservices.ign.fr/bdortho)) has its ow
  Consequently, the model’s prediction would improve if the user images are similar to the original ones.

  _**Radiometry of input images**_ :
- The BD ORTHO input images are distributed in 8-bit encoding format per channel. When traning the model, input normalization was performed (see section **Trainingg Details**).
  It is recommended that the user apply the same type of input normalization when running inference with the model.

  _**Multi-domain model**_ :
  The FLAIR-INC dataset that was used for training is composed of 75 radiometric domains. In the case of aerial images, domain shifts are frequent and are mainly due to: the date of acquisition of the aerial survey (April to November), the spatial domain (equivalent to a French department administrative division) and downstream radiometric processing.
  By construction (sampling 75 domains) the model is robust to these shifts, and can be applied to any images of the [BD ORTHO® product](https://geoservices.ign.fr/bdortho).

- _**Specification for the Elevation channel**_ :
- The fifth dimension of the RGBIE images is the Elevation (height of building and vegetation). This information is encoded in a 8-bit encoding format.
- When decoded to [0,255] ints, a difference of 1 should coresponds to 0.2 meters step of elevation difference.
-

  _**Land Cover classes of prediction**_ :
  The original class nomenclature of the FLAIR Dataset encompasses 19 classes (see the [FLAIR dataset](https://huggingface.co/datasets/IGNF/FLAIR) page for details).
@@ -141,15 +137,15 @@ As a result, the logits produced by the model are of size 19x1, but classes n°
  ## Bias, Risks, Limitations and Recommendations

  _**Using the model on input images with other spatial resolution**_ :
- The FLAIR-INC_rgbie_15cl_resnet34-unet model was trained with fixed scale conditions. All patches used for training are derived from aerial images with 0.2 meters spatial resolution. Only flip and rotate augmentations were performed during the training process.
  No data augmentation method concerning scale change was used during training. The user should be aware that generalization issues can occur when applying this model to images with a different spatial resolution.

  _**Using the model for other remote sensing sensors**_ :
- The FLAIR-INC_rgbie_15cl_resnet34-unet model was trained with aerial images of the ([BD ORTHO® product](https://geoservices.ign.fr/bdortho)) that encopass very specific radiometric image processing.
  Using the model on other types of aerial images or on satellite images may require transfer learning or domain adaptation techniques.

  _**Using the model on other spatial areas**_ :
- The FLAIR-INC_rgbie_15cl_resnet34-unet model was trained on patches reprensenting the French Metropolitan territory.
  The user should be aware that applying the model to other types of landscapes may imply a drop in model metrics.

  ---
@@ -166,7 +162,7 @@ Fine-tuning and prediction tasks are detailed in the README file.

  ### Training Data

- 218 400 patches of 512 x 512 pixels were used to train the **FLAIR-INC_RVBIE_resnet34_unet_15cl_norm** model.
  The train/validation split was performed patchwise to obtain an 80% / 20% distribution between train and validation.
  Annotation was performed at the _zone_ level (~100 patches per _zone_). Spatial independence between patches is guaranteed, as patches from the same _zone_ were assigned to the same set (TRAIN or VALIDATION).
  The following number of patches were used for train and validation :
@@ -204,8 +200,8 @@ Statistics of the TRAIN+VALIDATION set :
  * HorizontalFlip(p=0.5)
  * RandomRotate90(p=0.5)
  * Input normalization (mean=0 | std=1):
- * norm_means: [105.08, 110.87, 101.82, 106.38, 53.26]
- * norm_stds: [52.17, 45.38, 44, 39.69, 79.3]
  * Seed: 2022
  * Batch size: 10
  * Number of epochs: 200
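For the removed RGBIE variant, the five-channel statistics listed above and the 0.2 m elevation step described earlier in the card can be applied as in the following minimal numpy sketch (function names are illustrative, not from the FLAIR codebase):

```python
import numpy as np

# Channel statistics listed above for the RGBIE variant (R, G, B, IR, E).
NORM_MEANS = np.array([105.08, 110.87, 101.82, 106.38, 53.26], dtype=np.float32)
NORM_STDS = np.array([52.17, 45.38, 44.0, 39.69, 79.3], dtype=np.float32)

def normalize_rgbie(patch_uint8):
    """Normalize an 8-bit H x W x 5 patch to per-channel mean 0 / std 1."""
    return (patch_uint8.astype(np.float32) - NORM_MEANS) / NORM_STDS

def decode_elevation(e_uint8):
    """Decode the 8-bit elevation channel: one step corresponds to 0.2 m."""
    return e_uint8.astype(np.float32) * 0.2
```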
@@ -218,7 +214,7 @@ Statistics of the TRAIN+VALIDATION set :

  #### Speeds, Sizes, Times

- The FLAIR-INC_rgbie_15cl_resnet34-unet model was trained on a HPC/AI resources provided by GENCI-IDRIS (Grant 2022-A0131013803).
  16 V100 GPUs were used (4 nodes, 4 GPUs per node). With this configuration the approximate training time is 6 minutes per epoch.

  FLAIR-INC_rgbie_15cl_resnet34-unet was obtained for num_epoch=76 with corresponding val_loss=0.56.
@@ -248,37 +244,34 @@ As a result the _Snow_ class is absent from the TEST set.

  #### Metrics

- With the evaluation protocol, the **FLAIR-INC_RVBIE_resnet34_unet_15cl_norm** have been evaluated to **OA= 76.37%** and **mIoU=58.63%**.
  The _snow_ class is discarded from the average metrics.

  The following table gives the class-wise metrics :

- | Modalities | IoU (%) | Fscore (%) | Precision (%) | Recall (%) |
  | ----------------------- | ----------|---------|---------|---------|
- | building | 82.63 | 90.49 | 90.26 | 90.72 |
- | pervious surface | 53.24 | 69.48 | 68.97 | 70.00 |
- | impervious surface | 74.17 | 85.17 | 86.28 | 84.09 |
- | bare soil | 60.40 | 75.31 | 80.49 | 70.75 |
- | water | 87.59 | 93.38 | 93.16 | 93.61 |
- | coniferous | 46.35 | 63.34 | 63.52 | 63.16 |
- | deciduous | 67.45 | 80.56 | 77.44 | 83.94 |
- | brushwood | 30.23 | 46.43 | 63.55 | 36.58 |
- | vineyard | 82.93 | 90.67 | 91.35 | 89.99 |
- | herbaceous vegetation | 55.03 | 70.99 | 70.59 | 71.40 |
- | agricultural land | 52.01 | 68.43 | 59.18 | 81.12 |
- | plowed land | 40.84 | 57.99 | 68.28 | 50.40 |
- | swimming_pool | 48.44 | 65.27 | 81.62 | 54.37 |
- | _snow_ | _00.00_ | _00.00_ | _00.00_ | _00.00_ |
- | greenhouse | 39.45 | 56.57 | 45.52 | 74.72 |
  | **average** | **58.63** | **72.44** | **74.3** | **72.49** |

-
-
-
  The following illustration gives the resulting confusion matrix :
  * Top: normalised according to columns, columns sum to 100% and the **precision** is on the diagonal of the matrix
  * Bottom: normalised according to rows, rows sum to 100% and the **recall** is on the diagonal of the matrix
 
  <br>

  <div style="border:1px solid black; padding:25px; background-color:#FDFFF4 ; padding-top:10px; padding-bottom:1px;">
+ <h1>FLAIR-INC_rgb_15cl_mitb5-unet</h1>
+ <p>The general characteristics of this specific model <strong>FLAIR-INC_rgb_15cl_mitb5-unet</strong> are:</p>
  <ul style="list-style-type:disc;">
  <li>Trained with the FLAIR-INC dataset</li>
+ <li>RGB images (true colours)</li>
+ <li>U-Net with a MiT-B5 encoder</li>
  <li>15-class nomenclature: [building, pervious surface, impervious surface, bare soil, water, coniferous, deciduous, brushwood, vineyard, herbaceous, agricultural land, plowed land, swimming pool, snow, greenhouse]</li>
  </ul>
  </div>
 
  Consequently, the model’s prediction would improve if the user images are similar to the original ones.

  _**Radiometry of input images**_ :
+ The BD ORTHO input images are distributed in 8-bit encoding format per channel. When training the model, input normalization was performed (see section **Training Details**).
  It is recommended that the user apply the same type of input normalization when running inference with the model.

  _**Multi-domain model**_ :
  The FLAIR-INC dataset that was used for training is composed of 75 radiometric domains. In the case of aerial images, domain shifts are frequent and are mainly due to: the date of acquisition of the aerial survey (April to November), the spatial domain (equivalent to a French department administrative division) and downstream radiometric processing.
  By construction (sampling 75 domains) the model is robust to these shifts, and can be applied to any images of the [BD ORTHO® product](https://geoservices.ign.fr/bdortho).

  _**Land Cover classes of prediction**_ :
  The original class nomenclature of the FLAIR Dataset encompasses 19 classes (see the [FLAIR dataset](https://huggingface.co/datasets/IGNF/FLAIR) page for details).
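Since the model produces logits for the full 19-class FLAIR nomenclature while only 15 classes are used, the unused classes can be masked out before taking the argmax. A minimal numpy sketch; the unused class indices below are placeholders (the true indices are given on the FLAIR dataset page, not in this card):

```python
import numpy as np

# Placeholder indices for the 4 unused classes; check the FLAIR dataset
# page for the actual class numbering.
UNUSED_CLASSES = [15, 16, 17, 18]

def predict_15cl(logits):
    """Argmax over 19-class logits of shape (C, H, W), with the
    unused classes suppressed so they can never be predicted."""
    masked = logits.copy()
    masked[UNUSED_CLASSES, ...] = -np.inf
    return masked.argmax(axis=0)
```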
 
  ## Bias, Risks, Limitations and Recommendations

  _**Using the model on input images with other spatial resolution**_ :
+ The FLAIR-INC_rgb_15cl_mitb5-unet model was trained with fixed scale conditions. All patches used for training are derived from aerial images with 0.2 meters spatial resolution. Only flip and rotate augmentations were performed during the training process.
  No data augmentation method concerning scale change was used during training. The user should be aware that generalization issues can occur when applying this model to images with a different spatial resolution.

  _**Using the model for other remote sensing sensors**_ :
+ The FLAIR-INC_rgb_15cl_mitb5-unet model was trained with aerial images of the [BD ORTHO® product](https://geoservices.ign.fr/bdortho), which encompasses very specific radiometric image processing.
  Using the model on other types of aerial images or on satellite images may require transfer learning or domain adaptation techniques.

  _**Using the model on other spatial areas**_ :
+ The FLAIR-INC_rgb_15cl_mitb5-unet model was trained on patches representing the French Metropolitan territory.
  The user should be aware that applying the model to other types of landscapes may imply a drop in model metrics.

  ---
 

  ### Training Data

+ 218 400 patches of 512 x 512 pixels were used to train the **FLAIR-INC_rgb_15cl_mitb5-unet** model.
  The train/validation split was performed patchwise to obtain an 80% / 20% distribution between train and validation.
  Annotation was performed at the _zone_ level (~100 patches per _zone_). Spatial independence between patches is guaranteed, as patches from the same _zone_ were assigned to the same set (TRAIN or VALIDATION).
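The zone-level grouping described above can be sketched as follows (a minimal illustration, assuming patches are held in a dict keyed by zone; the exact split procedure used by the authors is not given in this card):

```python
import random

def zone_split(patches_by_zone, val_fraction=0.2, seed=2022):
    """Split patches into TRAIN / VALIDATION at the zone level, so that
    patches from the same zone never straddle the two sets."""
    zones = sorted(patches_by_zone)
    rng = random.Random(seed)
    rng.shuffle(zones)
    n_val = max(1, round(val_fraction * len(zones)))
    val_zones = set(zones[:n_val])
    train = [p for z in zones if z not in val_zones for p in patches_by_zone[z]]
    val = [p for z in val_zones for p in patches_by_zone[z]]
    return train, val
```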
  The following number of patches were used for train and validation :
 
  * HorizontalFlip(p=0.5)
  * RandomRotate90(p=0.5)
  * Input normalization (mean=0 | std=1):
+ * norm_means: [105.08, 110.87, 101.82]
+ * norm_stds: [52.17, 45.38, 44]
  * Seed: 2022
  * Batch size: 10
  * Number of epochs: 200
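At inference time the same per-channel statistics should be applied; a minimal numpy sketch of this normalization (the function name is illustrative):

```python
import numpy as np

# RGB channel statistics listed above for this model.
NORM_MEANS = np.array([105.08, 110.87, 101.82], dtype=np.float32)
NORM_STDS = np.array([52.17, 45.38, 44.0], dtype=np.float32)

def normalize_patch(patch_uint8):
    """Normalize an 8-bit H x W x 3 BD ORTHO patch to per-channel
    mean 0 / std 1, matching the training-time normalization."""
    return (patch_uint8.astype(np.float32) - NORM_MEANS) / NORM_STDS
```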
 

  #### Speeds, Sizes, Times

+ The FLAIR-INC_rgb_15cl_mitb5-unet model was trained on HPC/AI resources provided by GENCI-IDRIS (Grant 2022-A0131013803).
  16 V100 GPUs were used (4 nodes, 4 GPUs per node). With this configuration the approximate training time is 6 minutes per epoch.

  FLAIR-INC_rgbie_15cl_resnet34-unet was obtained for num_epoch=76 with corresponding val_loss=0.56.
 

  #### Metrics

+ With the evaluation protocol, the **FLAIR-INC_rgb_15cl_mitb5-unet** model has been evaluated at **OA=75.87%** and **mIoU=53.44%**.
  The _snow_ class is discarded from the average metrics.

  The following table gives the class-wise metrics :

+ | Classes | IoU (%) | Fscore (%) | Precision (%) | Recall (%) |
  | ----------------------- | ----------|---------|---------|---------|
+ | building | 77.933 | 87.598 | 87.358 | 87.839 |
+ | pervious_surface | 55.060 | 71.018 | 73.758 | 68.474 |
+ | impervious_surface | 71.639 | 83.477 | 82.315 | 84.671 |
+ | bare_soil | 63.670 | 77.803 | 78.465 | 77.152 |
+ | water | 85.011 | 91.899 | 90.641 | 93.192 |
+ | coniferous | 58.907 | 74.140 | 77.911 | 70.717 |
+ | deciduous | 69.909 | 82.290 | 78.461 | 86.511 |
+ | brushwood | 29.254 | 45.266 | 59.845 | 36.398 |
+ | vineyard | 77.993 | 87.636 | 84.002 | 91.599 |
+ | herbaceous | 50.343 | 66.971 | 71.301 | 63.136 |
+ | agricultural_land | 58.801 | 74.056 | 66.961 | 82.832 |
+ | plowed_land | 42.202 | 59.355 | 65.114 | 54.532 |
+ | swimming_pool | 0.000 | 0.000 | 0.000 | 0.000 |
+ | snow | _0.000_ | _0.000_ | _0.000_ | _0.000_ |
+ | greenhouse | 60.884 | 75.687 | 66.62 | 87.609 |
  | **average** | **58.63** | **72.44** | **74.3** | **72.49** |

  The following illustration gives the resulting confusion matrix :
  * Top: normalised according to columns, columns sum to 100% and the **precision** is on the diagonal of the matrix
  * Bottom: normalised according to rows, rows sum to 100% and the **recall** is on the diagonal of the matrix
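All of the per-class metrics reported above (IoU, F-score, precision, recall) and the overall accuracy can be derived from a single confusion matrix; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def classwise_metrics(cm):
    """Per-class metrics from a confusion matrix where
    cm[i, j] = number of pixels of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp          # pixels of the class that were missed
    precision = tp / np.maximum(tp + fp, 1e-12)
    recall = tp / np.maximum(tp + fn, 1e-12)
    iou = tp / np.maximum(tp + fp + fn, 1e-12)
    fscore = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    oa = tp.sum() / cm.sum()          # overall accuracy
    return iou, fscore, precision, recall, oa
```

Normalizing the confusion matrix by columns puts precision on the diagonal, and normalizing by rows puts recall on the diagonal, which is exactly the top/bottom convention of the illustration described above.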