jubeku committed
Commit 5029ebb
1 Parent(s): 4ab2c84

Update readme and add config file

Files changed (3):
  1. README.md +113 -0
  2. canopyheight_map.png +0 -0
  3. config.yaml +142 -0
README.md CHANGED
@@ -1,3 +1,116 @@
---
license: apache-2.0
---

# Model Card for granite-geospatial-canopyheight

<p align="center" width="100%">
<img src="canopyheight_map.png" width="600">
</p>

The granite-geospatial-canopyheight model is a fine-tuned geospatial foundation model for predicting canopy height (i.e., the height of trees and vegetation on the Earth's surface) from optical satellite imagery.
Canopy height is an important component in quantifying the carbon cycle and is crucial for estimating crop yields, monitoring forest timber production, and quantifying the carbon sequestered by nature-based actions.

The model predicts canopy height from Harmonized Landsat and Sentinel-2 (HLS) L30 optical satellite imagery and is fine-tuned using training labels from the Global Ecosystem Dynamics Investigation (GEDI) L2A product. Uniquely, the model has been fine-tuned using HLS and GEDI data collected from 15 biomes across the globe.
Please see the Model Description below for more details.

## How to Get Started with the Model

This model was trained using [Terratorch](https://github.com/IBM/terratorch).
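
Training runs in Terratorch are driven by YAML configs like the config.yaml shipped in this repository. As a minimal sketch of how a fine-tuning run could be launched (assuming the data roots in config.yaml point at your own prepared splits, and that your Terratorch version exposes `build_lightning_cli`; this is our illustration, not an official recipe):

```python
# Hedged sketch: programmatic equivalent of the CLI call
# `terratorch fit --config config.yaml`.
from terratorch.cli_tools import build_lightning_cli

build_lightning_cli(["fit", "--config", "config.yaml"])
```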

We make available both the model weights and the configuration file that defines the model.

You can run inference easily with Terratorch:

```python
from huggingface_hub import hf_hub_download
from terratorch.cli_tools import LightningInferenceModel

# Download the fine-tuned checkpoint and its Terratorch config from the Hugging Face Hub
ckpt_path = hf_hub_download(repo_id="ibm-granite/granite-geospatial-canopyheight", filename="canopyheight_model.ckpt")
config_path = hf_hub_download(repo_id="ibm-granite/granite-geospatial-canopyheight", filename="config.yaml")

# Build an inference-ready model from the config and checkpoint
model = LightningInferenceModel.from_config(config_path, ckpt_path)

# Run inference on every image in a directory; replace <input_directory> with your own path
inference_results, input_file_names = model.inference_on_dir(<input_directory>)
```
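
The exact return types may differ across Terratorch versions; here `inference_results` holds the per-pixel canopy-height predictions and `input_file_names` the corresponding inputs. A minimal sketch for persisting the predictions (assuming they behave like a sequence of 2D tensors; `predictions/` is a hypothetical output directory of our choosing):

```python
import os

import numpy as np

out_dir = "predictions"  # hypothetical output directory
os.makedirs(out_dir, exist_ok=True)
for pred, name in zip(inference_results, input_file_names):
    # Re-use the input file's base name for the saved array
    stem = os.path.splitext(os.path.basename(name))[0]
    np.save(os.path.join(out_dir, f"{stem}_canopy_height.npy"),
            pred.detach().cpu().numpy())
```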

For more details, check out the [Getting Started Notebook](https://github.com/ibm-granite/granite-geospatial-canopyheight/blob/main/notebooks/agb_getting_started.ipynb), which guides the user through three experiments:

1. Zero-shot for all biomes
2. Zero-shot for a single biome
3. Few-shot for a single biome

## Model Description

The granite-geospatial-canopyheight model is a geospatial foundation model that has been fine-tuned on HLS and GEDI data to perform pixel-wise regression of canopy height.

The base foundation model from which the granite-geospatial-canopyheight model is fine-tuned is similar to that described in this [paper](https://arxiv.org/abs/2310.18660), with the exception that the backbone is a Swin-B transformer. We opted for the Swin-B backbone instead of the ViT used in the original paper because Swin-B provides the following advantages (illustrated in the sketch after this list):
- a smaller starting patch size, which provides a higher effective resolution
- windowed attention, which provides better computational efficiency
- hierarchical merging, which provides a useful inductive bias
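
To make the first advantage concrete, here is an illustrative comparison using generic ImageNet backbones from the `timm` library (a stand-in we chose for illustration; these are not the Prithvi backbones used in this work):

```python
# Illustrative only: contrast the tokenization granularity of a plain ViT-B
# with a Swin-B, using generic timm models as stand-ins.
import timm

vit = timm.create_model("vit_base_patch16_224", pretrained=False)
swin = timm.create_model("swin_base_patch4_window7_224", pretrained=False)

# ViT-B embeds non-overlapping 16x16 patches, so a 224x224 image becomes a
# 14x14 token grid. Swin-B starts from 4x4 patches (a 56x56 grid) and only
# coarsens it gradually via hierarchical patch merging, preserving a higher
# effective resolution in the early stages.
print(vit.patch_embed.patch_size)   # (16, 16)
print(swin.patch_embed.patch_size)  # (4, 4)
```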

The base foundation model was pretrained using SimMIM, a self-supervised learning strategy in which large parts of the input HLS data are masked and then reconstructed by the model. A small decoder composed of a single convolutional layer and a Pixel Shuffle module was added to the Swin-B backbone for the (pretraining) reconstruction task.
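
As a rough sketch of the SimMIM objective (our own minimal illustration, not the actual pretraining code): random patches of the input are zeroed out, the network reconstructs the full input, and an L1 loss is computed only on the masked pixels:

```python
# Minimal SimMIM-style sketch (illustration only). `encoder_decoder` stands in
# for the Swin-B backbone plus the small conv + PixelShuffle decoder above.
import torch

def simmim_step(encoder_decoder, images, patch=32, mask_ratio=0.6):
    n, c, h, w = images.shape
    # Sample a random binary mask over a coarse patch grid, then upsample
    # it to pixel resolution.
    grid = torch.rand(n, 1, h // patch, w // patch, device=images.device) < mask_ratio
    mask = grid.float().repeat_interleave(patch, -2).repeat_interleave(patch, -1)
    # Zero out the masked pixels and reconstruct the full image.
    recon = encoder_decoder(images * (1.0 - mask))
    # L1 reconstruction loss, evaluated on the masked pixels only.
    return ((recon - images).abs() * mask).sum() / (mask.sum() * c + 1e-8)
```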

For fine-tuning, we replaced the small decoder with a UPerNet adapted for pixel-wise regression. We opted for the UPerNet because it fuses features across transformer blocks, an intuition similar to that of the U-Net, which is consistently considered state of the art for regression tasks with Earth observation data. As the standard UPerNet implementation on a Swin-B backbone predicts a final feature map 4x smaller than the input, we appended two Pixel Shuffle layers to learn the upscaling. More details on the fine-tuned model can be found in this [paper](https://doi.org/10.1109/IGARSS53475.2024.10640630).
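
The learned upscaling at the end of the head can be sketched as follows (a hedged illustration under our own naming, not the Terratorch implementation): each `PixelShuffle(2)` trades channels for a 2x gain in spatial resolution, so two of them recover the 4x downscaling of the UPerNet output:

```python
# Hedged sketch of a regression head with two learned PixelShuffle upscalings.
# Names and channel widths are illustrative; see the Terratorch source for the
# actual head used by PixelwiseRegressionTask.
import torch
from torch import nn

class UpscalingRegressionHead(nn.Module):
    def __init__(self, in_channels=32):
        super().__init__()
        self.head = nn.Sequential(
            # Each conv expands channels 4x, then PixelShuffle(2) rearranges
            # them into a 2x larger feature map: two rounds give 4x upscaling.
            nn.Conv2d(in_channels, in_channels * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
            nn.Conv2d(in_channels, in_channels * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
            nn.Dropout(0.16),  # matches head_dropout in config.yaml
            nn.Conv2d(in_channels, 1, kernel_size=1),
            nn.ReLU(),         # final activation: canopy height is non-negative
        )

    def forward(self, x):
        # x: (N, 32, H/4, W/4) UPerNet output -> (N, H, W) canopy height
        return self.head(x).squeeze(1)
```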

## Model Releases (along with the branch name where the models are stored)

- **tag v1**: 04/11/2024

- Stay tuned for more models!

### Model Sources

- **Repository:** https://github.com/ibm-granite/granite-geospatial-canopyheight/
- **Paper (canopy height):** https://doi.org/10.1109/IGARSS53475.2024.10640630
- **Paper (foundation model):** https://arxiv.org/abs/2310.18660

### External Blogs
- https://research.ibm.com/blog/img-geospatial-studio-think

## Training Data

The model was trained on a collection of datasets provided by NASA:
- Harmonized Landsat and Sentinel-2 (HLS) L30: https://lpdaac.usgs.gov/products/hlsl30v002/
- Global Ecosystem Dynamics Investigation (GEDI) L4A: https://doi.org/10.3334/ORNLDAAC/1907

For training and testing, the model requires a cloud-free snapshot of an area in which every pixel is representative of the spectral bands for that location. To create these cloud-free images, we acquired HLS data during the leaf-on season for each hemisphere, analyzed the time series, and selected pixels that are not contaminated by clouds. We compute the mean value of each cloud-free pixel over the leaf-on season for each spectral band, and these means are assembled into a composite image representative of that area. The corresponding GEDI L4A canopy height data obtained during the same leaf-on season are interpolated to the HLS grid (EPSG:4326) so that the measured canopy height points are aligned with the HLS data. Because GEDI data are spatially and temporally sparse, pixels with no corresponding GEDI measurement are filled with a no-data value.
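
A minimal sketch of this per-band compositing step (our own illustration with hypothetical array names; the real pipeline also handles reprojection and GEDI footprint interpolation):

```python
# Hedged sketch of leaf-on, cloud-free compositing (illustration only).
# `hls_stack` is a hypothetical array of HLS observations for one tile:
# shape (T, B, H, W) = (time steps, spectral bands, height, width).
# `cloud_free` marks uncontaminated pixels per time step: shape (T, H, W).
import numpy as np

def leaf_on_composite(hls_stack, cloud_free, no_data=0.0):
    mask = cloud_free[:, None, :, :]          # broadcast the mask over bands
    counts = mask.sum(axis=0)                 # cloud-free observations per pixel
    sums = np.where(mask, hls_stack, 0.0).sum(axis=0)
    # Mean over cloud-free time steps; pixels never cloud-free get no_data.
    return np.where(counts > 0, sums / np.maximum(counts, 1), no_data)
```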

## Citation
If you use our model or its associated architectures/approaches in your work, kindly cite the following paper:

**BibTeX:**

```
@inproceedings{da2024geospatial,
  title={Geospatial Foundational Model for Canopy Height Estimates Across Kenya’s Ecoregions},
  author={Da Silva, Ademir Ferreira and Zortea, Maciel and Kuehnert, Julian and Atluri, Anjani and Singh, Gurkwandar and Srinivasan, Harini and Klein, Levente J},
  booktitle={IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium},
  pages={2853--2857},
  year={2024},
  organization={IEEE}
}
```

**APA:**
```
Da Silva, A. F., Zortea, M., Kuehnert, J., Atluri, A., Singh, G., Srinivasan, H., & Klein, L. J. (2024, July). Geospatial Foundational Model for Canopy Height Estimates Across Kenya’s Ecoregions. In IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium (pp. 2853-2857). IEEE.
```

## Model Card Authors

Julian Kuehnert, Levente Klein, Campbell Watson, and Thomas Brunschwiler

## IBM Public Repository Disclosure

All content in this repository, including code, has been provided by IBM under the associated open source software license, and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open source project (not as an IBM product), and IBM makes no assertions as to the level of quality or security, and will not be maintaining this code going forward.
canopyheight_map.png ADDED
config.yaml ADDED
@@ -0,0 +1,142 @@
# lightning.pytorch==2.4.0
seed_everything: 42

### Trainer configuration
trainer:
  accelerator: auto
  strategy: auto
  devices: auto
  num_nodes: 1
  # precision: 16-mixed
  logger:
    class_path: TensorBoardLogger
    init_args:
      save_dir: ./experiments
      name: finetune_region
  callbacks:
    - class_path: RichProgressBar
    - class_path: LearningRateMonitor
      init_args:
        logging_interval: epoch
    - class_path: EarlyStopping
      init_args:
        monitor: val/loss
        patience: 100
  max_epochs: 300
  check_val_every_n_epoch: 1
  log_every_n_steps: 20
  enable_checkpointing: true
  default_root_dir: ./experiments

### Data configuration
data:
  class_path: GenericNonGeoPixelwiseRegressionDataModule
  init_args:
    batch_size: 64
    num_workers: 8
    train_transform:
      - class_path: albumentations.HorizontalFlip
        init_args:
          p: 0.5
      - class_path: albumentations.RandomRotate90
        init_args:
          p: 0.5
      - class_path: albumentations.VerticalFlip
        init_args:
          p: 0.5
      - class_path: ToTensorV2
    # Specify all bands which are in the input data.
    # -1 are placeholders for bands that are in the data but that we will discard.
    dataset_bands:
      - -1
      - BLUE
      - GREEN
      - RED
      - NIR_NARROW
      - SWIR_1
      - SWIR_2
      - -1
      - -1
      - -1
      - -1
    output_bands: # Specify the bands which are used from the input data.
      - BLUE
      - GREEN
      - RED
      - NIR_NARROW
      - SWIR_1
      - SWIR_2
    rgb_indices:
      - 2
      - 1
      - 0
    # Directory roots to training, validation and test data splits:
    train_data_root: train_images
    train_label_data_root: train_labels
    val_data_root: val_images
    val_label_data_root: val_labels
    test_data_root: test_images
    test_label_data_root: test_labels
    means: # Mean value of the training dataset per band
      - 556.025024
      - 910.020020
      - 1039.141968
      - 2665.447266
      - 2361.062256
      - 1633.309326
    stds: # Standard deviation of the training dataset per band
      - 413.787903
      - 562.086670
      - 819.830444
      - 816.528381
      - 1120.049438
      - 1072.057861
    # Nodata value in the label data
    no_label_replace: -1
    # Nodata value in the input data
    no_data_replace: 0

### Model configuration
model:
  class_path: terratorch.tasks.PixelwiseRegressionTask
  init_args:
    model_args:
      decoder: UperNetDecoder
      pretrained: false
      backbone: prithvi_swin_B
      backbone_drop_path_rate: 0.3
      decoder_channels: 32
      in_channels: 6
      bands:
        - BLUE
        - GREEN
        - RED
        - NIR_NARROW
        - SWIR_1
        - SWIR_2
      num_frames: 1
      head_dropout: 0.16
      head_final_act: torch.nn.ReLU
      head_learned_upscale_layers: 2
    loss: rmse
    ignore_index: -1
    freeze_backbone: false
    freeze_decoder: false
    model_factory: PrithviModelFactory
    # Uncomment this block for tiled inference:
    # tiled_inference_parameters:
    #   h_crop: 224
    #   h_stride: 192
    #   w_crop: 224
    #   w_stride: 192
    #   average_patches: true

optimizer:
  class_path: torch.optim.AdamW
  init_args:
    lr: 5.0e-05
    weight_decay: 0.3
lr_scheduler:
  class_path: ReduceLROnPlateau
  init_args:
    monitor: val/loss

out_dtype: float32