---
license: cc-by-nc-4.0
---

## ⛳ NeRF-MAE Dataset
Download the preprocessed datasets from the links below.

- Pretraining dataset (comprising NeRF radiance and density grids). [Download link](https://s3.amazonaws.com/tri-ml-public.s3.amazonaws.com/github/nerfmae/NeRF-MAE_pretrain.tar.gz)
- Finetuning dataset (comprising NeRF radiance and density grids and bounding box/semantic labelling annotations). [3D Object Detection (Provided by NeRF-RPN)](https://drive.google.com/drive/folders/1q2wwLi6tSXu1hbEkMyfAKKdEEGQKT6pj), [3D Semantic Segmentation (Coming Soon)](), [Voxel Super-Resolution (Coming Soon)]()

Extract the pretraining and finetuning datasets under `NeRF-MAE/datasets`. The directory structure should look like this:

```
NeRF-MAE
├── pretrain
│   ├── features
│   └── nerfmae_split.npz
└── finetune
    └── front3d_rpn_data
        ├── features
        ├── aabb
        └── obb
```
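
If you prefer to script the setup, below is a minimal sketch that downloads the pretraining archive and unpacks it under `NeRF-MAE/datasets` using only the Python standard library. The URL and target folder are taken from this card; the rest is illustrative (it assumes the archive unpacks into the `pretrain/` layout shown above), and `wget` + `tar -xzf` work just as well.

```python
# Illustrative helper (not part of the official NeRF-MAE tooling): fetch the
# pretraining archive listed above and extract it under NeRF-MAE/datasets.
import tarfile
import urllib.request
from pathlib import Path

PRETRAIN_URL = "https://s3.amazonaws.com/tri-ml-public.s3.amazonaws.com/github/nerfmae/NeRF-MAE_pretrain.tar.gz"
DATASET_DIR = Path("NeRF-MAE/datasets")  # assumed working-directory layout


def download_and_extract(url: str = PRETRAIN_URL, dest: Path = DATASET_DIR) -> None:
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / "NeRF-MAE_pretrain.tar.gz"
    if not archive.exists():
        print(f"Downloading {url} ...")
        urllib.request.urlretrieve(url, archive)
    print(f"Extracting {archive.name} ...")
    with tarfile.open(archive, "r:gz") as tar:
        # Archive layout is assumed to match the pretrain/ tree shown above.
        tar.extractall(path=dest)


if __name__ == "__main__":
    download_and_extract()
```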

**For more details, dataloaders, and how to use this dataset**, see our GitHub repo: https://github.com/zubair-irshad/NeRF-MAE
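
As a quick sanity check after extraction, you can list the arrays stored in the `.npz` files with plain NumPy. The sketch below is illustrative only: the array names, the file extension of the feature grids, and the exact paths are assumptions here; the dataloaders in the repo above are the authoritative reference.

```python
# Quick sanity check (illustrative only): list what the extracted .npz files
# contain. Array names, file extensions, and paths are assumptions; see the
# dataloaders in the NeRF-MAE repo for the authoritative format.
import numpy as np
from pathlib import Path

root = Path("NeRF-MAE/datasets")  # adjust to wherever you extracted the archives

# The split file from the pretrain/ tree above.
split = np.load(root / "pretrain" / "nerfmae_split.npz", allow_pickle=True)
print("split arrays:", split.files)

# Peek at one feature grid (assumed to be stored as .npz; adjust if needed).
features_dir = root / "pretrain" / "features"
first = next(features_dir.glob("*.npz"), None)
if first is not None:
    grid = np.load(first, allow_pickle=True)
    for name in grid.files:
        print(name, grid[name].shape)
```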

Note: The above datasets are all you need to train and evaluate our method. Bonus: we will soon also release our multi-view rendered posed RGB images from FRONT3D, HM3D, and Hypersim, as well as our trained Instant-NGP checkpoints (these comprise over 1.6M images and 3,200+ NeRF checkpoints).

Please note that our dataset was generated following the instructions from [NeRF-RPN](https://github.com/lyclyc52/NeRF_RPN) and [3D-CLR](https://vis-www.cs.umass.edu/3d-clr/). Please consider citing our work, NeRF-RPN, and 3D-CLR if you find this dataset useful in your research.

Please also note that our dataset uses [Front3D](https://arxiv.org/abs/2011.09127), [Habitat-Matterport3D](https://arxiv.org/abs/2109.08238), [HyperSim](https://github.com/apple/ml-hypersim), and [ScanNet](https://www.scan-net.org/) as the base datasets, i.e., we train a NeRF per scene and extract radiance and density grids as well as aligned NeRF-grid 3D annotations. Please read the terms of use for each of these datasets if you want to utilize their posed multi-view images.