remove error dollar symbol in readme
- README.md +8 -2
- configs/metadata.json +3 -2
- docs/README.md +8 -2
README.md
CHANGED
@@ -38,7 +38,7 @@ Datasets used in this work were provided by [Activ Surgical](https://www.activsu
Since datasets are private, existing public datasets like [EndoVis 2017](https://endovissub2017-roboticinstrumentsegmentation.grand-challenge.org/Data/) can be used to train a similar model.

### Preprocessing
-When using EndoVis or any other dataset, it should be divided into "train", "valid" and "test" folders. Samples in each folder should preferably be images converted to jpg format. Otherwise, the "images", "labels", "val_images" and "val_labels" parameters in `configs/train.json` and the "datalist" in `configs/inference.json` should be modified to fit the given dataset. After that, the "dataset_dir" parameter in `configs/train.json` and `configs/inference.json` should be changed to the root folder which contains the $"train", "valid" and "test"$ folders.
+When using EndoVis or any other dataset, it should be divided into "train", "valid" and "test" folders. Samples in each folder should preferably be images converted to jpg format. Otherwise, the "images", "labels", "val_images" and "val_labels" parameters in `configs/train.json` and the "datalist" in `configs/inference.json` should be modified to fit the given dataset. After that, the "dataset_dir" parameter in `configs/train.json` and `configs/inference.json` should be changed to the root folder which contains the "train", "valid" and "test" folders.

Please note that the data loading operation in this bundle is adaptive. If images and labels are not in the same format, a mismatch can occur. For example, if images are in jpg format and labels are in npy format, the PIL and NumPy readers will be used to load the images and labels respectively. Since these two readers parse a file's shape differently, the loaded labels will be the transpose of the correct ones, causing a mismatch.

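As a quick illustration of the preprocessing and adaptive-loading notes in the hunk above, the following sketch assumes an EndoVis-style layout (a hypothetical `dataset_dir` containing "train", "valid" and "test" folders) with jpg images and npy labels; the file names are placeholders, not paths defined by this bundle. It checks that an image/label pair agrees in spatial shape before training:

```python
# Hedged sketch: sanity-check that a jpg image and an npy label (assumed to be a
# single-channel 2-D mask) describe the same spatial size. The paths are placeholders
# for an assumed layout like <dataset_dir>/{train,valid,test}/.
import numpy as np
from PIL import Image

image_path = "dataset_dir/train/frame_0001.jpg"  # hypothetical image file
label_path = "dataset_dir/train/frame_0001.npy"  # hypothetical label file

image = np.asarray(Image.open(image_path))  # PIL yields (H, W, 3) for an RGB frame
label = np.load(label_path)                 # npy labels may have been saved as (W, H)

# If the spatial axes are swapped, transpose the label to realign it with the image.
if label.shape[:2] != image.shape[:2] and label.shape[:2] == image.shape[:2][::-1]:
    label = label.T
assert label.shape[:2] == image.shape[:2], f"shape mismatch: {image.shape} vs {label.shape}"
```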
@@ -52,7 +52,7 @@ The training as performed with the following:

### Memory Consumption Warning

-If you face memory issues with CacheDataset, you can either switch to a regular Dataset class or lower the caching rate `cache_rate` in the configurations within the range $[0, 1]$ to reduce the system RAM requirements.
+If you face memory issues with CacheDataset, you can either switch to a regular Dataset class or lower the caching rate `cache_rate` in the configurations within the range [0, 1] to reduce the system RAM requirements.

### Input
A three channel video frame
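To make the memory warning above concrete, here is a minimal sketch of the trade-off, assuming a generic datalist and transform chain rather than the exact objects built by this bundle's `configs/train.json`:

```python
# Hedged sketch of the CacheDataset memory trade-off; `datalist` and `xform` are
# stand-ins for whatever the bundle's train config actually constructs.
from monai.data import CacheDataset, Dataset
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged

datalist = [{"image": "train/frame_0001.jpg", "label": "train/frame_0001.npy"}]  # placeholder
xform = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
])

# cache_rate lies in [0, 1]: 1.0 pre-caches every sample in RAM, 0.5 caches half, 0.0 none.
train_ds = CacheDataset(data=datalist, transform=xform, cache_rate=0.5)

# Or drop caching entirely by switching to the plain Dataset class.
train_ds = Dataset(data=datalist, transform=xform)
```

Lowering `cache_rate` trades startup time and RAM for per-iteration loading cost, while the plain `Dataset` removes the RAM overhead entirely.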
@@ -107,6 +107,12 @@ For more details usage instructions, visit the [MONAI Bundle Configuration Page]
python -m monai.bundle run --config_file configs/train.json
```

+Please note that if the default dataset path in the bundle config files has not been changed to the actual path, you can also override it with `--dataset_dir`:
+
+```
+python -m monai.bundle run --config_file configs/train.json --dataset_dir <actual dataset path>
+```
+
#### Override the `train` config to execute multi-GPU training:

```
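For reference, the same override can also be applied programmatically; this is a hedged sketch of calling the bundle through `monai.bundle.run`, where extra keyword arguments are expected to act as config overrides, and the dataset path is a placeholder:

```python
# Hedged sketch: programmatic counterpart of the CLI call shown in the diff above.
# Keyword arguments not consumed by run() itself are assumed to act as config
# overrides, so dataset_dir here is assumed to override the bundle's "dataset_dir"
# entry. The path is a placeholder, not one shipped with the bundle.
from monai.bundle import run

run(config_file="configs/train.json", dataset_dir="/path/to/actual/dataset")
```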
configs/metadata.json
CHANGED
@@ -1,7 +1,8 @@
{
    "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
-    "version": "0.5.2",
+    "version": "0.5.3",
    "changelog": {
+        "0.5.3": "remove error dollar symbol in readme",
        "0.5.2": "remove the CheckpointLoader from the train.json",
        "0.5.1": "add RAM warning",
        "0.5.0": "update TensorRT descriptions",
@@ -23,7 +24,7 @@
        "0.1.0": "complete the first version model package",
        "0.0.1": "initialize the model package structure"
    },
-    "monai_version": "1.2.
+    "monai_version": "1.2.0rc6",
    "pytorch_version": "1.13.1",
    "numpy_version": "1.22.2",
    "optional_packages_version": {
docs/README.md
CHANGED
@@ -31,7 +31,7 @@ Datasets used in this work were provided by [Activ Surgical](https://www.activsu
Since datasets are private, existing public datasets like [EndoVis 2017](https://endovissub2017-roboticinstrumentsegmentation.grand-challenge.org/Data/) can be used to train a similar model.

### Preprocessing
-When using EndoVis or any other dataset, it should be divided into "train", "valid" and "test" folders. Samples in each folder should preferably be images converted to jpg format. Otherwise, the "images", "labels", "val_images" and "val_labels" parameters in `configs/train.json` and the "datalist" in `configs/inference.json` should be modified to fit the given dataset. After that, the "dataset_dir" parameter in `configs/train.json` and `configs/inference.json` should be changed to the root folder which contains the $"train", "valid" and "test"$ folders.
+When using EndoVis or any other dataset, it should be divided into "train", "valid" and "test" folders. Samples in each folder should preferably be images converted to jpg format. Otherwise, the "images", "labels", "val_images" and "val_labels" parameters in `configs/train.json` and the "datalist" in `configs/inference.json` should be modified to fit the given dataset. After that, the "dataset_dir" parameter in `configs/train.json` and `configs/inference.json` should be changed to the root folder which contains the "train", "valid" and "test" folders.

Please note that the data loading operation in this bundle is adaptive. If images and labels are not in the same format, a mismatch can occur. For example, if images are in jpg format and labels are in npy format, the PIL and NumPy readers will be used to load the images and labels respectively. Since these two readers parse a file's shape differently, the loaded labels will be the transpose of the correct ones, causing a mismatch.

@@ -45,7 +45,7 @@ The training as performed with the following:

### Memory Consumption Warning

-If you face memory issues with CacheDataset, you can either switch to a regular Dataset class or lower the caching rate `cache_rate` in the configurations within the range $[0, 1]$ to reduce the system RAM requirements.
+If you face memory issues with CacheDataset, you can either switch to a regular Dataset class or lower the caching rate `cache_rate` in the configurations within the range [0, 1] to reduce the system RAM requirements.

### Input
A three channel video frame

@@ -100,6 +100,12 @@ For more details usage instructions, visit the [MONAI Bundle Configuration Page]
python -m monai.bundle run --config_file configs/train.json
```

+Please note that if the default dataset path in the bundle config files has not been changed to the actual path, you can also override it with `--dataset_dir`:
+
+```
+python -m monai.bundle run --config_file configs/train.json --dataset_dir <actual dataset path>
+```
+
#### Override the `train` config to execute multi-GPU training:

```