Datasets: hidehisa-arai committed
Commit 8c24bc5 · Parent(s): c1046f6
Upload 2 files
- README.md (+2 -3)
- image_downloader.py (+19 -5)
README.md
CHANGED
@@ -43,7 +43,6 @@ A data point has five fields as below.
 To access the images, you need to retrieve the images from the URLs listed in the `url` field. The image labels are in the `category` field.
 All the images in this dataset are licensed under CC-BY-2.0, CC-BY-NC-2.0, Public Domain Mark 1.0, or Public Domain Dedication, so you can collect and save them to your local environment to use them for evaluating your image classification model.
 However, please note that CC-BY-NC-2.0 prohibits commercial use. Also, please note that CC-BY-2.0, CC-BY-NC-2.0, and Public Domain Mark 1.0 prohibit sublicensing, so the collected image data cannot be published.
-
 ## Disclaimer
-
-
+Recruit Co., Ltd. makes no warranty as to the accuracy, usefulness, reliability, or legality of any results obtained through the use of this dataset, and accepts no liability for damages incurred by users or for disputes with third parties arising from use of the dataset.
+To use this dataset, you are required to download the images yourself. There may be cases where you are unable to download certain images due to broken links or other reasons.
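The access pattern described in the README can be sketched as follows. The column positions for the image id (column 0) and URL (column 3) follow image_downloader.py; the exact header names and the position of the `category` column are assumptions for illustration, and the CSV content here is a stand-in for the real file.

```python
import csv
import io

# Hypothetical CSV layout: id, category, license, url. The id and url
# positions match what image_downloader.py reads; the rest is assumed.
sample = io.StringIO(
    "id,category,license,url\n"
    "001,cat,CC-BY-2.0,https://example.com/photos/001.jpg\n"
    "002,dog,Public Domain Mark 1.0,https://example.com/photos/002.jpg\n"
)

reader = csv.reader(sample)
header = next(reader)          # skip the header row
url_col = header.index("url")  # locate columns by name rather than index
cat_col = header.index("category")

labels = {}
for row in reader:
    # Map each image id to its label; fetching row[url_col] with
    # urllib.request.urlretrieve would download the actual image bytes.
    labels[row[0]] = row[cat_col]

print(labels)
```

Looking up columns by header name, as above, is slightly more robust than the hard-coded `row[3]` used in the downloader script, should the column order ever change.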
|
image_downloader.py
CHANGED
@@ -9,8 +9,18 @@ wait_time = 1.0
 
 if __name__ == "__main__":
     parser = argparse.ArgumentParser("Image Downloader")
-    parser.add_argument(
-
+    parser.add_argument(
+        "--csv",
+        type=str,
+        required=True,
+        help="Path to CSV file of images to download"
+    )
+    parser.add_argument(
+        "--output",
+        type=str,
+        required=True,
+        help="Path to output directory"
+    )
     args = parser.parse_args()
 
     output_dir = Path(args.output)
@@ -18,11 +28,15 @@ if __name__ == "__main__":
 
     with open(args.csv, "r") as f:
         reader = csv.reader(f)
+        next(reader)
         for i, row in enumerate(reader):
             url = row[3]
             id_ = row[0]
+            img_format = url.split(".")[-1]
             try:
-                urlretrieve(url, output_dir / id_)
-
-
+                urlretrieve(url, output_dir / f"{id_}.{img_format}")
+                if (i + 1) % 100 == 0:
+                    print(f"Downloaded {i + 1} images")
+            except Exception as e:
+                print(f"{e} : Failed to download {url}")
             time.sleep(wait_time)
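The file-naming change added above derives the extension with `url.split(".")[-1]`, which assumes the URL ends in a bare extension (no query string or fragment). That logic can be exercised in isolation; `target_path` is a hypothetical helper written for this sketch, not a function in image_downloader.py.

```python
from pathlib import Path

def target_path(output_dir: str, id_: str, url: str) -> Path:
    # Mirror the naming scheme from image_downloader.py: the extension
    # is whatever follows the last "." in the URL.
    img_format = url.split(".")[-1]
    return Path(output_dir) / f"{id_}.{img_format}"

print(target_path("images", "001", "https://example.com/photos/001.jpg"))
```

A URL such as `https://example.com/001.jpg?size=large` would yield the extension `jpg?size=large` under this scheme, so URLs with query strings would need extra handling (e.g. `urllib.parse.urlsplit` to take the path component first).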