Commit acc4eb3 · Brett Renfer
Parent(s): 1c3f728
Updates to Readme about image data
README.md CHANGED

For more details on how to use images of artworks in The Metropolitan Museum of Art’s collection, please visit our [Open Access](http://www.metmuseum.org/about-the-met/policies-and-documents/image-resources) page.

---------------------

## Notes on HuggingFace-specific Data

* This dataset includes images in the ```url``` column, and additional data generated by [img2dataset](https://github.com/rom1504/img2dataset)
* We include all data, including rows that do *not* have images
* You can filter by "Is Public Domain" = True, or by whether "url" is blank (see the sketch after this list)
* These images are the ```primaryImageSmall``` field via our API, i.e., they are not full-res and have some compression
* See below and our [Collection API](https://metmuseum.github.io/) if you would like to recreate the data and include larger images (```primaryImage```) or additional views (```additionalImages```)
* This would require edits to ```add_images.py``` (a rough sketch of the relevant API fields also follows this list)
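
For example, filtering with the HuggingFace ```datasets``` client might look like the sketch below. The repo id ```metmuseum/openaccess``` is a placeholder (substitute the actual dataset id), and the blank-```url``` check assumes empty values arrive as ```None``` or ```""```:

```python
# A minimal sketch, not the canonical loader: "metmuseum/openaccess" is a
# placeholder repo id, and we assume "Is Public Domain" is a bool (or the
# string "True") and that blank "url" values are None or "".
from datasets import load_dataset

ds = load_dataset("metmuseum/openaccess", split="train")

# Keep only public-domain rows that actually have an image.
pd_with_images = ds.filter(
    lambda row: row["Is Public Domain"] in (True, "True") and bool(row["url"])
)
print(f"{len(ds)} rows total; {len(pd_with_images)} public-domain rows with images")
```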
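
As a rough illustration of where those larger images come from: each object record returned by the Collection API carries ```primaryImage```, ```primaryImageSmall```, and ```additionalImages``` fields. A hypothetical sketch of pulling them (the actual ```add_images.py``` may be structured quite differently):

```python
# Hypothetical sketch: fetch one object's image URLs from the Met Collection
# API (https://metmuseum.github.io/); the real add_images.py may differ.
import requests

OBJECT_URL = "https://collectionapi.metmuseum.org/public/collection/v1/objects/{}"

def image_urls(object_id: int) -> dict:
    """Return the image-related fields for a single object record."""
    record = requests.get(OBJECT_URL.format(object_id), timeout=30).json()
    return {
        "primaryImage": record.get("primaryImage"),            # full resolution
        "primaryImageSmall": record.get("primaryImageSmall"),  # used in this dataset
        "additionalImages": record.get("additionalImages"),    # alternate views
    }
```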

## Updating or recreating the CSV + images

Right now, this is a manual process. This will eventually be automated.

…

5. Install [img2dataset](https://github.com/rom1504/img2dataset)
   * ```pip install img2dataset```
6. Run ```img2dataset``` with the following options:
   * ```img2dataset --processes_count 10 --thread_count 64 --url_list "cleaned_metadata_images.csv.gz" --input_format "csv.gz" --output_format "parquet" --output_folder "data/train" --url_col "primaryImageSmall" --disable_all_reencoding "True" --max_shard_retry 10 --retries 10 --save_additional_columns "['Is Highlight', 'Is Timeline Work', 'Is Public Domain', 'Object ID', 'Gallery Number', 'Department', 'AccessionYear', 'Object Name', 'Title', 'Culture', 'Period', 'Dynasty', 'Reign', 'Portfolio', 'Constituent ID', 'Artist Role', 'Artist Prefix', 'Artist Display Name', 'Artist Display Bio', 'Artist Suffix', 'Artist Alpha Sort', 'Artist Nationality', 'Artist Begin Date', 'Artist End Date', 'Artist Gender', 'Artist ULAN URL', 'Artist Wikidata URL', 'Object Date', 'Object Begin Date', 'Object End Date', 'Medium', 'Dimensions', 'Credit Line', 'Geography Type', 'City', 'State', 'County', 'Country', 'Region', 'Subregion', 'Locale', 'Locus', 'Excavation', 'River', 'Classification', 'Rights and Reproduction', 'Link Resource', 'Object Wikidata URL', 'Metadata Date', 'Repository', 'Tags', 'Tags AAT URL', 'Tags Wikidata URL']"```
   * See img2dataset's docs for details on the above. You may want to remove the ```disable_all_reencoding``` option; as-is, it does not downsize or compress images at all
   * This will take some time
7. Voila! You should have a large data folder with many json and parquet files. You should be able to load this in the huggingface client library as a dataset; a minimal loading sketch follows.
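
For instance, loading the resulting shards with the HuggingFace client could look like this sketch, which assumes the ```--output_folder "data/train"``` from step 6:

```python
# Minimal sketch: read img2dataset's parquet shards as a HuggingFace dataset,
# assuming the --output_folder "data/train" used in step 6 above.
from datasets import load_dataset

ds = load_dataset("parquet", data_files="data/train/*.parquet", split="train")
print(ds.column_names)   # img2dataset fields plus the saved Met metadata columns
print(ds[0]["Title"])    # 'Title' was kept via --save_additional_columns
```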