Deduplication steps
Is my understanding correct that there's no way to download the de-duplicated / tokenized version of the dataset from HuggingFace, and that it's instead on the end user to download and de-dupe the dataset using either the instructions on the GitHub page or their own methodology? This would allow the end user to use the quality signals, dedupe links and minhash signatures to curate the dataset as they see fit.
Hi @ilyayudkovich , this is correct -- the HuggingFace loader downloads the raw samples (i.e., the pure text and quality signals). The user needs to subsample / dedupe the dataset as they see fit.
Would it be a useful feature for you to have a flag in the samples returned from the dataloader that indicates whether or not the document is a duplicate?
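In the meantime, manual filtering on top of the raw loader output would look roughly like the sketch below -- assuming you maintain your own set of duplicate doc_ids (for instance, derived from the published dedupe links); the file name here is just a placeholder:
# Minimal sketch of manual deduplication on the raw loader output.
# Assumes you have already curated a set of duplicate doc_ids yourself
# (e.g. from the published duplicate indices); the file name below is a placeholder.
from datasets import load_dataset

with open("my_duplicate_doc_ids.txt") as f:
    duplicate_ids = {line.strip() for line in f}

ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="default", snapshots=["2022-49"], languages=["en"], streaming=True)

kept = 0
for sample in ds["train"]:
    if sample["doc_id"] in duplicate_ids:
        continue  # skip documents you have marked as duplicates
    kept += 1  # ... or process sample["raw_content"] / sample["quality_signals"] here
print(kept)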
Hey @mauriceweber , I like that idea. Would the original sample indicate that it's a duplicate via that flag, so there's no need to keep track of hashes/doc_ids to confirm that a document is a duplicate?
Yes, exactly -- and it would be a global deduplication flag, so you get unique documents across all CommonCrawl snapshots.
Ah that'd be perfect if you could implement that. What do you think the timeline looks like for that?
Hello @mauriceweber , have you already implemented it? Thanks
Hi @joaomiguel26 -- sorry for the late response. I am working on it now and should have it ready early next week at the latest.
Thanks @mauriceweber !
@mauriceweber another question about de-duplication: do we know if documents are unique within each snapshot, or may there be duplicates within a snapshot as well?
The deduplication is global across all snapshots -- so if you're only interested in a single snapshot, more documents will be discarded than strictly necessary (but you get unique ones).
awesome thanks for the clarification @mauriceweber , looking forward to utilizing the global dedupe
@ilyayudkovich I implemented the dedupe flag now in the deduplicate_flag branch as an additional field is_duplicate in the quality_signals dict. You can test it using
from datasets import load_dataset
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="default", snapshots=["2022-49"], languages=["en"], streaming=True, revision="2cae54f422505166bf6529ab2d2835fe2e17231")
for sample in ds["train"]:
    print(sample)
    break
which should return:
{'raw_content': "Forum Start ...", 'doc_id': '2022-49/0000/en_head.json.gz/0', ...'quality_signals': '{ ... , "is_duplicate": true}'}
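For example, to keep only unique documents you can parse the quality_signals JSON and skip anything flagged as a duplicate -- a rough sketch, assuming the branch name can be passed as the revision and that quality_signals stays a JSON-encoded string as shown above:
import json
from datasets import load_dataset

# Sketch: drop documents flagged as duplicates.
# Assumes the deduplicate_flag branch can be selected via the revision argument
# and that quality_signals is a JSON-encoded string, as in the sample output above.
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="default", snapshots=["2022-49"], languages=["en"], streaming=True, revision="deduplicate_flag")

for sample in ds["train"]:
    signals = json.loads(sample["quality_signals"])
    if signals.get("is_duplicate"):
        continue  # duplicate across all snapshots -- skip it
    print(sample["doc_id"])  # first unique document
    break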
this is incredible, thank you Maurice!
Hey @mauriceweber ,
I'm trying to run the sample code above and am getting an issue using that revision.
The way that I've got it set up to pull the data:
import datasets

datasets.set_caching_enabled(False)
ds = datasets.load_dataset("togethercomputer/RedPajama-Data-V2", name="default", snapshots=["2022-49"], languages=["en"], streaming=True, revision="2cae54f422505166bf6529ab2d2835fe2e1723b1")
The error that I'm getting is:
Using the latest cached version of the module from /Users/XXX/.cache/huggingface/modules/datasets_modules/datasets/togethercomputer--RedPajama-Data-V2/4db97fc3d3261d52f9b87fcab64d3a258ca823adc36138396a5b5c7da789550e (last modified on Mon Nov 13 17:45:31 2023) since it couldn't be found locally at togethercomputer/RedPajama-Data-V2., or remotely on the Hugging Face Hub.
Not sure if I need to do anything on my end, maybe update the datasets package or something else. Let me know!
Oh, this is a typo on my end in the revision tag -- I fixed it, so the above code should run now!
Texas-sized 10-4, I can see the new flag there!
I'm not sure about the process, but will I always need to provide the revision when downloading the dataset, or does it at some point get merged into the "main" branch of the dataset?
To confirm, the is_duplicate quality signal will also be there when downloading the dataset rather than streaming it?
I'm not sure about the process, but will I always need to provide the revision when downloading the dataset, or does it at some point get merged into the "main" branch of the dataset?
I merged it into main now, so you can drop the revision parameter in the future.
To confirm, the is_duplicate quality signal will also be there when downloading the dataset rather than streaming it?
Yes -- I made the streaming example just because it's a convenient way to get a quick glimpse at what gets loaded.
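For completeness, the same check works after a regular (non-streaming) download -- a rough sketch, assuming the flag is on main as mentioned above and quality_signals remains a JSON-encoded string:
import json
from datasets import load_dataset

# Sketch: filter out duplicates after a full (non-streaming) download.
# Assumes the is_duplicate flag is available on main, as discussed above.
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="default", snapshots=["2022-49"], languages=["en"])

unique = ds["train"].filter(
    lambda sample: not json.loads(sample["quality_signals"]).get("is_duplicate", False)
)
print(len(unique))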
Hello @mauriceweber , why was this change reverted?
Hi @joaomiguel26 , there are some duplicate files which do not exist (because there are no duplicates in the respective shard; see #18 for context), and this leads to an error during download which is not handled by the download manager. I am currently working on a more stable implementation.
Note that you can still stream the dataset from the deduplicate_flag branch -- with streaming enabled, these errors get handled.
Do you have an idea of when the more stable implementation would be ready?
@joaomiguel26 sorry for the delay here -- I just updated the dataloader script with a more stable version. It essentially skips everything in a blacklist of files which don't exist so that the dl_manager doesn't throw an error.
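The pattern is roughly the one sketched below (the names here are hypothetical placeholders, not the actual dataloader internals):
# Rough sketch of the skip-missing-files pattern described above.
# MISSING_DUPLICATE_FILES and its entries are hypothetical placeholders,
# not the actual blacklist shipped with the dataloader.
MISSING_DUPLICATE_FILES = {
    "urls/missing-duplicates-example.txt",  # placeholder entry
}

def files_to_download(candidate_files):
    """Drop files known not to exist so the download manager never requests them."""
    return [f for f in candidate_files if f not in MISSING_DUPLICATE_FILES]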
Thanks @mauriceweber !