---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: content
    dtype: string
  - name: language
    dtype: string
  - name: pii
    list:
    - name: context
      dtype: string
    - name: end
      dtype: int64
    - name: start
      dtype: int64
    - name: tag
      dtype: string
    - name: value
      dtype: string
  - name: assignment_id
    dtype: string
  splits:
  - name: train
    num_bytes: 17215712
    num_examples: 7878
  - name: validation
    num_bytes: 7302111
    num_examples: 4000
  download_size: 10754489
  dataset_size: 24517823
extra_gated_prompt: |-
  ## Terms of Use for the dataset

  This is an annotated dataset for Personal Identifiable Information (PII) in code. We ask that you read and agree to the following Terms of Use before using the dataset and fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSfiWKyBB8-PxOCLo-KMsLlYNyQNJEzxJw0gcUAUHT3UY848qA/viewform):
  1. You agree that you will not use the PII dataset for any purpose other than training or evaluating models for PII removal from datasets.
  2. You agree that you will not share the PII dataset or any modified versions of it for whatever purpose.
  3. Unless required by applicable law or agreed to in writing, the dataset is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using the dataset, and assume any risks associated with your exercise of permissions under these Terms of Use.
  4. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET OR THE USE OR OTHER DEALINGS IN THE DATASET.
extra_gated_fields:
  Email: text
  I have read the License and agree with its terms: checkbox
---

# Bigcode PII Training Dataset

## Dataset Description

This is the dataset used to train the [bigcode-pii-model](https://huggingface.co./bigcode/bigcode-pii-model) (after training on pseudo-labeled data). It is a concatenation of an early version of [bigcode-pii-dataset](https://huggingface.co./datasets/bigcode/bigcode-pii-dataset), which had fewer samples, and [pii-for-code](https://huggingface.co./datasets/bigcode/pii-for-code-v2) (a dataset with 400 files we annotated in a previous iteration: MORE INFO TO BE ADDED). Files annotated with `AMBIGUOUS` and `ID` were excluded, and each PII subtype was remapped to its supertype.

## Statistics

The dataset consists of **11878** files in 31 programming languages. More statistics and information about the original annotated dataset can be found in the dataset card of [bigcode-pii-dataset](https://huggingface.co./datasets/bigcode/bigcode-pii-dataset).

We provide the train and validation splits used for the training and evaluation of the [bigcode-pii-model](https://huggingface.co./bigcode/bigcode-pii-model). Below is the distribution of PII entities in each split.
| Entity type | Train | Validation |
|-------------|-------|------------|
| EMAIL       | 4721  | 1742       |
| NAME        | 3847  | 1298       |
| IP_ADDRESS  | 1941  | 521        |
| USERNAME    | 1320  | 346        |
| PASSWORD    | 390   | 148        |
| KEY         | 171   | 118        |

# How to use

```python
from datasets import load_dataset

ds = load_dataset("bigcode/bigcode-pii-dataset-training")
```
```
DatasetDict({
    train: Dataset({
        features: ['id', 'content', 'language', 'pii', 'assignment_id'],
        num_rows: 7878
    })
    validation: Dataset({
        features: ['id', 'content', 'language', 'pii', 'assignment_id'],
        num_rows: 4000
    })
})
```

# Considerations for Using the Data

When using this dataset, please be mindful of the data governance risks that come with handling personally identifiable information (PII). Despite sourcing the data from open, permissive GitHub repositories and having it annotated by fairly paid crowd-workers, it does contain sensitive details such as names, usernames, keys, emails, passwords, and IP addresses. To ensure responsible use for research within the open-source community, access to the dataset will be provided through a gated mechanism.

We expect researchers and developers working with the dataset to adhere to the highest ethical standards and employ robust data protection measures. To assist users in effectively detecting and masking PII, we've also released a PII model trained on this dataset. Our goal in providing access to both the dataset and the PII model is to foster the development of privacy-preserving AI technologies while minimizing potential risks related to handling PII.
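The `pii` annotations store character offsets into `content` (`start`, `end`) together with an entity `tag` and the matched `value`. As a minimal sketch of how these spans might be consumed, the snippet below replaces every annotated span with a `<TAG>` placeholder. The placeholder format and the `redact` helper are illustrative choices only, not the preprocessing used to train the bigcode-pii-model.

```python
from datasets import load_dataset

ds = load_dataset("bigcode/bigcode-pii-dataset-training")

def redact(example):
    """Replace each annotated PII span in `content` with its entity tag.

    Assumes `pii` loads as a list of dicts with `start`, `end`, and `tag`
    keys, matching the schema above. The `<TAG>` placeholder is only an
    illustration, not an official masking scheme.
    """
    content = example["content"]
    # Process spans from the end of the file backwards so that earlier
    # character offsets stay valid after each replacement.
    for span in sorted(example["pii"], key=lambda s: s["start"], reverse=True):
        content = content[: span["start"]] + f"<{span['tag']}>" + content[span["end"] :]
    return {"content": content}

# Redact both splits; the offsets in `pii` still refer to the unredacted text.
redacted = ds.map(redact)
```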