claudios committed
Commit a4ee805 · verified · 1 parent: 5e68fbe

Update README.md

Files changed (1)
  1. README.md +221 -0
README.md CHANGED
@@ -1,4 +1,28 @@
---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
language:
- code
license:
- other
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: codesearchnet
pretty_name: CodeSearchNet
dataset_info:
- config_name: all
  features:
@@ -267,4 +291,201 @@ configs:
      path: ruby/test-*
    - split: validation
      path: ruby/validation-*
config_names:
- all
- go
- java
- javascript
- php
- python
- ruby
---

# CodeSearchNet

This is an *unofficial* re-upload of the [code_search_net](https://huggingface.co/datasets/code_search_net) dataset in `parquet` format. I have also removed the columns `func_code_tokens`, `func_documentation_tokens`, and `split_name`, as they are not relevant. The original repository relies on a Python module that is downloaded and executed to unpack the dataset, which is a potential security risk and, just as annoyingly, triggers a warning on every load. As a bonus, Parquet files load faster.
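
As a quick sanity check, loading one of the Parquet configs with the `datasets` library should look roughly like the sketch below (the repository id is an assumption based on this upload; substitute the actual one):

```python
from datasets import load_dataset

# NOTE: the repository id below is an assumption; substitute the id of
# this re-upload. Config names: all, go, java, javascript, php, python, ruby.
ds = load_dataset("claudios/code_search_net", "python", split="train")

print(ds[0]["func_name"])
print(ds[0]["func_documentation_string"])
```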

Original model card:

---

# Dataset Card for CodeSearchNet corpus

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **Homepage:** https://wandb.ai/github/CodeSearchNet/benchmark
- **Repository:** https://github.com/github/CodeSearchNet
- **Paper:** https://arxiv.org/abs/1909.09436
- **Data:** https://doi.org/10.5281/zenodo.7908468
- **Leaderboard:** https://wandb.ai/github/CodeSearchNet/benchmark/leaderboard

### Dataset Summary

The CodeSearchNet corpus is a dataset of 2 million (comment, code) pairs from open-source libraries hosted on GitHub. It contains code and documentation for several programming languages.

The CodeSearchNet corpus was gathered to support the [CodeSearchNet challenge](https://wandb.ai/github/CodeSearchNet/benchmark), which explores the problem of retrieving code using natural language.

### Supported Tasks and Leaderboards

- `language-modeling`: The dataset can be used to train language models over source code.

### Languages

- Go **programming** language
- Java **programming** language
- JavaScript **programming** language
- PHP **programming** language
- Python **programming** language
- Ruby **programming** language

## Dataset Structure

### Data Instances

A data point consists of a function's code along with its documentation. Each data point also contains metadata on the function, such as the repository it was extracted from.
```
{
  'id': '0',
  'repository_name': 'organisation/repository',
  'func_path_in_repository': 'src/path/to/file.py',
  'func_name': 'func',
  'whole_func_string': 'def func(args):\n"""Docstring"""\n [...]',
  'language': 'python',
  'func_code_string': '[...]',
  'func_code_tokens': ['def', 'func', '(', 'args', ')', ...],
  'func_documentation_string': 'Docstring',
  'func_documentation_string_tokens': ['Docstring'],
  'split_name': 'train',
  'func_code_url': 'https://github.com/<org>/<repo>/blob/<hash>/src/path/to/file.py#L111-L150'
}
```

### Data Fields

- `id`: Arbitrary number
- `repository_name`: Name of the GitHub repository
- `func_path_in_repository`: Path to the file which holds the function in the repository
- `func_name`: Name of the function in the file
- `whole_func_string`: Code + documentation of the function
- `language`: Programming language in which the function is written
- `func_code_string`: Function code
- `func_code_tokens`: Tokens yielded by Tree-sitter
- `func_documentation_string`: Function documentation
- `func_documentation_string_tokens`: Tokens yielded by Tree-sitter
- `split_name`: Name of the split to which the example belongs (one of train, test or valid)
- `func_code_url`: URL to the function code on GitHub
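
Since this re-upload drops the token and split columns, a quick way to check which of these fields are actually present is a sketch like the following (repository id assumed as before):

```python
from datasets import load_dataset

# NOTE: the repository id is an assumption; substitute the actual one.
ds = load_dataset("claudios/code_search_net", "python", split="test")
print(ds.column_names)         # the removed token/split columns should be absent
print(ds[0]["func_code_url"])  # link back to the function on GitHub
```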

### Data Splits

Three splits are available:
- train
- test
- valid
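
Note that in the Parquet layout described by the YAML header above, the validation split is stored as `validation` (e.g. `ruby/validation-*`), even though the original card calls it `valid`. A sketch of iterating over the splits (repository id again assumed):

```python
from datasets import load_dataset

# NOTE: the repository id is an assumption; substitute the actual one.
for split in ("train", "validation", "test"):
    ds = load_dataset("claudios/code_search_net", "ruby", split=split)
    print(f"{split}: {len(ds)} examples")
```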

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Full details are given in the [original paper](https://arxiv.org/pdf/1909.09436.pdf).

**Corpus collection**:

The corpus was collected from publicly available, open-source, non-fork GitHub repositories, using libraries.io to identify all projects that are used by at least one other project, and sorting them by "popularity" as indicated by the number of stars and forks.

Then, any project that does not have a license, or whose license does not explicitly permit the redistribution of parts of the project, was removed. Tree-sitter, GitHub's universal parser, was then used to tokenize all Go, Java, JavaScript, Python, PHP and Ruby functions (or methods), and, where available, their respective documentation text was extracted using a heuristic regular expression.
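
The paper does not give the exact expression, but a naive heuristic of this kind for Python might look like the sketch below (illustrative only; the real pipeline is Tree-sitter-based and language-aware):

```python
import re

# Illustrative heuristic only (an assumption, not the CodeSearchNet code):
# grab the first triple-quoted string in a Python function body and treat
# it as the documentation.
DOCSTRING_RE = re.compile(r'(?:"""|\'\'\')(.*?)(?:"""|\'\'\')', re.DOTALL)

def extract_docstring(func_source):
    match = DOCSTRING_RE.search(func_source)
    return match.group(1).strip() if match else None

print(extract_docstring('def f():\n    """Docstring"""\n    pass'))  # -> Docstring
```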

**Corpus filtering**:

Functions without documentation are removed from the corpus. This yields a set of pairs ($c_i$, $d_i$) where $c_i$ is some function documented by $d_i$. The pairs ($c_i$, $d_i$) are passed through the following preprocessing steps (a sketch of the length- and name-based filters follows this list):

- Documentation $d_i$ is truncated to the first full paragraph, to remove in-depth discussion of function arguments and return values
- Pairs in which $d_i$ is shorter than three tokens are removed
- Functions $c_i$ whose implementation is shorter than three lines are removed
- Functions whose name contains the substring "test" are removed
- Constructors and standard extension methods (e.g. `__str__` in Python or `toString` in Java) are removed
- Duplicate and near-duplicate functions are removed, keeping only one version of each function
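
As referenced above, a minimal sketch of the length- and name-based filters (the truncation, constructor, and deduplication steps are omitted):

```python
def keep_pair(code, doc_tokens, func_name):
    """Illustrative re-implementation of the rules above, not the
    official CodeSearchNet filtering code."""
    if len(doc_tokens) < 3:          # documentation shorter than three tokens
        return False
    if len(code.splitlines()) < 3:   # implementation shorter than three lines
        return False
    if "test" in func_name.lower():  # name contains the substring "test"
        return False
    return True
```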

#### Who are the source language producers?

Open-source contributors produced the code and documentation.

The dataset was gathered and preprocessed automatically.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Each example in the dataset is extracted from a GitHub repository, and each repository has its own license. Example-wise license information is not (yet) included in this dataset: you will need to find out for yourself which license the code is under.

### Citation Information

```
@article{husain2019codesearchnet,
  title={{CodeSearchNet} challenge: Evaluating the state of semantic code search},
  author={Husain, Hamel and Wu, Ho-Hsiang and Gazit, Tiferet and Allamanis, Miltiadis and Brockschmidt, Marc},
  journal={arXiv preprint arXiv:1909.09436},
  year={2019}
}
```

### Contributions

Thanks to [@SBrandeis](https://github.com/SBrandeis) for adding this dataset.