---
language:
- multilingual
tags:
- code
- sentence embedding
license: mit
datasets:
- CodeSearchNet
pipeline_tag: feature-extraction
---

# Model Card for CodeCSE
CodeCSE is a simple pre-trained model for code and comment sentence embeddings, trained with contrastive learning on the [CodeSearchNet](https://huggingface.co/datasets/code_search_net) dataset.

Please [**clone the CodeCSE repository**](https://github.com/emu-se/CodeCSE) to get `GraphCodeBERTForCL` and the other dependencies needed to use this pretrained model.

Detailed instructions are listed in the repository's README.md. Overall, you will need:

1. GraphCodeBERT (CodeCSE uses GraphCodeBERT's input format for code)
2. GraphCodeBERTForCL defined in [codecse/codecse](https://github.com/emu-se/CodeCSE/tree/main/codecse/codecse)

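Putting these together, a minimal setup sketch. The hub ID and import path below are assumptions, not verified names, and the sketch assumes `GraphCodeBERTForCL` extends a Hugging Face `PreTrainedModel` so that `from_pretrained` works; the repository's README.md and `inference.py` are the authoritative reference. The `args` object used in the snippets below comes from the repository's argument parser.

```python
import torch
from transformers import AutoTokenizer

# GraphCodeBERTForCL is defined in codecse/codecse in the cloned repository;
# make sure that directory is on PYTHONPATH. The import path is an assumption.
from codecse.codecse import GraphCodeBERTForCL

MODEL_ID = "emu-se/CodeCSE"  # placeholder; use this model card's actual hub ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# Assumes GraphCodeBERTForCL can load this card's weights directly.
model = GraphCodeBERTForCL.from_pretrained(MODEL_ID)
model.eval()
```
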
## Inference example
NL input example (`example_nl.json`):
```json
{
    "original_string": "", 
    "docstring_tokens": ["Save", "model", "to", "a", "pickle", "located", "at", "path"], 
    "url": "https://github.com/openai/baselines/blob/3301089b48c42b87b396e246ea3f56fa4bfc9678/baselines/deepq/deepq.py#L55-L72"
}
```

Code snippet to get the embedding of an NL document ([link to complete code](https://github.com/emu-se/CodeCSE/blob/a04a025c7048204bdfd908fe259fafc55e2df169/inference.py#L105)):
```python
nl_json = load_example("example_nl.json")         # load the JSON document above
batch = prepare_inputs(nl_json, tokenizer, args)  # tokenize into GraphCodeBERT's input format
nl_inputs = batch[3]                              # batch[3] holds the NL input ids
with torch.no_grad():
    # index 1 of the model output is the sentence embedding
    nl_vec = model(input_ids=nl_inputs, sent_emb="nl")[1]
```
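
The code side is symmetric. Below is a hedged sketch: the file name `example_code.json`, the batch layout, and `sent_emb="code"` are assumptions by analogy with the NL snippet above, not confirmed API; see the repository's `inference.py` for the exact calls.

```python
import torch
import torch.nn.functional as F

# Hypothetical code-side document; the exact fields prepare_inputs expects
# for code are defined in the repository (GraphCodeBERT's input format).
code_json = load_example("example_code.json")       # assumed file name
batch = prepare_inputs(code_json, tokenizer, args)
code_inputs = batch[0]                              # assumed batch layout

with torch.no_grad():
    # sent_emb="code" is assumed by symmetry with sent_emb="nl" above
    code_vec = model(input_ids=code_inputs, sent_emb="code")[1]

# Cosine similarity between the comment and code embeddings; a higher score
# means the NL description matches the code better.
score = F.cosine_similarity(nl_vec, code_vec)
```

With both embeddings in the same space, a simple code search is nearest-neighbor ranking: embed the query once and rank candidate functions by this score.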