---
license: mit
datasets:
- codeparrot/codeparrot-clean
tags:
- text-generation
- code-generation
- gpt2-large
widget:
- text: >-
    def add(a,b):
  example_title: Example 1
- text: |-
    def get_file_size(filename):
        """
        Return the size of a file.
        """
  example_title: Example 2
inference:
  parameters:
    max_new_tokens: 10
    num_return_sequences: 1
    do_sample: false
---

# Code Generation using GPT2-Large
This is a GPT2-large model fine-tuned on the CodeParrot clean dataset with a custom metric focused on code generation. <br>
The tokenizer, initialized from GPT2-large, was also further trained on the same dataset so that its vocabulary better aligns with code.

## Model description
This model has the same architecture and parameters as the GPT2-large model. Please refer to this [link](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for more details about the model.
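As a rough sanity check on the model's scale, the GPT2-large parameter count can be approximated from its published configuration (36 layers, hidden size 1280, 50257-token vocabulary, 1024-token context). The sketch below is an estimate only; it ignores biases and layer-norm weights, so it slightly undercounts:

```python
# Approximate GPT2-large parameter count from its published configuration.
# Biases and layer-norm weights are ignored, so this slightly undercounts
# the official ~774M figure.
n_layer, n_embd, n_vocab, n_ctx = 36, 1280, 50257, 1024

token_emb = n_vocab * n_embd   # token embedding matrix (tied with the LM head)
pos_emb = n_ctx * n_embd       # learned positional embeddings
per_layer = 12 * n_embd ** 2   # 4*d^2 attention weights + 8*d^2 MLP weights
total = token_emb + pos_emb + n_layer * per_layer

print(f"~{total / 1e6:.0f}M parameters")  # → ~773M parameters
```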

## Intended Use & Limitations
This model is intended to generate the code of a function from a short description of its required behavior.<br>

**Note:** The model is trained primarily with a code-generation objective, so it is not intended for general-purpose text generation.

## Usage

You can use this model directly to generate code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load Code Generator LLM and tokenizer from checkpoint
tokenizer = AutoTokenizer.from_pretrained("DeathReaper0965/gpt2_large_code_generator")
model = AutoModelForCausalLM.from_pretrained("DeathReaper0965/gpt2_large_code_generator")
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

inputs = tokenizer("def hello_world():", return_tensors="pt").to("cuda" if torch.cuda.is_available() else "cpu")

outputs = model.generate(**inputs,
                         max_new_tokens=30,
                         num_return_sequences=1)

print(tokenizer.batch_decode(outputs)[0])

###########OUTPUT###########
def hello_world():
    return "Hello World!"

@app.route("/hello_world")
def hello_world():
    return "Hello World!"
```
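Because generation continues until `max_new_tokens` is exhausted, the output often runs past the first complete function, as the `@app.route` continuation above shows. One simple post-processing option is to truncate at the first blank line after the prompt, keeping only the first completed definition. The `truncate_first_function` helper below is an illustrative sketch, not part of the model's API:

```python
def truncate_first_function(generated: str) -> str:
    """Keep only the first function definition from a generated completion.

    Splits on the first blank line and returns everything before it, which
    works for simple single-function completions like the example above.
    """
    return generated.split("\n\n", 1)[0]

sample = ('def hello_world():\n'
          '    return "Hello World!"\n'
          '\n'
          '@app.route("/hello_world")\n'
          'def hello_world():\n'
          '    return "Hello World!"')

print(truncate_first_function(sample))
# → def hello_world():
#       return "Hello World!"
```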

> Designed and Developed with <span style="color: #e25555;">&hearts;</span> by [Praneet](https://deathreaper0965.github.io/) | [LinkedIn](http://linkedin.com/in/deathreaper0965) | [GitHub](https://github.com/DeathReaper0965/)