[InCoder](https://huggingface.co./facebook/incoder-6B) uses a decoder-only Transformer with a Causal Masking objective to train a left-to-right language model that can also fill in masked token segments.
|Model | # parameters |
| - | - |
| Decoder |1.3B |
| Decoder |6.7B |
The [Causal Masking objective](https://arxiv.org/abs/2201.07520) is a hybrid of causal and masked language modeling: "it combines the benefit of per-token generation with optional bi-directionality specifically tailored to prompting".
During training, spans of code are randomly masked and moved to the end of each file, which gives the model access to bidirectional context while it still generates left to right. Figure 1 of the InCoder [paper](https://arxiv.org/pdf/2204.05999.pdf) illustrates the training process.
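To make the objective concrete, below is a toy, whitespace-tokenized sketch of how one training sequence could be assembled. It is an illustration under assumptions, not the authors' preprocessing code; the sentinel strings mirror the ones used by the released tokenizer and should be checked against it.

```python
import random

# Sentinel strings modeled on InCoder's special tokens (assumed names;
# verify against the released tokenizer before relying on them).
SENTINEL = "<|mask:0|>"
EOM = "<|endofmask|>"

def causal_mask_example(tokens):
    """Build one toy training sequence: cut a random span out of the
    document, leave a sentinel in its place, and append the span
    (re-introduced by the same sentinel) at the end, so a left-to-right
    model predicts the span with both left and right context visible."""
    lo = random.randrange(len(tokens))
    hi = random.randrange(lo + 1, len(tokens) + 1)
    span = tokens[lo:hi]
    remainder = tokens[:lo] + [SENTINEL] + tokens[hi:]
    return " ".join(remainder + [SENTINEL] + span + [EOM])

print(causal_mask_example("def add ( a , b ) : return a + b".split()))
```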
In addition to program synthesis (via left-to-right generation), InCoder can therefore perform editing (via infilling). The model gives promising zero-shot results on code infilling tasks such as type prediction, variable renaming, and comment generation.
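At inference time, infilling can reuse the same sentinel layout: the region to edit is replaced by a mask token, and the model is prompted to generate the missing span after a second copy of that token, stopping at the end-of-mask token. A rough sketch of the prompt construction (the exact token strings are an assumption to be checked against the checkpoint's tokenizer):

```python
MASK = "<|mask:0|>"  # assumed sentinel name, as in the sketch above

prefix = 'def count_words(path):\n    """'
suffix = '"""\n    return len(open(path).read().split())\n'

# Document with a hole where the docstring should go, followed by the same
# sentinel to cue generation of the hole's contents (stop at <|endofmask|>).
infill_prompt = prefix + MASK + suffix + MASK
```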
In the code generation demo we use InCoder 1.3B.
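For reference, a minimal sketch of loading that checkpoint with the `transformers` library and generating left to right; the sampling hyperparameters are illustrative rather than the demo's exact settings, and `facebook/incoder-1B` is the public 1.3B-parameter release.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "facebook/incoder-1B"  # public 1.3B-parameter InCoder release
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "def print_hello_world():"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=48,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```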