---
language:
- en
datasets:
- English
tags:
- text generation
- pytorch
- causal-lm
license: cc-by-4.0
pipeline_tag: text-generation
library_name: transformers
---

# Palmyra-small

<style>
img {
  display: inline;
}
</style>

|[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-126M-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets)|

## Model Description

Palmyra-small was pretrained primarily on English text; a trace amount of non-English data from CommonCrawl remains in the training corpus. The model was pretrained with a causal language modeling (CLM) objective. Like GPT-3, Palmyra belongs to the family of decoder-only transformers and was therefore pretrained in a self-supervised fashion to predict the next token. Palmyra is evaluated using the prompts and general experimental setup of GPT-3; see the official GPT-3 paper for more information.

The model consists of 28 layers with a model dimension of 4096 and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
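The rotary embedding described above can be illustrated with a small self-contained sketch. This is an illustrative toy, not the model's actual implementation; the pairwise rotation convention and the base of 10000 are the common RoPE defaults, assumed here:

```python
import math

def rope_rotate(vec, position, rotary_dims=64, base=10000.0):
    """Apply Rotary Position Embedding to the first `rotary_dims`
    dimensions of a single attention-head vector; the remaining
    dimensions pass through unchanged."""
    out = list(vec)
    for i in range(0, rotary_dims, 2):
        # Each consecutive pair (x, y) is rotated by a position- and
        # frequency-dependent angle.
        theta = position * base ** (-i / rotary_dims)
        x, y = vec[i], vec[i + 1]
        out[i] = x * math.cos(theta) - y * math.sin(theta)
        out[i + 1] = x * math.sin(theta) + y * math.cos(theta)
    return out

head = [1.0] * 256                 # one 256-dim head vector
rotated = rope_rotate(head, position=3)
assert rope_rotate(head, position=0) == head   # position 0 is the identity
assert rotated[64:] == head[64:]               # dims past the rotary window are untouched
```

Because only 64 of the 256 dimensions per head are rotated, the remaining dimensions carry position-independent content, which is the design choice noted above.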

## Training data

Palmyra-small 128M was trained on

## Intended Use and Limitations

Palmyra-small learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for: generating text from a prompt.
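As a sketch of the feature-extraction use mentioned above, the hidden states can be pooled into a fixed-size sentence vector. The checkpoint id `Writer/palmyra-small` is assumed here; substitute the id of this model card's repository:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed checkpoint id; replace with the actual repository id.
model_id = "Writer/palmyra-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Palmyra learns representations of English text.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Mean-pool the final hidden layer into one sentence-level feature vector.
sentence_vec = outputs.hidden_states[-1].mean(dim=1)
print(sentence_vec.shape)  # torch.Size([1, hidden_size])
```

Mean-pooling the last layer is only one of several common pooling choices; using the final token's hidden state is another.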

### How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-small")
model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-small")
```
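Once loaded, the model can generate text from a prompt. A minimal sketch follows; the sampling parameters are illustrative, and the checkpoint id `Writer/palmyra-small` is assumed:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-small")
model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-small")

prompt = "The meaning of life is"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

`generate` returns the prompt tokens followed by the sampled continuation; pass `do_sample=False` for deterministic greedy decoding.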

### Limitations and Biases

The core functionality of Palmyra-small is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns in this work. When prompting Palmyra-small, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon Palmyra-small to produce factually accurate output.

Palmyra-small was trained on web-scraped data, which is known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, the model may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a detailed analysis of the biases found in a comparable web-scraped corpus.

As with all language models, it is hard to predict in advance how Palmyra-small will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
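As a starting point for such filtering, here is a minimal sketch of an automated pre-filter that flags generations for human review. The blocklist is a tiny illustrative placeholder; production systems typically pair keyword lists with learned toxicity classifiers:

```python
def needs_review(text: str, blocklist=("badword", "slur")) -> bool:
    """Return True if the generated text contains a blocklisted term
    and should be routed to a human reviewer before release."""
    lowered = text.lower()
    return any(term in lowered for term in blocklist)

generations = ["A perfectly fine sentence.", "This contains a BADWORD here."]
# Release only the generations that pass the pre-filter.
released = [g for g in generations if not needs_review(g)]
print(released)  # ['A perfectly fine sentence.']
```

Keyword matching is cheap but coarse: it misses paraphrased toxicity and flags benign substrings, which is why a human reviewer remains in the loop.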

## Evaluation results

<figure>

| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|-------|--------|----------------|---------------|---------------|--------------|-------------|--------|-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |

<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>

<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released weights or with API access. Due to subtle implementation differences as well as different zero-shot task framing, these might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more details.</p>

<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a> <a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>). Thus, evaluation was not attempted.</p>

<p><strong>‡</strong> These models have been trained with data which contains possible test-set contamination. The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as GPT-J, are trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>

## Citation and Related Information

### BibTeX entry

To cite this model:

```bibtex
@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

To cite the codebase that trained this model:

```bibtex
@misc{mesh-transformer-jax,
  author = {Wang, Ben},
  title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```