CPMAnt
Overview
CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. See more
Tips:
This model was contributed by OpenBMB. The original code can be found here.
⚙️ Training & Inference
- A tutorial on CPM-Live.
CpmAntConfig
class transformers.CpmAntConfig
< source >( vocab_size: int = 30720 hidden_size: int = 4096 num_attention_heads: int = 32 dim_head: int = 128 dim_ff: int = 10240 num_hidden_layers: int = 48 dropout_p: int = 0.0 position_bias_num_buckets: int = 512 position_bias_max_distance: int = 2048 eps: int = 1e-06 init_std: float = 1.0 prompt_types: int = 32 prompt_length: int = 32 segment_types: int = 32 use_cache: bool = True return_dict: bool = True **kwargs )
Parameters
- vocab_size (int, optional, defaults to 30720) — Vocabulary size of the CPMAnt model. Defines the number of different tokens that can be represented by the input passed when calling CpmAntModel.
- hidden_size (int, optional, defaults to 4096) — Dimension of the encoder layers.
- num_attention_heads (int, optional, defaults to 32) — Number of attention heads in the Transformer encoder.
- dim_head (int, optional, defaults to 128) — Dimension of attention heads for each attention layer in the Transformer encoder.
- dim_ff (int, optional, defaults to 10240) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
- num_hidden_layers (int, optional, defaults to 48) — Number of layers of the Transformer encoder.
- dropout_p (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings and encoder.
- position_bias_num_buckets (int, optional, defaults to 512) — The number of position_bias buckets.
- position_bias_max_distance (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
- eps (float, optional, defaults to 1e-6) — The epsilon used by the layer normalization layers.
- prompt_types (int, optional, defaults to 32) — The number of prompt types.
- prompt_length (int, optional, defaults to 32) — The length of the prompt.
- segment_types (int, optional, defaults to 32) — The number of segment types.
- use_cache (bool, optional, defaults to True) — Whether to use the cache.
- init_std (float, optional, defaults to 1.0) — Initialize parameters with std = init_std.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ModelOutput instead of a plain tuple.
This is the configuration class to store the configuration of a CpmAntModel. It is used to instantiate a CPMAnt model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the CPMAnt openbmb/cpm-ant-10b architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:

>>> from transformers import CpmAntModel, CpmAntConfig
>>> # Initializing a CPMAnt cpm-ant-10b style configuration
>>> configuration = CpmAntConfig()
>>> # Initializing a model from the cpm-ant-10b style configuration
>>> model = CpmAntModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
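The configuration can also be customized before instantiating the model. Below is a minimal sketch that builds a much smaller, randomly initialized CPMAnt model for quick experiments; the hyperparameter values are illustrative and do not correspond to any released checkpoint.

>>> from transformers import CpmAntConfig, CpmAntModel

>>> # Illustrative, scaled-down hyperparameters (not an official configuration)
>>> small_configuration = CpmAntConfig(
...     hidden_size=256, num_hidden_layers=2, num_attention_heads=4, dim_head=64, dim_ff=512
... )
>>> small_model = CpmAntModel(small_configuration)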
CpmAntTokenizer
class transformers.CpmAntTokenizer
< source >( vocab_file bod_token = '<d>' eod_token = '</d>' bos_token = '<s>' eos_token = '</s>' pad_token = '<pad>' unk_token = '<unk>' line_token = '</n>' space_token = '</_>' padding_side = 'left' **kwargs )
Parameters
- vocab_file (str) — Path to the vocabulary file.
- bod_token (str, optional, defaults to "<d>") — The beginning of document token.
- eod_token (str, optional, defaults to "</d>") — The end of document token.
- bos_token (str, optional, defaults to "<s>") — The beginning of sequence token.
- eos_token (str, optional, defaults to "</s>") — The end of sequence token.
- pad_token (str, optional, defaults to "<pad>") — The token used for padding.
- unk_token (str, optional, defaults to "<unk>") — The unknown token.
- line_token (str, optional, defaults to "</n>") — The line token.
- space_token (str, optional, defaults to "</_>") — The space token.
Construct a CPMAnt tokenizer. Based on byte-level Byte-Pair-Encoding.
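A minimal usage sketch, assuming the openbmb/cpm-ant-10b checkpoint used elsewhere on this page (the tokenizer may require additional packages such as jieba to be installed):

>>> from transformers import CpmAntTokenizer

>>> tokenizer = CpmAntTokenizer.from_pretrained("openbmb/cpm-ant-10b")
>>> encoded = tokenizer("今天天气真好！", return_tensors="pt")
>>> encoded["input_ids"].shape  # (batch_size, seq_len), special tokens included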
build_inputs_with_special_tokens
< source >(
token_ids_0: typing.List[int]
token_ids_1: typing.List[int] = None
)
→
List[int]
Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A CPMAnt sequence has the following format:
- single sequence: [BOS] Sequence
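As a sketch of the format above, for a single sequence the method simply prepends the BOS token id to a plain list of token ids (the tokenizer is assumed to be loaded as in the earlier example):

>>> ids = tokenizer.encode("今天天气真好！", add_special_tokens=False)
>>> with_special = tokenizer.build_inputs_with_special_tokens(ids)  # == [tokenizer.bos_token_id] + ids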
get_special_tokens_mask
< source >(
token_ids_0: typing.List[int]
token_ids_1: typing.Optional[typing.List[int]] = None
already_has_special_tokens: bool = False
)
→
List[int]
Parameters
- token_ids_0 (List[int]) — List of IDs.
- token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
- already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model.
Returns
List[int]
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
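A short sketch, assuming the tokenizer and ids from the examples above; the returned mask has one entry per token of the final sequence, with 1 marking the special tokens the tokenizer would add:

>>> mask = tokenizer.get_special_tokens_mask(ids)
>>> sum(mask)  # number of special tokens added for a single sequence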
CpmAntModel
The bare CPMAnt Model outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters
- config (~CpmAntConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward
< source >(
input_ids: typing.Optional[torch.Tensor] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None
use_cache: typing.Optional[bool] = None
return_dict: typing.Optional[bool] = None
**kwargs
)
→
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.Tensor of shape (batch_size, seq_len)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using CpmAntTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
- use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CpmAntConfig) and inputs.

- last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used, only the last hidden-state of the sequences, of shape (batch_size, 1, hidden_size), is output.
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and, optionally if config.is_encoder_decoder=True, 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and, optionally if config.is_encoder_decoder=True, in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The CpmAntModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, CpmAntModel
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-ant-10b")
>>> model = CpmAntModel.from_pretrained("openbmb/cpm-ant-10b")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> last_hidden_states = outputs.last_hidden_state
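Building on the example above, a short sketch of requesting the optional outputs described in the return section; the tuple lengths and tensor shapes follow the descriptions given there.

>>> outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
>>> len(outputs.hidden_states)  # embedding output plus one entry per layer
>>> outputs.attentions[0].shape  # (batch_size, num_heads, sequence_length, sequence_length)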
CpmAntForCausalLM
The CPMAnt Model with a language modeling head on top (linear layer with weights tied to the input embeddings).
This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
Parameters
- config (~CpmAntConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward
< source >(
input_ids: typing.Optional[torch.Tensor] = None
past_key_values: typing.Union[typing.List[typing.Tuple[torch.Tensor, torch.Tensor]], NoneType] = None
use_cache: typing.Optional[bool] = None
output_attentions: typing.Optional[bool] = None
output_hidden_states: typing.Optional[bool] = None
labels: typing.Optional[torch.Tensor] = None
return_dict: typing.Optional[bool] = None
attention_mask: typing.Optional[torch.Tensor] = None
**kwargs
)
→
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
Parameters
- input_ids (torch.Tensor of shape (batch_size, seq_len)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using CpmAntTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
- use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
- output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers.
- output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers.
- labels (torch.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss.
- return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
- attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — CPMAnt processes the attention mask automatically; this parameter is a dummy parameter for the text-generation pipeline.
Returns
transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)
A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CpmAntConfig) and inputs.

- loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
- logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
- past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
- hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The CpmAntForCausalLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> import torch
>>> from transformers import AutoTokenizer, CpmAntForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-ant-10b")
>>> model = CpmAntForCausalLM.from_pretrained("openbmb/cpm-ant-10b")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
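Because the attention mask is handled internally, the model can also be used with generate() for text generation. A minimal sketch, assuming the same checkpoint; the Chinese prompt and generation length are illustrative:

>>> texts = "今天天气不错，"
>>> input_ids = tokenizer(texts, return_tensors="pt")["input_ids"]
>>> generated_ids = model.generate(input_ids, max_new_tokens=32)
>>> print(tokenizer.decode(generated_ids[0]))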