Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error.

Error code: DatasetGenerationCastError

Exception: DatasetGenerationCastError

Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 3 new columns ({'meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.weight', 'meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.weight', 'meta-llama_Llama-2-7b-hf.model.norm.weight'}). This happened while the json dataset builder was generating data using hf://datasets/Cheng98/llama-2-mxint8/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/formatted_tensors.json (at revision 5b8af69f3b5a10f6f4786268e2cc58134086f0fc). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

    Traceback (most recent call last):
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
        writer.write_table(table)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
        pa_table = table_cast(pa_table, self._schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
        return cast_table_to_schema(table, schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
        raise CastError(
    datasets.table.CastError: Couldn't cast
      meta-llama_Llama-2-7b-hf.model.embed_tokens.weight: struct<tensor_meta: struct<is_emulated: bool, dtype: string, block_size: int64, block_axis: int64, shape: list<item: int64>>, exp_mantissa: string>
      meta-llama_Llama-2-7b-hf.model.embed_tokens.output: struct<tensor_meta: struct<is_emulated: bool, dtype: string, shape: list<item: int64>>, hex: string>
      ...
      input_ids: struct<tensor_meta: struct<is_emulated: bool, dtype: string, shape: list<item: int64>>, int: string>
      position_ids: struct<tensor_meta: struct<is_emulated: bool, dtype: string, shape: list<item: int64>>, int: string>
    to
      {'meta-llama_Llama-2-7b-hf.model.embed_tokens.weight': {'tensor_meta': {'is_emulated': Value(dtype='bool', id=None), 'dtype': Value(dtype='string', id=None), 'block_size': Value(dtype='int64', id=None), 'block_axis': Value(dtype='int64', id=None), 'shape': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}, 'exp_mantissa': Value(dtype='string', id=None)},
       ...
       'position_ids': {'tensor_meta': {'is_emulated': Value(dtype='bool', id=None), 'dtype': Value(dtype='string', id=None), 'shape': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}, 'int': Value(dtype='string', id=None)}}
    because column names don't match

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1412, in compute_config_parquet_and_info_response
        parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 988, in stream_convert_to_parquet
        builder._prepare_split(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
        for job_id, done, content in self._prepare_split_single(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
        raise DatasetGenerationCastError.from_cast_error(
    datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 3 new columns ({'meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.weight', 'meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.weight', 'meta-llama_Llama-2-7b-hf.model.norm.weight'}) ...
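In short, the token-1024-pos-1 file adds three weight columns (`input_layernorm.weight`, `post_attention_layernorm.weight`, and `model.norm.weight`) that the other JSON files lack, so the builder cannot cast every file to one shared schema. As the message suggests, the fix is either to give all files identical columns or to split them into separate configurations. On the consumer side, a minimal workaround sketch with the `datasets` library (the file path is the one named in the error; other files would be loaded the same way, one call per group of matching-schema files):

```python
from datasets import load_dataset

# Load one formatted_tensors.json on its own, so no cross-file schema cast is needed.
# The hf:// path is the file named in the cast error above.
ds = load_dataset(
    "json",
    data_files=(
        "hf://datasets/Cheng98/llama-2-mxint8/"
        "meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/formatted_tensors.json"
    ),
    split="train",
)
print(len(ds.column_names))
```

On the repository side, the linked manual-configuration docs describe declaring one configuration per group of files (each with its own `data_files` pattern) in the dataset card, which would also restore the viewer.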
The preview table has 120 columns, one per captured tensor; every column has type `dict`:

- meta-llama_Llama-2-7b-hf.model.embed_tokens.weight
- meta-llama_Llama-2-7b-hf.model.embed_tokens.output
- meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.input
- meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.output
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.input
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.weight
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.output
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.input
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.weight
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.output
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.input
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.weight
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.output
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.cos
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.sin
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.query_states
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.key_states
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.input
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.other
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.output
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights_masked
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights_softmaxed
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.input
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.other
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.output
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.input
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.weight
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.output
- meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.output
- meta-llama_Llama-2-7b-hf.model.layers.0.add1.input
- meta-llama_Llama-2-7b-hf.model.layers.0.add1.other
- meta-llama_Llama-2-7b-hf.model.layers.0.add1.output
- meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.input
- meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.output
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.input
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.weight
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.output
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.input
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.weight
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.output
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.act.input
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.act.output
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.input
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.other
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.output
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.input
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.weight
- meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.output
- meta-llama_Llama-2-7b-hf.model.layers.0.add2.input
- meta-llama_Llama-2-7b-hf.model.layers.0.add2.other
- meta-llama_Llama-2-7b-hf.model.layers.0.add2.output
- meta-llama_Llama-2-7b-hf.model.norm.input
- meta-llama_Llama-2-7b-hf.model.norm.output
- meta-llama_Llama-2-7b-hf.lm_head.input
- meta-llama_Llama-2-7b-hf.lm_head.weight
- meta-llama_Llama-2-7b-hf.lm_head.output
- input_ids
- position_ids
- meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.weight
- meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.weight
- meta-llama_Llama-2-7b-hf.model.norm.weight
- Cheng98_TinyLlama_v1.1.model.embed_tokens.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.q_proj.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.k_proj.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.v_proj.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.o_proj.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.gate_proj.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.up_proj.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.down_proj.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.input_layernorm.weight
- Cheng98_TinyLlama_v1.1.model.layers.0.post_attention_layernorm.weight
- Cheng98_TinyLlama_v1.1.model.norm.weight
- Cheng98_TinyLlama_v1.1.lm_head.weight
- Cheng98_TinyLlama_v1.1.model.embed_tokens.output
- Cheng98_TinyLlama_v1.1.model.layers.0.input_layernorm.input
- Cheng98_TinyLlama_v1.1.model.layers.0.input_layernorm.output
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.q_proj.input
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.q_proj.output
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.k_proj.input
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.k_proj.output
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.v_proj.input
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.v_proj.output
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.rope.cos
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.rope.sin
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.rope.query_states
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.rope.key_states
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights.input
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights.other
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights.output
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights_masked
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights_softmaxed
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_output.input
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_output.other
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_output.output
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.o_proj.input
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.o_proj.output
- Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.output
- Cheng98_TinyLlama_v1.1.model.layers.0.add1.input
- Cheng98_TinyLlama_v1.1.model.layers.0.add1.other
- Cheng98_TinyLlama_v1.1.model.layers.0.add1.output
- Cheng98_TinyLlama_v1.1.model.layers.0.post_attention_layernorm.input
- Cheng98_TinyLlama_v1.1.model.layers.0.post_attention_layernorm.output
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.gate_proj.input
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.gate_proj.output
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.up_proj.input
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.up_proj.output
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.act.input
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.act.output
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.mult.input
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.mult.other
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.mult.output
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.down_proj.input
- Cheng98_TinyLlama_v1.1.model.layers.0.mlp.down_proj.output
- Cheng98_TinyLlama_v1.1.model.layers.0.add2.input
- Cheng98_TinyLlama_v1.1.model.layers.0.add2.other
- Cheng98_TinyLlama_v1.1.model.layers.0.add2.output
- Cheng98_TinyLlama_v1.1.model.norm.input
- Cheng98_TinyLlama_v1.1.model.norm.output
- Cheng98_TinyLlama_v1.1.lm_head.input
- Cheng98_TinyLlama_v1.1.lm_head.output
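Every cell follows the same two-part layout: a `tensor_meta` header (`is_emulated`, `dtype`, `shape`, plus `block_size` and `block_axis` for block-quantized tensors) and one payload field holding the path of a `.npy` file: `exp_mantissa` for `emulated_mxint8` tensors, `hex` for `torch.bfloat16` tensors, and `int` for `torch.int64` tensors. A minimal sketch for walking one record, assuming a local checkout of the repository and that each `formatted_tensors.json` holds one such record as a single JSON object (the exact JSON framing is an assumption):

```python
import json

# Hypothetical local path to one of the per-token JSON files in this repo.
path = "meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/formatted_tensors.json"

with open(path) as f:
    record = json.load(f)  # assumed: one {column_name: cell_dict} object per file

for name, entry in record.items():
    meta = entry["tensor_meta"]
    # The payload key is whichever field is not tensor_meta: exp_mantissa
    # (emulated_mxint8), hex (torch.bfloat16), or int (torch.int64).
    payload_key = next(k for k in entry if k != "tensor_meta")
    print(f"{name}: dtype={meta['dtype']}, shape={meta['shape']}, "
          f"{payload_key} -> {entry[payload_key]}")
```

The `.npy` paths in the cells are repository-relative, under the `saved_tensors/` tree.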
Row 1 (meta-llama-Llama-2-7b-hf-mxint8, token-1024-pos-0):

{
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32000,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.embed_tokens.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.embed_tokens.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.cos.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.sin.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.query_states.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.key_states.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32,
1,
128
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -2,
"shape": [
32,
128,
1
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
32,
1,
1
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
1
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights_masked.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
1
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights_softmaxed.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32,
1,
1
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -2,
"shape": [
32,
1,
128
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
32,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.add1.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.add1.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.add1.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
11008,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
11008,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.act.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.act.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
11008
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
11008
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.add2.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.add2.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.layers.0.add2.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.norm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.model.norm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.lm_head.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32000,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.lm_head.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
32000
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/meta-llama_Llama-2-7b-hf.lm_head.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.int64",
"shape": [
1,
1
]
},
"int": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/input_ids.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.int64",
"shape": [
1,
1
]
},
"int": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-0/position_ids.npy"
}

All remaining columns in this row are null.

Row 2 (meta-llama-Llama-2-7b-hf-mxint8, token-1024-pos-1):
{
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32000,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.embed_tokens.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.embed_tokens.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.q_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.k_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.v_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.cos.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.sin.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.query_states.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.rope.key_states.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32,
1,
128
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -2,
"shape": [
32,
128,
1
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
32,
1,
1
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
1
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights_masked.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
1
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_weights_softmaxed.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32,
1,
1
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -2,
"shape": [
32,
1,
128
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
32,
1,
128
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.attn_output.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.o_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.self_attn.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.add1.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.add1.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.add1.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
11008,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.gate_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
11008,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.up_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.act.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.act.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
11008
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.mult.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
11008
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
4096,
11008
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.mlp.down_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.add2.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.add2.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.add2.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.norm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.norm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.lm_head.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32000,
4096
]
},
"exp_mantissa": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.lm_head.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
32000
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.lm_head.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.int64",
"shape": [
1,
1
]
},
"int": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/input_ids.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.int64",
"shape": [
1,
1
]
},
"int": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/position_ids.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.input_layernorm.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.layers.0.post_attention_layernorm.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
4096
]
},
"hex": "saved_tensors/meta-llama-Llama-2-7b-hf-mxint8/token-1024-pos-1/meta-llama_Llama-2-7b-hf.model.norm.weight.npy"
}

All remaining columns in this row are null.
Row 3 (tinyllama-bf16, weight tensors only; all other columns null):

{
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
32000,
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.embed_tokens.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
2048,
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.q_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
256,
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.k_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
256,
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.v_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
2048,
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.o_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
5632,
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.gate_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
5632,
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.up_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
2048,
5632
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.down_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.input_layernorm.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.layers.0.post_attention_layernorm.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.model.norm.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
32000,
2048
]
},
"hex": "saved_tensors/tinyllama-bf16/Cheng98_TinyLlama_v1.1.lm_head.weight.npy"
}

All remaining columns in this row are null.
Row 4 (tinyllama-mxint8; all meta-llama columns null, first non-null cell is input_ids):

{
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.int64",
"shape": [
1,
1
]
},
"int": "saved_tensors/tinyllama-mxint8/input_ids.npy"
} | null | null | null | null | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32000,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.embed_tokens.weight.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
2048,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.q_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
256,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.k_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
256,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.v_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
2048,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.o_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
5632,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.gate_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
5632,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.up_proj.weight.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
2048,
5632
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.down_proj.weight.npy"
} | null | null | null | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32000,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.lm_head.weight.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.embed_tokens.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.input_layernorm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.input_layernorm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.q_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.q_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.k_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
256
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.k_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.v_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
256
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.v_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
64
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.rope.cos.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
64
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.rope.sin.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
64
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.rope.query_states.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
4,
1,
64
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.rope.key_states.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32,
1,
64
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -2,
"shape": [
32,
64,
1
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
32,
1,
1
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
1
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights_masked.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
32,
1,
1
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_weights_softmaxed.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
32,
1,
1
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_output.input.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -2,
"shape": [
32,
1,
64
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_output.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
32,
1,
64
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.attn_output.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.o_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.o_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.self_attn.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.add1.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.add1.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.add1.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.post_attention_layernorm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.post_attention_layernorm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.gate_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
5632
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.gate_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.up_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
5632
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.up_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
5632
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.act.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
5632
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.act.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
5632
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.mult.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
5632
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.mult.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
5632
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.mult.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
5632
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.down_proj.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.mlp.down_proj.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.add2.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.add2.other.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.layers.0.add2.output.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.norm.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
2048
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.model.norm.output.npy"
} | {
"tensor_meta": {
"is_emulated": true,
"dtype": "emulated_mxint8",
"block_size": 32,
"block_axis": -1,
"shape": [
1,
1,
2048
]
},
"exp_mantissa": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.lm_head.input.npy"
} | {
"tensor_meta": {
"is_emulated": false,
"dtype": "torch.bfloat16",
"shape": [
1,
1,
32000
]
},
"hex": "saved_tensors/tinyllama-mxint8/Cheng98_TinyLlama_v1.1.lm_head.output.npy"
}
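The `emulated_mxint8` cells describe block-quantized tensors: elements are grouped into blocks of `block_size` (32 throughout this preview) along `block_axis`, with one exponent shared per block and an 8-bit mantissa per element, which is presumably what the `exp_mantissa` `.npy` files store. Below is a minimal dequantization sketch under those assumptions; the on-disk layout of the exponent/mantissa pair, and any exponent bias or fractional scaling, are guesses rather than documented facts about this dataset:

```python
import numpy as np

def dequantize_mxint8(mantissa: np.ndarray, exp: np.ndarray,
                      block_size: int = 32, block_axis: int = -1) -> np.ndarray:
    """Rebuild a dense float tensor from per-element int8 mantissas plus one
    shared exponent per block of `block_size` elements along `block_axis`.
    Assumed layout: `exp` has the same shape as `mantissa` except that the
    blocked axis is divided by `block_size`."""
    # Broadcast each block's exponent back over the block it scales.
    scale = np.repeat(2.0 ** exp.astype(np.float32), block_size, axis=block_axis)
    return mantissa.astype(np.float32) * scale

# Toy example mirroring tensor_meta: blocks of 32 along the last axis,
# so a (2, 64) tensor carries two shared exponents per row. Values are fabricated.
mantissa = np.random.randint(-128, 128, size=(2, 64)).astype(np.int8)
exp = np.random.randint(-8, 0, size=(2, 2)).astype(np.int8)
print(dequantize_mxint8(mantissa, exp).shape)  # (2, 64)
```

For the non-emulated cells, `hex` points at a bfloat16 payload and `int` at an int64 payload; how the bfloat16 bit patterns are encoded inside the `.npy` files is likewise not documented here.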
No dataset card yet.

Downloads last month: 11