rinna/nekomata-7b
by rinna Co., Ltd.
Text Generation · Transformers · PyTorch · Safetensors · 5 datasets
Languages: Japanese, English
Tags: qwen, custom_code
arXiv: 2309.16609 · 2404.01657
License: tongyi-qianwen-license-agreement
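The qwen and custom_code tags indicate that loading this checkpoint runs the repository's own Qwen modeling and tokenization files, so Transformers must be called with trust_remote_code=True. A minimal loading sketch; the prompt, sampling settings, and dtype/device choices are illustrative assumptions, not specifications from this page:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# custom_code: configuration_qwen.py, modeling_qwen.py and tokenization_qwen.py
# from this repo are executed locally, hence trust_remote_code=True.
# The Qwen tokenizer also expects the tiktoken package to be installed.
tokenizer = AutoTokenizer.from_pretrained("rinna/nekomata-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "rinna/nekomata-7b",
    torch_dtype=torch.bfloat16,  # the shipped shards store bfloat16 weights
    device_map="auto",           # requires the accelerate package
    trust_remote_code=True,
)

# Illustrative Japanese prompt; the model is a bilingual (ja/en) base model.
inputs = tokenizer("西田幾多郎は、", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```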
Files and versions
3 contributors · History: 10 commits
Latest commit: Update README.md by keisawada (9005926, verified), 4 months ago
File | Size | Last commit | Age
.gitattributes | 1.52 kB | initial commit | 11 months ago
LICENSE | 6.9 kB | first commit | 11 months ago
NOTICE | 3.85 kB | first commit | 11 months ago
README.md | 6.2 kB | Update README.md | 4 months ago
cache_autogptq_cuda_256.cpp | 8.4 kB | first commit | 11 months ago
cache_autogptq_cuda_kernel_256.cu | 52 kB | first commit | 11 months ago
config.json | 912 Bytes | config.json | 11 months ago
configuration_qwen.py | 2.35 kB | first commit | 11 months ago
cpp_kernels.py | 1.92 kB | first commit | 11 months ago
model-00001-of-00002.safetensors | 9.97 GB (LFS) | Adding `safetensors` variant of this model (#1) | 10 months ago
model-00002-of-00002.safetensors | 5.47 GB (LFS) | Adding `safetensors` variant of this model (#1) | 10 months ago
model.safetensors.index.json | 20.6 kB | Adding `safetensors` variant of this model (#1) | 10 months ago
modeling_qwen.py | 55.6 kB | sync with the latest official code | 10 months ago
pytorch_model-00001-of-00002.bin | 9.97 GB (LFS, pickle*) | upload model | 11 months ago
pytorch_model-00002-of-00002.bin | 5.47 GB (LFS, pickle*) | upload model | 11 months ago
pytorch_model.bin.index.json | 19.5 kB | upload model | 11 months ago
qwen.tiktoken | 2.56 MB | first commit | 11 months ago
qwen_generation_utils.py | 14.6 kB | first commit | 11 months ago
rinna.png | 60.3 kB | first commit | 11 months ago
tokenization_qwen.py | 9.62 kB | first commit | 11 months ago
tokenizer_config.json | 270 Bytes | first commit | 11 months ago

* Detected pickle imports (3): torch.BFloat16Storage, torch._utils._rebuild_tensor_v2, collections.OrderedDict.
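The weights are provided both as pickle-based .bin shards and as safetensors shards, and a safetensors shard can be inspected without unpickling anything. A minimal sketch, assuming the shard has already been downloaded into the working directory (the local path is an assumption):

```python
from safetensors import safe_open

# List a few tensors from one shard without executing any pickled objects.
with safe_open("model-00001-of-00002.safetensors", framework="pt", device="cpu") as f:
    for name in list(f.keys())[:5]:
        tensor = f.get_tensor(name)
        print(name, tuple(tensor.shape), tensor.dtype)
```

Recent Transformers releases generally prefer the safetensors shards when both formats are present and the safetensors package is installed.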