🐻‍❄️COKAL-v1_70B🐻‍❄️
Model Details
Model Developers: Seungyoo Lee (DopeorNope)
Input: Models take text input only.
Output: Models generate text only.
Model Architecture
COKAL-v1_70B is an auto-regressive 70B language model based on the LLaMA2 transformer architecture.
Base Model
Training Dataset
- SFT training dataset: garage-bAInd/Open-Platypus
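For reference, the dataset can be pulled straight from the Hub. Below is a minimal inspection sketch using the `datasets` library; the field names follow the published Open-Platypus schema, which includes `instruction` and `output` columns:

```python
from datasets import load_dataset

# Fetch the Open-Platypus SFT dataset from the Hugging Face Hub.
dataset = load_dataset("garage-bAInd/Open-Platypus", split="train")

# Each record is an instruction/response pair used for supervised fine-tuning.
print(dataset.column_names)
print(dataset[0]["instruction"][:200])
print(dataset[0]["output"][:200])
```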
Training
The model was developed in an environment with 8× NVIDIA A100 GPUs.
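The card does not publish the training recipe, so the following is only a compressed sketch of what SFT on Open-Platypus could look like with the standard `transformers` Trainer. The base checkpoint name, prompt template, and hyperparameters are all placeholders, and a real 70B run would additionally need FSDP or DeepSpeed sharding across the 8 GPUs:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-70b-hf"  # placeholder: the card does not name the base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

def to_text(example):
    # Illustrative instruction/response template; not the card's actual format.
    return {"text": f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"}

dataset = load_dataset("garage-bAInd/Open-Platypus", split="train").map(to_text)
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained(base),  # sharding (FSDP/DeepSpeed) omitted for brevity
    args=TrainingArguments(
        output_dir="cokal-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```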
Implementation Code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/COKAL-v1_70B"

# Load the model in fp16 and let accelerate place layers across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
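Continuing from the loading snippet above, here is a short generation example; the prompt and sampling settings are arbitrary choices, not a recommendation from the card:

```python
prompt = "Explain, in two sentences, what supervised fine-tuning does to a base language model."
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion; adjust max_new_tokens and sampling parameters to taste.
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(model_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```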