# Chinese-LLaMA-2-13B

Linly-Chinese-LLaMA2 is a Chinese adaptation of LLaMA2, trained with a curriculum-learning approach for cross-lingual transfer. The vocabulary is redesigned for Chinese, the training-data distribution is more balanced, and convergence is more stable.

See the ๐Ÿ’ป Github Repo for training details and benchmark results.

Example usage with ๐Ÿค— Transformers:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Linly-AI/Chinese-LLaMA-2-13B-hf", device_map="cuda:0", torch_dtype=torch.float16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Linly-AI/Chinese-LLaMA-2-13B-hf", use_fast=False, trust_remote_code=True)

prompt = "ๅŒ—ไบฌๆœ‰ไป€ไนˆๅฅฝ็Žฉ็š„ๅœฐๆ–น๏ผŸ"

# Wrap the query in the instruction template the model was fine-tuned on.
prompt = f"### Instruction:{prompt.strip()}  ### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
generate_ids = model.generate(inputs.input_ids, do_sample=True, max_new_tokens=2048, top_k=10, top_p=0.85, temperature=1, repetition_penalty=1.15, eos_token_id=2, bos_token_id=1, pad_token_id=0)
response = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# str.lstrip(prompt) strips any characters found in `prompt`, not the prefix itself;
# slice the prompt off instead.
response = response[len(prompt):]
```
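
For repeated queries, the template and decoding steps above can be wrapped in a small helper. A minimal sketch; the `chat` function and its `max_new_tokens` default are illustrative, not part of the model's released API:

```python
def chat(model, tokenizer, query, max_new_tokens=512):
    """Apply the '### Instruction: ... ### Response:' template and return only the reply."""
    prompt = f"### Instruction:{query.strip()}  ### Response:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    generate_ids = model.generate(inputs.input_ids, do_sample=True, max_new_tokens=max_new_tokens, top_k=10, top_p=0.85, temperature=1, repetition_penalty=1.15, eos_token_id=2, bos_token_id=1, pad_token_id=0)
    response = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
    return response[len(prompt):]

print(chat(model, tokenizer, "ๅŒ—ไบฌๆœ‰ไป€ไนˆๅฅฝ็Žฉ็š„ๅœฐๆ–น๏ผŸ"))
```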
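
The fp16 checkpoint needs roughly 26 GB of GPU memory for the 13B weights alone. If that does not fit on a single GPU, 8-bit loading is one option; a sketch assuming `bitsandbytes` is installed (not covered by the model card itself):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "Linly-AI/Chinese-LLaMA-2-13B-hf",
    device_map="auto",  # shard across available GPUs / CPU as needed
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    trust_remote_code=True,
)
```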