---
datasets:
- NovaSky-AI/Sky-T1_data_17k
base_model:
- win10/Phi-4-llama-t1-lora
license: mit
---
|
|
|
Fully merged 16-bit model of [win10/Phi-4-llama-t1-lora](https://huggingface.co./win10/Phi-4-llama-t1-lora). Please always thank the original author for all the hard work! All I did was the simple merging step on Colab.
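For reference, a merge along these lines can be reproduced with peft's `merge_and_unload`. This is only a minimal sketch of the standard adapter-merging workflow; the exact Colab script used here is not included, and the output paths below are placeholders.

```python
# Minimal sketch of merging the LoRA adapter into its base model
# (assumes the standard peft workflow; the actual Colab script may differ).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

adapter_id = "win10/Phi-4-llama-t1-lora"
# Read the base model id from the adapter's own config instead of hardcoding it.
base_id = PeftConfig.from_pretrained(adapter_id).base_model_name_or_path

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)
merged = model.merge_and_unload()  # fold the LoRA weights into the base model

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
merged.save_pretrained("Phi-4-llama-t1-full")     # placeholder output directory
tokenizer.save_pretrained("Phi-4-llama-t1-full")
```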
|
|
|
Run with PyTorch:
|
|
|
```python
import transformers

# The tokenizer is loaded automatically from the model repo.
pipeline = transformers.pipeline(
    "text-generation",
    model="benhaotang/Phi-4-llama-t1-full",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. You always think step by step."},
    {"role": "user", "content": "Give me a short introduction to renormalization group (RG) flow in physics."},
]

outputs = pipeline(messages, max_new_tokens=128)
print(outputs[0]["generated_text"])
```
|
|
|
Or use the static GGUF quants: [benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF](https://huggingface.co./benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF)
|
|
|
```
ollama run hf.co/benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF
```