ssmits committed on
Commit
66f2665
1 Parent(s): 571eeca

Update README.md

Files changed (1): README.md +31 -0
README.md CHANGED
@@ -47,6 +47,37 @@ dtype: bfloat16
   ![Layer Similarity Plot](https://cdn-uploads.huggingface.co/production/uploads/660c0a02cf274b3ab77dd6b7/hXfcozWzFUd8Df7HsaHK-.png)

+ ```python
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "ssmits/Falcon2-5.5B-Danish"
+
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+     torch_dtype=torch.bfloat16,
+ )
+ sequences = pipeline(
+     "Can you explain the concepts of Quantum Computing?",
+     max_length=200,
+     do_sample=True,
+     top_k=10,
+     num_return_sequences=1,
+     eos_token_id=tokenizer.eos_token_id,
+ )
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
+ ```
+
+ 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
+
+ For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
+
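The Text Generation Inference link above points at Hugging Face's dedicated serving stack. A minimal sketch of serving this model with the TGI Docker image is shown below; the port, volume path, and image tag are arbitrary choices for illustration, and a GPU with enough memory for the 5.5B weights is assumed:

```shell
# Sketch: launch a TGI server for the model (adjust port/volume/image tag as needed).
model=ssmits/Falcon2-5.5B-Danish
volume=$PWD/data  # downloaded weights are cached here across restarts

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id $model

# Once the server is up, query its /generate endpoint:
curl 127.0.0.1:8080/generate -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "Kan du forklare kvantecomputere?", "parameters": {"max_new_tokens": 200}}'
```

Serving through TGI rather than a local `pipeline` adds continuous batching and token streaming, which matters once more than one client queries the model.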
  ## Direct Use
  Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbots).