---
library_name: transformers
license: apache-2.0
base_model:
- upstage/SOLAR-10.7B-v1.0
language:
- en
pipeline_tag: text-generation
---


### **Loading the Model**

Use the following Python code to load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nayohan/corningQA-solar-10.7b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "nayohan/corningQA-solar-10.7b-v1.0",
    device_map="auto",          # spread layers across available GPU(s)/CPU
    torch_dtype=torch.float16,  # half precision to reduce memory usage
)
```
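If GPU memory is limited, the model can usually also be loaded with 4-bit quantization via `BitsAndBytesConfig`. This is a minimal sketch, not from the original card, and assumes the optional `bitsandbytes` package is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization; requires the optional `bitsandbytes` package.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("nayohan/corningQA-solar-10.7b-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "nayohan/corningQA-solar-10.7b-v1.0",
    device_map="auto",
    quantization_config=bnb_config,
)
```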

### **Generating Text**

To generate text, use the following Python code:

````python
text = """You will be shown dialogues between Speaker 1 and Speaker 2. Please read and understand the given Dialogue Session, then complete the task under the guidance of Task Introduction.

```
Context:
{context}
```

```
Dialogue Session:
{dialogues}
```

```
Task Introduction:
After reading the Dialogue Session, please create an appropriate response in the parts marked ###.
```

Task Result:
"""
# Move the tokenized prompt to the same device as the model before generating.
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
````
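Before tokenizing, fill the `{context}` and `{dialogues}` placeholders in the template. A minimal sketch using the objects defined above; the helper name `build_prompt` is hypothetical and not part of the original card:

```python
def build_prompt(context: str, dialogues: str) -> str:
    # Fill the placeholders in the prompt template defined above.
    return text.format(context=context, dialogues=dialogues)

prompt = build_prompt(
    context="propagation mechanism with line-of-sight, first-order reflections ...",
    dialogues="Speaker 1: How does shadowing affect millimeter-wave channel models?\nSpeaker 2: ###",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```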

An example input/output record in this prompt format:

```json
{"input":"You will be shown a dialogues between Speaker 1 and Speaker 2. Please read Context and understand given Dialogue Session, then complete the task under the guidance of Task Introduction.\n\n```\nContext:\npropagation mechanism with line-of-sight, \ufb01rst-order re\ufb02ec-tions and scattering becoming much more dominant. Thismeans shadowing will have severe detrimental effects on theaverage received power. Indeed, channel models developedfor millimeter-wave include a third state, in addition toline-of-sight and non-line-of-sight, to explicitly model an out-age event when received power is too weak to establish alink [3]. Although adaptive beam steering techniques can```\n\n```\nDialogue Session:\nSpeaker 1: How does shadowing affect millimeter-wave channel models?\nSpeaker 2: Shadowing has severe detrimental effects on the average received power and can cause an outage event, leading to a third state in channel models to model this event.\nSpeaker 1: So, what are some techniques that can be used to mitigate the effects of shadowing?\nSpeaker 2: Well, as I mentioned earlier, adaptive beam steering is one approach. By constantly adjusting the direction of the transmit and receive beams, it is possible to maintain a strong enough signal to overcome the effects of shadowing.\nSpeaker 1: That's interesting. But in a highly dynamic environment, like a crowded city, would this technique still be effective?\nSpeaker 2: Good point. In that case, another technique that can be used is diversity reception. By using multiple antennas and taking advantage of the different paths that signals can take, it is possible to improve the signal quality and overcome the effects of shadowing.\nSpeaker 1: Ah, I see. And how does this compare to traditional cellular systems where shadowing is also a concern?\nSpeaker 2: That's a great question. In traditional cellular systems, shadowing is not as severe due to the lower frequencies used. However, in millimeter-wave systems, the wavelength is much smaller, making them more susceptible to shadowing and requiring these specialized techniques for mitigation.\nSpeaker 1: I see. Well, it looks like careful consideration and implementation of these techniques will be crucial for successful millimeter-wave communication. Thank you for explaining this to me.\nSpeaker 2: ###\n```\n\n```\nTask Introduction:\nAfter reading the Dialogue Session, please create an appropriate response in the parts marked ###.\n```\n\nTask Result:","output":" No problem, happy to share my knowledge on the subject.\n"}

```
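If records like the one above are stored as JSON Lines, they can be run through the model in a loop. A sketch under the assumption that such a file exists locally; the file name `corning_qa.jsonl` is hypothetical:

```python
import json

# Each line is a {"input": ..., "output": ...} record like the one above.
with open("corning_qa.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        inputs = tokenizer(record["input"], return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=64)
        # Decode only the newly generated tokens, not the echoed prompt.
        new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
        print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```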

### **License**
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co./upstage/SOLAR-10.7B-v1.0): apache-2.0
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co./upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0
  - Since some non-commercial datasets such as Alpaca were used for fine-tuning, the fine-tuned model is released under cc-by-nc-4.0.

### **Citation**

```bibtex
@misc{kim2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling}, 
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={2312.15166},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```