# LLARA-7B-Passage

This model is fine-tuned from LLaMA-2-7B using LoRA, and its embedding size is 4096.

## Training Data

The model is fine-tuned on the training split of the [MS MARCO Passage Ranking](https://microsoft.github.io/msmarco/Datasets) dataset for 1 epoch. Please check our paper for details.

## Usage

Below is an example that encodes a query and a passage, then computes their similarity score from their embeddings.

```python
import torch
from transformers import AutoModel, AutoTokenizer, LlamaModel

# ... (model loading and encoding steps omitted here) ...

# compute similarity score
score = query_embedding @ passage_embeddings.T
print(score)
```
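
The middle of the snippet above (loading the model and producing `query_embedding` and `passage_embeddings`) is not shown. Below is a minimal sketch of one plausible way to bridge that gap, assuming the model loads with `AutoModel` and that embeddings come from mean pooling over the last hidden states followed by L2 normalization; the repo id `BAAI/LLARA-passage`, the pooling choice, and the example texts are assumptions, not this card's exact code.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Assumed repo id for illustration; substitute the actual model path.
model_name = "BAAI/LLARA-passage"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

# LLaMA tokenizers often ship without a pad token; reuse EOS for batching.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

def encode(texts):
    """Embed a list of strings into L2-normalized vectors of size 4096."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the last hidden states over non-padding tokens
    # (the pooling strategy here is an assumption).
    hidden = outputs.last_hidden_state                     # (batch, seq, 4096)
    mask = inputs["attention_mask"].unsqueeze(-1).float()  # (batch, seq, 1)
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    # Normalize so the dot product below acts as cosine similarity.
    return F.normalize(embeddings, p=2, dim=-1)

query_embedding = encode(["what is the capital of France?"])
passage_embeddings = encode([
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
])

# compute similarity score
score = query_embedding @ passage_embeddings.T
print(score)  # shape (1, 2); higher values indicate closer matches
```

Because the embeddings in this sketch are normalized, the printed scores are cosine similarities in [-1, 1].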