---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.3
datasets:
- BeIR/nq
- embedding-data/PAQ_pairs
- sentence-transformers/msmarco-hard-negatives
- leminda-ai/s2orc_small
- lucadiliello/triviaqa
- pietrolesci/agnews
- mteb/amazon_reviews_multi
- multiIR/ccnews2016-8multi
- eli5
- gooaq
- quora
- lucadiliello/searchqa
- flax-sentence-embeddings/stackexchange_math_jsonl
- yahoo_answers_qa
- EdinburghNLP/xsum
- wikihow
- rajpurkar/squad_v2
- nixiesearch/amazon-esci
- osunlp/Mind2Web
- derek-thomas/dataset-creator-askreddit
language:
- en
---

# nixie-querygen-v3


A [Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) model fine-tuned for query generation. Main use cases:

* Synthetic query generation for downstream embedding fine-tuning tasks, when you only have documents and no queries/labels. This can be done with the [nixietune](https://github.com/nixiesearch/nixietune) toolkit; see the `nixietune.qgen.generate` recipe.
* Synthetic dataset expansion for further embedding training, when you do have query-document pairs, but only a few. You can fine-tune `nixie-querygen-v3` on the existing pairs and then expand your document corpus with synthetic queries (which are still grounded in your few real ones). See the `nixietune.querygen` recipe.

The idea behind the approach is taken from the [docTTTTTquery](https://github.com/castorini/docTTTTTquery) model. See the original paper: [Rodrigo Nogueira and Jimmy Lin. From doc2query to docTTTTTquery.](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)

## Flavours

This repo has multiple versions of the model:

* `model-*.safetensors`: PyTorch FP16 checkpoint, suitable for downstream fine-tuning.
* `*-f16.gguf`: GGUF F16 non-quantized [llama-cpp](https://github.com/ggerganov/llama.cpp) checkpoint, for CPU inference.
* `*-q4.gguf`: GGUF Q4_0 quantized [llama-cpp](https://github.com/ggerganov/llama.cpp) checkpoint, for faster (but less precise) CPU inference.
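For reference, a GGUF flavour can be fetched with `huggingface_hub`; a minimal sketch (the repo id below is assumed from the model name, and the filename matches the Q4 checkpoint used in the llama-cpp example further down):

```python
from huggingface_hub import hf_hub_download

# Assumed repo id; the filename corresponds to the Q4_0 quantized flavour
gguf_path = hf_hub_download(
    repo_id="nixiesearch/nixie-querygen-v3",
    filename="nixie-querygen-v3-q4.gguf",
)
print(gguf_path)
```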

## Prompt formats

The model accepts the following Alpaca-style prompt format:

```
### Instruction:
Write a short query which can be used to search a given document:

### Input:
{document text}

### Response:
[short|medium|long]? [question|regular]? query:
```

Some notes on the format:

* The `[short|medium|long]` and `[question|regular]` fragments are optional and can be skipped; see the prompt-building sketch below.
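For illustration, a small prompt-building helper (a sketch only: `build_prompt` and its argument names are not part of the model's API, and the hints are assumed to steer the length and phrasing of the generated query):

```python
from typing import Optional

def build_prompt(document: str, length: Optional[str] = "short", style: Optional[str] = None) -> str:
    """Assemble the Alpaca-style prompt shown above.

    length: optional "short" | "medium" | "long" hint, or None to skip it
    style:  optional "question" | "regular" hint, or None to skip it
    """
    hints = " ".join(h for h in (length, style) if h)
    # With no hints this degrades to the plain "query:" response prefix
    response = f"{hints} query:".strip()
    return (
        "### Instruction:\n"
        "Write a short query which can be used to search a given document:\n\n"
        f"### Input:\n{document}\n\n"
        f"### Response:\n{response}"
    )

print(build_prompt("Google's greenhouse gas emissions have surged 48 percent...", length="short"))
```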

## Inference examples

### llamacpp

With [llama-cpp](https://github.com/ggerganov/llama.cpp) and the Q4 model, inference can be done on a CPU:

```bash
$ cat input.txt
### Instruction:
Write a short query which can be used to search a given document:

### Input:
Google’s greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to “net zero” by 2030 in doubt. The Silicon Valley company’s pollution amounted to 14.3 million tonnes of carbon equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise since last year, Google said in its annual environmental report on Tuesday. Google said the jump highlighted “the challenge of reducing emissions” at the same time as it invests in the build-out of large language models and their associated applications and infrastructure, admitting that “the future environmental impact of AI” was “complex and difficult to predict.”

### Response:
short query:

$ ./llama-cli -m ~/models/nixie-querygen-v3/nixie-querygen-v3-q4.gguf -f input.txt -s 1

system_info: n_threads = 16 / 32 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | 
sampling: 
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature 
generate: n_ctx = 32768, n_batch = 2048, n_predict = 128, n_keep = 1


### Instruction:
Write a short query which can be used to search a given document:

### Input:
Google’s greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to “net zero” by 2030 in doubt.
The Silicon Valley company’s pollution amounted to 14.3 million tonnes of carbon equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise since last year, Google said in its annual environmental report on Tuesday.
Google said the jump highlighted “the challenge of reducing emissions” at the same time as it invests in the build-out of large language models and their associated applications and infrastructure, admitting that “the future environmental impact of AI” was “complex and difficult to predict.”

### Response:
short query: google carbon footprint [end of text]

llama_print_timings:        load time =    4497.53 ms
llama_print_timings:      sample time =       0.21 ms /     5 runs   (    0.04 ms per token, 23584.91 tokens per second)
llama_print_timings: prompt eval time =    4006.12 ms /   209 tokens (   19.17 ms per token,    52.17 tokens per second)
llama_print_timings:        eval time =     829.37 ms /     4 runs   (  207.34 ms per token,     4.82 tokens per second)
llama_print_timings:       total time =    4839.50 ms /   213 tokens
```

### Transformers

```python
from transformers import pipeline
import torch

# Load the model (replace <path> with the local checkpoint directory or the Hugging Face repo id)
generator = pipeline(task="text-generation", model='<path>', torch_dtype=torch.bfloat16, device_map="auto")

# Alpaca-style prompt; replace <doc> with the document text
prompt = "### Instruction:\nWrite a short query which can be used to search a given document:\n\n### Input:\n<doc>\n\n### Response:\nshort query:"
result = generator(prompt, return_full_text=True, max_new_tokens=32, num_return_sequences=1)
```
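As a usage sketch continuing the snippet above (the document text is borrowed from the llama-cpp example; `return_full_text=False` is used here so the pipeline returns only the newly generated query instead of echoing the prompt):

```python
doc = (
    "Google's greenhouse gas emissions have surged 48 percent in the past five years "
    "due to the expansion of its data centers that underpin artificial intelligence systems."
)
prompt = (
    "### Instruction:\n"
    "Write a short query which can be used to search a given document:\n\n"
    f"### Input:\n{doc}\n\n"
    "### Response:\nshort query:"
)

# return_full_text=False drops the echoed prompt, leaving only the generated query
out = generator(prompt, return_full_text=False, max_new_tokens=32, num_return_sequences=1)
query = out[0]["generated_text"].strip()
print(query)  # e.g. "google carbon footprint", as in the llama-cpp run above
```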

## Training config

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: mistralai/Mistral-7B-v0.3
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false
val_set_size: 0.001
datasets:
  - path: json
    split: train
    type: alpaca
    data_files:
      - /home/shutty/data/querygen/alpaca.json

dataset_prepared_path: last_run_prepared
output_dir: ./outputs/qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 512
sample_packing: false
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 40
num_epochs: 1
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00001

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
xformers_attention:
flash_attention: true

logging_steps: 10
warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: false
  fsdp_use_orig_params: false
  fsdp_cpu_ram_efficient_loading: false
  fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
special_tokens:
# torch_compile: true
# chat_template: chatml
```

</details><br>

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 80
- total_eval_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| No log        | 0.0000 | 1     | 2.8685          |
| 1.3256        | 0.1000 | 5581  | 1.4044          |
| 1.3539        | 0.2000 | 11162 | 1.3793          |
| 1.3409        | 0.3000 | 16743 | 1.3659          |
| 1.3781        | 0.4000 | 22324 | 1.3552          |
| 1.3909        | 0.5000 | 27905 | 1.3470          |
| 1.4037        | 0.6000 | 33486 | 1.3423          |
| 1.3573        | 0.7000 | 39067 | 1.3383          |
| 1.3088        | 0.8000 | 44648 | 1.3366          |
| 1.3243        | 0.9000 | 50229 | 1.3357          |


### Framework versions

- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1


## License

Apache 2.0