## NExtLong: Toward Effective Long-Context Training without Long Documents

This repository contains the code, models, and datasets for our paper [NExtLong: Toward Effective Long-Context Training without Long Documents](https://arxiv.org/pdf/2501.12766).

[[Github](https://github.com/caskcsg/longcontext/tree/main/NExtLong)]

## Quick Links

- [Overview](#overview)
- [NExtLong Models](#NExtLong-models)
- [NExtLong Datasets](#NExtLong-datasets)
  - [Datasets list](#datasets-list)
  - [How to use NExtLong datasets](#dataset-use)
- [Bugs or Questions?](#bugs-or-questions)

<a id="overview"></a>

## Overview

Large language models (LLMs) with extended context windows have made significant strides, yet training them remains a challenge due to the scarcity of long documents. Existing methods tend to synthesize long-context data but lack a clear mechanism to reinforce long-range dependency modeling. To address this limitation, we propose NExtLong, a novel framework for synthesizing long-context data through Negative document Extension. NExtLong decomposes a document into multiple meta-chunks and extends the context by interleaving hard negative distractors retrieved from pretraining corpora. This approach compels the model to discriminate long-range dependent context from distracting content, enhancing its ability to model long-range dependencies. Extensive experiments demonstrate that NExtLong achieves significant performance improvements on the HELMET and RULER benchmarks compared to existing long-context synthesis approaches and leading models trained on non-synthetic long documents.

<div style="text-align: center;">
<img src="figure/NExtLong_method.png" width="700" height="350">
</div>

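To make the description above concrete, here is a deliberately simplified, self-contained sketch of the idea. It is not the released synthesis pipeline: the whitespace chunking, the lexical-overlap scoring, and the two-distractors-per-chunk setting below are placeholders for the paper's meta-chunking and hard-negative retrieval.

```python
# Toy illustration (not the released pipeline): split a source document into
# meta-chunks, score candidate corpus passages against each chunk with a
# placeholder lexical similarity, and interleave the top-scoring "hard negative"
# distractors between chunks to form an extended context.

def split_into_chunks(document: str, chunk_size: int = 200) -> list[str]:
    """Split a document into fixed-size meta-chunks (by whitespace tokens)."""
    words = document.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def lexical_similarity(a: str, b: str) -> float:
    """Placeholder similarity; the paper retrieves hard negatives with a real retriever."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def extend_with_negatives(document: str, corpus: list[str], negatives_per_chunk: int = 2) -> str:
    """Interleave retrieved distractors after each meta-chunk of the document."""
    pieces = []
    for chunk in split_into_chunks(document):
        # Rank corpus passages by similarity and keep the closest ones as distractors.
        ranked = sorted(corpus, key=lambda p: lexical_similarity(chunk, p), reverse=True)
        pieces.append(chunk)
        pieces.extend(ranked[:negatives_per_chunk])
    return "\n\n".join(pieces)
```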
<a id="NExtLong-models"></a>

## NExtLong Models

Our released models are listed in the table below. You can load them with [HuggingFace's Transformers](https://github.com/huggingface/transformers), as in the sketch that follows. All models are trained on long-context data synthesized from [fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) and [Cosmopedia v2](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).

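A minimal loading sketch (assuming `transformers`, `torch`, and `accelerate` are installed; the dtype and device settings below are illustrative defaults, not requirements of the model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "caskcsg/Llama-3-8B-NExtLong-128K-Base"

# Load the tokenizer and the base model.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Long-context language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```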
| Model | Avg. | Recall | RAG | ICL | Re-rank | LongQA | RULER |
|:-------------------------------|:-------:|:------:|:-----:|:-----:|:-------:|:------:|:-------:|
| [Llama-3-8B-NExtLong-128K-Base](https://huggingface.co/caskcsg/Llama-3-8B-NExtLong-128K-Base) | 62.58 | 82.56 | 60.91 | 81.76 | 31.47 | 37.30 | 81.50 |
| [Llama-3-8B-NExtLong-512K-Base](https://huggingface.co/caskcsg/Llama-3-8B-NExtLong-512K-Base) | 65.76 | 91.58 | 63.68 | 84.08 | 31.27 | 38.42 | 85.52 |

We released our Instruct model, which is based on our Llama-3-8B-NExtLong-512K-Base model and fine-tuned on the [Magpie-Align/Magpie-Llama-3.3-Pro-1M-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Llama-3.3-Pro-1M-v0.1) dataset. We evaluated it on the LongBench v2 benchmark and achieved the top ranking (2025-01-23) among models of comparable size (under 10B); the results are shown in the table below.

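A minimal chat-style inference sketch for the Instruct model (assuming the tokenizer ships with a chat template, as is standard for Llama-3-based instruct models; the generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "caskcsg/Llama-3-8B-NExtLong-512K-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat prompt; in practice the user turn would carry a very long document
# followed by a question about it.
messages = [
    {"role": "user", "content": "Summarize the key idea of negative document extension."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the generated continuation.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```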
| Model | Overall (%) | Easy (%) | Hard (%) | Short (%) | Medium (%) | Long (%) |
|--------------------------------------------|-------------|----------|----------|-----------|------------|----------|
| [Llama-3-8B-NExtLong-512K-Instruct](https://huggingface.co/caskcsg/Llama-3-8B-NExtLong-512K-Instruct) | 30.8 | 33.9 | 28.9 | 37.8 | 27.4 | 25.9 |
| [Llama-3-8B-NExtLong-512K-Instruct](https://huggingface.co/caskcsg/Llama-3-8B-NExtLong-512K-Instruct) + CoT | 32.0 | 36.5 | 29.3 | 37.2 | 31.2 | 25.0 |

In addition, fine-tuning with the [ultrachat](https://huggingface.co/datasets/stingning/ultrachat) dataset also yields good results, as reported in Section 5.2 of the [NExtLong paper](https://arxiv.org/pdf/2501.12766).

<a id="NExtLong-datasets"></a>

## NExtLong Datasets

<a id="datasets-list"></a>

### Datasets list

Our released datasets are listed as follows. All datasets are synthesized from the short-text datasets [fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) and [Cosmopedia v2](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).

| Dataset | Description |
|:-------------------------------|:--------|
| [NExtLong-64K-dataset](https://huggingface.co/datasets/caskcsg/NExtLong-64K-dataset) | Composed entirely of 64K synthetic data. |
| [NExtLong-512K-dataset](https://huggingface.co/datasets/caskcsg/NExtLong-512K-dataset) | Composed entirely of 512K synthetic data. |
| [NExtLong-128K-dataset](https://huggingface.co/datasets/caskcsg/NExtLong-128K-dataset) | Composed entirely of 128K synthetic data; used to produce the Llama-3-8B-NExtLong-128K-Base model. |
| [NExtLong-512K-dataset-subset](https://huggingface.co/datasets/caskcsg/NExtLong-512K-dataset-subset) | A subset randomly selected from the NExtLong-64K-dataset and NExtLong-512K-dataset; used to produce the Llama-3-8B-NExtLong-512K-Base model. |
| [NExtLong-Instruct-dataset-Magpie-Llama-3.3-Pro-1M-v0.1](https://huggingface.co/datasets/caskcsg/NExtLong-Instruct-dataset-Magpie-Llama-3.3-Pro-1M-v0.1) | A transformed version of the [Magpie-Align/Magpie-Llama-3.3-Pro-1M-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Llama-3.3-Pro-1M-v0.1) dataset, used to produce the Llama-3-8B-NExtLong-512K-Instruct model. |

<a id="dataset-use"></a>

### How to use NExtLong datasets

1. Due to differences in model tokenizers, the number of tokens after encoding each sample may not match the target length. Samples that exceed the target length after tokenization should be truncated, and the small portion of samples that fall short of the target length should be discarded (see the sketch after this list).

2. Given the abundance of data, we recommend discarding samples that do not reach the target length, or concatenating them using a document-mask technique. A document-mask implementation can be found in [ProLong](https://github.com/princeton-nlp/ProLong). If document masks are not used, please avoid randomly concatenating such samples.

3. Since our data is sourced solely from fineweb-edu and Cosmopedia v2, we recommend using 4B tokens of NExtLong data for long-context training. If a larger volume of data is desired, it is advisable to incorporate more data sources to prevent the model from overfitting to these two datasets.

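The following sketch illustrates points 1 and 2. The `text` column name, the `train` split, and the 128K target length are assumptions for illustration; check the dataset card for the actual schema before use.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

TARGET_LEN = 131072  # 128K tokens (illustrative target length)

tokenizer = AutoTokenizer.from_pretrained("caskcsg/Llama-3-8B-NExtLong-128K-Base")
# Streaming avoids downloading the whole dataset up front.
dataset = load_dataset("caskcsg/NExtLong-128K-dataset", split="train", streaming=True)

def truncate_or_drop(sample):
    """Return token ids truncated to TARGET_LEN, or None if the sample is too short."""
    ids = tokenizer(sample["text"], add_special_tokens=False)["input_ids"]
    if len(ids) < TARGET_LEN:
        return None          # Too short: discard (or concatenate with document masks).
    return ids[:TARGET_LEN]  # Long enough: truncate to the target length.

for sample in dataset:
    ids = truncate_or_drop(sample)
    if ids is None:
        continue
    # ... feed `ids` into your long-context training pipeline ...
    break  # remove this break to process the full dataset
```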
<a id="bugs-or-questions"></a>

## Bugs or questions?

If you have any questions related to the code or the paper, feel free to email Chaochen (`[email protected]`) and Xing Wu (`[email protected]`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to describe the problem in detail so we can help you better and more quickly!

<!-- ## Citation

Please cite our paper if you use NExtLong in your work:

```bibtex

``` -->