InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU
Abstract
In modern large language models (LLMs), handling very long context lengths presents significant challenges, as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce InfiniteHiP, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also enables generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU (3x larger) without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
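To make the "modular hierarchical token pruning" idea in the abstract more concrete, here is a minimal, illustrative sketch of multi-stage block-wise key pruning in plain PyTorch. The function name, block size, stage schedule, and the representative-key scoring heuristic are assumptions chosen for illustration; this is not the paper's actual kernel.

```python
# Illustrative sketch of multi-stage block-wise token pruning for sparse attention.
# NOT the InfiniteHiP kernel; names, block size, and the scoring heuristic are
# assumptions chosen to illustrate the general "prune, then refine" idea.
import torch


def hierarchical_prune(query, keys, block_size=64, keep_blocks=(256, 64, 16)):
    """Iteratively narrow down candidate key blocks for one query vector.

    query: (d,) tensor, keys: (T, d) tensor. Returns indices of surviving tokens.
    """
    T, d = keys.shape
    num_blocks = (T + block_size - 1) // block_size
    candidates = torch.arange(num_blocks)  # start with every block as a candidate

    for k in keep_blocks:  # each stage keeps fewer blocks than the previous one
        if len(candidates) <= k:
            continue
        # Score each candidate block with a cheap proxy: the dot product between
        # the query and the block's first (representative) key.
        rep_idx = (candidates * block_size).clamp(max=T - 1)
        scores = keys[rep_idx] @ query
        candidates = candidates[scores.topk(k).indices]

    # Expand the surviving blocks back into token indices for sparse attention.
    token_idx = (candidates[:, None] * block_size + torch.arange(block_size)).flatten()
    return token_idx[token_idx < T]


# Usage: attend only over the surviving tokens instead of all T keys.
q = torch.randn(128)
K = torch.randn(1_000_000, 128)
kept = hierarchical_prune(q, K)                       # ~1K tokens instead of 1M
attn = torch.softmax(K[kept] @ q / 128 ** 0.5, dim=-1)
```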
Community
🚀 We are thrilled to announce our new paper "InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU"!
📄 Paper: https://huggingface.co./papers/2502.08910
😺 Source code: https://github.com/DeepAuto-AI/hip-attention/
😺 SGLang Integration available now: https://github.com/DeepAuto-AI/sglang/
▶️ Try our Live Demo with DeepSeek 14B at https://chat.deepauto.ai/
🔑 Key features of our proposed method ♾️ InfiniteHiP ♾️:
♾️ 18.95x Speedup in Attention Decoding on 1M Tokens with Efficient Multi-stage Context Pruning
♾️ 7.25× Faster End-to-end Decoding Throughput on a 3M Token Context
♾️ Training-free Out-of-length Generalization Capability with Dynamic RoPE Adjustment
♾️ Efficiently Handle up to 3 Million Tokens on a Single L40s 48GB GPU with Dynamic KV Cache Offloading
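As a rough illustration of what "dynamic KV cache offloading" means in practice, the sketch below keeps the full KV cache in pinned host memory and copies only the token blocks selected by the pruning stage to the GPU on demand. This is a simplified, assumption-level example in plain PyTorch, not the actual offloading engine in the repository.

```python
# Simplified sketch of KV-cache offloading: the full cache lives in pinned CPU
# memory and only the tokens selected for the current step are copied to the GPU.
# An illustration of the concept, not the InfiniteHiP implementation.
import torch


class OffloadedKVCache:
    def __init__(self, max_tokens, num_heads, head_dim, device="cuda"):
        # Pinned (page-locked) host memory enables fast, asynchronous H2D copies.
        shape = (max_tokens, num_heads, head_dim)
        self.k = torch.empty(shape, dtype=torch.float16, pin_memory=True)
        self.v = torch.empty(shape, dtype=torch.float16, pin_memory=True)
        self.device = device
        self.length = 0

    def append(self, k_new, v_new):
        """Store newly generated keys/values on the host."""
        n = k_new.shape[0]
        self.k[self.length:self.length + n].copy_(k_new.cpu())
        self.v[self.length:self.length + n].copy_(v_new.cpu())
        self.length += n

    def gather(self, token_indices):
        """Fetch only the selected tokens' KV entries to the GPU."""
        k_sel = self.k[token_indices].to(self.device, non_blocking=True)
        v_sel = self.v[token_indices].to(self.device, non_blocking=True)
        return k_sel, v_sel


# Usage: the pruning stage picks a small set of token indices per decoding step,
# so only those KV entries ever occupy GPU memory.
# (Small max_tokens here; scale toward 3M given enough pinned host RAM.)
cache = OffloadedKVCache(max_tokens=262_144, num_heads=8, head_dim=128)
```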
Video Demo
Haha, actually that is also in our scope. Stay tuned :+1:
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- A Survey on Large Language Model Acceleration based on KV Cache Management (2024)
- FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving (2025)
- MPCache: MPC-Friendly KV Cache Eviction for Efficient Private Large Language Model Inference (2025)
- LCIRC: A Recurrent Compression Approach for Efficient Long-form Context and Query Dependent Modeling in LLMs (2025)
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference (2025)
- Twilight: Adaptive Attention Sparsity with Hierarchical Top-$p$ Pruning (2025)
- Efficient Prompt Compression with Evaluator Heads for Long-Context Transformer Inference (2025)
Will you evaluate the PPL metric (I think PPL is an important metric for observing how accuracy changes)? And is there any comparison with MInference, where they claim they can prefill a 1M-token context on an A100?
Hi,
About PPL: we dropped that metric from InfiniteHiP (we did measure PPL in previous papers) because it does not correlate well with downstream performance once PPL recovers to within a certain threshold (gaps of about ±0.1). We observed significantly poorer long-context performance even in cases where PPL improved (lower PPL than FA2). In our GitHub issues, we provide a detailed guide for measuring the PPL of our framework (it is outdated but still works). This issue might be helpful: https://github.com/DeepAuto-AI/hip-attention/issues/20#issuecomment-2517265455
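For readers who want to reproduce a PPL number themselves, a generic sliding-window perplexity loop over a long document typically looks like the sketch below (standard Hugging Face transformers usage). This is not the script from the linked issue; the model name, window size, and stride are placeholders.

```python
# Generic sliding-window perplexity measurement with Hugging Face transformers.
# Not the script from the linked GitHub issue; model name and window sizes are
# placeholders for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"   # placeholder; use the model under test
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).cuda()

text = open("long_document.txt").read()
ids = tokenizer(text, return_tensors="pt").input_ids.cuda()

window, stride = 4096, 2048
nlls, counted, prev_end = [], 0, 0
for start in range(0, ids.size(1), stride):
    end = min(start + window, ids.size(1))
    trg_len = end - prev_end               # number of new tokens to score
    input_ids = ids[:, start:end]
    labels = input_ids.clone()
    labels[:, :-trg_len] = -100            # ignore tokens already scored earlier
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    nlls.append(loss * trg_len)            # re-weight the mean loss by token count
    counted += trg_len
    prev_end = end
    if end == ids.size(1):
        break

ppl = torch.exp(torch.stack(nlls).sum() / counted)
print(f"perplexity: {ppl.item():.3f}")
```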
Regarding MInference, we think it does not match our problem setting well. We want to sparsify both prefill and decode (MInference only sparsifies prefill) to speed up inference, and to reduce GPU memory via offloading (MInference cannot; it serves 1M tokens with 80GB, while we serve 1-3M tokens with 48GB). Moreover, since MInference does not support context extension, we think InfLLM is much closer to our problem setting; therefore, we believe InfLLM is a more direct competitor than MInference. However, we may add an MInference comparison as an appendix in the future.
Thanks for the comment!
Heejun