NOSA: Native and Offloadable Sparse Attention

Boost Decoding Efficiency via High-Locality Offloading

Overview

NOSA is a trainable sparse attention mechanism designed for KV-cache offloading with an explicit locality constraint, paired with an inference system (NOSI) that realizes its efficiency gains. It improves long-context and long-generation quality over prior offloading baselines while boosting decoding throughput by up to 5.04× over FullAttn, 1.92× over InfLLMv2, and 1.83× over ShadowKV on 1B/3B/8B LLMs.

The model was presented in the paper NOSA: Native and Offloadable Sparse Attention.
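
The locality constraint is what keeps offloading cheap during decoding: when the KV blocks selected at one step largely overlap with those selected at the previous step, most of the needed cache is already resident on the GPU and only a few blocks must be fetched from CPU memory. The sketch below is purely illustrative and is not NOSA's actual selection algorithm; the block summaries, scoring, and swap budget are invented for the example, and it only demonstrates why bounding how many selected blocks change between steps keeps transfer volume low.

```python
# Illustrative sketch only, NOT the NOSA kernel: block-sparse KV selection
# with a locality budget, so that most selected blocks repeat across decoding
# steps and few need to be fetched from CPU memory. Block size, scoring, and
# the swap budget are made-up hyperparameters for illustration.
import torch

def select_blocks(query, block_keys, prev_blocks, top_k=8, max_swaps=2):
    """Pick top_k KV blocks for the current decoding step.

    query:       (d,)   current query vector
    block_keys:  (B, d) one summary key (e.g. mean of keys) per KV block
    prev_blocks: set[int] blocks selected at the previous step
    max_swaps:   how many blocks may differ from prev_blocks (locality budget)
    """
    scores = block_keys @ query                       # (B,) block relevance
    ranked = torch.argsort(scores, descending=True).tolist()

    if not prev_blocks:                               # first step: plain top-k
        return set(ranked[:top_k])

    # Keep the best previously-selected blocks, then allow a few swaps to
    # newly relevant blocks; everything else stays resident on the GPU.
    kept = [b for b in ranked if b in prev_blocks][: top_k - max_swaps]
    fresh = [b for b in ranked if b not in prev_blocks][:max_swaps]
    selected = (kept + fresh)[:top_k]
    # Pad from the global ranking if the previous selection was too small.
    for b in ranked:
        if len(selected) >= top_k:
            break
        if b not in selected:
            selected.append(b)
    return set(selected)

# Toy usage: 64 KV blocks of dimension 128, two consecutive decode steps.
torch.manual_seed(0)
block_keys = torch.randn(64, 128)
q1, q2 = torch.randn(128), torch.randn(128)
step1 = select_blocks(q1, block_keys, prev_blocks=set())
step2 = select_blocks(q2, block_keys, prev_blocks=step1)
print(len(step1 & step2), "of", len(step2), "blocks reused")  # high overlap
```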

Models

We train 1B, 3B, and 8B models using FullAttn, InfLLMv2, DMA, and NOSA. The following NOSA models have been released on Hugging Face:

Model     Hugging Face repository
NOSA-1B   openbmb/NOSA-1B
NOSA-3B   openbmb/NOSA-3B
NOSA-8B   openbmb/NOSA-8B
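
Below is a minimal loading sketch through the standard transformers API. It assumes the released checkpoints ship with custom modeling code (hence trust_remote_code=True); the generation settings are illustrative defaults, not the authors' recommended configuration, and the decoding-throughput gains reported above require the NOSI inference system described in the paper rather than plain transformers generation.

```python
# Minimal loading sketch (assumptions: custom modeling code in the repo,
# hence trust_remote_code=True; prompt and generation length are arbitrary).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openbmb/NOSA-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # checkpoints are released in BF16
    device_map="auto",
    trust_remote_code=True,
)

prompt = "Summarize the key idea behind KV-cache offloading:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```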

If you need the baseline models trained with FullAttn, InfLLMv2, or DMA, please open an issue on the GitHub repository or contact the authors by email.

Citation

@article{huang2025nosa,
  title={NOSA: Native and Offloadable Sparse Attention},
  author={Huang, Yuxiang and Wang, Pengjie and Han, Jicheng and Zhao, Weilin and Su, Zhou and Sun, Ao and Lyu, Hongya and Zhao, Hengyu and Wang, Yudong and Xiao, Chaojun and Han, Xu and Liu, Zhiyuan},
  journal={arXiv preprint arXiv:2510.13602},
  year={2025}
}