arxiv:2602.06694

NanoQuant: Efficient Sub-1-Bit Quantization of Large Language Models

Published on Feb 6 · Submitted by Hyochan Chong on Feb 10
Authors:

Abstract

NanoQuant enables efficient post-training quantization of large language models to binary and sub-1-bit levels using low-rank binary factorization and ADMM optimization, achieving state-of-the-art accuracy while reducing memory requirements for consumer hardware deployment.
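
For intuition on why a low-rank binary factorization can dip below one bit per weight, the short calculation below counts the bits stored when a weight matrix is replaced by two binary factors plus per-row scales. This is only a sketch: the layer shape, rank, and scale layout are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope storage for W (d_out x d_in) ~= diag(s) @ (U @ V),
# with U in {-1,+1}^(d_out x r) and V in {-1,+1}^(r x d_in).
# d_out, d_in, and r below are illustrative, not values from the paper.
d_out, d_in, r = 8192, 8192, 2048

dense_bits  = d_out * d_in * 16            # fp16 baseline
factor_bits = (d_out * r + r * d_in) * 1   # 1 bit per binary factor entry
scale_bits  = d_out * 16                   # one fp16 scale per output row (assumed layout)

bits_per_weight = (factor_bits + scale_bits) / (d_out * d_in)
print(f"effective bits per weight: {bits_per_weight:.2f}")                       # ~0.50
print(f"compression vs. fp16: {dense_bits / (factor_bits + scale_bits):.1f}x")   # ~31.9x
```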

AI-generated summary

Weight-only quantization has become a standard approach for efficiently serving large language models (LLMs). However, existing methods fail to efficiently compress models to binary (1-bit) levels, as they either require large amounts of data and compute or incur additional storage. In this work, we propose NanoQuant, the first post-training quantization (PTQ) method to compress LLMs to both binary and sub-1-bit levels. NanoQuant formulates quantization as a low-rank binary factorization problem and compresses full-precision weights into low-rank binary matrices and scales. Specifically, it uses an efficient alternating direction method of multipliers (ADMM) procedure to precisely initialize the latent binary matrices and scales, and then tunes the initialized parameters through a block- and model-level reconstruction process. Consequently, NanoQuant establishes a new Pareto frontier in low-memory post-training quantization, achieving state-of-the-art accuracy even at sub-1-bit compression rates. NanoQuant makes large-scale deployment feasible on consumer hardware: for example, it compresses Llama2-70B by 25.8× in just 13 hours on a single H100, enabling a 70B model to operate on a consumer 8 GB GPU.
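
To make the factorization objective concrete, here is a toy NumPy sketch that fits W ≈ diag(s)·(U·V) with binary factors and per-row scales. It uses plain alternating least squares with a sign projection as a stand-in for the paper's Hessian-aware ADMM initialization; it is an illustration of the objective only, not NanoQuant's algorithm.

```python
import numpy as np

def sgn(X):
    """Binarize to {-1, +1} (ties map to +1)."""
    return np.where(X >= 0, 1.0, -1.0)

def binary_factorize(W, r, iters=30, seed=0):
    """Fit W ~= diag(s) @ (U @ V) with U, V in {-1,+1} and per-row scales s.
    Alternating least squares on continuous latent factors plus a sign
    projection -- a toy stand-in for the paper's ADMM-based initialization."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((W.shape[0], r))   # continuous latent left factor
    B = rng.standard_normal((r, W.shape[1]))   # continuous latent right factor
    for _ in range(iters):
        A = W @ np.linalg.pinv(sgn(B))         # least-squares fit given sgn(B)
        B = np.linalg.pinv(sgn(A)) @ W         # least-squares fit given sgn(A)
    U, V = sgn(A), sgn(B)
    P = U @ V
    # Closed-form per-row scale: s_i = <W_i, P_i> / <P_i, P_i>
    s = (W * P).sum(axis=1) / np.maximum((P * P).sum(axis=1), 1e-8)
    return s, U.astype(np.int8), V.astype(np.int8)

# Sanity check on a random matrix
W = np.random.default_rng(1).standard_normal((256, 256))
s, U, V = binary_factorize(W, r=96)
P = U.astype(np.float64) @ V.astype(np.float64)
err = np.linalg.norm(W - s[:, None] * P) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```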

Community

Paper author · Paper submitter
edited 1 day ago

In this paper, we tackle a hard gap in LLM deployment: post-training methods are good at 3–4 bit weight-only quantization, but they largely break down at true 1-bit because they either need substantial data/compute or silently give back memory via extra metadata. To bridge the divide between binary PTQ and binary QAT, we introduce NanoQuant, a PTQ framework that reframes quantization as low-rank binary factorization and makes it work by precisely initializing latent binary factors and scales with a robust Hessian-aware ADMM procedure, then running a hierarchical block-wise reconstruction followed by lightweight model-level scale calibration for global activation alignment. Crucially, it is explicitly designed to be practical: we use only 128 calibration samples (~0.26M tokens) and 1 GPU.
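
For readers unfamiliar with block-wise reconstruction, the PyTorch sketch below shows the general pattern: freeze the binary factors and tune only the remaining float parameters (scales, norms) so the quantized block matches the full-precision block on calibration activations. The function and argument names are hypothetical and this is not NanoQuant's implementation; the model-level scale calibration mentioned above follows the same pattern with full-model outputs as the target.

```python
import torch

def reconstruct_block(fp_block, q_block, calib_batches, steps=200, lr=1e-3):
    """Generic block-wise reconstruction sketch: minimize the MSE between the
    quantized block and its full-precision teacher on calibration inputs.
    Assumes fp_block / q_block are callables returning a single tensor."""
    fp_block.eval()
    # Only float-valued parameters (scales, norms) carry gradients; the binary
    # factors are assumed frozen (requires_grad=False).
    trainable = [p for p in q_block.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=lr)
    for step in range(steps):
        x = calib_batches[step % len(calib_batches)]
        with torch.no_grad():
            target = fp_block(x)          # teacher output for this block
        loss = torch.nn.functional.mse_loss(q_block(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q_block
```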

NanoQuant compresses a 70B model from 138.04 GB to 5.35 GB and runs the quantized 70B model on a consumer 8 GB GPU at up to 20.11 tokens/s. Across model families, NanoQuant is the only PTQ framework that effectively enables sub-1-bit compression while staying competitive on language modeling quality. We also implement custom binary GEMV/GEMM CUDA kernels that improve throughput, memory footprint, and energy efficiency across datacenter, consumer, and edge GPUs.
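
The kernels themselves are CUDA and are not reproduced here. As a plain NumPy illustration of the bit-packing idea such kernels rely on, the sketch below stores a {-1,+1} weight matrix as uint8 bit-planes and evaluates the scaled binary matrix-vector product directly from the packed bits; the names and layout are assumptions for illustration, not the paper's kernel design.

```python
import numpy as np

def pack_pm1(B):
    """Pack a {-1,+1} matrix into uint8 bit-planes (bit=1 encodes +1): 16x smaller than fp16."""
    return np.packbits((B > 0).astype(np.uint8), axis=1)

def binary_gemv(packed, scales, x):
    """y = diag(scales) @ B @ x for the packed {-1,+1} matrix B, using the
    identity B @ x = 2 * (bits @ x) - sum(x), where bits are the stored {0,1} values."""
    bits = np.unpackbits(packed, axis=1, count=x.shape[0]).astype(x.dtype)
    return scales * (2.0 * (bits @ x) - x.sum())

# Check against the dense reference
rng = np.random.default_rng(0)
d_out, d_in = 128, 512
B = np.where(rng.standard_normal((d_out, d_in)) > 0, 1.0, -1.0).astype(np.float32)
s = rng.random(d_out).astype(np.float32)
x = rng.standard_normal(d_in).astype(np.float32)
print(np.allclose(s * (B @ x), binary_gemv(pack_pm1(B), s, x), atol=1e-4))  # True
```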

Did you guys release any checkpoints? I'm curious to test the Qwen 8B quantized down.

Paper author

Hi, we are working on open-sourcing the code, so please stay tuned!


Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2602.06694 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2602.06694 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2602.06694 in a Space README.md to link it from this page.

Collections including this paper 3