Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models
This model was introduced in the paper Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models.
distill-LLaDA2-TIDE_Shared is a 0.6B diffusion language model distilled from LLaDA2.0-mini (a 16B MoE teacher) into the Qwen3-0.6B-diffusion-bd3lm-v0.1 student via the Cross-Tokenizer pipeline (Pipeline A) of the TIDE framework. TIDAL + CompDemo are applied within the cross-tokenizer pipeline (a non-native ablation).
Student: Qwen3-0.6B-diffusion-bd3lm-v0.1 (BD3LM, block_size=32)
Teacher: inclusionAI/LLaDA2.0-mini
Recipe: --distill_mode alm_taid --use_comp_demo True, initialized from the Qwen3-0.6B-diffusion-bd3lm-v0.1 base
Data: pre-tokenized for this teacher in TIDE-dllm/distill_llada2_sft
Setup: pip install torch transformers accelerate
This checkpoint is fully compatible with the BD3LM generate(...) routine published with dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1; only the model name changes.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo = "TIDE-dllm/distill-LLaDA2-TIDE_Shared"
device = "cuda" if torch.cuda.is_available() else "cpu"

# trust_remote_code=True loads the custom BD3LM modeling code shipped with the repo.
model = AutoModelForMaskedLM.from_pretrained(
    repo, dtype=torch.bfloat16, trust_remote_code=True,
).to(device).eval()
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)

prompts = [
    [
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Implement a DFS traversal in Python with clear inline comments."},
    ],
]
# Apply the chat template; enable_thinking=False turns off Qwen3's thinking mode.
encoded = [
    tokenizer.apply_chat_template(m, add_generation_prompt=True, tokenize=True, enable_thinking=False)
    for m in prompts
]
# ... use the same `generate()` function as in dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1.
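If you prefer a self-contained illustration, the sketch below shows the general shape of blockwise low-confidence remasking decoding. It is an assumption-laden sketch, not the official routine: the real generate() may differ in its remasking schedule, caching, and stopping criteria, and sketch_generate, mask_id, and the per-step unmasking budget are all illustrative choices.

# Hedged sketch of BD3LM-style blockwise decoding (assumes batch size 1).
# NOT the official generate(); see dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1.
@torch.no_grad()
def sketch_generate(model, input_ids, mask_id, max_new_tokens=256, block_size=32):
    x = input_ids
    for _ in range(max_new_tokens // block_size):
        # Append one fully masked block, then denoise it iteratively.
        block = torch.full((x.size(0), block_size), mask_id, device=x.device, dtype=x.dtype)
        x = torch.cat([x, block], dim=1)
        for step in range(block_size):
            masked = x == mask_id
            if not masked.any():
                break
            conf, pred = model(x).logits.softmax(dim=-1).max(dim=-1)
            conf = conf.masked_fill(~masked, -1.0)  # rank only masked positions
            # Commit the most confident predictions; defer the rest to later steps.
            k = max(1, int(masked.sum()) // (block_size - step))
            idx = conf.topk(k, dim=-1).indices
            x = x.scatter(1, idx, pred.gather(1, idx))
    return x

# Example wiring (tokenizer.mask_token_id is an assumption about this tokenizer):
# out = sketch_generate(model, torch.tensor([encoded[0]], device=device), tokenizer.mask_token_id)
# print(tokenizer.decode(out[0][len(encoded[0]):], skip_special_tokens=True))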
For an interactive demo (visualised iterative denoising), use the script in the TIDE / dLLM repo:
python -u examples/a2d/bd3lm/chat.py \
  --model_name_or_path TIDE-dllm/distill-LLaDA2-TIDE_Shared \
  --chat_template True --block_size 32 --remasking low_confidence \
  --steps 256 --max_new_tokens 256
To reproduce the distillation, set up the TIDE repo, its submodules, and the evaluation extras:
git clone https://github.com/PKU-YuanGroup/TIDE && cd TIDE
pip install -e . && git submodule update --init --recursive
pip install -e "lm-evaluation-harness[ifeval,math]" && pip install -e "tokenkit[full]"
# Download the pre-tokenized SFT mixture for this teacher
huggingface-cli download TIDE-dllm/distill_llada2_sft --repo-type dataset \
  --local-dir data/distill_llada2_sft
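As a quick sanity check on the download, you can inspect the mixture with the datasets library. The split and column names below are assumptions; print the dataset object to see the actual schema.

from datasets import load_dataset

# Load straight from the Hub (or point at the local data/distill_llada2_sft copy).
# split="train" is an assumption; adjust after inspecting the repo.
ds = load_dataset("TIDE-dllm/distill_llada2_sft", split="train")
print(ds)            # features and row count
print(ds[0].keys())  # column names of the pre-tokenized examples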
# Launch the TIDAL + CompDemo distillation run
bash scripts/distill_llada2.sh \
  --data_path data/distill_llada2_sft \
  --distill_mode alm_taid --use_comp_demo True \
  --num_gpus 8
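The --distill_mode alm_taid flag suggests a TAID-style objective, in which the distillation target is interpolated between the student's own distribution and the teacher's over the course of training. Below is a rough, hedged sketch of such a loss, ignoring the cross-tokenizer logit alignment that Pipeline A performs; taid_style_kd_loss is an illustrative name, and TIDE's actual implementation in the repo may differ.

import torch.nn.functional as F

def taid_style_kd_loss(student_logits, teacher_probs, t):
    """Illustrative TAID-style KD loss (not TIDE's exact objective).
    t in [0, 1] moves the target from the (detached) student distribution
    toward the teacher distribution as training progresses."""
    log_q = F.log_softmax(student_logits, dim=-1)
    # Interpolated target; detaching the student term keeps it a fixed target.
    target = (1.0 - t) * log_q.exp().detach() + t * teacher_probs
    # kl_div expects log-probs as input and probs as target: KL(target || student).
    return F.kl_div(log_q, target, reduction="batchmean")

The idea is that a small t keeps the target close to what the student can already model, easing the 16B-MoE-to-0.6B capacity gap; t is typically annealed toward 1 so the student ends up matching the teacher.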
If you use this model, please cite the TIDE paper:

@misc{zhang2026turningtidecrossarchitecturedistillation,
      title={Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models},
      author={Gongbo Zhang and Wen Wang and Ye Tian and Li Yuan},
      year={2026},
      eprint={2604.26951},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.26951},
}
Base model: Qwen/Qwen3-0.6B-Base