distill-LLaDA2-TIDE_Shared

This model was introduced in the paper Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models.

distill-LLaDA2-TIDE_Shared is a 0.6B diffusion language model distilled from LLaDA2.0-mini (a 16B MoE teacher) into the Qwen3-0.6B-diffusion-bd3lm-v0.1 student via the Cross-Tokenizer pipeline (Pipeline A) of the TIDE framework. This checkpoint corresponds to the TIDAL + CompDemo configuration applied within the cross-tokenizer pipeline (the non-native ablation).

Model Overview

Teacher: LLaDA2.0-mini (16B MoE)
Student backbone: dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1 (0.6B parameters)
Distillation pipeline: Cross-Tokenizer (Pipeline A)
Configuration: TIDAL + CompDemo (non-native ablation)
Precision: BF16

Installation

pip install torch transformers accelerate
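
As a quick, optional sanity check of the environment (nothing below is required; it only reports package versions and GPU availability):

import torch
import transformers

# Optional environment check: the model also loads on CPU, but bfloat16
# inference is only practical on a reasonably recent GPU.
print("torch:", torch.__version__, "| transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())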

Quick Start

This checkpoint is fully compatible with the BD3LM generate(...) routine published with dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1 — only the model name changes.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

repo = "TIDE-dllm/distill-LLaDA2-TIDE_Shared"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Custom BD3LM modeling code is fetched from the repo, hence trust_remote_code=True.
model = AutoModelForMaskedLM.from_pretrained(
    repo, dtype=torch.bfloat16, trust_remote_code=True,
).to(device).eval()
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)

prompts = [
    [
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Implement a DFS traversal in Python with clear inline comments."},
    ],
]
# Tokenize the chat prompts with the Qwen3 chat template; enable_thinking=False
# disables the thinking block before generation.
encoded = [
    tokenizer.apply_chat_template(m, add_generation_prompt=True, tokenize=True, enable_thinking=False)
    for m in prompts
]
# ... use the same `generate()` function as in dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1.
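
The actual generate() routine is defined in the dllm-hub/Qwen3-0.6B-diffusion-bd3lm-v0.1 repository and is not reproduced here. Purely to illustrate what block-diffusion decoding does (start from masked positions and commit the most confident predictions at each denoising step), the sketch below shows a simplified single-block variant. The helper name, the reliance on tokenizer.mask_token_id, and the unmasking schedule are illustrative assumptions, not the repo's API.

# Illustrative sketch only -- not the dllm-hub generate(); mask-token handling
# and the unmasking schedule are simplified assumptions.
@torch.no_grad()
def sketch_generate(prompt_ids, max_new_tokens=128, steps=64):
    mask_id = tokenizer.mask_token_id  # assumes the tokenizer defines a mask token
    x = torch.tensor([prompt_ids + [mask_id] * max_new_tokens], device=device)
    gen = slice(len(prompt_ids), x.shape[1])
    per_step = max(1, max_new_tokens // steps)
    for _ in range(steps):
        masked = x[0, gen] == mask_id
        if not masked.any():
            break
        probs = model(input_ids=x).logits[0, gen].softmax(-1)
        conf, pred = probs.max(-1)
        conf[~masked] = float("-inf")   # only consider positions that are still masked
        k = min(per_step, int(masked.sum()))
        top = conf.topk(k).indices      # "low_confidence"-style: commit the most confident
        x[0, gen][top] = pred[top]
    return tokenizer.decode(x[0, gen], skip_special_tokens=True)

print(sketch_generate(encoded[0], max_new_tokens=64, steps=32))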

Command-Line Interface

For an interactive demo (visualised iterative denoising), use the script in the TIDE / dLLM repo:

python -u examples/a2d/bd3lm/chat.py \
    --model_name_or_path TIDE-dllm/distill-LLaDA2-TIDE_Shared \
    --chat_template True --block_size 32 --remasking low_confidence \
    --steps 256 --max_new_tokens 256

Reproducing this checkpoint

git clone https://github.com/PKU-YuanGroup/TIDE && cd TIDE
pip install -e . && git submodule update --init --recursive
pip install -e "lm-evaluation-harness[ifeval,math]" && pip install -e "tokenkit[full]"

# Download the pre-tokenized SFT mixture for this teacher
huggingface-cli download TIDE-dllm/distill_llada2_sft --repo-type dataset \
    --local-dir data/distill_llada2_sft

bash scripts/distill_llada2.sh \
    --data_path data/distill_llada2_sft \
    --distill_mode alm_taid --use_comp_demo True \
    --num_gpus 8
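
After training, the distilled checkpoint can be evaluated with the bundled lm-evaluation-harness. The invocation below is a generic example; the exact task list, and whether the stock hf backend or a TIDE-specific diffusion-aware adapter is required, depend on the TIDE repo, so treat it as a starting point rather than the project's official evaluation command.

# Generic lm-eval invocation (assumes the harness can drive this checkpoint;
# a diffusion-aware backend from the TIDE repo may be needed instead of `hf`).
lm_eval --model hf \
    --model_args pretrained=TIDE-dllm/distill-LLaDA2-TIDE_Shared,trust_remote_code=True,dtype=bfloat16 \
    --tasks ifeval,gsm8k \
    --batch_size 8 \
    --output_path results/distill_llada2_tide_shared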

Citation

@misc{zhang2026turningtidecrossarchitecturedistillation,
      title={Turning the TIDE: Cross-Architecture Distillation for Diffusion Large Language Models},
      author={Gongbo Zhang and Wen Wang and Ye Tian and Li Yuan},
      year={2026},
      eprint={2604.26951},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.26951},
}