Recent advances in diffusion-based video restoration (VR) demonstrate significant improvements in visual quality, yet incur prohibitive computational cost at inference. While several distillation-based approaches have shown the potential of one-step image restoration, extending them to VR remains challenging and underexplored due to limited generation ability and poor temporal consistency, particularly for high-resolution video in real-world settings. In this work, we propose a one-step diffusion-based VR model, termed SeedVR2, which performs adversarial VR training against real data. To handle challenging high-resolution VR in a single step, we introduce several enhancements to both the model architecture and the training procedure. Specifically, we propose an adaptive window attention mechanism that dynamically adjusts the window size to the output resolution, avoiding the window inconsistency observed in high-resolution VR when window attention uses a predefined window size. To stabilize and improve adversarial post-training for VR, we further verify the effectiveness of a series of losses, including a proposed feature matching loss, without significantly sacrificing training efficiency. Extensive experiments show that SeedVR2 achieves comparable or even better performance than existing VR approaches in a single step.
In this work, we present a one-step Diffusion Transformer (DiT) model for generic video restoration (VR) that efficiently handles varying output resolutions. We introduce an adaptive window attention mechanism that enables efficient high-resolution (i.e., 1080p) restoration with faithful details in a single forward step. Within the adversarial post-training framework, we explore design improvements specific to video restoration, focusing on the loss function and progressive distillation. Extensive experiments validate the effectiveness of our design and demonstrate the superiority of our method over existing methods, both quantitatively and qualitatively.
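To illustrate the idea of adjusting the window size to the output resolution, here is a minimal, hypothetical sketch (not the authors' implementation): it picks the largest window size that evenly tiles the token grid, so no partial windows appear at the borders when the resolution changes. The function name, `base`/`max_window` parameters, and the divisor-search strategy are all assumptions for illustration only.

```python
def adaptive_window_size(height: int, width: int,
                         base: int = 8, max_window: int = 32) -> int:
    """Return the largest window size <= max_window that divides both
    the token-grid height and width, falling back to `base`.

    A window that divides both dimensions tiles the grid exactly,
    avoiding the inconsistent partial windows a fixed size would
    produce at high resolutions.
    """
    for w in range(max_window, base - 1, -1):
        if height % w == 0 and width % w == 0:
            return w
    return base

# e.g. a hypothetical 135 x 240 token grid: 15 divides both dimensions
print(adaptive_window_size(135, 240))
```

The divisor search is one simple way to realize "window size fits the resolution"; the actual mechanism in the model may differ.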
@article{wang2025seedvr2,
title={SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training},
author={Wang, Jianyi and Lin, Shanchuan and Lin, Zhijie and Ren, Yuxi and Wei, Meng and Yue, Zongsheng and Zhou, Shangchen and Chen, Hao and Zhao, Yang and Yang, Ceyuan and Xiao, Xuefeng and Loy, Chen Change and Jiang, Lu},
journal={arXiv preprint arXiv:2506.05301},
year={2025}
}