Any Image Restoration via Efficient Spatial-Frequency Degradation Adaptation

1University of Pisa, Italy, 2University of Trento, Italy, 3INSAIT, Sofia University ''St. Kliment Ohridski'', Bulgaria, 4University of Würzburg, Germany, 5ETH Zürich, Switzerland, 6Taiyuan University of Technology, China, 7University of California, Merced, USA
TMLR 2025

Abstract

Restoring multiple degradations efficiently with a single model has become increasingly significant and impactful, especially with the proliferation of mobile devices. Traditional solutions typically train a dedicated model per degradation, resulting in inefficiency and redundancy. More recent approaches either introduce additional modules to learn visual prompts, significantly increasing model size, or incorporate cross-modal transfer from large language models trained on vast datasets, adding complexity to the system architecture. In contrast, our approach, termed AnyIR, takes a unified path that leverages the inherent similarity across various degradations to enable both efficient and comprehensive restoration through a joint embedding mechanism, without scaling up the model or relying on large language models. Specifically, we examine the sub-latent space of each input, identifying its key components and reweighting them in a gated manner. To unify intrinsic degradation awareness with contextualized attention, we propose a spatial-frequency parallel fusion strategy that strengthens spatially informed local-global interactions and enriches restoration fidelity from the frequency domain. Comprehensive evaluations across four all-in-one restoration benchmarks demonstrate that AnyIR attains state-of-the-art performance while reducing model parameters by 84% and FLOPs by 80% relative to the baseline. These results highlight the potential of AnyIR as an effective and lightweight solution for all-in-one image restoration. Our code is available at: https://github.com/Amazingren/AnyIR.
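To make the spatial-frequency parallel fusion idea concrete, the toy sketch below shows one plausible reading of the abstract: a gated reweighting of sub-latent channels, a spatial branch, and a parallel frequency branch that modulates the 2D spectrum before the two are fused. All function names, the gating rule, and the spectrum weighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gated_reweight(latent):
    """Sigmoid-gated channel reweighting (illustrative assumption,
    standing in for the paper's gated sub-latent reweighting)."""
    gate = 1.0 / (1.0 + np.exp(-latent.mean(axis=(0, 1), keepdims=True)))
    return latent * gate

def spatial_frequency_fusion(feat):
    """Fuse a spatial branch with a frequency-domain branch in parallel.

    feat: (H, W, C) feature map. This is a minimal sketch, not AnyIR's
    architecture: the spatial branch is an identity placeholder and the
    frequency branch simply damps high-magnitude spectral modes.
    """
    # Spatial branch: placeholder for spatially informed local-global interactions.
    spatial = gated_reweight(feat)
    # Frequency branch: modulate the per-channel 2D spectrum, then invert.
    spec = np.fft.fft2(feat, axes=(0, 1))
    weight = 1.0 / (1.0 + np.abs(spec))  # hypothetical spectral reweighting
    freq = np.fft.ifft2(spec * weight, axes=(0, 1)).real
    # Parallel fusion: sum the two branches.
    return spatial + freq
```

In practice the two branches would be learned modules and the fusion would itself be parameterized; the sketch only shows the parallel spatial/frequency data flow the abstract describes.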

Architecture Overview

Motivation

(a) Dense all-in-one restoration methods often inefficiently allocate parameters when handling multiple degradation types.

(b) While recent Mixture-of-Experts (MoE) approaches address this through sparse computation, their rigid routing mechanisms uniformly distribute inputs across experts without considering the natural relationships between degradations.

(c) To overcome these limitations, we introduce Complexity Experts: adaptive processing blocks built from computational units of varying size. Our framework dynamically allocates model capacity using a spring-inspired force mechanism that continuously guides routing decisions toward simpler experts whenever possible, with the force proportional to the complexity of the input degradation. Although designed for computational efficiency, this approach naturally emerges as a task-discriminative learning framework, assigning each degradation to the most suitable expert. This makes it particularly effective for all-in-one restoration, where both task-specific processing and cross-degradation knowledge sharing are crucial.
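One simple way to read the spring-inspired routing is as a cost penalty on the router's scores: larger experts are penalized by a restoring force, so the router only selects them when the score margin justifies the extra capacity. The sketch below is a toy interpretation under that assumption; the function name, the linear penalty, and the way the force couples to degradation complexity are all hypothetical, not the paper's routing rule.

```python
import numpy as np

def route_with_spring_force(logits, expert_costs, force):
    """Pick an expert, biasing toward cheaper ones via a spring-like penalty.

    logits:       (E,) router scores per expert.
    expert_costs: (E,) relative compute cost (larger = bigger expert).
    force:        scalar strength of the pull toward simpler experts;
                  in the described framework this would be tied to the
                  estimated complexity of the input degradation.
    """
    biased = np.asarray(logits) - force * np.asarray(expert_costs)
    return int(np.argmax(biased))
```

With zero force the router follows its raw scores; as the force grows, selection shifts toward the cheapest expert unless a larger expert's score clearly dominates, which matches the intuition of allocating capacity only when the input demands it.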


Visual Comparison

Restoration Results on Three Degradations
Restoration Results on Five Degradations
Restoration Results on Composited Degradations

BibTeX

@misc{ren2025anyir,
      title={Any Image Restoration via Efficient Spatial-Frequency Degradation Adaptation}, 
      author={Bin Ren and Eduard Zamfir and Zongwei Wu and Yawei Li and Yidi Li and Danda Pani Paudel and Ming-Hsuan Yang and Luc Van Gool and Nicu Sebe},
      year={2025},
      eprint={2503.xxx},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}