[ECCV 2024] Restoring Images in Adverse Weather Conditions via Histogram Transformer

[paper] [arXiv] [poster]

[Figures: cover figure and network structure]

Restoring Images in Adverse Weather Conditions via Histogram Transformer
Shangquan Sun, Wenqi Ren, Xinwei Gao, Rui Wang, Xiaochun Cao
European Conference on Computer Vision

Abstract: Transformer-based image restoration methods in adverse weather have achieved significant progress. Most of them apply self-attention along the channel dimension or within spatially fixed-range blocks to reduce computational load, but this compromise limits their ability to capture long-range spatial features. Motivated by the observation that weather-induced degradation mainly causes similar patterns of occlusion and brightness change, we propose an efficient Histogram Transformer (Histoformer) for restoring images affected by adverse weather. It is powered by a mechanism dubbed histogram self-attention, which sorts and segments spatial features into intensity-based bins; self-attention is then applied across bins or within each bin to focus on spatial features over a dynamic range and to process similarly degraded pixels together, even when they are far apart. To further boost histogram self-attention, we present a dynamic-range convolution that lets conventional convolution operate over similar pixels rather than neighboring ones. We also observe that common pixel-wise losses neglect the linear association and correlation between output and ground truth, so we leverage the Pearson correlation coefficient as a loss function to enforce that the recovered pixels follow the same ordering as the ground truth. Extensive experiments demonstrate the efficacy and superiority of our proposed method.
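
As a concrete illustration of the correlation-based loss mentioned in the abstract, here is a minimal PyTorch sketch of a Pearson-correlation loss. This is my own reading of the idea, not code taken from the repository, and it may differ from the official implementation:

```python
import torch

def pearson_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """1 - Pearson correlation between prediction and ground truth,
    computed per image over all pixels and averaged over the batch."""
    b = pred.shape[0]
    x = pred.reshape(b, -1)
    y = target.reshape(b, -1)
    # Center each flattened image, then take the normalized inner product.
    x = x - x.mean(dim=1, keepdim=True)
    y = y - y.mean(dim=1, keepdim=True)
    corr = (x * y).sum(dim=1) / (x.norm(dim=1) * y.norm(dim=1) + eps)
    return (1.0 - corr).mean()
```

Minimizing this term pushes the restored pixel values toward the same ordering as the ground truth, which is exactly what a pure pixel-wise loss does not enforce.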

📣 Citation

If you use Histoformer, please consider citing:

@article{sun2024restoring,
  title={Restoring Images in Adverse Weather Conditions via Histogram Transformer},
  author={Sun, Shangquan and Ren, Wenqi and Gao, Xinwei and Wang, Rui and Cao, Xiaochun},
  journal={arXiv preprint arXiv:2407.10172},
  year={2024}
}

@InProceedings{10.1007/978-3-031-72670-5_7,
    author="Sun, Shangquan and Ren, Wenqi and Gao, Xinwei and Wang, Rui and Cao, Xiaochun",
    editor="Leonardis, Ale{\v{s}} and Ricci, Elisa and Roth, Stefan and Russakovsky, Olga and Sattler, Torsten and Varol, G{\"u}l",
    title="Restoring Images in Adverse Weather Conditions via Histogram Transformer",
    booktitle="Computer Vision -- ECCV 2024",
    year="2025",
    publisher="Springer Nature Switzerland",
    address="Cham",
    pages="111--129",
    isbn="978-3-031-72670-5"
}

🚀 News

🧩 Datasets

According to [Issue#2] and [Issue#8], many related datasets are unavailable. I have provided them below:

| Snow100K Training | Snow100K Test Set | Snow100K Masks | Outdoor-Rain Test1 |
| --- | --- | --- | --- |
| [Google Drive] | [Google Drive] / [BaiduYun Disk] (pin: yuia) | [Google Drive] / [BaiduYun Disk] (pin: hstm) | [Google Drive] |

😄 Visual Results

All visual results are available in Google Drive and Baidu Disk (pin: ps9q). You can also browse them individually via the examples below.

Examples:

[Example restorations: RainDrop, Outdoor-Rain, Snow100K-L, RealSnow]

⚙️ Installation

See INSTALL.md for the installation of dependencies required to run Histoformer.

🛠️ Training

  1. Download the Training set, or each subset individually, i.e., Snow100K, Outdoor-Rain, and RainDrop.

Note: The original link for downloading Snow100K has expired, so please refer to [Issue#2] for alternative download links.

  2. Modify the configurations of dataroot_gt and dataroot_lq for train, val_snow_s, val_snow_l, val_test1 and val_raindrop in Allweather/Options/Allweather_Histoformer.yml

  3. To train Histoformer with default settings, run

cd Histoformer
./train.sh Allweather/Options/Allweather_Histoformer.yml 4321

Note: The above training script uses 4 GPUs by default. To use a different number of GPUs, for example 8, modify CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 and --nproc_per_node=8 in Histoformer/train.sh, and set num_gpu: 8 in Allweather/Options/Allweather_Histoformer.yml.
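
The two settings above have to stay consistent. A small sanity-check sketch (not part of the repository; it assumes the option file parses with yaml.safe_load and contains the num_gpu key mentioned above):

```python
import yaml
import torch

# Compare num_gpu in the option file against the GPUs that are actually
# visible (torch.cuda.device_count() respects CUDA_VISIBLE_DEVICES).
with open("Allweather/Options/Allweather_Histoformer.yml", "r") as f:
    opt = yaml.safe_load(f)

visible = torch.cuda.device_count()
if opt.get("num_gpu") != visible:
    print(f"Warning: num_gpu in YAML is {opt.get('num_gpu')} but {visible} GPU(s) are visible; "
          "update the YAML and --nproc_per_node in Histoformer/train.sh to match.")
```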

⚖️ Evaluation

  1. cd Allweather

  2. Download the pre-trained models and place them in ./pretrained_models/

  3. Download the test sets of each benchmark, i.e., Snow100K, Outdoor-Rain, and RainDrop.

  4. Run the test script, replacing --input_dir [INPUT_FOLDER] with the path to the test images:

python test_histoformer.py --input_dir [INPUT_FOLDER] --result_dir result/ --weights pretrained_models/net_g_best.pth --yaml_file Options/Allweather_Histoformer.yml

# for realsnow
python test_histoformer.py --input_dir [INPUT_FOLDER] --result_dir result/ --weights pretrained_models/net_g_real.pth --yaml_file Options/Allweather_Histoformer.yml
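
To evaluate all benchmarks in one go, a small driver like the following can be used. This is only a sketch: the folder names under datasets/ are hypothetical, and it assumes only the command-line flags shown above.

```python
import subprocess

# Hypothetical layout: one folder of degraded inputs per benchmark.
test_sets = {
    "Snow100K-S": "datasets/Snow100K-S/input",
    "Snow100K-L": "datasets/Snow100K-L/input",
    "Test1": "datasets/Test1/input",
    "RainDrop": "datasets/RainDrop/input",
}

for name, input_dir in test_sets.items():
    subprocess.run(
        [
            "python", "test_histoformer.py",
            "--input_dir", input_dir,
            "--result_dir", f"result/{name}/",
            "--weights", "pretrained_models/net_g_best.pth",
            "--yaml_file", "Options/Allweather_Histoformer.yml",
        ],
        check=True,
    )
```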

  5. Compute PSNR and SSIM by running

python compute_psnr.py --path1 [GT-PATH] --path2 [Restored-PATH]

Note: The values you compute may differ slightly from those reported because (a) the uploaded visual results are saved as JPG to save space, whereas the reported values were computed on PNG images, and (b) some values are quoted from previous works such as WeatherDiff and may differ slightly from this reproduction.
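
If you want to double-check the metrics independently of compute_psnr.py, here is a minimal sketch using OpenCV and scikit-image (≥ 0.19). The file paths are hypothetical, and the exact numbers can differ from the script above depending on color space and border handling:

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical paths: replace with a ground-truth image and its restored output.
gt = cv2.imread("gt/example.png")
restored = cv2.imread("result/example.png")

psnr = peak_signal_noise_ratio(gt, restored, data_range=255)
ssim = structural_similarity(gt, restored, channel_axis=2, data_range=255)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```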

⚖️ Demo

  1. cd Allweather

  2. Download the pre-trained models and place them in ./pretrained_models/

  3. Run the demo, replacing --input_dir [INPUT_FOLDER] with the path to your own images:

# for realsnow
python test_histoformer.py --input_dir [INPUT_FOLDER] --result_dir result/ --weights pretrained_models/net_g_real.pth --yaml_file Options/Allweather_Histoformer.yml

📬 Contact

If you have any questions, please contact [email protected]

Acknowledgment: This code is based on WeatherDiff, Restormer, the BasicSR toolbox, and HINet.