
JACIII Vol.24, No.7, pp. 811-819, 2020
doi: 10.20965/jaciii.2020.p0811

Paper:

Single Image De-Raining Using Spinning Detail Perceptual Generative Adversarial Networks

Kaizheng Chen, Yaping Dai, Zhiyang Jia, and Kaoru Hirota

School of Automation, Beijing Institute of Technology
No.5 Zhongguancun South Street, Haidian District, Beijing 100081, China

Received: October 20, 2020
Accepted: October 26, 2020
Published: December 20, 2020
Keywords: image de-raining, generative adversarial networks, perceptual loss, detail map, self spinning
Abstract

In this paper, Spinning Detail Perceptual Generative Adversarial Networks (SDP-GAN) are proposed for single-image de-raining. The proposed method adopts the Generative Adversarial Network (GAN) framework and consists of the following two networks: the rain streaks generative network G and the discriminative network D. To reduce background interference, we propose a rain streaks generative network that not only focuses on the high-frequency detail map of the rainy image, but also directly narrows the mapping range from input to output. To further improve the perceptual quality of the generated images, we modify the perceptual loss by extracting high-level features from the discriminative network D rather than from pre-trained networks. Furthermore, we introduce a new training procedure, based on the notion of self-spinning, to improve the final de-raining performance. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method achieves significant improvements over recent state-of-the-art methods.
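The detail-map idea in the abstract can be sketched as follows: a low-pass filter produces a base layer, and subtracting that base from the rainy image leaves a high-frequency detail map, which is what the generative network G would operate on instead of the full image. This is a minimal illustrative sketch, assuming a simple box filter as the low-pass step (the paper's actual pipeline may use a different filter, e.g. guided filtering); the function names are ours, not the authors' code.

```python
import numpy as np

def box_filter(img: np.ndarray, radius: int = 2) -> np.ndarray:
    """Mean (box) filter with edge padding: a simple low-pass base layer."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def detail_map(rainy: np.ndarray, radius: int = 2) -> np.ndarray:
    """High-frequency detail layer: the input minus its low-pass base."""
    base = box_filter(rainy.astype(np.float64), radius)
    return rainy - base

# A constant image carries no high-frequency content,
# so its detail map is (numerically) zero everywhere.
flat = np.full((8, 8), 0.5)
print(np.allclose(detail_map(flat), 0.0))  # True
```

Because rain streaks are high-frequency structures, working on this detail layer removes most of the (low-frequency) background from the generator's input, which is one reading of "directly narrows the mapping range from input to output" in the abstract.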

Cite this article as:
Kaizheng Chen, Yaping Dai, Zhiyang Jia, and Kaoru Hirota, “Single Image De-Raining Using Spinning Detail Perceptual Generative Adversarial Networks,” J. Adv. Comput. Intell. Intell. Inform., Vol.24, No.7, pp. 811-819, 2020.
