Remote Sensing, Vol. 15, Pages 685: Infrared and Visible Image Fusion Method Based on a Principal Component Analysis Network and Image Pyramid

Remote Sensing doi: 10.3390/rs15030685

Authors: Shengshi Li, Yonghua Zou, Guanjun Wang, Cong Lin

The aim of infrared (IR) and visible image fusion is to generate a more informative image for human observation or other computer vision tasks. Activity-level measurement and weight assignment are two key steps in image fusion. In this paper, we propose a novel IR and visible image fusion method based on a principal component analysis network (PCANet) and an image pyramid. First, we use a lightweight deep learning network, PCANet, to obtain the activity-level measurement and weight assignment of the IR and visible images. The activity-level measurement obtained by the PCANet has a stronger representation ability for IR target perception and visible detail description. Second, the weights and the source images are decomposed into multiple scales by the image pyramid, and a weighted-average fusion rule is applied at each scale. Finally, the fused image is obtained by reconstruction. The effectiveness of the proposed algorithm was verified on two datasets with more than eighty pairs of test images in total. Compared with nineteen representative methods, the experimental results demonstrate that the proposed method achieves state-of-the-art results in both visual quality and objective evaluation metrics.
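As a rough illustration of the multi-scale weighted-average step described in the abstract, the sketch below blends two pre-registered grayscale source images with per-pixel weight maps using Gaussian/Laplacian pyramids (NumPy and OpenCV). The weight maps `w_ir` and `w_vis` stand in for the PCANet-derived weights, which are not reproduced here; the function names, the four-level pyramid depth, and the saliency placeholder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of pyramid-based weighted-average fusion (not the paper's code).
# Inputs: float32 images in [0, 1] and per-pixel weight maps of the same shape.
import cv2
import numpy as np

def build_pyramids(img, w, levels=4):
    """Gaussian pyramid of the weight map and Laplacian pyramid of the image."""
    gauss_w, gauss_img, lap_img = [w], [img], []
    for _ in range(levels):
        gauss_w.append(cv2.pyrDown(gauss_w[-1]))
        gauss_img.append(cv2.pyrDown(gauss_img[-1]))
    for i in range(levels):
        up = cv2.pyrUp(gauss_img[i + 1], dstsize=gauss_img[i].shape[1::-1])
        lap_img.append(gauss_img[i] - up)
    lap_img.append(gauss_img[-1])  # coarsest level keeps the Gaussian residual
    return gauss_w, lap_img

def fuse(ir, vis, w_ir, w_vis, levels=4):
    """Weighted-average fusion at each pyramid scale, then reconstruction."""
    gw_ir, lp_ir = build_pyramids(ir, w_ir, levels)
    gw_vis, lp_vis = build_pyramids(vis, w_vis, levels)
    fused_pyr = []
    for l_ir, l_vis, g_ir, g_vis in zip(lp_ir, lp_vis, gw_ir, gw_vis):
        denom = g_ir + g_vis + 1e-8                     # avoid division by zero
        fused_pyr.append((g_ir * l_ir + g_vis * l_vis) / denom)
    out = fused_pyr[-1]                                 # start from the coarsest level
    for lvl in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused_pyr[lvl].shape[1::-1]) + fused_pyr[lvl]
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    # Placeholder weights: normalized local contrast instead of PCANet outputs.
    ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    w_ir = np.abs(cv2.Laplacian(ir, cv2.CV_32F))
    w_vis = np.abs(cv2.Laplacian(vis, cv2.CV_32F))
    cv2.imwrite("fused.png", (fuse(ir, vis, w_ir, w_vis) * 255).astype(np.uint8))
```

In this scheme, the weight maps are smoothed by their own Gaussian pyramid before being applied to each Laplacian band, which is the standard way to avoid seams when a weighted-average rule is applied scale by scale.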
