Infrared and Visible Image Fusion Based on Global Energy Features and Improved PCNN
Abstract
To address the low clarity, low contrast, and insufficient texture detail of fused infrared and visible images, an image fusion algorithm based on a parameter-adaptive pulse-coupled neural network (PA-PCNN) was proposed. First, the source infrared image was dehazed using the dark channel prior to enhance its clarity. Then, the source images were decomposed by the non-subsampled shearlet transform (NSST), and the low-frequency coefficients were fused by the proposed global energy feature extraction algorithm combined with a modified spatial-frequency adaptive weight. Texture energy was used as the external input of the PA-PCNN to fuse the high-frequency coefficients, and the fused grayscale image was obtained by the inverse NSST. To further enhance human visual perception, a multiresolution color transfer algorithm was used to convert the grayscale image into a color image. The proposed method was compared with seven classical algorithms on two image pairs. The experimental results show that the proposed method significantly outperforms the comparison algorithms on the evaluation indicators and improves the clarity and detail of the fused image, which verifies its effectiveness. Converting the fused grayscale images into pseudo-color images further improves target recognition and human visual perception.
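As a rough illustration of the dehazing step described above, the following Python sketch applies the standard dark channel prior (He et al.) to a single-channel infrared image. The patch size, `omega`, and `t0` values are illustrative assumptions rather than settings taken from the paper, and the authors' actual pre-processing may differ in detail.

```python
# Minimal sketch of dark channel prior dehazing for a grayscale infrared image.
# Parameter values (patch, omega, t0) are illustrative assumptions only.
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Dehaze a grayscale image with values in [0, 1] using the dark channel prior."""
    # 1. Dark channel: local minimum over a patch (single channel, so no
    #    per-channel minimum is needed).
    dark = minimum_filter(img, size=patch)

    # 2. Atmospheric light: mean intensity of the pixels corresponding to the
    #    brightest 0.1% of dark-channel values.
    n = max(1, int(0.001 * dark.size))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    a = max(float(img.ravel()[idx].mean()), 1e-6)

    # 3. Transmission map, keeping a small amount of haze (omega < 1).
    transmission = 1.0 - omega * minimum_filter(img / a, size=patch)

    # 4. Scene radiance recovery, clamping the transmission to avoid noise
    #    amplification in dense-haze regions.
    dehazed = (img - a) / np.maximum(transmission, t0) + a
    return np.clip(dehazed, 0.0, 1.0)

# Example usage on a synthetic low-contrast image.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hazy = np.clip(0.6 * rng.random((128, 128)) + 0.3, 0.0, 1.0)
    print(dehaze_dark_channel(hazy).shape)
```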