SHEN Yu, LIANG Li, WANG Hailong, YAN Yuan, LIU Guanghui, SONG Jing. Infrared and Visible Image Fusion Based on N-RGAN Model[J]. Infrared Technology, 2023, 45(9): 897-906.

Infrared and Visible Image Fusion Based on N-RGAN Model

Current infrared and visible image fusion algorithms still suffer from limited applicability to complex scenes, heavy loss of detail and texture information, and low contrast and sharpness in the fused image. To address these problems, this study proposes an N-RGAN model that combines the non-subsampled shearlet transform (NSST) with a residual network (ResNet). The infrared and visible images are first decomposed into high- and low-frequency sub-bands by NSST. The high-frequency sub-bands are concatenated and fed into a generator improved with residual modules, with the source infrared image serving as the decision criterion; this improves the network's fusion performance, the detail rendering of the fused image, and its target-highlighting ability. Salient features of the infrared and visible images are extracted, and the low-frequency sub-bands are fused by adaptive weighting to improve contrast and sharpness. The final fused image is obtained by applying the inverse NSST to the fused high- and low-frequency sub-bands. Compared with several existing fusion algorithms, the proposed method improves peak signal-to-noise ratio (PSNR), average gradient (AVG), image entropy (IE), spatial frequency (SF), edge strength (ES), and image clarity (IC), thereby improving infrared and visible image fusion in complex scenes, alleviating the loss of detail and texture information, and enhancing image contrast and resolution.
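For illustration, below is a minimal PyTorch sketch of the pipeline the abstract describes: a residual-block generator that fuses the concatenated high-frequency sub-bands, and a saliency-weighted combination of the low-frequency sub-bands. The network depth, channel widths, the saliency measure, and the `nsst_decompose`/`nsst_reconstruct` helpers are assumptions for the sketch, not the paper's exact configuration; the NSST itself would come from an external shearlet toolbox.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Basic residual block: two 3x3 convolutions plus a skip connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class HighFreqGenerator(nn.Module):
    """Hypothetical generator: takes the concatenated IR/visible
    high-frequency sub-bands and outputs a fused high-frequency sub-band.
    Depth and width are illustrative choices, not the paper's values."""

    def __init__(self, in_channels: int = 2, features: int = 64, num_blocks: int = 4):
        super().__init__()
        self.head = nn.Conv2d(in_channels, features, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(features) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(features, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.tail(self.blocks(torch.relu(self.head(x))))


def fuse_low_freq(low_ir, low_vis, eps: float = 1e-8):
    """Adaptive weighted fusion of the low-frequency sub-bands.
    Saliency here is the absolute deviation from the global mean --
    an assumed measure standing in for the paper's exact definition."""
    s_ir = (low_ir - low_ir.mean()).abs()
    s_vis = (low_vis - low_vis.mean()).abs()
    w_ir = s_ir / (s_ir + s_vis + eps)   # per-pixel adaptive weight
    return w_ir * low_ir + (1.0 - w_ir) * low_vis


# Pipeline sketch; nsst_decompose/nsst_reconstruct are hypothetical
# wrappers around an external shearlet toolbox:
#   high_ir, low_ir = nsst_decompose(ir)
#   high_vis, low_vis = nsst_decompose(vis)
#   fused_high = HighFreqGenerator()(torch.cat([high_ir, high_vis], dim=1))
#   fused_low = fuse_low_freq(low_ir, low_vis)
#   fused = nsst_reconstruct(fused_high, fused_low)
```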