An Improved Dual Discriminator Generative Adversarial Network Algorithm for Infrared and Visible Image Fusion


    Abstract: An infrared and visible image fusion algorithm based on a dual-discriminator generative adversarial network is proposed to address two shortcomings of existing fusion algorithms: insufficient extraction of global and multiscale features, and imprecise extraction of key information from the different image modalities. First, the generator combines convolution with a self-attention mechanism to capture both multiscale local features and global features. Second, attention mechanisms are combined with skip connections to fully exploit multiscale features and reduce the information loss incurred during downsampling. Finally, two discriminators guide the generator to focus on the salient foreground targets of the infrared image and the background texture of the visible image, so that the fused image retains more key information. Experimental results on the public M3FD (multi-scenario multi-modality) and MSRS (multi-spectral road scenarios) datasets show that, compared with the comparison algorithms, the proposed method improves significantly on all six evaluation metrics; in particular, the average gradient (AG) exceeds the second-best result by 27.83% and 21.06% on the two datasets, respectively. The fusion results of the proposed algorithm are rich in detail and visually superior.
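    For reference, the average gradient (AG) metric highlighted in the abstract is a standard sharpness measure: the mean magnitude of local intensity differences over an image. The sketch below is a common textbook formulation using forward differences, not code from the paper itself; the function name and the exact differencing scheme are assumptions.

    ```python
    import numpy as np

    def average_gradient(img):
        """Average gradient (AG) of a grayscale image.

        AG = mean over pixels of sqrt((dx^2 + dy^2) / 2), where dx and dy are
        forward differences, computed on the (M-1) x (N-1) overlapping region.
        Larger AG indicates more fine detail and sharper edges, which is why
        fusion papers report it as a detail-preservation metric.
        """
        img = np.asarray(img, dtype=np.float64)
        dx = img[:-1, 1:] - img[:-1, :-1]   # horizontal forward difference
        dy = img[1:, :-1] - img[:-1, :-1]   # vertical forward difference
        return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
    ```

    A flat image has AG = 0; a horizontal intensity ramp with unit steps gives AG = sqrt(1/2), since dx = 1 and dy = 0 everywhere on the evaluated region.
    
    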
