Fusion Algorithm of Infrared and Visible Images Based on Semantic Loss

Abstract: This paper proposes an infrared and visible image fusion algorithm based on semantic loss, which guides the network to generate fused images that carry more semantic information and thus meet the needs of high-level vision tasks. First, a pre-trained segmentation network segments the fused image, and the semantic loss is computed between the segmentation result and the ground-truth label map. Under the joint guidance of the semantic loss and a content loss, the fusion network is forced to account for the semantic information content of the image while preserving fusion quality, so that the fused image satisfies the requirements of high-level vision tasks. In addition, a new feature extraction module is designed that reuses features through residual dense connections, improving the ability to describe details while making the fusion framework more lightweight and thereby raising the time efficiency of image fusion. Experimental results show that the proposed algorithm outperforms existing fusion algorithms in both subjective visual quality and quantitative metrics, and that the fused images contain richer semantic information.
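To make the joint objective concrete, the sketch below shows one plausible way to wire it up in PyTorch: the fused image is passed through a frozen, pre-trained segmentation network, the semantic loss is the cross-entropy between the resulting segmentation and the label map, and it is added to a content loss with a trade-off weight. The specific content-loss terms (intensity and horizontal gradient), the weight `lam`, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def semantic_loss(seg_net, fused, labels):
    # seg_net is pre-trained and frozen (requires_grad_(False) on its weights),
    # but gradients still flow through it back into the fusion network via `fused`.
    logits = seg_net(fused)                 # (N, num_classes, H, W)
    return F.cross_entropy(logits, labels)  # labels: (N, H, W) class indices

def content_loss(fused, ir, vis):
    # Illustrative content term only: keep the fused intensity close to the
    # element-wise maximum of the inputs and its (horizontal) gradients close
    # to those of the visible image.
    intensity = F.l1_loss(fused, torch.max(ir, vis))
    grad = lambda x: (x[..., :, 1:] - x[..., :, :-1]).abs()
    texture = F.l1_loss(grad(fused), grad(vis))
    return intensity + texture

def total_loss(seg_net, fused, ir, vis, labels, lam=0.1):
    # Joint guidance: the content loss preserves fusion quality, while the
    # semantic loss pushes the fused image to remain well segmentable;
    # lam is a hypothetical trade-off weight.
    return content_loss(fused, ir, vis) + lam * semantic_loss(seg_net, fused, labels)
```

Likewise, a feature-extraction block built on residual dense connections could look roughly like the following: each convolution receives the concatenation of all earlier feature maps (feature reuse), and the block output is added back to its input. The layer count and channel widths are assumptions for illustration, not the module configuration reported in the paper.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    # Sketch of a residual dense block: dense connections reuse features,
    # a 1x1 conv fuses them, and a residual shortcut adds the block input back.
    def __init__(self, channels=32, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ))
            in_ch += growth  # dense connectivity: inputs accumulate
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual connection
```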

     
