A Visible and Infrared Image Fusion Algorithm Based on Adaptive Enhancement and Saliency Detection

Abstract: To address the poor visibility of visible images and the difficulty of precisely controlling how much of the visible and infrared inputs enters the fused result, this paper proposes a visible and infrared image fusion algorithm that combines adaptive image enhancement with saliency detection based on uniqueness, focus, and objectness cues. First, an adaptive enhancement algorithm is applied to the visible image to improve the visibility of texture details, and the infrared image is normalized. Second, the processed images are decomposed into detail and base layers by guided filtering, and saliency detection is used to generate weight maps for the detail layers, so that the background information of the visible image and the edge information of the infrared image are fused into the detail layer in more accurate proportions. Finally, the detail layer fused according to these weights is combined with the base layer to obtain the final fused image. To verify the performance of the proposed algorithm, six fusion evaluation indices, namely image entropy, average gradient, edge intensity, spatial frequency, visual fidelity, and mean gray level, are used to quantitatively analyze the fused images, and the YOLO v5 (You Only Look Once) network is used to run object detection on the output of each fusion algorithm. The results show that the proposed algorithm performs best in the qualitative evaluation, the quantitative evaluation, and the mean average precision of the object detection evaluation.
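The pipeline described in the abstract can be summarized in a short sketch. The Python code below is a minimal illustration under stated assumptions rather than the authors' implementation: the guided filter is a basic box-filter version applied with each image as its own guide, gradient magnitude stands in for the paper's uniqueness/focus/objectness saliency cues, the base layers are simply averaged, and the names guided_filter, detail_saliency, and fuse_visible_infrared are hypothetical.

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Box-filter implementation of the guided filter (He et al.)."""
    ksize = (radius, radius)
    mean_I = cv2.boxFilter(guide, -1, ksize)
    mean_p = cv2.boxFilter(src, -1, ksize)
    corr_Ip = cv2.boxFilter(guide * src, -1, ksize)
    corr_II = cv2.boxFilter(guide * guide, -1, ksize)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return cv2.boxFilter(a, -1, ksize) * guide + cv2.boxFilter(b, -1, ksize)

def detail_saliency(img):
    """Stand-in saliency map: smoothed gradient magnitude (the paper uses
    uniqueness/focus/objectness cues instead)."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    return cv2.GaussianBlur(np.abs(gx) + np.abs(gy), (11, 11), 0)

def fuse_visible_infrared(visible, infrared):
    """Fuse two registered, same-size grayscale uint8 images."""
    vis = visible.astype(np.float32) / 255.0   # adaptive enhancement step omitted here
    ir = infrared.astype(np.float32) / 255.0   # normalization of the infrared image

    # Base/detail decomposition via (self-guided) guided filtering.
    base_vis, base_ir = guided_filter(vis, vis), guided_filter(ir, ir)
    detail_vis, detail_ir = vis - base_vis, ir - base_ir

    # Per-pixel weight map for the detail layers from the saliency maps.
    s_vis, s_ir = detail_saliency(vis), detail_saliency(ir)
    w_vis = s_vis / (s_vis + s_ir + 1e-6)

    # Weighted detail fusion plus a simple average of the base layers.
    detail_fused = w_vis * detail_vis + (1.0 - w_vis) * detail_ir
    base_fused = 0.5 * (base_vis + base_ir)
    return np.clip(base_fused + detail_fused, 0.0, 1.0)

# Example usage (hypothetical file names):
# fused = fuse_visible_infrared(cv2.imread("vis.png", 0), cv2.imread("ir.png", 0))
```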

     
