Visible and Infrared Image Matching Method Based on Multi-Scale Feature Point Extraction
Graphical Abstract
Abstract
A visible and infrared image matching method (VIMN) based on multiscale feature point extraction is proposed to address the low matching accuracy and poor applicability caused by the significant appearance differences between visible and infrared images. First, to improve the robustness of VIMN to geometric image transformations, a deformable convolution layer is introduced into the feature extraction module, and a spatial pyramid pooling (SPP) layer performs multiscale feature fusion so that both low-level detail and high-level semantic information are captured. Second, a joint spatial and channel response score map is constructed on the multiscale fused feature map to extract robust feature points. Finally, an image-patch matching module based on metric learning matches the visible and infrared images. To verify the superiority of VIMN, comparative experiments were conducted on matching datasets against scale-invariant feature transform (SIFT), particle swarm optimization (PSO)-SIFT, dual disentanglement network (D2-Net), and contextual multiscale multilevel network (CMM-Net). The qualitative and quantitative results indicate that the proposed VIMN achieves better matching performance.