Image fusion models based on autoencoder networks have attracted increasing attention because they do not require manually designed fusion rules. However, most autoencoder-based fusion networks use two-stream CNNs with identical structures as the encoder; these cannot extract global features, owing to the local receptive field of convolutional operations, and lack the ability to extract features unique to the infrared and visible images. This paper constructs a novel autoencoder-based image fusion network consisting of an encoder module, a fusion module, and a decoder module. In the encoder module, a CNN and a Transformer are combined to capture the local and global features of the source images simultaneously. In addition, novel contrast and gradient enhance...
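The encoder-fusion-decoder pipeline described above can be illustrated with a toy sketch. Every function here is a deliberately simplified stand-in (a mean filter for the CNN branch, a single softmax attention step for the Transformer branch, element-wise maximum as the fusion rule), not the paper's actual layers or its learned fusion module:

```python
import numpy as np

def conv_branch(img):
    """Local features: a 3x3 mean filter as a crude stand-in for a CNN block."""
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + 3, j:j + 3].mean()
    return out

def attention_branch(img):
    """Global features: every pixel attends to the whole image via softmax
    similarities, a crude stand-in for a Transformer layer."""
    flat = img.reshape(-1, 1)
    scores = flat @ flat.T / np.sqrt(flat.shape[0])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ flat).reshape(img.shape)

def encode(img):
    # Combine local (CNN-like) and global (Transformer-like) features.
    return conv_branch(img) + attention_branch(img)

def fuse(feat_ir, feat_vis):
    # Element-wise maximum as a simple illustrative fusion rule.
    return np.maximum(feat_ir, feat_vis)

def decode(feat):
    # Reconstruction stand-in: clamp features back to the image range.
    return np.clip(feat, 0.0, 1.0)

ir = np.random.rand(8, 8)   # toy infrared image
vis = np.random.rand(8, 8)  # toy visible image
fused = decode(fuse(encode(ir), encode(vis)))
```

The key structural point the sketch mirrors is that both source images pass through the same two-branch encoder before fusion, so local detail and global context are available to the fusion step at once.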
To address the problems of edge blur and weak detail resolution when fusing infrared and visible ima...
Although the traditional image fusion method can obtain rich image results, obvious artificial noise...
Infrared images can highlight semantic areas such as pedestrians and are robust to luminance chang...
Abstract Infrared and visible images come from different sensors, and they have their advantages and...
Infrared and visible image fusion is an effective method to solve the lack of single sensor imaging....
Infrared images have good anti-environmental interference ability and can capture hot target informa...
Pixel-level image fusion is an effective way to fully exploit the rich texture information of visibl...
Infrared (IR) images can distinguish targets from their backgrounds based on differences in thermal r...
This paper presents an algorithm for infrared and visible image fusion using significance detection ...
Visible images contain clear texture information and high spatial resolution but are unreliable unde...
Image fusion operation is beneficial to many applications and is also one of the most common and cri...
This paper presents a novel Res2Net-based fusion framework for infrared and visible images. The prop...
In infrared (IR) and visible image fusion, the significant information is extracted from each source...
This paper presents an image fusion network based on a special residual network and attention mechan...
In this paper, we design an infrared (IR) and visible (VIS) image fusion via unsupervised dense netw...