(Sichuan Institute of Land Survey and Planning, Chengdu 610031)
(In: Innovation of Land Information Technology and Development of Land Science and Technology: Proceedings of the 2006 Annual Conference of the China Land Science Society)
At present, the commercial high-resolution QuickBird satellite provides 0.61 m panchromatic data and 2.44 m multispectral data, so fusing the panchromatic and multispectral data to improve image quality is a critical step in remote sensing image processing. This paper compares five fusion methods in terms of spectral quality and spatial information. The comprehensive evaluation shows that the synthetic variable ratio (SVR) transform is the most suitable method for fusing the multispectral and panchromatic data of QuickBird images.
Keywords: QuickBird; image fusion; comparative evaluation
In the past decade, image fusion technology has developed rapidly and has become an important topic in remote sensing application research. Pohl and Van Genderen comprehensively summarized the concepts, methods and applications of remote sensing image fusion [1]. Much research has focused on sharpening images, improving geometric correction accuracy, improving classification accuracy and monitoring change. Fusion methods widely used in remote sensing include the IHS transform, principal component analysis, the Brovey transform, the wavelet transform and the recently developed and improved synthetic variable ratio (SVR) transform. So far, however, there have been few systematic, quantitative evaluations and comparisons of fusion methods [2]. This paper therefore compares the various fusion methods from the perspective of quantitative evaluation.
The QuickBird-2 satellite, the third in a series of high-resolution commercial satellites, was launched by DigitalGlobe (USA) on a Delta-2 rocket on 18 October 2001. Its panchromatic band has a ground resolution of 0.61 m (at nadir) over the wavelength range 450–900 nm. The multispectral bands have a ground resolution (at nadir) of 2.44 m, with wavelength ranges of 450–520 nm (blue), 520–600 nm (green), 630–690 nm (red) and 760–900 nm (near infrared). The revisit period is 1–6 days [3].
A new round of land and resources survey has been launched across China. To save cost, improve efficiency and improve quality, this second land survey calls for new technologies and methods. With the development of space technology, sub-meter satellite data have become increasingly easy to obtain, so it is highly desirable to carry out the second survey with the help of space remote sensing. QuickBird imagery offers good radiometric quality, high ground resolution and clear spatial texture. Fused QuickBird images can therefore be used to produce land use update survey base maps at a scale of 1:5000, promoting the technical renewal of the second survey.
1 Brief Introduction to the Fusion Methods for Remote Sensing Data
1.1 Ratio transform (Brovey) [4]
The Brovey transform is a relatively simple fusion method: each multispectral band displayed in the RGB composite is normalized by the sum of the bands and then multiplied by the high-resolution panchromatic band. For QuickBird, the fusion is computed with formula (1):
XSi′ = Pan × XSi/(XS1 + XS2 + XS3),  i = 1, 2, 3  (1)
Where XSi is the gray value of the i-th multispectral band and Pan is the high-resolution panchromatic gray value.
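As an illustration, the Brovey computation can be sketched in Python (the paper's own experiments used MATLAB and ERDAS); the function name and array shapes are assumptions, not from the paper:

```python
import numpy as np

def brovey_fuse(ms, pan):
    """Brovey (ratio) fusion: each multispectral band is normalized by the
    sum of the displayed bands and scaled by the panchromatic value.
    `ms` is a (3, H, W) array of co-registered multispectral bands already
    resampled to the panchromatic grid; `pan` is (H, W)."""
    ms = ms.astype(float)
    total = ms.sum(axis=0)
    total[total == 0] = 1e-9           # guard against division by zero
    return pan * ms / total            # one normalized ratio per band
```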
1.2 IHS transform [4]
IHS is a color space transform. Because of its flexibility and practicality, the IHS transform is widely used and has become a mature, standard image fusion method.
The IHS transform separates a multispectral color composite into intensity (I), which carries the spatial information, and hue (H) and saturation (S), which carry the spectral information. Usually the high-resolution panchromatic band, or other suitably processed data, is substituted for the intensity component I; the transform is computed with formulas (2) and (3).
I = (R + G + B)/3  (2)
v1 = (2B − R − G)/√6,  v2 = (R − G)/√2,  H = arctan(v2/v1),  S = √(v1² + v2²)  (3)
Where I is the intensity, H the hue and S the saturation; v1 and v2 are intermediate variables used to calculate H and S. The inverse transformation is:
R = I − v1/√6 + v2/√2,  G = I − v1/√6 − v2/√2,  B = I + 2v1/√6
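A minimal sketch of the forward transform, the inverse, and the intensity substitution; the (3, H, W) array layout and this particular triangular-IHS variant are assumptions (several equivalent matrix forms exist in the literature):

```python
import numpy as np

S6, S2 = np.sqrt(6.0), np.sqrt(2.0)

def rgb_to_iv(rgb):
    """Forward IHS: intensity I plus the intermediate variables v1, v2
    from which H and S would be computed."""
    r, g, b = rgb
    i = (r + g + b) / 3.0
    v1 = (2.0 * b - r - g) / S6
    v2 = (r - g) / S2
    return i, v1, v2

def iv_to_rgb(i, v1, v2):
    """Inverse transform, consistent with rgb_to_iv above."""
    r = i - v1 / S6 + v2 / S2
    g = i - v1 / S6 - v2 / S2
    b = i + 2.0 * v1 / S6
    return np.stack([r, g, b])

def ihs_fuse(rgb, pan):
    """Replace intensity with the (histogram-matched) pan band, invert."""
    _, v1, v2 = rgb_to_iv(rgb.astype(float))
    return iv_to_rgb(pan.astype(float), v1, v2)
```

Substituting the original intensity back reproduces the original image exactly, which is a quick self-check of the forward/inverse pair.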
1.3 Principal component transform (PCA) [4]
Principal component analysis (PCA) is a multi-dimensional (multi-band) orthogonal linear transformation based on statistical features, known mathematically as the K-L (Karhunen-Loève) transform. In remote sensing this method is mainly used for data compression, replacing multi-band remote sensing information with a few principal components; for image enhancement, extracting the physically significant image information in the spectral feature space; and for monitoring dynamic changes in ground cover. The principal component transform of remote sensing image data first requires a standard transformation matrix, through which the image data are converted into a new set of image data, the principal components. The transform can be expressed as:
Y=TX (4)
Where X is the pixel value vector of the p bands of the original image, Y is the pixel value vector of the q principal components generated by the transform (q ≤ p), and T is the transformation matrix that realizes this orthogonal linear transformation. T is calculated from the covariance matrix ∑x of the original pixel value vector X; each row of T is an eigenvector of ∑x. Each principal component in Y is therefore a linear combination of the information in all the bands of X. The covariance matrix of the transformed pixel value vector Y is ∑y, with:
∑y = T∑xT^T = diag(λ1, λ2, …, λp)  (5)
Where λ1, λ2, …, λp are the eigenvalues of the covariance matrix ∑x of the original image, with λi (i = 1, 2, …, p) arranged in descending order. λ1, λ2, …, λp are the variances of the principal components, and the covariance between any two principal components is 0: the components are mutually uncorrelated, which ensures that there is no duplication or redundancy of information between them.
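PCA fusion substitutes the pan band for the first principal component before inverting. A sketch under formulas (4) and (5); the (bands, H, W) layout and the mean/standard-deviation match between pan and the first component are assumptions:

```python
import numpy as np

def pca_fuse(ms, pan):
    """PCA fusion sketch: Y = T X, replace the first principal component
    with the statistically matched pan band, invert with T^T (T is
    orthogonal, so its inverse is its transpose)."""
    bands, h, w = ms.shape
    X = ms.reshape(bands, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    cov = np.cov(X)                        # covariance matrix Sigma_x
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]       # eigenvalues in descending order
    T = eigvec[:, order].T                 # rows = eigenvectors of Sigma_x
    Y = T @ (X - mean)                     # principal components
    p = pan.reshape(-1).astype(float)
    # stretch pan to the first component's mean and standard deviation
    p = (p - p.mean()) / (p.std() + 1e-12) * (Y[0].std() + 1e-12) + Y[0].mean()
    Y[0] = p
    return (T.T @ Y + mean).reshape(bands, h, w)
```

Because only the first component is replaced and its mean is preserved, the per-band means of the fused image match the original multispectral means.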
1.4 Synthetic variable ratio (SVR) transform [5]
Following the modified and simplified Munechika method, the procedure is as follows:
XSPi = PanH × XSLi/PanLS,  PanLS = ∑φi·XSLi  (6)
Where XSPi is the fused gray value of the i-th band, PanH is the high-resolution panchromatic gray value, XSLi is the original gray value of the i-th band, PanLS is the panchromatic gray value synthesized from the multispectral bands, and φi is the regression coefficient between the high-resolution panchromatic band and XSLi.
First, the regression coefficients between the panchromatic band and each of the four QuickBird multispectral bands are calculated; a panchromatic image is then simulated by combining the regression coefficients and the multispectral bands; finally, the fusion of all bands is completed by the ratio transform.
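These three steps can be sketched as follows; fitting φi by least squares against a pan band degraded to multispectral resolution, and the array names, are assumptions about how the regression is set up:

```python
import numpy as np

def svr_fuse(ms_low, pan_high, pan_low):
    """Synthetic variable ratio (SVR) fusion per formula (6).
    ms_low  : (4, H, W) multispectral bands resampled to the pan grid
    pan_high: (H, W) high-resolution panchromatic band
    pan_low : (H, W) panchromatic band degraded to multispectral
              resolution, used only to fit the coefficients phi_i."""
    X = ms_low.reshape(ms_low.shape[0], -1).astype(float).T  # pixels x bands
    y = pan_low.reshape(-1).astype(float)
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)   # regression coefficients
    pan_syn = np.tensordot(phi, ms_low.astype(float), axes=1)  # Pan_LS
    pan_syn[pan_syn == 0] = 1e-9
    return pan_high * ms_low / pan_syn            # XSP_i = Pan_H * XSL_i / Pan_LS
```

When the high-resolution pan exactly equals the synthesized pan, the ratio collapses to 1 and the multispectral bands pass through unchanged, which is a useful sanity check.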
1.5 Wavelet transform [3]
The wavelet transform filters the original signal with a set of band-pass filters of different scales, decomposing it into a series of frequency bands for analysis and processing. Wavelet theory provides a unified framework for the spatial-scale analysis of images. For remote sensing images, the discrete wavelet transform is generally used (Figure 1).
For the QuickBird image, the panchromatic band and each multispectral band are decomposed with the Daubechies wavelet (D4); the three panchromatic detail (edge) subimages are then substituted for the multispectral detail subimages, and the inverse transform is applied to them together with the multispectral approximation (smoothed) subimage, completing the wavelet fusion of each band.
Fig. 1 Wavelet decomposition diagram
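To make the substitution step explicit, here is a one-level sketch using the simpler Haar wavelet (an assumption for brevity; the paper uses Daubechies D4 and possibly more levels):

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar decomposition into the approximation (LL) and
    the three detail/edge subimages (LH, HL, HH). H and W must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def wavelet_fuse_band(ms_band, pan):
    """Keep the multispectral approximation subimage, substitute the three
    panchromatic detail subimages, invert."""
    ll_ms, _, _, _ = haar2(ms_band.astype(float))
    _, lh_p, hl_p, hh_p = haar2(pan.astype(float))
    return ihaar2(ll_ms, lh_p, hl_p, hh_p)
```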
2 Comparison of fusion effects
The choice of fusion method usually depends on the application, so it is difficult to judge the quality of a fusion technique in isolation. In general, evaluating the effect of remote sensing image fusion must consider both the enhancement of spatial detail and the preservation of spectral information, so it can be assessed from these two aspects. Several evaluation parameters and their expressions are given below.
2.1 Spatial detail information
In statistics, the mean and standard deviation are defined as:
x̄ = (1/n)∑xi,  σ = √[(1/n)∑(xi − x̄)²]  (7)
For an image, n is the total number of pixels and xi is the gray value of the i-th pixel, so the mean is the average gray value of the pixels, perceived by the human eye as average brightness. The variance reflects the dispersion of the gray values about the mean: the larger the variance, the more dispersed the gray distribution.
Let the gray distribution of the image be P = {p0, p1, …, pL−1}, where pi is the ratio of the number of pixels with gray value i to the total number of pixels and L is the total number of gray levels. For an image histogram with gray range {0, 1, …, L−1}, the information entropy is defined as:
H = −∑(i=0 to L−1) pi·ln pi  (8)
It is easy to show that 0 ≤ H ≤ ln L: when some pi = 1, H = 0; when p0 = p1 = … = pL−1 = 1/L, H = ln L.
Image information entropy is an important index of the richness of image information; comparing the entropies of images compares their ability to express detail. Entropy reflects the amount of information an image carries: the greater the entropy of the fused image, the more information it carries. When the probabilities of all gray levels in an image tend to be equal, the information content tends to its maximum [6].
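Formula (8) can be sketched directly; integer gray levels in [0, L−1] are assumed:

```python
import numpy as np

def image_entropy(img, levels=256):
    """Information entropy H = -sum p_i ln p_i over the gray-level
    histogram. Natural log, matching the bound 0 <= H <= ln L."""
    hist = np.bincount(img.astype(np.int64).ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                      # the term 0*ln 0 is taken as 0
    return float(-np.sum(p * np.log(p)))
```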
The average gradient is used to evaluate the improvement in image quality; it is calculated as [7]:
G = [1/((m−1)(n−1))] ∑i∑j √{[Δxf(i,j)² + Δyf(i,j)²]/2}  (9)
Where m and n are the numbers of rows and columns of the remote sensing image, and Δxf and Δyf are the first-order differences of the image function f(x, y) in the x and y directions. In general, the larger G is, the richer the image's layering and the clearer the image.
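A sketch of formula (9); plain first-order differences are assumed, and the exact normalization factor varies slightly across the literature:

```python
import numpy as np

def average_gradient(img):
    """Average gradient: mean over pixels of sqrt((dx^2 + dy^2)/2),
    with first-order differences dx, dy."""
    f = img.astype(float)
    dx = f[1:, :-1] - f[:-1, :-1]     # difference along rows (x)
    dy = f[:-1, 1:] - f[:-1, :-1]     # difference along columns (y)
    return float(np.mean(np.sqrt((dx**2 + dy**2) / 2.0)))
```

A uniform image has G = 0; a unit ramp gives G = 1/√2, matching the formula by hand.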
2.2 Spectral information [6]
The spectral distortion directly reflects how far the spectral values of the multispectral image are changed by fusion. The spectral distortion of the k-th spectral component is defined as:
Dk = (1/n²) ∑i∑j |f′k(i,j) − fk(i,j)|  (10)
Where n represents the image size (n × n pixels), K is the number of spectral components in the multispectral image, k denotes the k-th component, and fk(i, j) and f′k(i, j) are the gray values at point (i, j) of the k-th component in the original and fused images respectively. The smaller the distortion value of a band, the smaller the spectral distortion introduced by fusion.
The deviation index is used to compare the degree of deviation between the fused image and the low-resolution multispectral image. The deviation index of the k-th spectral component is defined as:
DIk = (1/n²) ∑i∑j |f′k(i,j) − fk(i,j)|/fk(i,j)  (11)
Where, as above, n is the image size, K the number of spectral components and k the k-th component; fk(i, j) and f′k(i, j) are the gray values of the k-th component at point (i, j) in the original and fused images respectively. The smaller the deviation index of a band, the smaller the spectral deviation introduced by fusion.
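Both per-band spectral measures, formulas (10) and (11), reduce to a mean over pixels and can be sketched directly (zero gray values in the original band are assumed absent for the deviation index):

```python
import numpy as np

def spectral_distortion(orig, fused):
    """Formula (10): mean absolute gray-value difference of one band."""
    return float(np.mean(np.abs(fused.astype(float) - orig.astype(float))))

def deviation_index(orig, fused):
    """Formula (11): mean relative deviation of one band."""
    o = orig.astype(float)
    return float(np.mean(np.abs(fused.astype(float) - o) / o))
```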
3 Example analysis
The Chengdu campus of Southwest Jiaotong University was selected as the test area; it contains typical land cover such as vegetation, water bodies and buildings. The remote sensing data are 0.6 m panchromatic and 2.4 m multispectral QuickBird data acquired in August 2004. The five fusion methods described above were applied in turn, using MATLAB and ERDAS 8.7; the results are shown in Figs. 2 to 8. No spectral or texture enhancement was performed during fusion. The IHS and Brovey fusions combined only the 3-2-1 multispectral band composite with the panchromatic band, while PCA, SVR and the wavelet transform fused all four multispectral bands.
Fig. 2 Original multispectral image (3-2-1 combination)
Fig. 3 Original panchromatic image
Fig. 4 Brovey (ratio) transform fused image (3-2-1 combination)
Fig. 5 IHS transform fused image (3-2-1 combination)
Fig. 6 PCA transform fused image (3-2-1 combination)
Fig. 7 SVR transform fused image (3-2-1 combination)
Fig. 8 Wavelet transform fused image (3-2-1 combination)
3.1 Visual evaluation of the fusion effect
The original multispectral and panchromatic images and the five fused images are shown in Figs. 2 to 8. Visually, the spatial geometric resolutions of the five fused images are roughly equal; in terms of spectral color, the SVR-fused image is closest to the original multispectral image.
3.2 Quantitative evaluation of fusion effect
The average gradient and the spectral distortion are selected to quantitatively evaluate the five fused images.
Table 1 gives the average gradient of each band of the high-resolution multispectral images after QuickBird fusion, reflecting the ability of the fused images to express spatial detail. Table 2 gives the spectral distortion of the corresponding multispectral bands before and after QuickBird image fusion.
Table 1 Average gradient of each band of the fused QuickBird images
Table 2 Average gray difference (spectral distortion) of corresponding multispectral bands before and after QuickBird image fusion
Analysis of the data in Table 1 shows that, among the fused bands (the Brovey and IHS transforms produce only three bands), the SVR transform gives the highest average gradient in bands 1, 2 and 3, while the wavelet transform gives the highest average gradient in band 4, followed by SVR.
Analysis of the data in Table 2 shows that, among the five fusion methods, the SVR transform gives the smallest spectral distortion between the fused image and the original multispectral image in all four bands; that is, images fused by this method inherit the spectral information of the original multispectral image to the greatest extent. The commonly used wavelet transform ranks second.
Comparing the five fusion methods in terms of both spectral quality and spatial detail, the SVR transform offers the best trade-off between the spectral information of the low-resolution data and the spatial information of the high-resolution data. The production of large-scale land use thematic maps requires image data whose spectrum is not degraded and which carry high geometric spatial information, so the SVR transform is the best choice for fusing QuickBird data. At present, however, no mature software implements this fusion method; in this paper, SVR fusion was implemented in MATLAB based on its principle, which is sufficient for experimental research but not yet for large-scale production use.
References
[1] Chavez P S, Sides S C, Anderson J A. Comparison of three different methods of multiresolution and multispectral data fusion [J]. Photogrammetric Engineering and Remote Sensing, 1991, 57: 295–303.
[2] Sun Danfeng. Comparative study on the fusion methods of IKONOS panchromatic and multispectral data [J]. Remote Sensing Technology and Application, 2002, 17(1): 41–45.
[3] Wu Peizhong. Technical performance and application of the QuickBird-2 satellite [J]. International Space, 2002, (10): 3–4.
[4] Sun. Principles and Applications of Remote Sensing [M]. Wuhan: Wuhan University Press, 2003: 162–168.
[5] A new fusion method and its spectral and spatial effects [J]. Int J Remote Sensing, 1999, 20(10): 2003–2014.
[6] Li, Wei Jun and Peng. Objective analysis and evaluation of remote sensing image fusion effect [J]. Computer Engineering and Science, 2004, 26(1): 42–46.
[7] Schistad-Solberg A H, Jain A K, Taxt T. Multisource classification of remote sensing data: fusion of Landsat TM and SAR images [J]. IEEE Transactions on Geoscience and Remote Sensing, 1994, 32(4): 768–778.