Results 1 - 20 of 24
1.
Micron ; 177: 103578, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38113716

ABSTRACT

Pansharpening constitutes a category of data fusion techniques designed to enhance the spatial resolution of multispectral (MS) images by integrating spatial details from a high-resolution panchromatic (PAN) image. This process combines the rich spectral content of the MS images with the fine spatial detail of the PAN image, producing a pansharpened output better suited to image analysis tasks such as object detection and environmental monitoring. Although pansharpening was traditionally developed for satellite data, our paper introduces a novel approach customized for the fusion of Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray Spectrometry (EDS) data. The proposed method, grounded in Partial Least Squares regression with Discriminant Analysis (PLS-DA), significantly boosts the spatial resolution of EDS data while preserving spectral detail. A key feature of the approach is the partitioning of the PAN image into intensity bins, with the partition adapted dynamically when compounds with similar average atomic numbers overlap. We evaluate the method's effectiveness using in-house EDS images obtained from both even and uneven sample surfaces. Comparative analysis against existing benchmarks and state-of-the-art pansharpening techniques demonstrates superior performance of our method in both spectral and spatial quality indicators.
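To illustrate the intensity-binning idea described in the abstract, here is a minimal sketch; the array names are hypothetical, and the PLS-DA regression and the adaptive splitting of overlapping bins are not reproduced.

```python
import numpy as np

def bin_pan_intensity(pan, n_bins=8):
    """Partition a panchromatic (e.g., SEM) image into intensity bins,
    as a first step toward fitting one regression model per bin.
    `pan` is a 2-D float array; returns an integer label map."""
    edges = np.linspace(pan.min(), pan.max(), n_bins + 1)
    labels = np.digitize(pan, edges[1:-1])  # values 0 .. n_bins-1
    return labels

# Hypothetical usage: each low-resolution EDS band, upsampled to the
# SEM grid, could then be regressed against PAN intensities per bin.
```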

2.
PeerJ Comput Sci ; 9: e1488, 2023.
Article in English | MEDLINE | ID: mdl-37547419

ABSTRACT

Pan-sharpening is a fundamental and crucial task in the remote sensing image processing field, which generates a high-resolution multi-spectral image by fusing a low-resolution multi-spectral image and a high-resolution panchromatic image. Recently, deep learning techniques have shown competitive results in pan-sharpening. However, diverse features in the multi-spectral and panchromatic images are not fully extracted and exploited in existing deep learning methods, which leads to information loss in the pan-sharpening process. To solve this problem, a novel pan-sharpening method based on multi-resolution transformer and two-stage feature fusion is proposed in this article. Specifically, a transformer-based multi-resolution feature extractor is designed to extract diverse image features. Then, to fully exploit features with different content and characteristics, a two-stage feature fusion strategy is adopted. In the first stage, a multi-resolution fusion module is proposed to fuse multi-spectral and panchromatic features at each scale. In the second stage, a shallow-deep fusion module is proposed to fuse shallow and deep features for detail generation. Experiments over QuickBird and WorldView-3 datasets demonstrate that the proposed method outperforms current state-of-the-art approaches visually and quantitatively with fewer parameters. Moreover, the ablation study and feature map analysis also prove the effectiveness of the transformer-based multi-resolution feature extractor and the two-stage fusion scheme.

3.
Front Plant Sci ; 14: 1111575, 2023.
Article in English | MEDLINE | ID: mdl-37152173

ABSTRACT

Introduction: Remote sensing using unmanned aerial systems (UAS) is prevalent for phenomics and precision agriculture applications. The high-resolution data from these platforms can provide useful spectral characteristics of crops associated with performance traits such as seed yield. With the recent availability of high-resolution satellite imagery, there has been growing interest in using this technology for plot-scale remote sensing applications, particularly those related to breeding programs. This study compared the features extracted from high-resolution satellite and UAS multispectral imagery (visible and near-infrared) to predict seed yield from two diverse plot-scale field pea yield trials (advanced breeding and variety testing) using a random forest model. Methods: Multi-modal (spectral and textural features) and multi-scale (satellite and UAS) data fusion approaches were evaluated to improve seed yield prediction accuracy across trials and time points. These approaches included image fusion, such as pan-sharpening of satellite imagery with UAS imagery using intensity-hue-saturation transformation and additive wavelet luminance proportional approaches, and feature fusion, which involved integrating the extracted spectral features. In addition, we compared the image fusion approach against high-definition satellite data with a resolution of 0.15 m/pixel. The effectiveness of each approach was evaluated with data at both individual and combined time points. Results and discussion: The major findings can be summarized as follows: (1) the inclusion of texture features did not improve model performance; (2) the model using spectral features from satellite imagery at its original resolution can provide results similar to UAS imagery, with variation depending on the field pea yield trial under study and the growth stage; (3) model performance improved after applying multi-scale, multiple-time-point feature fusion; (4) features extracted from satellite imagery pan-sharpened with the intensity-hue-saturation transformation (image fusion) yielded better model performance than those from the original satellite imagery or the high-definition imagery; and (5) the green normalized difference vegetation index and the transformed triangular vegetation index were identified as key features contributing to high model performance across trials and time points. These findings demonstrate the potential of high-resolution satellite imagery and data fusion approaches for plot-scale phenomics applications.
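As a hedged sketch of the modelling step described above, the snippet below computes GNDVI and fits a random forest on plot-level features; all data here are random stand-ins, and the feature list is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def gndvi(nir, green):
    """Green normalized difference vegetation index, one of the key
    features reported in the study."""
    return (nir - green) / (nir + green + 1e-9)

# Hypothetical plot-level reflectances and yields (random placeholders).
rng = np.random.default_rng(0)
nir, green = rng.random((2, 120))
other_features = rng.random((120, 5))            # e.g., TTVI, texture, ...
X = np.column_stack([gndvi(nir, green), other_features])
y = rng.random(120)                              # observed seed yield per plot

model = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```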

4.
J Imaging ; 9(5)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37233311

ABSTRACT

In recent years, the demand for very high geometric resolution satellite images has increased significantly. Pan-sharpening techniques, a subset of data fusion techniques, increase the geometric resolution of multispectral images using panchromatic imagery of the same scene. However, choosing a suitable pan-sharpening algorithm is not trivial: many exist, none is universally recognized as the best for every type of sensor, and they can produce different results depending on the investigated scene. This article focuses on the latter aspect: analyzing pan-sharpening algorithms in relation to different land covers. A dataset of GeoEye-1 images is selected, from which four study areas (frames) are extracted: one natural, one rural, one urban and one semi-urban. The type of each study area is determined from the quantity of vegetation it contains, based on the normalized difference vegetation index (NDVI). Nine pan-sharpening methods are applied to each frame, and the resulting pan-sharpened images are compared by means of spectral and spatial quality indicators. Multicriteria analysis makes it possible to identify the best-performing method for each specific area, as well as the most suitable one when different land covers are co-present in the analyzed scene. The fast Brovey transformation supplies the best results among the methods analyzed in this study.
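A minimal sketch of the two building blocks mentioned above, NDVI for characterising the scene and a basic Brovey-style injection, using the standard definitions; band order and scaling are assumptions, and the paper's specific "fast" variant is not reproduced.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index used to classify frames
    as natural, rural, semi-urban or urban."""
    return (nir - red) / (nir + red + 1e-9)

def brovey(ms, pan):
    """Basic Brovey transform: each upsampled MS band is scaled by the
    ratio of the PAN image to the mean of the MS bands.
    `ms` has shape (bands, H, W), already resampled to the PAN grid."""
    intensity = ms.mean(axis=0) + 1e-9
    return ms * (pan / intensity)[None, :, :]
```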

5.
Micron ; 163: 103361, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36219986

ABSTRACT

Fusing low-resolution Energy Dispersive X-ray Spectroscopy (EDS) maps with Scanning Electron Microscopy (SEM) panchromatic images has been proven effective with various pansharpening algorithms. The present paper targets the preprocessing of these maps to improve the efficiency of the pansharpening process, with as little loss of information on the chemical distribution and as little propagated noise as possible. EDS maps present different noise intensities depending on the flatness of the surface of the analyzed object. Maps of uneven surfaces have limited analytical value because of this noise and have not previously been resolution-enhanced with pansharpening because of noise propagation. In this paper, different preprocessing methods are evaluated to make uneven-surface particle maps amenable to pansharpening: background removal, upsampling, and noise filtering. The sequence in which the preprocessing steps are applied is analyzed; the optimal order is (i) background removal, (ii) noise filtering, and (iii) interpolation. A methodology for each of these steps is presented in the paper. The best-performing pansharpening method is Affinity for individual map analysis and Wavelet for multi-elemental fusion. Following the methodology results in high-resolution EDS maps, even for uneven-surface particles, which are subjected to pansharpening for the first time in the literature.
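A sketch of the preprocessing order reported as optimal (background removal, then noise filtering, then interpolation); the threshold, filter size and zoom factor are placeholders, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def preprocess_eds_map(eds, background_level=2, upscale=4):
    """Prepare a noisy low-resolution EDS map for pansharpening.
    Order follows the reported optimum: (i) background removal,
    (ii) noise filtering, (iii) interpolation to the SEM grid."""
    cleaned = np.where(eds > background_level, eds, 0)      # (i) background removal
    filtered = ndimage.median_filter(cleaned, size=3)       # (ii) noise filtering
    upsampled = ndimage.zoom(filtered, upscale, order=1)    # (iii) bilinear interpolation
    return upsampled
```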

6.
Sensors (Basel) ; 21(11)2021 May 21.
Article in English | MEDLINE | ID: mdl-34064128

ABSTRACT

The spectral mismatch between a multispectral (MS) image and its corresponding panchromatic (PAN) image affects the pansharpening quality, especially for WorldView-2 data. To handle this problem, a pansharpening method based on graph regularized sparse coding (GRSC) and an adaptive coupled dictionary is proposed in this paper. First, the pansharpening process is divided into three tasks according to the degree of correlation among the MS and PAN channels and the relative spectral response of the WorldView-2 sensor. Then, for each task, the image patch set from the MS channels is clustered into several subsets, and the sparse representation of each subset is estimated with the GRSC algorithm. In addition, an adaptive coupled dictionary pair for each task is constructed to represent the subsets effectively. Finally, the high-resolution image subsets for each task are obtained by multiplying the estimated sparse coefficient matrix by the corresponding dictionary. A variety of experiments are conducted on WorldView-2 data, and the results demonstrate that the proposed method achieves better performance than existing pansharpening algorithms in both subjective analysis and objective evaluation.

7.
Sensors (Basel) ; 21(4)2021 Feb 10.
Article in English | MEDLINE | ID: mdl-33578847

ABSTRACT

The lack of high-resolution thermal images is a limiting factor in their fusion with data from other, higher-resolution sensors. Different families of algorithms have been designed in the remote sensing field to fuse panchromatic images with multispectral images from satellite platforms, in a process known as pansharpening. Attempts have been made to transfer these pansharpening algorithms to thermal images in the case of satellite sensors. Our work analyses the potential of these algorithms when applied to thermal images from unmanned aerial vehicles (UAVs). We present a quantitative comparison of these satellite pansharpening methods when they are applied to fuse high-resolution images with thermal images obtained from UAVs, in order to choose the method that offers the best quantitative results. Such an analysis, which allows an objective selection of the method to use with this type of imagery, has not been carried out before. The selected algorithm is used here to fuse images from thermal sensors on UAVs with images from other sensors for the documentation of heritage, but it has applications in many other fields.

8.
Sensors (Basel) ; 20(24)2020 Dec 11.
Article in English | MEDLINE | ID: mdl-33322345

ABSTRACT

The growing demand for high-quality imaging data and the current technological limitations of imaging sensors require the development of techniques that combine data from different platforms in order to obtain comprehensive products for detailed studies of the environment. To meet the needs of modern remote sensing, the authors present an innovative methodology of combining multispectral aerial and satellite imagery. The methodology is based on the simulation of a new spectral band with a high spatial resolution which, when used in the pansharpening process, yields an enhanced image with a higher spectral quality compared to the original panchromatic band. This is important because spectral quality determines the further processing of the image, including segmentation and classification. The article presents a methodology of simulating new high-spatial-resolution images taking into account the spectral characteristics of the photographed types of land cover. The article focuses on natural objects such as forests, meadows, or bare soils. Aerial panchromatic and multispectral images acquired with a digital mapping camera (DMC) II 230 and satellite multispectral images acquired with the S2A sensor of the Sentinel-2 satellite were used in the study. Cloudless data with a minimal time shift were obtained. Spectral quality analysis of the generated enhanced images was performed using a method known as "consistency" or "Wald's protocol first property". The resulting spectral quality values clearly indicate less spectral distortion of the images enhanced by the new methodology compared to using a traditional approach to the pansharpening process.
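A sketch of the "consistency" (Wald's first property) check used above: the enhanced image is degraded back to the original MS resolution and compared band by band. RMSE is used here as the comparison metric, which is an assumption; the paper does not specify the indicator in this abstract.

```python
import numpy as np
from scipy import ndimage

def consistency_rmse(fused, original_ms, ratio=4):
    """Degrade the pansharpened bands back to the MS resolution and
    report the per-band RMSE against the original MS bands.
    `fused`: (bands, H, W); `original_ms`: (bands, H // ratio, W // ratio)."""
    errors = []
    for band, ref in zip(fused, original_ms):
        smoothed = ndimage.uniform_filter(band, size=ratio)  # crude low-pass stand-in for the sensor MTF
        degraded = smoothed[::ratio, ::ratio]                # decimate to the MS grid
        errors.append(np.sqrt(np.mean((degraded - ref) ** 2)))
    return np.array(errors)
```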

9.
Sensors (Basel) ; 20(18)2020 Sep 16.
Article in English | MEDLINE | ID: mdl-32948056

ABSTRACT

Pansharpening is a technique that fuses a low spatial resolution multispectral image and a high spatial resolution panchromatic one to obtain a multispectral image with the spatial resolution of the latter while preserving the spectral information of the multispectral image. In this paper we propose a variational Bayesian methodology for pansharpening. The proposed methodology uses the sensor characteristics to model the observation process and Super-Gaussian sparse image priors on the expected characteristics of the pansharpened image. The pansharpened image, as well as all model and variational parameters, are estimated within the proposed methodology. Using real and synthetic data, the quality of the pansharpened images is assessed both visually and quantitatively and compared with other pansharpening methods. Theoretical and experimental results demonstrate the effectiveness, efficiency, and flexibility of the proposed formulation.

10.
Sensors (Basel) ; 20(12)2020 Jun 17.
Article in English | MEDLINE | ID: mdl-32560500

ABSTRACT

Low-light images usually contain Poisson noise, which is pixel-amplitude-dependent. More panchromatic (white) pixels in a color filter array (CFA) are believed to help demosaicing performance in dark environments. In this paper, we first introduce a CFA pattern known as CFA 3.0, which has 75% white pixels, 12.5% green pixels, and 6.25% each of red and blue pixels. We then present algorithms to demosaic this CFA and demonstrate its performance for normal and low-light images. In addition, a comparative study was performed to evaluate the demosaicing performance of three CFAs: the Bayer pattern (CFA 1.0), the Kodak CFA 2.0, and the proposed CFA 3.0. Using a clean Kodak dataset with 12 images, we emulated low-light conditions by introducing Poisson noise into the clean images. In our experiments, both normal and low-light images were used; for the low-light conditions, images with signal-to-noise ratios (SNR) of 10 dB and 20 dB were studied. We observed that demosaicing performance in low-light conditions improved when there were more white pixels. Moreover, denoising can further enhance the demosaicing performance for all CFAs. The most important finding is that CFA 3.0 performs better than CFA 1.0, but is slightly inferior to CFA 2.0, in low-light images.
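A sketch of the two ingredients mentioned above: a 4 x 4 mosaic with the stated proportions (12 white, 2 green, 1 red, 1 blue per tile) and Poisson-noise emulation for low light. The tile layout shown is an illustrative assumption, not the published CFA 3.0 pattern.

```python
import numpy as np

# Hypothetical 4x4 tile with 75% W, 12.5% G, 6.25% R and 6.25% B.
TILE = np.array([["W", "W", "G", "W"],
                 ["W", "R", "W", "W"],
                 ["W", "W", "G", "W"],
                 ["W", "B", "W", "W"]])

def cfa_mask(height, width):
    """Tile the 4x4 pattern over the full sensor."""
    reps = (height // 4 + 1, width // 4 + 1)
    return np.tile(TILE, reps)[:height, :width]

def add_poisson_noise(image, scale=10.0):
    """Emulate low-light capture: scale intensities down to few counts,
    draw Poisson samples, scale back. Lower `scale` means more noise."""
    rng = np.random.default_rng(0)
    return rng.poisson(np.clip(image, 0, None) * scale) / scale
```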

11.
Sensors (Basel) ; 20(10)2020 May 12.
Article in English | MEDLINE | ID: mdl-32408666

ABSTRACT

Pulse-coupled neural networks (PCNN) and their modified models are suitable for multi-focus and medical image fusion tasks. Unfortunately, PCNNs are difficult to apply directly to multispectral image fusion, especially when spectral fidelity is required. A key problem is that most fusion methods using PCNNs focus on a selection mechanism, either in the spatial domain or in the transform domain, rather than on a detail-injection mechanism, which is of utmost importance in multispectral image fusion. Thus, a novel pansharpening PCNN model for multispectral image fusion is proposed. The new model is designed to achieve spectral fidelity in terms of human visual perception for the fusion tasks. Experimental results on different kinds of datasets show the suitability of the proposed model for pansharpening.


Subject(s)
Algorithms; Diagnostic Imaging; Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Visual Perception
12.
Sensors (Basel) ; 20(7)2020 Apr 08.
Article in English | MEDLINE | ID: mdl-32276451

ABSTRACT

KOMPSAT-3, a Korean Earth-observing satellite, provides a panchromatic (PAN) band and four multispectral (MS) bands. These can be fused to obtain a pan-sharpened image of higher resolution in both the spectral and spatial domains, which is more informative and easier to interpret in visual inspection. In the KOMPSAT-3 Advanced Earth Imaging Sensor System (AEISS), a uni-focal camera system, precise sensor alignment is a prerequisite for the fusion of MS and PAN images because the MS and PAN Charge-Coupled Device (CCD) sensors are installed with certain offsets. In addition, exterior effects associated with the ephemeris and terrain elevation lead to geometric discrepancies between MS and PAN images. Therefore, we propose a rigorous co-registration of KOMPSAT-3 MS and PAN images based on physical sensor modeling. We evaluated the impacts of CCD line offsets, ephemeris, and terrain elevation on the differences in image coordinates. The analysis enables precise co-registration modeling between MS and PAN images. An experiment with KOMPSAT-3 images produced a negligible geometric discrepancy between MS and PAN images.

13.
J Imaging ; 6(4)2020 Apr 06.
Article in English | MEDLINE | ID: mdl-34460722

ABSTRACT

Pansharpening is a method for generating high-spatial-resolution multi-spectral (MS) images from panchromatic (PAN) and multi-spectral images. A common challenge in pansharpening is reducing the spectral distortion caused by increasing the resolution. In this paper, we propose a method for reducing this spectral distortion based on the intensity-hue-saturation (IHS) method, targeting satellite images. The IHS method improves the resolution of an RGB image by replacing the intensity of the low-resolution RGB image with that of the high-resolution PAN image. The spectral characteristics of the PAN and MS images differ, and this difference may cause spectral distortion in the pansharpened image. Although many solutions for reducing spectral distortion using a modeled spectrum have been proposed, the quality of their outcomes depends on the image dataset. In the proposed technique, we model a low-spatial-resolution PAN image according to a relative spectral response graph, and the corrected intensity is then calculated from the model and the observed dataset. Experiments were conducted on three IKONOS datasets, and the results were evaluated using major quality metrics. This quantitative evaluation demonstrated the stability of the pansharpened images and the effectiveness of the proposed method.
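A sketch of the classical IHS substitution the paper builds on: convert the upsampled RGB image to an intensity-hue-saturation representation (approximated here with HSV), swap in a statistics-matched PAN band as the intensity, and convert back. The spectral-response-based intensity correction proposed in the paper is not reproduced.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def ihs_pansharpen(rgb, pan):
    """Classical IHS-style fusion using an HSV approximation.
    `rgb`: (H, W, 3) floats in [0, 1], already resampled to the PAN grid;
    `pan`: (H, W) floats in [0, 1]."""
    hsv = rgb2hsv(rgb)
    v = hsv[..., 2]
    # Match PAN statistics to the original intensity before substitution.
    pan_matched = (pan - pan.mean()) / (pan.std() + 1e-9) * v.std() + v.mean()
    hsv[..., 2] = np.clip(pan_matched, 0.0, 1.0)
    return hsv2rgb(hsv)
```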

14.
Sensors (Basel) ; 19(23)2019 Nov 24.
Article in English | MEDLINE | ID: mdl-31771304

ABSTRACT

In recent years, many techniques for fusing multi-sensor satellite images have been developed. This article focuses on examining and improving the usability of pansharpened images for object detection, especially when fusing data with a high GSD ratio. The methodology to improve the interpretative ability of pansharpening results is based on pre-processing the panchromatic image with Logarithmic-Laplace filtration. The proposed approach was used to examine several different pansharpening methods and data sets with different spatial resolution ratios, from 1:4 to 1:60. The results showed that the proposed approach significantly improves object detection in fused images, especially for imagery with a high resolution ratio. The interpretative ability was assessed with a qualitative method (based on image segmentation) and a quantitative method (using an indicator based on the Speeded Up Robust Features (SURF) detector). When combining data acquired with the same sensor, the interpretative potential improved by a dozen or so per cent. For data with a high resolution ratio, however, the improvement reached several dozen or even several hundred per cent in the case of images blurred after pansharpening by the classic method (with the original panchromatic image). Image segmentation showed that it is possible to recognize narrow objects that were originally blurred and difficult to identify. In addition, for panchromatic images acquired by WorldView-2, the proposed approach improved not only object detection but also the spectral quality of the fused image.

15.
Sensors (Basel) ; 19(18)2019 Sep 12.
Article in English | MEDLINE | ID: mdl-31547250

ABSTRACT

Deep learning, and deep neural networks in particular, have established themselves as the new norm in signal and data processing, achieving state-of-the-art performance in image, audio, and natural language understanding. In remote sensing, a large body of research has been devoted to the application of deep learning to typical supervised learning tasks such as classification. Less, yet equally important, effort has been allocated to the challenges associated with enhancing low-quality observations from remote sensing platforms. Addressing such challenges is of paramount importance, both in itself, since high-altitude imaging, environmental conditions, and imaging-system trade-offs lead to low-quality observations, and to facilitate subsequent analysis such as classification and detection. In this paper, we provide a comprehensive review of deep-learning methods for the enhancement of remote sensing observations, focusing on critical tasks including single- and multi-band super-resolution, denoising, restoration, pan-sharpening, and fusion, among others. In addition to a detailed analysis and comparison of recently presented approaches, different research avenues that could be explored in the future are also discussed.

16.
J Imaging ; 5(8)2019 Aug 01.
Article in English | MEDLINE | ID: mdl-34460502

ABSTRACT

The RGBW color filter array (CFA), also known as CFA 2.0, contains R, G, B, and white (W) pixels. It is a 4 × 4 pattern with 8 white pixels, 4 green pixels, 2 red pixels, and 2 blue pixels, repeated over the whole image. In an earlier conference paper, we cast the demosaicing process for CFA 2.0 as a pansharpening problem. That formulation is modular and allows different pansharpening algorithms to be inserted for demosaicing; new interpolation and demosaicing algorithms can also be used. In this paper, we propose an enhancement of our earlier approach by integrating a deep learning-based algorithm into the framework. Extensive experiments using IMAX and Kodak images clearly demonstrate that the new approach improves demosaicing performance even further.

17.
Sensors (Basel) ; 18(12)2018 Dec 13.
Article in English | MEDLINE | ID: mdl-30551674

ABSTRACT

Commonly used image fusion techniques generally produce good results for images obtained from the same sensor, with a standard spatial resolution ratio (1:4). However, an atypically high resolution ratio reduces the effectiveness of fusion methods, resulting in a decrease in the spectral or spatial quality of the sharpened image. An important issue is the development of a method that maintains high spatial and spectral quality simultaneously. The authors propose to strengthen pan-sharpening methods through prior modification of the panchromatic image. Local statistics of the differences between the original panchromatic image and the intensity of the multispectral image are used to detect spatial details. The Euler number and the distance of each pixel from the nearest pixel classified as a spatial detail determine the weight of the information collected from each integrated image. The research was carried out for several pan-sharpening methods and for data sets with different levels of spectral matching. The proposed solution yields a greater improvement in spectral quality while identifying the same spatial details for most pan-sharpening methods. It is mainly dedicated to Intensity-Hue-Saturation-based methods, for which the following improvements in spectral quality were achieved: about 30% for the urbanized area and about 15% for the non-urbanized area.
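A sketch of the detail-detection idea described above: compute the difference between the PAN image and the MS intensity, take local statistics, and flag pixels with high local variability. The window size and threshold are placeholders, and the Euler-number and distance weighting from the paper are not reproduced.

```python
import numpy as np
from scipy import ndimage

def detect_spatial_details(pan, ms_intensity, window=5, k=2.0):
    """Return a boolean mask of pixels treated as spatial details.
    `ms_intensity` is the mean of the MS bands resampled to the PAN grid."""
    diff = pan - ms_intensity
    local_mean = ndimage.uniform_filter(diff, size=window)
    local_sq = ndimage.uniform_filter(diff ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
    return local_std > k * local_std.mean()
```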

18.
Sensors (Basel) ; 18(12)2018 Dec 07.
Article in English | MEDLINE | ID: mdl-30544600

ABSTRACT

Image pansharpening can generate a high-resolution hyperspectral (HS) image by combining a high-resolution panchromatic image with an HS image. In this paper, we propose a variational pansharpening method for HS imagery constrained by spectral shape and the Gram-Schmidt (GS) transformation. The main novelties of the proposed method are the additional spectral and correlation fidelity terms. First, we design the spectral fidelity term, which utilizes the spectral shape of the neighboring pixels with a new weight distribution strategy to reduce the spectral distortion caused by the change in spatial resolution. Second, the correlation fidelity term uses the result of GS adaptive (GSA) pansharpening to constrain the correlation, thereby preventing low correlation between the pansharpened image and the reference image. The pansharpening problem is then formulated as the minimization of a new energy function, whose solution is the pansharpened image. In comparative trials, the proposed method outperforms GSA, guided filter principal component analysis, modulation transfer function, smoothing filter-based intensity modulation, and the classic and band-decoupled variational methods. Compared with classic variational pansharpening, our method decreases the spectral angle from 3.9795 to 3.2789, decreases the root-mean-square error from 309.6987 to 228.6753, and increases the correlation coefficient from 0.9040 to 0.9367.
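The three figures of merit quoted above can be computed as follows; this is a sketch using the usual definitions (spectral angle averaged over pixels and reported in degrees), which is an assumption about the exact conventions used in the paper.

```python
import numpy as np

def spectral_angle(fused, reference):
    """Mean spectral angle (degrees) between per-pixel spectra.
    Both arrays have shape (bands, H, W)."""
    f = fused.reshape(fused.shape[0], -1)
    r = reference.reshape(reference.shape[0], -1)
    denom = np.linalg.norm(f, axis=0) * np.linalg.norm(r, axis=0) + 1e-12
    cos = (f * r).sum(axis=0) / denom
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()

def rmse(fused, reference):
    """Root-mean-square error over all bands and pixels."""
    return np.sqrt(np.mean((fused - reference) ** 2))

def correlation_coefficient(fused, reference):
    """Pearson correlation between the flattened images."""
    return np.corrcoef(fused.ravel(), reference.ravel())[0, 1]
```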

19.
Sensors (Basel) ; 18(11)2018 Oct 23.
Article in English | MEDLINE | ID: mdl-30360507

ABSTRACT

Hyperspectral images with hundreds of spectral bands have been proven to yield high performance in material classification. However, despite intensive advancement in hardware, their spatial resolution is still somewhat low compared to that of color and multispectral (MS) imagers. In this paper, we present some ideas that may further enhance the performance of remote sensing applications such as border monitoring and Mars exploration using hyperspectral images. One popular approach to enhancing the spatial resolution of hyperspectral images is pansharpening. We present a brief review of recent image resolution enhancement algorithms, including single-image super-resolution and multi-image fusion algorithms, for hyperspectral images, and highlight their advantages and limitations. Some limitations in the pansharpening process include the availability of high resolution (HR) panchromatic (pan) and/or MS images, the registration of images from multiple sources, the availability of the point spread function (PSF), and reliable and consistent image quality assessment. We suggest some proactive ideas to alleviate these issues in practice. When hyperspectral images are not available, we suggest the use of band synthesis techniques to generate HR hyperspectral images from low resolution (LR) MS images. Several recent interesting applications in border monitoring and Mars exploration using hyperspectral images are presented. Finally, some future directions in this research area are highlighted.

20.
Sensors (Basel) ; 18(4)2018 Mar 31.
Article in English | MEDLINE | ID: mdl-29614745

ABSTRACT

Although WorldView-2 (WV) images (non-pansharpened) have 2-m resolution, the revisit time for the same area may be seven days or more. In contrast, Planet images are collected using small satellites that can cover the whole Earth almost daily; however, the resolution of Planet images is 3.125 m. It would be ideal to fuse images from these two satellites to generate imagery with high spatial resolution (2 m) and high temporal resolution (1 or 2 days) for applications such as damage assessment and border monitoring that require quick decisions. In this paper, we evaluate three approaches to fusing WorldView (WV) and Planet images: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and WorldView images demonstrate that the three approaches have comparable performance and can all generate high-quality prediction images.
