1.
Article En | MEDLINE | ID: mdl-38568772

The foundation model has recently garnered significant attention due to its potential to revolutionize the field of visual representation learning in a self-supervised manner. While most foundation models are tailored to effectively process RGB images for various visual tasks, there is a noticeable gap in research focused on spectral data, which offers valuable information for scene understanding, especially in remote sensing (RS) applications. To fill this gap, we created, for the first time, a universal RS foundation model, named SpectralGPT, which is purpose-built to handle spectral RS images using a novel 3D generative pretrained transformer (GPT). Compared to existing foundation models, SpectralGPT 1) accommodates input images with varying sizes, resolutions, time series, and regions in a progressive training fashion, enabling full utilization of extensive RS Big Data; 2) leverages 3D token generation for spatial-spectral coupling; 3) captures spectrally sequential patterns via multi-target reconstruction; and 4) trains on one million spectral RS images, yielding models with over 600 million parameters. Our evaluation highlights significant performance improvements with pretrained SpectralGPT models, signifying substantial potential in advancing spectral RS Big Data applications within the field of geoscience across four downstream tasks: single/multi-label scene classification, semantic segmentation, and change detection.
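
As a rough illustration of the 3D token generation for spatial-spectral coupling mentioned above, the following sketch (ours, not from the paper; patch and band-group sizes are illustrative) splits a spectral cube into flattened 3D spatial-spectral tokens:

```python
import numpy as np

def tokenize_cube(cube, p=8, s=3):
    """Split a (H, W, B) spectral cube into flattened 3D spatial-spectral tokens.

    Each token covers a p x p spatial patch over s contiguous bands, so a
    single token couples spatial and spectral context, unlike 2D RGB patches.
    """
    H, W, B = cube.shape
    assert H % p == 0 and W % p == 0 and B % s == 0
    tokens = (cube
              .reshape(H // p, p, W // p, p, B // s, s)
              .transpose(0, 2, 4, 1, 3, 5)   # group the three token indices first
              .reshape(-1, p * p * s))       # one row per token
    return tokens

cube = np.arange(32 * 32 * 12, dtype=float).reshape(32, 32, 12)
tok = tokenize_cube(cube)                    # (4*4*4, 8*8*3) = (64, 192)
```

Each row can then be linearly embedded and fed to a transformer; the actual SpectralGPT tokenizer and masking strategy are more involved.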

2.
Article En | MEDLINE | ID: mdl-37624719

Anomaly detection is a fundamental task in hyperspectral image (HSI) processing. However, most existing methods rely on pixel feature vectors and overlook the relational structure information between pixels, limiting the detection performance. In this article, we propose a novel approach to hyperspectral anomaly detection that characterizes the HSI data using a vertex- and edge-weighted graph with the pixels as vertices. The constructed graph encodes rich structural information in an affinity matrix. A crucial innovation of our method is the ability to obtain internal relations between pixels at multiple topological scales by processing different powers of the affinity matrix. This power processing is viewed as a graph evolution, which enables anomaly detection using vertex extraction formulated as a quadratic programming problem on graphs of varying topological scales. We also design a hierarchical guided filtering architecture to fuse multiscale detection results derived from graph evolution, which significantly reduces the false alarm rate. Our approach effectively characterizes the topological properties of HSIs, leveraging the structural information between pixels to improve anomaly detection accuracy. Experimental results on four real HSIs demonstrate the superior detection performance of our proposed approach compared to some state-of-the-art hyperspectral anomaly detection methods.
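
To give a feel for the "graph evolution" idea of probing relations via powers of the affinity matrix, here is a toy sketch (ours; it scores vertices by multiscale connectivity rather than solving the paper's quadratic program, and all parameters are illustrative):

```python
import numpy as np

def multiscale_anomaly_scores(X, sigma=1.0, powers=(1, 2, 4)):
    """Toy multiscale anomaly scores on a pixel graph.

    X: (N, B) pixel spectra (graph vertices). Edges carry Gaussian
    affinities; raising the affinity matrix to different powers probes
    relations at growing topological scales. Pixels that remain weakly
    connected across scales receive high anomaly scores.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    conn = []
    Ak = np.eye(len(X))
    for k in range(1, max(powers) + 1):
        Ak = Ak @ A                          # graph evolution: A^k
        if k in powers:
            c = Ak.sum(1)                    # total connectivity at scale k
            conn.append((c - c.min()) / (c.max() - c.min() + 1e-12))
    return 1.0 - np.mean(conn, axis=0)       # high score = weakly connected
```

A single spectral outlier among a tight background cluster gets the top score under this fused multiscale criterion.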

3.
Environ Sci Pollut Res Int ; 30(39): 91216-91225, 2023 Aug.
Article En | MEDLINE | ID: mdl-37474852

In 2019, the Government of Mexico City implemented actions to give citizens access to free Wi-Fi hotspots, of which more than 13,000 have been installed throughout the city. In this work, we present the results of measurements of personal exposure to radiofrequency electromagnetic fields (RF-EMF) carried out in the Plaza de la Constitución, better known as the Zócalo, located in the center of Mexico City. The measurements were taken by one of the researchers while walking on a weekday morning and afternoon through different microenvironments (on the street; on public transport, i.e., the subway; at the Zócalo; and finally, at home). We also carried out spot measurements in the center of the Zócalo. Subsequently, we carried out a comparative analysis of the different microenvironments using box plots and violin plots, and we produced georeferenced, interpolated maps of intensity levels with the Kriging method in a Geographic Information System. The Kriging interpolation gives a good visualization of the spatial distribution of RF-EMF exposure in the study area, showing the highest and lowest intensity levels. The mean values recorded at the measured points in the Zócalo were 326 µW/m2 in the 2.4- to 2.5-GHz Wi-Fi band and 2370 µW/m2 in the 5.15- to 5.85-GHz Wi-Fi band. The mean values recorded on the street were 119 µW/m2 in the 2.4- to 2.5-GHz band and 31.8 µW/m2 in the 5.15- to 5.85-GHz band, similar to the values recorded at home (122 µW/m2 and 33.9 µW/m2, respectively). All values are well below the reference levels established by the International Commission on Non-Ionizing Radiation Protection.
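
The Kriging interpolation step can be sketched in a few lines. The following is a minimal ordinary-kriging implementation (ours, not the GIS tooling used in the study) with an exponential semivariogram; the sill, range, and nugget values are illustrative assumptions:

```python
import numpy as np

def ordinary_kriging(xy, z, targets, sill=1.0, rng=50.0, nugget=0.0):
    """Minimal ordinary kriging with an exponential semivariogram.

    xy: (N, 2) sample coordinates; z: (N,) measured levels (e.g. uW/m^2);
    targets: (M, 2) points to interpolate. Returns (M,) predictions.
    """
    def gamma(h):
        return nugget + sill * (1.0 - np.exp(-h / rng))

    N = len(xy)
    H = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    K = np.empty((N + 1, N + 1))
    K[:N, :N] = gamma(H)
    K[N, :N] = K[:N, N] = 1.0            # unbiasedness constraint row/column
    K[N, N] = 0.0
    preds = []
    for t in targets:
        h0 = np.linalg.norm(xy - t, axis=1)
        rhs = np.append(gamma(h0), 1.0)
        w = np.linalg.solve(K, rhs)[:N]  # kriging weights (sum to 1)
        preds.append(w @ z)
    return np.array(preds)
```

With a zero nugget, the predictor honors the measured values exactly at the sample points, which is why kriged exposure maps pass through the spot measurements.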


Electromagnetic Fields , Environmental Exposure , Environmental Exposure/analysis , Mexico , Radio Waves , Spatial Analysis
4.
Article En | MEDLINE | ID: mdl-37022253

Most existing techniques consider hyperspectral anomaly detection (HAD) as background modeling and anomaly search problems in the spatial domain. In this article, we model the background in the frequency domain and treat anomaly detection as a frequency-domain analysis problem. We illustrate that spikes in the amplitude spectrum correspond to the background, and that a Gaussian low-pass filter applied to the amplitude spectrum is equivalent to an anomaly detector. The initial anomaly detection map is obtained by reconstruction with the filtered amplitude and the raw phase spectrum. To further suppress nonanomalous high-frequency detail, we illustrate that the phase spectrum is critical information for perceiving the spatial saliency of anomalies. The saliency-aware map obtained by phase-only reconstruction (POR) is used to enhance the initial anomaly map, which yields a significant improvement in background suppression. In addition to the standard Fourier transform (FT), we adopt the quaternion FT (QFT) to conduct multiscale and multifeature processing in parallel and obtain the frequency-domain representation of the hyperspectral images (HSIs). This contributes to robust detection performance. Experimental results on four real HSIs validate the remarkable detection performance and excellent time efficiency of our proposed approach when compared to some state-of-the-art anomaly detection methods.
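
The pipeline of "filtered amplitude + raw phase" followed by phase-only enhancement can be sketched for a single band (a simplification we made; the paper works on full HSIs via the quaternion FT, and the filter width here is an illustrative assumption):

```python
import numpy as np

def frequency_anomaly_map(img, sigma=2.0):
    """Toy single-band version of the frequency-domain anomaly detector.

    Amplitude-spectrum spikes represent background, so attenuating the
    amplitude with a Gaussian low-pass and reconstructing with the *raw*
    phase gives an initial anomaly map; a phase-only reconstruction (POR)
    acts as a saliency map that further damps non-anomalous detail.
    """
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)

    # Gaussian low-pass applied to the centered amplitude spectrum
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    g = np.exp(-(((y - h // 2) ** 2 + (x - w // 2) ** 2) / (2 * sigma ** 2)))
    amp_lp = np.fft.ifftshift(np.fft.fftshift(amp) * g)

    init = np.abs(np.fft.ifft2(amp_lp * np.exp(1j * phase)))  # initial map
    por = np.abs(np.fft.ifft2(np.exp(1j * phase)))            # phase-only saliency
    return init * por                                         # enhanced map
```

On a flat scene with one bright pixel, the enhanced map peaks at that pixel while the background is suppressed.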

5.
Sci Total Environ ; 858(Pt 3): 160008, 2023 Feb 01.
Article En | MEDLINE | ID: mdl-36368387

In this work, we present personal exposure levels to radiofrequency electromagnetic fields (RF-EMF) in the Wireless Fidelity (Wi-Fi) 2.4 GHz and 5.85 GHz bands at a Spanish university, specifically the Faculty of Computer Science Engineering at the University of Castilla-La Mancha (Albacete, Spain). We present results from three years (2017, 2018, and 2019) at the same study site and points, together with measurements carried out in 2022 inside a classroom and inside a professor's office, with the aim of comparing the measurements and verifying compliance with the reference levels established by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). The minimum average was 0.0900 µW/m2 in the 2.4 GHz Wi-Fi band, in 2019, and the maximum average was 211 µW/m2 in the 5.85 GHz Wi-Fi band, in 2017, around the building. Comparing the measurements carried out inside the classroom with and without students, the maximum value was 278 µW/m2 (classroom with students, 5.85 GHz Wi-Fi band) and the minimum value was 37.9 µW/m2 (classroom without students, 5.85 GHz Wi-Fi band). Finally, comparing the average values of all the measurements inside the classroom and inside a professor's office, the maximum value was 205 µW/m2 (5.85 GHz Wi-Fi band) inside the classroom with students, and the minimum value was 0.217 µW/m2 inside a professor's office (2.4 GHz Wi-Fi band). In no case do these values exceed the limit established by the ICNIRP, 10 W/m2 for general public exposure.


Universities , Humans , Spain
6.
J Environ Manage ; 326(Pt A): 116851, 2023 Jan 15.
Article En | MEDLINE | ID: mdl-36442350

With the development of remote sensing technology, significant progress has been made in the evaluation of the eco-environment. The remote sensing ecological index (RSEI) is one of the most widely used indices for the comprehensive evaluation of eco-environmental quality. This index is based entirely on remote sensing data and can quickly monitor eco-environmental conditions over large areas. However, the RSEI has some limitations. For example, its application is generally not uniform, the obtained results are stochastic in nature, and its calculation process cannot consider all ecological elements (especially the water element). In spite of the widespread application of the RSEI, efforts to address these limitations are scarce. In this paper, we propose a new index named the remote sensing ecological index considering full elements (RSEIFE). The proposed RSEIFE is compared with commonly used evaluation models such as the RSEI and RSEILA (remote sensing ecological index with local adaptability) in several types of study areas to assess the stability and accuracy of our model. The results show that the calculation process of RSEIFE is more stable than those of RSEI and RSEILA, and that the results of RSEIFE are consistent with the real eco-environmental surface and reveal more details about its features. Meanwhile, compared with RSEI and RSEILA, the results of RSEIFE effectively reveal the ecological benefits of both water bodies themselves and their surrounding environments, which provides a more accurate and comprehensive basis for the implementation of environmental protection policies.
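
For context, the classic RSEI couples four normalized indicator images (greenness, wetness, dryness, heat) through the first principal component. The sketch below is our own simplified version of that baseline, not the paper's RSEIFE; the indicator inputs are assumed to be precomputed index images:

```python
import numpy as np

def rsei(greenness, wetness, dryness, heat):
    """Simplified RSEI: first principal component of four indicator images.

    Indicators are min-max normalized; dryness and heat are inverted so that
    larger values always mean *better* ecological quality, then PC1 of the
    pixel-wise stack is rescaled to [0, 1].
    """
    def norm(a):
        return (a - a.min()) / (a.max() - a.min() + 1e-12)

    stack = np.stack([norm(greenness), norm(wetness),
                      1 - norm(dryness), 1 - norm(heat)], axis=-1)
    flat = stack.reshape(-1, 4)
    flat = flat - flat.mean(0)
    _, _, Vt = np.linalg.svd(flat, full_matrices=False)
    pc1 = flat @ Vt[0]                      # first principal component scores
    return norm(pc1).reshape(greenness.shape)
```

The paper's contribution is precisely in going beyond this kind of baseline, e.g. by handling the water element explicitly.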


Environmental Policy , Remote Sensing Technology , Policy , Water
7.
IEEE Trans Cybern ; 53(10): 6649-6662, 2023 Oct.
Article En | MEDLINE | ID: mdl-36395126

Spatial-spectral classification (SSC) has become a trend for hyperspectral image (HSI) classification. However, most SSC methods mainly consider local information, so that some correlations may not be effectively discovered when they appear in regions that are not contiguous. Although many SSC methods can acquire spatial-contextual characteristics via spatial filtering, they lack the ability to consider correlations in non-Euclidean spaces. To address the aforementioned issues, we develop a new semisupervised HSI classification approach based on normalized spectral clustering with kernel-based learning (NSCKL), which can aggregate local-to-global correlations to achieve a distinguishable embedding to improve HSI classification performance. In this work, we propose a normalized spectral clustering (NSC) scheme that can learn new features under a manifold assumption. Specifically, we first design a kernel-based iterative filter (KIF) to establish vertices of the undirected graph, aiming to assign initial connections to the nodes associated with pixels. The NSC first gathers local correlations in the Euclidean space and then captures global correlations in the manifold. Even though homogeneous pixels are distributed in noncontiguous regions, our NSC can still aggregate correlations to generate new (clustered) features. Finally, the clustered features and a kernel-based extreme learning machine (KELM) are employed to achieve the semisupervised classification. The effectiveness of our NSCKL is evaluated by using several HSIs. When compared with other state-of-the-art (SOTA) classification approaches, our newly proposed NSCKL demonstrates very competitive performance. The codes will be available at https://github.com/yuanchaosu/TCYB-nsckl.
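
The normalized spectral clustering at the core of the approach can be sketched as follows. This is a generic symmetric-normalized-Laplacian embedding (ours; the paper adds the kernel-based iterative filter and KELM classifier on top, and sigma and k here are illustrative):

```python
import numpy as np

def nsc_embedding(X, sigma=1.0, k=2):
    """Normalized spectral embedding of pixel spectra (a sketch).

    Builds a Gaussian affinity graph W, forms the symmetric normalized
    Laplacian L = I - D^{-1/2} W D^{-1/2}, and returns the k eigenvectors
    with smallest eigenvalues as new (clustered) features.
    """
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    Dm12 = 1.0 / np.sqrt(W.sum(1))
    L = np.eye(len(X)) - Dm12[:, None] * W * Dm12[None, :]
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return vecs[:, :k]                      # embedding rows = new features
```

Pixels from the same (possibly noncontiguous) homogeneous cluster end up close in the embedding even when far apart spatially, which is the property the abstract exploits.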

8.
Rev Environ Health ; 38(1): 193-196, 2023 Mar 28.
Article En | MEDLINE | ID: mdl-35142146

In this letter, we present some comments related to Pall's publication, in which Pall states that the electric field disappears after a few centimeters and that the magnetic field continues progressing within the studied material.


Magnetic Fields , Microwaves , Microwaves/adverse effects , Physics , Biology
10.
IEEE Trans Neural Netw Learn Syst ; 33(2): 747-761, 2022 Feb.
Article En | MEDLINE | ID: mdl-33085622

The problem of effectively exploiting the information contained in multiple data sources has become a relevant but challenging research topic in remote sensing. In this article, we propose a new approach to exploit the complementarity of two data sources: hyperspectral images (HSIs) and light detection and ranging (LiDAR) data. Specifically, we develop a new dual-channel spatial, spectral, and multiscale attention convolutional long short-term memory neural network (called dual-channel A3CLNN) for feature extraction and classification of multisource remote sensing data. Spatial, spectral, and multiscale attention mechanisms are first designed for the HSI and LiDAR data in order to learn spectral- and spatial-enhanced feature representations and to represent multiscale information for different classes. In the designed fusion network, a novel composite attention learning mechanism (combined with a three-level fusion strategy) is used to fully integrate the features from these two data sources. Finally, inspired by the idea of transfer learning, a novel stepwise training strategy is designed to yield a final classification result. Our experimental results, conducted on several multisource remote sensing data sets, demonstrate that the newly proposed dual-channel A3CLNN exhibits better feature representation ability (leading to more competitive classification performance) than other state-of-the-art methods.

11.
IEEE Trans Pattern Anal Mach Intell ; 44(7): 3523-3542, 2022 07.
Article En | MEDLINE | ID: mdl-33596172

Image segmentation is a key task in computer vision and image processing with important applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among others, and numerous segmentation algorithms are found in the literature. Against this backdrop, the broad success of deep learning (DL) has prompted the development of new image segmentation approaches leveraging DL models. We provide a comprehensive review of this recent literature, covering the spectrum of pioneering efforts in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multiscale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the relationships, strengths, and challenges of these DL-based segmentation models, examine the widely used datasets, compare performances, and discuss promising research directions.


Deep Learning , Robotics , Algorithms , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
12.
IEEE Trans Neural Netw Learn Syst ; 32(1): 376-390, 2021 Jan.
Article En | MEDLINE | ID: mdl-32217488

Recently, many convolutional neural network (CNN) methods have been designed for hyperspectral image (HSI) classification, since CNNs are able to produce good representations of data, which greatly benefits from a huge number of parameters. However, solving such a high-dimensional optimization problem often requires a large number of training samples in order to avoid overfitting. In addition, it is a typical nonconvex problem affected by many local minima and flat regions. To address these problems, in this article, we introduce the naive Gabor networks, or Gabor-Nets, which, for the first time in the literature, design and learn CNN kernels strictly in the form of Gabor filters, aiming to reduce the number of involved parameters and constrain the solution space and, hence, improve the performance of CNNs. Specifically, we develop an innovative phase-induced Gabor kernel, which is carefully designed to perform the Gabor feature learning via a linear combination of local low-frequency and high-frequency components of data controlled by the kernel phase. With the phase-induced Gabor kernel, the proposed Gabor-Nets gain the ability to automatically adapt to the local harmonic characteristics of the HSI data and, thus, yield more representative harmonic features. Also, this kernel can fulfill the traditional complex-valued Gabor filtering in a real-valued manner, hence allowing Gabor-Nets to be easily implemented in a usual CNN framework. We evaluated our newly developed Gabor-Nets on three well-known HSIs; the results suggest that our proposed Gabor-Nets can significantly improve the performance of CNNs, particularly with a small training set.
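
As a rough illustration of a real-valued Gabor kernel with an explicit phase parameter, consider the sketch below (ours; parameter names and values are illustrative, and the actual Gabor-Nets learn these parameters inside the CNN). The phase psi mixes the even (low-frequency) and odd (high-frequency) components of the filter:

```python
import numpy as np

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=1.0):
    """Real-valued Gabor kernel with an explicit phase offset psi.

    A Gaussian envelope modulates a phase-shifted cosine carrier; varying
    psi interpolates between the symmetric (even) and antisymmetric (odd)
    Gabor components.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

# A tiny bank: even (psi=0) and odd (psi=pi/2) kernels
bank = np.stack([gabor_kernel(psi=p) for p in (0, np.pi / 2)])
```

Constraining CNN kernels to this parametric family is what shrinks the parameter count relative to free-form convolutions.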

13.
IEEE Trans Cybern ; 51(7): 3588-3601, 2021 Jul.
Article En | MEDLINE | ID: mdl-33119530

The large data volume and high algorithm complexity of hyperspectral image (HSI) problems have posed big challenges for efficient classification of massive HSI data repositories. Recently, cloud computing architectures have become more relevant to address the big computational challenges introduced in the HSI field. This article proposes an acceleration method for HSI classification that relies on scheduling metaheuristics to automatically and optimally distribute the workload of HSI applications across multiple computing resources on a cloud platform. By analyzing the procedure of a representative classification method, we first develop its distributed and parallel implementation based on the MapReduce mechanism on Apache Spark. The subtasks of the processing flow that can be processed in a distributed way are identified as divisible tasks. The optimal execution of this application on Spark is further formulated as a divisible scheduling framework that takes into account both task execution precedences and task divisibility when allocating the divisible and indivisible subtasks onto computing nodes. The formulated scheduling framework is an optimization procedure that searches for optimized task assignments and partition counts for divisible tasks. Two metaheuristic algorithms are developed to solve this divisible scheduling problem. The scheduling results provide an optimized solution to the automatic processing of HSI big data on clouds, improving the computational efficiency of HSI classification by exploiting the parallelism in the processing flow. Experimental results demonstrate that our scheduling-guided approach achieves remarkable speedups by facilitating the automatic processing of HSI classification on Spark, and is scalable to the increasing HSI data volume.
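
To make the divisible-scheduling idea concrete, here is a toy greedy heuristic (ours, as a stand-in for the paper's metaheuristics; the task list and partition count are illustrative). Divisible tasks are split into equal chunks, and all chunks are placed longest-first on the least-loaded node:

```python
import heapq

def schedule(tasks, n_nodes, parts=2):
    """Greedy makespan heuristic for divisible-task scheduling (a sketch).

    tasks: list of (cost, divisible) pairs. Divisible tasks are split into
    `parts` equal chunks; chunks are then assigned longest-first to the
    currently least-loaded node. Returns (per-node assignments, makespan).
    """
    chunks = []
    for cost, divisible in tasks:
        chunks += [cost / parts] * parts if divisible else [cost]
    chunks.sort(reverse=True)                 # longest-processing-time order
    loads = [(0.0, i) for i in range(n_nodes)]
    heapq.heapify(loads)
    assign = [[] for _ in range(n_nodes)]
    for c in chunks:
        load, i = heapq.heappop(loads)        # least-loaded node
        assign[i].append(c)
        heapq.heappush(loads, (load + c, i))
    return assign, max(sum(a) for a in assign)
```

The paper's metaheuristics additionally respect execution precedences and search over the partition counts themselves, which this greedy pass ignores.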

14.
IEEE Trans Neural Netw Learn Syst ; 31(5): 1461-1474, 2020 May.
Article En | MEDLINE | ID: mdl-31295122

This paper proposes a novel end-to-end learning model, called the skip-connected covariance (SCCov) network, for remote sensing scene classification (RSSC). The innovative contribution of this paper is to embed two novel modules into the traditional convolutional neural network (CNN) model, i.e., skip connections and covariance pooling. The advantages of the newly developed SCCov are twofold. First, by means of the skip connections, the multi-resolution feature maps produced by the CNN are combined together, which provides important benefits for addressing the large-scale variance present in RSSC data sets. Second, by using covariance pooling, we can fully exploit the second-order information contained in such multi-resolution feature maps. This allows the CNN to achieve more representative feature learning when dealing with RSSC problems. Experimental results, conducted using three large-scale benchmark data sets, demonstrate that our newly proposed SCCov network exhibits very competitive or superior classification performance when compared with the current state-of-the-art RSSC techniques, while using a much smaller number of parameters. Specifically, our SCCov needs only 10% of the parameters used by its counterparts.
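
Covariance pooling itself is simple to state. The sketch below (ours; the in-network version typically adds matrix normalization) replaces average pooling of a feature map with its channel covariance:

```python
import numpy as np

def covariance_pooling(fmap):
    """Second-order (covariance) pooling of a (H, W, C) feature map.

    Flattens spatial positions into N = H*W samples, computes the C x C
    channel covariance, and returns its upper triangle as a feature vector,
    capturing second-order statistics that average pooling discards.
    """
    H, W, C = fmap.shape
    X = fmap.reshape(-1, C)
    X = X - X.mean(0)                       # center over spatial positions
    cov = X.T @ X / (X.shape[0] - 1)        # C x C channel covariance
    iu = np.triu_indices(C)
    return cov[iu]                          # length C*(C+1)//2
```

For C channels this yields a C(C+1)/2-dimensional descriptor regardless of the spatial resolution, which is what lets it pool the multi-resolution maps coming off the skip connections.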

15.
Sensors (Basel) ; 18(11)2018 Oct 25.
Article En | MEDLINE | ID: mdl-30366454

Anomaly detection aims to separate anomalous pixels from the background and has become an important application of remotely sensed hyperspectral image processing. Anomaly detection methods based on low-rank and sparse representation (LRASR) can accurately detect anomalous pixels. However, with the significant volume increase of hyperspectral image repositories, such techniques consume a significant amount of time (mainly due to the massive amount of matrix computations involved). In this paper, we propose a novel distributed parallel algorithm (DPA) that redesigns the key operators of LRASR in terms of the MapReduce model to accelerate LRASR on cloud computing architectures. Independent computation operators are explored and executed in parallel on Spark. Specifically, we reconstitute the hyperspectral images in an appropriate format for efficient DPA processing, design an optimized storage strategy, and develop a pre-merge mechanism to reduce data transmission. In addition, a repartitioning policy is proposed to improve the DPA's efficiency. Our experimental results demonstrate that the newly developed DPA achieves very high speedups when accelerating LRASR, in addition to maintaining similar accuracies. Moreover, our proposed DPA is shown to be scalable with the number of computing nodes and capable of processing big hyperspectral images involving massive amounts of data.

16.
Cient. dent. (Ed. impr.) ; 11(2): 83-92, May-Aug. 2014. illus.
Article Es | IBECS | ID: ibc-126679

Immediately after the extraction of a tooth, a series of changes begins that causes a reduction in both the height and the width of the alveolar bone ridge. To minimize these changes, different "alveolar ridge preservation" techniques have been proposed. Ridge preservation is defined as a procedure designed to maintain the dimensions of the alveolar bone ridge after the extraction of a tooth; these techniques make the correct placement of an osseointegrated dental implant possible, reducing the need for subsequent guided bone regeneration and thus meeting the esthetic requirements of prosthodontics. In this literature review, the different techniques and clinical procedures for alveolar ridge preservation were analyzed; some of the different types of biomaterials used, extractions with or without flap elevation, primary closure, the use of resorbable membranes, and immediate implant placement are compared; we attempt to reach a series of conclusions on the most appropriate procedure based on the scientific evidence (AU)


After tooth extraction, the edentulous site begins a series of changes that affect the height and width of the socket. To counteract these changes, various "ridge preservation" techniques have been proposed. Ridge preservation is defined as a procedure to maintain an acceptable ridge contour after tooth extraction; these techniques allow the correct placement of an osseointegrated implant, reducing the need for subsequent guided bone regeneration and meeting the esthetic requirements necessary for prosthodontics. In this literature review, different ridge preservation techniques and clinical procedures have been analyzed. Different types of biomaterial, flap or flapless extractions, primary closure, the use of resorbable membranes, and implants placed in the fresh extraction socket are compared; we try to reach a series of conclusions about the most indicated clinical procedures, based on the scientific literature (AU)


Humans , Alveolar Process/surgery , Alveoloplasty/methods , Alveolar Ridge Augmentation/methods , Bone Regeneration/physiology , Transplantation, Heterologous/methods , Biocompatible Materials/therapeutic use , Tooth Extraction/adverse effects
17.
IEEE Trans Image Process ; 23(8): 3574-3589, 2014 Aug.
Article En | MEDLINE | ID: mdl-24951694

The binary partition tree (BPT) is a hierarchical region-based representation of an image in a tree structure. The BPT allows users to explore the image at different segmentation scales. Often, the tree is pruned to get a more compact representation, so that the remaining nodes form an optimal partition for a given task. Here, we propose a novel BPT construction approach and pruning strategy for hyperspectral images based on spectral unmixing concepts. Linear spectral unmixing consists of finding the spectral signatures of the materials present in the image (endmembers) and their fractional abundances within each pixel. The proposed methodology exploits the local unmixing of the regions to find the partition achieving a global minimum reconstruction error. Results are presented on real hyperspectral data sets with different contexts and resolutions.
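
The region-wise criterion driving the pruning can be sketched as a linear-unmixing reconstruction error. The version below (ours) uses unconstrained least squares for the abundances, whereas unmixing in practice usually adds nonnegativity and sum-to-one constraints:

```python
import numpy as np

def reconstruction_error(pixels, endmembers):
    """RMS reconstruction error of a region under linear unmixing (sketch).

    pixels: (N, B) spectra of one BPT region; endmembers: (k, B) signatures.
    Abundances are estimated by least squares, and the per-region error is
    the quantity a pruning strategy would minimize over tree partitions.
    """
    A, *_ = np.linalg.lstsq(endmembers.T, pixels.T, rcond=None)  # (k, N) abundances
    recon = (endmembers.T @ A).T                                 # (N, B) reconstruction
    return np.sqrt(np.mean((pixels - recon) ** 2))
```

A region whose pixels are genuine mixtures of its local endmembers reconstructs with near-zero error, while a region spanning several materials not in the endmember set does not.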


Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Spectrum Analysis/methods , Reproducibility of Results , Sensitivity and Specificity
18.
Acta Oncol ; 48(7): 1044-53, 2009.
Article En | MEDLINE | ID: mdl-19575313

BACKGROUND: Single points placed on Dose-Volume Histograms (DVHs) for treatment plan acceptance are still widely used compared to the Equivalent Uniform Dose (EUD). The aim of this work is to retrospectively measure and compare the ability of both criteria to correctly predict two clinical outcomes, RTOG grade 2 acute gastrointestinal (GI) and genitourinary (GU) complications, in 137 patients treated for prostate cancer. MATERIAL AND METHODS: For both complications, the best predictions were achieved by fitting the EUD parameter and a tolerance dose (for a varying DVH point) by maximization of the Area Under the Receiver Operating Characteristic curve (AUROC). A complementary likelihood fitting of the Lyman Normal Tissue Complication Probability (NTCP) model allowed a graphical comparison between expected and observed frequencies, and allowed the associated parameters to be derived. RESULTS AND DISCUSSION: No significant differences were found between the AUROC values obtained using dose-volume or EUD criteria, but all the results highlighted the role of high doses. Limiting V65 (for grade 2 GI) or V73 (for grade 2 GU) was as predictive as limiting the EUD value, with n equal to 0.09 or 0.06 respectively, but in all cases the AUROC values were low (<0.7). Likelihood fitting gave m = 0.195 and TD50 = 72.5 Gy (fixing n = 0.06 for acute GU) and m = 0.19 and TD50 = 66 Gy (fixing n = 0.09 for acute GI). Both the AUROC and likelihood values revealed a better fit for acute GI than for acute GU. The use of a fractionation correction, new clinical contours, or previous risk factors could improve these values.
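
The two quantities being compared have compact standard forms: the generalized EUD of a differential DVH and the Lyman (probit) NTCP. A minimal sketch (ours; the fitted values n = 0.09, m = 0.19, TD50 = 66 Gy for acute GI come from the abstract):

```python
import math

def geud(doses, volumes, n):
    """Generalized EUD from a differential DVH.

    doses: bin doses (Gy); volumes: fractional volumes per bin (summing
    to 1); a small n emphasizes the high-dose tail (serial-like behavior).
    """
    a = 1.0 / n
    return sum(v * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

def lyman_ntcp(eud, td50, m):
    """Lyman NTCP: normal CDF of the normalized distance of EUD from TD50."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Acute GI fit from the abstract: a uniform 66 Gy dose sits exactly at TD50,
# so the predicted complication probability is 50%.
p = lyman_ntcp(geud([66.0], [1.0], 0.09), td50=66.0, m=0.19)
```

Note that a uniform dose equal to TD50 always yields NTCP = 0.5, which is the defining property of the TD50 parameter.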


Gastrointestinal Diseases/etiology , Male Urogenital Diseases/etiology , Prostatic Neoplasms/radiotherapy , Radiotherapy/adverse effects , Acute Disease , Dose Fractionation, Radiation , Humans , Likelihood Functions , Male , Predictive Value of Tests , Probability , ROC Curve , Radiation Injuries/etiology , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/adverse effects , Retrospective Studies , Risk Factors , Treatment Outcome
19.
Sensors (Basel) ; 9(1): 196-218, 2009.
Article En | MEDLINE | ID: mdl-22389595

Hyperspectral imaging is a new remote sensing technique that generates hundreds of images, corresponding to different wavelength channels, for the same area on the surface of the Earth. Supervised classification of hyperspectral image data sets is a challenging problem due to the limited availability of training samples (which are very difficult and costly to obtain in practice) and the extremely high dimensionality of the data. In this paper, we explore the use of multi-channel morphological profiles for feature extraction prior to classification of remotely sensed hyperspectral data sets using support vector machines (SVMs). In order to introduce multi-channel morphological transformations, which rely on ordering of pixel vectors in multidimensional space, several vector ordering strategies are investigated. A reduced implementation which builds the multi-channel morphological profile based on the first components resulting from a dimensional reduction transformation applied to the input data is also proposed. Our experimental results, conducted using three representative hyperspectral data sets collected by NASA's Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) sensor and the German Digital Airborne Imaging Spectrometer (DAIS 7915), reveal that multi-channel morphological profiles can improve single-channel morphological profiles in the task of extracting relevant features for classification of hyperspectral data using small training sets.

20.
Sensors (Basel) ; 9(2): 768-93, 2009.
Article En | MEDLINE | ID: mdl-22399938

In this paper we compare two different methodologies for Fractional Vegetation Cover (FVC) retrieval from Compact High Resolution Imaging Spectrometer (CHRIS) data onboard the European Space Agency (ESA) Project for On-Board Autonomy (PROBA) platform. The first methodology is based on empirical approaches using Vegetation Indices (VIs), in particular the Normalized Difference Vegetation Index (NDVI) and the Variable Atmospherically Resistant Index (VARI). The second methodology is based on the Spectral Mixture Analysis (SMA) technique, in which a Linear Spectral Unmixing model has been considered in order to retrieve the abundance of the different constituent materials within pixel elements, called Endmembers (EMs). These EMs were extracted from the image using three different methods: i) manual extraction using a land cover map, ii) Pixel Purity Index (PPI), and iii) Automated Morphological Endmember Extraction (AMEE). The different methodologies for FVC retrieval were applied to one PROBA/CHRIS image acquired over an agricultural area in Spain, and they were calibrated and tested against in situ measurements of FVC estimated with hemispherical photographs. The results obtained from VIs show that VARI correlates better with FVC than NDVI does, with standard errors of estimation of less than 8% in the case of VARI and less than 13% in the case of NDVI when calibrated using the in situ measurements. The results obtained from the SMA-LSU technique show Root Mean Square Errors (RMSE) below 12% when EMs are extracted with the AMEE method and around 9% when extracted with the PPI method. An RMSE value below 9% was obtained for manual extraction of EMs using a land cover map.
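
The VI-based methodology rests on two standard band-ratio formulas plus a linear scaling to cover fraction. A minimal sketch (ours; the soil/vegetation endpoint values passed to the scaling are calibration assumptions, in the study obtained from the in situ measurements):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-12)

def vari(green, red, blue):
    """VARI vegetation index, computed from visible bands only."""
    return (green - red) / (green + red - blue + 1e-12)

def fvc(vi, vi_soil, vi_veg):
    """Linear scaling of a vegetation index to fractional cover in [0, 1]."""
    return np.clip((vi - vi_soil) / (vi_veg - vi_soil), 0.0, 1.0)
```

The SMA alternative replaces the index with per-pixel endmember abundances, but the final cover estimate is again a fraction in [0, 1].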

...