Results 1-20 of 22
1.
Environ Monit Assess ; 196(9): 858, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39198321

ABSTRACT

The study presents an analysis of changes in the landscape of the Ostrava-Karviná Mining District (Czech Republic) covering a period of more than 170 years. Within the area of interest, which is affected by underground coal mining, the study identified both areas transformed by change and areas whose land cover was preserved. A detailed assessment of the landscape changes was enabled by landscape metrics and indices, namely the development index and the total landscape change index. The underlying data were obtained from stable cadastre maps (from 1836) and aerial images from 1947, 1971, and 2009. Visual photointerpretation of the aerial images and interpretation of the stable cadastre maps made it possible to create land cover maps according to CORINE Land Cover categories. The information obtained on the representation of individual land cover categories was used to identify and analyze changes in the landscape affected by hard coal mining.


Subjects
Coal Mining, Conservation of Natural Resources, Environmental Monitoring, Mining, Czech Republic, Environmental Monitoring/methods
2.
Sensors (Basel) ; 23(13)2023 Jun 24.
Article in English | MEDLINE | ID: mdl-37447701

ABSTRACT

In this study, we propose an algorithm to improve the accuracy of tiny object segmentation for precise pothole detection on asphalt pavements. The approach comprises a three-step process: MOED, VAPOR, and Exception Processing, designed to extract pothole edges, validate the results, and manage detected abnormalities. The proposed algorithm addresses the limitations of previous methods and offers several advantages, including wider coverage. We experimentally evaluated the performance of the proposed algorithm by filming roads in various regions of South Korea using a UAV at high altitudes of 30-70 m. The results show that our algorithm outperforms previous methods in terms of instance segmentation performance for small objects such as potholes. Our study offers a practical and efficient solution for pothole detection and contributes to road safety maintenance and monitoring.


Subjects
Algorithms, Gases, Hydrocarbons, Motion Pictures
3.
Sensors (Basel) ; 23(13)2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37447757

ABSTRACT

With the progress of science and technology, artificial intelligence is widely used across disciplines and has produced impressive results. Research on target detection algorithms has significantly improved the performance and role of unmanned aerial vehicles (UAVs), which play an irreplaceable part in preventing forest fires, evacuating crowds, and surveying for and rescuing explorers. Target detection algorithms deployed on UAVs are already applied in production and daily life, but achieving higher detection accuracy and better adaptability remains an open research goal. In aerial images, targets are hard to detect with conventional algorithms because of the high shooting altitude, small object size, low resolution, and scarce features. In this paper, the UN-YOLOv5s algorithm addresses the difficult problem of small target detection. A more accurate small target detection (MASD) mechanism greatly improves the detection accuracy of small and medium targets, and a multi-scale feature fusion (MCF) path fuses the semantic and location information of the image to improve the expressive ability of the model. A new convolution SimAM residual (CSR) module is introduced to make the network more stable and focused. On the VisDrone dataset, the mean average precision (mAP) of UAV necessity you only look once v5s (UN-YOLOv5s) is 8.4% higher than that of the original algorithm. Compared with YOLOv5l, mAP increases by 2.2% while Giga Floating-point Operations Per Second (GFLOPs) drop by 65.3%; compared with YOLOv3, mAP increases by 1.8% and GFLOPs drop by 75.8%; compared with YOLOv8s, mAP improves by 1.1%.


Subjects
Algorithms, Artificial Intelligence, Humans, Motivation, Photography, United Nations
4.
Environ Sci Technol ; 56(8): 4849-4858, 2022 04 19.
Article in English | MEDLINE | ID: mdl-35363471

ABSTRACT

California's dairy sector accounts for ∼50% of anthropogenic CH4 emissions in the state's greenhouse gas (GHG) emission inventory. Although California dairy facilities' location and herd size vary over time, atmospheric inverse modeling studies rely on decade-old facility-scale geospatial information. For the first time, we apply artificial intelligence (AI) to aerial imagery to estimate dairy CH4 emissions from California's San Joaquin Valley (SJV), a region with ∼90% of the state's dairy population. Using an AI method, we process 316,882 images to estimate the facility-scale herd size across the SJV. The AI approach predicts herd size that strongly (>95%) correlates with that made by human visual inspection, providing a low-cost alternative to the labor-intensive inventory development process. We estimate SJV's dairy enteric and manure CH4 emissions for 2018 to be 496-763 Gg/yr (mean = 624; 95% confidence) using the predicted herd size. We also apply our AI approach to estimate CH4 emission reduction from anaerobic digester deployment. We identify 162 large (90th percentile) farms and estimate a CH4 reduction potential of 83 Gg CH4/yr for these large facilities from anaerobic digester adoption. The results indicate that our AI approach can be applied to characterize the manure system (e.g., use of an anaerobic lagoon) and estimate GHG emissions for other sectors.
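At its core, the facility-scale estimate above is herd size multiplied by a per-head emission factor. A minimal sketch of that scaling step, assuming a made-up per-head factor and hypothetical facility herd sizes (the paper's calibrated factors and 95% confidence bounds are not reproduced here):

```python
# Sketch: scale AI-predicted herd sizes by a per-head CH4 emission factor.
# The 130 kg/head/yr factor and the herd sizes below are illustrative
# assumptions, not the paper's calibrated values.

def facility_ch4_gg_per_yr(herd_size: int, kg_ch4_per_head_yr: float = 130.0) -> float:
    """Annual CH4 (enteric + manure) for one facility, in Gg/yr."""
    return herd_size * kg_ch4_per_head_yr / 1e6  # kg -> Gg

herds = [2400, 900, 5100]  # hypothetical facility herd sizes from imagery
total = sum(facility_ch4_gg_per_yr(h) for h in herds)
```

Summing this over every facility detected in the imagery is what turns a herd-size map into a regional inventory.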


Subjects
Air Pollutants, Greenhouse Gases, Air Pollutants/analysis, Artificial Intelligence, Farms, Humans, Manure, Methane/analysis
5.
Sensors (Basel) ; 22(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36366090

ABSTRACT

CNN-based object detectors have achieved great success in recent years. Available detectors adopt horizontal bounding boxes to locate various objects. However, in some scenarios, objects such as buildings and vehicles in aerial images may be densely arranged and have apparent orientations. Some approaches therefore extend the horizontal bounding box to an oriented bounding box to better extract objects, usually by directly regressing the angle or the corners. However, this suffers from the discontinuous boundary problem caused by angular periodicity or corner ordering. In this paper, we propose a simple but efficient oriented object detector based on the YOLOv4 architecture. We regress the offset of an object's front point instead of its angle or corners to avoid the above-mentioned problems. In addition, we introduce an intersection over union (IoU) correction factor to make the training process more stable. Experimental results on two public datasets, DOTA and HRSC2016, demonstrate that the proposed method significantly outperforms other methods in detection speed while maintaining high accuracy. On DOTA, our method achieved the highest mAP for classes with prominent front-side appearances, such as small vehicles, large vehicles, and ships. The highly efficient YOLOv4 architecture yields a detection speed more than 25% higher than the other approaches.
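The front-point idea can be illustrated with a tiny decoding sketch: instead of regressing an angle directly, the network regresses the offset (dx, dy) from the box center to the object's front point, and the orientation falls out of atan2. This is a hedged sketch of the decoding step only, not the authors' full implementation:

```python
import math

def decode_oriented_box(cx, cy, dx, dy, w, h):
    """Recover an oriented box from a center (cx, cy) and the regressed
    offset (dx, dy) of the object's front point. Deriving the angle from
    the offset, rather than regressing it directly, sidesteps the
    discontinuity at the angular period boundary."""
    angle = math.atan2(dy, dx)  # orientation implied by the front point
    return cx, cy, w, h, angle

# A front point offset of (1, 1) implies a 45-degree orientation.
_, _, _, _, a = decode_oriented_box(0.0, 0.0, 1.0, 1.0, 4.0, 2.0)
```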

6.
Sensors (Basel) ; 22(2)2022 Jan 08.
Article in English | MEDLINE | ID: mdl-35062425

ABSTRACT

In-flight system failure is one of the major safety concerns in the operation of unmanned aerial vehicles (UAVs) in urban environments. To address this concern, a safety framework consisting of the following three main tasks can be utilized: (1) monitoring the health of the UAV and detecting failures, (2) finding potential safe landing spots in case a critical failure is detected in step 1, and (3) steering the UAV to a safe landing spot found in step 2. In this paper, we specifically look at the second task and investigate the feasibility of utilizing object detection methods to spot safe landing locations when the UAV suffers an in-flight failure. In particular, we investigate different versions of the YOLO object detection method and compare their performance for this specific application. We compare the performance of YOLOv3, YOLOv4, and YOLOv5l while training them on a large aerial image dataset called DOTA, on both a personal computer (PC) and a companion computer (CC). We plan to run the chosen algorithm on a CC that can be attached to a UAV, and the PC is used to verify the trends that we see between the algorithms on the CC. We confirm the feasibility of utilizing these algorithms for effective emergency landing spot detection and report their accuracy and speed for this specific application. Our investigation also shows that YOLOv5l outperforms YOLOv4 and YOLOv3 in detection accuracy while maintaining a slightly slower inference speed.


Subjects
Algorithms, Unmanned Aerial Devices
7.
ISPRS J Photogramm Remote Sens ; 177: 89-102, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34219969

ABSTRACT

Aerial scene recognition is a fundamental visual task that has attracted increasing research interest in the last few years. Most current research focuses on categorizing an aerial image with a single scene-level label, while in real-world scenarios a single image often contains multiple scenes. Therefore, in this paper, we take a step forward to a more practical and challenging task, namely multi-scene recognition in single images. Moreover, manually producing annotations for such a task is extraordinarily time- and labor-consuming. To address this, we propose a prototype-based memory network that recognizes multiple scenes in a single image by leveraging massive well-annotated single-scene images. The proposed network consists of three key components: 1) a prototype learning module, 2) a prototype-inhabiting external memory, and 3) a multi-head attention-based memory retrieval module. More specifically, we first learn the prototype representation of each aerial scene from single-scene aerial image datasets and store it in an external memory. Afterwards, a multi-head attention-based memory retrieval module is devised to retrieve the scene prototypes relevant to a query multi-scene image for the final prediction. Notably, only a limited number of annotated multi-scene images are needed in the training phase. To facilitate progress in aerial scene recognition, we produce a new multi-scene aerial image (MAI) dataset. Experimental results on various dataset configurations demonstrate the effectiveness of our network. Our dataset and code are publicly available.

8.
ISPRS J Photogramm Remote Sens ; 149: 188-199, 2019 Mar.
Article in English | MEDLINE | ID: mdl-31007387

ABSTRACT

Aerial image classification is of great significance in the remote sensing community, and much research has been conducted over the past few years. Most of these studies focus on categorizing an image with one semantic label, while in the real world an aerial image is often associated with multiple labels, e.g., multiple object-level labels in our case. Besides, a comprehensive picture of the objects present in a given high-resolution aerial image can provide a more in-depth understanding of the studied region. For these reasons, aerial image multi-label classification has been attracting increasing attention. However, one common limitation shared by existing methods is that the co-occurrence relationship of various classes, so-called class dependency, is underexplored and leads to ill-considered decisions. In this paper, we propose a novel end-to-end network, namely the class-wise attention-based convolutional and bidirectional LSTM network (CA-Conv-BiLSTM), for this task. The proposed network consists of three indispensable components: (1) a feature extraction module, (2) a class attention learning layer, and (3) a bidirectional LSTM-based sub-network. In particular, the feature extraction module is designed to extract fine-grained semantic feature maps, while the class attention learning layer aims at capturing discriminative class-specific features. As the most important part, the bidirectional LSTM-based sub-network models the underlying class dependency in both directions and produces structured multiple object labels. Experimental results on the UCM and DFC15 multi-label datasets validate the effectiveness of our model quantitatively and qualitatively.

9.
Sensors (Basel) ; 19(21)2019 Oct 28.
Article in English | MEDLINE | ID: mdl-31661940

ABSTRACT

In the field of aerial image object detection based on deep learning, it is difficult to extract features because the images are captured from a top-down perspective, which leads to numerous false detection boxes. Existing post-processing methods mainly remove overlapping detection boxes but struggle to eliminate false ones. The proposed dual non-maximum suppression (dual-NMS) combines the density of the detection boxes generated for each detected object with the corresponding classification confidence to autonomously remove false detection boxes. With dual-NMS as a post-processing step, precision is greatly improved while recall is kept unchanged. On the vehicle detection in aerial imagery (VEDAI) and dataset for object detection in aerial images (DOTA) datasets, the removal rate of false detection boxes exceeds 50%. Additionally, based on the characteristics of aerial images, a correlation calculation layer for feature channel separation and a dilated convolution guidance structure are proposed to enhance the feature extraction ability of the network; these structures constitute the correlation network (CorrNet). Compared with you only look once (YOLOv3), the mean average precision (mAP) of CorrNet on DOTA increased by 9.78%. Combined with dual-NMS, the detection effect in aerial images is significantly improved.
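A simplified, hypothetical reading of the dual-NMS idea — a true object tends to attract a dense cluster of raw detections, so cluster density and classification confidence can be fused to reject false boxes — can be sketched as follows. The density-weighted score and the thresholds here are illustrative assumptions, not the paper's exact formulation:

```python
# Simplified sketch of the dual-NMS idea: boxes whose detection cluster
# is sparse AND whose confidence is low are treated as false positives.
# The fused score s * (1 + density) and both thresholds are assumptions.

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def dual_nms(boxes, scores, density_iou=0.5, keep_thresh=0.3):
    kept = []
    for i, (b, s) in enumerate(zip(boxes, scores)):
        # density = how many other raw boxes strongly overlap this one
        density = sum(iou(b, o) > density_iou
                      for j, o in enumerate(boxes) if j != i)
        if s * (1 + density) >= keep_thresh:  # fuse density with confidence
            kept.append(i)
    return kept
```

A standard overlap-removal NMS would still run afterwards; this sketch only covers the false-box rejection step.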

10.
Sensors (Basel) ; 19(8)2019 Apr 23.
Article in English | MEDLINE | ID: mdl-31018532

ABSTRACT

In aerial images, corner points can be detected to describe the structural information of buildings for city modeling, geo-localization, and so on. For this specific vision task, the existing generic corner detectors perform poorly, as they are incapable of distinguishing corner points on buildings from those on other objects such as trees and shadows. Recently, fully convolutional networks (FCNs) have been developed for semantic image segmentation that are able to recognize a designated kind of object through a training process with a manually labeled dataset. Motivated by this achievement, an FCN-based approach is proposed in the present work to detect building corners in aerial images. First, a DeepLab model comprised of improved FCNs and fully-connected conditional random fields (CRFs) is trained end-to-end for building region segmentation. The segmentation is then further improved by using a morphological opening operation to increase its accuracy. Corner points are finally detected on the contour curves of building regions by using a scale-space detector. Experimental results show that the proposed building corner detection approach achieves an F-measure of 0.83 in the test image set and outperforms a number of state-of-the-art corner detectors by a large margin.
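The morphological cleanup step mentioned above can be sketched with SciPy; the 3×3 structuring element and the toy mask are assumptions, since the abstract does not specify a kernel:

```python
import numpy as np
from scipy import ndimage

# Sketch of the post-segmentation cleanup: a morphological opening
# removes small false-positive specks from the predicted building mask
# before corners are sought on the region contours.

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True  # a 4x4 "building" region
mask[0, 7] = True      # a one-pixel false positive

cleaned = ndimage.binary_opening(mask, structure=np.ones((3, 3), dtype=bool))
# The isolated pixel is gone; the building region survives intact.
```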

11.
Sensors (Basel) ; 18(6)2018 Jun 01.
Article in English | MEDLINE | ID: mdl-29865147

ABSTRACT

Registration of large-scale optical images with airborne LiDAR data is the basis of the integration of photogrammetry and LiDAR. However, geometric misalignments still exist between some aerial optical images and airborne LiDAR point clouds. To eliminate such misalignments, we extended a method for registering close-range optical images with terrestrial LiDAR data to a variety of large-scale aerial optical images and airborne LiDAR data. The fundamental principle is to minimize the distances from the photogrammetric matching points to the LiDAR data surface. In addition to a satisfactory efficiency of about 79 s per 6732 × 8984 image, the experimental results show that the unit-weighted root mean square (RMS) of the image points reaches a sub-pixel level (0.45 to 0.62 pixel), and that the horizontal and vertical accuracy improves to 1/4-1/2 (0.17-0.27 m) and 1/8-1/4 (0.10-0.15 m) of the average LiDAR point distance, respectively. The method thus proves accurate, feasible, efficient, and practical for a variety of large-scale aerial optical images and LiDAR data.

12.
Sensors (Basel) ; 17(12)2017 11 24.
Article in English | MEDLINE | ID: mdl-29186756

ABSTRACT

Vehicle detection in aerial images is an important and challenging task. Traditionally, many target detection models based on a sliding-window approach were developed and achieved acceptable performance, but they are time-consuming in the detection phase. Recently, with the great success of convolutional neural networks (CNNs) in computer vision, many state-of-the-art detectors have been designed based on deep CNNs. However, these CNN-based detectors are inefficient on aerial image data because existing CNN-based models struggle with small-object detection and precise localization. To improve detection accuracy without decreasing speed, we propose a CNN-based detection model combining two independent convolutional neural networks, where the first network generates a set of vehicle-like regions from multi-feature maps of different hierarchies and scales. Because the multi-feature maps combine the advantages of deep and shallow convolutional layers, the first network performs well at locating small targets in aerial image data. The generated candidate regions are then fed into the second network for feature extraction and decision making. Comprehensive experiments are conducted on the Vehicle Detection in Aerial Imagery (VEDAI) dataset and the Munich vehicle dataset. The proposed cascaded detection model yields high performance in both detection accuracy and detection speed.

13.
Cogn Res Princ Implic ; 9(1): 17, 2024 03 26.
Article in English | MEDLINE | ID: mdl-38530617

ABSTRACT

Previous work has demonstrated similarities and differences between aerial and terrestrial image viewing. Aerial scene categorization, a pivotal visual processing task for gathering geoinformation, heavily depends on rotation-invariant information. Aerial image-centered research has revealed effects of low-level features on performance of various aerial image interpretation tasks. However, there are fewer studies of viewing behavior for aerial scene categorization and of higher-level factors that might influence that categorization. In this paper, experienced subjects' eye movements were recorded while they were asked to categorize aerial scenes. A typical viewing center bias was observed. Eye movement patterns varied among categories. We explored the relationship of nine image statistics to observers' eye movements. Results showed that if the images were less homogeneous, and/or if they contained fewer or no salient diagnostic objects, viewing behavior became more exploratory. Higher- and object-level image statistics were predictive at both the image and scene category levels. Scanpaths were generally organized and small differences in scanpath randomness could be roughly captured by critical object saliency. Participants tended to fixate on critical objects. Image statistics included in this study showed rotational invariance. The results supported our hypothesis that the availability of diagnostic objects strongly influences eye movements in this task. In addition, this study provides supporting evidence for Loschky et al.'s (Journal of Vision, 15(6), 11, 2015) speculation that aerial scenes are categorized on the basis of image parts and individual objects. The findings were discussed in relation to theories of scene perception and their implications for automation development.


Subjects
Eye Movements, Visual Perception, Humans, Photic Stimulation/methods, Automation, Records
14.
Sci Rep ; 14(1): 17799, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090172

ABSTRACT

Aerial image target detection is essential for urban planning, traffic monitoring, and disaster assessment. However, existing detection algorithms struggle with small target recognition and accuracy in complex environments. To address this issue, this paper proposes an improved model based on YOLOv8, named MPE-YOLO. Initially, a multilevel feature integrator (MFI) module is employed to enhance the representation of small target features, which meticulously moderates information loss during the feature fusion process. For the backbone network of the model, a perception enhancement convolution (PEC) module is introduced to replace traditional convolutional layers, thereby expanding the network's fine-grained feature processing capability. Furthermore, an enhanced scope-C2f (ES-C2f) module is designed, utilizing channel expansion and stacking of multiscale convolutional kernels to enhance the network's ability to capture small target details. After a series of experiments on the VisDrone, RSOD, and AI-TOD datasets, the model has not only demonstrated superior performance in aerial image detection tasks compared to existing advanced algorithms but also achieved a lightweight model structure. The experimental results demonstrate the potential of MPE-YOLO in enhancing the accuracy and operational efficiency of aerial target detection. Code will be available online (https://github.com/zhanderen/MPE-YOLO).

15.
Math Biosci Eng ; 20(8): 13947-13973, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37679118

ABSTRACT

Aerial remote sensing images have complex backgrounds and numerous small targets compared to natural images, so detecting targets in aerial images is more difficult. Resource exploration and urban construction planning require detecting targets in aerial images quickly and accurately. High accuracy is undoubtedly an advantage for detection models, but it often means more complex models with larger computational and parameter budgets. Lightweight models detect quickly, but their accuracy is much lower than that of conventional models. Balancing the accuracy and speed of a model in remote sensing image detection is therefore challenging. In this paper, we propose a new YOLO model. We incorporated the structures of YOLOX-Nano and slim-neck, then used the SPPF module and the SIoU function. In addition, we designed a new upsampling paradigm that combines linear interpolation and an attention mechanism, which can effectively improve the model's accuracy. Compared with the original YOLOX-Nano, our model achieves a better accuracy-speed balance while remaining lightweight. The experimental results show that our model achieves high accuracy and speed on the NWPU VHR-10, RSOD, TGRS-HRRSD and DOTA datasets.

16.
J Imaging ; 9(1)2022 Dec 25.
Article in English | MEDLINE | ID: mdl-36662103

ABSTRACT

In this paper, we propose an aerial image stitching method based on the as-projective-as-possible (APAP) algorithm, aimed at the artifacts, distortions, and stitching failures caused by scarce feature points in multispectral aerial images with noticeable parallax. Our method incorporates the accelerated nonlinear diffusion algorithm (AKAZE) into the APAP algorithm. First, we use the fast and stable AKAZE detector to extract feature points from the aerial images; then, on top of the APAP registration model, we add line protection constraints, global similarity constraints, and local similarity constraints to protect the image structure information and produce a panorama. Experimental results on several datasets demonstrate that the proposed method is effective for multispectral aerial images: it suppresses artifacts and distortions and reduces incomplete splicing. Compared with state-of-the-art image stitching methods, including APAP and adaptive as-natural-as-possible image stitching (AANAP), and two of the most popular UAV image stitching tools, Pix4D and OpenDroneMap (ODM), our method outperforms them both quantitatively and qualitatively.
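Underlying any APAP-style stitcher is the projective mapping that carries matched feature points into the panorama frame. A minimal sketch of applying a 3×3 homography with the perspective divide (the matrix here is an illustrative pure translation, not an estimated one):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography H, with perspective divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Euclidean

# Illustrative homography: shift 5 px right, 2 px up.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0]])
warped = apply_homography(H, corners)
```

APAP's refinement is to use a location-dependent family of such matrices rather than one global H, which is what lets it handle parallax.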

17.
Math Biosci Eng ; 18(2): 986-999, 2021 Jan 05.
Article in English | MEDLINE | ID: mdl-33757171

ABSTRACT

The combination of unmanned aerial vehicle (UAV) technologies and computer vision makes UAV applications increasingly popular. Computer vision tasks based on deep learning usually require a large amount of task-related data to train algorithms for specific tasks. Since commonly used datasets are not designed for specific scenarios, giving UAVs stronger computer vision capabilities requires collecting aerial image datasets large enough to meet the training requirements. In this paper, we take low-altitude aerial image object detection as an example and propose a framework that demonstrates how to construct datasets for specific tasks. First, we introduce the existing low-altitude aerial image datasets and analyze the characteristics of low-altitude aerial images. On this basis, we put forward suggestions for data collection of low-altitude aerial images. Then, we recommend several commonly used image annotation tools and crowdsourcing platforms for generating labeled data for model training. In addition, to make up for the shortage of data, we introduce data augmentation techniques, including traditional data augmentation and augmentation based on oversampling and generative adversarial networks.

18.
Front Plant Sci ; 12: 774965, 2021.
Article in English | MEDLINE | ID: mdl-35222449

ABSTRACT

Manual assessment of flower abundance of different flowering plant species in grasslands is a time-consuming process. We present an automated approach to determine the flower abundance in grasslands from drone-based aerial images by using deep learning (Faster R-CNN) object detection approach, which was trained and evaluated on data from five flights at two sites. Our deep learning network was able to identify and classify individual flowers. The novel method allowed generating spatially explicit maps of flower abundance that met or exceeded the accuracy of the manual-count-data extrapolation method while being less labor intensive. The results were very good for some types of flowers, with precision and recall being close to or higher than 90%. Other flowers were detected poorly due to reasons such as lack of enough training data, appearance changes due to phenology, or flowers being too small to be reliably distinguishable on the aerial images. The method was able to give precise estimates of the abundance of many flowering plant species. In the future, the collection of more training data will allow better predictions for the flowers that are not well predicted yet. The developed pipeline can be applied to any sort of aerial object detection problem.

19.
Plant Methods ; 16: 87, 2020.
Article in English | MEDLINE | ID: mdl-32549903

ABSTRACT

BACKGROUND: Rapid, non-destructive measurements to predict cassava root yield over the full growing season, across large numbers of germplasm accessions and multiple environments, are a huge challenge in cassava breeding programs. Rather than waiting until the harvest season, multispectral imagery from unmanned aerial vehicles (UAVs) can measure canopy metrics and vegetation index (VI) traits at different time points of the growth cycle. Processing such time-series aerial imagery within an appropriate analytical framework is very important for the automatic extraction of phenotypic features from the image data. Many studies have demonstrated the usefulness of advanced remote sensing technologies coupled with machine learning (ML) approaches for accurate prediction of valuable crop traits. Until now, cassava has received little to no attention in aerial image-based phenotyping and ML model testing. RESULTS: To accelerate image processing, an automated image-analysis framework called CIAT Pheno-i was developed to extract plot-level vegetation indices and canopy metrics. Multiple linear regression models were constructed at different key growth stages of cassava, using ground-truth data and vegetation indices obtained from a multispectral sensor. The spectral indices/features were then combined to develop models and predict cassava root yield using different machine learning techniques. Our results showed that (1) the developed CIAT Pheno-i image-analysis framework was easier and more rapid than manual methods; (2) correlation analysis of four phenological stages of cassava revealed that elongation (EL) and late bulking (LBK) were the most useful stages for estimating above-ground biomass (AGB), below-ground biomass (BGB) and canopy height (CH); (3) multi-temporal analysis revealed that cumulative image feature information of the EL + early bulking (EBK) stages showed a higher significant correlation (r = 0.77) between the Green Normalized Difference Vegetation Index (GNDVI) and BGB than individual time points, canopy height measured on the ground correlated well with UAV-based measurements (CHuav, r = 0.92) at the LBK stage, and among the different image features, normalized difference red edge index (NDRE) data were consistently highly correlated (r = 0.65 to 0.84) with AGB at the LBK stage; (4) among the four ML algorithms used in this study, k-Nearest Neighbours (kNN), Random Forest (RF) and Support Vector Machine (SVM) showed the best performance for root yield prediction, with the highest accuracies of R2 = 0.67, 0.66 and 0.64, respectively. CONCLUSION: The UAV platforms, time-series image acquisition, automated image-analysis framework (CIAT Pheno-i), and key vegetation indices (VIs) used to estimate phenotyping traits and root yield described in this work have great potential for use as a selection tool in modern cassava breeding programs around the world to accelerate germplasm and varietal selection. The image-analysis software (CIAT Pheno-i) developed in this study can be widely applied to any other crop to rapidly extract phenotypic information.
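The vegetation indices named in this record are simple normalized band ratios; a minimal sketch with made-up reflectance values (the band variable names are assumptions about the sensor layout):

```python
import numpy as np

# GNDVI = (NIR - Green) / (NIR + Green)
# NDRE  = (NIR - RedEdge) / (NIR + RedEdge)
# Both lie in [-1, 1]; higher values indicate denser, healthier canopy.

def gndvi(nir, green):
    return (nir - green) / (nir + green)

def ndre(nir, red_edge):
    return (nir - red_edge) / (nir + red_edge)

# Hypothetical per-pixel reflectances from a multispectral sensor.
nir = np.array([0.6, 0.5])
green = np.array([0.2, 0.1])
g = gndvi(nir, green)
```

Plot-level values like those fed into the regression models are typically the mean of such per-pixel indices over each plot polygon.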

20.
Sensors (Basel) ; 9(3): 1541-58, 2009.
Article in English | MEDLINE | ID: mdl-22573971

ABSTRACT

Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measuring forest structural parameters; however, the accuracy of crown width extraction is not satisfactory with low-density LiDAR, especially in forests with high canopy cover. We used high-resolution aerial imagery with a low-density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from the LiDAR data. The LiDAR camera image was matched to the aerial image with an automated keypoint search algorithm, yielding a high registration accuracy of 0.5 pixels. A local maximum filter, watershed segmentation, and object-oriented image segmentation were used to obtain tree height and crown width. The results indicate that the camera data collected by the integrated LiDAR system play an important role in registration with aerial imagery, and that the synthesis with aerial imagery increases the accuracy of forest structural parameter extraction compared to using the low-density LiDAR data alone.
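The local-maximum step for finding candidate tree tops on a CHM can be sketched with SciPy; the window size, the 2 m height floor, and the toy CHM values are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

# Sketch: a local maximum filter over the Canopy Height Model
# (CHM = surface height - ground height) marks pixels that are the
# highest in their neighborhood as candidate tree tops; a height floor
# rejects ground and low shrubs.

chm = np.array([[0.0, 1.0, 0.0],
                [1.0, 8.0, 1.0],
                [0.0, 1.0, 0.0]])  # one 8 m tree on near-bare ground

local_max = ndimage.maximum_filter(chm, size=3)
treetops = (chm == local_max) & (chm > 2.0)
```

Watershed segmentation would then grow each detected top outward to delineate its crown and estimate crown width.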
