Results 1 - 20 of 174
1.
Conserv Biol ; 38(1): e14145, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37403804

ABSTRACT

Emerging technology has immense potential to increase the scale and efficiency of marine conservation. One such technology is large-area imaging (LAI), which relies on structure-from-motion photogrammetry to create composite products, including 3-dimensional (3-D) environmental models, that are larger in spatial extent than the individual images used to create them. Use of LAI has become widespread in certain fields of marine science, primarily to measure the 3-D structure of benthic ecosystems and track change over time. However, the use of LAI in the field of marine conservation appears limited. We conducted a review of the coral reef literature on the use of LAI to identify research themes and regional trends in applications of this technology. We also surveyed 135 coral reef scientists and conservation practitioners to determine community familiarity with LAI, evaluate barriers practitioners face in using LAI, and identify applications of LAI believed to be most exciting or relevant to coral conservation. Adoption of LAI was limited primarily to researchers at institutions based in advanced economies, and LAI was applied infrequently to conservation, although conservation practitioners and survey respondents from emerging economies indicated they expect to use LAI in the future. Our results revealed a disconnect between current LAI research topics and conservation priorities identified by practitioners, highlighting the need for more diverse, conservation-relevant research using LAI. We provide recommendations for how early adopters of LAI (typically Global North scientists from well-resourced institutions) can facilitate access to this conservation technology. These recommendations include developing training resources, creating partnerships for data storage and analysis, publishing standard operating procedures for LAI workflows, standardizing methods, developing tools for efficient data extraction from LAI products, and conducting conservation-relevant research using LAI.




Subjects
Anthozoa, Ecosystem, Animals, Conservation of Natural Resources/methods, Coral Reefs
2.
Sensors (Basel) ; 24(7)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38610569

ABSTRACT

The performance of three-dimensional (3D) point cloud reconstruction is affected by dynamic features such as vegetation. Vegetation can be detected by near-infrared (NIR)-based indices; however, the sensors providing multispectral data are resource intensive. To address this issue, this study proposes a two-stage framework that first improves the performance of 3D point cloud generation of buildings with a two-view SfM algorithm and then reduces the noise caused by vegetation. The proposed framework can also overcome the lack of near-infrared data when identifying vegetation areas to reduce interference in the SfM process. The first stage includes cross-sensor training, model selection and the evaluation of image-to-image RGB to color infrared (CIR) translation with Generative Adversarial Networks (GANs). The second stage includes feature detection with multiple feature detector operators, feature removal with respect to the NDVI-based vegetation classification, masking, matching, pose estimation and triangulation to generate sparse 3D point clouds. The materials utilized in both stages are a publicly available RGB-NIR dataset, and satellite and UAV imagery. The experimental results indicate that the cross-sensor and category-wise validation achieves accuracies of 0.9466 and 0.9024, with kappa coefficients of 0.8932 and 0.9110, respectively. The histogram-based evaluation demonstrates that the predicted NIR band is consistent with the original NIR data of the satellite test dataset. Finally, the test on the UAV RGB and artificially generated NIR with a segmentation-driven two-view SfM proves that the proposed framework can effectively translate RGB to CIR for NDVI calculation. Further, the artificially generated NDVI is able to segment and classify vegetation. As a result, the generated point cloud is less noisy, and the 3D model is enhanced.
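The NDVI-based masking step is simple enough to sketch. The snippet below is only an illustration, assuming the GAN-predicted NIR band and the RGB red band are available as NumPy arrays and that a fixed NDVI threshold (0.3 here, a common heuristic not taken from the paper) separates vegetation from buildings; keypoints falling on vegetation pixels are then discarded before matching.

```python
import numpy as np

def ndvi_mask(nir: np.ndarray, red: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Classify vegetation pixels from a (possibly GAN-generated) NIR band.

    NDVI = (NIR - Red) / (NIR + Red); pixels above the threshold are treated
    as vegetation and excluded from feature matching.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return ndvi > threshold  # True = vegetation, to be masked out

def filter_keypoints(keypoints, vegetation: np.ndarray):
    """Drop keypoints (cv2.KeyPoint-like objects) that fall on vegetation pixels."""
    kept = []
    for kp in keypoints:
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        inside = 0 <= y < vegetation.shape[0] and 0 <= x < vegetation.shape[1]
        if inside and not vegetation[y, x]:
            kept.append(kp)
    return kept
```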

3.
Sensors (Basel) ; 24(10)2024 May 10.
Article in English | MEDLINE | ID: mdl-38793892

ABSTRACT

Modern UAVs (unmanned aerial vehicles) equipped with video cameras can provide large-scale high-resolution video data. This poses significant challenges for structure from motion (SfM) and simultaneous localization and mapping (SLAM) algorithms, as most of them are developed for relatively small-scale and low-resolution scenes. In this paper, we present a video-based SfM method specifically designed for high-resolution large-size UAV videos. Despite the wide range of applications for SfM, applying mainstream SfM methods to such videos is challenging due to their high computational cost. Our method consists of three main steps. First, we employ a visual SLAM (VSLAM) system to efficiently extract keyframes, keypoints, initial camera poses, and sparse structures from downsampled videos. Next, we propose a novel two-step keypoint adjustment method. Instead of matching new points in the original videos, our method effectively and efficiently adjusts the existing keypoints at the original scale. Finally, we refine the poses and structures using a rotation-averaging constrained global bundle adjustment (BA) technique, incorporating the adjusted keypoints. To enrich the resources available for SLAM and SfM studies, we provide a large-size (3840 × 2160) outdoor video dataset with millimeter-level-accuracy ground control points, which supplements the current relatively low-resolution video datasets. Experiments demonstrate that, compared with other SLAM or SfM methods, our method achieves an average efficiency improvement of 100% on our collected dataset and 45% on the EuRoC dataset. Our method also demonstrates superior localization accuracy when compared with state-of-the-art SLAM or SfM methods.
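The core idea of adjusting existing keypoints at the original scale, rather than re-matching in the full-resolution frames, can be illustrated as follows. This is only a sketch under assumed details: keypoints detected on a downsampled keyframe are scaled back to full-resolution coordinates and then locally refined, with OpenCV's cv2.cornerSubPix standing in for the paper's actual two-step adjustment.

```python
import cv2
import numpy as np

def upscale_and_refine(points_small: np.ndarray, scale: float,
                       full_gray: np.ndarray, win: int = 5) -> np.ndarray:
    """Map keypoints from a downsampled keyframe back to the full-resolution
    frame and refine them locally (a stand-in for the two-step adjustment).

    points_small: (N, 2) keypoint coordinates in the downsampled frame.
    scale:        downsampling factor (e.g., 4.0 if the video was reduced 4x).
    full_gray:    full-resolution grayscale frame.
    """
    pts = (points_small * scale).astype(np.float32).reshape(-1, 1, 2)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    refined = cv2.cornerSubPix(full_gray, pts, (win, win), (-1, -1), criteria)
    return refined.reshape(-1, 2)
```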

4.
Sensors (Basel) ; 24(15)2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39123937

ABSTRACT

In the field of endoscopic imaging, challenges such as low resolution, complex textures, and blurred edges often degrade the quality of 3D reconstructed models. To address these issues, this study introduces an innovative endoscopic image super-resolution and 3D reconstruction technique named Omni-Directional Focus and Scale Resolution (OmDF-SR). This method integrates an Omnidirectional Self-Attention (OSA) mechanism, an Omnidirectional Scale Aggregation Group (OSAG), a Dual-stream Adaptive Focus Mechanism (DAFM), and a Dynamic Edge Adjustment Framework (DEAF) to enhance the accuracy and efficiency of super-resolution processing. Additionally, it employs Structure from Motion (SfM) and Multi-View Stereo (MVS) technologies to achieve high-precision medical 3D models. Experimental results indicate significant improvements in image processing with a PSNR of 38.2902 dB and an SSIM of 0.9746 at a magnification factor of ×2, and a PSNR of 32.1723 dB and an SSIM of 0.9489 at ×4. Furthermore, the method excels in reconstructing detailed 3D models, enhancing point cloud density, mesh quality, and texture mapping richness, thus providing substantial support for clinical diagnosis and surgical planning.
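For reference, the PSNR and SSIM figures quoted above are standard full-reference image-quality metrics. A minimal sketch of how they are typically computed is shown below, using scikit-image (an assumed tooling choice, not one named by the paper).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(reference: np.ndarray, upscaled: np.ndarray):
    """Compute PSNR (dB) and SSIM between a ground-truth frame and a
    super-resolved frame, as in the x2/x4 evaluations reported above.

    Both inputs are 8-bit RGB arrays of the same shape.
    """
    psnr = peak_signal_noise_ratio(reference, upscaled, data_range=255)
    ssim = structural_similarity(reference, upscaled, channel_axis=-1, data_range=255)
    return psnr, ssim
```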

5.
Sensors (Basel) ; 24(7)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38610501

ABSTRACT

Multimodal sensors capture and integrate diverse characteristics of a scene to maximize information gain. In optics, this may involve capturing intensity in specific spectra or polarization states to determine factors such as material properties or an individual's health conditions. Combining multimodal camera data with shape data from 3D sensors is challenging. Multimodal cameras, e.g., hyperspectral cameras, or cameras outside the visible light spectrum, e.g., thermal cameras, fall far short of state-of-the-art photo cameras in resolution and image quality. In this article, a new method is demonstrated to superimpose multimodal image data onto a 3D model created by multi-view photogrammetry. While a high-resolution photo camera captures a set of images from varying view angles to reconstruct a detailed 3D model of the scene, low-resolution multimodal camera(s) simultaneously record the scene. All cameras are pre-calibrated and rigidly mounted on a rig, i.e., their imaging properties and relative positions are known. The method was realized in a laboratory setup consisting of a professional photo camera, a thermal camera, and a 12-channel multispectral camera. In our experiments, an accuracy better than one pixel was achieved for the data fusion using multimodal superimposition. Finally, application examples of multimodal 3D digitization are demonstrated, and further steps to system realization are discussed.
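Because the rig is pre-calibrated, the superimposition essentially amounts to projecting the photogrammetric model into the multimodal camera and sampling its image. The sketch below illustrates that step under assumed inputs (the multimodal camera's intrinsics K, distortion coefficients, and its pose R, t in the model's coordinate frame); it is not the authors' implementation.

```python
import cv2
import numpy as np

def sample_multimodal_texture(vertices: np.ndarray, K: np.ndarray, dist: np.ndarray,
                              R: np.ndarray, t: np.ndarray,
                              modal_image: np.ndarray) -> np.ndarray:
    """Project 3D model vertices into a pre-calibrated multimodal camera and
    sample its image, e.g., to superimpose thermal values onto the model.

    Assumes a single-channel modal image; vertices is an (N, 3) array in the
    model's coordinate frame, and (R, t) maps that frame into the camera frame.
    """
    rvec, _ = cv2.Rodrigues(R)
    pix, _ = cv2.projectPoints(vertices.astype(np.float64), rvec, t, K, dist)
    pix = pix.reshape(-1, 2)
    h, w = modal_image.shape[:2]
    values = np.full(len(vertices), np.nan)  # NaN where the vertex is off-image
    for i, (u, v) in enumerate(pix):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:
            values[i] = modal_image[vi, ui]
    return values
```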

6.
Sensors (Basel) ; 24(4)2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38400240

ABSTRACT

Vehicle exterior inspection is a critical operation for identifying defects and ensuring the overall safety and integrity of vehicles. Visual-based inspection of moving objects, such as vehicles within dynamic environments abounding with reflections, presents significant challenges, especially when time and accuracy are of paramount importance. Conventional exterior inspections of vehicles require substantial labor, which is both costly and prone to errors. Recent advancements in deep learning have reduced manual labor by enabling the use of segmentation algorithms for defect detection and description based on simple RGB camera acquisitions. Nonetheless, these processes struggle with image orientation, leading to difficulties in accurately differentiating between detected defects. This results in numerous false positives and additional manual effort. Estimating image poses enables precise localization of vehicle damages within a unified 3D reference system, following initial detections in the 2D imagery. A primary challenge in this field is the extraction of distinctive features and the establishment of accurate correspondences between them, a task that typical image matching techniques struggle to address for highly reflective moving objects. In this study, we introduce an innovative end-to-end pipeline tailored for efficient image matching and stitching, specifically addressing the challenges posed by moving objects in static uncalibrated camera setups. Extracting features from moving objects with strong reflections presents significant difficulties, beyond the capabilities of current image matching algorithms. To tackle this, we introduce a novel filtering scheme that can be applied to every image matching process, provided that the input features are sufficient. A critical aspect of this module involves the exclusion of points located in the background, effectively distinguishing them from points that pertain to the vehicle itself. This is essential for accurate feature extraction and subsequent analysis. Finally, we generate a high-quality image mosaic by employing a series of sequential stereo-rectified pairs.
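The background-exclusion idea can be sketched compactly. The snippet below is an illustration, not the paper's filtering scheme: it assumes a binary vehicle mask per image (e.g., from a segmentation network) and keeps only those matches whose keypoints fall on the vehicle in both images.

```python
import numpy as np

def keep_foreground_matches(matches, kps1, kps2,
                            mask1: np.ndarray, mask2: np.ndarray):
    """Discard matches whose keypoints fall on background pixels.

    mask1/mask2 are binary vehicle masks (nonzero = vehicle) the same size as
    the images; matches are cv2.DMatch objects and kps1/kps2 are cv2.KeyPoint lists.
    """
    kept = []
    for m in matches:
        x1, y1 = (int(round(c)) for c in kps1[m.queryIdx].pt)
        x2, y2 = (int(round(c)) for c in kps2[m.trainIdx].pt)
        if mask1[y1, x1] and mask2[y2, x2]:
            kept.append(m)
    return kept
```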

7.
Morphologie ; 108(363): 100793, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964273

ABSTRACT

Advances in computer hardware and software permit the reconstruction of physical objects digitally from digital camera images. Given the varying shapes and sizes of human bones, a comprehensive assessment is required to establish the accuracy of digital bone reconstructions from three-dimensional (3D) photogrammetry. Five human bones (femur, radius, scapula, vertebra, patella) were marked with pencil, to establish between 9 and 29 landmarks. The distances between landmarks were measured from the physical bones and digitized from 3D reconstructions. Images used for reconstructions were taken on two separate days, allowing for repeatability to be established. In comparison to physical measurements, the mean (±standard deviation) absolute differences were between 0.2±0.1mm and 0.4±0.2mm. The mean (±standard deviation) absolute differences between reconstructions were between 0.3±<0.1mm and 0.4±0.4mm. The 3D photogrammetry procedures described are accurate and repeatable, permitting quantitative analyses to be conducted from digital reconstructions. Moreover, 3D photogrammetry may be used to capture and preserve anatomical materials for anatomy education.
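The accuracy check described above boils down to comparing inter-landmark distances measured on the physical bones with the same distances taken from the digital reconstruction. A minimal sketch, assuming the digitized landmarks are available as an (N, 3) coordinate array and the physical measurements as a matching vector of distances in millimetres (pair ordering is an assumption for illustration):

```python
import numpy as np
from itertools import combinations

def inter_landmark_distances(landmarks_mm: np.ndarray) -> np.ndarray:
    """All pairwise distances (mm) for an (N, 3) array of landmark coordinates
    digitized from the 3D reconstruction."""
    return np.array([np.linalg.norm(landmarks_mm[i] - landmarks_mm[j])
                     for i, j in combinations(range(len(landmarks_mm)), 2)])

def absolute_difference_summary(physical_mm: np.ndarray, digital_mm: np.ndarray):
    """Mean and standard deviation of the absolute differences between physically
    measured and digitally measured distances (same pair order assumed)."""
    diff = np.abs(np.asarray(physical_mm) - np.asarray(digital_mm))
    return diff.mean(), diff.std()
```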

8.
Ecol Lett ; 26(9): 1497-1509, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37380335

ABSTRACT

The three-dimensional structure of habitats is a critical component of species' niches driving coexistence in species-rich ecosystems. However, its influence on structuring and partitioning recruitment niches has not been widely addressed. We developed a new method to combine species distribution modelling and structure from motion, and characterized three-dimensional recruitment niches of two ecosystem engineers on Caribbean coral reefs, scleractinian corals and gorgonians. Fine-scale roughness was the most important predictor of suitable habitat for both taxa, and their niches largely overlapped, primarily due to scleractinians' broader niche breadth. Crevices and holes at mm scales on calcareous rock with low coral cover were more suitable for octocorals than for scleractinian recruits, suggesting that the decline in scleractinian corals is facilitating the recruitment of octocorals on contemporary Caribbean reefs. However, the relative abundances of the taxa were independent of the amount of suitable habitat on the reef, emphasizing that niche processes alone do not predict recruitment rates.


Subjects
Anthozoa, Animals, Ecosystem, Coral Reefs, Caribbean Region
9.
Sensors (Basel) ; 23(19)2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37837087

ABSTRACT

Transmission pipelines are technical infrastructure whose condition is subject to periodic monitoring. The aim of the research was to verify whether aerial measurement methods, especially UAV laser scanning, could determine the geometric shape of pipelines with a precision similar to that of terrestrial scanning, adopted as the reference method. The test object was a section of a district heating pipeline with two types of surfaces: matte and glossy. The pipeline was measured using four methods: terrestrial scanning, airborne scanning, UAV scanning and the structure from motion method. Then, based on the reference terrestrial scanning data, pipeline models were created, with which all methods were compared. The comparison showed that only the UAV scanning yielded results consistent with those of the terrestrial scanning for all the pipes. The differences usually did not exceed 10 mm, sometimes reaching 20 mm. The structure from motion method yielded unstable results. For the old, matte pipes, the results were similar to those of the UAV scan; however, for the new, shiny pipes, the differences were up to 60 mm.
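The millimetre-level deviation figures above come from comparing each measured point cloud against models built from the reference terrestrial scan. A minimal sketch of such a comparison, assuming both the evaluated cloud and a densely sampled reference model surface are available as point arrays (nearest-neighbour distances via a k-d tree stand in for a true point-to-surface distance):

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(evaluated: np.ndarray, reference: np.ndarray) -> dict:
    """Nearest-neighbour distances (same units as the clouds) from each
    evaluated point to a densely sampled reference model surface."""
    dists, _ = cKDTree(reference).query(evaluated)
    return {"mean": dists.mean(),
            "p95": np.percentile(dists, 95),
            "max": dists.max()}
```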

10.
Sensors (Basel) ; 23(2)2023 Jan 09.
Article in English | MEDLINE | ID: mdl-36679525

ABSTRACT

Recently, the term smartphone photogrammetry has gained popularity. This suggests that photogrammetry may become a simple measurement tool available to virtually every smartphone user. The research was undertaken to clarify whether it is appropriate to use the Structure from Motion-Multi-View Stereo (SfM-MVS) procedure with self-calibration, as is done in Uncrewed Aerial Vehicle photogrammetry. First, the geometric stability of smartphone cameras was tested. Fourteen smartphones were calibrated on the checkerboard test field. The process was repeated multiple times. The following observations were made: (1) most smartphone cameras have lower stability of the internal orientation parameters than a Digital Single-Lens Reflex (DSLR) camera, and (2) the principal distance and position of the principal point are constantly changing. Then, based on images from two selected smartphones, 3D models of a small sculpture were developed. The SfM-MVS method was used, with self-calibration and pre-calibration variants. By comparing the resultant models with the reference DSLR-created model, it was shown that introducing calibration obtained in the test field instead of self-calibration improves the geometry of 3D models. In particular, deformations of local concavities and convexities decreased. In conclusion, there is real potential in smartphone photogrammetry, but it also has its limits.


Subjects
Photogrammetry, Smartphone, Photogrammetry/methods, Calibration
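The checkerboard pre-calibration that the study recommends over self-calibration can be sketched with OpenCV. This is a minimal illustration under assumed parameters (the inner-corner count and square size are placeholders, not values from the paper):

```python
import glob
import cv2
import numpy as np

def calibrate_from_checkerboard(image_glob: str, inner_corners=(9, 6), square_mm=20.0):
    """Estimate camera intrinsics from checkerboard photos, to be used as a
    pre-calibration instead of SfM self-calibration."""
    # Planar object points of the checkerboard corners, in millimetres.
    objp = np.zeros((inner_corners[0] * inner_corners[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:inner_corners[0], 0:inner_corners[1]].T.reshape(-1, 2) * square_mm

    obj_points, img_points, size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, inner_corners)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
    return rms, K, dist  # reprojection error, camera matrix, distortion coefficients
```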