Results 1 - 3 of 3
1.
Mar Environ Res; 183: 105829, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36495654

ABSTRACT

The rapidly growing problem of marine microplastic pollution has drawn attention globally. Microplastic particles are normally subjected to visual characterization prior to more sophisticated chemical analyses; however, the misidentification rate of current visual inspection approaches remains high. This study proposed a state-of-the-art deep learning approach, Mask R-CNN, to locate, classify, and segment large marine microplastic particles with various shapes (fiber, fragment, pellet, and rod). A microplastic dataset of 3000 images was established to train and validate the Mask R-CNN algorithm, which used a ResNet-101 backbone and could be tuned in less than 8 h. The fully trained Mask R-CNN algorithm was compared with U-Net in characterizing microplastics against various backgrounds. The results showed that the algorithm could achieve Precision = 93.30%, Recall = 95.40%, F1 score = 94.34%, APbb (average precision of bounding box) = 92.7%, and APm (average precision of mask) = 82.6% on a test dataset of 250 images. The algorithm could also achieve a processing speed of 12.5 FPS. These results imply that Mask R-CNN is a promising microplastic characterization method that can potentially be used in future large-scale surveys.


Subjects
Deep Learning, Microplastics, Plastics, Environmental Pollution, Processing Speed
2.
Sensors (Basel); 20(6), 2020 Mar 13.
Article in English | MEDLINE | ID: mdl-32183201

ABSTRACT

Image data remains an important tool for post-event building assessment and documentation. After each natural hazard event, teams of engineers make significant efforts to visit the affected regions and collect useful image data. In general, a global positioning system (GPS) can provide useful spatial information for localizing image data. However, it is challenging to collect such information when images are captured in places where GPS signals are weak or interrupted, such as the indoor spaces of buildings. The inability to document the images' locations hinders the analysis, organization, and documentation of these images, as they lack sufficient spatial context. In this work, we develop a methodology to localize images and link them to locations on a structural drawing. A stream of images can readily be gathered along the path taken through a building using a compact camera. These images may be used to compute a relative location of each image in a 3D point cloud model, which is reconstructed using a visual odometry algorithm. The images may also be used to create local 3D textured models of building components of interest using a structure-from-motion algorithm. A parallel set of images collected for building assessment is linked to the image stream using time information. By projecting the point cloud model onto the structural drawing, the images can be overlaid onto the drawing, providing the context information necessary to make use of those images. Additionally, components or damage of interest captured in these images can be reconstructed in 3D, enabling detailed assessments with sufficient geospatial context. The technique is demonstrated by emulating post-event building assessment and data collection in a real building.
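The time-based linking step described above (assigning each assessment photo the pose of the nearest-in-time visual-odometry frame) can be sketched as follows; the timestamps and poses are illustrative placeholders, not data from the paper.

```python
# Sketch of linking assessment photos to odometry poses by timestamp.
# Each photo gets the pose of the closest-in-time odometry frame.
from bisect import bisect_left

def link_by_time(odometry, photo_times):
    """odometry: time-sorted list of (timestamp, (x, y, z)) pose samples.
    Returns one pose per photo timestamp."""
    times = [t for t, _ in odometry]
    linked = []
    for t in photo_times:
        i = bisect_left(times, t)
        # Compare the two neighbouring odometry frames and keep the closer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - t))
        linked.append(odometry[best][1])
    return linked

# Illustrative odometry track and two photo timestamps.
odometry = [(0.0, (0, 0, 0)), (1.0, (1, 0, 0)), (2.0, (2, 1, 0))]
print(link_by_time(odometry, [0.1, 1.6]))  # → [(0, 0, 0), (2, 1, 0)]
```

In the actual methodology the poses would come from the visual-odometry point cloud rather than a hand-written list, but the nearest-timestamp matching is the same.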

3.
Sensors (Basel); 18(9), 2018 Sep 09.
Article in English | MEDLINE | ID: mdl-30205621

ABSTRACT

After a disaster strikes an urban area, damage to the façades of a building may produce falling hazards that endanger pedestrians and vehicles. Thus, building façades must be rapidly inspected to prevent potential loss of life and property damage. Harnessing new vision sensors and associated sensing platforms, such as unmanned aerial vehicles (UAVs), would expedite this process and alleviate the spatial and temporal limitations typically associated with human-based inspection of high-rise buildings. In this paper, we develop an approach to perform rapid and accurate visual inspection of building façades using images collected from UAVs. An orthophoto corresponding to any reasonably flat region of the building (e.g., a façade or building side) is automatically constructed using a structure-from-motion (SfM) technique, followed by image stitching and blending. Based on the geometric relationship between the collected images and the constructed orthophoto, high-resolution regions-of-interest are automatically extracted from the collected images, enabling efficient visual inspection. We successfully demonstrate the capabilities of the technique on an abandoned building whose façade has damaged components (e.g., window panes or external drainage pipes).
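The geometric relationship between an orthophoto and a source image on a planar façade is a 3×3 homography, so mapping a region-of-interest between the two reduces to a projective transform of its corners. A minimal sketch, with a made-up translation-only homography standing in for the matrix that would come from the SfM/stitching geometry:

```python
# Map region-of-interest corners through a planar homography.
# The matrix H below is a toy example (pure translation); in the actual
# pipeline it is derived from the SfM reconstruction and stitching step.
import numpy as np

def map_points(H, points):
    """Apply a 3x3 homography H to an Nx2 array of 2D points."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]  # divide out the homogeneous scale

H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])  # shifts every point by (10, 5)
roi_corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
print(map_points(H, roi_corners))
```

Cropping the source image around the mapped corners would then yield the high-resolution region-of-interest corresponding to a patch selected on the orthophoto.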
