Results 1 - 3 of 3

1.
Sci Rep ; 13(1): 6329, 2023 Apr 18.
Article in English | MEDLINE | ID: mdl-37072434

ABSTRACT

Conventional crop height measurements from aerial drone images require 3D reconstruction of several aerial images through structure from motion. They therefore demand extensive computation time, their measurement accuracy is not high, and if the 3D reconstruction fails, several aerial photos must be captured again. To overcome these challenges, this study proposes a high-precision measurement method that uses a drone equipped with a monocular camera and a real-time kinematic global navigation satellite system (RTK-GNSS) for real-time processing. The method performs high-precision stereo matching over long baselines (approximately 1 m) during the flight by linking the RTK-GNSS positions with the aerial image capture points. Because the baseline length of a typical stereo camera is fixed, once the camera is calibrated on the ground it does not need to be calibrated again during flight; the proposed system, however, requires quick in-flight calibration because its baseline length is not fixed. A new calibration method based on zero-mean normalized cross-correlation and a two-stage least-squares method is proposed to further improve the accuracy and speed of stereo matching. The proposed method was compared with two conventional methods in natural environments. The error rates decreased by 62.2% and 69.4% for flight altitudes between 10 and 20 m, respectively. Moreover, a depth resolution of 1.6 mm and reductions of 44.4% and 63.0% in the error rates were achieved at an altitude of 4.1 m, and the execution time was 88 ms for images of 5472 × 3468 pixels, which is sufficiently fast for real-time measurement.
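The matching step described in the abstract amounts to a zero-mean normalized cross-correlation (ZNCC) search along rectified scanlines, with metric depth recovered from the RTK-GNSS-derived baseline. The following Python sketch only illustrates that idea; it is not the authors' implementation, and the function names, window size, and the assumption of already-rectified image pairs are ours.

import numpy as np

def zncc(patch_a, patch_b):
    # Zero-mean normalized cross-correlation between two equally sized patches.
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_disparity(left, right, row, col, win=7, max_disp=128):
    # Find the disparity that maximizes ZNCC along one scanline of a rectified pair.
    h = win // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    best_d, best_score = 0, -1.0
    for d in range(max_disp):
        c = col - d
        if c - h < 0:
            break
        cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
        score = zncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

def depth_from_disparity(disparity_px, baseline_m, focal_px):
    # Pinhole stereo relation Z = f * B / d; the disparity must be non-zero.
    return focal_px * baseline_m / disparity_px

With a baseline of roughly 1 m taken from the RTK-GNSS distance between the two capture points and the focal length in pixels, depth_from_disparity(d, 1.0, f) yields a metric depth for each matched pixel; the paper's two-stage least-squares in-flight calibration, which refines the stereo geometry before matching, is not shown here.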

2.
Sci Rep ; 10(1): 8132, 2020 May 18.
Article in English | MEDLINE | ID: mdl-32424180

ABSTRACT

A neural network can be trained by transfer learning, which uses a pre-trained neural network as the source task, when the target task's dataset is small. The performance of transfer learning depends on the knowledge (i.e., the layers) selected from the pre-trained network. At present, this knowledge is usually chosen by humans. The transfer learning method PathNet automatically selects pre-trained modules or adjustable modules in a modular neural network. However, PathNet requires modular neural networks as the pre-trained networks, so non-modular pre-trained networks cannot be used, which limits the versatility of the network structure. To address this limitation, we propose Stepwise PathNet, which regards the layers of a non-modular pre-trained neural network as the modules in PathNet and selects the layers automatically through training. In an experimental validation of transfer learning from InceptionV3 pre-trained on the ImageNet dataset to three other datasets (CIFAR-100, SVHN, and Food-101), Stepwise PathNet was up to 8% and 10% more accurate than fine-tuning and training from scratch, respectively. In addition, some of the selected layers were not supported by the layer functions assumed in PathNet.
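One way to read the layer-selection mechanism is as a PathNet-style tournament over a binary genotype with one gene per layer of the pre-trained network (reuse the frozen pre-trained layer versus replace it with a trainable one). The Python sketch below is a hypothetical illustration of that loop, not the authors' code; random_genotype, mutate, tournament_step, and the fitness callback are placeholder names, and the actual Stepwise PathNet procedure differs in detail.

import random

def random_genotype(num_layers):
    # One bit per layer: 1 = reuse the pre-trained layer, 0 = use a re-initialized trainable layer.
    return [random.randint(0, 1) for _ in range(num_layers)]

def mutate(genotype, rate=0.1):
    # Flip each gene independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genotype]

def tournament_step(population, fitness):
    # Pairwise tournament as in PathNet: the loser copies and mutates the winner's genotype.
    # fitness(genotype) is assumed to briefly train a network assembled from the selected
    # layers on the target dataset and return its validation accuracy.
    i, j = random.sample(range(len(population)), 2)
    fi, fj = fitness(population[i]), fitness(population[j])
    winner, loser = (i, j) if fi >= fj else (j, i)
    population[loser] = mutate(list(population[winner]))
    return population

In this reading, fitness would assemble InceptionV3 layers kept or replaced according to the genotype, fine-tune the result briefly on, say, CIFAR-100, and return the validation accuracy that drives the tournament.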

3.
Appl Opt ; 56(22): 6043-6048, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-29047798

ABSTRACT

An efficient network for super-resolution, which we refer to as inception learning super-resolution (ILSR), is proposed. We adopt the inception module from GoogLeNet to exploit multiple features from low-resolution images while keeping training fast. The proposed ILSR network demonstrates low computation time and fast convergence during training. It is divided into three parts: feature extraction, mapping, and reconstruction. In feature extraction, we apply the inception module followed by dimensional reduction. We then map features using a simple convolutional layer. Finally, we reconstruct the high-resolution component using the inception module and a 1×1 convolutional layer. Experimental results demonstrate that the proposed network constructs sharp edges and clean textures, and reduces computation time by up to three orders of magnitude compared with state-of-the-art methods.
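The three-part layout (feature extraction, mapping, reconstruction) maps naturally onto a small convolutional model. The PyTorch sketch below is our reading of that structure under assumed layer widths and kernel sizes; it is not the published ILSR architecture, and InceptionBlock and ILSRSketch are hypothetical names.

import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    # Parallel 1x1 / 3x3 / 5x5 convolutions whose outputs are concatenated, GoogLeNet-style.
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

class ILSRSketch(nn.Module):
    # Feature extraction -> mapping -> reconstruction, mirroring the three parts named in the abstract.
    def __init__(self, channels=1, branch_ch=16, mid_ch=32):
        super().__init__()
        self.extract = InceptionBlock(channels, branch_ch)                   # multi-scale feature extraction
        self.reduce = nn.Conv2d(3 * branch_ch, mid_ch, kernel_size=1)        # dimensional reduction
        self.mapping = nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1)   # simple mapping layer
        self.recon = InceptionBlock(mid_ch, branch_ch)                       # reconstruction
        self.out = nn.Conv2d(3 * branch_ch, channels, kernel_size=1)         # final 1x1 convolution

    def forward(self, x):
        # x is assumed to be the bicubic-upscaled low-resolution image.
        return self.out(self.recon(self.mapping(self.reduce(self.extract(x)))))

Whether the real ILSR network predicts the full high-resolution image or only a high-frequency residual is not stated in the abstract, so the output interpretation above is an assumption.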
