Results 1 - 9 of 9
1.
Sensors (Basel) ; 24(16)2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39204980

ABSTRACT

To improve the reading efficiency of pointer meters, this paper proposes a reading method based on LinkNet. First, the meter dial area is detected using YOLOv8. The detected images are then fed into an improved LinkNet segmentation network. In this network, traditional convolution is replaced with partial convolution, which reduces the number of model parameters without affecting accuracy, and one pair of encoder-decoder modules is removed to further compress the model. In the feature-fusion part of the model, a CBAM (Convolutional Block Attention Module) attention module is added and the direct summation is replaced by an AFF (Attention Feature Fusion) module, which strengthens the model's feature extraction for the segmented target. In the subsequent rotation-correction stage, the paper addresses the inaccurate predictions that CNNs make for axisymmetric images over the 0-360° range by splitting rotation-angle prediction into a classification step and a regression step. This ensures that the final reading stage receives an image at the correct angle, improving the accuracy of the overall reading algorithm. The final experimental results indicate that the proposed reading method achieves a mean absolute error of 0.20 and a frame rate of 15 fps.
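The abstract does not spell out the classification-plus-regression angle head, but a minimal PyTorch sketch of the idea might look like the following; the 36 coarse angle bins and the per-bin offset regressor are assumptions, not the authors' exact design.

import torch
import torch.nn as nn

class AngleHead(nn.Module):
    """Predict a rotation angle in [0, 360) as coarse bin classification plus fine offset regression."""
    def __init__(self, in_features: int, num_bins: int = 36):
        super().__init__()
        self.num_bins = num_bins
        self.bin_size = 360.0 / num_bins
        self.cls = nn.Linear(in_features, num_bins)   # which coarse angle bin
        self.reg = nn.Linear(in_features, num_bins)   # offset within each bin, in [0, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        logits = self.cls(feats)                        # (B, num_bins)
        offsets = torch.sigmoid(self.reg(feats))        # (B, num_bins)
        bin_idx = logits.argmax(dim=1)                  # (B,)
        offset = offsets.gather(1, bin_idx.unsqueeze(1)).squeeze(1)
        return (bin_idx.float() + offset) * self.bin_size   # angle in degrees

# Training would typically combine cross-entropy on the bin logits with a regression
# loss (e.g. smooth L1) on the offset of the ground-truth bin.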

2.
Sensors (Basel) ; 24(14)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39065868

ABSTRACT

Interpolation methods, which estimate unknown values from limited information, are based on mathematical calculation. In this study, we addressed interpolation from an image-based perspective and extended image inpainting to estimate values at unknown points. When a chemical gas is dispersed in a chemical attack or act of terrorism, the gas concentration at each location can be determined from the deployed sensors, and interpolating these concentrations yields contours of gas concentration. Accurately delineating the contours of the contaminated region on a map enables an optimal response that minimizes damage. However, areas with an insufficient number of sensors produce less accurate contours than other areas. To obtain more accurate contour data, an image inpainting-based method is proposed that improves reliability by erasing and reconstructing low-accuracy areas of the contour. Partial convolution is used as the machine-learning approach for image inpainting, with a modified loss function for optimization. To train the model, we developed a gas-diffusion simulation model and generated a gas-concentration contour dataset comprising 100,000 contour images. The results were compared with Kriging interpolation, a conventional spatial interpolation method, and showed 13.21% higher accuracy. This suggests that interpolation from an image-based perspective can achieve higher accuracy than numerical interpolation on well-trained data. The proposed method was validated using gas-concentration contour data from the verified gas-dispersion modeling software Nuclear Biological Chemical Reporting And Modeling System (NBC_RAMS), developed by the Agency for Defense Development, South Korea.
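The abstract mentions a modified loss function for the partial-convolution inpainting model without giving its form; a common choice for this kind of reconstruction is an L1 loss weighted differently inside and outside the erased region, sketched below. The hole weight of 6.0 is an assumption, not a value from the paper.

import torch
import torch.nn.functional as F

def inpainting_l1_loss(pred: torch.Tensor, target: torch.Tensor,
                       mask: torch.Tensor, hole_weight: float = 6.0) -> torch.Tensor:
    """L1 reconstruction loss weighted separately for known (mask=1) and erased (mask=0) pixels.

    pred, target: (B, C, H, W) contour images; mask: (B, 1, H, W), 1 = reliable sensor area.
    """
    valid = F.l1_loss(mask * pred, mask * target)
    hole = F.l1_loss((1.0 - mask) * pred, (1.0 - mask) * target)
    return valid + hole_weight * hole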

3.
Sensors (Basel) ; 23(14)2023 Jul 20.
Article in English | MEDLINE | ID: mdl-37514845

ABSTRACT

Ship fires are one of the main threats to ship safety; because ships operate far from land, fires are difficult to extinguish and often cause heavy losses. The engine room contains a great deal of equipment and is the most common site of fire, but its complex internal environment makes fire detection difficult. Traditional detection methods have their own limitations, whereas fire detection based on deep learning offers high detection speed and accuracy. In this paper, we improve the YOLOv7-tiny model to enhance its detection performance. First, partial convolution (PConv) and coordinate attention (CA) mechanisms are introduced into the model to improve its detection speed and feature-extraction ability. Then, SIoU is used as the loss function to accelerate convergence and improve accuracy. Finally, experimental results on our ship engine-room fire dataset show that the mAP@0.5 of the improved model increases by 2.6% and the detection speed by 10 fps, which meets the needs of engine-room fire detection.
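As an illustration of the coordinate attention (CA) mechanism cited above, the sketch below follows the commonly published formulation (pooling along height and width, a shared 1x1 bottleneck, and two directional attention maps); the reduction ratio and activation are assumptions, not details taken from this paper.

import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention: factorizes spatial attention into height and width directions."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                          # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # attention along height
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # attention along width
        return x * a_h * a_w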

4.
Animals (Basel) ; 14(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38672374

ABSTRACT

In response to the high breakage rate of pigeon eggs and the significant labor costs of egg-producing pigeon farming, this study proposes YOLOv8-PG, an improved model based on YOLOv8n for detecting real versus fake pigeon eggs. Specifically, the Bottleneck in the C2f modules of the YOLOv8n backbone and neck networks is replaced with a Fasternet-EMA Block and a Fasternet Block, respectively. The Fasternet Block is designed around PConv (Partial Convolution) to reduce the model's parameter count and computational load efficiently. The EMA (Efficient Multi-scale Attention) mechanism helps mitigate interference from complex environments with pigeon-egg feature extraction. Additionally, Dysample, an ultra-lightweight and effective upsampler, is introduced into the neck network to further enhance performance at low computational overhead. Finally, the EXPMA (exponential moving average) concept is used to optimize SlideLoss, yielding the EMASlideLoss classification loss function, which addresses imbalanced data samples and enhances the model's robustness. Experimental results show that the F1-score, mAP50-95, and mAP75 of YOLOv8-PG increase by 0.76%, 1.56%, and 4.45%, respectively, over the baseline YOLOv8n model, while the parameter count and computational load are reduced by 24.69% and 22.89%. YOLOv8-PG also outperforms detection models such as Faster R-CNN, YOLOv5s, YOLOv7, and YOLOv8s. The reduction in parameter count and computational load lowers deployment costs and facilitates implementation on mobile robotic platforms.
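The abstract describes Fasternet blocks built on PConv; a generic sketch of that pattern (a 3x3 convolution over a slice of the channels followed by a pointwise MLP with a residual connection) is shown below. The channel split ratio and expansion factor are assumptions, and this is not the YOLOv8-PG configuration itself.

import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """FasterNet-style partial convolution: spatial conv on a channel slice, identity on the rest."""
    def __init__(self, channels: int, n_div: int = 4):
        super().__init__()
        self.dim_conv = channels // n_div
        self.dim_keep = channels - self.dim_conv
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv, kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_conv, x_keep = torch.split(x, [self.dim_conv, self.dim_keep], dim=1)
        return torch.cat([self.conv(x_conv), x_keep], dim=1)

class FasternetBlock(nn.Module):
    """PConv followed by a pointwise MLP, wrapped in a residual connection."""
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PartialConv(channels)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.pconv(x))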

5.
PeerJ Comput Sci ; 10: e2287, 2024.
Article in English | MEDLINE | ID: mdl-39314731

ABSTRACT

In this article, compensation algorithms for zero padding are proposed to enhance the performance of deep convolutional neural networks. By considering the characteristics of the convolving filters, the proposed methods efficiently compensate for the convolutional output errors that zero-padded inputs introduce in a convolutional neural network. The algorithms are developed primarily for a patch-based SRResNet for single-image super-resolution, and the performance comparison is carried out with the SRResNet model; owing to the generalized nature of the padding algorithms, however, their efficacy is also tested in a U-Net for lung CT image segmentation. The proposed algorithms outperform the recently developed partial convolution based padding (PCP).
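The authors' own compensation algorithms are not detailed in the abstract, but the PCP baseline they compare against can be sketched as follows: outputs near the image border are rescaled by the ratio of the full kernel area to the number of in-image pixels under the window. Layer hyperparameters in the sketch are illustrative only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConvPadding2d(nn.Conv2d):
    """Zero padding with partial-convolution re-weighting (PCP): border outputs are rescaled
    by (kernel area) / (number of in-image pixels under the window), compensating for the
    zeros introduced by padding."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = super().forward(x)
        with torch.no_grad():
            ones = torch.ones(1, 1, x.shape[2], x.shape[3], device=x.device, dtype=x.dtype)
            kernel = torch.ones(1, 1, *self.kernel_size, device=x.device, dtype=x.dtype)
            valid = F.conv2d(ones, kernel, stride=self.stride,
                             padding=self.padding, dilation=self.dilation)
            ratio = float(self.kernel_size[0] * self.kernel_size[1]) / valid
        if self.bias is not None:
            b = self.bias.view(1, -1, 1, 1)
            return (out - b) * ratio + b   # rescale only the linear part, then re-add the bias
        return out * ratio

# Drop-in replacement for a padded convolution, e.g.:
# layer = PartialConvPadding2d(64, 64, kernel_size=3, padding=1)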

6.
Sci Rep ; 14(1): 21136, 2024 09 10.
Article in English | MEDLINE | ID: mdl-39256414

ABSTRACT

The identification and classification of the phenotypic features of Auricularia cornea fruiting bodies are crucial for quality grading and breeding. These phenotypic features include size, number, shape, color, pigmentation, and damage, and they are distributed across different views of the fruiting bodies, which makes rapid and accurate identification and classification challenging. This paper proposes a novel multi-view, multi-label fast network that integrates two different views of the Auricularia cornea fruiting body, enabling rapid and precise identification and classification of six phenotypic features simultaneously. First, a multi-view feature-extraction model based on partial convolution was constructed; it incorporates channel attention mechanisms to extract phenotypic features of the fruiting body rapidly. Next, an efficient multi-task classifier based on class-specific residual attention was designed to ensure accurate classification of the phenotypic features. Finally, task weights were adjusted dynamically according to heteroscedastic uncertainty, reducing the training complexity of the multi-task classification. The proposed network achieved a classification accuracy of 94.66% and an inference time of 11.9 ms on an image dataset of dried Auricularia cornea fruiting bodies with three views and six labels. The results demonstrate that the proposed network can efficiently and accurately identify and classify all phenotypic features of Auricularia cornea.
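The abstract states that task weights are adjusted dynamically from heteroscedastic uncertainty; a widely used formulation of that idea (a learned log-variance per task, as in Kendall et al.) is sketched below as an assumption about the general mechanism, not as the paper's exact loss.

import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Weight per-task losses by learned uncertainty: total = sum_i exp(-s_i) * L_i + s_i,
    where s_i = log(sigma_i^2) is a trainable parameter for task i."""
    def __init__(self, num_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses: list) -> torch.Tensor:
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(task_losses):
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

# Usage: criterion = UncertaintyWeightedLoss(num_tasks=6); loss = criterion([l1, l2, ..., l6])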


Subject(s)
Phenotype , Basidiomycota/classification , Basidiomycota/physiology , Fruiting Bodies, Fungal , Image Processing, Computer-Assisted/methods , Algorithms , Neural Networks, Computer
7.
Int J Mach Learn Cybern ; : 1-17, 2023 Mar 25.
Article in English | MEDLINE | ID: mdl-37360881

ABSTRACT

In recent years, image inpainting methods based on deep learning have shown clear advantages over traditional methods: they generate more visually plausible image structure and texture. However, existing convolutional neural network methods often suffer from excessive color difference and from loss and distortion of image texture. This paper proposes an effective image inpainting method using generative adversarial networks, composed of two mutually independent adversarial network modules. The image repair network module addresses the repair of irregular missing regions of the image, and its generator is based on a partial convolutional network. The image optimization network module addresses local chromatic aberration in the repaired images, and its generator is based on deep residual networks. Through the synergy of the two modules, the visual effect and quality of the images are improved. Experimental results show that the proposed method (RNON) outperforms state-of-the-art methods in both qualitative and quantitative evaluations of image inpainting quality.
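Since the repair generator is described as a partial convolutional network, the sketch below shows the standard mask-aware partial convolution layer with mask update (Liu et al.); layer hyperparameters are illustrative, and the RNON architecture itself is not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Mask-aware partial convolution: the convolution is renormalized over valid (non-hole)
    pixels only, and the mask is updated for the next layer."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 stride: int = 1, padding: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        self.register_buffer("weight_mask", torch.ones(1, 1, kernel_size, kernel_size))
        self.window_size = kernel_size * kernel_size
        self.stride, self.padding = stride, padding

    def forward(self, x: torch.Tensor, mask: torch.Tensor):
        # mask: (B, 1, H, W), 1 for valid pixels, 0 for holes
        with torch.no_grad():
            valid = F.conv2d(mask, self.weight_mask, stride=self.stride, padding=self.padding)
            new_mask = (valid > 0).float()
            scale = self.window_size / valid.clamp(min=1.0)
        out = self.conv(x * mask)
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * scale * new_mask + bias
        return out, new_mask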

8.
Comput Methods Programs Biomed ; 211: 106421, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34583228

ABSTRACT

BACKGROUND AND OBJECTIVE: During the 3D reconstruction of an ultrasound volume from 2D B-scan ultrasound images, holes often appear in the reconstructed volume because of fast scanning. These holes affect the doctor's localization and assessment of lesions. In this study, we therefore propose to fill the holes using a novel content-loss-indexed 3D partial convolution network for 3D freehand ultrasound volume reconstruction. The network can synthesize novel ultrasound volume structures and reconstruct ultrasound volumes with missing regions of variable size at arbitrary locations. METHODS: First, 3D partial convolution is introduced into the convolutional layers, which are masked and renormalized to be conditioned on valid voxels only. The mask for the next layer is then updated automatically as part of the forward pass. To better preserve texture and structure in the reconstruction results, we couple the adversarial loss of the least squares generative adversarial network (LSGAN) with a novel content loss consisting of a context loss, a feature-matching loss, and a total variation loss. We further introduce a spectral-normalized LSGAN by adding spectral normalization (SN) to the generator and discriminator of the LSGAN. The proposed method is simple in formulation and stable in training. RESULTS: Experiments on public and in-vivo ultrasound datasets, and comparisons with popular algorithms, demonstrate that the proposed approach generates high-quality hole-filling results with preserved perceptual image detail. CONCLUSIONS: Given the high quality of the hole-filling results, the proposed method can effectively fill missing regions in 3D ultrasound volumes reconstructed from 2D ultrasound image sequences.
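The LSGAN adversarial terms and the spectral normalization mentioned in METHODS can be sketched as follows; the content-loss terms (context, feature-matching, total variation) and the full 3D network are omitted, and the wrapped layer is only an illustrative example.

import torch
import torch.nn as nn

def spectral_norm_conv3d(in_ch: int, out_ch: int, **kw) -> nn.Module:
    """A 3D convolution wrapped with spectral normalization (SN) for training stability."""
    return nn.utils.spectral_norm(nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1, **kw))

def lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Least-squares GAN discriminator loss: push real scores toward 1 and fake scores toward 0."""
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """Least-squares GAN generator loss: push fake scores toward 1."""
    return 0.5 * ((d_fake - 1.0) ** 2).mean()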


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Ultrasonography
9.
Nan Fang Yi Ke Da Xue Xue Bao ; 41(2): 292-298, 2021 Feb 25.
Article in Chinese | MEDLINE | ID: mdl-33624605

ABSTRACT

We propose an algorithm for registration between brain tumor images and normal brain images based on tissue recovery. A U-Net is first applied to the BraTS2018 dataset to segment the brain tumors, and PConv-Net is then used to simulate missing normal tissue in the tumor region and replace it. Finally, the normal brain image is registered to the tissue-recovered brain image. We evaluated the effectiveness of this method by comparing the registration results of the repaired image and the tumor image in the tissues surrounding the tumor area. The experimental results showed that the proposed method reduces the effect of pathological variation, achieves high registration accuracy, and effectively simulates and generates normal tissue to replace the tumor regions, thereby improving registration between brain tumor images and normal brain images.
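A heavily simplified sketch of the tissue-recovery step (segment the tumor, mask it out, inpaint with a partial-convolution network) is given below; unet and pconv_net are assumed pretrained models, and their interfaces are hypothetical rather than taken from the paper.

import torch

def recover_normal_tissue(mr_slice: torch.Tensor, unet, pconv_net) -> torch.Tensor:
    """Sketch of tissue recovery: segment the tumor with a U-Net, then let a partial-convolution
    inpainting network synthesize plausible normal tissue in that region.

    mr_slice: (1, 1, H, W) brain MR slice with intensities normalized to [0, 1].
    unet, pconv_net: assumed pretrained models (illustrative names, hypothetical interfaces).
    """
    with torch.no_grad():
        tumor_prob = torch.sigmoid(unet(mr_slice))        # (1, 1, H, W) tumor probability map
        valid_mask = (tumor_prob < 0.5).float()           # 1 = normal tissue, 0 = tumor hole
        recovered = pconv_net(mr_slice * valid_mask, valid_mask)
    # The recovered slice is then used as the target for registering the normal brain image.
    return recovered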


Subject(s)
Brain Neoplasms , Magnetic Resonance Imaging , Algorithms , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging , Humans