Results 1 - 12 of 12
1.
Sci Rep ; 14(1): 17075, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39048601

ABSTRACT

Driving fatigue is one of the leading causes of traffic accidents each year, so research on driving fatigue detection and early-warning systems is of great practical significance. However, two problems remain in current detection methods: a single information source cannot precisely reflect the driver's actual state across different fatigue phases, and detection performance degrades, or fails altogether, under abnormal illumination. In this paper, multi-task cascaded convolutional networks (MTCNN) and infrared-based remote photoplethysmography (rPPG) are used to extract the driver's facial and physiological information, the modality-specific fatigue cues are mined in depth, and a multi-modal feature fusion model is constructed to analyze the driver's fatigue trend comprehensively. To address the low detection accuracy under abnormal illumination, the multi-modal features extracted from visible-light and infrared images are fused by a multi-loss reconstruction (MLR) module, and a driving fatigue detection module based on a Bi-LSTM model is established to exploit the temporal dynamics of fatigue. The experiments were validated under all-weather illumination scenarios on the NTHU-DDD, UTA-RLDDD and FAHD datasets. The results show that the multi-modal driving fatigue detection model outperforms the single-modal model, improving accuracy by 8.1%. Under abnormal illumination such as strong or weak light, the method reaches 91.7% accuracy at best and 83.6% at worst; under normal illumination it reaches 93.2%.
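As a rough illustration of the sequence-level step described above, the sketch below (PyTorch) fuses per-frame facial and rPPG feature vectors and classifies the clip with a bidirectional LSTM. The dimensions, module names, and simple concatenation-based fusion are illustrative assumptions, not the paper's MLR-based architecture.

# Minimal sketch: early fusion of facial and rPPG features, Bi-LSTM classifier.
import torch
import torch.nn as nn

class FatigueBiLSTM(nn.Module):
    def __init__(self, face_dim=128, rppg_dim=32, hidden=64, n_classes=2):
        super().__init__()
        self.fuse = nn.Linear(face_dim + rppg_dim, hidden)   # simple early fusion
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)          # alert vs. fatigued

    def forward(self, face_feats, rppg_feats):
        # face_feats: (batch, time, face_dim), rppg_feats: (batch, time, rppg_dim)
        x = torch.relu(self.fuse(torch.cat([face_feats, rppg_feats], dim=-1)))
        seq, _ = self.lstm(x)            # (batch, time, 2*hidden)
        return self.head(seq[:, -1])     # classify from the last time step

# Example: a 3-second clip sampled at 10 frames per second.
model = FatigueBiLSTM()
logits = model(torch.randn(4, 30, 128), torch.randn(4, 30, 32))
print(logits.shape)  # torch.Size([4, 2])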

2.
Sci Rep ; 14(1): 17319, 2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39068215

ABSTRACT

In this study, we propose a novel method for identifying lithology using an attention-mechanism-enhanced graph convolutional neural network (AGCN). The method aims to address the limitations of traditional approaches to unbalanced lithology by improving the identification of thin layers and small samples, while providing reliable data support for reservoir evaluation. To achieve this goal, we begin by using principal component analysis (PCA) with maximum-minimum distance clustering to correct the logging curves, which compensates for the low resolution of thin layers and enhances the accuracy of stratigraphic representation. We then transform the logging data into graph-structured data by connecting distance-similar and feature-similar logging samples, and use a graph convolutional network (GCN) to identify lithology, leveraging both labeled and unlabeled data to improve identification on small sample datasets. Additionally, our model incorporates a channel and spatial attention mechanism that assigns weights to the graph structure during lithology identification, improving the model's capability to discern differences across samples. To evaluate performance, we constructed a lithology dataset comprising five wells and conducted experiments. The results indicate that our approach achieves a maximum accuracy of 97.67%, surpassing single-structure models in lithology identification. In conclusion, the proposed method provides a promising and effective approach for unbalanced lithology identification, significantly improving accuracy.
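The core idea of semi-supervised node classification with an attention-enhanced GCN can be sketched as follows (PyTorch). The layer sizes, the channel-attention form, and the random example graph are illustrative assumptions, not the paper's AGCN.

# Minimal sketch: plain GCN layers plus a simple channel-attention reweighting.
import torch
import torch.nn as nn

def normalize_adj(adj):
    # Symmetric normalization D^-1/2 (A + I) D^-1/2 used by standard GCNs.
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)

class AttnGCN(nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, n_classes)
        self.channel_attn = nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())

    def forward(self, x, adj_norm):
        h = torch.relu(adj_norm @ self.w1(x))      # graph convolution
        h = h * self.channel_attn(h.mean(0))       # reweight feature channels
        return adj_norm @ self.w2(h)               # per-node class logits

# Example: 100 logging samples, 7 curves each, 4 lithology classes.
adj = (torch.rand(100, 100) > 0.95).float()
adj = ((adj + adj.t()) > 0).float()                # make the graph undirected
model = AttnGCN(7, 32, 4)
logits = model(torch.randn(100, 7), normalize_adj(adj))
print(logits.shape)  # torch.Size([100, 4])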

3.
Sensors (Basel) ; 24(4)2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38400207

ABSTRACT

In recent years, image super-resolution (SR) research has increasingly explored the capabilities of convolutional neural networks (CNNs). Current work tends to use deeper CNNs to improve performance; however, blindly increasing network depth does not effectively enhance performance, and deeper networks introduce additional training difficulties that require extra training techniques. In this paper, we propose a lightweight image super-resolution reconstruction algorithm (SISR-RFDM) based on a residual feature distillation mechanism (RFDM). Building upon residual blocks, we introduce spatial attention (SA) modules to provide more informative cues for recovering high-frequency details such as image edges and textures. Additionally, the output of each residual block is used as a hierarchical feature for global feature fusion (GFF), enhancing inter-layer information flow and feature reuse. Finally, all these features are fed into the reconstruction module to restore a high-quality image. Experimental results demonstrate that the proposed algorithm outperforms comparative algorithms in both subjective visual quality and objective evaluation metrics: the peak signal-to-noise ratio (PSNR) is improved by 0.23 dB, and the structural similarity index (SSIM) reaches 0.9607.
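A compact sketch (PyTorch) of two ingredients named in the abstract, residual blocks with spatial attention and global feature fusion over their outputs before pixel-shuffle upsampling, is given below; the channel counts and attention design are assumptions rather than the exact SISR-RFDM network.

# Minimal sketch: spatial-attention residual blocks + global feature fusion.
import torch
import torch.nn as nn

class SABlock(nn.Module):
    def __init__(self, ch=48):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.sa = nn.Conv2d(ch, 1, 1)                 # spatial attention map

    def forward(self, x):
        h = self.body(x)
        return x + h * torch.sigmoid(self.sa(h))      # residual + attention

class TinySR(nn.Module):
    def __init__(self, ch=48, n_blocks=4, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.blocks = nn.ModuleList([SABlock(ch) for _ in range(n_blocks)])
        self.gff = nn.Conv2d(ch * n_blocks, ch, 1)     # global feature fusion
        self.up = nn.Sequential(nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
                                nn.PixelShuffle(scale))

    def forward(self, lr):
        x = self.head(lr)
        feats = []
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)                            # hierarchical features
        return self.up(self.gff(torch.cat(feats, dim=1)) + feats[-1])

sr = TinySR()(torch.randn(1, 3, 32, 32))
print(sr.shape)  # torch.Size([1, 3, 64, 64])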

4.
Sensors (Basel) ; 24(2)2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38257560

ABSTRACT

Dynamic visual vending machines are rapidly growing in popularity, offering convenience and speed to customers. However, consumers sometimes damage goods and then return them to the machine, severely affecting business interests. This paper addresses the issue from the standpoint of defect detection. Although existing industrial defect detection algorithms, such as PatchCore, perform well, they face several challenges in dynamic vending environments: goods appear in various orientations, detection speeds do not meet real-time monitoring requirements, and complex backgrounds reduce detection accuracy. It is also worth noting that efficient visual features play a vital role in memory banks, yet current memory repositories for industrial inspection algorithms do not adequately address location-specific feature redundancy. To tackle these issues, this paper introduces a novel defect detection algorithm for goods using adaptive subsampling and partitioned memory banks. Firstly, Grad-CAM is utilized to extract deep features, which, in combination with shallow features, mitigate the impact of complex backgrounds on detection accuracy. Next, graph convolutional networks extract rotationally invariant features. An adaptive subsampling partitioned memory bank is then employed to store the features of non-defective goods, which reduces memory consumption and increases training speed. Experimental results on the MVTec AD dataset demonstrate that the proposed algorithm achieves a marked improvement in detection speed while maintaining accuracy comparable to state-of-the-art models.
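The memory-bank scoring principle the abstract builds on can be sketched in a few lines (NumPy): store subsampled features of non-defective goods and score a test item by its distance to the nearest stored feature. The random subsampling here merely stands in for the paper's adaptive, partitioned memory bank.

# Minimal sketch: PatchCore-style memory bank with nearest-neighbour scoring.
import numpy as np

def build_memory_bank(normal_feats, keep_ratio=0.1, seed=0):
    # normal_feats: (N, D) patch features from defect-free goods.
    rng = np.random.default_rng(seed)
    keep = rng.choice(len(normal_feats), int(len(normal_feats) * keep_ratio),
                      replace=False)
    return normal_feats[keep]

def anomaly_score(test_feats, bank):
    # Distance from each test patch feature to its nearest memory-bank entry;
    # the image-level score is the maximum patch score.
    d = np.linalg.norm(test_feats[:, None, :] - bank[None, :, :], axis=-1)
    return d.min(axis=1).max()

bank = build_memory_bank(np.random.randn(5000, 64))
print(anomaly_score(np.random.randn(300, 64), bank))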

5.
Sensors (Basel) ; 24(2)2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38257546

ABSTRACT

Existing vision-based fatigue detection methods commonly use RGB cameras to extract facial and physiological features for monitoring driver fatigue, often relying on single indicators such as eyelid movement, yawning frequency, or heart rate. However, the accuracy of RGB cameras can be affected by factors such as varying lighting conditions and motion. To address these challenges, we propose a non-invasive multi-modal fusion fatigue detection method called RPPMT-CNN-BiLSTM. The method incorporates a feature extraction enhancement module based on an improved Pan-Tompkins algorithm and 1D-MTCNN, which improves the accuracy of heart rate signal extraction and eyelid feature extraction. We then use one-dimensional convolutional neural networks to construct two models based on heart rate and PERCLOS values, forming the fatigue detection model. To enhance robustness and accuracy, the outputs of the trained models are fed into a BiLSTM network, which models the temporal relationships among the features extracted by the CNNs, enabling effective dynamic modeling and multi-modal fusion fatigue detection. Extensive experiments validate the effectiveness of the proposed method, which achieves an accuracy of 98.2% on the self-made MDAD (Multi-Modal Driver Alertness Dataset), underscoring the feasibility of the algorithm. Compared with traditional methods, our approach demonstrates higher accuracy and contributes positively to maintaining traffic safety, thereby advancing the field of smart transportation.


Subject(s)
Memory, Short-Term; Photoplethysmography; Neural Networks, Computer; Algorithms; Eyelids
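For reference, the PERCLOS indicator used above can be sketched as follows: the fraction of frames in a sliding window during which the eye is nearly closed. The 0.2 openness threshold and 30 s window below are common illustrative choices, not values taken from the paper.

# Minimal sketch: PERCLOS over a sliding window of per-frame eye openness.
import numpy as np

def perclos(eye_openness, fps=30, window_s=30, closed_thresh=0.2):
    # eye_openness: per-frame eye-aspect-ratio-like values scaled to [0, 1].
    win = fps * window_s
    closed = (np.asarray(eye_openness) < closed_thresh).astype(float)
    kernel = np.ones(win) / win
    # Sliding-window mean of the "closed" indicator.
    return np.convolve(closed, kernel, mode="valid")

openness = np.clip(np.random.rand(30 * 120), 0, 1)   # two minutes of frames
scores = perclos(openness)
print(scores.shape, float(scores.max()))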
6.
Sensors (Basel) ; 23(5)2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36904643

ABSTRACT

Small commodities often have few distinguishing features and are easily occluded by hands, so overall detection accuracy is low and small commodity detection remains a significant challenge. Therefore, this study proposes a new algorithm for detection under occlusion. Firstly, a super-resolution algorithm with an outline feature extraction module is used to process the input video frames and restore high-frequency details such as the contours and textures of the commodities. Next, residual dense networks are used for feature extraction, with an attention mechanism guiding the network to extract commodity feature information. Because small commodity features are easily ignored by the network, a new local adaptive feature enhancement module is designed to enhance regional commodity features in the shallow feature map and strengthen the expression of small commodity feature information. Finally, a detection box is generated through the regional regression network to complete the small commodity detection task. Compared to RetinaNet, the F1-score improves by 2.6% and the mean average precision improves by 2.45%. The experimental results reveal that the proposed method effectively enhances the expression of the salient features of small commodities and further improves their detection accuracy.
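The local adaptive feature enhancement idea can be sketched as a learned spatial gate that amplifies regions of a shallow feature map (PyTorch); the gating design below is an illustrative assumption, not the paper's exact module.

# Minimal sketch: per-location gate that amplifies small-commodity regions.
import torch
import torch.nn as nn

class LocalEnhance(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
                                  nn.Conv2d(ch // 4, 1, 3, padding=1),
                                  nn.Sigmoid())

    def forward(self, shallow):
        g = self.gate(shallow)          # (B, 1, H, W) spatial gate in [0, 1]
        return shallow * (1.0 + g)      # amplify, never suppress, local features

feat = torch.randn(2, 64, 80, 80)
print(LocalEnhance(64)(feat).shape)     # torch.Size([2, 64, 80, 80])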

7.
Sensors (Basel) ; 23(5)2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36904989

ABSTRACT

Pedestrian dead reckoning (PDR) is a critical component of indoor pedestrian tracking and navigation services. Most recent PDR solutions exploit the built-in inertial sensors of smartphones for next-step estimation; however, due to measurement errors and sensor drift, the accuracy of walking direction, step detection, and step-length estimation cannot be guaranteed, leading to large cumulative tracking errors. In this paper, we propose a radar-assisted PDR scheme, called RadarPDR, which integrates a frequency-modulated continuous-wave (FMCW) radar to assist inertial sensor-based PDR. We first establish a segmented wall-distance calibration model to deal with the radar ranging noise caused by irregular indoor building layouts, and fuse the wall-distance estimates with the acceleration and azimuth signals measured by the smartphone's inertial sensors. We also propose a hierarchical particle filter (PF) together with an extended Kalman filter for position and trajectory adjustment. Experiments conducted in practical indoor scenarios demonstrate that the proposed RadarPDR is efficient and stable and outperforms the widely used inertial sensor-based PDR scheme.
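For orientation, the dead-reckoning core that the radar measurements then correct amounts to advancing the position by an estimated step length along the estimated heading at each detected step. The sketch below (NumPy) shows only this update; step detection, heading estimation, and the particle-filter/EKF correction are omitted, and the 0.7 m step length is an assumed value.

# Minimal sketch: one PDR position update per detected step.
import numpy as np

def pdr_update(pos, heading_rad, step_len=0.7):
    # pos: (x, y) in metres; heading in radians from gyro/magnetometer fusion.
    return pos + step_len * np.array([np.cos(heading_rad), np.sin(heading_rad)])

pos = np.zeros(2)
for heading in np.deg2rad([0, 0, 10, 20, 90, 90]):   # a simple six-step trajectory
    pos = pdr_update(pos, heading)
print(pos)   # accumulated position after six steps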

8.
Sensors (Basel) ; 23(6)2023 Mar 22.
Article in English | MEDLINE | ID: mdl-36992064

ABSTRACT

For the recognition of goods in intelligent retail dynamic visual containers, two problems that lead to low recognition accuracy must be addressed: the lack of goods features caused by occlusion by the hand, and the high similarity between goods. This study therefore proposes an approach for recognizing occluded goods based on a generative adversarial network combined with prior inference. With DarkNet53 as the backbone network, semantic segmentation is used to locate the occluded part in the feature extraction network, while the YOLOX decoupled head is used to obtain the detection frame. Subsequently, a generative adversarial network under prior inference is used to restore and expand the features of the occluded parts, and a weighted attention mechanism module combining multi-scale spatial attention and effective channel attention is proposed to select fine-grained features of goods. Finally, a metric learning method based on the von Mises-Fisher distribution is proposed to increase the class spacing of features and thus distinguish them, and the distinguished features are used to recognize goods at a fine-grained level. The experimental data were all obtained from a self-made smart retail container dataset containing 12 types of goods, including four pairs of similar goods. Experimental results reveal that the peak signal-to-noise ratio and structural similarity under improved prior inference are 0.7743 and 0.0183 higher, respectively, than those of the other models. Compared with other leading models, the proposed method improves mAP by 1.2% and recognition accuracy by 2.82%. This study addresses two problems, occlusion by hands and high similarity between goods, thereby meeting the accuracy requirements of commodity recognition in intelligent retail and exhibiting good application prospects.
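The von Mises-Fisher-style metric learning mentioned above boils down to placing embeddings and class prototypes on the unit sphere and scaling the cosine logits; the sketch below (PyTorch) shows that scoring step, with the scale value and random prototypes as illustrative assumptions.

# Minimal sketch: cosine (vMF mean-direction) logits between embeddings and prototypes.
import torch
import torch.nn.functional as F

def vmf_logits(embeddings, prototypes, scale=16.0):
    # embeddings: (B, D), prototypes: (C, D); both L2-normalized so the dot
    # product is a cosine similarity, sharpened by a concentration-like scale.
    z = F.normalize(embeddings, dim=-1)
    mu = F.normalize(prototypes, dim=-1)
    return scale * z @ mu.t()

emb = torch.randn(8, 128)            # fine-grained goods embeddings
protos = torch.randn(12, 128)        # one prototype per goods class
probs = vmf_logits(emb, protos).softmax(dim=-1)
print(probs.shape)  # torch.Size([8, 12])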

9.
J Cosmet Dermatol ; 21(2): 451-460, 2022 Feb.
Article in English | MEDLINE | ID: mdl-33759323

ABSTRACT

BACKGROUND: "Omics" approaches are usually based on high-throughput analysis methods for the global analysis of biological samples and the discovery of biomarkers, and may provide new insights into biological phenomena. Over the last few years, the development of omics technologies has considerably accelerated the pace of dermatological research. AIMS: The purpose of this article was to review the development of omics in recent decades and its application in dermatological research. METHODS: An extensive literature search was conducted on omics technologies since the first omics research. RESULTS: This article summarizes the history and main research methods of six omics technologies: genomics, transcriptomics, proteomics, metabolomics, lipidomics, and microbiomics. Their applications in certain skin diseases and in cosmetics research and development are also summarized. CONCLUSIONS: This information will help in understanding the mechanisms of some skin diseases, aid the discovery of potential biomarkers, and provide new insights for skin health management and cosmetics research and development.


Subject(s)
Genomics; Proteomics; Biomarkers; Humans; Metabolomics
10.
J Cosmet Dermatol ; 21(5): 1920-1930, 2022 May.
Article in English | MEDLINE | ID: mdl-34357681

ABSTRACT

BACKGROUND: Epigenetics has recently evolved from a collection of diverse phenomena into a defined and far-reaching field of study. Epigenetic modifications of the genome, such as DNA methylation and histone modifications, have been reported to play a role in some skin diseases and cancers. AIMS: The purpose of this article was to review the development of epigenetics in recent decades and its applications in dermatological research. METHODS: An extensive literature search was conducted on epigenetic modifications since the first research on epigenetics. RESULTS: This article summarizes the concept and development of epigenetics, as well as the processes and principles of epigenetic modifications such as DNA methylation, histone modification, and non-coding RNA. Their application in some skin diseases and in cosmetic research and development is also summarized. CONCLUSIONS: This information will help in understanding the mechanisms of epigenetics and some non-coding RNAs, aid the discovery of related drugs, and provide new insights for skin health management and cosmetic research and development.


Subject(s)
Epigenomics; Skin Diseases; DNA Methylation; Epigenesis, Genetic; Humans; RNA, Untranslated/genetics; Skin Diseases/genetics; Skin Diseases/therapy
11.
J Med Imaging (Bellingham) ; 8(Suppl 1): 017504, 2021 Jan.
Article in English | MEDLINE | ID: mdl-34471647

ABSTRACT

Purpose: To detect and diagnose coronavirus disease 2019 (COVID-19) better and faster, separable VGG-ResNet (SVRNet) and separable VGG-DenseNet (SVDNet) models are proposed, and a detection system is designed, based on lung x-rays to diagnose whether patients are infected with COVID-19. Approach: Combining deep learning and transfer learning, 1560 lung x-ray images in the COVID-19 x-ray image database (COVID-19 Radiography Database) were used as the experimental data set, and the most representative image classification models, VGG16, ResNet50, InceptionV3, and Xception, were fine-tuned and trained. Then, two new models for lung x-ray detection, SVRNet and SVDNet, were proposed on this basis. Finally, 312 test set images (including 44 COVID-19 and 268 normal images) were used as input to evaluate the classification accuracy, sensitivity, and specificity of SVRNet and SVDNet models. Results: In the classification experiment of lung x-rays that tested positive and negative for COVID-19, the classification accuracy, sensitivity, and specificity of SVRNet and SVDNet are 99.13%, 99.14%, 99.12% and 99.37%, 99.43%, 99.31%, respectively. Compared with the VGG16 network, SVRNet and SVDNet increased by 3.07%, 2.84%, 3.31% and 3.31%, 3.13%, 3.50%, respectively. On the other hand, the parameters of SVRNet and SVDNet are 5.65 × 10⁶ and 6.57 × 10⁶, respectively. These are 61.56% and 55.31% less than VGG16, respectively. Conclusions: The SVRNet and SVDNet models proposed greatly reduce the number of parameters, while improving the accuracy and increasing the operating speed, and can accurately and quickly detect lung x-rays containing COVID-19.
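The transfer-learning setup described in the Approach can be sketched as follows (PyTorch/torchvision): load an ImageNet-pretrained backbone, freeze its convolutional features, and retrain a small head for the two-class (COVID-19 vs. normal) x-ray task. Plain VGG16 stands in here; the paper's separable SVRNet/SVDNet variants are not reproduced.

# Minimal sketch: fine-tuning a pretrained VGG16 for two-class x-ray classification.
import torch.nn as nn
from torchvision import models

def build_finetune_model(n_classes=2):
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in model.features.parameters():
        p.requires_grad = False                        # freeze convolutional features
    model.classifier[6] = nn.Linear(4096, n_classes)   # replace the final layer
    return model

model = build_finetune_model()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")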

12.
Cancer Inform ; 13(Suppl 3): 125-36, 2014.
Article in English | MEDLINE | ID: mdl-26309389

ABSTRACT

Medical imaging is becoming a vital component of the war on cancer. Tremendous amounts of medical image data are captured and recorded in digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct application of existing CBIR techniques to medical images has produced unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on recent advances in statistical graphical models and deep learning. Specifically, we first investigate a new extended probabilistic latent semantic analysis (pLSA) model to integrate the visual and textual information from medical images and bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density over the multimodal information in order to derive a missing modality. Experimental results with a large volume of real-world medical images show that our new approach is a promising solution for next-generation medical image indexing and retrieval systems.
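Once images and text are mapped into a shared latent space (the paper uses an extended pLSA model and a deep Boltzmann machine, which are not reproduced here), retrieval reduces to nearest-neighbour search in that space; the sketch below (NumPy) shows only that final cosine-similarity step over hypothetical latent vectors.

# Minimal sketch: cosine-similarity retrieval in a shared multimodal latent space.
import numpy as np

def retrieve(query_vec, db_vecs, top_k=5):
    # query_vec: (D,) latent vector; db_vecs: (N, D) indexed latent vectors.
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    sims = db @ q
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

db = np.random.randn(1000, 50)          # 1000 indexed multimodal latent vectors
idx, scores = retrieve(np.random.randn(50), db)
print(idx, scores)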
