Results 1 - 6 of 6
1.
Comput Biol Med ; 179: 108842, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38996552

ABSTRACT

The fine-grained identification of sleep apnea events is instrumental in the diagnosis of Obstructive Sleep Apnea (OSA), and the development of polysomnography-based sleep apnea event detection algorithms has become a research hotspot in medical signal processing. In this paper, we propose an Inverse-Projection based Visualization System (IPVS) for sleep apnea event detection algorithms. The IPVS consists of a feature dimensionality reduction module and a feature reconstruction module. First, features of blood oxygen saturation and nasal airflow are extracted and used as input data for event analysis. Then, the feature distribution of apnea events is analyzed visually. Next, dimensionality reduction and reconstruction methods are combined to achieve dynamic visualization of sleep apnea event feature sets and visual analysis of classifier decision boundaries. Moreover, the decision-making consistency of various sleep apnea event detection classifiers is explored, giving researchers and users an intuitive understanding of the detection algorithm. We applied the IPVS to an OSA detection algorithm with an accuracy of 84% and a diagnostic accuracy of 92% on a publicly available dataset. The experimental results show that the consistency between our visualization results and prior medical knowledge provides strong evidence for the practicality of the proposed system. In clinical practice, the IPVS can guide users to focus on the samples about which the OSA detection algorithm is most uncertain, reducing workload, improving the efficiency of clinical diagnosis, and in turn increasing trust in the algorithm.
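
As a rough illustration of the inverse-projection idea described above (project features to 2-D, then map a 2-D grid back to feature space and query the classifier to draw its decision regions), the Python sketch below uses synthetic stand-in features, scikit-learn PCA, and a random forest; none of these are the paper's actual IPVS modules, feature set, or classifier.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                    # stand-in features (e.g., SpO2 and airflow statistics)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in apnea / non-apnea labels

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Project features to 2-D for display, then inverse-project a 2-D grid back
# into the original feature space and query the classifier on it.
pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)
gx, gy = np.meshgrid(np.linspace(Z[:, 0].min(), Z[:, 0].max(), 100),
                     np.linspace(Z[:, 1].min(), Z[:, 1].max(), 100))
grid_2d = np.c_[gx.ravel(), gy.ravel()]
grid_hd = pca.inverse_transform(grid_2d)            # the "inverse projection" step
boundary = clf.predict(grid_hd).reshape(gx.shape)   # decision map to overlay on the embedded points Z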

2.
Entropy (Basel) ; 26(6), 2024 May 31.
Article in English | MEDLINE | ID: mdl-38920489

ABSTRACT

In most silent speech research, continuously observing tongue movements is crucial, which requires ultrasound to extract tongue contours. Extracting ultrasonic tongue contours precisely and in real time is a major challenge. To tackle it, we introduce DAFT-Net, a novel end-to-end lightweight network for ultrasonic tongue contour extraction. By integrating the Convolutional Block Attention Module (CBAM) and the Attention Gate (AG) module with entropy-based optimization strategies, DAFT-Net establishes a comprehensive attention mechanism with dual functionality. This approach enhances feature representation by replacing the traditional skip-connection architecture, leveraging entropy and information-theoretic measures to ensure efficient and precise feature selection. In addition, the U-Net encoder and decoder layers have been streamlined to reduce computational demands, a reduction guided by information theory so that the network's ability to capture and use critical information is not compromised. Ablation studies confirm the efficacy of the integrated attention module and its components. A comparative analysis on the NS, TGU, and TIMIT datasets shows that DAFT-Net extracts the relevant features efficiently and significantly reduces extraction time. These findings demonstrate the practical advantages of applying entropy and information-theory principles to improve the performance of medical image segmentation networks, paving the way for real-world applications.
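
For readers unfamiliar with the attention components named above, the following PyTorch sketch shows a generic CBAM-style block (channel attention from pooled descriptors followed by spatial attention); the layer sizes, reduction ratio, and 7x7 spatial kernel are common defaults, not DAFT-Net's actual configuration.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # channel attention: shared MLP over average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention: convolution over channel-wise average/max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

out = CBAM(32)(torch.randn(2, 32, 64, 64))  # output has the same shape as the input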

3.
IEEE J Biomed Health Inform ; 26(1): 7-16, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34347609

ABSTRACT

Generation-based data augmentation can, to a certain extent, overcome the challenge posed by imbalanced medical image data. However, most current research focuses on images with a unified structure, which are easy to learn. Ultrasound images, in contrast, are structurally inadequate, making their structure difficult for a generative network to capture, so the generated images lack structural legitimacy. We therefore propose a Progressive Generative Adversarial Method for Structurally Inadequate Medical Image Data Augmentation, comprising a network and a strategy. Our Progressive Texture Generative Adversarial Network alleviates the adverse effect of completely truncating the reconstruction of structure and texture during the generation process and enhances the implicit association between structure and texture. The Image Data Augmentation Strategy based on Mask-Reconstruction addresses data imbalance from a novel perspective, maintains the structural legitimacy of the generated data, and interpretably increases the diversity of disease data. Experiments demonstrate, both qualitatively and quantitatively, the effectiveness of our method for data augmentation and image reconstruction on structurally inadequate medical images (see the sketch after the subject terms below). Finally, weakly supervised segmentation of the lesion is an additional contribution of our method.


Subject(s)
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Ultrasonography
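
A minimal numpy sketch of the mask-reconstruction augmentation idea: the anatomical structure outside a lesion mask is kept untouched and only the masked region is replaced with new texture. The texture here is random noise standing in for the output of the paper's generative network, and the blending weight is an assumption.

import numpy as np

def mask_reconstruct_augment(image, lesion_mask, lesion_texture, alpha=0.8):
    """image, lesion_texture: (H, W) float arrays in [0, 1]; lesion_mask: (H, W) bool."""
    out = image.copy()
    # structure outside the mask is preserved; inside, new texture is blended in
    out[lesion_mask] = alpha * lesion_texture[lesion_mask] + (1 - alpha) * image[lesion_mask]
    return out

rng = np.random.default_rng(0)
img = rng.random((128, 128))                       # stand-in ultrasound image
mask = np.zeros((128, 128), dtype=bool)
mask[40:70, 50:90] = True                          # stand-in lesion region
aug = mask_reconstruct_augment(img, mask, rng.random((128, 128)))
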
4.
Math Biosci Eng ; 18(4): 3578-3597, 2021 Apr 25.
Article in English | MEDLINE | ID: mdl-34198402

ABSTRACT

In this paper, we propose a Robust Breast Cancer Diagnostic System (RBCDS) based on multimode Magnetic Resonance (MR) images. First, we design a four-mode convolutional neural network (FMS-PCNN) model to detect whether an image contains a tumor. Features of the images generated by the different imaging modes are extracted and fused to form the basis for classification. The classification model uses both spatial pyramid pooling (SPP) and principal component analysis (PCA): SPP enables the network to process images of different sizes and avoids the loss caused by image resizing, while PCA removes redundant information from the fused features of the multi-sequence images. This model achieves a best accuracy of 94.6%. We then use our optimized U-Net (SU-Net) to segment the tumor from the entire image; the SU-Net achieves a mean Dice coefficient (DC) of 0.867. Finally, the performance of the system is analyzed to show that it is superior to existing schemes.


Subject(s)
Breast Neoplasms; Image Processing, Computer-Assisted; Breast Neoplasms/diagnostic imaging; Female; Humans; Magnetic Resonance Imaging; Neural Networks, Computer
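
To illustrate why SPP lets a network accept MR images of different sizes, the PyTorch sketch below pools a feature map at several pyramid levels into a fixed-length vector; the levels (1, 2, 4) and the use of max pooling are assumptions, not the paper's reported configuration.

import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feat, levels=(1, 2, 4)):
    """feat: (B, C, H, W) feature map -> (B, C * sum(l*l)) fixed-length vector."""
    pooled = [F.adaptive_max_pool2d(feat, output_size=l).flatten(start_dim=1)
              for l in levels]
    return torch.cat(pooled, dim=1)

# Feature maps of any spatial size yield descriptors of the same length:
a = spatial_pyramid_pool(torch.randn(2, 64, 37, 53))    # -> (2, 64 * 21)
b = spatial_pyramid_pool(torch.randn(2, 64, 112, 96))   # -> (2, 64 * 21)
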
5.
IEEE Trans Industr Inform ; 17(9): 6510-6518, 2021 Sep.
Article in English | MEDLINE | ID: mdl-37981910

ABSTRACT

Because of its fast transmission and severe health damage, COVID-19 has attracted global attention, and early diagnosis and isolation are effective and imperative strategies for epidemic prevention and control. Most diagnostic methods for COVID-19 are based on nucleic acid testing (NAT), which is expensive and time-consuming. To build an efficient and valid alternative to NAT, this article investigates the feasibility of using computed tomography images of the lungs as the diagnostic signal. Unlike normal lungs, lungs infected with COVID-19 develop lesions, and ground-glass opacity and bronchiectasis become apparent. Using a public dataset, we propose an advanced residual learning diagnosis detection (RLDD) scheme for COVID-19, designed to distinguish positive COVID-19 cases from heterogeneous lung images. Besides its high diagnostic effectiveness, the designed residual-based COVID-19 detection network can efficiently extract lung features from small numbers of COVID-19 samples, which removes the requirement for pretraining on other medical datasets. On the test set, we achieve an accuracy of 91.33%, a precision of 91.30%, and a recall of 90%. For a batch of 150 samples, the assessment time is only 4.7 s. RLDD can therefore be integrated into an application programming interface and embedded into medical instruments to improve the detection efficiency of COVID-19.
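
The residual-learning backbone referred to above can be sketched as follows; the block below is a generic identity-shortcut residual block in PyTorch with assumed channel counts and a two-class head, not the published RLDD architecture.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, x):
        # identity shortcut: the block learns only the residual F(x)
        return torch.relu(x + self.body(x))

# tiny CT classifier head (COVID-19 positive vs. other lung images), sizes assumed
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    ResidualBlock(32), ResidualBlock(32),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))
logits = model(torch.randn(4, 1, 224, 224))  # -> (4, 2)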

6.
Sensors (Basel) ; 18(7), 2018 Jul 23.
Article in English | MEDLINE | ID: mdl-30041441

ABSTRACT

In this paper, a novel imperceptible, fragile, and blind watermarking scheme is proposed for speech tampering detection and self-recovery. The embedded watermark data used for content recovery are calculated from the original discrete cosine transform (DCT) coefficients of the host speech, and the watermark information is shared across a group of frames instead of being stored in a single frame. The scheme trades off the data-waste problem against the tampering-coincidence problem. When part of a watermarked speech signal is tampered with, the tampered area can be localized accurately, and the watermark data in the unmodified area can still be extracted. A compressive sensing technique is then employed to retrieve the coefficients by exploiting sparseness in the DCT domain; the smaller the tampered area, the better the quality of the recovered signal. Experimental results show that the watermarked signal is imperceptible and that the recovered signal is intelligible for high tampering rates of up to 47.6%. A deep-learning-based enhancement method is also proposed and implemented to increase the SNR of the recovered speech signal.
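
To make the frame-group sharing concrete: each frame's recovery data (a few low-frequency DCT coefficients) is assigned to the other frames of its group, so tampering with one frame does not destroy the data needed to recover it. The Python sketch below only illustrates this bookkeeping; the frame length, group size, number of kept coefficients, and the payload layout are assumptions, and the actual bit embedding and compressive-sensing recovery are not reproduced.

import numpy as np
from scipy.fft import dct

FRAME, GROUP, KEEP = 320, 4, 16  # samples per frame, frames per group, low-frequency DCT coefficients kept

def recovery_payload(speech):
    frames = speech[:len(speech) // FRAME * FRAME].reshape(-1, FRAME)
    coeffs = dct(frames, norm='ortho')[:, :KEEP]   # compressed description of each frame
    payload = {}
    for g in range(0, len(frames) - GROUP + 1, GROUP):
        for i in range(GROUP):
            # frame g+i's recovery data is carried by the *other* frames of its group
            hosts = [g + j for j in range(GROUP) if j != i]
            payload[g + i] = dict(data=coeffs[g + i], embedded_in=hosts)
    return payload

payload = recovery_payload(np.random.default_rng(0).normal(size=16000))  # stand-in speech signal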
