Results 1 - 10 of 10
1.
Article in English | MEDLINE | ID: mdl-38471111

ABSTRACT

RATIONALE: The incidence of clinically undiagnosed obstructive sleep apnea (OSA) is high in the general population due to limited access to polysomnography. Computed tomography (CT) of craniofacial regions obtained for other purposes can be beneficial for predicting OSA and its severity. OBJECTIVES: To predict OSA and its severity based on paranasal CT using a 3-dimensional deep learning algorithm. METHODS: One internal dataset (n=798) and two external datasets (n=135 and n=85) were used in this study. In the internal dataset, 92 normal, 159 mild, 201 moderate, and 346 severe OSA participants were enrolled to derive the deep learning model. A multimodal deep learning model was built by connecting a 3-dimensional convolutional neural network (CNN)-based part handling unstructured data (CT images) with a multilayer perceptron (MLP)-based part handling structured data (age, sex, and body mass index) to predict OSA and its severity. MEASUREMENTS AND MAIN RESULTS: In the four-class classification for predicting the severity of OSA, the AirwayNet-MM-H model (a multimodal model with an airway-highlighting preprocessing algorithm) showed an average accuracy of 87.6% (95% confidence interval [CI] 86.8-88.6) in the internal dataset and 84.0% (95% CI 83.0-85.1) and 86.3% (95% CI 85.3-87.3) in the two external datasets, respectively. In the two-class classification for predicting significant OSA (moderate to severe OSA), the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and F1 score in the internal dataset were 0.910 (95% CI 0.899-0.922), 91.0% (95% CI 90.1-91.9), 89.9% (95% CI 88.8-90.9), 93.5% (95% CI 92.7-94.3), and 93.2% (95% CI 92.5-93.9), respectively. Furthermore, the AirwayNet-MM-H model outperformed the other six state-of-the-art deep learning models in accuracy for both the four- and two-class classifications and in AUROC for the two-class classification (p<0.001).
CONCLUSIONS: A novel deep learning framework, combining a multimodal model with an airway-highlighting preprocessing algorithm, can provide an accurate diagnosis of OSA and its severity from CT images obtained for other purposes.
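
The multimodal design described above, a 3D CNN branch for CT volumes fused with an MLP branch for age, sex, and body mass index, can be sketched in miniature. This is an illustrative sketch only: the branch functions below are crude stand-ins (global average pooling in place of a real 3D CNN, hand-normalized inputs in place of a trained MLP), and all names, weights, and values are invented, not from the paper.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def image_branch(volume):
    """Stand-in for the 3D-CNN branch: global average pooling of a CT
    volume (a nested list of voxel intensities) down to one feature."""
    voxels = [v for plane in volume for row in plane for v in row]
    return [sum(voxels) / len(voxels)]

def clinical_branch(age, sex, bmi):
    """Stand-in for the MLP branch: crude normalization of structured inputs."""
    return [age / 100.0, float(sex), bmi / 50.0]

def fused_predict(volume, age, sex, bmi, weights):
    """Concatenate both branches and apply one linear layer + softmax to
    produce probabilities for the four OSA severity classes."""
    features = image_branch(volume) + clinical_branch(age, sex, bmi)
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return softmax(scores)

# Toy 2x2x2 "CT volume" and hand-picked weights (4 classes x 4 features).
volume = [[[0.1, 0.2], [0.3, 0.4]], [[0.5, 0.6], [0.7, 0.8]]]
weights = [[1.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0],
           [0.0, 0.0, 1.0, 0.0],
           [0.0, 0.0, 0.0, 1.0]]
probs = fused_predict(volume, age=55, sex=1, bmi=31.0, weights=weights)
```

The key design point survives even at this scale: the image and clinical branches stay separate until a late-fusion layer maps their concatenated features onto the severity classes.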

2.
Article in English | MEDLINE | ID: mdl-38190679

ABSTRACT

Accurate and continuous bladder volume monitoring is crucial for managing urinary dysfunction. Wearable ultrasound devices offer a solution by enabling non-invasive, real-time monitoring. Previous approaches have been limited either in power consumption and computation cost or in quantitative volume estimation capability. To address this, we present a novel pipeline that integrates conventional feature extraction with deep learning to achieve efficient, continuous, and quantitative bladder volume monitoring. In the proposed pipeline, the bladder shape is coarsely estimated by a simple bladder wall detection algorithm on the wearable device, and the bladder wall coordinates are wirelessly transferred to an external server. There, the roughly estimated bladder shape is refined with a diffusion-based model. With this approach, power consumption and computation costs on the wearable device remain low, while the potential of deep learning is fully harnessed for accurate shape estimation. To evaluate the proposed pipeline, we collected a dataset of bladder ultrasound images and RF signals from 250 patients. By simulating data acquisition from wearable devices using this dataset, we replicated real-world scenarios and validated the proposed method within them. Experimental results show clear improvements, including a +9.32% gain in IoU for 2D segmentation and a 22.06 reduction in RMSE for bladder volume regression compared with state-of-the-art alternatives, emphasizing the potential of this approach for continuous bladder volume monitoring in clinical settings. This study thus bridges the gap between accurate bladder volume estimation and the practical deployment of wearable ultrasound devices, promising improved patient care and quality of life.
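
The on-device step is described only as a simple bladder wall detection. A minimal sketch of what such a step could look like, assuming threshold crossing on an envelope-detected A-line and the common prolate-ellipsoid clinical approximation for volume; neither is necessarily the exact method used, and the sample spacing is invented:

```python
def detect_walls(envelope, threshold):
    """Return indices of the first and last samples whose amplitude exceeds
    the threshold -- a crude stand-in for anterior/posterior wall detection
    on one envelope-detected A-line."""
    above = [i for i, v in enumerate(envelope) if v > threshold]
    if not above:
        return None
    return above[0], above[-1]

def ellipsoid_volume_ml(d1_cm, d2_cm, d3_cm):
    """Common clinical approximation: V ~ 0.52 * d1 * d2 * d3 (cm^3 ~ mL)."""
    return 0.52 * d1_cm * d2_cm * d3_cm

# Simulated A-line: the echoes at the two bladder walls stand out.
aline = [0.05, 0.1, 0.9, 0.2, 0.1, 0.1, 0.15, 0.85, 0.1]
walls = detect_walls(aline, threshold=0.5)   # -> (2, 7)
depth_cm = (walls[1] - walls[0]) * 0.05      # hypothetical 0.5 mm/sample
```

Only the handful of wall coordinates (not the image) would be transmitted, which is what keeps the on-device computation and radio payload small.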

3.
IEEE J Biomed Health Inform ; 27(1): 176-187, 2023 01.
Article in English | MEDLINE | ID: mdl-35877797

ABSTRACT

Fluorescence imaging-based diagnostic systems have been widely used to diagnose skin diseases because they provide detailed information about the molecular composition of the skin compared with conventional RGB imaging. In addition, recent advances in smartphones have made them suitable for biomedical imaging, and various smartphone-based optical imaging systems have therefore been developed for mobile healthcare. However, advanced analysis algorithms are required to improve the diagnosis of skin diseases. Various deep learning-based algorithms have recently been developed for this purpose, but those using only white-light reflectance RGB images have exhibited limited diagnostic performance. In this study, we developed an auxiliary deep learning network, the fluorescence-aided amplifying network (FAA-Net), to diagnose skin diseases using a multi-modal smartphone imaging system we developed that offers both RGB and fluorescence images. FAA-Net is equipped with a meta-learning-based algorithm to mitigate problems that may arise from the limited number of images acquired by the system. In addition, we devised a new attention-based module that learns the locations of skin diseases by itself and emphasizes potential disease regions, and incorporated it into FAA-Net. We conducted a clinical trial in a hospital to evaluate the performance of FAA-Net and to compare evaluation metrics of our model against other state-of-the-art models for the diagnosis of skin diseases using our multi-modal system. Experimental results demonstrated that our model achieved an 8.61% and 9.83% improvement in mean accuracy and area under the curve, respectively, in classifying skin diseases compared with other advanced models.
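
A spatial attention module of the general kind described, one that emphasizes high-activation regions of a feature map, can be illustrated as follows. This toy softmax re-weighting is not FAA-Net's actual module; it only shows the mechanism of learning-free spatial emphasis on a small invented feature map.

```python
import math

def spatial_attention(feature_map):
    """Toy spatial attention: a softmax over all pixel activations yields a
    weight map that emphasizes high-activation (potentially diseased)
    regions; the feature map is then re-weighted element-wise."""
    flat = [v for row in feature_map for v in row]
    m = max(flat)
    exps = [math.exp(v - m) for v in flat]
    total = sum(exps)
    n_cols = len(feature_map[0])
    weights = [e / total for e in exps]
    return [[feature_map[r][c] * weights[r * n_cols + c]
             for c in range(n_cols)] for r in range(len(feature_map))]

fmap = [[0.1, 0.2], [0.3, 2.0]]   # one strong activation at (1, 1)
attended = spatial_attention(fmap)
```

Because the weights grow with activation strength, the strongest region dominates the attended map, which is the qualitative effect an attention module contributes before classification.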


Subjects
Deep Learning , Skin Diseases , Humans , Algorithms , Diagnostic Imaging , Neural Networks, Computer
4.
Article in English | MEDLINE | ID: mdl-35877808

ABSTRACT

The performance of computer-aided diagnosis (CAD) systems based on ultrasound imaging has been enhanced by advances in deep learning. However, because of the speckle noise inherent in ultrasound images, lesion boundaries become ambiguous and difficult to distinguish, degrading CAD performance. Although many despeckling methods have been proposed over the decades, the task remains challenging and must be improved to enhance CAD performance. In this article, we propose a deep content-aware image prior (DCAIP) with a content-aware attention module (CAAM) for superior despeckling of ultrasound images without clean reference images. For the image prior, we developed the CAAM to handle the content information in an input image. In this module, super-pixel pooling (SPP) is used to direct attention to salient regions of an ultrasound image, so it can provide more content information about the input image than other attention modules. The DCAIP consists of deep learning networks built on this attention module. We validate the DCAIP by applying it as a preprocessing step for breast tumor segmentation in ultrasound images, one of the tasks in CAD. Our method improved segmentation performance by 15.89% in terms of the area under the precision-recall curve (AUPRC). The results demonstrate that our method enhances ultrasound image quality by effectively reducing speckle noise while preserving important image information, which is promising for the design of superior CAD systems.
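
For context, a classical despeckling baseline that deep methods like the one above are measured against is the Lee filter, which models speckle as multiplicative noise and adaptively shrinks toward the local mean. A minimal 1D sketch (not the DCAIP method itself; window size and noise variance are invented):

```python
def lee_filter_1d(signal, window, noise_var):
    """Classical Lee despeckling on a 1D signal: within each window the
    output is mean + k * (center - mean), where k shrinks toward 0 in
    homogeneous (noise-dominated) regions and toward 1 near edges."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        patch = signal[lo:hi]
        mean = sum(patch) / len(patch)
        var = sum((v - mean) ** 2 for v in patch) / len(patch)
        k = max(var - noise_var, 0.0) / var if var > 0 else 0.0
        out.append(mean + k * (signal[i] - mean))
    return out

noisy = [1.0, 1.2, 0.8, 1.1, 5.0, 1.0, 0.9, 1.1]  # one strong edge/echo
clean = lee_filter_1d(noisy, window=3, noise_var=0.02)
```

The edge-preservation trade-off visible here (flat regions smoothed, the strong echo largely kept) is exactly what learned priors aim to improve on without a clean training target.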


Subjects
Algorithms , Breast Neoplasms , Female , Humans , Image Processing, Computer-Assisted , Ultrasonography
5.
JMIR Med Inform ; 9(5): e25869, 2021 May 18.
Article in English | MEDLINE | ID: mdl-33858817

ABSTRACT

BACKGROUND: Federated learning is a decentralized approach to machine learning: a training strategy that complies with medical data privacy regulations while allowing deep learning algorithms to generalize. Federated learning mitigates many systemic privacy risks by sharing only the model and its parameters for training, without the need to export existing medical datasets. In this study, we performed ultrasound image analysis using federated learning to predict whether thyroid nodules were benign or malignant. OBJECTIVE: The goal of this study was to evaluate whether the performance of federated learning was comparable with that of conventional deep learning. METHODS: A total of 8457 ultrasound images (5375 malignant, 3082 benign) were collected from 6 institutions and used for federated learning and conventional deep learning. Five deep learning networks (VGG19, ResNet50, ResNext50, SE-ResNet50, and SE-ResNext50) were used. Using stratified random sampling, we selected 20% of the images (1075 malignant, 616 benign) for internal validation. For external validation, we used 100 ultrasound images (50 malignant, 50 benign) from another institution. RESULTS: For internal validation, the area under the receiver operating characteristic curve (AUROC) for federated learning was between 78.88% and 87.56%, and the AUROC for conventional deep learning was between 82.61% and 91.57%. For external validation, the AUROC for federated learning was between 75.20% and 86.72%, and the AUROC for conventional deep learning was between 73.04% and 91.04%. CONCLUSIONS: We demonstrated that the performance of federated learning using decentralized data was comparable to that of conventional deep learning using pooled data. Federated learning may be useful for analyzing medical images while protecting patients' personal information.
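
The parameter-sharing aggregation described here is typically federated averaging (FedAvg): each institution trains locally, and a server averages the resulting parameters weighted by local dataset size. A minimal sketch with flat parameter lists, which may differ from the study's exact aggregation scheme:

```python
def fedavg(client_params, client_sizes):
    """FedAvg: weighted average of client parameter vectors, with weights
    proportional to each client's local dataset size. Raw images never
    leave a client; only parameters are aggregated."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [sum(p[j] * s for p, s in zip(client_params, client_sizes)) / total
            for j in range(n_params)]

# Three hypothetical institutions with different local dataset sizes.
params = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
global_params = fedavg(params, sizes)   # -> [3.5, 4.5]
```

In a full round, the averaged `global_params` would be broadcast back to the institutions for the next round of local training.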

6.
Sensors (Basel) ; 21(6)2021 Mar 22.
Article in English | MEDLINE | ID: mdl-33809972

ABSTRACT

A rotator cuff tear (RCT) is an injury in adults that causes pain, weakness, and difficulty in moving the shoulder. Only a limited set of diagnostic tools, such as magnetic resonance imaging (MRI) and ultrasound imaging (UI), can be used for RCT diagnosis. Although UI offers performance comparable to other diagnostic instruments such as MRI at a lower cost, speckle noise can degrade image resolution. Conventional vision-based algorithms exhibit inferior performance in segmenting diseased regions in UI. To achieve better segmentation of diseased regions in UI, deep-learning-based diagnostic algorithms have been developed, but they have not yet reached a level of performance acceptable for application in orthopedic surgery. In this study, we developed a novel end-to-end fully convolutional neural network, denoted Segmentation Model Adopting a pRe-trained Classification Architecture (SMART-CA), with a novel integrated positive loss function (IPLF), to accurately locate RCTs during orthopedic examination using UI. Using the pre-trained network, SMART-CA can extract distinct features that a normal encoder cannot, improving segmentation accuracy. In addition, unlike conventional loss functions, which are ill-suited to optimizing deep learning models on imbalanced datasets such as the RCT dataset, IPLF can efficiently optimize SMART-CA. Experimental results showed that, for RCT segmentation from normal ultrasound images, SMART-CA offered improved precision, recall, and Dice coefficient of 0.604 (+38.4%), 0.942 (+14.0%), and 0.736 (+38.6%), respectively; for RCT segmentation from ultrasound images with severe speckle noise, the corresponding values were 0.337 (+22.5%), 0.860 (+15.8%), and 0.484 (+28.5%).
The experimental results demonstrated that IPLF outperforms other conventional loss functions, and that SMART-CA optimized with IPLF performs better than other state-of-the-art networks for RCT segmentation, with high robustness to speckle noise.
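
The precision, recall, and Dice values reported above are standard overlap metrics computed from binary masks. A minimal sketch, with masks represented as flat 0/1 lists:

```python
def seg_metrics(pred, truth):
    """Precision, recall, and Dice coefficient for two binary masks
    (flat lists of 0/1). Dice = 2*TP / (2*TP + FP + FN)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    return precision, recall, dice

# Toy 4-pixel masks: one true positive, one false positive, one false negative.
p, r, d = seg_metrics([1, 1, 0, 0], [1, 0, 1, 0])
```

Because the foreground (tear) pixels are rare relative to background, these foreground-focused metrics are the relevant ones for an imbalanced dataset like the RCT dataset, which is the imbalance the IPLF is designed to handle.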


Subjects
Deep Learning , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neural Networks, Computer , Rotator Cuff/diagnostic imaging , Ultrasonography
7.
Biomed Opt Express ; 12(12): 7765-7779, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-35003865

ABSTRACT

Otitis media (OM) is one of the most common ear diseases in children and a common reason for outpatient visits in primary care practices. Adhesive OM (AdOM) is recognized as a sequela of OM with effusion (OME) and often requires surgical intervention. OME and AdOM exhibit similar symptoms, and it is difficult to distinguish between them using a conventional otoscope in a primary care unit; the accuracy of the diagnosis depends heavily on the experience of the examiner. Developing an advanced otoscope whose diagnostic accuracy varies less with the examiner is therefore crucial for more accurate diagnosis. Thus, we developed an intelligent smartphone-based multimode imaging otoscope for better diagnosis of OM, even in mobile environments. The system offers spectral and autofluorescence imaging of the tympanic membrane using a smartphone attached to the developed multimode imaging module. Moreover, it performs intelligent analysis, using a machine learning algorithm, to distinguish between normal, OME, and AdOM ears. Using the developed system, we examined the ears of 69 patients to assess its performance in distinguishing between normal, OME, and AdOM ears. In classifying ear diseases, the multimode system based on machine learning analysis achieved better accuracy and F1 scores than single RGB image analysis, RGB/fluorescence image analysis, and analysis of spectral image cubes alone. These results demonstrate that the intelligent multimode diagnostic capability of an otoscope would be beneficial for better diagnosis and management of OM.
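
The machine-learning classification step can be illustrated with a nearest-centroid classifier over spectral feature vectors. This is purely illustrative, not the system's actual classifier, and the centroid spectra below are invented:

```python
def nearest_centroid(spectrum, centroids):
    """Assign a spectrum (list of band intensities) to the class whose
    centroid spectrum is closest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(spectrum, centroids[label]))

# Hypothetical mean spectral features for the three diagnostic classes.
centroids = {
    "normal": [0.8, 0.6, 0.2],
    "OME":    [0.5, 0.5, 0.5],
    "AdOM":   [0.2, 0.3, 0.8],
}
label = nearest_centroid([0.75, 0.55, 0.25], centroids)   # -> "normal"
```

The advantage of the multimode system is visible even in this toy form: spectral and fluorescence channels give the classifier more separable features than an RGB triplet alone.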

8.
Biomed Opt Express ; 11(6): 2976-2995, 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32637236

ABSTRACT

A single-beam acoustic trapping technique has been shown to be very useful for determining the invasiveness of suspended breast cancer cells in an acoustic trap using a manual calcium analysis method. However, for the rapid translation of the technology into the clinic, an efficient and accurate analytical method is needed. We therefore developed a fully automatic deep learning-based calcium image analysis algorithm for determining the invasiveness of suspended breast cancer cells using a single-beam acoustic trapping system. The algorithm segments cells, finds trapped cells, and quantifies their calcium changes over time. For better segmentation of calcium fluorescent cells, even those with vague boundaries, a novel deep learning architecture with multi-scale/multi-channel convolution operations (MM-Net) was devised and trained with a target inversion training method. The MM-Net outperforms other deep learning models in cell segmentation. In addition, a detection/quantification algorithm was developed and implemented to automatically determine the invasiveness of a trapped cell. To evaluate the algorithm, we applied it to quantify the invasiveness of breast cancer cells. The results show that the algorithm offers performance similar to the manual calcium analysis method for determining the invasiveness of cancer cells, suggesting that it may serve as a novel tool to automatically determine the invasiveness of cancer cells with high efficiency.
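
Quantifying calcium changes over time is conventionally reported as a normalized response, ΔF/F0, relative to a pre-stimulus baseline. A minimal sketch of that convention (the paper's exact quantification may differ, and the trace values are invented):

```python
def delta_f_over_f0(trace, n_baseline):
    """Normalized calcium response (F - F0) / F0 per frame, with F0 taken
    as the mean fluorescence of the first n_baseline frames."""
    f0 = sum(trace[:n_baseline]) / n_baseline
    return [(f - f0) / f0 for f in trace]

trace = [10.0, 10.0, 10.0, 15.0, 20.0, 12.0]   # fluorescence per frame
dff = delta_f_over_f0(trace, n_baseline=3)
peak = max(dff)                                 # -> 1.0, i.e. a +100% response
```

A per-cell statistic such as this peak is the kind of quantity an automatic pipeline can then threshold or compare across cells to score invasiveness.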

9.
J Biophotonics ; 13(6): e2452, 2020 06.
Article in English | MEDLINE | ID: mdl-32141237

ABSTRACT

We developed a novel smartphone-based spectral imaging otoscope for telemedicine and examined its capability for mobile diagnosis of middle ear diseases. The device was used to perform spectral imaging and analysis of an ear-mimicking phantom and of normal and abnormal tympanic membranes to evaluate its potential for mobile diagnosis. Spectrally classified images were obtained via online spectral analysis on a remote server. The phantom experiments showed that the device could distinguish four different fluids located behind a semitransparent membrane. Also, in the spectrally classified images of normal ears (n = 3) and an ear with chronic otitis media (n = 1), the normal and abnormal regions in each ear could be quantitatively distinguished with high contrast. These preliminary results suggest that the device may be able to provide quantitative information for mobile diagnosis of various middle ear diseases.
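
Per-pixel spectral classification of this kind is often performed with the spectral angle mapper (SAM), which compares spectral shape and is insensitive to overall intensity scaling. An illustrative sketch, not necessarily the remote server's algorithm, with invented reference spectra for two fluid types:

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two spectra; invariant to overall intensity
    scaling, which suits reflectance imaging under varying illumination."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def classify_sam(pixel, references):
    """Assign a pixel spectrum to the reference with the smallest angle."""
    return min(references, key=lambda k: spectral_angle(pixel, references[k]))

refs = {"serous": [0.9, 0.5, 0.1], "mucoid": [0.4, 0.6, 0.7]}
label = classify_sam([1.8, 1.0, 0.2], refs)   # a scaled "serous" spectrum
```

Illumination invariance is the relevant property here: a handheld smartphone otoscope cannot guarantee constant light levels, so classifying by spectral shape rather than absolute intensity is a natural choice.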


Subjects
Otitis Media , Telemedicine , Diagnostic Imaging , Humans , Otitis Media/diagnostic imaging , Otoscopes , Smartphone
10.
J Nanosci Nanotechnol ; 11(1): 314-7, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21446446

ABSTRACT

At present, the nano floating gate memory (NFGM) device has shown great promise as an ultra-dense, high-endurance memory device for low-power applications. As the size of the NFGM is reduced, the short-channel effect becomes one of the critical issues in the underlying field-effect transistor (FET). A Schottky barrier tunneling transistor (SBTT) can improve control of the short-channel effect. In this work, we studied a nano floating gate memory based on the SBTT. Erbium silicide was employed instead of the conventional heavily doped source/drain regions. The SBTT-based NFGM device used Si nanocrystals as charge storage nodes. The subthreshold slope and threshold voltage of the SBTT-NFGM were 90 mV/dec and 0.2 V, respectively. A memory window of about 4 V appeared after applying a write/erase bias of +/- 11 V for 500 ms. The write and erase speeds of the memory device were 50 ms and 200 ms, respectively, at +/- 13 V. We also analyzed the retention characteristics of the SBTT-based nonvolatile floating gate memory for various sidewall structures.
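
The subthreshold slope quoted above (90 mV/dec) is the gate-voltage increase needed for a tenfold increase in drain current; it can be extracted from two points on the subthreshold I-V curve as follows. The voltage and current values here are illustrative, not the paper's measured data:

```python
import math

def subthreshold_slope_mv_per_dec(v1, i1, v2, i2):
    """S = (V2 - V1) / (log10(I2) - log10(I1)), in mV per decade of drain
    current, from two points on the subthreshold region of the I-V curve."""
    return (v2 - v1) * 1000.0 / (math.log10(i2) - math.log10(i1))

# Hypothetical points: drain current rises 10x over a 90 mV gate sweep.
s = subthreshold_slope_mv_per_dec(0.10, 1e-9, 0.19, 1e-8)
```

A smaller S means a sharper on/off transition; the quoted 90 mV/dec sits above the ~60 mV/dec room-temperature thermionic limit of a conventional MOSFET, as expected for a real device.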
