Results 1 - 11 of 11
1.
Telemed J E Health ; 30(9): 2477-2482, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38934135

ABSTRACT

Background: Blurry images in teledermatology and consultation increase the diagnostic difficulty for both deep learning models and physicians. We aimed to determine the extent to which diagnostic accuracy is restored after blurry images are deblurred by deep learning models. Methods: We used 19,191 skin images from a public skin image dataset covering 23 skin disease categories, 54 skin images from a public dataset of blurry skin images, and 53 blurry dermatology consultation photos from a medical center to compare the diagnostic accuracy of trained diagnostic deep learning models and subjective sharpness between blurry and deblurred images. We evaluated five different deblurring models, targeting motion blur, Gaussian blur, Bokeh blur, mixed slight blur, and mixed strong blur. Main Outcomes and Measures: Diagnostic accuracy was measured as the sensitivity and precision of correct model prediction of the skin disease category. Sharpness was rated by board-certified dermatologists on a 4-point scale, with 4 being the highest image clarity. Results: The sensitivity of the diagnostic models dropped by 0.15 and 0.22 on slightly and strongly blurred images, respectively, and deblurring models restored 0.14 and 0.17 for each group. The sharpness ratings perceived by dermatologists improved from 1.87 to 2.51 after deblurring. Activation maps showed that the focus of the diagnostic models was compromised by blurriness but was restored after deblurring. Conclusions: Deblurring deep learning models can restore the diagnostic accuracy of diagnostic models on blurry images and increase the image sharpness perceived by dermatologists. Such models can be incorporated into teledermatology to aid the diagnosis of blurry images.
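
A minimal PyTorch sketch of the deblur-then-diagnose pipeline described above follows; DeblurNet and DiagnosisNet are hypothetical placeholders (not the study's trained models), and the input is a random tensor standing in for a blurry consultation photo.

```python
import torch
import torch.nn as nn

class DeblurNet(nn.Module):
    """Placeholder image-to-image network standing in for a trained deblurring model."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )
    def forward(self, x):
        return torch.clamp(x + self.body(x), 0.0, 1.0)  # residual restoration

class DiagnosisNet(nn.Module):
    """Placeholder classifier over the 23 skin-disease categories."""
    def __init__(self, n_classes=23):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, n_classes)
    def forward(self, x):
        return self.head(self.features(x))

deblur, diagnose = DeblurNet().eval(), DiagnosisNet().eval()
blurry = torch.rand(1, 3, 224, 224)  # stand-in for a blurry consultation photo
with torch.no_grad():
    pred_blurry = diagnose(blurry).argmax(1)            # diagnosis on the blurry image
    pred_deblurred = diagnose(deblur(blurry)).argmax(1)  # diagnosis after deblurring
print(pred_blurry.item(), pred_deblurred.item())
```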


Subjects
Deep Learning, Dermatology, Skin Diseases, Telemedicine, Humans, Skin Diseases/diagnosis, Skin Diseases/diagnostic imaging, Dermatology/methods, Photography
2.
J Healthc Inform Res ; 8(4): 619-639, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39463858

ABSTRACT

Effective skin cancer detection is crucial for early intervention and improved treatment outcomes. Previous studies have primarily focused on enhancing the performance of skin lesion classification models. However, there is a growing need to consider the practical requirements of real-world scenarios, such as portable applications that require lightweight models embedded in devices. Therefore, this study proposes a novel method that addresses the major-type misclassification problem with a lightweight model: Lightweight Dual Projection-Head Hierarchical contrastive learning (LightDPH). We introduce a dual projection-head mechanism into a contrastive learning framework. This mechanism is used to train a model with our proposed multi-level contrastive loss (MultiCon Loss), which can effectively learn hierarchical information from samples. In addition, we present a distance-based weight (DBW) function to adjust losses based on hierarchical levels. This combination of MultiCon Loss and the DBW function in LightDPH tackles the major-type misclassification problem with lightweight models and enhances the model's sensitivity in skin lesion classification. The experimental results demonstrate that LightDPH reduces the number of parameters by 52.6% and the computational complexity (GFLOPs) by 29.9% while maintaining classification performance comparable to state-of-the-art methods. This study also presents a novel evaluation metric, the model efficiency score (MES), to evaluate the cost-effectiveness of models in terms of scale and classification performance. The proposed LightDPH effectively mitigates major-type misclassification and works in a resource-efficient manner, making it highly suitable for clinical applications in resource-constrained environments. To the best of our knowledge, this is the first work to develop an effective lightweight hierarchical classification model for skin lesion detection.
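
The sketch below illustrates the dual projection-head idea under stated assumptions: a shared backbone feeds one projection head per hierarchy level, and a standard supervised contrastive loss with a per-level weight stands in for the paper's MultiCon Loss and DBW function, which are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHeadEncoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head_major = nn.Linear(32, feat_dim)  # projection for the major-type level
        self.head_fine = nn.Linear(32, feat_dim)   # projection for the fine-grained level
    def forward(self, x):
        h = self.backbone(x)
        return F.normalize(self.head_major(h), dim=1), F.normalize(self.head_fine(h), dim=1)

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss over one projection head (one hierarchy level)."""
    sim = z @ z.t() / temperature
    mask = (labels[:, None] == labels[None, :]).float()
    mask.fill_diagonal_(0)                                   # positives, excluding self
    logits = sim - torch.eye(len(z), device=z.device) * 1e9  # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = mask.sum(1).clamp(min=1)
    return -(mask * log_prob).sum(1).div(pos_count).mean()

model = DualHeadEncoder()
x = torch.rand(8, 3, 64, 64)
major_y = torch.randint(0, 2, (8,))  # e.g. benign vs malignant (major type)
fine_y = torch.randint(0, 8, (8,))   # e.g. eight lesion classes (fine-grained)
z_major, z_fine = model(x)
w_major, w_fine = 0.5, 1.0           # stand-in for the distance-based weights
loss = w_major * supcon_loss(z_major, major_y) + w_fine * supcon_loss(z_fine, fine_y)
loss.backward()
```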

3.
J Hazard Mater ; 480: 136003, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39378597

ABSTRACT

Chronic exposure to arsenic is linked to the development of cancers of the skin, lungs, and bladder. Arsenic exposure manifests as variegated pigmentation and characteristic pitted keratosis on the hands and feet, which often precede the onset of internal cancers. Traditionally, human arsenic exposure is estimated from arsenic levels in biological tissues; however, these methods are invasive and time-consuming. This study aimed to develop a noninvasive approach to predict arsenic exposure using artificial intelligence (AI) to analyze photographs of hands and feet. Incorporating well water consumption data and arsenic concentration levels, we developed an AI algorithm trained on 9988 hand and foot photographs from 2497 subjects. This algorithm correlates visual features of palmoplantar hyperkeratosis with arsenic exposure levels. Four pictures per patient, capturing both ventral and dorsal aspects of the hands and feet, were analyzed. The AI model used existing arsenic exposure data, including arsenic concentration (AC) and cumulative arsenic exposure (CAE), to make binary predictions of high and low arsenic exposure. The model achieved optimal area under the curve (AUC) values of 0.813 for AC and 0.779 for CAE. Recall and precision were 0.729 and 0.705 for CAE, and 0.750 and 0.763 for AC, respectively. While biomarkers have traditionally been used to assess arsenic exposure, efficient noninvasive methods are lacking. To our knowledge, this is the first study to leverage deep learning for noninvasive arsenic exposure assessment. Despite the challenges of binary classification on imbalanced and sparse data, this approach demonstrates the potential for noninvasive estimation of arsenic exposure. Future studies should focus on increasing data volume and categorizing arsenic concentration statistics to enhance model accuracy. This rapid estimation method could contribute significantly to epidemiological studies and aid physicians in diagnosis.
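
A small scikit-learn sketch of the reported evaluation protocol (binary high/low exposure scored with AUC, recall, and precision) is shown below; the labels and scores are synthetic placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                          # high (1) vs low (0) exposure
y_prob = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0, 1)   # synthetic model scores
y_pred = (y_prob >= 0.5).astype(int)                           # binary decision at 0.5

print("AUC:      ", roc_auc_score(y_true, y_prob))
print("Recall:   ", recall_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
```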

4.
J Chin Med Assoc ; 87(4): 369-376, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38334988

ABSTRACT

BACKGROUND: Intensive care unit (ICU) mortality prediction helps guide therapeutic decision making for critically ill patients. Several scoring systems based on statistical techniques have been developed for this purpose. In this study, we developed a machine-learning model to predict patient mortality at the very early stage of ICU admission. METHODS: This study used data from all patients admitted to the intensive care units of a tertiary medical center in Taiwan from 2009 to 2018. The patients' comorbidities, co-medications, vital signs, and laboratory data on the day of ICU admission were obtained from electronic medical records. We constructed random forest and extreme gradient boosting (XGBoost) models to predict ICU mortality and compared their performance with that of traditional scoring systems. RESULTS: Data from 12,377 patients were allocated to training (n = 9901) and testing (n = 2476) datasets. The median patient age was 70.0 years; 9210 (74.41%) patients were under mechanical ventilation in the ICU. The areas under the receiver operating characteristic curves for the random forest and XGBoost models (0.876 and 0.880, respectively) were larger than those for the Acute Physiology and Chronic Health Evaluation II score (0.738), the Sequential Organ Failure Assessment score (0.747), and the Simplified Acute Physiology Score II (0.743). The fraction of inspired oxygen on ICU admission was the most important predictive feature across all models. CONCLUSION: The XGBoost model predicted ICU mortality most accurately and was superior to traditional scoring systems. Our results highlight the utility of machine learning for ICU mortality prediction in the Asian population.
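
The comparison described above can be sketched as follows with scikit-learn and the xgboost package; the tabular features and mortality labels are synthetic stand-ins for the admission-day variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 30))  # stand-in for vital signs, labs, comorbidities, co-medications
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1.0).astype(int)  # synthetic mortality label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
xgb = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss").fit(X_tr, y_tr)

for name, model in [("Random forest", rf), ("XGBoost", xgb)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```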


Subjects
Critical Illness, Intensive Care Units, Humans, Aged, Hospitals, Hospitalization, Machine Learning
5.
Transl Vis Sci Technol ; 12(11): 1, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37910082

ABSTRACT

Purpose: We aimed to determine whether a convolutional neural network (CNN)-based method (based on a feature extractor and an identifier) can be applied to monitor the progression of keratitis during the management of suspected microbial keratitis (MK). Methods: This multicenter longitudinal cohort study included patients with suspected MK undergoing serial external eye photography at the five branches of Chang Gung Memorial Hospital from August 20, 2000, to August 19, 2020. Data were primarily analyzed from January 1 to March 25, 2022. The CNN-based model was evaluated via F1 score and accuracy. The area under the receiver operating characteristic curve (AUROC) was used to measure the precision-recall trade-off. Results: The model was trained using 1456 image pairs from 468 patients. Compared with models in which only the identifier was trained, models in which both the identifier and the feature extractor were trained (full training) achieved statistically significantly higher accuracy (P < 0.05), verified on 408 image pairs from 117 patients. The fully trained EfficientNet b3-based model showed F1 scores of 90.2% (getting better) and 82.1% (becoming worse), 87.3% accuracy, and 94.2% AUROC on 505 "getting better" and 272 "becoming worse" test image pairs from 452 patients. Conclusions: A CNN-based deep learning approach applied to suspected MK can monitor progression or regression during treatment by comparing external eye image pairs. Translational Relevance: This study bridges the gap between state-of-the-art CNN-based deep learning algorithms for ocular image analysis and the clinical care of patients with suspected MK.
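
A simplified PyTorch sketch of the feature-extractor-plus-identifier design follows: a shared CNN encodes two serial photos and an identifier head classifies the pair as getting better or becoming worse. The tiny backbone is a stand-in for the EfficientNet b3 used in the study.

```python
import torch
import torch.nn as nn

class PairProgressionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.identifier = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # better / worse
    def forward(self, img_t0, img_t1):
        f0 = self.feature_extractor(img_t0)   # earlier photo embedding
        f1 = self.feature_extractor(img_t1)   # later photo embedding
        return self.identifier(torch.cat([f0, f1], dim=1))

model = PairProgressionNet()
earlier, later = torch.rand(4, 3, 224, 224), torch.rand(4, 3, 224, 224)
logits = model(earlier, later)
print(logits.argmax(dim=1))  # 0 = getting better, 1 = becoming worse (arbitrary coding)
```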


Subjects
Keratitis, Humans, Longitudinal Studies, Keratitis/diagnosis, Eye, Neural Networks (Computer), Algorithms
6.
Comput Methods Programs Biomed ; 216: 106666, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35124480

ABSTRACT

BACKGROUND AND OBJECTIVE: The incidence of skin cancer is increasing worldwide. Using machine learning and deep learning for skin lesion classification is an essential research topic. In this study, we formulate a major-type misclassification problem that previous studies did not consider in multi-class skin lesion classification. Addressing the major-type misclassification problem is significant for real-world computer-aided diagnosis. METHODS: This study presents a novel method, Hierarchy-Aware Contrastive Learning with Late Fusion (HAC-LF), to improve the overall performance of multi-class skin lesion classification. In HAC-LF, we design a new loss function, the Hierarchy-Aware Contrastive Loss (HAC Loss), to reduce the impact of the major-type misclassification problem. A late fusion method is applied to balance major-type and multi-class classification performance. RESULTS: We conducted a series of experiments with the ISIC 2019 Challenge dataset, which consists of three skin lesion datasets, to verify the performance of our method. The results show that the proposed method surpasses representative deep learning methods for skin lesion classification in all evaluation metrics used in this study. HAC-LF achieves 0.871 accuracy, 0.842 sensitivity, and 0.889 specificity in major-type classification. Under the imbalanced class distribution, HAC-LF outperforms the baseline model in the sensitivity of minority classes. CONCLUSIONS: This research formulates a major-type misclassification problem and proposes HAC-LF to address it and boost multi-class skin lesion classification performance. The advantage of HAC-LF is that the proposed HAC Loss beneficially reduces the impact of major-type misclassification by decreasing the major-type error rate. Beyond the medical field, HAC-LF is also promising for other domains with hierarchically structured data.
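
One plausible instantiation of the late-fusion step, assuming a benign/malignant split of the fine-grained classes, is sketched below; the exact fusion rule of HAC-LF is not reproduced here.

```python
import torch

# Suppose 8 lesion classes, of which indices 0-3 are benign and 4-7 malignant (assumed split).
benign_idx, malignant_idx = [0, 1, 2, 3], [4, 5, 6, 7]

multi_prob = torch.softmax(torch.randn(2, 8), dim=1)  # multi-class head output (placeholder)
major_prob = torch.softmax(torch.randn(2, 2), dim=1)  # major-type head: [P(benign), P(malignant)]

fused = multi_prob.clone()
fused[:, benign_idx] *= major_prob[:, 0:1]     # re-weight benign classes by P(benign)
fused[:, malignant_idx] *= major_prob[:, 1:2]  # re-weight malignant classes by P(malignant)
fused = fused / fused.sum(dim=1, keepdim=True) # renormalize to a probability distribution

print(fused.argmax(dim=1))
```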


Subjects
Skin Diseases, Skin Neoplasms, Benchmarking, Computer-Assisted Diagnosis, Humans, Machine Learning, Skin Neoplasms/diagnosis
7.
Diagnostics (Basel) ; 12(12)2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36552954

ABSTRACT

This investigation aimed to explore the potential of deep learning (DL) models for diagnosing Pseudomonas keratitis from external eye images. In this retrospective study, images of bacterial keratitis (BK, n = 929) were collected and classified as Pseudomonas (n = 618) or non-Pseudomonas (n = 311) keratitis. Eight DL algorithms, including ResNet50, DenseNet121, ResNeXt50, SE-ResNet50, and EfficientNets B0 to B3, were adopted as backbone models to train and obtain the best ensemble 2-, 3-, 4-, and 5-DL models. Five-fold cross-validation was used to determine the ability of single and ensemble models to diagnose Pseudomonas keratitis. The EfficientNet B2 model had the highest accuracy (71.2%) of the eight single-DL models, while the best ensemble 4-DL model showed the highest accuracy (72.1%) among the ensemble models. However, no statistical difference was found in the area under the receiver operating characteristic curve or diagnostic accuracy among these single-DL models or among the four best ensemble models. As a proof of concept, the DL approach, via external eye photos, could assist in identifying Pseudomonas keratitis among BK patients. All the best ensemble models can enhance the performance of their constituent DL models in diagnosing Pseudomonas keratitis, but the enhancement appears to be limited.
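
A minimal sketch of a soft-voting ensemble over several backbones, in the spirit of the 2- to 5-DL-model ensembles above, follows; the torchvision backbones are untrained stand-ins for the study's trained models.

```python
import torch
from torchvision import models

def build_resnet_like(ctor, n_classes=2):
    net = ctor(weights=None)  # untrained stand-in; the study's trained weights are unavailable here
    net.fc = torch.nn.Linear(net.fc.in_features, n_classes)
    return net.eval()

ensemble = [build_resnet_like(models.resnet50),
            build_resnet_like(models.resnext50_32x4d)]

x = torch.rand(1, 3, 224, 224)  # an external-eye photo tensor (placeholder)
with torch.no_grad():
    # Average the softmax probabilities of all ensemble members (soft voting).
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in ensemble]).mean(dim=0)
print(probs)  # averaged probability of Pseudomonas vs non-Pseudomonas keratitis
```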

8.
J Dermatol ; 48(3): 310-316, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33211346

ABSTRACT

Skin cancer is among the 10 most common cancers. Recent research revealed the superiority of artificial intelligence (AI) over dermatologists in diagnosing skin cancer from predesignated and cropped images. However, several uncertainties remain for AI in diagnosing skin cancer, including a lack of consistency testing, a lack of pathological proof, and ambiguous comparisons. Hence, developing a reliable, feasible, and user-friendly platform to deploy the automatic diagnostic algorithm is important. The aim of this study was to build a lightweight skin cancer classification model based on deep learning methods to aid first-line medical care. The developed model can be deployed on cloud platforms as well as mobile devices for remote diagnostic applications. We reviewed the medical records and clinical images of patients who received a histological diagnosis of basal cell carcinoma, squamous cell carcinoma, melanoma, seborrheic keratosis, or melanocytic nevus between 2006 and 2017 in the Department of Dermatology at Kaohsiung Chang Gung Memorial Hospital (KCGMH). We used deep learning models to identify skin cancers and benign skin tumors in both binary and multi-class classification settings on the KCGMH and HAM10000 datasets to construct a skin cancer classification model. The accuracy reached 89.5% for binary classification (benign vs. malignant) on the KCGMH dataset, 85.8% for seven-class classification on the HAM10000 dataset, and 72.1% for five-class classification on the KCGMH dataset. Our results demonstrate that our deep learning-based skin cancer classification model is a highly promising aid for the clinical diagnosis and early identification of skin cancers and benign tumors.
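
A hedged sketch of a lightweight binary (benign vs. malignant) classifier prepared for mobile or cloud deployment is shown below; MobileNetV2 and TorchScript export are illustrative choices, not necessarily the study's actual architecture or deployment path.

```python
import torch
from torchvision import models

# Start from an ImageNet-pretrained lightweight backbone (weights are downloaded on first use).
net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
net.classifier[1] = torch.nn.Linear(net.last_channel, 2)  # benign vs malignant head

# After fine-tuning on labeled skin images, export for mobile/cloud deployment.
net.eval()
scripted = torch.jit.script(net)
scripted.save("skin_binary_classifier.pt")
```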


Subjects
Deep Learning, Melanoma, Skin Neoplasms, Artificial Intelligence, Dermatologists, Humans, Melanoma/diagnosis, Skin Neoplasms/diagnosis
9.
PLoS One ; 16(1): e0245992, 2021.
Article in English | MEDLINE | ID: mdl-33507982

ABSTRACT

BACKGROUND: Identification of vertebral fractures (VFs) is critical for effective secondary fracture prevention because of their association with an increased risk of future fractures. Plain abdominal frontal radiographs (PARs) are a common investigation performed for a variety of clinical indications and provide an ideal platform for the opportunistic identification of VFs. This study uses a deep convolutional neural network (DCNN) to evaluate the feasibility of screening, detecting, and localizing VFs on PARs. METHODS: A DCNN was pretrained on ImageNet and retrained with 1306 images from a PAR database obtained between August 2015 and December 2018. Accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were evaluated. The visualization algorithm gradient-weighted class activation mapping (Grad-CAM) was used for model interpretation. RESULTS: Only 46.6% (204/438) of the VFs were diagnosed in the original PAR reports. The algorithm achieved 73.59% accuracy, 73.81% sensitivity, 73.02% specificity, and an AUC of 0.72 in VF identification. CONCLUSION: Computer-driven solutions integrated with the DCNN have the potential to identify VFs with good accuracy when used opportunistically on PARs taken for a variety of clinical purposes. The proposed model can help clinicians become more efficient and economical in the current clinical pathway of fragility fracture treatment.
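
A compact sketch of gradient-weighted class activation mapping (Grad-CAM) on a ResNet-style network follows; an untrained ResNet-18 stands in for the study's retrained DCNN, and the input tensor is a placeholder for a radiograph.

```python
import torch
import torch.nn.functional as F
from torchvision import models

net = models.resnet18(weights=None).eval()
feats, grads = {}, {}

def fwd_hook(_m, _i, out): feats["a"] = out          # activations of the last conv block
def bwd_hook(_m, _gi, gout): grads["a"] = gout[0]    # gradients w.r.t. those activations

net.layer4.register_forward_hook(fwd_hook)
net.layer4.register_full_backward_hook(bwd_hook)

x = torch.rand(1, 3, 224, 224)                       # stand-in for a plain abdominal radiograph
logits = net(x)
logits[0, logits.argmax()].backward()                # gradient of the predicted class score

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))      # weighted sum of activation maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # heat map highlighting regions driving the fracture prediction
```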


Subjects
Deep Learning, Neural Networks (Computer), Spinal Fractures/diagnostic imaging, Algorithms, Factual Databases, Humans, Computer-Assisted Image Interpretation, Radiography, Sensitivity and Specificity
10.
Sci Rep ; 11(1): 24227, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34930952

ABSTRACT

Bacterial keratitis (BK), a painful and fulminant bacterial infection of the cornea, is the most common type of vision-threatening infectious keratitis (IK). Rapid clinical diagnosis by an ophthalmologist may help prevent BK patients from progressing to corneal melting or even perforation, but many rural areas lack access to an ophthalmologist. Thanks to the rapid development of deep learning (DL) algorithms, artificial intelligence applied to images could provide immediate screening and recommendations for patients with red and painful eyes. Therefore, this study aims to elucidate the potential of different DL algorithms for diagnosing BK from external eye photos. External eye photos of clinically suspected IK were consecutively collected from five referral centers. The candidate DL frameworks, including ResNet50, ResNeXt50, DenseNet121, SE-ResNet50, and EfficientNets B0, B1, B2, and B3, were trained to recognize BK from the photos, with the goal of maximizing the area under the receiver operating characteristic curve (AUROC). Via five-fold cross-validation, EfficientNet B3 showed the best average AUROC, with average sensitivity, specificity, positive predictive value, and negative predictive value of 74%, 64%, 77%, and 61%, respectively. There was no statistical difference in diagnostic accuracy or AUROC between any two of these DL frameworks. The diagnostic accuracy of these models (ranging from 69% to 72%) is comparable to that of ophthalmologists (66% to 74%). Therefore, all these models are promising tools for diagnosing BK in first-line medical care units without ophthalmologists.
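
The five-fold cross-validated AUROC comparison used to rank candidate models can be sketched as below; logistic-regression classifiers over synthetic feature vectors stand in for the CNN frameworks, since retraining them is out of scope here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 64))                                    # stand-in image feature vectors
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)   # BK vs non-BK label (synthetic)

candidates = {"model_A": LogisticRegression(C=0.1, max_iter=1000),
              "model_B": LogisticRegression(C=10.0, max_iter=1000)}

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in candidates.items():
    aucs = []
    for tr, te in skf.split(X, y):
        clf.fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
    print(name, "mean AUROC:", np.mean(aucs))  # pick the candidate with the best mean AUROC
```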


Subjects
Computer-Assisted Diagnosis/methods, Bacterial Eye Infections/diagnostic imaging, Keratitis/diagnostic imaging, Keratitis/microbiology, Photography/methods, Algorithms, Area Under Curve, Cornea/diagnostic imaging, Cornea/microbiology, Deep Learning, Disease Progression, Humans, Ophthalmologists, Ophthalmology, Predictive Value of Tests, Programming Languages, ROC Curve, Reproducibility of Results, Translational Biomedical Research
11.
Sci Rep ; 10(1): 14424, 2020 09 02.
Article in English | MEDLINE | ID: mdl-32879364

ABSTRACT

Fungal keratitis (FK) is the most devastating and vision-threatening form of microbial keratitis, but its clinical diagnosis remains a great challenge. This study aimed to develop and verify a deep learning (DL)-based corneal photograph model for diagnosing FK. Corneal photos of laboratory-confirmed microbial keratitis were consecutively collected from a single referral center. A DL framework with a DenseNet architecture was used to automatically recognize FK from the photos. For comparison with the DL-based model, FK diagnoses from corneal photographs were made by majority decision in the NCS-Oph group (three non-corneal-specialty ophthalmologists) and the Expert group (three corneal specialists). The DL model's average sensitivity, specificity, positive predictive value, and negative predictive value were approximately 71%, 68%, 60%, and 78%, respectively. Its sensitivity was higher than that of the NCS-Oph group (52%, P < .01), whereas its specificity was lower (83%, P < .01). Its average accuracy of around 70% was comparable with that of the NCS-Oph group. Therefore, this sensitive DL-based diagnostic model is a promising tool for improving first-line medical care in rural areas through early identification of FK.
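
A minimal sketch of the reader-comparison step follows: three graders' calls are combined by majority vote and sensitivity/specificity are computed for both the grader group and the model; all labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, size=100)            # 1 = fungal keratitis, 0 = other (synthetic)
graders = rng.integers(0, 2, size=(3, 100))      # three ophthalmologists' photo-level calls
majority = (graders.sum(axis=0) >= 2).astype(int)  # majority decision of the grader group
dl_pred = rng.integers(0, 2, size=100)           # DL model predictions (placeholder)

def sens_spec(y, p):
    tn, fp, fn, tp = confusion_matrix(y, p, labels=[0, 1]).ravel()
    return tp / (tp + fn), tn / (tn + fp)

for name, pred in [("NCS-Oph majority", majority), ("DL model", dl_pred)]:
    se, sp = sens_spec(y_true, pred)
    print(f"{name}: sensitivity={se:.2f}, specificity={sp:.2f}")
```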


Subjects
Cornea/diagnostic imaging, Corneal Ulcer/diagnostic imaging, Deep Learning, Fungal Eye Infections/diagnostic imaging, Optical Imaging/methods, Photography/methods, Cornea/pathology, Corneal Ulcer/microbiology, Corneal Ulcer/pathology, Fungal Eye Infections/pathology, Humans, Optical Imaging/standards, Photography/standards, Sensitivity and Specificity