1.
Diagnostics (Basel) ; 14(12)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38928624

ABSTRACT

Screening for osteoporosis is crucial for early detection and prevention, yet it faces challenges due to the low accuracy of calcaneal quantitative ultrasound (QUS) and limited access to dual-energy X-ray absorptiometry (DXA) scans. Recent advances in AI offer a promising solution through opportunistic screening using existing medical images. This study aims to use deep learning techniques to develop a model that analyzes chest X-ray (CXR) images for osteoporosis screening. The study comprised an AI model development stage and a clinical validation stage. In the development stage, a combined dataset of 5122 paired CXR images and DXA reports was collected from patients aged 20 to 98 years at a medical center. The images were enhanced and filtered to exclude cases with retained hardware (such as pedicle screws, bone cement, or artificial intervertebral discs) or severe deformity at the target levels of T12 and L1. The dataset was then split into training, validation, and testing sets for model training and performance evaluation. In the clinical validation stage, 440 paired CXR images and DXA reports were collected from the TCVGH and Joy Clinic (304 pairs from TCVGH and 136 pairs from Joy Clinic). The pre-clinical test yielded an area under the curve (AUC) of 0.940, while the clinical validation showed an AUC of 0.946; Pearson's correlation coefficient was 0.88. The model demonstrated an overall accuracy, sensitivity, and specificity of 89.0%, 88.7%, and 89.4%, respectively. This study proposes an AI model for opportunistic osteoporosis screening through CXR, demonstrating good performance and suggesting its potential for broad adoption in preliminary screening of high-risk populations.
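The accuracy, sensitivity, and specificity figures reported above all derive from a binary confusion matrix against the DXA ground truth. A minimal sketch of that relationship, using invented counts (the study's actual per-cell counts are not given in the abstract):

```python
# Illustrative sketch with hypothetical counts: how screening metrics
# relate to a binary confusion matrix. TP/FN/TN/FP values are made up
# for demonstration, not taken from the study.

def screening_metrics(tp, fn, tn, fp):
    """Return (accuracy, sensitivity, specificity) for a binary screen."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # true-positive rate among positive (osteoporotic) cases
    specificity = tn / (tn + fp)   # true-negative rate among negative cases
    return accuracy, sensitivity, specificity

# hypothetical 400-case validation set
acc, sens, spec = screening_metrics(tp=170, fn=30, tn=180, fp=20)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```

The AUC values quoted (0.940 pre-clinical, 0.946 clinical) summarize the same trade-off across all decision thresholds rather than at a single operating point.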

2.
J Diabetes Res ; 2022: 5779276, 2022.
Article in English | MEDLINE | ID: mdl-35308093

ABSTRACT

Aims: To investigate the applicability of the deep learning image assessment software VeriSee DR to different color fundus cameras for the screening of diabetic retinopathy (DR). Methods: Color fundus images of patients with diabetes taken with three different nonmydriatic fundus cameras (477 Topcon TRC-NW400, 459 Topcon TRC-NW8 series, and 471 Kowa nonmyd 8 series images) that were judged "gradable" by one ophthalmologist were enrolled for validation. VeriSee DR was then used for the diagnosis of referable DR according to the International Clinical Diabetic Retinopathy Disease Severity Scale. Gradability, sensitivity, and specificity were calculated for each camera model. Results: All images (100%) from the three camera models were gradable for VeriSee DR. The sensitivity for diagnosing referable DR with the TRC-NW400, TRC-NW8, and nonmyd 8 series was 89.3%, 94.6%, and 95.7%, respectively, while the specificity was 94.2%, 90.4%, and 89.3%, respectively. Neither sensitivity nor specificity differed significantly between these camera models and the original camera model used for VeriSee DR development (p = 0.40 and p = 0.065, respectively). Conclusions: VeriSee DR was applicable to a variety of color fundus cameras, with 100% agreement with ophthalmologists in terms of gradability and good sensitivity and specificity for the diagnosis of referable DR.
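The p-values above come from significance tests comparing detection rates across camera models (the subject indexing lists the chi-square distribution). A minimal stdlib-only sketch of the equivalent large-sample two-proportion z-test for comparing sensitivities between two cameras, with invented counts (the abstract does not give per-cell case numbers):

```python
# Illustrative sketch with hypothetical counts: two-proportion z-test for
# whether the sensitivity of two camera models differs. For a 2x2 table
# this is equivalent to the chi-square test (z^2 = chi^2). All counts are
# invented, not the study's data.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return the z statistic for H0: p_a == p_b, using the pooled estimate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# e.g. 89 of 100 referable-DR eyes detected by camera A vs 94 of 100 by camera B
z = two_proportion_z(89, 100, 94, 100)
print(f"z = {z:.2f}")  # |z| < 1.96 -> not significant at the 5% level
```

With rates this close and samples of a few hundred eyes per camera, the test has limited power to detect small differences, which is consistent with the non-significant results reported.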


Subjects
Artificial Intelligence/standards , Diabetic Retinopathy/diagnosis , Ophthalmoscopes/standards , Software Design , Adult , Artificial Intelligence/statistics & numerical data , Chi-Square Distribution , Diabetes Mellitus/diagnostic imaging , Diabetic Retinopathy/diagnostic imaging , Female , Humans , Male , Middle Aged , Ophthalmoscopes/statistics & numerical data , Reproducibility of Results