Results 1 - 6 of 6
1.
J Digit Imaging; 36(3): 869-878, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36627518

ABSTRACT

The purpose of this study was to pair computed tomography (CT) imaging and machine learning for automated bone tumor segmentation and classification to aid clinicians in determining the need for biopsy. In this retrospective study (March 2005-October 2020), a dataset of 84 femur CT scans (50 females and 34 males, 20 years and older) with definitive histologic confirmation of bone lesions (71% malignant) was leveraged to perform automated tumor segmentation and classification. Our method involves a deep learning architecture that receives a DICOM slice and predicts (i) a segmentation mask over the estimated tumor region and (ii) a corresponding class, benign or malignant. The class prediction for each case is then determined via majority voting. Statistical analysis was conducted via fivefold cross-validation, with results reported as averages along with 95% confidence intervals. Despite the imbalance between benign and malignant cases in our dataset, our approach attains similar classification performance in specificity (75%) and sensitivity (79%). Average segmentation performance attains a 56% Dice score and reaches up to 80% for an image slice in each scan. The proposed approach establishes the first steps in developing an automated deep learning method for bone tumor segmentation and classification from CT imaging. Our approach attains quantitative performance comparable to existing deep learning models using other imaging modalities, including X-ray. Moreover, visual analysis of the bone tumor segmentations indicates that our model is capable of learning typical tumor characteristics and provides a promising direction for aiding the clinical decision process for biopsy.
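
The per-slice prediction and majority-voting scheme described above can be illustrated with a minimal sketch; the model interface (a callable returning a mask and a malignancy probability) and the 0.5 threshold are illustrative assumptions, not the authors' published code:

```python
# Minimal sketch of per-slice classification followed by majority voting.
# `model` is assumed to return (segmentation_mask, malignancy_probability)
# for a single DICOM slice; this interface is an assumption.
from collections import Counter

def classify_scan(model, slices, threshold=0.5):
    """Classify a CT scan as benign or malignant by majority vote over slices."""
    votes = []
    for dicom_slice in slices:
        mask, prob_malignant = model(dicom_slice)
        votes.append("malignant" if prob_malignant >= threshold else "benign")
    label, _count = Counter(votes).most_common(1)[0]
    return label
```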


Subjects
Bone Neoplasms; Tomography, X-Ray Computed; Male; Female; Humans; Retrospective Studies; Tomography, X-Ray Computed/methods; Machine Learning; Bone Neoplasms/diagnostic imaging; Biopsy; Image Processing, Computer-Assisted/methods
2.
J Imaging Inform Med; 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717516

ABSTRACT

Osteoporosis is the most common chronic metabolic bone disease worldwide, and vertebral compression fracture (VCF) is the most common type of osteoporotic fracture. Approximately 700,000 osteoporotic VCFs are diagnosed annually in the USA alone, resulting in an annual economic burden of ~$13.8B. With an aging population, the rate of osteoporotic VCFs and their associated burdens, including pain, functional impairment, and increased medical expenditure, are expected to rise. It is therefore of utmost importance to develop analytical tools to aid in the identification of VCFs. Computed tomography (CT) imaging is commonly used to detect occult injuries. Unlike existing CT-based VCF detection approaches, the standard clinical criteria for determining VCF rely on vertebral shape, such as loss of vertebral body height. To bridge this gap, we developed a novel automated pipeline for vertebra localization, segmentation, and osteoporotic VCF detection in CT scans using state-of-the-art deep learning models. To do so, we employed a publicly available dataset of spine CT scans with 325 scans annotated for segmentation, 126 of which were also graded for VCF (81 with VCFs and 45 without). Our approach attained 96% sensitivity and 81% specificity in detecting VCF at the vertebral level, and 100% accuracy at the subject level, outperforming deep learning counterparts tested for VCF detection without segmentation. Crucially, we showed that adding predicted vertebra segments as inputs significantly improved VCF detection at both the vertebral and subject levels, by up to 14% in sensitivity and 20% in specificity (p-value = 0.028).
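
The reported gain from feeding predicted vertebra segments into the detector can be sketched as a two-channel classifier; the architecture below is a deliberately minimal stand-in (the layer sizes and the `VCFClassifier` name are assumptions), not the study's actual network:

```python
# Hedged sketch: a VCF classifier that takes the CT patch and the predicted
# vertebra segmentation mask as two input channels. Layer sizes are
# illustrative placeholders, not the published architecture.
import torch
import torch.nn as nn

class VCFClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 channels: CT + mask
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)  # logit for "has VCF"

    def forward(self, ct_patch, vertebra_mask):
        x = torch.cat([ct_patch, vertebra_mask], dim=1)  # (N, 2, H, W)
        return self.head(self.features(x).flatten(1))
```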

3.
J Imaging Inform Med; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937344

ABSTRACT

Spine disorders can cause severe functional limitations, including back pain, decreased pulmonary function, and increased mortality risk. Plain radiography is the first-line imaging modality for diagnosing suspected spine disorders. Nevertheless, radiographic appearance is not always sufficient due to highly variable patient and imaging parameters, which can lead to misdiagnosis or delayed diagnosis. An accurate automated detection model can alleviate the workload of clinical experts, thereby reducing human error, facilitating earlier detection, and improving diagnostic accuracy. To this end, deep learning-based computer-aided diagnosis (CAD) tools have significantly outperformed the accuracy of traditional CAD software. Motivated by these observations, we proposed a deep learning-based approach for end-to-end detection and localization of spine disorders from plain radiographs. In doing so, we took the first steps in employing state-of-the-art transformer networks to differentiate images of multiple spine disorders from healthy counterparts and localize the identified disorders, focusing on vertebral compression fractures (VCF) and spondylolisthesis due to their high prevalence and potential severity. The VCF dataset comprised 337 images with VCFs collected from 138 subjects and 624 normal images collected from 337 subjects. The spondylolisthesis dataset comprised 413 images with spondylolisthesis collected from 336 subjects and 782 normal images collected from 413 subjects. Transformer-based models exhibited 0.97 Area Under the Receiver Operating Characteristic Curve (AUC) in VCF detection and 0.95 AUC in spondylolisthesis detection. Further, transformers demonstrated significant performance improvements over existing end-to-end approaches, by 4-14% AUC (p-values < 10^-13) for VCF detection and by 14-20% AUC (p-values < 10^-9) for spondylolisthesis detection.
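
As a rough illustration of fine-tuning a transformer for this kind of binary radiograph classification, the sketch below uses a generic vision transformer from the timm library; the model name, learning rate, and loss are assumptions, and the paper's exact training setup is not reproduced here:

```python
# Hedged sketch: fine-tuning a generic vision transformer (via timm) for
# binary spine-disorder classification. Model choice and hyperparameters
# are illustrative assumptions, not the published configuration.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = torch.nn.BCEWithLogitsLoss()

def train_step(images, labels):
    """images: (N, 3, 224, 224) tensor; labels: (N,) with 1 = disorder present."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```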

4.
Sci Rep; 14(1): 12046, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802519

ABSTRACT

Hip fractures exceed 250,000 cases annually in the United States, with the worldwide incidence projected to increase by 240-310% by 2050. Hip fractures are predominantly diagnosed by radiologist review of radiographs. In this study, we developed a deep learning model by extending the VarifocalNet Feature Pyramid Network (FPN) for detection and localization of proximal femur fractures from plain radiography, evaluated with clinically relevant metrics. We used a dataset of 823 hip radiographs of 150 subjects with proximal femur fractures and 362 controls to develop and evaluate the model. Our model attained 0.94 specificity and 0.95 sensitivity in fracture detection over this diverse imaging dataset. We compared the performance of our model against five benchmark FPN models, demonstrating improvements of 6-14% in sensitivity and 1-9% in accuracy. In addition, we demonstrated that our model outperforms a state-of-the-art transformer model based on the DINO network by 17% in sensitivity and 5% in accuracy, while taking half the time on average to process a radiograph. The developed model can aid radiologists and supports on-premise integration with hospital cloud services to enable automatic, opportunistic screening for hip fractures.
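
The sensitivity and specificity figures above follow from turning per-radiograph detections into a binary fracture call; a minimal sketch of that scoring step, with an assumed detector output of (box, score) pairs and an illustrative 0.5 threshold, looks like:

```python
# Hedged sketch: image-level fracture decision from detector output, plus
# sensitivity/specificity computation. The detector interface is an assumption.
def detect_fracture(detections, score_threshold=0.5):
    """detections: list of (box, score) pairs for one radiograph."""
    return any(score >= score_threshold for _box, score in detections)

def sensitivity_specificity(y_true, y_pred):
    """y_true, y_pred: parallel lists of booleans (True = fracture)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```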


Subjects
Deep Learning; Radiography; Humans; Female; Male; Aged; Radiography/methods; Hip Fractures/diagnostic imaging; Middle Aged; Aged, 80 and over; Femoral Fractures/diagnostic imaging; Sensitivity and Specificity; Neural Networks, Computer; Proximal Femoral Fractures
5.
Med Biol Eng Comput; 61(8): 1947-1959, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37243852

ABSTRACT

The Focused Assessment with Sonography in Trauma (FAST) exam is the standard of care for pericardial and abdominal free fluid detection in emergency medicine. Despite its life-saving potential, FAST is underutilized because it requires clinicians with appropriate training and practice. The role of artificial intelligence in aiding ultrasound interpretation has been studied, but existing approaches leave room for improvement in localization information and computation time. The purpose of this study was to develop and test a deep learning approach to rapidly and accurately identify both the presence and location of pericardial effusion on point-of-care ultrasound (POCUS) exams. Each cardiac POCUS exam is analyzed image-by-image via the state-of-the-art YOLOv3 algorithm, and pericardial effusion presence is determined from the most confident detection. We evaluate our approach over a dataset of POCUS exams (cardiac component of FAST and ultrasound), comprising 37 cases with pericardial effusion and 39 negative controls. Our algorithm attains 92% specificity and 89% sensitivity in pericardial effusion identification, outperforming existing deep learning approaches, and localizes pericardial effusion with 51% Intersection over Union (IoU) against ground-truth annotations. Moreover, image processing incurs only 57 ms of latency. Experimental results demonstrate the feasibility of rapid and accurate pericardial effusion detection from POCUS exams for physician overread.
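
The most-confident-detection rule and the IoU localization metric described above can be sketched as follows; the detector interface (frames in, (box, score) pairs out) and the decision threshold are illustrative assumptions:

```python
# Hedged sketch: exam-level effusion call from the single most confident
# per-frame detection, and IoU against a ground-truth box. The detector
# interface and 0.5 threshold are assumptions.
def exam_has_effusion(detector, frames, threshold=0.5):
    best_score = 0.0
    for frame in frames:
        for _box, score in detector(frame):
            best_score = max(best_score, score)
    return best_score >= threshold

def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```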


Subjects
Pericardial Effusion; Humans; Pericardial Effusion/diagnostic imaging; Point-of-Care Systems; Artificial Intelligence; Ultrasonography/methods; Heart
6.
Front Neurol; 14: 1295132, 2023.
Article in English | MEDLINE | ID: mdl-38249724

ABSTRACT

Introduction: Monitoring upper limb function is crucial for tracking progress, assessing treatment effectiveness, and identifying potential problems or complications. Hand goal-directed movements (GDMs) are a crucial aspect of daily life, reflecting planned motor commands with hand trajectories toward specific target locations. Previous studies have shown that GDM tasks can detect early changes in upper limb function in neurodegenerative diseases and can be used to track disease progression over time. Methods: In this study, we used accelerometer data from stroke survivors and control participants performing activities of daily living to develop an automated deep learning approach for detecting GDMs. The model achieved an AUC of 0.90, accuracy of 0.83, sensitivity of 0.81, specificity of 0.84, and an F1 score of 0.82 in classifying windowed data as GDM or non-GDM. Results: We further validated the utility of GDM detection by extracting features from GDM periods and using them to classify whether measurements were collected from a stroke survivor or a control participant, and to predict the Fugl-Meyer assessment score of stroke survivors. Discussion: This study presents a promising and reliable tool for monitoring upper limb function in real-world settings and for assessing biomarkers related to upper limb health in neurological, neuromuscular, and muscular disorders.
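
The windowing step implied by "windowed data" above can be sketched simply; the window length and overlap below are assumed values, not the study's parameters:

```python
# Hedged sketch: slicing a tri-axial accelerometer stream into fixed-length
# overlapping windows for GDM / non-GDM classification. Window length and
# step are illustrative assumptions.
import numpy as np

def window_signal(acc, window_len=128, step=64):
    """acc: (T, 3) array of accelerometer samples -> (num_windows, window_len, 3)."""
    windows = [acc[i:i + window_len]
               for i in range(0, len(acc) - window_len + 1, step)]
    return np.stack(windows) if windows else np.empty((0, window_len, 3))
```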
