Results 1 - 2 of 2
1.
J Digit Imaging; 36(3): 869-878, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36627518

ABSTRACT

The purpose of this study was to pair computed tomography (CT) imaging and machine learning for automated bone tumor segmentation and classification to aid clinicians in determining the need for biopsy. In this retrospective study (March 2005-October 2020), a dataset of 84 femur CT scans (50 females and 34 males, 20 years and older) with definitive histologic confirmation of bone lesions (71% malignant) was leveraged to perform automated tumor segmentation and classification. Our method involves a deep learning architecture that receives a DICOM slice and predicts (i) a segmentation mask over the estimated tumor region and (ii) a corresponding class, benign or malignant. The class prediction for each case is then determined via majority voting over its slices. Statistical analysis was conducted via fivefold cross validation, with results reported as averages along with 95% confidence intervals. Despite the imbalance between benign and malignant cases in our dataset, our approach attains similar classification performance in specificity (75%) and sensitivity (79%). Average segmentation performance attains a 56% Dice score and reaches up to 80% for an image slice in each scan. The proposed approach establishes the first steps in developing an automated deep learning method for bone tumor segmentation and classification from CT imaging. Our approach attains quantitative performance comparable to existing deep learning models that use other imaging modalities, including X-ray. Moreover, visual analysis of the bone tumor segmentations indicates that our model is capable of learning typical tumor characteristics and provides a promising direction for aiding the clinical decision process for biopsy.
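The abstract describes two mechanics worth making concrete: aggregating per-slice predictions into one scan-level label by majority voting, and scoring segmentation quality with the Dice coefficient. The sketch below is not the authors' code; the function names, the 0.5 threshold, and the toy inputs are illustrative assumptions.

```python
# Minimal sketch (not the published implementation): scan-level classification
# by majority voting over per-slice malignancy probabilities, and the Dice
# score used to evaluate predicted segmentation masks. Threshold and example
# values are illustrative assumptions.
import numpy as np

def majority_vote(slice_probs, threshold=0.5):
    """Aggregate per-slice P(malignant) values into one scan-level label.

    Returns "malignant" if more than half of the slices exceed the threshold.
    """
    votes = (np.asarray(slice_probs) >= threshold).astype(int)
    return "malignant" if votes.sum() > len(votes) / 2 else "benign"

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between a predicted and a ground-truth binary mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy example: five slice-level probabilities from a hypothetical model
print(majority_vote([0.2, 0.7, 0.9, 0.6, 0.4]))        # -> "malignant"
print(dice_score([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # -> ~0.667
```

Majority voting makes the scan-level decision robust to a few misclassified slices, which matters when the tumor is visible on only part of the volume.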


Subject(s)
Bone Neoplasms; Tomography, X-Ray Computed; Male; Female; Humans; Retrospective Studies; Tomography, X-Ray Computed/methods; Machine Learning; Bone Neoplasms/diagnostic imaging; Biopsy; Image Processing, Computer-Assisted/methods
2.
Sci Rep; 14(1): 12046, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802519

ABSTRACT

Hip fractures exceed 250,000 cases annually in the United States, with the worldwide incidence projected to increase by 240-310% by 2050. Hip fractures are predominantly diagnosed by radiologist review of radiographs. In this study, we developed a deep learning model by extending the VarifocalNet Feature Pyramid Network (FPN) for detection and localization of proximal femur fractures from plain radiography, evaluated with clinically relevant metrics. We used a dataset of 823 hip radiographs of 150 subjects with proximal femur fractures and 362 controls to develop and evaluate the deep learning model. Our model attained 0.94 specificity and 0.95 sensitivity in fracture detection over the diverse imaging dataset. We compared the performance of our model against five benchmark FPN models, demonstrating a 6-14% improvement in sensitivity and a 1-9% improvement in accuracy. In addition, we demonstrated that our model outperforms a state-of-the-art transformer model based on the DINO network by 17% in sensitivity and 5% in accuracy, while taking half the time on average to process a radiograph. The developed model can aid radiologists and supports on-premise integration with hospital cloud services to enable automatic, opportunistic screening for hip fractures.
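The reported 0.94 specificity and 0.95 sensitivity come from turning a detector's box-level outputs into a per-radiograph fracture decision and comparing against ground truth. The sketch below is an assumption-laden illustration of that evaluation step, not the authors' VarifocalNet extension; the detection format, the 0.5 score threshold, and the toy labels are hypothetical.

```python
# Minimal sketch (assumptions, not the published model): convert per-box
# confidence scores from a hypothetical fracture detector into a
# radiograph-level decision, then compute sensitivity and specificity.
from typing import List, Tuple

BBox = Tuple[float, float, float, float]  # x1, y1, x2, y2 in pixels

def image_has_fracture(detections: List[Tuple[float, BBox]],
                       score_threshold: float = 0.5) -> bool:
    """Flag the radiograph as positive if any detected box passes the threshold."""
    return any(score >= score_threshold for score, _ in detections)

def sensitivity_specificity(predictions: List[bool], labels: List[bool]):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(p and y for p, y in zip(predictions, labels))
    tn = sum((not p) and (not y) for p, y in zip(predictions, labels))
    fn = sum((not p) and y for p, y in zip(predictions, labels))
    fp = sum(p and (not y) for p, y in zip(predictions, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: four radiographs, one false positive
preds = [True, False, True, False]
truth = [True, False, False, False]
print(sensitivity_specificity(preds, truth))  # -> (1.0, 0.666...)
```

In an opportunistic-screening setting, the score threshold would typically be tuned to favor sensitivity, since missed fractures are costlier than flagged negatives sent for radiologist review.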


Subject(s)
Deep Learning; Radiography; Humans; Female; Male; Aged; Radiography/methods; Hip Fractures/diagnostic imaging; Middle Aged; Aged, 80 and over; Femoral Fractures/diagnostic imaging; Sensitivity and Specificity; Neural Networks, Computer; Proximal Femoral Fractures