Results 1 - 3 of 3
1.
J Digit Imaging; 34(1): 53-65, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33479859

ABSTRACT

Admission trauma whole-body CT is routinely employed as a first-line diagnostic tool for characterizing pelvic fracture severity. The Tile AO/OTA grade, based on the presence or absence of rotational and translational instability, corresponds with the need for interventions including massive transfusion and angioembolization. An automated method could be highly beneficial for point-of-care triage in this critical, time-sensitive setting. A dataset of 373 trauma whole-body CTs, collected from two busy level 1 trauma centers and assigned consensus Tile AO/OTA grades by three trauma radiologists, was used to train and test a triplanar parallel concatenated network that takes orthogonal full-thickness multiplanar reformat (MPR) views as input, with a ResNeXt-50 backbone. Input pelvic images were first derived using an automated registration and cropping technique. Performance of the network for classification of rotational and translational instability was compared with that of (1) an analogous triplanar architecture incorporating an LSTM RNN, (2) a previously described 3D autoencoder-based method, and (3) grading by a fourth independent blinded radiologist with trauma expertise. Confusion matrix results were derived, anchored to the peak Matthews correlation coefficient (MCC). Associations with clinical outcomes were determined using Fisher's exact test. The triplanar parallel concatenated method had the highest accuracies for discriminating translational and rotational instability (85% and 74%, respectively), with specificity, recall, and F1 score of 93.4%, 56.5%, and 0.63 for translational instability and 71.7%, 75.7%, and 0.77 for rotational instability. Accuracy of this method was equivalent to the single-radiologist read for rotational instability (74.0% versus 76.7%, p = 0.40) but significantly higher for translational instability (85.0% versus 75.1%, p = 0.0007). Mean inference time was < 0.1 s per test image. Translational instability determined with this method was associated with the need for angioembolization and massive transfusion (p = 0.002-0.008). Saliency maps demonstrated that the network focused on the sacroiliac complex and pubic symphysis, in keeping with the AO/OTA grading paradigm. A multiview concatenated deep network leveraging 3D information from orthogonal thick-MPR images predicted rotationally and translationally unstable pelvic fractures with accuracy comparable to that of an independent reader with trauma radiology expertise. Model output demonstrated significant association with key clinical outcomes.
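As a rough illustration of the triplanar parallel concatenated architecture described above, a minimal PyTorch sketch follows. The use of torchvision's resnext50_32x4d, the unshared branches, the 2048-d feature size, and the 3-channel 224 × 224 inputs are illustrative assumptions, not details confirmed by the abstract.

```python
# Minimal sketch of a triplanar parallel concatenated classifier: one
# ResNeXt-50 branch per orthogonal thick-MPR view, features concatenated
# and classified. Architecture details here are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision.models import resnext50_32x4d

class TriplanarConcatNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One backbone per view (axial, coronal, sagittal), not weight-shared.
        self.branches = nn.ModuleList()
        for _ in range(3):
            backbone = resnext50_32x4d(weights=None)
            backbone.fc = nn.Identity()  # keep the 2048-d pooled features
            self.branches.append(backbone)
        self.head = nn.Linear(3 * 2048, num_classes)

    def forward(self, axial, coronal, sagittal):
        feats = [b(v) for b, v in zip(self.branches, (axial, coronal, sagittal))]
        return self.head(torch.cat(feats, dim=1))

# Toy usage: each view is a batch of 3-channel images (e.g., a thick-MPR slab
# replicated or windowed into three channels to match the ImageNet-style stem).
model = TriplanarConcatNet()
views = [torch.randn(1, 3, 224, 224) for _ in range(3)]
logits = model(*views)
```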
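The confusion-matrix results above are anchored to the peak MCC. A small sketch of selecting an operating threshold that way, assuming probability scores in [0, 1] and using scikit-learn's matthews_corrcoef (the grid of 101 thresholds is an arbitrary choice):

```python
# Pick the decision threshold at which the Matthews correlation coefficient
# peaks; confusion-matrix statistics would then be reported at that threshold.
import numpy as np
from sklearn.metrics import matthews_corrcoef

def best_mcc_threshold(y_true, scores, thresholds=np.linspace(0.0, 1.0, 101)):
    mccs = [matthews_corrcoef(y_true, scores >= t) for t in thresholds]
    return thresholds[int(np.argmax(mccs))]

# Toy usage with made-up labels and scores.
y = np.array([0, 0, 1, 1, 1])
s = np.array([0.10, 0.40, 0.35, 0.80, 0.90])
t_star = best_mcc_threshold(y, s)
```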


Subjects
Deep Learning, Fractures, Bone, Pelvic Bones, Fractures, Bone/diagnostic imaging, Humans, Pelvic Bones/diagnostic imaging, Pelvis, Tomography, X-Ray Computed
2.
Int J Comput Assist Radiol Surg; 14(9): 1517-1528, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31187399

ABSTRACT

PURPOSE: Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images acquired for procedural guidance are not archived and are thus unavailable for learning, and even if they were available, annotation would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs, since labeling is comparatively easy and potentially readily available. METHODS: We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with software platforms native to deep learning, i.e., Python, PyTorch, and PyCUDA. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve DRRs and on DeepDRRs and compare their performance on data from cadaveric specimens acquired using a clinical C-arm X-ray system. RESULTS: Our findings are consistent across both tasks. All ConvNets performed similarly well when evaluated on their respective synthetic testing sets. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]). CONCLUSION: Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT helps promote the adoption of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis and simplify surgical workflows.
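To make the analytic core of the pipeline concrete, the toy sketch below shows parallel-beam forward projection of a CT volume followed by Poisson noise injection. This is not DeepDRR's actual API: the single-energy attenuation constant, the fixed geometry, and the photon count are simplifying assumptions, and DeepDRR's learned material decomposition and scatter estimation are omitted entirely.

```python
# Toy digitally reconstructed radiograph: HU -> attenuation, line integrals
# along one axis, Beer-Lambert attenuation, Poisson noise, log transform.
import numpy as np

def simple_drr(ct_hu, spacing_mm=1.0, photons=1e4, rng=np.random.default_rng(0)):
    # Rough single-energy HU-to-attenuation conversion (water ~0.02/mm);
    # DeepDRR instead decomposes materials and models a full X-ray spectrum.
    mu = 0.02 * np.clip(1.0 + ct_hu / 1000.0, 0.0, None)
    line_integrals = mu.sum(axis=0) * spacing_mm  # parallel-beam projection
    expected = photons * np.exp(-line_integrals)  # Beer-Lambert law
    noisy = rng.poisson(expected).astype(np.float64)  # noise injection
    return -np.log(np.maximum(noisy, 1.0) / photons)

drr = simple_drr(np.zeros((64, 128, 128)))  # dummy all-water CT volume in HU
```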


Subjects
Fluoroscopy, Image Processing, Computer-Assisted/methods, Machine Learning, Neural Networks, Computer, Tomography, X-Ray Computed, Algorithms, Cadaver, Computer Simulation, Humans, Imaging, Three-Dimensional, Models, Anatomic, Scattering, Radiation, X-Rays
3.
Int J Comput Assist Radiol Surg; 14(9): 1463-1473, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31006106

ABSTRACT

PURPOSE: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intra-operative image guidance. Yet fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views of the anatomy. Surgeons could benefit greatly from additional information, such as the locations of anatomical landmarks in the projections, to support intra-operative decision making. However, detecting landmarks is challenging because the viewing direction changes substantially between views, leading to varying appearance of the same landmark. To the best of our knowledge, view-independent anatomical landmark detection has therefore not yet been investigated. METHODS: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of [Formula: see text]. RESULTS: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be applied directly to real X-rays and show that these detections define correspondences to a respective CT volume, which allows analytic estimation of the 11-degree-of-freedom projective mapping. CONCLUSION: We present the first tool to detect anatomical landmarks in X-ray images independent of their viewing direction. Access to this information during surgery may benefit decision making and constitutes a first step toward global initialization of 2D/3D registration without the need for calibration. As such, the proposed concept has strong prospects to facilitate and enhance applications and methods in the realm of image-guided surgery.
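The view-invariance augmentation above samples viewing angles on a spherical segment whose extent the abstract leaves as "[Formula: see text]". A sketch of area-uniform sampling over a spherical cap follows; the ±60° extent is an illustrative stand-in for the unspecified range:

```python
# Sample viewing directions uniformly (by area) over a spherical cap around
# +z: azimuth uniform in [0, 2*pi), cos(polar) uniform in [cos(max), 1].
import numpy as np

def sample_view_direction(rng, max_polar_deg=60.0):
    azimuth = rng.uniform(0.0, 2.0 * np.pi)
    cos_polar = rng.uniform(np.cos(np.radians(max_polar_deg)), 1.0)
    sin_polar = np.sqrt(1.0 - cos_polar ** 2)
    return np.array([sin_polar * np.cos(azimuth),
                     sin_polar * np.sin(azimuth),
                     cos_polar])

rng = np.random.default_rng(42)
directions = np.stack([sample_view_direction(rng) for _ in range(1000)])
```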


Assuntos
Imageamento Tridimensional/métodos , Pelve/diagnóstico por imagem , Radiografia/métodos , Tomografia Computadorizada por Raios X/métodos , Algoritmos , Calibragem , Feminino , Humanos , Masculino , Redes Neurais de Computação , Reprodutibilidade dos Testes , Cirurgia Assistida por Computador , Raios X