Results 1 - 4 of 4
1.
Int J Med Robot ; : e2570, 2023 Sep 10.
Article in English | MEDLINE | ID: mdl-37690099

ABSTRACT

OBJECTIVE: This study evaluates the precision of commercially available spine-planning software in automatic spine labelling and screw-trajectory proposal. METHODS: The software uses automatic segmentation and registration of the vertebrae to generate screw proposals. A total of 877 trajectories were compared. Four neurosurgeons assessed the suggested trajectories, performed corrections, and manually planned pedicle screws. Additionally, automatic identification/labelling was evaluated. RESULTS: Automatic labelling was correct in 89% of cases, and 92.9% of automatically planned trajectories corresponded to Gertzbein-Robbins (G&R) grades A and B. Automatic mode reduced the time spent planning screw trajectories by 7 s per screw to 20 s per vertebra. Manual planning yielded differences in screw length between surgeons (largest distribution peak at 5 mm), whereas automatic planning peaked at 0 mm. The suggested pedicle screws were significantly smaller than the surgeons' choices (largest peaks in the difference between 0.5 and 3 mm). CONCLUSION: Automatic identification of vertebrae works in most cases and the suggested pedicle-screw trajectories are acceptable; so far, however, it does not substitute for an experienced surgeon's assessment.
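The "largest distribution peak" reported above is simply the most frequent value in the histogram of screw-length differences between two plans. As a purely illustrative sketch (the lengths below are placeholder values, not data from the study), such a peak could be computed like this:

```python
# Illustrative only: comparing automatically proposed and manually planned
# pedicle-screw lengths; the values below are placeholders, not study data.
import numpy as np

auto_lengths = np.array([45.0, 40.0, 50.0, 45.0])    # mm, software proposals (example values)
manual_lengths = np.array([45.0, 45.0, 45.0, 50.0])  # mm, surgeons' plans (example values)

differences = manual_lengths - auto_lengths
values, counts = np.unique(differences, return_counts=True)
peak = values[np.argmax(counts)]  # most frequent difference = "largest distribution peak"
print(f"Most frequent screw-length difference: {peak} mm")
```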

2.
J Clin Med ; 10(22)2021 Nov 16.
Article in English | MEDLINE | ID: mdl-34830608

ABSTRACT

BACKGROUND: Ex vivo fluorescent confocal microscopy (FCM) is a novel and effective method for fast, automated histological tissue examination, whereas conventional diagnostic methods rely primarily on the skills of the histopathologist. In this study, we investigated, for the first time, the potential of convolutional neural networks (CNNs) for the automated classification of oral squamous cell carcinoma in ex vivo FCM images. MATERIAL AND METHODS: Tissue samples from 20 patients were collected, scanned with an ex vivo confocal microscope immediately after resection, and examined histopathologically. A CNN architecture (MobileNet) was trained and tested for accuracy. RESULTS: The model achieved a sensitivity of 0.47 and a specificity of 0.96 in the automated classification of cancerous tissue. CONCLUSION: In this preliminary work, we trained a CNN model on a limited number of ex vivo FCM images and obtained promising results for the automated classification of cancerous tissue. Further studies with larger sample sizes are warranted before this technology can be introduced into clinical practice.
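As a rough illustration of the approach described above, the following is a minimal sketch of a MobileNet-based binary classifier for tumour vs. tumour-free image patches. The input size, optimizer settings, and data-loading step are assumptions and are not details reported in the study.

```python
# Minimal sketch of a MobileNet binary classifier (assumed hyperparameters).
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg"
)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = cancerous, 0 = tumour-free
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.SensitivityAtSpecificity(0.96)],
)
# model.fit(train_patches, train_labels, validation_data=(val_patches, val_labels))
```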

3.
Cancers (Basel) ; 13(21)2021 Nov 03.
Article in English | MEDLINE | ID: mdl-34771684

ABSTRACT

Image classification with convolutional neural networks (CNNs) offers an unprecedented opportunity for medical imaging. Regulatory agencies in the USA and Europe have already cleared numerous deep-learning/machine-learning based medical devices and algorithms. While the field of radiology is at the forefront of the artificial intelligence (AI) revolution, conventional pathology, which commonly relies on the examination of tissue samples on a glass slide, is falling behind in leveraging this technology. Ex vivo confocal laser scanning microscopy (ex vivo CLSM), on the other hand, owing to its digital workflow, has a high potential to benefit from integrating AI tools into the assessment and decision-making process. The aim of this work was to explore a preliminary application of CNNs to digitally stained ex vivo CLSM images of cutaneous squamous cell carcinoma (cSCC) for automated detection of tumor tissue. Thirty-four freshly excised tissue samples were prospectively collected and examined immediately after resection. After the histologically confirmed ex vivo CLSM diagnosis, the tumor tissue was annotated for segmentation by experts in order to train the MobileNet CNN. The model was then trained and evaluated using cross-validation. Compared with expert evaluation, the overall sensitivity and specificity of the deep neural network for detecting cSCC and tumor-free areas on ex vivo CLSM slides were 0.76 and 0.91, respectively. The area under the ROC curve was 0.90, and the area under the precision-recall curve was 0.85. The results demonstrate the high potential of deep learning models to detect cSCC regions on digitally stained ex vivo CLSM slides and to distinguish them from tumor-free skin.
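For readers unfamiliar with the reported metrics, the sketch below shows how sensitivity, specificity, ROC AUC, and precision-recall AUC can be computed from predicted probabilities and expert labels. The arrays are placeholder values, not data from the paper.

```python
# Illustrative metric computation; y_true / y_score are placeholder values.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                     # expert label: 1 = cSCC, 0 = tumor-free
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.35, 0.8, 0.6])   # model probability of cSCC

y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"ROC AUC={roc_auc_score(y_true, y_score):.2f}, "
      f"PR AUC={average_precision_score(y_true, y_score):.2f}")
```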

4.
Int J Med Robot ; 17(2): e2228, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33462965

ABSTRACT

BACKGROUND: Two-dimensional to three-dimensional (2D-3D) registration is challenging in the presence of implant projections on intraoperative images, which can limit the registration capture range. Here, we investigate the use of deep-learning-based inpainting to remove implant projections from X-rays and improve registration performance. METHODS: We trained deep-learning-based inpainting models that fill in the implant projections on X-rays. Clinical datasets were collected to evaluate the inpainting using six image-similarity measures. The effect of X-ray inpainting on the capture range of 2D-3D registration was also evaluated. RESULTS: X-ray inpainting significantly improved the similarity between the inpainted images and the ground truth. When inpainting was applied before the 2D-3D registration process, we demonstrated significant recovery of the capture range, by up to 85%. CONCLUSION: Applying deep-learning-based inpainting to X-ray images occluded by implants can markedly improve the capture range of the associated 2D-3D registration task.
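A highly simplified sketch of how such an inpainting model might be applied before registration is given below. The function name, the model interface, and the masking convention are hypothetical stand-ins; the paper does not describe this API.

```python
# Illustrative sketch only: filling implant regions on an intraoperative X-ray
# with a trained inpainting network before 2D-3D registration.
import numpy as np

def inpaint_implants(xray: np.ndarray, implant_mask: np.ndarray, model) -> np.ndarray:
    """Replace pixels under the implant mask with the network's prediction."""
    masked = xray.copy()
    masked[implant_mask] = 0.0                          # zero out implant projections
    prediction = model.predict(masked[None, ..., None])[0, ..., 0]  # assumed Keras-style model
    restored = xray.copy()
    restored[implant_mask] = prediction[implant_mask]   # keep original pixels elsewhere
    return restored

# The inpainted image would then be passed to the 2D-3D registration pipeline
# in place of the original, implant-occluded X-ray.
```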


Subjects
Deep Learning; Algorithms; Humans; Imaging, Three-Dimensional; Spine; Tomography, X-Ray Computed; X-Rays