Results 1 - 3 of 3
1.
BMC Med Inform Decis Mak; 23(1): 274, 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38031040

ABSTRACT

BACKGROUND: Point-of-care lung ultrasound (LUS) allows real-time patient scanning to help diagnose pleural effusion (PE) and plan further investigation and treatment. LUS typically requires training and experience for the clinician to interpret the images accurately. To address this limitation, we previously demonstrated a deep-learning model capable of detecting the presence of PE on LUS with greater than 90% accuracy when compared to an experienced LUS operator. METHODS: This follow-up study aimed to develop a deep-learning model that provides segmentations of PE in LUS. Three thousand and forty-one LUS images from twenty-four patients diagnosed with PE were selected for this study. Two LUS experts provided the ground truth for training by reviewing and segmenting the images. The algorithm was then trained using ten-fold cross-validation. Once training was completed, the algorithm segmented images from a separate subset of patients. RESULTS: Comparing the segmentations, we demonstrated an average Dice Similarity Coefficient (DSC) of 0.70 between the algorithm and the experts. In contrast, an average DSC of 0.61 was observed between the experts themselves. CONCLUSION: In summary, the trained algorithm achieved an average DSC for PE segmentation comparable to the agreement between the experts. This represents a promising step toward developing a computational tool for accurately augmenting PE diagnosis and treatment.
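The comparison above hinges on the Dice Similarity Coefficient. As a minimal sketch (not the authors' code; the function and example masks are invented for illustration), the metric can be computed for two binary segmentation masks as follows:

import numpy as np

def dice_similarity(mask_a, mask_b):
    # DSC = 2|A intersect B| / (|A| + |B|) for binary masks of equal shape
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Example: algorithm vs. expert masks for one hypothetical LUS frame
algo = np.array([[0, 1, 1], [0, 1, 0]])
expert = np.array([[0, 1, 0], [0, 1, 0]])
print(dice_similarity(algo, expert))  # 0.8

A DSC of 1.0 indicates identical masks and 0.0 indicates no overlap, so an algorithm-to-expert score of 0.70 exceeding the expert-to-expert score of 0.61 places the model within the range of inter-observer variability.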


Subjects
Deep Learning; Pleural Effusion; Humans; Follow-Up Studies; Algorithms; Lung/diagnostic imaging; Pleural Effusion/diagnostic imaging
2.
Sci Rep; 13(1): 21716, 2023 Dec 7.
Article in English | MEDLINE | ID: mdl-38066019

ABSTRACT

A baseline image, acquired via magnetic resonance imaging (MRI) or computed tomography (CT), is usually captured as a reference before medical procedures such as thoracentesis. In these procedures, ultrasound (US) imaging is often employed to guide needle placement during thoracentesis or to provide image guidance during minimally invasive spine surgery (MISS) in the thoracic region. Following the procedure, a post-procedure image is acquired to monitor and evaluate the patient's progress. Currently, no real-time guidance and tracking capability allows a surgeon to operate using the familiarity of the reference imaging modality. In this work, we propose real-time volumetric indirect registration using a deep-learning approach, in which the fusion of multiple imaging modalities allows surgical procedures to be guided and tracked with US while the resultant changes are displayed in a clinically familiar reference modality (MRI). The deep-learning method employs a series of generative adversarial networks (GANs), specifically CycleGAN, to perform unsupervised image-to-image translation. This process produces spatially aligned US and MRI volumes corresponding to their respective input volumes (MRI and US) of the thoracic spine. In this preliminary proof-of-concept study, the focus was on the T9 vertebra. A clinical expert performed anatomical validation of randomly selected real and generated volumes of the T9 vertebra, scoring each volume 0 (conclusive anatomical structures present) or 1 (inconclusive anatomical structures present) to check whether the volumes are anatomically accurate. The Dice and Overlap metrics show how accurately the shape of T9 matches the real volumes and how consistent the shape of T9 is across generated volumes. The average Dice, Overlap, and Accuracy for clearly labelling all anatomical structures of the T9 vertebra are approximately 80% across the board.
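The translation step described above rests on CycleGAN's cycle-consistency objective: a volume translated to the other modality and back should reconstruct the original. The sketch below assumes PyTorch and placeholder generator modules (G_us2mri, G_mri2us are hypothetical names, not the authors' networks); it illustrates the shape of the loss term only:

import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_us2mri, G_mri2us, us_vol, mri_vol, lam=10.0):
    # US -> fake MRI -> reconstructed US, and MRI -> fake US -> reconstructed MRI;
    # both reconstructions should match their original inputs under an L1 penalty.
    rec_us = G_mri2us(G_us2mri(us_vol))
    rec_mri = G_us2mri(G_mri2us(mri_vol))
    return lam * (l1(rec_us, us_vol) + l1(rec_mri, mri_vol))

In the full CycleGAN objective this term is added to two adversarial losses, one per translation direction; the weighting lam = 10.0 follows the original CycleGAN paper and is an assumption here, not a value reported in this abstract.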


Subjects
Image Processing, Computer-Assisted; Ultrasound; Humans; Image Processing, Computer-Assisted/methods; Proof of Concept Study; Ultrasonography; Magnetic Resonance Imaging/methods
3.
Sci Rep; 12(1): 17581, 2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36266463

ABSTRACT

Our automated deep-learning approach identifies consolidation/collapse in LUS images to aid in identifying late stages of COVID-19-induced pneumonia, where consolidation/collapse is one of the possible associated pathologies. A common challenge in training such models is that annotating each frame of an ultrasound video requires high labelling effort, which in practice becomes prohibitive for large ultrasound datasets. To understand the impact of various degrees of labelling precision, we compare labelling strategies used to train fully supervised models (frame-based method, higher labelling effort) and inaccurately supervised models (video-based methods, lower labelling effort), both of which yield binary predictions for LUS videos on a frame-by-frame level. We moreover introduce a novel sampled quaternary method, which randomly samples only 10% of the LUS video frames and subsequently assigns (ordinal) categorical labels to all frames in the video based on the fraction of positively annotated samples. This method outperformed the inaccurately supervised video-based method and, more surprisingly, the fully supervised frame-based approach on metrics such as precision-recall area under curve (PR-AUC) and F1 score, despite being a form of inaccurate learning. We argue that our video-based method is more robust to label noise and mitigates overfitting in a manner similar to label smoothing. The algorithm was trained using ten-fold cross-validation, which resulted in a PR-AUC score of 73% and an accuracy of 89%. While the efficacy of the sampled quaternary method must still be verified on a larger consolidation/collapse dataset, it significantly lowers the labelling effort, and the proposed classifier is clinically comparable with trained experts' performance.
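As a hedged reading of the sampled quaternary labelling step (the 10% sampling rate comes from the abstract, but the function name and the bin edges below are illustrative assumptions, not the paper's exact scheme), the per-video labelling might look like this:

import random

def sampled_quaternary_label(n_frames, annotate, sample_frac=0.10):
    # annotate(i) returns 1 if sampled frame i shows pathology, else 0.
    k = max(1, int(sample_frac * n_frames))
    sampled = random.sample(range(n_frames), k)
    pos_frac = sum(annotate(i) for i in sampled) / k
    # Map the positive fraction to one of four ordinal classes;
    # these thresholds are illustrative, not taken from the paper.
    if pos_frac == 0.0:
        return 0
    elif pos_frac < 1 / 3:
        return 1
    elif pos_frac < 2 / 3:
        return 2
    return 3  # the resulting label is assigned to every frame in the video

Only the k sampled frames per video need manual review, which is where the savings over full frame-by-frame annotation come from.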


Subjects
COVID-19; Deep Learning; Humans; COVID-19/diagnostic imaging; Ultrasonography/methods; Algorithms; Lung/diagnostic imaging