Results 1 - 7 of 7
1.
Article in English | MEDLINE | ID: mdl-39115609

ABSTRACT

PURPOSE: Commonly employed in polyp segmentation, single-image UNet architectures lack the temporal insight clinicians gain from video data when diagnosing polyps. To mirror clinical practice more faithfully, our proposed solution, PolypNextLSTM, leverages video-based deep learning, harnessing temporal information for superior segmentation performance with the least parameter overhead, making it potentially suitable for edge devices. METHODS: PolypNextLSTM employs a UNet-like structure with ConvNext-Tiny as its backbone, strategically omitting the last two layers to reduce parameter overhead. Our temporal fusion module, a Convolutional Long Short Term Memory (ConvLSTM), effectively exploits temporal features. Our primary novelty lies in PolypNextLSTM, which stands out as the leanest and fastest model evaluated, surpassing the performance of five state-of-the-art image-based and five video-based deep learning models. The evaluation on the SUN-SEG dataset spans easy-to-detect and hard-to-detect polyp scenarios, along with videos containing challenging artefacts like fast motion and occlusion. RESULTS: Comparison against five image-based and five video-based models demonstrates PolypNextLSTM's superiority, achieving a Dice score of 0.7898 on the hard-to-detect polyp test set, surpassing image-based PraNet (0.7519) and video-based PNS+ (0.7486). Notably, our model excels in videos featuring complex artefacts such as ghosting and occlusion. CONCLUSION: PolypNextLSTM, integrating a pruned ConvNext-Tiny with a ConvLSTM for temporal fusion, not only exhibits superior segmentation performance but also achieves the highest frames per second among the evaluated models. Code can be found here: https://github.com/mtec-tuhh/PolypNextLSTM.
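As an illustration of the fusion scheme described above (not the authors' released code), the following PyTorch sketch pairs a truncated ConvNeXt-Tiny feature extractor with a single ConvLSTM cell applied across frames; the exact cut point, hidden channel count, and the placeholder 1x1 decoder head are assumptions.

```python
# Illustrative sketch only (not the authors' code): truncated ConvNeXt-Tiny features
# fused over time by a ConvLSTM cell, followed by a placeholder 1x1 decoder head.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import convnext_tiny


class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        # One convolution produces the input, forget, output and candidate gates.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class PolypVideoSegmenter(nn.Module):
    def __init__(self, hidden_ch=192):
        super().__init__()
        # Keep only the early ConvNeXt-Tiny stages (the paper drops the last two
        # stages; the exact cut point here is an assumption). Output: 384 channels.
        self.backbone = convnext_tiny(weights=None).features[:6]
        self.temporal = ConvLSTMCell(384, hidden_ch)
        self.head = nn.Conv2d(hidden_ch, 1, kernel_size=1)  # placeholder decoder

    def forward(self, clip):  # clip: (B, T, 3, H, W)
        b, t, _, h, w = clip.shape
        feats = self.backbone(clip.flatten(0, 1)).unflatten(0, (b, t))
        hs = feats.new_zeros(b, self.temporal.hidden_ch, feats.shape[-2], feats.shape[-1])
        cs = torch.zeros_like(hs)
        logits = []
        for step in range(t):
            hs, cs = self.temporal(feats[:, step], (hs, cs))
            logits.append(self.head(hs))
        masks = torch.stack(logits, dim=1)  # (B, T, 1, H/16, W/16) logits per frame
        return F.interpolate(masks.flatten(0, 1), size=(h, w)).unflatten(0, (b, t))
```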

2.
Int J Comput Assist Radiol Surg ; 19(2): 223-231, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37479942

ABSTRACT

PURPOSE: Paranasal anomalies are commonly discovered during routine radiological screenings and can present with a wide range of morphological features. This diversity can make it difficult for convolutional neural networks (CNNs) to accurately classify these anomalies, especially when working with limited datasets. Additionally, current approaches to paranasal anomaly classification are constrained to identifying a single anomaly at a time. These challenges necessitate further research and development in this area. METHODS: We investigate the feasibility of using a 3D CNN to classify healthy maxillary sinuses (MS) and MS with polyps or cysts. Accurately localizing the relevant MS volume within larger head and neck Magnetic Resonance Imaging (MRI) scans can be difficult, so we develop a strategy built around a novel sampling technique that not only effectively localizes the relevant MS volume but also enlarges the training dataset and improves classification results. Additionally, we employ a Multiple Instance Ensembling (MIE) prediction method to further boost classification performance. RESULTS: With sampling and MIE, we observe a consistent improvement in the classification performance of all 3D ResNet and 3D DenseNet architectures, with average AUPRC percentage increases of 21.86 ± 11.92% and 4.27 ± 5.04% from sampling alone, and 28.86 ± 12.80% and 9.85 ± 4.02% from sampling combined with MIE, respectively. CONCLUSION: Sampling and MIE can be effective techniques for improving the generalizability of CNNs for paranasal anomaly classification. We demonstrate the feasibility of classifying anomalies in the MS and propose a data-enlarging strategy through sampling alongside a novel MIE strategy that proves beneficial for paranasal anomaly classification in the MS.
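A minimal sketch of the two ideas named in the methods, under stated assumptions: random sub-volume sampling around a coarse maxillary-sinus location (which also enlarges the training set), and multiple-instance-style ensembling by averaging predictions over several sampled sub-volumes at inference. The crop size, jitter range, and `model` interface are hypothetical, not the paper's implementation.

```python
# Illustrative sketch only: random sub-volume sampling around a coarse MS location,
# and averaging predictions over several sampled sub-volumes at inference (MIE).
import numpy as np
import torch


def sample_ms_volume(scan, centre, crop=(64, 64, 64), jitter=8, rng=None):
    """Crop a sub-volume around `centre`, randomly shifted by up to `jitter` voxels."""
    rng = rng or np.random.default_rng()
    offset = rng.integers(-jitter, jitter + 1, size=3)
    lo = [int(c + o - s // 2) for c, o, s in zip(centre, offset, crop)]
    sl = tuple(slice(max(0, l), max(0, l) + s) for l, s in zip(lo, crop))
    return scan[sl]


@torch.no_grad()
def mie_predict(model, scan, centre, n_instances=10):
    """Average the model's probabilities over several randomly sampled sub-volumes."""
    probs = []
    for _ in range(n_instances):
        vol = torch.as_tensor(sample_ms_volume(scan, centre), dtype=torch.float32)
        probs.append(torch.sigmoid(model(vol[None, None])))  # assumed (B, C, D, H, W) input
    return torch.stack(probs).mean(0)
```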


Subjects
Maxillary Sinus; Neural Networks, Computer; Humans; Maxillary Sinus/diagnostic imaging; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Head
3.
Laryngoscope ; 134(9): 3927-3934, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38520698

ABSTRACT

OBJECTIVE: Computer-aided diagnostics (CAD) systems can automate the differentiation of maxillary sinuses (MS) with and without opacification, simplifying the typically laborious process and aiding clinical insight discovery within large cohorts. METHODS: This study uses the Hamburg City Health Study (HCHS), a large, prospective, long-term, population-based cohort study of participants between 45 and 74 years of age. We develop a CAD system using an ensemble of 3D Convolutional Neural Networks (CNNs) to analyze cranial MRIs, distinguishing MS with opacifications (polyps, cysts, mucosal thickening) from MS without opacifications. The system is used to correlate the presence or absence of MS opacifications with clinical data (smoking, alcohol, BMI, asthma, bronchitis, sex, age, leukocyte count, C-reactive protein, allergies). RESULTS: The evaluation metrics of the CAD system (area under the receiver operating characteristic curve: 0.95; sensitivity: 0.85; specificity: 0.90) demonstrate the effectiveness of our approach. The MS-with-opacification group exhibited higher alcohol consumption, higher BMI, and a higher incidence of intrinsic and extrinsic asthma. Male participants had a higher prevalence of MS opacifications. Participants with MS opacifications had a higher incidence of hay fever and house dust allergy but a lower incidence of bee/wasp venom allergy. CONCLUSION: The study demonstrates a 3D CNN's ability to distinguish MS with and without opacifications, improving automated diagnosis and aiding the correlation of clinical data in population studies. LEVEL OF EVIDENCE: 3 Laryngoscope, 134:3927-3934, 2024.
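The ensembling step can be illustrated with a short, hedged sketch: several independently trained 3D CNN classifiers vote by averaging their predicted opacification probabilities. The r3d_18 backbone and single-logit head are assumptions for illustration, not the study's exact architecture.

```python
# Minimal sketch of the ensembling idea only: average the opacification probabilities
# of several independently trained 3D CNNs (backbone choice is an assumption).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


def build_classifier():
    net = r3d_18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, 1)  # binary: opacified vs. not opacified
    return net


@torch.no_grad()
def ensemble_probability(models, volume):
    """`volume`: (B, 3, D, H, W) crop of the maxillary sinus region from the MRI."""
    probs = [torch.sigmoid(m(volume)) for m in models]
    return torch.stack(probs).mean(0)


models = [build_classifier().eval() for _ in range(3)]  # trained weights would be loaded here
```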


Subjects
Diagnosis, Computer-Assisted; Magnetic Resonance Imaging; Maxillary Sinus; Humans; Male; Middle Aged; Female; Aged; Prospective Studies; Maxillary Sinus/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Paranasal Sinus Diseases/diagnostic imaging; Paranasal Sinus Diseases/epidemiology; Paranasal Sinus Diseases/diagnosis; Neural Networks, Computer; Sensitivity and Specificity
4.
Int J Comput Assist Radiol Surg ; 19(9): 1713-1721, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38850438

ABSTRACT

PURPOSE: Paranasal anomalies, frequently identified in routine radiological screenings, exhibit diverse morphological characteristics. Because of this diversity, supervised learning methods require a large labelled dataset covering diverse anomaly morphologies. Self-supervised learning (SSL) can be used to learn representations from unlabelled data. However, there are no SSL methods designed for the downstream task of classifying paranasal anomalies in the maxillary sinus (MS). METHODS: Our approach uses a 3D convolutional autoencoder (CAE) trained in an unsupervised anomaly detection (UAD) framework. Initially, we train the 3D CAE to reduce reconstruction errors when reconstructing normal MS images. This CAE is then applied to an unlabelled dataset to generate coarse anomaly locations in the form of residual MS images. Following this, a 3D convolutional neural network (CNN) reconstructs these residual images, which forms our SSL task. Lastly, we fine-tune the encoder part of the 3D CNN on a labelled dataset of normal and anomalous MS images. RESULTS: The proposed SSL technique exhibits superior performance compared to existing generic self-supervised methods, especially in scenarios with limited annotated data. When trained on just 10% of the annotated dataset, our method achieves an area under the precision-recall curve (AUPRC) of 0.79 for the downstream classification task. This performance surpasses other methods, with BYOL attaining an AUPRC of 0.75, SimSiam at 0.74, SimCLR at 0.73 and masked autoencoding using SparK at 0.75. CONCLUSION: A self-supervised learning approach that inherently focuses on localizing paranasal anomalies proves advantageous, particularly when the downstream task involves differentiating normal from anomalous maxillary sinuses. Access our code at https://github.com/mtec-tuhh/self-supervised-paranasal-anomaly.
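A hedged sketch of the pretext task described above: a 3D CAE trained on normal sinuses yields residual images, and a second network learns to predict those residuals from the raw volume before its encoder is fine-tuned for classification. The function names and L1 losses are illustrative assumptions; the released implementation is at the repository linked above.

```python
# Hedged sketch of the residual-reconstruction pretext task (not the released code).
import torch
import torch.nn.functional as F


def residual_volume(cae, volume):
    """Coarse anomaly map: reconstruction error of a CAE trained on normal MS volumes."""
    with torch.no_grad():
        return (volume - cae(volume)).abs()


def ssl_pretext_step(ssl_net, cae, unlabelled_batch, optimizer):
    """One optimisation step: predict the CAE residual from the raw volume."""
    target = residual_volume(cae, unlabelled_batch)
    pred = ssl_net(unlabelled_batch)
    loss = F.l1_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After pretraining, the encoder of `ssl_net` would be fine-tuned on the small
# labelled set of normal vs. anomalous maxillary sinus volumes.
```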


Subjects
Maxillary Sinus; Supervised Machine Learning; Humans; Maxillary Sinus/diagnostic imaging; Maxillary Sinus/abnormalities; Neural Networks, Computer; Imaging, Three-Dimensional/methods; Tomography, X-Ray Computed/methods
5.
Med Image Anal ; 99: 103307, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39303447

ABSTRACT

Automatic analysis of colonoscopy images has been an active field of research motivated by the importance of early detection of precancerous polyps. However, detecting polyps during a live examination can be challenging due to factors such as variation in skill and experience among endoscopists, lack of attentiveness, and fatigue, leading to a high polyp miss rate. There is therefore a need for an automated system that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution to this challenge, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving the accuracy of diagnosis and enhancing treatment. In addition to an algorithm's accuracy, transparency and interpretability are crucial for explaining the whys and hows of its predictions. Furthermore, conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data with closed-source or proprietary software, and the methods lack reproducibility. Therefore, to promote the development of efficient and transparent methods, we organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image Segmentation (MedAI 2021)" competitions. The Medico 2020 challenge received submissions from 17 teams, and the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the possibility of translating such methods into the clinic. Our analysis revealed that participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps) that are frequently missed during routine clinical examination. For the instrument segmentation task, the best team obtained a mean Intersection over Union of 0.9364. For the transparency task, a multi-disciplinary team including expert gastroenterologists assessed each submission and evaluated each team on open-source practices, failure-case analysis, ablation studies, and the usability and understandability of its evaluations, to gain a deeper understanding of the models' credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through this comprehensive analysis of the challenges, we not only highlight the advancements in polyp and surgical instrument segmentation but also encourage subjective evaluation for building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of these methods, reduce the cancer burden, and improve patient care.
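For reference, the two overlap metrics quoted above follow their standard definitions on binary masks; the sketch below is a generic illustration, not the challenges' evaluation code.

```python
# Standard Dice coefficient and Intersection over Union on binary masks.
import numpy as np


def dice_coefficient(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def intersection_over_union(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```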

6.
Sci Rep ; 13(1): 10120, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37344565

ABSTRACT

Lung cancer is a serious disease responsible for millions of deaths every year. Early stages of lung cancer can manifest as pulmonary nodules. To assist radiologists in reducing the number of overlooked nodules and to increase detection accuracy in general, automatic detection algorithms have been proposed. Deep learning methods in particular are promising. However, obtaining clinically relevant results remains challenging. While a variety of approaches have been proposed for general-purpose object detection, these are typically evaluated on benchmark datasets. Achieving competitive performance on specific real-world problems like lung nodule detection typically requires careful analysis of the problem at hand and the selection and tuning of suitable deep learning models. We present a systematic comparison of state-of-the-art object detection algorithms for the task of lung nodule detection. In this regard, we address the critical aspect of class imbalance and demonstrate a data augmentation approach as well as transfer learning to boost performance. We illustrate how this analysis and a combination of multiple architectures result in state-of-the-art performance for lung nodule detection, as demonstrated by the proposed model winning the detection track of the Node21 competition. The code for our approach is available at https://github.com/FinnBehrendt/node21-submit.
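As a generic illustration of one common way to address the class imbalance mentioned above (not the winning Node21 pipeline, whose code is in the linked repository), nodule-containing images can be oversampled with a weighted sampler:

```python
# Generic oversampling sketch for class imbalance; the label layout is assumed.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

# labels: 1 for images containing a nodule, 0 for nodule-free images (assumed format).
labels = torch.tensor([0, 0, 0, 1, 0, 1, 0, 0])
class_counts = torch.bincount(labels)
weights = 1.0 / class_counts[labels].float()   # rarer class gets a higher sampling weight
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# dataset = ...  # any torch-style Dataset of images and labels
# loader = DataLoader(dataset, batch_size=4, sampler=sampler)
```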


Subjects
Deep Learning; Lung Neoplasms; Multiple Pulmonary Nodules; Solitary Pulmonary Nodule; Humans; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Lung; Multiple Pulmonary Nodules/diagnostic imaging; Solitary Pulmonary Nodule/diagnostic imaging
7.
Article in English | MEDLINE | ID: mdl-38082740

ABSTRACT

Needle positioning is essential for various medical applications such as epidural anaesthesia. Physicians rely on their instincts while navigating the needle in the epidural space. Identifying the tissue structures at the needle tip may therefore be helpful to the physician, as it can provide additional feedback during the insertion process. To this end, we propose a deep neural network that classifies tissues from the phase and intensity data of complex optical coherence tomography (OCT) signals acquired at the needle tip. We investigate the performance of the deep neural network in a limited labelled dataset scenario and propose a novel contrastive pretraining strategy that learns an invariant representation for phase and intensity data. We show that with 10% of the training set, our proposed pretraining strategy helps the model achieve an F1 score of 0.84±0.10, whereas the model achieves an F1 score of 0.60±0.07 without it. Further, we analyse the individual contributions of phase and intensity to tissue classification.
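A hedged sketch in the spirit of the described pretraining: an NT-Xent-style contrastive loss that pulls together embeddings of the phase and intensity views of the same OCT sample and pushes apart the other pairs in the batch. The encoders, embedding dimension, and temperature are assumptions, not the paper's exact setup.

```python
# Hedged sketch of a symmetric InfoNCE loss over phase and intensity embeddings.
import torch
import torch.nn.functional as F


def phase_intensity_contrastive_loss(z_phase, z_intensity, temperature=0.1):
    """z_phase, z_intensity: (B, D) embeddings of the two views of the same samples."""
    z_p = F.normalize(z_phase, dim=1)
    z_i = F.normalize(z_intensity, dim=1)
    logits = z_p @ z_i.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_p.size(0))           # matching pairs lie on the diagonal
    # Symmetric: phase -> intensity and intensity -> phase directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```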


Subjects
Anesthesia, Epidural; Tomography, Optical Coherence; Learning; Needles; Neural Networks, Computer