1.
Med Image Underst Anal ; 14122: 48-63, 2024.
Article in English | MEDLINE | ID: mdl-39156493

ABSTRACT

Acquiring properly annotated data is expensive in the medical field as it requires experts, time-consuming protocols, and rigorous validation. Active learning attempts to minimize the need for large annotated samples by actively sampling the most informative examples for annotation. These examples contribute significantly to improving the performance of supervised machine learning models, and thus, active learning can play an essential role in selecting the most appropriate information in deep learning-based diagnosis, clinical assessments, and treatment planning. Although some existing works have proposed methods for sampling the best examples for annotation in medical image analysis, they are not task-agnostic and do not use multimodal auxiliary information in the sampler, which has the potential to increase robustness. Therefore, in this work, we propose a Multimodal Variational Adversarial Active Learning (M-VAAL) method that uses auxiliary information from additional modalities to enhance the active sampling. We applied our method to two datasets: i) brain tumor segmentation and multi-label classification using the BraTS2018 dataset, and ii) chest X-ray image classification using the COVID-QU-Ex dataset. Our results show a promising direction toward data-efficient learning under limited annotations.
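
As a rough illustration of the active-sampling step, the sketch below selects the unlabeled samples that a VAAL-style discriminator judges least "labeled-like"; the tiny encoder and discriminator here are stand-ins, not the M-VAAL networks, and the data are random placeholders.

```python
# A minimal sketch of VAAL-style active sampling (assumption: a trained
# VAE encoder and a discriminator estimating the probability that a
# latent code came from the labeled pool; both are stand-ins here).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 16))
discriminator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

def select_for_annotation(unlabeled_images, budget):
    """Return indices of the `budget` samples the discriminator finds
    least 'labeled-like', i.e., the most informative to annotate."""
    with torch.no_grad():
        z = encoder(unlabeled_images)            # latent codes
        p_labeled = discriminator(z).squeeze(1)  # P(sample is from labeled pool)
    return torch.argsort(p_labeled)[:budget]     # lowest scores first

pool = torch.rand(100, 1, 32, 32)  # stand-in unlabeled pool
print(select_for_annotation(pool, budget=8))
```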

3.
J Med Imaging (Bellingham) ; 10(4): 045002, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37649957

ABSTRACT

Purpose: Medical technology for minimally invasive surgery has undergone a paradigm shift with the introduction of robot-assisted surgery. However, tracking the position of surgical tools in a surgical scene is very difficult, so accurately detecting and identifying the tools is crucial. This task can be aided by deep learning-based semantic segmentation of surgical video frames. Furthermore, due to the limited working and viewing areas of these surgical instruments, there is a higher chance of complications from tissue injuries (e.g., tissue scars and tears). Approach: With the aid of digital inpainting algorithms, we present an application that uses image segmentation to remove surgical instruments from laparoscopic/endoscopic video. We employ a modified U-Net architecture (U-NetPlus) to segment the surgical instruments. It consists of a redesigned decoder and a pre-trained VGG11 or VGG16 encoder. The decoder was modified by substituting an up-sampling operation based on nearest-neighbor interpolation for the transposed convolution operation. Because the interpolation weights do not need to be learned to perform upsampling, this substitution eliminates the artifacts generated by the transposed convolution. In addition, we use a very fast and adaptable data augmentation technique to further enhance performance. The instrument segmentation mask is filled in (i.e., inpainted) by the tool removal algorithms using the previously acquired tool segmentation masks and either previous instrument-containing frames or instrument-free reference frames. Results: We have shown the effectiveness of the proposed surgical tool segmentation/removal algorithms on robotic instrument datasets from the MICCAI 2015 and 2017 EndoVis Challenges. On the MICCAI 2017 challenge dataset, our U-NetPlus architecture achieved a 90.20% Dice for binary segmentation, a 76.26% Dice for instrument part segmentation, and a 46.07% Dice for instrument type (i.e., all instruments) segmentation, outperforming earlier techniques tested on these data. In addition, we demonstrated the successful execution of the tool removal algorithm on tool-free surgical videos into which moving surgical tools were artificially inserted. Conclusions: Our application successfully separates and eliminates the surgical tool, revealing a view of the background tissue otherwise hidden by the tool, and produces results that are visually similar to the actual data.
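
The decoder substitution described above can be sketched in a few lines of PyTorch; the layer widths below are illustrative, not the paper's exact U-NetPlus configuration.

```python
# A sketch of the decoder modification: fixed nearest-neighbor upsampling
# followed by convolution in place of a learned transposed convolution.
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Non-learned nearest-neighbor interpolation avoids the
        # checkerboard artifacts transposed convolutions can produce.
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)  # U-Net skip connection
        return self.conv(x)

block = UpBlock(in_ch=256 + 128, out_ch=128)
x = torch.rand(1, 256, 16, 16)       # bottleneck features
skip = torch.rand(1, 128, 32, 32)    # encoder skip features
print(block(x, skip).shape)          # torch.Size([1, 128, 32, 32])
```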

4.
J Med Imaging (Bellingham) ; 10(4): 045001, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37457791

ABSTRACT

Purpose: Stereo matching methods that enable depth estimation are crucial for visualization enhancement applications in computer-assisted surgery. Learning-based stereo matching methods have shown great promise in making accurate predictions on laparoscopic images. However, they require a large amount of training data, and their performance may degrade under domain shifts. Approach: Maintaining robustness while improving the accuracy of learning-based methods remains an open problem. To overcome these limitations, we propose a disparity refinement framework, consisting of a local and a global disparity refinement method, to improve the results of learning-based stereo matching methods in a cross-domain setting. The learning-based stereo matching methods are pre-trained on a large public dataset of natural images and tested on two datasets of laparoscopic images. Results: Qualitative and quantitative results suggest that our framework can effectively refine noise-corrupted disparity maps on an unseen dataset, without compromising prediction accuracy when the network already generalizes well. Conclusions: Our disparity refinement framework can work with learning-based methods to achieve robust and accurate disparity prediction. As no large laparoscopic dataset for training learning-based methods yet exists and the generalization ability of networks remains to be improved, incorporating the proposed framework into existing networks will improve their overall accuracy and robustness in depth estimation.
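
As a generic illustration of local disparity refinement (the paper's actual local and global methods are more involved), the sketch below invalidates pixels that fail a left-right consistency check and fills them from a median-filtered map.

```python
# A simplified sketch of a local disparity refinement pattern:
# invalidate inconsistent pixels, then smooth over them.
import numpy as np
from scipy.ndimage import median_filter

def refine_disparity(disp_left, disp_right, tol=1.0):
    """Invalidate pixels failing the left-right consistency check,
    then fill them from a median-filtered version of the map."""
    h, w = disp_left.shape
    cols = np.arange(w)[None, :].repeat(h, axis=0)
    # Where does each left-image pixel land in the right image?
    right_cols = np.clip((cols - disp_left).astype(int), 0, w - 1)
    disp_reproj = np.take_along_axis(disp_right, right_cols, axis=1)
    inconsistent = np.abs(disp_left - disp_reproj) > tol
    refined = disp_left.copy()
    refined[inconsistent] = median_filter(disp_left, size=7)[inconsistent]
    return refined

d_l = np.random.rand(64, 96) * 32   # stand-in disparity maps
d_r = np.random.rand(64, 96) * 32
print(refine_disparity(d_l, d_r).shape)
```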

5.
Article in English | MEDLINE | ID: mdl-37124050

ABSTRACT

Ultrasound (US) elastography is a technique that enables non-invasive quantification of material properties, such as stiffness, from ultrasound images of deforming tissue. The displacement field is measured from the US images using image matching algorithms, and then a parameter, often the elastic modulus, is inferred or subsequently measured to identify potential tissue pathologies, such as cancerous tissues. Several traditional inverse problem approaches, loosely grouped as either direct or iterative, have been explored to estimate the elastic modulus. Nevertheless, the iterative techniques are typically slow and computationally intensive, while the direct techniques, although more computationally efficient, are very sensitive to measurement noise and require the full displacement field data (i.e., both vector components). In this work, we propose a deep learning approach to solve the inverse problem and recover the spatial distribution of the elastic modulus from one component of the US measured displacement field. The neural network used here is trained using only simulated data obtained via a forward finite element (FE) model with known variations in the modulus field, thus avoiding the reliance on large measurement data sets that may be challenging to acquire. A U-net based neural network is then used to predict the modulus distribution (i.e., solve the inverse problem) using the simulated forward data as input. We quantitatively evaluated our trained model with a simulated test dataset and observed a 0.0018 mean squared error (MSE) and a 1.14% mean absolute percent error (MAPE) between the reconstructed and ground truth elastic modulus. Moreover, we also qualitatively compared the output of our U-net model to experimentally measured displacement data acquired using a US elastography tissue-mimicking calibration phantom.
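
The two evaluation metrics quoted above can be computed as below; the modulus maps here are random stand-ins, not the paper's data.

```python
# A sketch of the reported metrics: mean squared error (MSE) and mean
# absolute percent error (MAPE) between reconstructed and ground-truth
# elastic modulus maps.
import numpy as np

def mse(pred, truth):
    return np.mean((pred - truth) ** 2)

def mape(pred, truth):
    return 100.0 * np.mean(np.abs((pred - truth) / truth))

truth = np.random.uniform(1.0, 5.0, size=(128, 128))  # ground-truth modulus
pred = truth + np.random.normal(0, 0.05, size=truth.shape)
print(f"MSE = {mse(pred, truth):.4f}, MAPE = {mape(pred, truth):.2f}%")
```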

6.
Article in English | MEDLINE | ID: mdl-37123015

ABSTRACT

Label noise is inevitable in medical image databases developed for deep learning, due to the inter-observer variability caused by the different levels of expertise of the experts annotating the images and, in some cases, by the automated methods that generate labels from medical reports. It is known that incorrect annotations or label noise can degrade the actual performance of supervised deep learning models and can bias the model's evaluation. The existing literature shows that noise in one class has minimal impact on the model's performance for another class in natural image classification problems, where different target classes have relatively distinct shapes and share minimal visual cues for knowledge transfer among the classes. However, it is not clear how class-dependent label noise affects the model's performance on medical images, for which different output classes can be difficult to distinguish even for experts, and there is a high possibility of knowledge transfer across classes during training. We hypothesize that for medical image classification tasks where the different classes share a very similar shape and differ only in texture, noisy labels in one class might affect the performance across other classes, unlike the case when the target classes have different shapes and are visually distinct. In this paper, we study this hypothesis using two publicly available datasets: a 2D organ classification dataset with visually distinct target organ classes, and a histopathology image classification dataset where the target classes look very similar visually. Our results show that label noise in one class has a much higher impact on the model's performance on other classes for the histopathology dataset than for the organ dataset.
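
A class-dependent noise model of the kind studied here can be simulated by corrupting only one class's labels, as in this sketch (class indices, counts, and rates are placeholders).

```python
# A sketch of injecting class-dependent label noise: corrupt only the
# labels of one source class at a chosen rate, leaving others untouched.
import numpy as np

def add_class_dependent_noise(labels, source_class, noise_rate, n_classes, rng=None):
    rng = rng or np.random.default_rng(0)
    noisy = labels.copy()
    idx = np.where(labels == source_class)[0]
    flip = rng.random(idx.size) < noise_rate
    for i in idx[flip]:
        # Reassign to a uniformly chosen *other* class.
        choices = [c for c in range(n_classes) if c != source_class]
        noisy[i] = rng.choice(choices)
    return noisy

labels = np.random.default_rng(1).integers(0, 4, size=1000)
noisy = add_class_dependent_noise(labels, source_class=2, noise_rate=0.3, n_classes=4)
print((labels != noisy).mean())  # overall corruption fraction
```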

7.
J Med Imaging (Bellingham) ; 10(5): 051808, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37235130

ABSTRACT

Purpose: High-resolution late gadolinium enhanced (LGE) cardiac magnetic resonance imaging (MRI) volumes are difficult to acquire due to the limit on the maximal breath-hold time achievable by the patient. This results in anisotropic 3D volumes of the heart with high in-plane resolution but low through-plane resolution. Thus, we propose a 3D convolutional neural network (CNN) approach to improve the through-plane resolution of cardiac LGE-MRI volumes. Approach: We present a 3D CNN-based framework with two branches: a super-resolution branch that learns the mapping between low-resolution and high-resolution LGE-MRI volumes, and a gradient branch that learns the mapping between the gradient map of low-resolution LGE-MRI volumes and the gradient map of their high-resolution counterparts. The gradient branch provides structural guidance to the CNN-based super-resolution framework. To assess the performance of the proposed framework, we train two CNN models with and without gradient guidance, namely, the dense deep back-projection network (DBPN) and the enhanced deep super-resolution network. We train and evaluate our method on the 2018 atrial segmentation challenge dataset. Additionally, we evaluate the trained models on the left atrial and scar quantification and segmentation challenge 2022 dataset to assess their generalization ability. Finally, we investigate the effect of the proposed super-resolution framework on the 3D segmentation of the left atrium (LA) from these cardiac LGE-MRI image volumes. Results: Experimental results demonstrate that our proposed CNN method with gradient guidance consistently outperforms bicubic interpolation and the CNN models without gradient guidance. Furthermore, the segmentation results, evaluated using the Dice score, obtained from the super-resolved images generated by our method are superior to those obtained from images generated by bicubic interpolation (p<0.01) and by the CNN models without gradient guidance (p<0.05). Conclusion: The presented CNN-based super-resolution method with gradient guidance improves the through-plane resolution of LGE-MRI volumes, and the structural guidance provided by the gradient branch can aid the 3D segmentation of cardiac chambers, such as the LA, from 3D LGE-MRI images.
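
The gradient maps fed to the gradient branch can be illustrated with a simple finite-difference gradient magnitude; whether the paper uses exactly this operator is an assumption.

```python
# A sketch of a gradient-map input for the gradient branch: a 3D
# central-difference gradient magnitude of the LGE-MRI volume.
import numpy as np

def gradient_map(volume):
    """Central-difference gradient magnitude of a 3D image volume."""
    gz, gy, gx = np.gradient(volume.astype(np.float32))
    return np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)

volume = np.random.rand(12, 128, 128).astype(np.float32)  # stand-in LGE volume
print(gradient_map(volume).shape)  # (12, 128, 128)
```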

8.
Int J Comput Assist Radiol Surg ; 18(6): 1025-1032, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37079248

ABSTRACT

PURPOSE: In laparoscopic liver surgery, preoperative information can be overlaid onto the intra-operative scene by registering a 3D preoperative model to the intra-operative partial surface reconstructed from the laparoscopic video. To assist with this task, we explore the use of learning-based feature descriptors, which, to the best of our knowledge, have not been explored for use in laparoscopic liver registration. Furthermore, no dataset exists to train and evaluate the use of learning-based descriptors. METHODS: We present the LiverMatch dataset, consisting of 16 preoperative models and their simulated intra-operative 3D surfaces. We also propose the LiverMatch network designed for this task, which outputs per-point feature descriptors, visibility scores, and matched points. RESULTS: We compare the proposed LiverMatch network with its closest existing counterpart and with a histogram-based 3D descriptor on the testing split of the LiverMatch dataset, which includes two unseen preoperative models and 1400 intra-operative surfaces. Results suggest that our LiverMatch network predicts more accurate and dense matches than the other two methods and can be seamlessly integrated with a RANSAC-ICP-based registration algorithm to achieve an accurate initial alignment. CONCLUSION: The use of learning-based feature descriptors in laparoscopic liver registration (LLR) is promising, as it can help achieve an accurate initial rigid alignment, which, in turn, serves as an initialization for subsequent non-rigid registration.
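
To illustrate how per-point descriptors can drive an initial rigid alignment, the sketch below pairs mutual nearest-neighbor matching with a Kabsch least-squares fit; the paper's pipeline uses RANSAC-ICP, and outlier rejection is omitted here for brevity.

```python
# A sketch of descriptor matching plus rigid alignment (not the paper's
# exact RANSAC-ICP pipeline).
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Mutual nearest neighbors between two descriptor sets."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    ab, ba = d.argmin(axis=1), d.argmin(axis=0)
    mutual = np.where(ba[ab] == np.arange(len(desc_a)))[0]
    return mutual, ab[mutual]

def kabsch(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src + t - dst||."""
    cs, cd = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:   # avoid reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, cd - r @ cs

src = np.random.rand(10, 3)
R, t = kabsch(src, src + np.array([1.0, 2.0, 3.0]))
print(np.round(t, 3))  # recovers the known translation
```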


Subjects
Laparoscopy, Liver, Humans, Liver/diagnostic imaging, Liver/surgery, Laparoscopy/methods, Algorithms
9.
Data Eng Med Imaging (2023) ; 14314: 91-101, 2023 Oct.
Article in English | MEDLINE | ID: mdl-39139984

ABSTRACT

Due to limited direct organ visualization, minimally invasive interventions rely extensively on medical imaging and image guidance to ensure accurate surgical instrument navigation and target tissue manipulation. In the context of laparoscopic liver interventions, intra-operative video imaging only provides a limited field-of-view of the liver surface, with no information about internal liver lesions identified during diagnosis using pre-procedural imaging. Hence, enhancing intra-procedural visualization and navigation entails a sufficiently accurate registration of the pre-procedural diagnostic images and anatomical models, featuring the target tissues to be accessed or manipulated during surgery, into the intra-operative setting. Prior work has demonstrated the feasibility of neural network-based solutions for nonrigid volume-to-surface liver registration. However, view occlusion, lack of meaningful feature landmarks, and liver deformation between the pre- and intra-operative settings all contribute to the difficulty of this registration task. In this work, we leverage several state-of-the-art deep learning frameworks to implement and test various network architecture modifications toward improving the accuracy and robustness of volume-to-surface liver registration. Specifically, we focus on adapting a transformer-based segmentation network to the task of better predicting the optimal displacement field for nonrigid registration. Our results suggest that one particular transformer-based network architecture, UTNet, led to significant improvements over baseline performance, yielding a mean displacement error on the order of 4 mm across a variety of datasets.
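
The reported mean displacement error can be computed as below, assuming dense predicted and ground-truth displacement fields of shape (D, H, W, 3); the arrays are stand-ins.

```python
# A sketch of the metric: mean Euclidean distance between predicted and
# true displacement vectors, in mm.
import numpy as np

def mean_displacement_error(pred_field, gt_field):
    return np.linalg.norm(pred_field - gt_field, axis=-1).mean()

gt = np.random.rand(32, 64, 64, 3)
pred = gt + np.random.normal(0, 0.5, gt.shape)
print(f"{mean_displacement_error(pred, gt):.2f} mm (stand-in data)")
```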

10.
Data Eng Med Imaging (2023) ; 14314: 78-90, 2023 Oct.
Article in English | MEDLINE | ID: mdl-39144367

ABSTRACT

Noisy labels hurt deep learning-based supervised image classification performance, as the models may overfit the noise and learn corrupted feature extractors. For natural image classification training with noisy labeled data, model initialization with contrastive self-supervised pretrained weights has been shown to reduce feature corruption and improve classification performance. However, no prior work has explored: i) how other self-supervised approaches, such as pretext task-based pretraining, impact learning with noisy labels, and ii) how any self-supervised pretraining method alone performs for medical images in noisy-label settings. Medical images often feature smaller datasets and subtle inter-class variations, requiring human expertise to ensure correct classification. Thus, it is not clear whether methods that improve learning with noisy labels on natural image datasets such as CIFAR would also help with medical images. In this work, we explore contrastive and pretext task-based self-supervised pretraining to initialize the weights of a deep learning classification model for two medical datasets with self-induced noisy labels: NCT-CRC-HE-100K tissue histological images and COVID-QU-Ex chest X-ray images. Our results show that models initialized with pretrained weights obtained from self-supervised learning can effectively learn better features and improve robustness against noisy labels.
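
The initialization step can be sketched as loading self-supervised encoder weights into a classifier before fine-tuning; the checkpoint path, backbone, and head size below are placeholders, not the paper's exact setup.

```python
# A sketch of initializing a classifier from a self-supervised
# checkpoint before fine-tuning on noisy labels.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 9)  # e.g., 9 tissue classes

# strict=False keeps the randomly initialized head while adopting
# whatever backbone weights the self-supervised checkpoint provides.
state = torch.load("ssl_pretrained_encoder.pt", map_location="cpu")  # placeholder path
missing, unexpected = model.load_state_dict(state, strict=False)
print(f"loaded; {len(missing)} missing keys, {len(unexpected)} unexpected keys")
```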

11.
Article in English | MEDLINE | ID: mdl-36465979

ABSTRACT

Lung nodule tracking assessment relies on cross-sectional measurements of the largest lesion profile depicted in initial and follow-up computed tomography (CT) images. However, apparent changes in nodule size assessed via simple image-based measurements may also be compromised by the effect of background lung tissue deformation on the ground-glass nodule (GGN) between the initial and follow-up images, leading to erroneous conclusions about nodule changes due to disease. To compensate for the lung deformation and enable consistent nodule tracking, here we propose a feature-based affine registration method and study its performance vis-a-vis several other registration methods. We implement and test each registration method using both a lung- and a lesion-centered region of interest on ten patient CT datasets featuring twelve nodules, including both benign and malignant ground-glass opacity (GGO) lesions containing pure GGNs, part-solid, or solid nodules. We evaluate each registration method according to the target registration error (TRE) computed across 30-50 homologous fiducial landmarks surrounding the lesions, selected by expert radiologists in both the initial and follow-up patient CT images. Our results show that the proposed feature-based affine lesion-centered registration yielded a 1.1 ± 1.2 mm TRE, while a Symmetric Normalization deformable registration yielded a 1.2 ± 1.2 mm TRE, and a least-square fit registration of the 30-50 validation fiducial landmark set yielded a 1.5 ± 1.2 mm TRE. Although the deformable registration yielded slightly higher registration accuracy, the feature-based affine registration is significantly more computationally efficient, eliminates the need for ambiguous segmentation of GGNs featuring ill-defined borders, and reduces susceptibility to artificial deformations introduced by deformable registration, which may increase the similarity between the registered initial and follow-up images, over-compensate for the background lung tissue deformation, and, in turn, compromise the assessment of true disease-induced nodule change. We also assessed the registrations qualitatively, by visual inspection of the subtraction images, and conducted a pilot pre-clinical study showing that the proposed feature-based lesion-centered affine registration effectively compensates for the background lung tissue deformation between the initial and follow-up images and serves as a reliable baseline registration method prior to assessing lung nodule changes due to disease.
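
The TRE metric used above reduces to mapping the initial landmarks through the estimated transform and measuring distances to their follow-up counterparts, as in this sketch (a 4x4 homogeneous affine and stand-in landmarks are assumed).

```python
# A sketch of target registration error (TRE) over fiducial landmarks.
import numpy as np

def tre(affine, initial_pts, followup_pts):
    """Mean +/- std Euclidean distance (mm) between registered initial
    landmarks and their follow-up counterparts."""
    homog = np.c_[initial_pts, np.ones(len(initial_pts))]
    mapped = (affine @ homog.T).T[:, :3]
    d = np.linalg.norm(mapped - followup_pts, axis=1)
    return d.mean(), d.std()

A = np.eye(4)                      # stand-in registration result
p0 = np.random.rand(40, 3) * 100   # ~40 landmarks, mm units
p1 = p0 + np.random.normal(0, 1.0, p0.shape)
print("TRE = %.1f ± %.1f mm" % tre(A, p0, p1))
```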

12.
J Med Imaging (Bellingham) ; 9(Suppl 1): 012206, 2022 Feb.
Article in English | MEDLINE | ID: mdl-36225968

ABSTRACT

Purpose: Among the conferences comprising the Medical Imaging Symposium is the MI104 conference, currently titled Image-Guided Procedures, Robotic Interventions, and Modeling, although its name has evolved through at least nine iterations over the last 30 years. Here, we discuss the important role that this forum has played for researchers in the field during this time. Approach: The origins of the conference are traced from its roots in Image Capture and Display in the late 1980s, and some of the major themes for which the conference and its proceedings have provided a valuable forum are highlighted. Results: These major themes include image display/visualization, surgical tracking/navigation, surgical robotics, interventional imaging, image registration, and modeling. Exceptional work from the conference is highlighted by summarizing keynote lectures, the top 50 most downloaded proceedings papers over the last 30 years, the most downloaded paper each year, and the papers earning student paper and young scientist awards. Conclusions: Looking forward and considering the burgeoning technologies, algorithms, and markets related to image-guided and robot-assisted interventions, we anticipate growth and ever-increasing quality of the conference, as well as increased interaction with sister conferences within the symposium.

13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1707-1710, 2022 07.
Article in English | MEDLINE | ID: mdl-36086376

ABSTRACT

In this paper, we describe a 3D convolutional neural network (CNN) framework to compute and generate super-resolution late gadolinium enhanced (LGE) cardiac magnetic resonance imaging (MRI) images. The proposed CNN framework consists of two branches: a super-resolution branch with a 3D dense deep back-projection network (DBPN) as the backbone to learn the mapping of low-resolution LGE cardiac volumes to high-resolution LGE cardiac volumes, and a gradient branch that learns the mapping of the gradient map of low-resolution LGE cardiac volumes to the gradient map of their high-resolution counterparts. The gradient branch provides additional cardiac structure information to the super-resolution branch to generate structurally more accurate super-resolution LGE MRI images. We conducted our experiments on the 2018 atrial segmentation challenge dataset. The proposed CNN framework achieved mean peak signal-to-noise ratios (PSNR) of 30.91 and 25.66 and mean structural similarity index measures (SSIM) of 0.91 and 0.75 when trained on low-resolution images downsampled by scale factors of 2 and 4, respectively.
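
The two reported image-quality metrics can be computed with scikit-image, as sketched below on a stand-in volume pair.

```python
# A sketch of PSNR and SSIM evaluation between a ground-truth and a
# super-resolved volume (random stand-ins here).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = np.random.rand(12, 128, 128).astype(np.float32)
sr = np.clip(gt + np.random.normal(0, 0.03, gt.shape), 0, 1).astype(np.float32)

psnr = peak_signal_noise_ratio(gt, sr, data_range=1.0)
ssim = structural_similarity(gt, sr, data_range=1.0)  # 3D SSIM over the volume
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```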


Subjects
Gadolinium, Image Processing, Computer-Assisted, Heart Atria, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Neural Networks, Computer
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 5047-5050, 2022 07.
Article in English | MEDLINE | ID: mdl-36085846

ABSTRACT

While convolutional neural networks (CNNs) have shown potential in segmenting cardiac structures from magnetic resonance (MR) images, their clinical applications still fall short of providing reliable cardiac segmentation. As a result, it is critical to quantify segmentation uncertainty in order to identify which segmentations might be troublesome. Moreover, quantifying uncertainty is critical in real-world scenarios, where input distributions are frequently shifted from the training distribution due to sample bias and non-stationarity. Well-calibrated uncertainty estimates therefore indicate whether a model's output should (or should not) be trusted in such situations. In this work, we used a Bayesian version of our previously proposed CondenseUNet [1] framework, featuring both a learned group structure and a regularized weight-pruner, to reduce the computational cost of volumetric image segmentation and help quantify predictive uncertainty. Our study further showcases the potential of our deep-learning framework to evaluate the correlation between the uncertainty and the segmentation errors for a given model. The proposed model was trained and tested on the Automated Cardiac Diagnosis Challenge (ACDC) dataset, featuring 150 cine cardiac MRI patient datasets, for the segmentation and uncertainty estimation of the left ventricle (LV), right ventricle (RV), and myocardium (Myo) at end-diastole (ED) and end-systole (ES) phases.
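
A common way to obtain such predictive uncertainty is Monte Carlo sampling of stochastic forward passes; the sketch below uses MC dropout on a toy network as a stand-in for the Bayesian CondenseUNet.

```python
# A sketch of uncertainty estimation via Monte Carlo sampling: repeated
# stochastic passes (dropout left active) yield per-voxel predictive
# entropy alongside the segmentation.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Dropout3d(0.2),                  # source of stochasticity
    nn.Conv3d(8, 4, 1),                 # 4 classes: BG, LV, RV, Myo
)

def mc_predict(volume, n_samples=20):
    net.train()                         # keep dropout active
    with torch.no_grad():
        probs = torch.stack([torch.softmax(net(volume), dim=1)
                             for _ in range(n_samples)]).mean(0)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    return probs.argmax(dim=1), entropy # segmentation, uncertainty map

seg, unc = mc_predict(torch.rand(1, 1, 16, 64, 64))
print(seg.shape, unc.shape)
```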


Subjects
Heart Ventricles, Magnetic Resonance Imaging, Bayes Theorem, Heart Ventricles/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Uncertainty
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 4839-4842, 2022 07.
Article in English | MEDLINE | ID: mdl-36086106

ABSTRACT

In image-guided surgery, endoscope tracking and surgical scene reconstruction are critical, yet equally challenging tasks. We present a hybrid visual odometry and reconstruction framework for stereo endoscopy that leverages unsupervised learning-based and traditional optical flow methods to enable concurrent endoscope tracking and dense scene reconstruction. More specifically, to reconstruct texture-less tissue surfaces, we use an unsupervised learning-based optical flow method to estimate dense depth maps from stereo images. Robust 3D landmarks are selected from the dense depth maps and tracked via the Kanade-Lucas-Tomasi tracking algorithm. The hybrid visual odometry also benefits from traditional visual odometry modules, such as keyframe insertion and local bundle adjustment. We evaluate the proposed framework on endoscopic video sequences openly available via the SCARED dataset, against both ground truth data and two state-of-the-art methods, ORB-SLAM2 and Endo-depth. Our method achieved comparable results in terms of both RMS Absolute Trajectory Error and Cloud-to-Mesh RMS Error, suggesting its potential to enable accurate endoscope tracking and scene reconstruction.
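
The Kanade-Lucas-Tomasi tracking step can be sketched with OpenCV's pyramidal Lucas-Kanade implementation; the frames below are synthetic stand-ins, not endoscopic video.

```python
# A sketch of KLT landmark tracking between consecutive frames.
import cv2
import numpy as np

prev = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
curr = np.roll(prev, shift=2, axis=1)   # fake 2-pixel camera motion

p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                             minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
tracked = p1[status.ravel() == 1]
print(f"{len(tracked)} of {len(p0)} landmarks tracked")
```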


Subjects
Optical Flow, Algorithms, Endoscopes, Endoscopy, Gastrointestinal, Imaging, Three-Dimensional/methods
16.
Article in English | MEDLINE | ID: mdl-35645450

ABSTRACT

Accurate alignment of longitudinal diffusion weighted imaging (DWI) scans of a subject is necessary to investigate longitudinal changes in DWI-derived diffusion measures such as fractional anisotropy (FA), mean diffusivity (MD), and quantitative anisotropy (QA). Currently, studies investigating these changes in the context of repetitive non-concussive head injuries (RHIs) perform pairwise rigid registration of all scans of a subject to the first scan or to another reference scan or template. Prajapati et al. [1] show that this strategy of pairwise rigid registration leads to a discrepancy in the rigid transformations. To eliminate this discrepancy, they propose performing transitive inverse consistent rigid registration of the longitudinal scans, and they analyze the impact of this approach on the mean values of the local/regional estimates of these diffusion measures. In this work, we further analyze the impact of transitive inverse consistent rigid registration on the distributions (CDFs) of the local/regional estimates of the diffusion measures. We identify the regions (among the 48 regions anatomically defined by the JHU DTI-based white matter atlas [2,3]) that show significant differences between the CDFs obtained using pairwise inverse consistent and transitive inverse consistent rigid registration, by performing the two-sided Kolmogorov-Smirnov (KS) hypothesis test. We find that, for MD and QA, certain subjects have five or more regions with significant differences in the CDFs. Further, these are the same subjects for which Prajapati et al. [1] found regions with 2%-4% differences in the mean values of these diffusion measures. Thus, our results further strengthen the recommendation made by Prajapati et al. [1] to employ transitive inverse consistent rigid registration when investigating local/regional longitudinal changes in diffusion measures.
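
The per-region comparison can be sketched with SciPy's two-sample KS test; the per-region diffusion estimates below are random stand-ins.

```python
# A sketch of the two-sided Kolmogorov-Smirnov comparison across the
# 48 JHU atlas regions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
significant = []
for region in range(48):                      # 48 JHU atlas regions
    pairwise = rng.normal(0.45, 0.05, 500)    # estimates, pairwise reg.
    transitive = rng.normal(0.45, 0.05, 500)  # estimates, transitive reg.
    if ks_2samp(pairwise, transitive).pvalue < 0.05:
        significant.append(region)
print(f"{len(significant)} regions with significantly different CDFs")
```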

17.
Article in English | MEDLINE | ID: mdl-35634478

ABSTRACT

While deep learning has shown potential in solving a variety of medical image analysis problems, including segmentation, registration, and motion estimation, its adoption in real-world clinical settings remains limited due to the lack of reliability stemming from prediction failures of deep learning models. Furthermore, deep learning models need large amounts of labeled data. In this work, we propose a novel method that incorporates uncertainty estimation to detect failures in the segmentation masks generated by CNNs. Our study further showcases the potential of our model to evaluate the correlation between the uncertainty and the segmentation errors for a given model. Furthermore, we introduce a multi-task cross-task learning consistency approach to enforce the correlation between the pixel-level (segmentation) and the geometric-level (distance map) tasks. Our extensive experimentation with varied quantities of labeled data in the training sets justifies the effectiveness of our model for the segmentation and uncertainty estimation of the left ventricle (LV), right ventricle (RV), and myocardium (Myo) at end-diastole (ED) and end-systole (ES) phases from cine MRI images available through the MICCAI 2017 ACDC Challenge dataset. Our study serves as a proof-of-concept of how the uncertainty measure correlates with erroneous segmentations generated by different deep learning models, further showcasing the potential of our model to flag low-quality segmentations from a given model in future studies.
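
The uncertainty-error correlation analysis can be sketched as a simple per-case correlation between mean uncertainty and segmentation error (1 - Dice); the values below are stand-ins.

```python
# A sketch of correlating per-case uncertainty with segmentation error.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
mean_uncertainty = rng.uniform(0.05, 0.4, 50)             # 50 test cases
error = 0.8 * mean_uncertainty + rng.normal(0, 0.03, 50)  # 1 - Dice
r, p = pearsonr(mean_uncertainty, error)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
```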

18.
Comput Cardiol (2010) ; 2022, 2022 Sep.
Article in English | MEDLINE | ID: mdl-37124718

ABSTRACT

Pulsed field ablation (PFA) has the potential to evolve into an efficient alternative to traditional RF ablation for atrial fibrillation treatment. However, achieving irreversible tissue electroporation is critical to suppressing arrhythmic pathways, raising the need for accurate lesion characterization. To understand the physics behind the tissue response to PFA, we propose a quasi-dynamic model that quantifies tissue conductance at end-electroporation and identifies regions that have undergone fully irreversible electroporation (IRE). The model uses several parameters and numerically solves the electrical field diffusion into the tissue by iteratively updating the tissue conductance until equilibrium at end-electroporation. The model yields a steady-state tissue conductance map used to identify the irreversible lesion. We conducted numerical experiments mimicking a lasso catheter featuring nine 3-mm electrodes spaced circumferentially at 3.75 mm and fired sequentially using 1500 V and 3000 V pulse amplitudes. The IRE lesion region has a surface area and volume of 780 mm² and 1411 mm³, respectively, at 1500 V, and 1178 mm² and 2760 mm³, respectively, at 3000 V. Lesion discontinuity was observed at 5.0 mm depth with 1500 V and at 7.2 mm depth with 3000 V. This quasi-dynamic model yields tissue conductance maps, predicts the irreversible lesion and lesion penumbra at end-electroporation, and confirms larger lesions with higher pulse amplitudes.
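
A heavily simplified 1D sketch of the quasi-dynamic idea follows: solve the field for a given conductivity profile, update conductivity as a sigmoid function of local field strength, and iterate to equilibrium. The 1D geometry and all parameter values are illustrative, not the paper's.

```python
# A toy 1D fixed-point iteration of field-dependent tissue conductance.
import numpy as np

n, dx, v_applied = 200, 0.05e-3, 1500.0  # 200 elements, 0.05 mm spacing, 1500 V
sigma = np.linspace(0.15, 0.35, n)       # baseline conductivity (S/m), lowest near electrode

def field(sigma):
    # 1D series conduction: current density J is uniform along the
    # column, so the local field is E = J / sigma.
    j = v_applied / np.sum(dx / sigma)
    return j / sigma                     # V/m per element

for _ in range(200):                     # iterate to equilibrium
    e = field(sigma)
    # Electroporated tissue becomes more conductive: sigmoid of local field.
    target = 0.2 + 0.6 / (1.0 + np.exp(-(e - 1.5e5) / 3e4))
    new_sigma = 0.5 * sigma + 0.5 * target   # damped update
    if np.max(np.abs(new_sigma - sigma)) < 1e-6:
        break
    sigma = new_sigma

ire = field(sigma) > 1.2e5               # illustrative IRE threshold (V/m)
print(f"IRE region: {ire.sum() * dx * 1e3:.2f} mm of the 10 mm column")
```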

19.
Appl Sci (Basel) ; 12(23)2022 Dec 01.
Article in English | MEDLINE | ID: mdl-37125242

ABSTRACT

Learning good data representations for medical imaging tasks ensures the preservation of relevant information and the removal of irrelevant information, improving the interpretability of the learned features. In this paper, we propose a semi-supervised model, namely combine-all in semi-supervised learning (CqSL), to demonstrate the power of a simple combination of a disentanglement block, variational autoencoder (VAE), generative adversarial network (GAN), and a conditioning layer-based reconstructor for performing two important tasks in medical imaging: segmentation and reconstruction. Our work is motivated by the recent progress in image segmentation using semi-supervised learning (SSL), which has shown good results with limited labeled data and large amounts of unlabeled data. A disentanglement block decomposes an input image into a domain-invariant spatial factor and a domain-specific non-spatial factor. We assume that medical images acquired using multiple scanners (different domain information) share a common spatial space but differ in non-spatial space (intensities, contrast, etc.). Hence, we utilize the spatial information to generate segmentation masks from unlabeled datasets using a generative adversarial network (GAN). Finally, to reconstruct the original image, our conditioning layer-based reconstruction block recombines the spatial information with random non-spatial information sampled from the generative models. Our ablation study demonstrates the benefits of disentanglement in holding domain-invariant (spatial) as well as domain-specific (non-spatial) information with high accuracy. We further apply a structured L2 similarity (SL2SIM) loss along with a mutual information minimizer (MIM) to improve the adversarially trained generative models for better reconstruction. Experimental results achieved on the STACOM 2017 ACDC cine cardiac magnetic resonance (MR) dataset suggest that our proposed CqSL model outperforms fully supervised and semi-supervised models, achieving 83.2% performance accuracy even when using only 1% labeled data. We hypothesize that our proposed model has the potential to become an efficient semantic segmentation tool that may be used for domain adaptation in data-limited medical imaging scenarios, where annotations are expensive. Code and experimental configurations will be made publicly available.
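
The conditioning layer-based recombination of spatial and non-spatial factors can be sketched as a FiLM-style modulation layer; whether the paper's reconstructor uses exactly this form is an assumption.

```python
# A sketch of a conditioning layer: the non-spatial code predicts
# per-channel scale and shift applied to the spatial factor.
import torch
import torch.nn as nn

class ConditioningLayer(nn.Module):
    def __init__(self, channels, code_dim):
        super().__init__()
        self.to_scale_shift = nn.Linear(code_dim, 2 * channels)

    def forward(self, spatial, code):
        gamma, beta = self.to_scale_shift(code).chunk(2, dim=1)
        # Broadcast per-channel modulation over the spatial map.
        return spatial * gamma[..., None, None] + beta[..., None, None]

layer = ConditioningLayer(channels=8, code_dim=16)
spatial = torch.rand(2, 8, 64, 64)  # domain-invariant spatial factor
code = torch.randn(2, 16)           # sampled non-spatial factor
print(layer(spatial, code).shape)   # torch.Size([2, 8, 64, 64])
```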

20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3795-3799, 2021 11.
Article in English | MEDLINE | ID: mdl-34892062

ABSTRACT

Cardiac Cine Magnetic Resonance (CMR) imaging has made a significant paradigm shift in medical imaging technology, thanks to its capability of acquiring high spatial and temporal resolution images of different structures within the heart, which can be used to reconstruct patient-specific ventricular computational models. In this work, we describe the development of dynamic patient-specific right ventricle (RV) models, associated with normal subjects and abnormal RV patients, to be subsequently used to assess RV function based on motion and kinematic analysis. We first constructed static RV models using segmentation masks of cardiac chambers generated by our accurate, memory-efficient deep neural architecture, CondenseUNet, which features both a learned group structure and a regularized weight-pruner. To estimate the motion of the right ventricle, we use a deep learning-based deformable network that takes 3D input volumes and outputs a motion field, which is then used to generate isosurface meshes of the cardiac geometry at all cardiac frames by propagating the end-diastole (ED) isosurface mesh using the reconstructed motion field. The proposed model was trained and tested on the Automated Cardiac Diagnosis Challenge (ACDC) dataset featuring 150 cine cardiac MRI patient datasets. The isosurface meshes generated using the proposed pipeline were compared to those obtained using motion propagation via traditional non-rigid registration, based on several performance metrics, including the Dice score and mean absolute distance (MAD).
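
The mesh-propagation step can be sketched as trilinear sampling of the dense motion field at each ED-mesh vertex; the shapes and data below are stand-ins.

```python
# A sketch of propagating ED mesh vertices through a dense displacement
# field via trilinear interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_vertices(vertices, disp_field):
    """vertices: (N, 3) in voxel coords; disp_field: (3, D, H, W)."""
    coords = vertices.T                  # (3, N) sampling locations
    moved = vertices.copy()
    for axis in range(3):
        moved[:, axis] += map_coordinates(disp_field[axis], coords, order=1)
    return moved

field = np.random.normal(0, 0.5, (3, 16, 64, 64))  # stand-in motion field
verts = np.random.rand(500, 3) * [15, 63, 63]      # ED mesh vertices
print(propagate_vertices(verts, field).shape)      # (500, 3)
```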


Subjects
Deep Learning, Magnetic Resonance Imaging, Cine, Heart Ventricles/diagnostic imaging, Humans, Magnetic Resonance Imaging, Neural Networks, Computer