Results 1 - 20 of 75
1.
Healthc Technol Lett ; 11(2-3): 67-75, 2024.
Article in English | MEDLINE | ID: mdl-38638503

ABSTRACT

Endoscopic renal surgeries have high re-operation rates, particularly for lower volume surgeons. Due to the limited field and depth of view of current endoscopes, mentally mapping preoperative computed tomography (CT) images of patient anatomy to the surgical field is challenging. The inability to completely navigate the intrarenal collecting system leads to missed kidney stones and tumors, subsequently raising recurrence rates. A guidance system is proposed to estimate the endoscope positions within the CT to reduce re-operation rates. A Structure from Motion algorithm is used to reconstruct the kidney collecting system from the endoscope videos. In addition, the kidney collecting system is segmented from CT scans using 3D U-Net to create a 3D model. The two collecting system representations can then be registered to provide information on the relative endoscope position. Correct reconstruction and localization of intrarenal anatomy and endoscope position are demonstrated. Furthermore, a 3D map supported by the RGB endoscope images is created to reduce the burden of mental mapping during surgery. The proposed reconstruction pipeline has been validated for guidance. It can reduce the mental burden for surgeons and is a step towards the long-term goal of reducing re-operation rates in kidney stone surgery.
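The registration step described above — aligning the endoscopic Structure-from-Motion reconstruction with the CT-derived collecting-system model — can be illustrated with a minimal rigid (Kabsch) point-set alignment. This is a generic numpy sketch under synthetic data, not the authors' pipeline:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch): find R, t with R @ src + t ~ dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation + translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(pts, moved)
err = np.abs(pts @ R.T + t - moved).max()
```

In practice the two representations would first need correspondences (e.g., from an iterative closest-point loop); the closed-form step above is only the inner alignment.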

2.
Healthc Technol Lett ; 11(2-3): 40-47, 2024.
Article in English | MEDLINE | ID: mdl-38638492

ABSTRACT

Kidney stones require surgical removal when they grow too large to be broken up externally or to pass on their own. Upper tract urothelial carcinoma is also sometimes treated endoscopically in a similar procedure. These surgeries are difficult, particularly for trainees who often miss tumours, stones or stone fragments, requiring re-operation. Furthermore, there are no patient-specific simulators to facilitate training or standardized visualization tools for ureteroscopy despite its high prevalence. Here, a system called ASSIST-U is proposed to create realistic ureteroscopy images and videos solely using preoperative computed tomography (CT) images to address these unmet needs. A 3D U-Net model is trained to automatically segment CT images and construct 3D surfaces. These surfaces are then skeletonized for rendering. Finally, a style transfer model is trained using contrastive unpaired translation (CUT) to synthesize realistic ureteroscopy images. Cross validation on the CT segmentation model achieved a Dice score of 0.853 ± 0.084. CUT style transfer produced visually plausible images; the kernel inception distance to real ureteroscopy images was reduced from 0.198 (rendered) to 0.089 (synthesized). The entire pipeline from CT to synthesized ureteroscopy is also qualitatively demonstrated. The proposed ASSIST-U system shows promise for aiding surgeons in the visualization of kidney ureteroscopy.
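Several abstracts in this listing report Dice scores (e.g., 0.853 ± 0.084 for the CT segmentation model here). The metric itself is simple; a generic numpy sketch, not tied to any of these papers:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND gt| / (|pred| + |gt|)."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```

The epsilon keeps the score defined (and equal to 1) when both masks are empty.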

3.
J Endourol ; 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38661528

ABSTRACT

INTRODUCTION: Endoscopic tumor ablation of upper tract urothelial carcinoma (UTUC) allows for tumor control with the benefit of renal preservation but is impacted by intraoperative visibility. We sought to develop a computer vision model for real-time, automated segmentation of UTUC tumors to augment visualization during treatment. MATERIALS AND METHODS: We collected twenty videos of endoscopic treatment of UTUC from two institutions. Frames from each video (N=3387) were extracted and manually annotated to identify tumors and areas of ablated tumor. Three established computer vision models (U-Net, U-Net++ and UNext) were trained using these annotated frames and compared. Eighty percent of the data was used to train the models while 10% was used for both validation and testing. We evaluated the highest performing model for tumor and ablated tissue segmentation using a pixel-based analysis. The model and a video overlay depicting tumor segmentation were further evaluated intraoperatively. RESULTS: All twenty videos (mean 36 seconds ± 58s) demonstrated tumor identification and 12 depicted areas of ablated tumor. The U-Net model demonstrated the best performance for segmentation of both tumors (AUC-ROC of 0.96) and areas of ablated tumor (AUC-ROC of 0.90). Additionally, we implemented a working system to process real-time video feeds and overlay model predictions intraoperatively. The model was able to annotate new videos at 15 fps. CONCLUSIONS: Computer vision models demonstrate excellent real-time performance for automated upper tract urothelial tumor segmentation during ureteroscopy.
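The pixel-based AUC-ROC evaluation mentioned above can be computed without any ML framework via the rank-sum (Mann-Whitney) identity. A generic sketch, not the authors' evaluation code:

```python
import numpy as np

def auc_roc(scores, labels):
    """AUC-ROC from raw scores via the rank-sum identity:
    AUC = (R_pos - n_pos*(n_pos+1)/2) / (n_pos * n_neg)."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=bool)
    order = np.argsort(s, kind="mergesort")
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    for v in np.unique(s):                 # average the ranks of tied scores
        tied = s == v
        if tied.sum() > 1:
            ranks[tied] = ranks[tied].mean()
    n_pos, n_neg = y.sum(), (~y).sum()
    return (ranks[y].sum() - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)
```

For full images this would be applied over the flattened per-pixel probability map against the annotated mask.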

4.
Med Image Anal ; 95: 103164, 2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38615431

ABSTRACT

Blessed by vast amounts of data, learning-based methods have achieved remarkable performance in countless tasks in computer vision and medical image analysis. Although these deep models can simulate highly nonlinear mapping functions, they are not robust with regard to the domain shift of input data. This is a significant concern that impedes the large-scale deployment of deep models in medical images since they have inherent variation in data distribution due to the lack of imaging standardization. Therefore, researchers have explored many domain generalization (DG) methods to alleviate this problem. In this work, we introduce a Hessian-based vector field that can effectively model the tubular shape of vessels, which is an invariant feature for data across various distributions. The vector field serves as a good embedding feature to take advantage of the self-attention mechanism in a vision transformer. We design paralleled transformer blocks that stress the local features with different scales. Furthermore, we present a novel data augmentation method that introduces perturbations in image style while the vessel structure remains unchanged. In experiments conducted on public datasets of different modalities, we show that our model achieves superior generalizability compared with the existing algorithms. Our code and trained model are publicly available at https://github.com/MedICL-VU/Vector-Field-Transformer.
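The Hessian-based vector field idea — using the eigenvector of the smallest-magnitude Hessian eigenvalue as an orientation feature for tubular structures — can be sketched in 2D with finite differences. This is an illustrative simplification, not the released Vector-Field-Transformer code:

```python
import numpy as np

def hessian_field(img):
    """Per-pixel 2x2 Hessian of a 2D image via finite differences."""
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    H = np.empty(img.shape + (2, 2))
    H[..., 0, 0], H[..., 0, 1] = hyy, hyx
    H[..., 1, 0], H[..., 1, 1] = hxy, hxx
    return H

def tube_direction(img):
    """Unit eigenvector of the smallest-magnitude Hessian eigenvalue,
    which points along tubular (vessel-like) structures."""
    w, v = np.linalg.eigh(hessian_field(img))   # eigenvalues ascending
    idx = np.abs(w).argmin(axis=-1)             # smallest |lambda| per pixel
    ii, jj = np.meshgrid(*(np.arange(n) for n in img.shape), indexing="ij")
    return v[ii, jj, :, idx]                    # (y, x) vector components

# Demo: a horizontal bright ridge; at its crest the field points along x.
y = np.arange(32, dtype=float)
ridge = np.exp(-((y - 16.0) ** 2) / 8.0)[:, None] * np.ones((1, 32))
field = tube_direction(ridge)
```

A production version would add Gaussian smoothing at multiple scales before differentiation (as in Frangi-style vesselness), omitted here for brevity.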

5.
Neurologist ; 29(3): 166-169, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38372201

ABSTRACT

INTRODUCTION: We present the case of a gentleman who developed rapidly progressive vision loss, ophthalmo-paresis, and flaccid quadriparesis in the context of severe intracranial hypertension. We reviewed the available cases in the literature to increase awareness of this rare clinical entity. CASE REPORT: A 36-year-old man developed rapidly progressive vision loss, ophthalmo-paresis, and flaccid quadriparesis. He had an extensive workup, only notable for severe intracranial hypertension, >55 cm H2O. No inflammatory features were present, and the patient responded to CSF diversion. Few similar cases are available in the literature, but all show markedly elevated intracranial pressure associated with extensive neuroaxis dysfunction. Similarly, these patients improved with CSF diversion but did not appear to respond to immune-based therapies. CONCLUSIONS: We term this extensive neuroaxis dysfunction intracranial hypertension associated with poly-cranio-radicular-neuropathy (IHP) and distinguish it from similar immune-mediated clinical presentations. Clinicians should be aware of the different etiologies of this potentially devastating clinical presentation to inform appropriate and timely treatment.


Subjects
Intracranial Hypertension; Humans; Male; Adult; Intracranial Hypertension/complications; Intracranial Hypertension/diagnosis; Intracranial Hypertension/etiology; Polyradiculoneuropathy/diagnosis; Polyradiculoneuropathy/complications
6.
IEEE Trans Med Imaging ; 43(5): 1995-2009, 2024 May.
Article in English | MEDLINE | ID: mdl-38224508

ABSTRACT

Deep learning models have demonstrated remarkable success in multi-organ segmentation but typically require large-scale datasets with all organs of interest annotated. However, medical image datasets are often low in sample size and only partially labeled, i.e., only a subset of organs are annotated. Therefore, it is crucial to investigate how to learn a unified model on the available partially labeled datasets to leverage their synergistic potential. In this paper, we systematically investigate the partial-label segmentation problem with theoretical and empirical analyses on the prior techniques. We revisit the problem from a perspective of partial label supervision signals and identify two signals derived from ground truth and one from pseudo labels. We propose a novel two-stage framework termed COSST, which effectively and efficiently integrates comprehensive supervision signals with self-training. Concretely, we first train an initial unified model using two ground truth-based signals and then iteratively incorporate the pseudo label signal to the initial model using self-training. To mitigate performance degradation caused by unreliable pseudo labels, we assess the reliability of pseudo labels via outlier detection in latent space and exclude the most unreliable pseudo labels from each self-training iteration. Extensive experiments are conducted on one public and three private partial-label segmentation tasks over 12 CT datasets. Experimental results show that our proposed COSST achieves significant improvement over the baseline method, i.e., individual networks trained on each partially labeled dataset. Compared to the state-of-the-art partial-label segmentation methods, COSST demonstrates consistent superior performance on various segmentation tasks and with different training data sizes.
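The pseudo-label filtering step above — excluding the most unreliable pseudo labels via outlier detection in latent space — might look like the following simplified sketch. Distance-to-centroid stands in for the paper's actual detector, and `keep_frac` is an illustrative parameter:

```python
import numpy as np

def filter_pseudo_labels(latent_feats, keep_frac=0.9):
    """Keep the pseudo-labeled samples closest to the cohort centroid in
    latent space; drop the most outlying (least reliable) fraction."""
    center = latent_feats.mean(axis=0)
    dist = np.linalg.norm(latent_feats - center, axis=1)
    return dist <= np.quantile(dist, keep_frac)

# Demo: 20 inliers plus one gross outlier.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(size=(20, 2)), [[100.0, 100.0]]])
mask = filter_pseudo_labels(feats)
```

In a self-training loop, only samples where `mask` is True would contribute the pseudo-label supervision signal at the next iteration.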


Subjects
Databases, Factual; Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Tomography, X-Ray Computed/methods; Supervised Machine Learning
7.
Comput Biol Med ; 152: 106414, 2023 01.
Article in English | MEDLINE | ID: mdl-36525831

ABSTRACT

BACKGROUND: Anterior temporal lobe resection is an effective treatment for temporal lobe epilepsy. The post-surgical structural changes could influence the follow-up treatment. Capturing post-surgical changes necessitates a well-established cortical shape correspondence between pre- and post-surgical surfaces. Yet, most cortical surface registration methods are designed for normal neuroanatomy. Surgical changes can introduce wide-ranging artifacts in correspondence, for which conventional surface registration methods may not work as intended. METHODS: In this paper, we propose a novel particle method for one-to-one dense shape correspondence between pre- and post-surgical surfaces with temporal lobe resection. The proposed method can handle partial structural abnormality involving non-rigid changes. Unlike existing particle methods using implicit particle adjacency, we consider explicit particle adjacency to establish a smooth correspondence. Moreover, we propose hierarchical optimization of particles rather than full optimization of all particles at once to avoid becoming trapped in locally optimal particle updates. RESULTS: We evaluate the proposed method on 25 pairs of T1 MRIs with simulated pre- and post-resection of the anterior temporal lobe and 25 pairs from patients with actual resection. On simulated data, we show improved accuracy over several cortical regions in terms of ROI boundary Hausdorff distance (4.29 mm) and Dice similarity coefficient (average 0.841), compared to existing surface registration methods. In 25 patients with actual resection of the anterior temporal lobe, our method shows an improved shape correspondence in qualitative and quantitative evaluation of the parcellation-off ratio (average 0.061) and cortical thickness changes. We also show better smoothness of the correspondence without self-intersection, compared with point-wise matching methods which show various degrees of self-intersection.
CONCLUSION: The proposed method establishes a promising one-to-one dense shape correspondence for temporal lobe resection. The resulting correspondence is smooth without self-intersection. The proposed hierarchical optimization strategy could accelerate optimization and improve its accuracy. According to the results on the paired surfaces with temporal lobe resection, the proposed method outperforms the compared methods and more reliably captures cortical thickness changes.


Subjects
Epilepsy, Temporal Lobe; Temporal Lobe; Humans; Temporal Lobe/diagnostic imaging; Temporal Lobe/surgery; Epilepsy, Temporal Lobe/diagnostic imaging; Epilepsy, Temporal Lobe/surgery; Magnetic Resonance Imaging/methods; Treatment Outcome
8.
Hum Brain Mapp ; 44(4): 1417-1431, 2023 03.
Article in English | MEDLINE | ID: mdl-36409662

ABSTRACT

The striatum has traditionally been the focus of Huntington's disease research due to the primary insult to this region and its central role in motor symptoms. Beyond the striatum, evidence of cortical alterations caused by Huntington's disease has surfaced. However, findings are not coherent between studies which have used cortical thickness for Huntington's disease since it is the well-established cortical metric of interest in other diseases. In this study, we propose a more comprehensive approach to cortical morphology in Huntington's disease using cortical thickness, sulcal depth, and local gyrification index. Our results show consistency with prior findings in cortical thickness, including its limitations. Our comparison between cortical thickness and local gyrification index underscores the complementary nature of these two measures-cortical thickness detects changes in the sensorimotor and posterior areas while local gyrification index identifies insular differences. Since local gyrification index and cortical thickness measures detect changes in different regions, the two used in tandem could provide a clinically relevant measure of disease progression. Our findings suggest that differences in insular regions may correspond to earlier neurodegeneration and may provide a complementary cortical measure for detection of subtle early cortical changes due to Huntington's disease.


Subjects
Huntington Disease; Neocortex; Humans; Huntington Disease/diagnostic imaging; Cerebral Cortex/diagnostic imaging; Magnetic Resonance Imaging/methods
9.
J Endourol ; 37(4): 495-501, 2023 04.
Article in English | MEDLINE | ID: mdl-36401503

ABSTRACT

Objective: To evaluate the performance of computer vision models for automated kidney stone segmentation during flexible ureteroscopy and laser lithotripsy. Materials and Methods: We collected 20 ureteroscopy videos of intrarenal kidney stone treatment and extracted frames (N = 578) from these videos. We manually annotated kidney stones on each frame. Eighty percent of the data were used to train three standard computer vision models (U-Net, U-Net++, and DenseNet) for automatic stone segmentation during flexible ureteroscopy. The remaining data (20%) were used to compare performance of the three models after optimization through Dice coefficients and binary cross entropy. We identified the highest performing model and evaluated automatic segmentation performance during ureteroscopy for both stone localization and treatment using a separate set of endoscopic videos. We evaluated performance of the pixel-based analysis using area under the receiver operating characteristic curve (AUC-ROC), accuracy, sensitivity, and positive predictive value both in previously recorded videos and in real time. Results: A computer vision model (U-Net++) was evaluated, trained, and optimized for kidney stone segmentation during ureteroscopy using 20 surgical videos (mean video duration of 22 seconds, standard deviation ±13 seconds). The model showed good performance for stone localization with both digital ureteroscopes (AUC-ROC: 0.98) and fiberoptic ureteroscopes (AUC-ROC: 0.93). Furthermore, the model was able to accurately segment stones and stone fragments <270 µm in diameter during laser fragmentation (AUC-ROC: 0.87) and dusting (AUC-ROC: 0.77). The model automatically annotated videos intraoperatively in three cases and could do so in real time at 30 frames per second (FPS). Conclusion: Computer vision models demonstrate strong performance for automatic stone segmentation during ureteroscopy. 
Automatically annotating new videos at 30 FPS demonstrates the feasibility of real-time application during surgery, which could facilitate tracking tools for stone treatment.
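The optimization objectives named above — Dice coefficient and binary cross entropy — are commonly combined into a single segmentation training loss. A minimal numpy sketch of such a compound loss; the weighting `w` is illustrative, not taken from the paper:

```python
import numpy as np

def bce(pred, gt, eps=1e-7):
    """Binary cross entropy over probability maps."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(gt * np.log(p) + (1 - gt) * np.log(1 - p)).mean())

def soft_dice_loss(pred, gt, eps=1e-7):
    """1 - soft Dice over probability maps (differentiable surrogate)."""
    inter = (pred * gt).sum()
    return float(1.0 - (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

def combined_loss(pred, gt, w=0.5):
    """Weighted sum of BCE (pixel-wise) and soft Dice (overlap-wise) terms."""
    return w * bce(pred, gt) + (1.0 - w) * soft_dice_loss(pred, gt)
```

BCE penalizes every pixel equally while the Dice term directly targets overlap, which helps with the heavy class imbalance of small stones against background.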


Subjects
Kidney Calculi; Lithotripsy, Laser; Humans; Ureteroscopy; Treatment Outcome; Kidney Calculi/diagnostic imaging; Kidney Calculi/surgery; Ureteroscopes
10.
Med Image Anal ; 83: 102628, 2023 01.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). 
All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.


Subjects
Neuroma, Acoustic; Humans; Neuroma, Acoustic/diagnostic imaging
11.
Otol Neurotol ; 43(10): 1252-1256, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36109146

ABSTRACT

HYPOTHESIS: Machine learning-derived algorithms are capable of automated calculation of vestibular schwannoma tumor volumes without operator input. BACKGROUND: Volumetric measurements are most sensitive for detection of vestibular schwannoma growth and important for patient counseling and management decisions. Yet, manually measuring volume is logistically challenging and time-consuming. METHODS: We developed a deep learning framework fusing transformers and convolutional neural networks to calculate vestibular schwannoma volumes without operator input. The algorithm was trained, validated, and tested on an external, publicly available data set consisting of magnetic resonance imaging images of medium and large tumors (178-9,598 mm³) with uniform acquisition protocols. The algorithm was then trained, validated, and tested on an internal data set of variable size tumors (5-6,126 mm³) with variable acquisition protocols. RESULTS: The externally trained algorithm yielded 87% voxel overlap (Dice score) with manually segmented tumors on the external data set. The same algorithm failed to translate to accurate tumor detection when tested on the internal data set, with Dice score of 36%. Retraining on the internal data set yielded Dice score of 82% when compared with manually segmented images, and 85% when only considering tumors of similar size as the external data set (>178 mm³). Manual segmentation by two experts demonstrated high intraclass correlation coefficient (0.999). CONCLUSION: Sophisticated machine learning algorithms delineate vestibular schwannomas with an accuracy exceeding established norms of up to 20% error for repeated manual volumetric measurements-87% accuracy on a homogeneous data set, and 82% to 85% accuracy on a more varied data set mirroring real world neurotology practice. This technology has promise for clinical applicability and time savings.
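Once a segmentation mask is produced, the volumetric measurement itself reduces to counting foreground voxels and scaling by the voxel spacing. A generic sketch, not the authors' framework:

```python
import numpy as np

def volume_mm3(mask, spacing_mm):
    """Volume of a binary segmentation, given per-axis voxel spacing in mm."""
    return int(np.count_nonzero(mask)) * float(np.prod(spacing_mm))
```

For example, 1,000 foreground voxels at 0.5 x 0.5 x 1.0 mm spacing correspond to 250 mm³.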


Subjects
Neuroma, Acoustic; Humans; Neuroma, Acoustic/diagnostic imaging; Machine Learning; Magnetic Resonance Imaging/methods; Algorithms; Image Processing, Computer-Assisted/methods
12.
Front Neurol ; 13: 811315, 2022.
Article in English | MEDLINE | ID: mdl-35785345

ABSTRACT

Purpose: In this cross-sectional, proof-of-concept study, we propose that using the more pathologically-specific neurite orientation dispersion and density imaging (NODDI) method, in conjunction with high-resolution probabilistic tractography white matter tract templates, can improve the assessment of regional axonal injury and its association with disability in people with multiple sclerosis (pwMS). Methods: Parametric maps of the neurite density index, orientation dispersion index, and the apparent isotropic volume fraction (IVF) were estimated in 18 pwMS and nine matched healthy controls (HCs). Tract-specific values were measured in transcallosal (TC) fibers from the paracentral lobules and TC and corticospinal fibers from the ventral and dorsal premotor areas, presupplementary and supplementary motor areas, and primary motor cortex. The nonparametric Mann-Whitney U test assessed group differences in the NODDI-derived metrics; Spearman's rank correlation analyses measured associations between the NODDI metrics and other clinical or radiological variables. Results: IVF values of the TC fiber bundles from the paracentral, presupplementary, and supplementary motor areas were higher both in pwMS than in HCs (p ≤ 0.045) and in pwMS with motor disability compared to those without (p ≤ 0.049). IVF in several TC tracts was associated with the Expanded Disability Status Scale score (p ≤ 0.047), while regional and overall lesion burden correlated with the Timed 25-Foot Walking Test (p ≤ 0.049). Conclusion: IVF alterations are present in pwMS even when the other NODDI metrics are still mostly preserved. Changes in IVF are biologically non-specific and may not necessarily drive irreversible functional loss. However, by possibly preceding downstream pathologies that are strongly associated with disability accretion, IVF changes are indicators of otherwise occult prelesional tissue injury.
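The group comparisons above rely on the nonparametric Mann-Whitney U test. Its rank-sum statistic is straightforward to compute; a minimal numpy sketch (significance testing via the normal approximation or exact tables is omitted):

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples
    (tied ranks averaged): U = R_x - n_x*(n_x+1)/2."""
    x = np.asarray(x, dtype=float)
    both = np.concatenate([x, np.asarray(y, dtype=float)])
    order = np.argsort(both, kind="mergesort")
    ranks = np.empty(len(both))
    ranks[order] = np.arange(1, len(both) + 1)
    for v in np.unique(both):              # average ranks across ties
        tied = both == v
        if tied.sum() > 1:
            ranks[tied] = ranks[tied].mean()
    n_x = len(x)
    return ranks[:n_x].sum() - n_x * (n_x + 1) / 2.0
```

U ranges from 0 (every x below every y) to `n_x * n_y` (every x above every y); values near the middle indicate overlapping distributions.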

13.
Biomed Opt Express ; 13(3): 1398-1409, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35415003

ABSTRACT

Optical coherence tomography (OCT) has become the gold standard for ophthalmic diagnostic imaging. However, clinical OCT image-quality is highly variable and limited visualization can introduce errors in the quantitative analysis of anatomic and pathologic features-of-interest. Frame-averaging is a standard method for improving image-quality; however, frame-averaging in the presence of bulk-motion can degrade lateral resolution and prolong total acquisition time. We recently introduced a method called self-fusion, which reduces speckle noise and enhances OCT signal-to-noise ratio (SNR) by using similarity between adjacent frames and is more robust to motion-artifacts than frame-averaging. However, since self-fusion is based on deformable registration, it is computationally expensive. In this study, a convolutional neural network was implemented to offset the computational overhead of self-fusion and perform OCT denoising in real-time. The self-fusion network was pretrained to fuse 3 frames to achieve near video-rate frame-rates. Our results showed a clear gain in peak SNR in the self-fused images over both the raw and frame-averaged OCT B-scans. This approach delivers a fast and robust OCT denoising alternative to frame-averaging without the need for repeated image acquisition. Real-time self-fusion image enhancement will enable improved localization of OCT field-of-view relative to features-of-interest and improved sensitivity for anatomic features of disease.
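The baseline that self-fusion improves on, frame-averaging, trades acquisition time for SNR: averaging N independently noisy frames cuts noise variance by a factor of N (about 9 dB of PSNR gain for N = 8). A small synthetic demonstration, unrelated to the authors' data:

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB, relative to ref's peak value."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

# Average 8 noisy copies of a clean gradient image.
rng = np.random.default_rng(1)
clean = np.linspace(0.0, 1.0, 256).reshape(16, 16)
frames = clean + rng.normal(scale=0.1, size=(8, 16, 16))
averaged = frames.mean(axis=0)
```

The catch, as the abstract notes, is that the 8x acquisition time invites bulk motion, which misaligns the frames being averaged and blurs lateral detail.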

14.
Front Neuroimaging ; 1: 861687, 2022.
Article in English | MEDLINE | ID: mdl-37555187

ABSTRACT

In the fields of longitudinal cortical segmentation and surface-based cortical thickness (CT) measurement, difficulty in assessing accuracy remains a substantial limitation due to the inability of experimental validation against ground truth. Although methods have been developed to create synthetic datasets for these purposes, none provide a robust mechanism for measuring exact thickness changes with surface-based approaches. This work presents a registration-based technique for inducing synthetic cortical atrophy to create a longitudinal ground truth dataset specifically designed to address this gap in surface-based accuracy validation techniques. Across the entire brain, our method can induce up to 0.8-2.5 mm of localized cortical atrophy in a given gyrus, depending on the region's original thickness. By calculating the image deformation to induce this atrophy at 400% of the original resolution in each direction, we can induce a sub-voxel resolution amount of atrophy while minimizing partial volume effects. We also show that cortical segmentations of synthetically atrophied images exhibit similar segmentation error to those obtained from images of naturally atrophied brains. Importantly, our method relies exclusively on publicly available software and datasets.

15.
J Ultrasound Med ; 41(6): 1509-1524, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34553780

ABSTRACT

OBJECTIVES: Early placental volume (PV) has been associated with small-for-gestational-age infants born under the 10th/5th centiles (SGA10/SGA5). Manual or semiautomated PV quantification from 3D ultrasound (3DUS) is time intensive, limiting its incorporation into clinical care. We devised a novel convolutional neural network (CNN) pipeline for fully automated placenta segmentation from 3DUS images, exploring the association between the calculated PV and SGA. METHODS: Volumes of 3DUS obtained from singleton pregnancies at 11-14 weeks' gestation were automatically segmented by our CNN pipeline trained and tested on 99/25 images, combining two 2D and one 3D models with downsampling/upsampling architecture. The PVs derived from the automated segmentations (PVCNN) were used to train multivariable logistic-regression classifiers for SGA10/SGA5. The test performance for predicting SGA was compared to PVs obtained via the semiautomated VOCAL (GE-Healthcare) method (PVVOCAL). RESULTS: We included 442 subjects with 37 (8.4%) and 18 (4.1%) SGA10/SGA5 infants, respectively. Our segmentation pipeline achieved a mean Dice score of 0.88 on an independent test set. Adjusted models including PVCNN or PVVOCAL were similarly predictive of SGA10 (area under curve [AUC]: PVCNN = 0.780, PVVOCAL = 0.768). The addition of PVCNN to a clinical model without any PV included (AUC = 0.725) yielded statistically significant improvement in AUC (P < .05), whereas PVVOCAL did not (P = .105). Moreover, when predicting SGA5, including PVCNN (0.897) brought statistically significant improvement over both the clinical model (0.839, P = .015) and the PVVOCAL model (0.870, P = .039). CONCLUSIONS: First trimester PV measurements derived from our CNN segmentation pipeline are significantly associated with future SGA.
This fully automated tool enables the incorporation of placental volumetric biometry into the bedside clinical evaluation as part of a multivariable prediction model for risk stratification and patient counseling.


Subjects
Placenta; Ultrasonography, Prenatal; Female; Gestational Age; Humans; Infant, Newborn; Infant, Small for Gestational Age; Placenta/diagnostic imaging; Pregnancy; Pregnancy Trimester, First; Ultrasonography, Prenatal/methods
16.
Article in English | MEDLINE | ID: mdl-34873358

ABSTRACT

Longitudinal information is important for monitoring the progression of neurodegenerative diseases, such as Huntington's disease (HD). Specifically, longitudinal magnetic resonance imaging (MRI) studies may allow the discovery of subtle intra-subject changes over time that may otherwise go undetected because of inter-subject variability. For HD patients, the primary imaging-based marker of disease progression is the atrophy of subcortical structures, mainly the caudate and putamen. To better understand the course of subcortical atrophy in HD and its correlation with clinical outcome measures, highly accurate segmentation is important. In recent years, subcortical segmentation methods have moved towards deep learning, given the state-of-the-art accuracy and computational efficiency provided by these models. However, these methods are not designed for longitudinal analysis, but rather treat each time point as an independent sample, discarding the longitudinal structure of the data. In this paper, we propose a deep learning based subcortical segmentation method that takes into account this longitudinal information. Our method takes a longitudinal pair of 3D MRIs as input, and jointly computes the corresponding segmentations. We use bi-directional convolutional long short-term memory (C-LSTM) blocks in our model to leverage the longitudinal information between scans. We test our method on the PREDICT-HD dataset and use the Dice coefficient, average surface distance and 95-percent Hausdorff distance as our evaluation metrics. Compared to cross-sectional segmentation, we improve the overall accuracy of segmentation, and our method has more consistent performance across time points. Furthermore, our method identifies a stronger correlation between subcortical volume loss and decline in the total motor score, an important clinical outcome measure for HD.
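Among the evaluation metrics used above is the 95-percent Hausdorff distance, which for modest point counts can be computed by brute force over surface points. A generic numpy sketch, not the authors' evaluation code:

```python
import numpy as np

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (brute-force pairwise distances; fine for modest point counts)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(np.quantile(d.min(axis=1), 0.95),   # a -> b distances
               np.quantile(d.min(axis=0), 0.95))   # b -> a distances
```

Using the 95th percentile instead of the maximum makes the metric robust to a few stray boundary voxels, which is why it is preferred over the plain Hausdorff distance in segmentation benchmarks.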

17.
Article in English | MEDLINE | ID: mdl-34873357

ABSTRACT

Difficulty in validating accuracy remains a substantial setback in the field of surface-based cortical thickness (CT) measurement due to the lack of experimental validation against ground truth. Although methods have been developed to create synthetic datasets for this purpose, none provide a robust mechanism for measuring exact thickness changes with surface-based approaches. This work presents a registration-based technique for inducing synthetic cortical atrophy to create a longitudinal, ground truth dataset specifically designed for accuracy validation of surface-based CT measurements. Across the entire brain, we show our method can induce up to 0.6-2.6 mm of localized cortical atrophy in a given gyrus, depending on the region's original thickness. By calculating the image deformation to induce this atrophy at 400% of the original resolution in each direction, we can induce a sub-voxel resolution amount of atrophy while minimizing partial volume effects. We also show that our method can be extended beyond CT measurements to the accuracy validation of longitudinal cortical segmentation and surface reconstruction pipelines when measuring accuracy against cortical landmarks. Importantly, our method relies exclusively on publicly available software and datasets.

18.
Article in English | MEDLINE | ID: mdl-34873359

ABSTRACT

The subcortical structures of the brain are relevant for many neurodegenerative diseases, such as Huntington's disease (HD). Quantitative segmentation of these structures from magnetic resonance images (MRIs) has been studied in clinical and neuroimaging research. Recently, convolutional neural networks (CNNs) have been successfully used for many medical image analysis tasks, including subcortical segmentation. In this work, we propose a 2-stage cascaded 3D subcortical segmentation framework that uses the same 3D CNN architecture for both stages. Attention gates, residual blocks and output addition are used in our proposed 3D CNN. In the first stage, we apply our model to downsampled images to output a coarse segmentation. Next, we crop the extended subcortical region from the original image based on this coarse segmentation, and we input the cropped region to the second CNN to obtain the final segmentation. Left and right pairs of the thalamus, caudate, pallidum and putamen are considered in our segmentation. We use the Dice coefficient as our metric and evaluate our method on two datasets: the publicly available IBSR dataset and a subset of the PREDICT-HD database, which includes healthy controls and HD subjects. We train our models on only healthy control subjects and test on both healthy controls and HD subjects to examine model generalizability. Compared with state-of-the-art methods, our method achieves the highest mean Dice score on all considered subcortical structures (except the thalamus on IBSR), with more pronounced improvement for HD subjects. This suggests that our method may be better able to segment MRIs of subjects with neurodegenerative disease.
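The coarse-to-fine cropping step described here (bounding box around the coarse segmentation, scaled up to full resolution with a safety margin) can be sketched as follows. `scale` and `margin` are illustrative parameters, not values from the paper.

```python
import numpy as np

def crop_from_coarse(image, coarse_mask, scale, margin):
    """Crop the extended subcortical region from the full-resolution image.

    image       : full-resolution array
    coarse_mask : binary first-stage segmentation at 1/scale resolution
    scale       : downsampling factor used for the first stage
    margin      : extra voxels kept around the structure at full resolution
    Returns the cropped region and the crop offset (for pasting the
    second-stage segmentation back into the full volume).
    """
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) * scale - margin, 0)
    hi = np.minimum((idx.max(axis=0) + 1) * scale + margin, image.shape)
    slc = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return image[slc], lo
```

The returned offset is what makes the cascade composable: the fine segmentation computed on the crop can be placed back at `lo` in the original image grid.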

19.
Article in English | MEDLINE | ID: mdl-34950935

ABSTRACT

Optical coherence tomography (OCT) is a non-invasive imaging technique widely used in ophthalmology. It can be extended to OCT angiography (OCT-A), which reveals the retinal vasculature with improved contrast. Recent deep learning algorithms have produced promising vascular segmentation results; however, 3D retinal vessel segmentation remains difficult due to the lack of manually annotated training data. We propose a learning-based method that is supervised only by a self-synthesized modality named local intensity fusion (LIF). LIF is a capillary-enhanced volume computed directly from the input OCT-A. We then construct the local intensity fusion encoder (LIFE) to map a given OCT-A volume and its LIF counterpart to a shared latent space. The latent space of LIFE has the same dimensions as the input data and contains features common to both modalities. By binarizing this latent space, we obtain a volumetric vessel segmentation. Our method is evaluated on a human fovea OCT-A volume and three zebrafish OCT-A volumes with manual labels. It yields a Dice score of 0.7736 on human data and 0.8594 ± 0.0275 on zebrafish data, a dramatic improvement over existing unsupervised algorithms.
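The final step, binarizing the shared latent space into a vessel mask, can be illustrated with a global threshold. Otsu's method is used here as a plausible stand-in, since the abstract does not specify the binarization rule; this is not the authors' implementation.

```python
import numpy as np

def otsu_threshold(x, bins=64):
    """Otsu's threshold on a flattened volume: pick the histogram cut that
    maximizes between-class variance."""
    hist, edges = np.histogram(x.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 (below cut) weight
    w1 = 1.0 - w0                           # class-1 (above cut) weight
    cum = np.cumsum(p * centers)
    m0 = cum / np.where(w0 > 0, w0, 1)      # class means, guarded against /0
    m1 = (cum[-1] - cum) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (m0 - m1) ** 2      # between-class variance per cut
    return centers[np.argmax(between)]

def binarize_latent(latent):
    """Binarize a latent volume into a vessel mask via a global Otsu cut."""
    return latent > otsu_threshold(latent)
```

On a clearly bimodal latent volume the threshold lands between the two modes, so the mask recovers exactly the high-intensity (vessel-like) voxels.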

20.
Proc SPIE Int Soc Opt Eng ; 11596, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-34531630

ABSTRACT

In pre- and post-surgical surface shape analysis, establishing shape correspondence is necessary to investigate postoperative surface changes. However, structural absence after the operation is accompanied by focal non-rigid changes, which challenge existing surface registration methods. In this paper, we present a fully automatic particle-based method to establish surface correspondence that can handle partial structural abnormality after temporal lobe resection. Our method optimizes the coordinates of points, modeled as particles on the surfaces, in a hierarchical way to reduce the chance of being trapped in a local minimum during the optimization. In our experiments, we evaluate the effectiveness of our method against conventional spherical registration (FreeSurfer) in two scenarios: cortical thickness changes in healthy controls within a short scan-rescan time window, and in patients with temporal lobe resection. The post-surgical scan is acquired at least 1 year after the pre-surgical scan. In region-of-interest-wise (ROI-wise) analysis, neither method finds cortical thickness changes in the healthy control group. In patients, since no ground truth is available, we instead investigate the disagreement between our method and FreeSurfer. FreeSurfer yields poorly matched ROIs and large cortical thickness changes, whereas our method shows well-matched ROIs and subtle cortical thickness changes. This suggests that the proposed method establishes a stable shape correspondence that is not fully captured by conventional spherical registration.
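One building block of any correspondence scheme can be sketched as a nearest-neighbour match between two particle sets. This toy deliberately omits the hierarchical optimization and particle dynamics the method actually uses; it only shows what "correspondence plus matching distance" means for point sets.

```python
import numpy as np

def correspond(src, dst):
    """Nearest-neighbour correspondence between two particle sets.

    src : (N, 3) array of source-surface particle coordinates
    dst : (M, 3) array of destination-surface particle coordinates
    Returns, for each source particle, the index of its nearest destination
    particle and the matching distance.
    """
    # Pairwise Euclidean distances between every src/dst particle pair.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    idx = d.argmin(axis=1)                       # nearest dst per src particle
    return idx, d[np.arange(len(src)), idx]
```

In the actual method, such matches would be refined hierarchically so that resected (absent) regions do not drag correspondences into a bad local minimum.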
