Results 1 - 20 of 213
1.
Front Neurosci; 18: 1411797, 2024.
Article in English | MEDLINE | ID: mdl-38988766

ABSTRACT

Neuroimaging-based prediction of neurocognitive measures is valuable for studying how the brain's structure relates to cognitive function. However, the accuracy of prediction using popular linear regression models is relatively low. We propose a novel deep regression method, namely TractoSCR, that allows full supervision for contrastive learning in regression tasks using diffusion MRI tractography. TractoSCR performs supervised contrastive learning by using the absolute difference between continuous regression labels (i.e., neurocognitive scores) to determine positive and negative pairs. We apply TractoSCR to analyze a large-scale dataset including multi-site harmonized diffusion MRI and neurocognitive data from 8,735 participants in the Adolescent Brain Cognitive Development (ABCD) Study. We extract white matter microstructural measures using a fine parcellation of white matter tractography into fiber clusters. Using these measures, we predict three scores related to domains of higher-order cognition (general cognitive ability, executive function, and learning/memory). To identify important fiber clusters for prediction of these neurocognitive scores, we propose a permutation feature importance method for high-dimensional data. We find that TractoSCR obtains significantly higher accuracy of neurocognitive score prediction compared to other state-of-the-art methods. We find that the most predictive fiber clusters are predominantly located within the superficial white matter and projection tracts, particularly the superficial frontal white matter and striato-frontal connections. Overall, our results demonstrate the utility of contrastive representation learning methods for regression, and in particular for improving neuroimaging-based prediction of higher-order cognitive abilities. Our code will be available at: https://github.com/SlicerDMRI/TractoSCR.
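As a rough sketch of the pairing rule described above (positive and negative pairs defined by the absolute difference of continuous labels), the following PyTorch snippet shows one way a supervised contrastive loss for regression could look. The threshold, temperature, and embeddings are placeholders, not the TractoSCR implementation, which is available from the repository cited above.

```python
# Minimal sketch of supervised contrastive learning for regression: a pair is
# "positive" when the absolute difference of the regression labels falls below
# a chosen threshold. Threshold and temperature are hypothetical values.
import torch
import torch.nn.functional as F

def contrastive_regression_loss(embeddings, scores, threshold=0.5, temperature=0.1):
    """SupCon-style loss where positives are label-similar pairs."""
    z = F.normalize(embeddings, dim=1)               # (B, D) unit vectors
    sim = z @ z.t() / temperature                    # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)

    # Positive mask: label difference below threshold, excluding self-pairs.
    diff = (scores.view(-1, 1) - scores.view(1, -1)).abs()
    pos = (diff < threshold) & ~eye

    # Log-softmax over all other samples, averaged over the positive set.
    logits = sim.masked_fill(eye, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos).sum(dim=1) / pos_counts
    return loss.mean()

# Example: 8 samples with 16-dimensional embeddings and continuous scores.
loss = contrastive_regression_loss(torch.randn(8, 16), torch.randn(8))
```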

2.
Sci Data; 11(1): 494, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38744868

ABSTRACT

The standard of care for brain tumors is maximal safe surgical resection. Neuronavigation augments the surgeon's ability to achieve this but loses validity as surgery progresses due to brain shift. Moreover, gliomas are often indistinguishable from surrounding healthy brain tissue. Intraoperative magnetic resonance imaging (iMRI) and ultrasound (iUS) help visualize the tumor and brain shift. iUS is faster and easier to incorporate into surgical workflows but offers a lower contrast between tumorous and healthy tissues than iMRI. With the success of data-hungry Artificial Intelligence algorithms in medical image analysis, the benefits of sharing well-curated data cannot be overstated. To this end, we provide the largest publicly available MRI and iUS database of surgically treated brain tumors, including gliomas (n = 92), metastases (n = 11), and others (n = 11). This collection contains 369 preoperative MRI series, 320 3D iUS series, 301 iMRI series, and 356 segmentations collected from 114 consecutive patients at a single institution. This database is expected to help brain shift and image analysis research and neurosurgical training in interpreting iUS and iMRI.


Subject(s)
Brain Neoplasms; Databases, Factual; Magnetic Resonance Imaging; Multimodal Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/surgery; Brain/diagnostic imaging; Brain/surgery; Glioma/diagnostic imaging; Glioma/surgery; Ultrasonography; Neuronavigation/methods
3.
medRxiv; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-37745329

ABSTRACT

The standard of care for brain tumors is maximal safe surgical resection. Neuronavigation augments the surgeon's ability to achieve this but loses validity as surgery progresses due to brain shift. Moreover, gliomas are often indistinguishable from surrounding healthy brain tissue. Intraoperative magnetic resonance imaging (iMRI) and ultrasound (iUS) help visualize the tumor and brain shift. iUS is faster and easier to incorporate into surgical workflows but offers a lower contrast between tumorous and healthy tissues than iMRI. With the success of data-hungry Artificial Intelligence algorithms in medical image analysis, the benefits of sharing well-curated data cannot be overstated. To this end, we provide the largest publicly available MRI and iUS database of surgically treated brain tumors, including gliomas (n=92), metastases (n=11), and others (n=11). This collection contains 369 preoperative MRI series, 320 3D iUS series, 301 iMRI series, and 356 segmentations collected from 114 consecutive patients at a single institution. This database is expected to help brain shift and image analysis research and neurosurgical training in interpreting iUS and iMRI.

4.
Trop Med Infect Dis; 8(8), 2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37624349

ABSTRACT

OBJECTIVE: To measure the economic impacts of the longer pre-XDR-TB treatment regimen and the shorter BEAT-TB India regimen. METHODS: The economic impacts of the current 18-month pre-XDR-TB treatment regimen and the 6-9-month BEAT-TB India regimen were evaluated with a decision tree model from a societal perspective. The incremental costs and quality-adjusted life years (QALYs) gained from the introduction of the BEAT-TB regimen for pre-XDR-TB patients were estimated. RESULTS: For a cohort of 1000 pre-XDR-TB patients, the BEAT-TB India regimen yielded more undiscounted life years (40,548 vs. 21,009) and more QALYs (27,633 vs. 15,812) than the 18-month regimen. It was also cost-saving, with an incremental cost of USD -128,651 compared to the 18-month regimen. The analysis did not consider the possibility of reduced TB recurrence after the BEAT-TB regimen, so it may have underestimated the benefits. CONCLUSION: As a lower-cost intervention with improved health outcomes, the BEAT-TB India regimen is dominant compared to the 18-month regimen.
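The dominance conclusion follows directly from the incremental figures reported above; a minimal check in Python, using only the numbers given in the abstract (absolute per-arm cost totals are not reported there):

```python
# Dominance check for cost-effectiveness, using the abstract's incremental
# figures for a cohort of 1000 pre-XDR-TB patients.
incremental_cost_usd = -128_651            # BEAT-TB India minus 18-month regimen
qalys_beat_tb, qalys_18_month = 27_633, 15_812
incremental_qalys = qalys_beat_tb - qalys_18_month    # 11,821 QALYs gained

if incremental_cost_usd <= 0 and incremental_qalys > 0:
    verdict = "dominant (cheaper and more effective; no ICER needed)"
else:
    verdict = f"ICER = {incremental_cost_usd / incremental_qalys:.2f} USD/QALY"
print(verdict)
```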

5.
IEEE J Biomed Health Inform; 27(9): 4352-4361, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37276107

ABSTRACT

Lung ultrasound (LUS) is an important imaging modality used by emergency physicians to assess pulmonary congestion at the patient bedside. B-line artifacts in LUS videos are key findings associated with pulmonary congestion. Not only can the interpretation of LUS be challenging for novice operators, but visual quantification of B-lines also remains subject to observer variability. In this work, we investigate the strengths and weaknesses of multiple deep learning approaches for automated B-line detection and localization in LUS videos. We curate and publish BEDLUS, a new ultrasound dataset comprising 1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines. Based on this dataset, we present a benchmark of established deep learning methods applied to the task of B-line detection. To pave the way for interpretable quantification of B-lines, we propose a novel "single-point" approach to B-line localization using only the point of origin. Our results show that (a) the area under the receiver operating characteristic curve ranges from 0.864 to 0.955 for the benchmarked detection methods, (b) within this range, the best performance is achieved by models that leverage multiple successive frames as input, and (c) the proposed single-point approach to B-line localization reaches an F1-score of 0.65, performing on par with the inter-observer agreement. The dataset and developed methods can facilitate further biomedical research on automated interpretation of lung ultrasound, with the potential to expand its clinical utility.
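A hedged sketch of how an F1-score for the "single-point" localization task could be computed, assuming a predicted B-line origin counts as a true positive when it falls within a distance tolerance of an unmatched annotated origin. The tolerance and the greedy matching below are illustrative assumptions, not the paper's evaluation protocol.

```python
# Point-based precision/recall/F1 with greedy matching under a pixel tolerance.
import numpy as np

def point_f1(pred_pts, gt_pts, tol=10.0):
    pred_pts, gt_pts = np.asarray(pred_pts, float), np.asarray(gt_pts, float)
    matched = np.zeros(len(gt_pts), dtype=bool)
    tp = 0
    for p in pred_pts:
        if len(gt_pts) == 0:
            break
        d = np.linalg.norm(gt_pts - p, axis=1)
        d[matched] = np.inf                  # each annotation matched at most once
        j = int(np.argmin(d))
        if d[j] <= tol:
            matched[j] = True
            tp += 1
    precision = tp / max(len(pred_pts), 1)
    recall = tp / max(len(gt_pts), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)

print(point_f1([(50, 120), (80, 200)], [(52, 118), (200, 40)], tol=10))
```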


Subject(s)
Deep Learning; Pulmonary Edema; Humans; Lung/diagnostic imaging; Ultrasonography/methods; Pulmonary Edema/diagnosis; Thorax
6.
Eur J Heart Fail; 25(7): 1166-1169, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37218619

ABSTRACT

AIM: Acute decompensated heart failure (ADHF) is the leading cause of cardiovascular hospitalizations in the United States. Detecting B-lines through lung ultrasound (LUS) can enhance clinicians' prognostic and diagnostic capabilities. Artificial intelligence/machine learning (AI/ML)-based automated guidance systems may allow novice users to apply LUS in clinical care. We investigated whether an AI/ML automated LUS congestion score correlates with experts' interpretations of B-line quantification from an external patient dataset. METHODS AND RESULTS: This was a secondary analysis of the BLUSHED-AHF study, which investigated the effect of LUS-guided therapy on patients with ADHF. In BLUSHED-AHF, LUS was performed and B-lines were quantified by ultrasound operators. Two experts then separately quantified the number of B-lines in each recorded ultrasound video clip. Here, an AI/ML-based lung congestion score (LCS) was calculated for all LUS clips from BLUSHED-AHF. Spearman correlation was computed between the LCS and the counts from each of the original three raters. A total of 3858 LUS clips from 130 patients were analysed. The LCS demonstrated good agreement with the two experts' B-line quantification scores (r = 0.894, 0.882). Both experts' B-line quantification scores had significantly better agreement with the LCS than with the ultrasound operator's score (p < 0.005, p < 0.001). CONCLUSION: The artificial intelligence/machine learning-based LCS correlated with expert-level B-line quantification. Future studies are needed to determine whether automated tools may assist novice users in LUS interpretation.
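A minimal sketch of the agreement computation described above: Spearman rank correlation between an automated lung congestion score and per-clip expert B-line counts. The values below are made-up placeholders, not BLUSHED-AHF data.

```python
# Spearman rank correlation between an automated score and expert counts.
from scipy.stats import spearmanr

lcs_scores    = [0.1, 0.4, 0.2, 0.9, 0.7, 0.3]   # AI/ML congestion score per clip
expert_counts = [0,   3,   1,   8,   6,   2  ]   # expert B-line count per clip

rho, p_value = spearmanr(lcs_scores, expert_counts)
print(f"Spearman r = {rho:.3f} (p = {p_value:.3g})")
```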


Subject(s)
Heart Failure; Pulmonary Edema; Humans; Artificial Intelligence; Heart Failure/diagnostic imaging; Heart Failure/complications; Lung/diagnostic imaging; Pulmonary Edema/diagnostic imaging; Pulmonary Edema/etiology; Ultrasonography/methods
7.
Med Image Comput Comput Assist Interv; 14228: 227-237, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38371724

ABSTRACT

We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. It then estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery. We evaluated our approach on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
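A conceptual sketch of the Expected Appearances idea: views are synthesized preoperatively for a sampled range of candidate poses, and intraoperatively the pose whose synthesized view is least dissimilar to the live microscope frame is selected (followed, in practice, by refinement). The toy data and the mean-squared-error dissimilarity below are placeholders for illustration, not the paper's rendering or matching pipeline.

```python
# Select the candidate pose whose pre-rendered "expected appearance" best
# matches the live intraoperative frame.
import numpy as np

def estimate_pose(live_frame, synthesized_views, poses):
    """Return the pose whose synthesized view minimizes MSE to the live frame."""
    errors = [np.mean((live_frame - v) ** 2) for v in synthesized_views]
    return poses[int(np.argmin(errors))]

# Example with toy 64x64 grayscale "views" for 100 candidate 6-DoF poses.
rng = np.random.default_rng(0)
poses = rng.uniform(-1, 1, size=(100, 6))
views = rng.random((100, 64, 64))
live = views[42] + 0.01 * rng.standard_normal((64, 64))
print(estimate_pose(live, views, poses))
```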

8.
Med Image Comput Comput Assist Interv; 2023: 448-458, 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-38655383

ABSTRACT

We introduce MHVAE, a deep hierarchical variational autoencoder (VAE) that synthesizes missing images from various modalities. Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi-modal images in a common latent representation while having the flexibility to handle incomplete image sets as input. Moreover, adversarial learning is employed to generate sharper images. Extensive experiments are performed on the challenging problem of joint intra-operative ultrasound (iUS) and Magnetic Resonance (MR) synthesis. Our model outperformed multi-modal VAEs, conditional GANs, and the current state-of-the-art unified method (ResViT) for synthesizing missing images, demonstrating the advantage of using a hierarchical latent representation and a principled probabilistic fusion operation. Our code is publicly available.
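A hedged sketch of one common way to fuse per-modality Gaussian posteriors into a single latent distribution (a product of experts with a unit-variance prior), which is one plausible realization of the "principled probabilistic fusion operation" mentioned above. The actual MHVAE formulation may differ; see the authors' public code.

```python
# Product-of-experts fusion of modality-specific Gaussian posteriors, which
# naturally handles missing modalities (simply omit their terms).
import torch

def product_of_experts(mus, logvars):
    """Fuse available posteriors N(mu_i, var_i) into one Gaussian (mu, logvar)."""
    precisions = [torch.exp(-lv) for lv in logvars]         # 1 / var_i
    total_prec = torch.stack(precisions).sum(dim=0) + 1.0    # + unit-variance prior
    mu = torch.stack([p * m for p, m in zip(precisions, mus)]).sum(dim=0) / total_prec
    return mu, torch.log(1.0 / total_prec)

# Fuse whichever of the MR / iUS encoders produced an output.
mu_mr, lv_mr = torch.zeros(1, 8), torch.zeros(1, 8)
mu_us, lv_us = torch.ones(1, 8), torch.zeros(1, 8)
mu, logvar = product_of_experts([mu_mr, mu_us], [lv_mr, lv_us])
```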

9.
Osteoarthr Cartil Open; 4(1): 100234, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36474467

ABSTRACT

Objective: Knee osteoarthritis (KOA) is a prevalent disease with a high economic and social cost. Magnetic resonance imaging (MRI) can be used to visualize many KOA-related structures, including bone marrow lesions (BMLs), which are associated with OA pain. Several semi-automated software methods have been developed to segment BMLs, but they rely on manual, labor-intensive steps, which can be costly for large clinical trials and other studies of KOA. The goal of our study was to develop and validate a more efficient method to quantify BML volume on knee MRI scans. Materials and methods: We applied a deep learning approach using a patch-based convolutional neural network (CNN), which was trained using 673 MRI data sets and the segmented BML masks obtained from a trained reader. Given the location of a BML provided by the reader, the network performed a fully automated segmentation of the BML, removing the need for tedious manual delineation. Accuracy was quantified using the Pearson correlation coefficient, by comparison to a second expert reader, and using the Dice Similarity Score (DSC). Results: The Pearson R2 value was 0.94, and we found similar agreement when comparing the two readers (R2 = 0.85) and each reader versus the DL model (R2 = 0.95 and R2 = 0.81). The average DSC was 0.70. Conclusions: We developed and validated a deep learning-based method to segment BMLs on knee MRI data sets. This has the potential to be a valuable tool for future large studies of KOA.
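For reference, a minimal implementation of the Dice Similarity Score (DSC) used above to compare a predicted bone marrow lesion mask with a reader's segmentation; the toy masks are placeholders.

```python
# Dice Similarity Score between two binary masks.
import numpy as np

def dice(pred_mask, gt_mask, eps=1e-8):
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

a = np.zeros((64, 64), dtype=np.uint8); a[10:30, 10:30] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[15:35, 15:35] = 1
print(f"DSC = {dice(a, b):.2f}")
```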

10.
Biomed Image Regist (2022); 13386: 103-115, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36383500

ABSTRACT

In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to instead use self-supervision, with excellent results in several registration benchmarks. These approaches utilize a loss function that penalizes the intensity differences between the fixed and moving images, along with a suitable regularizer on the deformation. However, since images typically have large untextured regions, merely maximizing similarity between the two images is not sufficient to recover the true deformation. This problem is exacerbated by texture in other regions, which introduces severe non-convexity into the landscape of the training objective and ultimately leads to overfitting. In this paper, we argue that the relative failure of supervised registration approaches can in part be blamed on the use of regular U-Nets, which are jointly tasked with feature extraction, feature matching and deformation estimation. Here, we introduce a simple but crucial modification to the U-Net that disentangles feature extraction and matching from deformation prediction, allowing the U-Net to warp the features, across levels, as the deformation field is evolved. With this modification, direct supervision using target warps begins to outperform self-supervision approaches that require segmentations, presenting new directions for registration when images do not have segmentations. We hope that our findings in this preliminary workshop paper will re-ignite research interest in supervised image registration techniques. Our code is publicly available from http://github.com/balbasty/superwarp.
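A hedged sketch of the core modification described above: warping intermediate feature maps with the current deformation estimate so that feature extraction and matching are decoupled from deformation prediction. This illustration uses a dense pixel-displacement field and torch.nn.functional.grid_sample; it is not the SuperWarp code.

```python
# Warp a feature map with a dense displacement field (in pixels).
import torch
import torch.nn.functional as F

def warp_features(feat, disp):
    """feat: (N, C, H, W); disp: (N, 2, H, W) displacement in pixels (x, y)."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float()             # identity grid (H, W, 2)
    grid = base.unsqueeze(0) + disp.permute(0, 2, 3, 1)       # add displacement
    # Normalize to [-1, 1] as required by grid_sample.
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(feat, grid, align_corners=True)

warped = warp_features(torch.randn(1, 16, 32, 32), torch.zeros(1, 2, 32, 32))
```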

11.
Bioinformatics; 38(7): 2015-2021, 2022 Mar 28.
Article in English | MEDLINE | ID: mdl-35040929

ABSTRACT

MOTIVATION: Mass spectrometry imaging (MSI) provides rich biochemical information in a label-free manner and therefore holds promise to substantially impact current practice in disease diagnosis. However, the complex nature of MSI data poses computational challenges in its analysis. The complexity of the data arises from its large size, high dimensionality and spectral nonlinearity. Preprocessing, including peak picking, has been used to reduce raw data complexity; however, peak picking is sensitive to parameter selection that, perhaps prematurely, shapes the downstream analysis for tissue classification and ensuing biological interpretation. RESULTS: We propose a deep learning model, massNet, that provides the desired qualities of scalability, nonlinearity and speed in MSI data analysis. This deep learning model was used, without prior preprocessing and peak picking, to classify MSI data from a mouse brain harboring a patient-derived tumor. The massNet architecture enabled automatic learning of predictive features, and automated methods were incorporated to identify peaks with potential for tumor delineation. The model's performance was assessed using cross-validation, and the results demonstrate higher accuracy and a substantial gain in speed compared to the established classical machine learning method, the support vector machine. AVAILABILITY AND IMPLEMENTATION: https://github.com/wabdelmoula/massNet. The data underlying this article are available in the NIH Common Fund's National Metabolomics Data Repository (NMDR) Metabolomics Workbench under project ID PR001292 at http://dx.doi.org/10.21228/M8Q70T. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Deep Learning; Neoplasms; Animals; Mice; Mass Spectrometry/methods; Metabolomics/methods; Machine Learning; Neoplasms/diagnostic imaging
12.
IEEE Trans Med Imaging; 41(6): 1454-1467, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34968177

ABSTRACT

In this paper, we present a deep learning method, DDMReg, for accurate registration between diffusion MRI (dMRI) datasets. In dMRI registration, the goal is to spatially align brain anatomical structures while ensuring that local fiber orientations remain consistent with the underlying white matter fiber tract anatomy. DDMReg is a novel method that uses joint whole-brain and tract-specific information for dMRI registration. Based on the successful VoxelMorph framework for image registration, we propose a novel registration architecture that leverages not only whole brain information but also tract-specific fiber orientation information. DDMReg is an unsupervised method for deformable registration between pairs of dMRI datasets: it does not require nonlinearly pre-registered training data or the corresponding deformation fields as ground truth. We perform comparisons with four state-of-the-art registration methods on multiple independently acquired datasets from different populations (including teenagers, young and elderly adults) and different imaging protocols and scanners. We evaluate the registration performance by assessing the ability to align anatomically corresponding brain structures and ensure fiber spatial agreement between different subjects after registration. Experimental results show that DDMReg obtains significantly improved registration performance compared to the state-of-the-art methods. Importantly, we demonstrate successful generalization of DDMReg to dMRI data from different populations with varying ages and acquired using different acquisition protocols and different scanners.
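For context, a hedged sketch of the unsupervised registration objective used by VoxelMorph-style methods such as the one described above: an image dissimilarity term plus a smoothness penalty on the predicted deformation field. DDMReg's tract-specific fiber-orientation inputs and its multi-network architecture are omitted; this is an illustration only.

```python
# Unsupervised registration loss: MSE dissimilarity + flow-gradient smoothness.
import torch

def registration_loss(warped_moving, fixed, flow, lam=0.01):
    similarity = torch.mean((warped_moving - fixed) ** 2)
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]    # gradients along x
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]    # gradients along y
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]    # gradients along z
    smoothness = dx.pow(2).mean() + dy.pow(2).mean() + dz.pow(2).mean()
    return similarity + lam * smoothness

loss = registration_loss(torch.rand(1, 1, 32, 32, 32),
                         torch.rand(1, 1, 32, 32, 32),
                         torch.zeros(1, 3, 32, 32, 32))
```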


Subject(s)
Deep Learning; White Matter; Adolescent; Adult; Aged; Brain/anatomy & histology; Brain/diagnostic imaging; Diffusion Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods
13.
Article in English | MEDLINE | ID: mdl-37250854

ABSTRACT

To tackle the difficulty associated with the ill-posed nature of the image registration problem, regularization is often used to constrain the solution space. For most learning-based registration approaches, the regularization usually has a fixed weight and only constrains the spatial transformation. This convention has two limitations: (i) besides the laborious grid search for the optimal fixed weight, the regularization strength for a specific image pair should depend on the content of the images, so a "one value fits all" training scheme is not ideal; (ii) only spatially regularizing the transformation may neglect some informative clues related to the ill-posedness. In this study, we propose a mean-teacher based registration framework, which incorporates an additional temporal consistency regularization term by encouraging the teacher model's prediction to be consistent with that of the student model. More importantly, instead of searching for a fixed weight, the teacher automatically adjusts the weights of the spatial regularization and the temporal consistency regularization by taking advantage of the transformation uncertainty and the appearance uncertainty. Extensive experiments on challenging abdominal CT-MRI registration show that our training strategy can advance the original learning-based method in terms of efficient hyperparameter tuning and a better tradeoff between accuracy and smoothness.
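A hedged sketch of the mean-teacher mechanism referred to above: the teacher's weights track an exponential moving average (EMA) of the student's, and a consistency term penalizes disagreement between their predictions. The uncertainty-based weighting of the two regularization terms described in the abstract is omitted here, and the toy convolutions merely stand in for registration networks.

```python
# Mean-teacher: EMA weight update plus a prediction-consistency loss.
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

def consistency_loss(student_flow, teacher_flow):
    return torch.mean((student_flow - teacher_flow.detach()) ** 2)

# Toy models standing in for registration networks predicting a flow field.
student = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)
teacher = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)
teacher.load_state_dict(student.state_dict())

pair = torch.randn(1, 2, 32, 32)          # fixed + moving image channels
loss = consistency_loss(student(pair), teacher(pair))
loss.backward()
ema_update(teacher, student)
```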

14.
IEEE Trans Biomed Eng; 69(4): 1310-1317, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34543188

ABSTRACT

OBJECTIVE: A craniotomy is the removal of part of the skull to allow surgeons to access the brain and treat tumors. When the brain is accessed, tissue deformation occurs and can negatively influence the outcome of the surgical procedure. In this work, we present a novel Augmented Reality neurosurgical system to superimpose preoperative 3D meshes derived from MRI onto a view of the brain surface acquired during surgery. METHODS: Our method uses cortical vessels as the main features to drive a rigid and then non-rigid 3D/2D registration. We first use a feature extractor network to produce probability maps that are fed to a pose estimator network to infer the 6-DoF rigid pose. Then, to account for brain deformation, we add a non-rigid refinement step formulated as a Shape-from-Template problem with physics-based constraints, which helps propagate the deformation to the sub-cortical level and update the tumor location. RESULTS: We tested our method retrospectively on 6 clinical datasets and obtained low pose error, and showed on a synthetic dataset that considerable brain shift compensation and low TRE can be achieved at cortical and sub-cortical levels. CONCLUSION: The results show that our solution achieved accuracy below actual clinical errors, demonstrating the feasibility of practical use of our system. SIGNIFICANCE: This work shows that we can provide coherent Augmented Reality visualization of 3D cortical vessels observed through the craniotomy using a single camera view, and that cortical vessels provide strong features for performing both rigid and non-rigid registration.


Subject(s)
Augmented Reality; Neurosurgery; Surgery, Computer-Assisted; Brain/diagnostic imaging; Brain/surgery; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging; Retrospective Studies; Surgery, Computer-Assisted/methods
15.
IEEE Trans Med Imaging; 41(4): 836-845, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34699353

ABSTRACT

We propose a novel pairwise distance measure between image keypoint sets, for the purpose of large-scale medical image indexing. Our measure generalizes the Jaccard index to account for soft set equivalence (SSE) between keypoint elements, via an adaptive kernel framework modeling uncertainty in keypoint appearance and geometry. A new kernel is proposed to quantify the variability of keypoint geometry in location and scale. Our distance measure may be estimated between O(N^2) image pairs in [Formula: see text] operations via keypoint indexing. Experiments report the first results for the task of predicting family relationships from medical images, using 1010 T1-weighted MRI brain volumes of 434 families, including monozygotic and dizygotic twins, siblings and half-siblings sharing 100%-25% of their polymorphic genes. Soft set equivalence and the keypoint geometry kernel improve upon standard hard set equivalence (HSE) and appearance kernels alone in predicting family relationships. Monozygotic twin identification is near 100%, and three subjects with uncertain genotyping are automatically paired with their self-reported families, the first reported practical application of image-based family identification. Our distance measure can also be used to predict group categories; sex is predicted with an AUC of 0.97. Software is provided for efficient fine-grained curation of large, generic image datasets.
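A hedged sketch of a "soft" Jaccard index between two keypoint sets, in which hard set membership is replaced by a Gaussian kernel on keypoint descriptors. The paper's adaptive kernel additionally models keypoint location and scale and uses indexing for scalability; neither is reproduced here.

```python
# Soft Jaccard between keypoint sets via a Gaussian kernel on descriptors.
import numpy as np

def soft_jaccard(desc_a, desc_b, sigma=1.0):
    """desc_a: (Na, D), desc_b: (Nb, D) keypoint descriptors."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))          # soft equivalence in [0, 1]
    intersection = 0.5 * (k.max(axis=1).sum() + k.max(axis=0).sum())
    union = len(desc_a) + len(desc_b) - intersection
    return intersection / union

a, b = np.random.rand(40, 64), np.random.rand(55, 64)
print(f"soft Jaccard distance = {1.0 - soft_jaccard(a, b):.3f}")
```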


Subject(s)
Magnetic Resonance Imaging; Twins, Monozygotic; Humans; Neuroimaging; Software
16.
J Clin Tuberc Other Mycobact Dis; 25: 100277, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34545343

ABSTRACT

The COVID-19 pandemic has impacted health systems and health programs across the world. For tuberculosis (TB), it is predicted to set back progress by at least twelve years. Public-private mix (PPM) has made a vital contribution to reaching End TB targets, with a ten-fold rise in TB notifications from private providers between 2012 and 2019. This is due in large part to the efforts of intermediary agencies, which aggregate demand from private providers. The COVID-19 pandemic has put these gains at risk over the past year. In this rapid assessment, representatives of 15 intermediary agencies from the seven countries considered the highest priority for PPM in TB care (the Big Seven) share their views on the impact of COVID-19 on their programs, the private providers operating under their PPM schemes, and their private TB clients. All intermediaries reported a drop in TB testing and notifications, and the closure of some private practices. While travel restrictions and the fear of contracting COVID-19 were the main contributing factors, there were also unanticipated expenses for private providers, which were transferred to patients via increased prices. Intermediaries also had their routine activities disrupted and had to shift tasks and budgets to meet the new needs. However, the intermediaries and their partners rapidly adapted, including through increased use of digital tools, patient-centric services, and ancillary support for private providers. Despite many setbacks, the COVID-19 pandemic has underlined the importance of effective private sector engagement. The robust response to COVID-19 has shown what is possible for ending TB with a similar approach, augmented by the digital revolution in treatment and diagnostics and the push to decentralize health services.

17.
Nat Commun; 12(1): 5544, 2021 Sep 20.
Article in English | MEDLINE | ID: mdl-34545087

ABSTRACT

Mass spectrometry imaging (MSI) is an emerging technology that holds potential for improving biomarker discovery, metabolomics research, pharmaceutical applications and clinical diagnosis. Despite the many solutions that have been developed, the large data size and high-dimensional nature of MSI, especially for 3D datasets, still pose computational and memory complexities that hinder accurate identification of biologically relevant molecular patterns. Moreover, the subjectivity in the selection of parameters for conventional pre-processing approaches can lead to bias. Therefore, we assess whether a probabilistic generative model based on a fully connected variational autoencoder can be used for unsupervised analysis and peak learning of MSI data to uncover hidden structures. The resulting msiPL method learns and visualizes the underlying non-linear spectral manifold, revealing biologically relevant clusters of tissue anatomy in a mouse kidney and tumor heterogeneity in human prostatectomy tissue, colorectal carcinoma, and a glioblastoma mouse model, with identification of the underlying m/z peaks. The method is applied to the analysis of MSI datasets ranging from 3.3 to 78.9 GB, without prior pre-processing and peak picking, and acquired using different mass spectrometers at different centers.
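A minimal sketch of a fully connected variational autoencoder of the kind described above, applied to individual mass spectra (one input vector per pixel). Layer sizes and the latent dimension are arbitrary placeholders; msiPL's actual architecture and peak-learning step are described in the paper.

```python
# Fully connected VAE over raw spectra (no peak picking).
import torch
import torch.nn as nn

class SpectralVAE(nn.Module):
    def __init__(self, n_mz_bins=10_000, latent_dim=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_mz_bins, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, n_mz_bins), nn.Softplus())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar

vae = SpectralVAE()
recon, mu, logvar = vae(torch.rand(4, 10_000))   # 4 spectra, 10,000 m/z bins each
```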


Subject(s)
Imaging, Three-Dimensional; Neural Networks, Computer; Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization; Algorithms; Animals; Connective Tissue/diagnostic imaging; Connective Tissue/pathology; Deep Learning; Disease Models, Animal; Humans; Kidney/diagnostic imaging; Metabolomics; Mice; Neoplasms/diagnostic imaging; Neoplasms/metabolism; Nonlinear Dynamics; Reproducibility of Results; alpha-Defensins/metabolism
18.
Article in English | MEDLINE | ID: mdl-34367471

ABSTRACT

The loss function of an unsupervised multimodal image registration framework has two terms: a similarity metric and a regularization term. In the deep learning era, researchers have proposed many approaches to automatically learn the similarity metric, which has been shown to be effective in improving registration performance. However, for the regularization term, most existing multimodal registration approaches still use a hand-crafted formula to impose artificial properties on the estimated deformation field. In this work, we propose a unimodal cyclic regularization training pipeline, which learns task-specific prior knowledge from simpler unimodal registration to constrain the deformation field of multimodal registration. In experiments on abdominal CT-MR registration, the proposed method yields better results than conventional regularization methods, especially for severely deformed local regions.

19.
Med Image Anal; 69: 101939, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33388458

ABSTRACT

In this work, we propose a theoretical framework based on maximum profile likelihood for pairwise and groupwise registration. Through an asymptotic analysis, we demonstrate that maximum profile likelihood registration minimizes an upper bound on the joint entropy of the distribution that generates the joint image data. Further, we derive the congealing method for groupwise registration by optimizing the profile likelihood in closed form and using coordinate ascent or iterative model refinement. We also describe a method for feature-based registration in the same framework and demonstrate it on groupwise tractographic registration. In the second part of the article, we propose an approach to deep metric registration that implements maximum likelihood registration using deep discriminative classifiers. We show further that this approach can be used for maximum profile likelihood registration, obviating the need for well-registered training data, using iterative model refinement. We demonstrate that the method succeeds on a challenging registration problem where the standard mutual information approach does not perform well.
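For reference, a sketch of the standard congealing objective that the abstract says is derived from profile likelihood: groupwise registration by minimizing the total entropy of the pixel stacks of the transformed images. The notation below is the commonly used form, not necessarily the article's.

```latex
% Congealing objective for groupwise registration of images I_1,...,I_N under
% transformations T_1,...,T_N: minimize the sum over pixel locations x of the
% entropy of the stack of transformed intensities at x.
\min_{T_1,\dots,T_N} \; \sum_{x} H\big(\{\, I_i(T_i(x)) \,\}_{i=1}^{N}\big)
```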


Subject(s)
Deep Learning; Algorithms; Entropy; Humans; Image Interpretation, Computer-Assisted; Imaging, Three-Dimensional
20.
Proc IEEE Int Symp Biomed Imaging; 2021: 443-447, 2021 Apr.
Article in English | MEDLINE | ID: mdl-36225596

ABSTRACT

Prostate cancer is the second most prevalent cancer in men worldwide. Deep neural networks have been successfully applied to prostate cancer diagnosis in magnetic resonance images (MRI). Pathology results from biopsy procedures are often used as ground truth to train such systems. There are several sources of noise in creating ground truth from biopsy data, including sampling and registration errors. We propose: 1) a fully convolutional neural network (FCN) to produce cancer probability maps across the whole prostate gland in MRI; 2) a Gaussian weighted loss function to train the FCN with sparse biopsy locations; 3) a probabilistic framework to model biopsy location uncertainty and adjust cancer probability given the deep model predictions. We assess the proposed method on 325 biopsy locations from 203 patients. We observe that the proposed loss improves the area under the receiver operating characteristic curve and that the biopsy location adjustment improves the sensitivity of the models.
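A hedged sketch of a Gaussian-weighted loss of the kind listed in item 2) above: per-pixel binary cross-entropy on the predicted cancer probability map, weighted by a Gaussian centered on the sparse biopsy location so that supervision is strongest where the pathology label is most reliable. The kernel width and exact weighting scheme are illustrative assumptions, not the paper's implementation.

```python
# Gaussian-weighted binary cross-entropy around a single biopsy location.
import torch
import torch.nn.functional as F

def gaussian_weighted_bce(prob_map, label, biopsy_xy, sigma=8.0):
    """prob_map: (H, W) predicted cancer probabilities; label: 0 or 1."""
    h, w = prob_map.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    d2 = (xs - biopsy_xy[0]).float() ** 2 + (ys - biopsy_xy[1]).float() ** 2
    weight = torch.exp(-d2 / (2.0 * sigma ** 2))
    target = torch.full_like(prob_map, float(label))
    bce = F.binary_cross_entropy(prob_map, target, reduction="none")
    return (weight * bce).sum() / weight.sum()

loss = gaussian_weighted_bce(torch.rand(128, 128), label=1, biopsy_xy=(64, 40))
```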
