1.
Sci Data ; 11(1): 494, 2024 May 14.
Article En | MEDLINE | ID: mdl-38744868

The standard of care for brain tumors is maximal safe surgical resection. Neuronavigation augments the surgeon's ability to achieve this but loses validity as surgery progresses due to brain shift. Moreover, gliomas are often indistinguishable from surrounding healthy brain tissue. Intraoperative magnetic resonance imaging (iMRI) and ultrasound (iUS) help visualize the tumor and brain shift. iUS is faster and easier to incorporate into surgical workflows but offers a lower contrast between tumorous and healthy tissues than iMRI. With the success of data-hungry artificial intelligence algorithms in medical image analysis, the benefits of sharing well-curated data cannot be overstated. To this end, we provide the largest publicly available MRI and iUS database of surgically treated brain tumors, including gliomas (n = 92), metastases (n = 11), and others (n = 11). This collection contains 369 preoperative MRI series, 320 3D iUS series, 301 iMRI series, and 356 segmentations collected from 114 consecutive patients at a single institution. This database is expected to support research on brain shift and image analysis, as well as neurosurgical training in the interpretation of iUS and iMRI.


Brain Neoplasms; Databases, Factual; Magnetic Resonance Imaging; Multimodal Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/surgery; Brain/diagnostic imaging; Brain/surgery; Glioma/diagnostic imaging; Glioma/surgery; Ultrasonography; Neuronavigation/methods
3.
IEEE J Biomed Health Inform ; 27(9): 4352-4361, 2023 09.
Article En | MEDLINE | ID: mdl-37276107

Lung ultrasound (LUS) is an important imaging modality used by emergency physicians to assess pulmonary congestion at the patient bedside. B-line artifacts in LUS videos are key findings associated with pulmonary congestion. Not only can the interpretation of LUS be challenging for novice operators, but visual quantification of B-lines remains subject to observer variability. In this work, we investigate the strengths and weaknesses of multiple deep learning approaches for automated B-line detection and localization in LUS videos. We curate and publish BEDLUS, a new ultrasound dataset comprising 1,419 videos from 113 patients with a total of 15,755 expert-annotated B-lines. Based on this dataset, we present a benchmark of established deep learning methods applied to the task of B-line detection. To pave the way for interpretable quantification of B-lines, we propose a novel "single-point" approach to B-line localization using only the point of origin. Our results show that (a) the area under the receiver operating characteristic curve ranges from 0.864 to 0.955 for the benchmarked detection methods, (b) within this range, the best performance is achieved by models that leverage multiple successive frames as input, and (c) the proposed single-point approach for B-line localization reaches an F1-score of 0.65, performing on par with the inter-observer agreement. The dataset and developed methods can facilitate further biomedical research on automated interpretation of lung ultrasound with the potential to expand its clinical utility.
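As a rough illustration of the two evaluation protocols above, the sketch below computes a video-level detection AUC and a single-point localization F1 with greedy point matching; the pixel tolerance and matching rule are our assumptions, not the paper's exact protocol.

```python
# Minimal sketch of the two metrics described above. The tolerance for
# matching predicted B-line origins to expert annotations is assumed.
import numpy as np
from sklearn.metrics import roc_auc_score

def detection_auc(video_labels, video_scores):
    """Video-level B-line detection AUC."""
    return roc_auc_score(video_labels, video_scores)

def single_point_f1(pred_points, gt_points, tol=10.0):
    """Greedy one-to-one matching of predicted origin points to
    ground-truth points within `tol` pixels (hypothetical tolerance)."""
    pred, gt = list(pred_points), list(gt_points)
    tp = 0
    for p in pred:
        if not gt:
            break
        d = [np.linalg.norm(np.array(p) - np.array(g)) for g in gt]
        j = int(np.argmin(d))
        if d[j] <= tol:
            tp += 1
            gt.pop(j)
    fp = len(pred) - tp
    fn = len(gt)
    return 2 * tp / max(2 * tp + fp + fn, 1)
```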


Deep Learning; Pulmonary Edema; Humans; Lung/diagnostic imaging; Ultrasonography/methods; Pulmonary Edema/diagnosis; Thorax
4.
Med Image Comput Comput Assist Interv ; 14228: 227-237, 2023 Oct.
Article En | MEDLINE | ID: mdl-38371724

We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. Our method estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery. We evaluated our approach on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
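A minimal sketch of the core idea, under our own assumptions (templates pre-rendered offline for a bank of candidate poses; normalized cross-correlation as the similarity measure, which may differ from the paper's choice):

```python
# Hedged sketch: pick the pose whose pre-synthesized expected view is
# most similar to the live microscope frame. NCC is an assumption.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def estimate_pose(frame, template_bank):
    """template_bank: list of (pose_params, synthesized_image) built
    preoperatively for a predicted range of transformations."""
    best_pose, best_score = None, -np.inf
    for pose, template in template_bank:
        s = ncc(frame, template)  # higher = more similar
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose
```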

5.
Med Image Comput Comput Assist Interv ; 2023: 448-458, 2023 Oct 13.
Article En | MEDLINE | ID: mdl-38655383

We introduce MHVAE, a deep hierarchical variational autoencoder (VAE) that synthesizes missing images from various modalities. Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi-modal images in a common latent representation while having the flexibility to handle incomplete image sets as input. Moreover, adversarial learning is employed to generate sharper images. Extensive experiments are performed on the challenging problem of joint intra-operative ultrasound (iUS) and Magnetic Resonance (MR) synthesis. Our model outperformed multi-modal VAEs, conditional GANs, and the current state-of-the-art unified method (ResViT) for synthesizing missing images, demonstrating the advantage of using a hierarchical latent representation and a principled probabilistic fusion operation. Our code is publicly available.
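As one plausible reading of the probabilistic fusion operation, the sketch below fuses whichever modality posteriors are present with a product of Gaussian experts; MHVAE's actual hierarchical formulation is richer than this single-level illustration.

```python
# Product-of-experts fusion over observed modality posteriors, with a
# standard-normal prior expert so any subset of modalities is valid.
# This is our illustrative assumption, not MHVAE's published code.
import torch

def poe_fusion(mus, logvars):
    """mus, logvars: non-empty lists of (B, D) tensors, one entry per
    observed modality. Missing modalities are simply omitted."""
    precisions = [torch.ones_like(mus[0])] + [torch.exp(-lv) for lv in logvars]
    means = [torch.zeros_like(mus[0])] + mus  # prior expert has zero mean
    total_prec = sum(precisions)
    mu = sum(p * m for p, m in zip(precisions, means)) / total_prec
    var = 1.0 / total_prec
    return mu, torch.log(var)
```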

6.
Biomed Image Regist (2022) ; 13386: 103-115, 2022 Jul.
Article En | MEDLINE | ID: mdl-36383500

In recent years, learning-based image registration methods have gradually moved away from direct supervision with target warps to instead use self-supervision, with excellent results in several registration benchmarks. These approaches utilize a loss function that penalizes the intensity differences between the fixed and moving images, along with a suitable regularizer on the deformation. However, since images typically have large untextured regions, merely maximizing similarity between the two images is not sufficient to recover the true deformation. This problem is exacerbated by texture in other regions, which introduces severe non-convexity into the landscape of the training objective and ultimately leads to overfitting. In this paper, we argue that the relative failure of supervised registration approaches can in part be blamed on the use of regular U-Nets, which are jointly tasked with feature extraction, feature matching and deformation estimation. Here, we introduce a simple but crucial modification to the U-Net that disentangles feature extraction and matching from deformation prediction, allowing the U-Net to warp the features, across levels, as the deformation field is evolved. With this modification, direct supervision using target warps begins to outperform self-supervision approaches that require segmentations, presenting new directions for registration when images do not have segmentations. We hope that our findings in this preliminary workshop paper will re-ignite research interest in supervised image registration techniques. Our code is publicly available from http://github.com/balbasty/superwarp.
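A hedged sketch of the modification as we read it: at each pyramid level, encoder features are resampled with the current deformation estimate before matching, so the network matches features rather than re-extracting them. The 2D setting and function names are our choices, not the repository's API.

```python
# Warp a feature map with a displacement field via grid_sample; this is
# the per-level building block the description above implies.
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """feat: (B, C, H, W); flow: (B, 2, H, W) displacement in pixels,
    channel 0 = x, channel 1 = y (our convention)."""
    B, _, H, W = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=feat.dtype, device=feat.device),
        torch.arange(W, dtype=feat.dtype, device=feat.device),
        indexing="ij",
    )
    x = xs.unsqueeze(0) + flow[:, 0]
    y = ys.unsqueeze(0) + flow[:, 1]
    # normalize sample locations to [-1, 1] for grid_sample
    gx = 2.0 * x / (W - 1) - 1.0
    gy = 2.0 * y / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)
```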

7.
Bioinformatics ; 38(7): 2015-2021, 2022 03 28.
Article En | MEDLINE | ID: mdl-35040929

MOTIVATION: Mass spectrometry imaging (MSI) provides rich biochemical information in a label-free manner and therefore holds promise to substantially impact current practice in disease diagnosis. However, the complex nature of MSI data poses computational challenges in its analysis. The complexity of the data arises from its large size, high dimensionality and spectral nonlinearity. Preprocessing, including peak picking, has been used to reduce raw data complexity; however, peak picking is sensitive to parameter selection that, perhaps prematurely, shapes the downstream analysis for tissue classification and ensuing biological interpretation. RESULTS: We propose a deep learning model, massNet, that provides the desired qualities of scalability, nonlinearity and speed in MSI data analysis. This deep learning model was used, without prior preprocessing and peak picking, to classify MSI data from a mouse brain harboring a patient-derived tumor. The massNet architecture enabled automatic learning of predictive features, and automated methods were incorporated to identify peaks with potential for tumor delineation. The model's performance was assessed using cross-validation, and the results demonstrate higher accuracy and a substantial gain in speed compared to the established classical machine learning method, support vector machine. AVAILABILITY AND IMPLEMENTATION: https://github.com/wabdelmoula/massNet. The data underlying this article are available in the NIH Common Fund's National Metabolomics Data Repository (NMDR) Metabolomics Workbench under project ID PR001292 with http://dx.doi.org/10.21228/M8Q70T. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Deep Learning; Neoplasms; Animals; Mice; Mass Spectrometry/methods; Metabolomics/methods; Machine Learning; Neoplasms/diagnostic imaging
8.
IEEE Trans Med Imaging ; 41(6): 1454-1467, 2022 06.
Article En | MEDLINE | ID: mdl-34968177

In this paper, we present a deep learning method, DDMReg, for accurate registration between diffusion MRI (dMRI) datasets. In dMRI registration, the goal is to spatially align brain anatomical structures while ensuring that local fiber orientations remain consistent with the underlying white matter fiber tract anatomy. DDMReg is a novel method that uses joint whole-brain and tract-specific information for dMRI registration. Based on the successful VoxelMorph framework for image registration, we propose a novel registration architecture that leverages not only whole brain information but also tract-specific fiber orientation information. DDMReg is an unsupervised method for deformable registration between pairs of dMRI datasets: it does not require nonlinearly pre-registered training data or the corresponding deformation fields as ground truth. We perform comparisons with four state-of-the-art registration methods on multiple independently acquired datasets from different populations (including teenagers, young and elderly adults) and different imaging protocols and scanners. We evaluate the registration performance by assessing the ability to align anatomically corresponding brain structures and ensure fiber spatial agreement between different subjects after registration. Experimental results show that DDMReg obtains significantly improved registration performance compared to the state-of-the-art methods. Importantly, we demonstrate successful generalization of DDMReg to dMRI data from different populations with varying ages and acquired using different acquisition protocols and different scanners.
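The sketch below illustrates the flavor of such an unsupervised objective: an image-similarity term on whole-brain inputs, additional terms on tract-specific maps, and a smoothness penalty on the displacement field. The MSE similarity and the weights are assumptions, not DDMReg's published loss.

```python
# Composite unsupervised registration loss, loosely in the spirit of
# the whole-brain + tract-specific design described above.
import torch
import torch.nn.functional as F

def smoothness(flow):
    """L2 penalty on spatial gradients of a (B, 3, D, H, W) field."""
    dz = flow[:, :, 1:] - flow[:, :, :-1]
    dy = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def composite_loss(warped_brain, fixed_brain,
                   warped_tracts, fixed_tracts,
                   flow, lam=1.0, mu=0.1):
    loss = F.mse_loss(warped_brain, fixed_brain)
    for w, f in zip(warped_tracts, fixed_tracts):
        loss = loss + lam * F.mse_loss(w, f) / max(len(fixed_tracts), 1)
    return loss + mu * smoothness(flow)
```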


Deep Learning; White Matter; Adolescent; Adult; Aged; Brain/anatomy & histology; Brain/diagnostic imaging; Diffusion Magnetic Resonance Imaging/methods; Humans; Image Processing, Computer-Assisted/methods
9.
Article En | MEDLINE | ID: mdl-37250854

In order to tackle the difficulty associated with the ill-posed nature of the image registration problem, regularization is often used to constrain the solution space. For most learning-based registration approaches, the regularization usually has a fixed weight and only constrains the spatial transformation. This convention has two limitations: (i) besides requiring a laborious grid search for the optimal fixed weight, the appropriate regularization strength for a specific image pair depends on the content of the images, so a "one value fits all" training scheme is not ideal; (ii) regularizing only the spatial transformation may neglect informative clues related to the ill-posedness. In this study, we propose a mean-teacher based registration framework, which incorporates an additional temporal consistency regularization term by encouraging the teacher model's prediction to be consistent with that of the student model. More importantly, instead of searching for a fixed weight, the teacher enables automatic adjustment of the weights of the spatial regularization and the temporal consistency regularization by taking advantage of the transformation uncertainty and appearance uncertainty. Extensive experiments on challenging abdominal CT-MRI registration show that our training strategy improves on the original learning-based method, offering more efficient hyperparameter tuning and a better tradeoff between accuracy and smoothness.
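A minimal sketch of the mean-teacher ingredient, with the uncertainty-driven weighting summarized by a single assumed scalar `w_cons`:

```python
# The teacher is an exponential moving average (EMA) of the student;
# a consistency term pulls the student's predicted transformation
# toward the teacher's. Names and the fixed weight are assumptions.
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(alpha).add_(sp, alpha=1 - alpha)

def consistency_loss(student_flow, teacher_flow, w_cons=0.1):
    return w_cons * ((student_flow - teacher_flow.detach()) ** 2).mean()
```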

10.
Nat Commun ; 12(1): 5544, 2021 09 20.
Article En | MEDLINE | ID: mdl-34545087

Mass spectrometry imaging (MSI) is an emerging technology that holds potential for improving biomarker discovery, metabolomics research, pharmaceutical applications and clinical diagnosis. Despite many solutions being developed, the large size and high-dimensional nature of MSI data, especially 3D datasets, still pose computational and memory complexities that hinder accurate identification of biologically relevant molecular patterns. Moreover, the subjectivity in the selection of parameters for conventional pre-processing approaches can lead to bias. Therefore, we assess whether a probabilistic generative model based on a fully connected variational autoencoder can be used for unsupervised analysis and peak learning of MSI data to uncover hidden structures. The resulting msiPL method learns and visualizes the underlying non-linear spectral manifold, revealing biologically relevant clusters of tissue anatomy in a mouse kidney and tumor heterogeneity in human prostatectomy tissue, colorectal carcinoma, and a glioblastoma mouse model, with identification of underlying m/z peaks. The method is applied for the analysis of MSI datasets ranging from 3.3 to 78.9 GB, without prior pre-processing and peak picking, and acquired using different mass spectrometers at different centers.
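For readers unfamiliar with the model family, a compact fully connected VAE over raw spectra might look like the sketch below; layer sizes are placeholders, not msiPL's published architecture.

```python
# Minimal fully connected VAE over spectra (each sample: n_mz bins).
import torch
import torch.nn as nn

class SpectralVAE(nn.Module):
    def __init__(self, n_mz, latent=16, hidden=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_mz, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_mz))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    rec = ((recon - x) ** 2).sum(dim=1).mean()
    kld = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return rec + kld
```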


Imaging, Three-Dimensional; Neural Networks, Computer; Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization; Algorithms; Animals; Connective Tissue/diagnostic imaging; Connective Tissue/pathology; Deep Learning; Disease Models, Animal; Humans; Kidney/diagnostic imaging; Metabolomics; Mice; Neoplasms/diagnostic imaging; Neoplasms/metabolism; Nonlinear Dynamics; Reproducibility of Results; alpha-Defensins/metabolism
11.
Med Image Anal ; 69: 101939, 2021 04.
Article En | MEDLINE | ID: mdl-33388458

In this work, we propose a theoretical framework based on maximum profile likelihood for pairwise and groupwise registration. Through an asymptotic analysis, we demonstrate that maximum profile likelihood registration minimizes an upper bound on the joint entropy of the distribution that generates the joint image data. Further, we derive the congealing method for groupwise registration by optimizing the profile likelihood in closed form and using coordinate ascent or iterative model refinement. We also describe a method for feature-based registration in the same framework and demonstrate it on groupwise tractographic registration. In the second part of the article, we propose an approach to deep metric registration that implements maximum likelihood registration using deep discriminative classifiers. We show further that this approach can be used for maximum profile likelihood registration to remove the need for well-registered training data, using iterative model refinement. We demonstrate that the method succeeds on a challenging registration problem where the standard mutual information approach does not perform well.
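The link between maximum profile likelihood and joint entropy can be sketched in a few lines; the following uses our own notation and is not the paper's exact derivation.

```latex
% Hedged sketch in our own notation. T: transformations; \theta:
% nuisance model parameters; x_i(T): the i-th transformed joint image
% sample; \hat{p}: empirical distribution of the transformed data.
\hat{T} \;=\; \arg\max_{T}\,\max_{\theta}\;
  \frac{1}{N}\sum_{i=1}^{N}\log p\bigl(x_i(T);\theta\bigr)
% Since, for the empirical distribution,
\max_{\theta}\frac{1}{N}\sum_{i=1}^{N}\log p\bigl(x_i(T);\theta\bigr)
  \;=\; -H(\hat{p}) \;-\; \min_{\theta}\mathrm{KL}\bigl(\hat{p}\,\big\|\,p_{\theta}\bigr)
  \;\le\; -H(\hat{p}),
% we have H(\hat{p}) \le -\max_{\theta}\frac{1}{N}\sum_i \log p(x_i(T);\theta),
% so maximizing the profile likelihood over T minimizes an upper bound
% on the joint entropy, consistent with the claim in the abstract.
```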


Deep Learning; Algorithms; Entropy; Humans; Image Interpretation, Computer-Assisted; Imaging, Three-Dimensional
12.
Proc IEEE Int Symp Biomed Imaging ; 2021: 443-447, 2021 Apr.
Article En | MEDLINE | ID: mdl-36225596

Prostate cancer is the second most prevalent cancer in men worldwide. Deep neural networks have been successfully applied for prostate cancer diagnosis in magnetic resonance images (MRI). Pathology results from biopsy procedures are often used as ground truth to train such systems. There are several sources of noise in creating ground truth from biopsy data including sampling and registration errors. We propose: 1) A fully convolutional neural network (FCN) to produce cancer probability maps across the whole prostate gland in MRI; 2) A Gaussian weighted loss function to train the FCN with sparse biopsy locations; 3) A probabilistic framework to model biopsy location uncertainty and adjust cancer probability given the deep model predictions. We assess the proposed method on 325 biopsy locations from 203 patients. We observe that the proposed loss improves the area under the receiver operating characteristic curve and the biopsy location adjustment improves the sensitivity of the models.
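A minimal sketch of contribution 2), under assumed parameters: each biopsy core yields a Gaussian weight map centered on the recorded location, and the per-pixel cross-entropy is weighted accordingly.

```python
# Gaussian-weighted loss for sparse biopsy supervision. The weight
# peaks at each recorded core location and decays with distance,
# reflecting sampling/registration uncertainty. `sigma` is assumed.
import torch
import torch.nn.functional as F

def gaussian_weight_map(shape, centers, sigma=8.0, device="cpu"):
    H, W = shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    w = torch.zeros(H, W, device=device)
    for cy, cx in centers:
        g = torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        w = torch.maximum(w, g)
    return w

def weighted_bce(logits, labels, weights):
    loss = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    return (weights * loss).sum() / (weights.sum() + 1e-8)
```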

13.
Article En | MEDLINE | ID: mdl-36282980

We propose and demonstrate a representation learning approach by maximizing the mutual information between local features of images and text. The goal of this approach is to learn useful image representations by taking advantage of the rich information contained in the free text that describes the findings in the image. Our method trains image and text encoders by encouraging the resulting representations to exhibit high local mutual information. We make use of recent advances in mutual information estimation with neural network discriminators. We argue that the sum of local mutual information is typically a lower bound on the global mutual information. Our experimental results in the downstream image classification tasks demonstrate the advantages of using local features for image-text representation learning. Our code is available at: https://github.com/RayRuizhiLiao/mutual_info_img_txt.
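A hedged sketch of the training signal: a simple bilinear critic (our choice; the paper uses neural network discriminators) scores local image features against the paired report's text feature, with mismatched pairs from the batch as negatives, under the Jensen-Shannon MI bound.

```python
# Local image-text MI objective with a bilinear critic (assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalMICritic(nn.Module):
    def __init__(self, img_dim, txt_dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(img_dim, txt_dim) * 0.01)

    def forward(self, img_feats, txt_feat):
        # img_feats: (B, L, Di) local features; txt_feat: (B, Dt)
        return torch.einsum("bld,de,be->bl", img_feats, self.W, txt_feat)

def local_mi_loss(critic, img_feats, txt_feats):
    pos = critic(img_feats, txt_feats)                  # matched pairs
    neg = critic(img_feats, txt_feats.roll(1, dims=0))  # mismatched pairs
    # Jensen-Shannon MI lower bound (negated for minimization)
    return F.softplus(-pos).mean() + F.softplus(neg).mean()
```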

14.
IEEE Trans Med Imaging ; 39(12): 3868-3878, 2020 12.
Article En | MEDLINE | ID: mdl-32746129

Fully convolutional neural networks (FCNs), and in particular U-Nets, have achieved state-of-the-art results in semantic segmentation for numerous medical imaging applications. Moreover, batch normalization and Dice loss have been used successfully to stabilize and accelerate training. However, these networks are poorly calibrated, i.e., they tend to produce overconfident predictions for both correct and erroneous classifications, making them unreliable and hard to interpret. In this paper, we study predictive uncertainty estimation in FCNs for medical image segmentation. We make the following contributions: 1) We systematically compare cross-entropy loss with Dice loss in terms of segmentation quality and uncertainty estimation of FCNs; 2) We propose model ensembling for confidence calibration of the FCNs trained with batch normalization and Dice loss; 3) We assess the ability of calibrated FCNs to predict segmentation quality of structures and detect out-of-distribution test examples. We conduct extensive experiments across three medical image segmentation applications of the brain, the heart, and the prostate to evaluate our contributions. The results of this study offer considerable insight into predictive uncertainty estimation and out-of-distribution detection in medical image segmentation and provide practical recipes for confidence calibration. Moreover, we consistently demonstrate that model ensembling improves confidence calibration.
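As an illustration of the ensembling recipe and how calibration is typically checked, consider the sketch below; the bin count and the use of expected calibration error (ECE) are our assumptions.

```python
# Average softmax outputs of M independently trained models, then
# measure calibration with a standard binned ECE.
import torch

def ensemble_probs(models, x):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0)  # averaged class probabilities

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences, correct: 1D tensors over pixels/voxels."""
    ece = torch.zeros(())
    edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].float().mean()
            conf = confidences[mask].mean()
            ece += mask.float().mean() * (acc - conf).abs()
    return ece
```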


Image Processing, Computer-Assisted; Neural Networks, Computer; Brain/diagnostic imaging; Calibration; Humans; Male; Uncertainty
15.
Int J Comput Assist Radiol Surg ; 15(7): 1215-1223, 2020 Jul.
Article En | MEDLINE | ID: mdl-32372384

PURPOSE: The detection of clinically significant prostate cancer (PCa) is shown to greatly benefit from MRI-ultrasound fusion biopsy, which involves overlaying pre-biopsy MRI volumes (or targets) with real-time ultrasound images. In previous literature, machine learning models trained on either MRI or ultrasound data have been proposed to improve biopsy guidance and PCa detection. However, quantitative fusion of information from MRI and ultrasound has not been explored in depth in a large study. This paper investigates information fusion approaches between MRI and ultrasound to improve targeting of PCa foci in biopsies. METHODS: We build fully convolutional network (FCN) models using data from a newly proposed ultrasound modality, temporal enhanced ultrasound (TeUS), and apparent diffusion coefficient (ADC) maps from 107 patients with 145 biopsy cores. The architecture of our models is based on U-Net and U-Net with attention gates. Models are built using joint training through intermediate and late fusion of the data. We also build models with data from each modality, separately, to use as baselines. The performance is evaluated based on the area under the curve (AUC) for predicting clinically significant PCa. RESULTS: Using our proposed deep learning framework and intermediate fusion, integration of TeUS and ADC outperforms the individual modalities for cancer detection. We achieve an AUC of 0.76 for detection of all PCa foci, and 0.89 for PCa with larger foci. The results indicate that a shared representation across modalities outperforms the average of the unimodal predictions. CONCLUSION: We demonstrate the significant potential of multimodal integration of information from MRI and TeUS to improve PCa detection, which is essential for accurate targeting of cancer foci during biopsy. By using FCNs as the architecture of choice, we are able to predict the presence of clinically significant PCa in entire imaging planes immediately, without the need for region-based analysis. This reduces the overall computational time and enables future intra-operative deployment of this technology.
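A toy sketch of intermediate fusion as described: separate encoders per modality with features concatenated before a shared head. Channel sizes and depths are placeholders, not the paper's architecture.

```python
# Two-branch intermediate fusion: encode TeUS and ADC separately,
# concatenate features, predict a cancer-likelihood map.
import torch
import torch.nn as nn

class IntermediateFusion(nn.Module):
    def __init__(self, c_teus, c_adc, c_feat=64):
        super().__init__()
        self.enc_teus = nn.Sequential(
            nn.Conv2d(c_teus, c_feat, 3, padding=1), nn.ReLU())
        self.enc_adc = nn.Sequential(
            nn.Conv2d(c_adc, c_feat, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * c_feat, 1, 1)  # per-pixel logit

    def forward(self, teus, adc):
        z = torch.cat([self.enc_teus(teus), self.enc_adc(adc)], dim=1)
        return self.head(z)
```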


Magnetic Resonance Imaging/methods; Prostatic Neoplasms/diagnostic imaging; Ultrasonography/methods; Humans; Image-Guided Biopsy/methods; Male; Models, Theoretical; Prostatic Neoplasms/pathology
17.
Adv Neural Inf Process Syst ; 33: 8895-8906, 2020 Dec.
Article En | MEDLINE | ID: mdl-36415583

Ensembling is now recognized as an effective approach for increasing the predictive performance and calibration of deep networks. We introduce a new approach, Parameter Ensembling by Perturbation (PEP), that constructs an ensemble of parameter values as random perturbations of the optimal parameter set from training, drawn from a Gaussian with a single variance parameter. The variance is chosen to maximize the log-likelihood of the ensemble average (L) on the validation dataset. Empirically, and perhaps surprisingly, L has a well-defined maximum as the variance grows from zero (which corresponds to the baseline model). Conveniently, the calibration of predictions also tends to improve until the peak of L is reached. In most experiments, PEP provides a small improvement in performance and, in some cases, a substantial improvement in empirical calibration. We show that this "PEP effect" (the gain in log-likelihood) is related to the mean curvature of the likelihood function and the empirical Fisher information. Experiments on ImageNet pre-trained networks, including ResNet, DenseNet, and Inception, showed improved calibration and likelihood. We further observed a mild improvement in classification accuracy on these networks. Experiments on classification benchmarks such as MNIST and CIFAR-10 showed improved calibration and likelihood, as well as a relationship between the PEP effect and overfitting; this demonstrates that PEP can be used to probe the level of overfitting that occurred during training. In general, no special training procedure or network architecture is needed, and in the case of pre-trained networks, no additional training is needed.
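A minimal sketch of PEP under assumed names: perturb the trained weights with isotropic Gaussian noise, average the members' predicted probabilities, and scan the noise scale for the best validation log-likelihood.

```python
# Parameter Ensembling by Perturbation, sketched for a classifier.
import copy
import torch

def pep_log_likelihood(model, val_x, val_y, sigma, n_members=10):
    """Validation log-likelihood of the ensemble average at scale sigma."""
    probs = None
    for _ in range(n_members):
        member = copy.deepcopy(model)
        with torch.no_grad():
            for p in member.parameters():
                p.add_(torch.randn_like(p) * sigma)  # Gaussian perturbation
            p_m = torch.softmax(member(val_x), dim=1)
        probs = p_m if probs is None else probs + p_m
    probs = probs / n_members
    return torch.log(probs.gather(1, val_y.unsqueeze(1)) + 1e-12).mean()

# Scan sigma; sigma = 0 reproduces the baseline model:
# best_ll, best_sigma = max(
#     (pep_log_likelihood(model, x, y, s), s) for s in sigmas)
```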

18.
Med Image Comput Comput Assist Interv ; 12264: 735-744, 2020 Oct.
Article En | MEDLINE | ID: mdl-33778818

Intra-operative brain shift is a well-known phenomenon: a non-rigid deformation of brain tissues due to gravity and loss of cerebrospinal fluid, among other factors. It negatively influences surgical outcomes, since surgery often relies on pre-operative planning in which brain shift is not considered. We present a novel brain-shift-aware Augmented Reality method to align pre-operative 3D data onto the deformed brain surface viewed through a surgical microscope. We formulate our non-rigid registration as a Shape-from-Template problem. A pre-operative 3D wire-like deformable model is registered onto a single 2D image of the cortical vessels, which is automatically segmented. This 3D/2D registration drives the underlying brain structures, such as tumors, and compensates for the brain shift in sub-cortical regions. We evaluated our approach on simulated data and on real data from 6 patients. It achieved good quantitative and qualitative results, making it suitable for neurosurgical guidance.

19.
Article En | MEDLINE | ID: mdl-33840881

Brain shift is a non-rigid deformation of brain tissue driven by loss of cerebrospinal fluid, tissue manipulation and gravity, among other factors. This deformation can negatively influence the outcome of a surgical procedure, since surgical planning based on pre-operative images becomes less valid. We present a novel method to compensate for brain shift that maps pre-operative image data to the deformed brain during intra-operative neurosurgical procedures, thus increasing the likelihood of achieving a gross total resection while decreasing the risk to healthy tissue surrounding the tumor. Through a 3D/2D non-rigid registration process, a 3D articulated model derived from pre-operative imaging is aligned onto 2D images of the vessels viewed intra-operatively through the surgical microscope. The articulated 3D vessels constrain a volumetric biomechanical model of the brain to propagate cortical vessel deformation to the parenchyma and, in turn, to the tumor. The 3D/2D non-rigid registration is performed using an energy minimization approach that satisfies both projective and physical constraints. Our method is evaluated on real and synthetic data of the human brain, showing good quantitative and qualitative results and demonstrating its suitability for real-time surgical guidance.

20.
Int J Comput Assist Radiol Surg ; 15(1): 75-85, 2020 Jan.
Article En | MEDLINE | ID: mdl-31444624

PURPOSE: Brain shift during tumor resection can progressively invalidate the accuracy of neuronavigation systems and affect neurosurgeons' ability to achieve optimal resections. This paper compares two methods that have been presented in the literature to compensate for brain shift: a thin-plate spline deformation model and a finite element method (FEM). For this comparison, both methods are driven by identical sparse data. Specifically, both methods are driven by displacements between automatically detected and matched feature points from intraoperative 3D ultrasound (iUS). Both methods have been shown to be fast enough for intraoperative brain shift correction (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018; Luo et al. in J Med Imaging (Bellingham) 4(3):035003, 2017). However, the spline method requires no preprocessing and ignores physical properties of the brain, while the FEM method requires significant preprocessing and incorporates patient-specific physical and geometric constraints. The goal of this work was to explore the relative merits of these methods on recent clinical data. METHODS: Data acquired during 19 sequential tumor resections in Brigham and Women's Hospital's Advanced Multi-modal Image-Guided Operating Suite between December 2017 and October 2018 were considered for this retrospective study. Of these, 15 cases with a total of 24 iUS-to-iUS image pairs met the inclusion requirements. Automatic feature detection (Machado et al. in Int J Comput Assist Radiol Surg 13(10):1525-1538, 2018) was used to detect and match features in each pair of iUS images. Displacements between matched features were then used to drive both the spline model and the FEM method to compensate for brain shift between image acquisitions. The accuracies of the resultant deformation models were measured by comparing the displacements of manually identified landmarks before and after deformation. RESULTS: The initial subcortical registration error between preoperative MRI and the first iUS image averaged 5.3 ± 0.75 mm. The mean subcortical brain shift, measured using displacements between manually identified landmarks in pairs of iUS images, was 2.5 ± 1.3 mm. Our results showed that FEM was able to reduce subcortical registration error by a small but statistically significant amount (from 2.46 to 2.02 mm). A large variability in the results of the spline method prevented us from demonstrating either a statistically significant reduction in subcortical registration error after applying the spline method or a statistically significant difference between the results of the two methods. CONCLUSIONS: In this study, we observed less subcortical brain shift than has previously been reported in the literature (Frisken et al., in: Miller (ed) Biomechanics of the brain, Springer, Cham, 2019). This may be because we separated the initial misregistration between preoperative MRI and the first iUS image from our brain shift measurements, or it may reflect modern neurosurgical practices designed to reduce brain shift, including reduced craniotomy sizes and better control of intracranial pressure with the use of mannitol and other medications. The FEM method, with its geometric and biomechanical constraints, provided more consistent brain shift correction and better correction farther from the driving feature displacements than the simple spline model. The spline-based method was simpler and tended to give better results for small deformations. However, large variability in the spline results and the relatively small brain shift prevented this study from demonstrating a statistically significant difference between the results of the two methods.
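For concreteness, the spline half of the comparison can be sketched with SciPy's thin-plate-spline interpolator driven by matched feature displacements; this illustrates the general approach, not the authors' implementation.

```python
# Dense deformation from sparse matched features via a thin-plate
# spline. The FEM alternative would additionally require a
# patient-specific mesh and material model (its preprocessing burden).
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_deformation(src_pts, dst_pts, query_pts):
    """src_pts, dst_pts: (P, 3) matched feature positions (mm);
    query_pts: (Q, 3) locations where the deformation is evaluated."""
    displacements = dst_pts - src_pts
    tps = RBFInterpolator(src_pts, displacements,
                          kernel="thin_plate_spline")
    return query_pts + tps(query_pts)
```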


Brain Neoplasms/surgery; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Neuronavigation/methods; Neurosurgical Procedures/methods; Brain Neoplasms/diagnosis; Finite Element Analysis; Humans; Retrospective Studies; Ultrasonography/methods
...