1.
Front Comput Neurosci ; 18: 1365727, 2024.
Article in English | MEDLINE | ID: mdl-38784680

ABSTRACT

Automatic segmentation of vestibular schwannoma (VS) from routine clinical MRI has the potential to improve clinical workflow, facilitate treatment decisions, and assist patient management. Previous work demonstrated reliable automatic segmentation performance on datasets of standardized MRI images acquired for stereotactic surgery planning. However, diagnostic clinical datasets are generally more diverse and pose a larger challenge to automatic segmentation algorithms, especially when post-operative images are included. In this work, we show for the first time that automatic segmentation of VS on routine MRI datasets is also possible with high accuracy. We acquired and publicly released a curated multi-center routine clinical (MC-RC) dataset of 160 patients with a single sporadic VS. For each patient, up to three longitudinal MRI exams with contrast-enhanced T1-weighted (ceT1w; n = 124) and T2-weighted (T2w; n = 363) images were included, and the VS was manually annotated. Segmentations were produced and verified in an iterative process: (1) initial segmentation by a specialized company; (2) review by one of three trained radiologists; and (3) validation by an expert team. Inter- and intra-observer reliability experiments were performed on a subset of the dataset. A state-of-the-art deep learning framework was used to train segmentation models for VS. Model performance was evaluated on an MC-RC hold-out testing set, another public VS dataset, and a partially public dataset. The generalizability and robustness of the VS deep learning segmentation models increased significantly when trained on the MC-RC dataset. The Dice similarity coefficients (DSCs) achieved by our model are comparable to those achieved by trained radiologists in the inter-observer experiment. On the MC-RC testing set, median DSCs were 86.2 (9.5) for ceT1w, 89.4 (7.0) for T2w, and 86.4 (8.6) for combined ceT1w+T2w input images. On another public dataset, acquired for Gamma Knife stereotactic radiosurgery, our model achieved median DSCs of 95.3 (2.9), 92.8 (3.8), and 95.5 (3.3), respectively. In contrast, models trained on the Gamma Knife dataset did not generalize well, as illustrated by their significant underperformance on the MC-RC routine MRI dataset, highlighting the importance of data variability in the development of robust VS segmentation models. The MC-RC dataset and all trained deep learning models were made available online.
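Most results in this listing are reported as Dice similarity coefficients. As a reference point, a minimal Python sketch of the metric; the toy masks below are invented for illustration:

import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC between two binary masks: 2|A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Two overlapping toy masks
a = np.zeros((10, 10, 10), bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((10, 10, 10), bool); b[3:7, 3:7, 3:7] = True
print(f"DSC = {dice(a, b):.3f}")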

2.
Sci Data ; 11(1): 494, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38744868

ABSTRACT

The standard of care for brain tumors is maximal safe surgical resection. Neuronavigation augments the surgeon's ability to achieve this but loses validity as surgery progresses due to brain shift. Moreover, gliomas are often indistinguishable from surrounding healthy brain tissue. Intraoperative magnetic resonance imaging (iMRI) and ultrasound (iUS) help visualize the tumor and brain shift. iUS is faster and easier to incorporate into surgical workflows but offers lower contrast between tumorous and healthy tissues than iMRI. Given the success of data-hungry artificial intelligence algorithms in medical image analysis, the benefits of sharing well-curated data cannot be overstated. To this end, we provide the largest publicly available MRI and iUS database of surgically treated brain tumors, including gliomas (n = 92), metastases (n = 11), and others (n = 11). This collection contains 369 preoperative MRI series, 320 3D iUS series, 301 iMRI series, and 356 segmentations collected from 114 consecutive patients at a single institution. This database is expected to support research on brain shift and image analysis, as well as neurosurgical training in the interpretation of iUS and iMRI.
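A hypothetical sketch of inspecting one case from such a multimodal collection with nibabel; the directory layout and file names below are assumptions, not the dataset's actual naming scheme:

import nibabel as nib

case = "case_001"
preop_mri = nib.load(f"{case}/preop_t1_contrast.nii.gz")  # hypothetical path
ius = nib.load(f"{case}/ius_3d.nii.gz")                   # hypothetical path
seg = nib.load(f"{case}/tumor_seg.nii.gz")                # hypothetical path

# Print array dimensions and voxel spacing (mm) for each series
for name, img in [("MRI", preop_mri), ("iUS", ius), ("seg", seg)]:
    print(name, img.shape, img.header.get_zooms())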


Subjects
Brain Neoplasms; Databases, Factual; Magnetic Resonance Imaging; Multimodal Imaging; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/surgery; Brain/diagnostic imaging; Brain/surgery; Glioma/diagnostic imaging; Glioma/surgery; Ultrasonography; Neuronavigation/methods
4.
Eur Radiol ; 33(11): 8067-8076, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37328641

ABSTRACT

OBJECTIVES: Surgical planning for vestibular schwannoma surgery would benefit greatly from a robust method of delineating the facial-vestibulocochlear nerve complex with respect to the tumour. This study aimed to optimise a multi-shell readout-segmented diffusion-weighted imaging (rs-DWI) protocol and to develop a novel post-processing pipeline for delineating the facial-vestibulocochlear complex within the skull base region, evaluating its accuracy intraoperatively using neuronavigation and tracked electrophysiological recordings.

METHODS: In a prospective study of five healthy volunteers and five patients who underwent vestibular schwannoma surgery, rs-DWI was performed, and colour tissue maps (CTM) and probabilistic tractography of the cranial nerves were generated. In patients, the average symmetric surface distance (ASSD) and 95% Hausdorff distance (HD-95) were calculated with reference to the neuroradiologist-approved facial nerve segmentation. The accuracy of the patient results was assessed intraoperatively using neuronavigation and tracked electrophysiological recordings.

RESULTS: Using CTM alone, the facial-vestibulocochlear complex of the healthy volunteers was visualised on 9 of 10 sides. CTM were generated in all five patients with vestibular schwannoma, enabling the facial nerve to be accurately identified preoperatively. The mean ASSD between the two annotators' segmentations was 1.11 mm (SD 0.40) and the mean HD-95 was 4.62 mm (SD 1.78). The median distance from the nerve segmentation to a positive stimulation point was 1.21 mm (IQR 0.81-3.27 mm) and 2.03 mm (IQR 0.99-3.84 mm) for the two annotators, respectively.

CONCLUSIONS: rs-DWI may be used to acquire diffusion MRI data of the cranial nerves within the posterior fossa.

CLINICAL RELEVANCE STATEMENT: Readout-segmented diffusion-weighted imaging and colour tissue mapping provide 1-2 mm spatially accurate imaging of the facial-vestibulocochlear nerve complex, enabling accurate preoperative localisation of the facial nerve. This study evaluated the technique in five healthy volunteers and five patients with vestibular schwannoma.

KEY POINTS:
• Readout-segmented diffusion-weighted imaging (rs-DWI) with colour tissue mapping (CTM) visualised the facial-vestibulocochlear nerve complex on 9 of 10 sides in 5 healthy volunteers.
• Using rs-DWI and CTM, the facial nerve was visualised in all 5 patients with vestibular schwannoma, within 1.21-2.03 mm of its true intraoperative location.
• Reproducible results were obtained on different scanners.
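The two boundary metrics reported above can be computed from surface point sets. A minimal sketch, assuming the surfaces have already been extracted as point coordinates in millimetres, and using one common HD-95 variant (95th percentile over the pooled bidirectional distances):

import numpy as np
from scipy.spatial import cKDTree

def surface_distances(a_pts, b_pts):
    """Nearest-neighbour distance from each point in a_pts to the set b_pts."""
    return cKDTree(b_pts).query(a_pts)[0]

def assd(a_pts, b_pts):
    d_ab = surface_distances(a_pts, b_pts)
    d_ba = surface_distances(b_pts, a_pts)
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))

def hd95(a_pts, b_pts):
    d = np.concatenate([surface_distances(a_pts, b_pts),
                        surface_distances(b_pts, a_pts)])
    return np.percentile(d, 95)

rng = np.random.default_rng(0)
a = rng.random((200, 3)) * 50           # toy surface points (mm)
b = a + rng.normal(0, 0.5, a.shape)     # slightly perturbed copy of the surface
print(f"ASSD = {assd(a, b):.2f} mm, HD-95 = {hd95(a, b):.2f} mm")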


Subjects
Neuroma, Acoustic; Humans; Neuroma, Acoustic/diagnostic imaging; Neuroma, Acoustic/surgery; Neuroma, Acoustic/pathology; Prospective Studies; Diffusion Tensor Imaging/methods; Diffusion Magnetic Resonance Imaging; Facial Nerve/diagnostic imaging; Facial Nerve/pathology; Vestibulocochlear Nerve/pathology
5.
Med Image Anal ; 83: 102628, 2023 01.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of them have been validated either on private datasets or on small publicly available datasets, and these datasets have mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large, multi-class benchmark for unsupervised cross-modality domain adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS itself and the cochleas. Currently, diagnosis and surveillance of patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N = 105) and unpaired, non-annotated hrT2 scans (N = 105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans provided in the testing set (N = 137). The problem is particularly challenging given the large intensity distribution gap between the modalities and the small volume of the target structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard; among them, 16 teams from 9 countries submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score: 88.4% for the VS and 85.7% for the cochleas) and close to full supervision (median Dice score: 92.5% and 87.7%, respectively). All top-performing methods used an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images; a segmentation network was then trained using these generated images and the manual annotations provided for the source images.
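A schematic sketch of that shared two-stage recipe. The networks below are single-layer stubs standing in for a trained ceT1-to-hrT2 translator (e.g. a CycleGAN-style generator) and a segmentation backbone; they are not any participant's actual architecture:

import torch
import torch.nn as nn

# Stub networks: stand-ins for a trained image translator and a segmenter.
generator = nn.Conv3d(1, 1, kernel_size=3, padding=1)  # stub ceT1 -> pseudo-hrT2
segmenter = nn.Conv3d(1, 3, kernel_size=3, padding=1)  # stub: background/VS/cochlea
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(segmenter.parameters(), lr=1e-4)

# Stage 1: translate annotated source-domain images to the target appearance.
cet1 = torch.randn(2, 1, 32, 32, 32)           # toy annotated ceT1 batch
labels = torch.randint(0, 3, (2, 32, 32, 32))  # the source-domain annotations
with torch.no_grad():
    pseudo_hrt2 = generator(cet1)

# Stage 2: supervised training on (pseudo-hrT2, source label) pairs.
opt.zero_grad()
loss = loss_fn(segmenter(pseudo_hrt2), labels)
loss.backward()
opt.step()
print(f"stage-2 loss: {loss.item():.3f}")

The key property is that the manual annotations never leave the source domain; only the image appearance is translated.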


Subjects
Neuroma, Acoustic; Humans; Neuroma, Acoustic/diagnostic imaging
6.
Med Image Comput Comput Assist Interv ; 2023: 448-458, 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-38655383

ABSTRACT

We introduce MHVAE, a deep hierarchical variational autoencoder (VAE) that synthesizes missing images from various modalities. Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi-modal images in a common latent representation while retaining the flexibility to handle incomplete image sets as input. Adversarial learning is additionally employed to generate sharper images. Extensive experiments were performed on the challenging problem of joint intra-operative ultrasound (iUS) and magnetic resonance (MR) synthesis. Our model outperformed multi-modal VAEs, conditional GANs, and the current state-of-the-art unified method (ResViT) at synthesizing missing images, demonstrating the advantage of a hierarchical latent representation and a principled probabilistic fusion operation. Our code is publicly available.
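Multi-modal VAEs commonly fuse per-modality Gaussian posteriors with a product of experts, which is one plausible reading of the "principled probabilistic fusion" above; the sketch below shows that fusion rule only (omitting the prior expert a full model would include) and is an assumption, not MHVAE's exact formulation:

import torch

def product_of_experts(mus, logvars):
    """Fuse Gaussian posteriors from the available modalities into one
    Gaussian: precisions add, and means are precision-weighted."""
    precisions = [torch.exp(-lv) for lv in logvars]
    prec = sum(precisions)
    mu = sum(m * p for m, p in zip(mus, precisions)) / prec
    return mu, -torch.log(prec)  # fused mean and log-variance

# Toy posteriors from two modality encoders; either expert can simply be
# dropped from the lists when its image is missing, which is what makes
# this style of fusion robust to incomplete input sets.
mu_mr, lv_mr = torch.zeros(4), torch.zeros(4)
mu_us, lv_us = torch.ones(4), torch.full((4,), 0.5)
mu, lv = product_of_experts([mu_mr, mu_us], [lv_mr, lv_us])
print(mu, lv)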

7.
Med Image Comput Comput Assist Interv ; 14228: 227-237, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38371724

ABSTRACT

We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. It then estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected texture. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery, evaluating it on synthetic data and on retrospective data from 6 clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
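In spirit, the pose estimate is the transformation whose precomputed expected view best matches the live image. A toy sketch of that search, with random arrays standing in for rendered views and normalized cross-correlation as the assumed similarity measure (the paper's actual dissimilarity term and refinement strategy may differ):

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

# Precomputed offline (hypothetical): expected microscope views rendered
# from preoperative imaging for a sampled range of camera poses.
rng = np.random.default_rng(0)
poses = [{"id": i, "view": rng.random((64, 64))} for i in range(100)]

# A live intraoperative view: here, one expected view plus noise.
intraop_view = poses[42]["view"] + 0.05 * rng.standard_normal((64, 64))

# Pose estimate = the pose whose expected appearance best matches the live view.
best = max(poses, key=lambda p: ncc(p["view"], intraop_view))
print("estimated pose id:", best["id"])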

8.
Front Radiol ; 2: 837191, 2022.
Article in English | MEDLINE | ID: mdl-37492670

ABSTRACT

Objective: The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI, in order to improve clinical workflow and facilitate patient management.

Methods: We propose a method for Koos classification that relies not only on the available images but also on automatically generated segmentations. Artificial neural networks were trained and tested on manual tumor segmentations and ground-truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner with a standardized protocol. The first stage of the pipeline comprises a convolutional neural network (CNN) that segments the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble: the first applies a second CNN to the segmentation output to predict the Koos grade, while the other extracts handcrafted features that are passed to a random forest classifier. The pipeline's results were compared to those achieved by two neurosurgeons.

Results: Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy of the ensemble model on the testing sets were: MA-MAE = 0.11 ± 0.05, F1 = 89.3 ± 3.0%, accuracy = 89.3 ± 2.9%, comparable to the average performance of the two neurosurgeons: MA-MAE = 0.11 ± 0.08, F1 = 89.1 ± 5.2%, accuracy = 88.6 ± 5.8%. Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (k = 0.68) based on all 308 cases; intra-rater reliabilities of annotator 1 (k = 0.95) and annotator 2 (k = 0.82) were calculated using the weighted kappa metric with quadratic (Fleiss-Cohen) weights based on 15 randomly selected cases.

Conclusions: We developed the first AI framework to automatically classify VS according to the Koos scale. The excellent results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate the management of patients with VS. The models, code, and ground-truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.
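The quadratically weighted kappa used for the intra-rater analysis is available off the shelf in scikit-learn. A small sketch with invented grades (not the study's data):

from sklearn.metrics import cohen_kappa_score

# Toy Koos grades (1-4) assigned twice to the same 15 cases.
rating_1 = [1, 2, 2, 3, 4, 1, 2, 3, 3, 4, 2, 1, 4, 3, 2]
rating_2 = [1, 2, 3, 3, 4, 1, 2, 3, 2, 4, 2, 1, 4, 4, 2]

# Quadratic weights penalise disagreements by the square of the grade gap,
# matching the Fleiss-Cohen weighting described above.
kappa = cohen_kappa_score(rating_1, rating_2, weights="quadratic")
print(f"quadratically weighted kappa = {kappa:.2f}")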

9.
Sci Data ; 8(1): 286, 2021 10 28.
Article in English | MEDLINE | ID: mdl-34711849

ABSTRACT

Automatic segmentation of vestibular schwannomas (VS) from magnetic resonance imaging (MRI) could significantly improve clinical workflow and assist patient management. We previously developed a novel artificial intelligence framework based on a 2.5D convolutional neural network that achieved excellent results, equivalent to those of an independent human annotator. Here, we provide the first publicly available annotated imaging dataset of VS by releasing the data and annotations used in our prior work. This collection contains a labelled dataset of 484 MR images from 242 consecutive patients with a VS undergoing Gamma Knife stereotactic radiosurgery at a single institution. The data include all segmentations and contours used in treatment planning, as well as details of the administered dose. Our automated segmentation algorithm is implemented with MONAI, a freely available open-source framework for deep learning in healthcare imaging. These data will facilitate the development and validation of automated segmentation frameworks for VS and may also be used to develop other multi-modal algorithmic models.
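A generic MONAI segmentation setup of the kind such a release enables, assuming a recent MONAI version; the network configuration below is illustrative, not the paper's 2.5D attention model:

import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Illustrative 3D U-Net: two output channels for background/VS.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

image = torch.randn(1, 1, 64, 64, 32)            # toy ceT1 patch
label = torch.randint(0, 2, (1, 1, 64, 64, 32))  # toy binary mask
loss = loss_fn(model(image), label)
print(f"Dice loss on toy batch: {loss.item():.3f}")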


Subjects
Algorithms; Artificial Intelligence; Magnetic Resonance Imaging; Neuroma, Acoustic/diagnostic imaging; Adult; Aged; Aged, 80 and over; Female; Humans; Image Processing, Computer-Assisted; Male; Middle Aged; Neural Networks, Computer; Young Adult
10.
Int J Comput Assist Radiol Surg ; 16(10): 1653-1661, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34120269

ABSTRACT

PURPOSE: Accurate segmentation of brain resection cavities (RCs) aids postoperative analysis and the choice of follow-up treatment. Convolutional neural networks (CNNs) are the state of the art in image segmentation but require large annotated datasets for training. Annotation of 3D medical images is time-consuming, requires highly trained raters, and may suffer from high inter-rater variability. Self-supervised learning strategies can leverage unlabeled data for training.

METHODS: We developed an algorithm to simulate resections on preoperative magnetic resonance images (MRIs) and performed self-supervised training of a 3D CNN for RC segmentation using this simulation method. We curated EPISURG, a dataset comprising 430 postoperative and 268 preoperative MRIs from 430 refractory epilepsy patients who underwent resective neurosurgery. We fine-tuned our model on three small annotated datasets from different institutions and on the annotated images in EPISURG, comprising 20, 33, 19, and 133 subjects.

RESULTS: The model trained on data with simulated resections obtained median (interquartile range) Dice similarity coefficients (DSCs) of 81.7 (16.4), 82.4 (36.4), 74.9 (24.2), and 80.5 (18.7) on the four datasets. After fine-tuning, the DSCs were 89.2 (13.3), 84.1 (19.8), 80.2 (20.1), and 85.2 (10.8). For comparison, inter-rater agreement between human annotators in our previous study was 84.0 (9.9).

CONCLUSION: We present a self-supervised learning strategy for 3D CNNs that uses simulated RCs to accurately segment real RCs on postoperative MRI. Our method generalizes well to data from different institutions, pathologies and modalities. Source code, segmentation models and the EPISURG dataset are available at https://github.com/fepegar/resseg-ijcars.
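A deliberately crude stand-in for the simulation idea: carve a random spherical "cavity" into a preoperative volume and use the result as a free (image, label) training pair. The paper's actual algorithm is more sophisticated; everything below is invented for illustration:

import numpy as np

def simulate_resection(volume, rng):
    """Carve a spherical cavity into a volume; return the edited image and
    the cavity mask a network would learn to segment."""
    shape = np.array(volume.shape)
    center = rng.integers(shape // 4, 3 * shape // 4)  # keep cavity interior
    radius = rng.integers(5, 12)
    zz, yy, xx = np.ogrid[:shape[0], :shape[1], :shape[2]]
    cavity = ((zz - center[0])**2 + (yy - center[1])**2
              + (xx - center[2])**2) <= radius**2
    resected = volume.copy()
    resected[cavity] = volume.min()  # crude CSF-like fill intensity
    return resected, cavity.astype(np.uint8)

rng = np.random.default_rng(0)
preop = rng.random((64, 64, 64)).astype(np.float32)  # toy preoperative scan
image, label = simulate_resection(preop, rng)        # self-supervised pair
print(label.sum(), "voxels in the simulated cavity")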


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Brain/diagnostic imaging; Brain/surgery; Humans; Magnetic Resonance Imaging; Supervised Machine Learning
12.
Med Image Anal ; 67: 101862, 2021 01.
Article in English | MEDLINE | ID: mdl-33129151

ABSTRACT

Brain tissue segmentation from multimodal MRI is a key building block of many neuroimaging analysis pipelines. Established tissue segmentation approaches, however, were not developed to cope with large anatomical changes resulting from pathology, such as white matter lesions or tumours, and often fail in these cases. Meanwhile, with the advent of deep neural networks (DNNs), segmentation of brain lesions has matured significantly. However, few existing approaches allow for the joint segmentation of normal tissue and brain lesions. Developing a DNN for such a joint task is currently hampered by the fact that annotated datasets typically address only one specific task and rely on task-specific imaging protocols, including a task-specific set of imaging modalities. In this work, we propose a novel approach to building a joint tissue and lesion segmentation model from aggregated task-specific, hetero-modal, domain-shifted, and partially annotated datasets. Starting from a variational formulation of the joint problem, we show how the expected risk can be decomposed and optimised empirically. We exploit an upper bound of the risk to deal with heterogeneous imaging modalities across datasets. To deal with potential domain shift, we integrated and tested three conventional techniques based on data augmentation, adversarial learning, and pseudo-healthy generation. For each individual task, our joint approach reaches performance comparable to task-specific and fully supervised models. The proposed framework is assessed on two different types of brain lesions: white matter lesions and gliomas. In the latter case, lacking a joint ground truth for quantitative assessment, we propose and use a novel clinically relevant qualitative assessment methodology.
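One practical consequence of training on aggregated task-specific datasets is that each sample carries labels for only one task, so the loss term for the missing task must be masked out. A simplified sketch of that mechanic; the tensors and task flags are toy stand-ins, not the paper's model:

import torch
import torch.nn.functional as F

def joint_loss(tissue_logits, lesion_logits, tissue_gt, lesion_gt, task):
    """Accumulate only the loss terms for which this sample has labels."""
    loss = torch.tensor(0.0)
    if task == "tissue":    # sample from a tissue-annotated dataset
        loss = loss + F.cross_entropy(tissue_logits, tissue_gt)
    elif task == "lesion":  # sample from a lesion-annotated dataset
        loss = loss + F.binary_cross_entropy_with_logits(lesion_logits, lesion_gt)
    return loss

tissue_logits = torch.randn(2, 4, 16, 16, 16)   # 4 toy tissue classes
lesion_logits = torch.randn(2, 1, 16, 16, 16)   # binary lesion map
tissue_gt = torch.randint(0, 4, (2, 16, 16, 16))
lesion_gt = torch.randint(0, 2, (2, 1, 16, 16, 16)).float()
print(joint_loss(tissue_logits, lesion_logits, tissue_gt, lesion_gt, "tissue"))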


Subjects
Magnetic Resonance Imaging; Neuroimaging; Brain/diagnostic imaging; Humans; Learning; Neural Networks, Computer
13.
Int J Comput Assist Radiol Surg ; 15(9): 1445-1455, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32676869

ABSTRACT

PURPOSE: Management of vestibular schwannoma (VS) is based on tumour size as observed on contrast-enhanced T1-weighted MRI scans. Current clinical practice is to measure the diameter of the tumour in its largest dimension, although volumetric measurement has been shown to be a more accurate and more reliable measure of VS size. The reference approach to such volumetry is to manually segment the tumour, which is a time-intensive task. We suggest that semi-automated segmentation may be a clinically applicable solution to this problem and could replace linear measurement as the clinical standard.

METHODS: Using high-quality software available for academic purposes, we ran a comparative study of manual versus semi-automated segmentation of VS on MRI with 5 clinicians and scientists. We gathered both quantitative and qualitative data to compare the two approaches, including segmentation time, segmentation effort, and segmentation accuracy.

RESULTS: We found that the selected semi-automated segmentation approach is significantly faster (167 s vs 479 s), less temporally and physically demanding, and approximately equal in performance to manual segmentation, with some improvements in accuracy. There were some limitations, including algorithmic unpredictability and error, which produced more frustration and greater mental effort than manual segmentation.

CONCLUSION: We suggest that semi-automated segmentation could be applied clinically for volumetric measurement of VS on MRI. In future, the generic software could be refined for VS segmentation specifically, thereby improving accuracy.
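For paired timing comparisons like the one above, a non-parametric paired test is the usual choice. A sketch with invented per-case timings; the study's medians were 167 s vs 479 s, but these numbers are not its data:

import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired timings (seconds): the same cases segmented both ways.
manual = np.array([455, 512, 470, 490, 468, 501, 443, 486, 475, 495])
semi = np.array([160, 190, 150, 175, 168, 182, 155, 171, 166, 159])

# Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(manual, semi)
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")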


Subjects
Diagnosis, Computer-Assisted/methods; Machine Learning; Magnetic Resonance Imaging; Neurilemmoma/diagnostic imaging; Neuroma, Acoustic/diagnostic imaging; Pattern Recognition, Automated; Algorithms; Automation; Contrast Media/pharmacology; Humans; Image Processing, Computer-Assisted/methods; Neurilemmoma/pathology; Neuroimaging; Neuroma, Acoustic/pathology; Reproducibility of Results; Software
14.
J Neurosurg ; 134(1): 171-179, 2019 Dec 06.
Article in English | MEDLINE | ID: mdl-31812137

ABSTRACT

OBJECTIVE: Automatic segmentation of vestibular schwannomas (VSs) from MRI could significantly improve clinical workflow and assist patient management. Accurate tumor segmentation and volumetric measurements provide the best indicators for detecting subtle VS growth, but current techniques are labor-intensive, and dedicated software is not readily available in the clinical setting. The authors aimed to develop a novel artificial intelligence (AI) framework to be embedded in the clinical routine for automatic delineation and volumetry of VS.

METHODS: Imaging data (contrast-enhanced T1-weighted [ceT1] and high-resolution T2-weighted [hrT2] MR images) from all patients meeting the study's inclusion/exclusion criteria who had a single sporadic VS treated with Gamma Knife stereotactic radiosurgery were used to create a model. The authors developed a novel AI framework based on a 2.5D convolutional neural network (CNN) to exploit the different in-plane and through-plane resolutions encountered in standard clinical imaging protocols. They used a computational attention module to enable the CNN to focus on the small VS target and proposed supervision on the attention map for more accurate segmentation. The manually segmented tumor volume (also tested for interobserver variability) was used as the ground truth for training and evaluating the CNN. The Dice score, average symmetric surface distance (ASSD), and relative volume error (RVE) of the automatic segmentation results were measured against the manual segmentations to assess the model's accuracy.

RESULTS: Imaging data from all eligible patients (n = 243) were randomly split into 3 nonoverlapping groups for training (n = 177), hyperparameter tuning (n = 20), and testing (n = 46). Dice, ASSD, and RVE on the testing set for the respective input data types were: ceT1, 93.43%, 0.203 mm, 6.96%; hrT2, 88.25%, 0.416 mm, 9.77%; combined ceT1/hrT2, 93.68%, 0.199 mm, 7.03%. Given a 5% margin on the Dice score, the automated method achieved statistically equivalent performance to a human annotator using ceT1 images alone (p = 4e-13) and combined ceT1/hrT2 images (p = 7e-18) as inputs.

CONCLUSIONS: The authors developed a robust AI framework for automatically delineating and calculating VS tumor volume, achieving excellent results equivalent to those of an independent human annotator. This promising AI technology has the potential to improve the management of patients with VS and potentially other brain tumors.
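The RVE metric is easy to get wrong on anisotropic clinical scans if voxel counts are compared without accounting for spacing. A minimal sketch; the masks and spacing below are invented for illustration:

import numpy as np

def relative_volume_error(pred, ref, spacing_mm):
    """RVE = |V_pred - V_ref| / V_ref, with volumes in mm^3 so that the
    anisotropic voxel spacing of clinical scans is respected."""
    voxel_mm3 = float(np.prod(spacing_mm))
    v_pred = pred.sum() * voxel_mm3
    v_ref = ref.sum() * voxel_mm3
    return abs(v_pred - v_ref) / v_ref

pred = np.zeros((64, 64, 20), bool); pred[20:40, 20:40, 5:12] = True
ref = np.zeros((64, 64, 20), bool); ref[21:40, 20:41, 5:13] = True
spacing = (0.4, 0.4, 2.0)  # typical in-plane vs through-plane resolution (mm)
print(f"RVE = {relative_volume_error(pred, ref, spacing):.2%}")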
