Results 1 - 20 of 54
1.
Front Med (Lausanne) ; 11: 1445325, 2024.
Article in English | MEDLINE | ID: mdl-39371344

ABSTRACT

Neurodegenerative disorders such as Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) significantly impact brain function and cognition. Advanced neuroimaging techniques, particularly Magnetic Resonance Imaging (MRI), play a crucial role in diagnosing these conditions by detecting structural abnormalities. This study leverages the ADNI and OASIS datasets, renowned for their extensive MRI data, to develop effective models for detecting AD and MCI. The research conducted three sets of tests, multi-class classification (AD vs. Cognitively Normal (CN) vs. MCI) and two binary classifications (AD vs. CN, and MCI vs. CN), to evaluate the performance of models trained on the ADNI and OASIS datasets. Key preprocessing techniques such as Gaussian filtering, contrast enhancement, and resizing were applied to both datasets. Additionally, skull stripping using U-Net was applied to remove non-brain tissue. Several prominent deep learning architectures, including DenseNet-201, EfficientNet-B0, ResNet-50, ResNet-101, and ResNet-152, were investigated to identify subtle patterns associated with AD and MCI. Transfer learning was employed to enhance model performance, leveraging pre-trained models for improved AD and MCI detection. ResNet-101 exhibited superior performance compared to other models, achieving 98.21% accuracy on the ADNI dataset and 97.45% accuracy on the OASIS dataset in multi-class classification tasks encompassing AD, CN, and MCI. It also performed well in binary classification tasks distinguishing AD from CN. ResNet-152 excelled particularly in binary classification between MCI and CN on the OASIS dataset. These findings underscore the utility of deep learning models in accurately identifying and distinguishing neurodegenerative diseases, showcasing their potential for enhancing clinical diagnosis and treatment monitoring.
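The preprocessing chain named in this abstract (Gaussian filtering, contrast enhancement, resizing) can be sketched with scipy.ndimage; the filter width, percentile window, and 128x128 target size below are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
from scipy import ndimage

def preprocess_slice(img, out_shape=(128, 128), sigma=1.0):
    """Denoise, contrast-stretch, and resize one MRI slice."""
    img = img.astype(np.float32)
    # Gaussian filtering suppresses high-frequency noise.
    img = ndimage.gaussian_filter(img, sigma=sigma)
    # Simple contrast enhancement: percentile-based intensity stretching.
    lo, hi = np.percentile(img, (1, 99))
    img = np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
    # Resize to the network's expected input shape (linear interpolation).
    zoom = (out_shape[0] / img.shape[0], out_shape[1] / img.shape[1])
    return ndimage.zoom(img, zoom, order=1)

slice_ = np.random.rand(91, 109).astype(np.float32)
out = preprocess_slice(slice_)
```

The order of operations matters: stretching after smoothing keeps the percentile estimate stable against noise.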

2.
Article in English | MEDLINE | ID: mdl-39371473

ABSTRACT

Skull-stripping is the removal of background and non-brain anatomical features from brain images. While many skull-stripping tools exist, few target pediatric populations. With the emergence of multi-institutional pediatric data acquisition efforts to broaden the understanding of perinatal brain development, it is essential to develop robust and well-tested tools ready for the relevant data processing. However, the broad range of neuroanatomical variation in the developing brain, combined with additional challenges such as high motion levels, as well as shoulder and chest signal in the images, leaves many adult-specific tools ill-suited for pediatric skull-stripping. Building on an existing framework for robust and accurate skull-stripping, we propose developmental SynthStrip (d-SynthStrip), a skull-stripping model tailored to pediatric images. This framework exposes networks to highly variable images synthesized from label maps. Our model substantially outperforms pediatric baselines across scan types and age cohorts. In addition, the <1-minute runtime of our tool compares favorably to the fastest baselines. We distribute our model at https://w3id.org/synthstrip.

3.
bioRxiv ; 2024 Sep 08.
Article in English | MEDLINE | ID: mdl-39282435

ABSTRACT

Despite the great progress that has been made toward automating brain extraction in human magnetic resonance imaging (MRI), challenges remain in the automation of this task for mouse models of brain disorders. Researchers often resort to editing brain segmentation results manually when automated methods fail to produce accurate delineations. However, manual corrections can be labor-intensive and introduce interrater variability. This motivated our development of a new deep-learning-based method for brain segmentation of mouse MRI, which we call Mouse Brain Extractor. We adapted the existing SwinUNETR architecture (Hatamizadeh et al., 2021) with the goal of making it more robust to scale variance. Our approach is to supply the network model with supplementary spatial information in the form of absolute positional encoding. We use a new scheme for positional encoding, which we call Global Positional Encoding (GPE). GPE is based on a shared coordinate frame that is relative to the entire input image. This differs from the positional encoding used in SwinUNETR, which solely employs relative pairwise image patch positions. GPE also differs from the conventional absolute positional encoding approach, which encodes position relative to a subimage rather than the entire image. We trained and tested our method on a heterogeneous dataset of N=223 mouse MRI scans, for which we generated a corresponding set of manually-edited brain masks. These data were acquired previously in other studies using several different scanners and imaging protocols and included in vivo and ex vivo images of mice with heterogeneous brain structure due to different genotypes, strains, diseases, ages, and sexes. We evaluated our method's results against those of seven existing rodent brain extraction methods and two state-of-the-art deep-learning approaches, nnU-Net (Isensee et al., 2018) and SwinUNETR.
Overall, our proposed method achieved average Dice scores on the order of 0.98 and average HD95 measures on the order of 100 µm when compared to the manually-labeled brain masks. In statistical analyses, our method significantly outperformed the conventional approaches and performed as well as or significantly better than the nnU-Net and SwinUNETR methods. These results suggest that Global Positional Encoding provides additional contextual information that enables our Mouse Brain Extractor to perform competitively on datasets containing multiple resolutions.
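The shared-coordinate-frame idea behind Global Positional Encoding can be illustrated in a few lines: per-voxel coordinates are normalized by the full image extent rather than by the patch, so identical patches cut from different locations receive different codes. This is only a schematic of the concept; the paper's actual GPE formulation may add further embedding machinery on top of the raw coordinates.

```python
import numpy as np

def global_positional_encoding(patch_origin, patch_shape, image_shape):
    """Per-voxel coordinates in a frame shared by the whole input image.

    Conventional absolute encodings use coordinates relative to the patch
    (subimage); GPE instead normalizes by the full image extent, so the
    same anatomical location always maps to the same code regardless of
    how the image was tiled into patches.
    """
    grids = np.meshgrid(
        *[np.arange(o, o + s) for o, s in zip(patch_origin, patch_shape)],
        indexing="ij",
    )
    # Normalize each axis by the full image size -> values in [0, 1).
    return np.stack([g / n for g, n in zip(grids, image_shape)], axis=-1)

# A 16x16 patch whose top-left corner sits at (32, 48) in a 128x128 image.
enc = global_positional_encoding((32, 48), (16, 16), (128, 128))
```

Because the normalization constant is the image size, the encoding is also stable across acquisitions of different matrix sizes, which is the scale-variance motivation stated in the abstract.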

4.
Neuroimage ; 298: 120769, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39122056

ABSTRACT

Skull stripping is a crucial preprocessing step in magnetic resonance imaging (MRI), where experts manually create brain masks. This labor-intensive process heavily relies on the annotator's expertise, as automation faces challenges such as low tissue contrast, significant variations in image resolution, and blurred boundaries between the brain and surrounding tissues, particularly in rodents. In this study, we have developed a lightweight framework based on Swin-UNETR to automate the skull stripping process in MRI scans of mice and rats. The primary objective of this framework is to eliminate the need for preprocessing, reduce the workload, and provide an out-of-the-box solution capable of adapting to various MRI image resolutions. By employing a lightweight neural network, we aim to lower the computational requirements of the framework. To validate the effectiveness of our approach, we trained and evaluated the network using publicly available multi-center data, encompassing 1,037 rodents and 1,142 images from 89 centers, resulting in a preliminary mean Dice coefficient of 0.9914. The framework, data, and pre-trained models are available at: https://github.com/VitoLin21/Rodent-Skull-Stripping.


Subjects
Brain , Deep Learning , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Skull , Animals , Magnetic Resonance Imaging/methods , Rats , Mice , Brain/diagnostic imaging , Skull/diagnostic imaging , Image Processing, Computer-Assisted/methods
5.
Comput Biol Med ; 179: 108845, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39002314

ABSTRACT

BACKGROUND: Brain extraction in magnetic resonance imaging (MRI) data is an important segmentation step in many neuroimaging preprocessing pipelines. Image segmentation is one of the research fields in which deep learning had the biggest impact in recent years. Consequently, traditional brain extraction methods are now being replaced by deep learning-based methods. METHOD: Here, we used a unique dataset compilation comprising 7837 T1-weighted (T1w) MR images from 191 different OpenNeuro datasets in combination with advanced deep learning methods to build a fast, high-precision brain extraction tool called deepbet. RESULTS: deepbet sets a novel state-of-the-art performance during cross-dataset validation with a median Dice score (DSC) of 99.0 on unseen datasets, outperforming the current best-performing deep learning (DSC=97.9) and classic (DSC=96.5) methods. While current methods are more sensitive to outliers, deepbet achieves a Dice score of >97.4 across all 7837 images from 191 different datasets. This robustness was additionally tested in 5 external datasets, which included challenging clinical MR images. Visual inspection of the output on which each method scored its lowest Dice revealed major errors for all of the tested tools except deepbet. Finally, deepbet uses a compute-efficient variant of the UNet architecture, which accelerates brain extraction by a factor of ≈10 compared to current methods, enabling the processing of one image in ≈2 s on low-end hardware. CONCLUSIONS: In conclusion, deepbet demonstrates superior performance and reliability in brain extraction across a wide range of T1w MR images of adults, outperforming existing top tools. Its high minimum Dice score and minimal objective errors, even in challenging conditions, validate deepbet as a highly dependable tool for accurate brain extraction.
deepbet can be conveniently installed via "pip install deepbet" and is publicly accessible at https://github.com/wwu-mmll/deepbet.


Subjects
Brain , Deep Learning , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Databases, Factual , Neuroimaging/methods
6.
ArXiv ; 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38463507

ABSTRACT

Skull-stripping is the removal of background and non-brain anatomical features from brain images. While many skull-stripping tools exist, few target pediatric populations. With the emergence of multi-institutional pediatric data acquisition efforts to broaden the understanding of perinatal brain development, it is essential to develop robust and well-tested tools ready for the relevant data processing. However, the broad range of neuroanatomical variation in the developing brain, combined with additional challenges such as high motion levels, as well as shoulder and chest signal in the images, leaves many adult-specific tools ill-suited for pediatric skull-stripping. Building on an existing framework for robust and accurate skull-stripping, we propose developmental SynthStrip (d-SynthStrip), a skull-stripping model tailored to pediatric images. This framework exposes networks to highly variable images synthesized from label maps. Our model substantially outperforms pediatric baselines across scan types and age cohorts. In addition, the <1-minute runtime of our tool compares favorably to the fastest baselines. We distribute our model at https://w3id.org/synthstrip.

7.
J Neurosci Methods ; 405: 110078, 2024 05.
Article in English | MEDLINE | ID: mdl-38340902

ABSTRACT

BACKGROUND: Whole brain delineation (WBD) is utilized in neuroimaging analysis for data preprocessing and deriving whole brain image metrics. Current automated WBD techniques for analysis of preclinical brain MRI data show limited accuracy when images present with significant neuropathology and anatomical deformations, such as that resulting from organophosphate intoxication (OPI) and Alzheimer's Disease (AD), and inadequate generalizability. METHODS: A modified 2D U-Net framework was employed for WBD of MRI rodent brains, consisting of 27 convolutional layers, batch normalization, two dropout layers and data augmentation, after training parameter optimization. A total of 265 T2-weighted 7.0 T MRI scans were utilized for the study, including 125 scans of an OPI rat model for neural network training. For testing and validation, 20 OPI rat scans and 120 scans of an AD rat model were utilized. U-Net performance was evaluated using Dice coefficients (DC) and Hausdorff distances (HD) between the U-Net-generated and manually segmented WBDs. RESULTS: The U-Net achieved a DC (median[range]) of 0.984[0.936-0.990] and HD of 1.69[1.01-6.78] mm for OPI rat model scans, and a DC (mean[range]) of 0.975[0.898-0.991] and HD of 1.49[0.86-3.89] for the AD rat model scans. COMPARISON WITH EXISTING METHODS: The proposed approach is fully automated and robust across two rat strains and longitudinal brain changes with a computational speed of 8 seconds/scan, overcoming limitations of manual segmentation. CONCLUSIONS: The modified 2D U-Net provided a fully automated, efficient, and generalizable segmentation approach that achieved high accuracy across two disparate rat models of neurological diseases.
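The Dice and Hausdorff metrics used for evaluation above can be computed directly. A minimal HD95 for small binary masks might look like the following; for brevity it compares all foreground voxels rather than extracted surface points, and assumes isotropic unit spacing, so it is a sketch rather than a production implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(mask_a, mask_b):
    """95th-percentile symmetric Hausdorff distance between binary masks."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    d = cdist(pts_a, pts_b)      # pairwise Euclidean distances
    d_ab = d.min(axis=1)         # each point in A to its nearest point in B
    d_ba = d.min(axis=0)         # each point in B to its nearest point in A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[6:16, 5:15] = True   # same square, shifted one row
h = hd95(a, b)
```

Taking the 95th percentile instead of the maximum makes the measure robust to a few stray voxels, which is why HD95 is preferred over plain Hausdorff distance in segmentation papers.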


Subjects
Alzheimer Disease , Image Processing, Computer-Assisted , Rats , Animals , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Neuroimaging , Alzheimer Disease/diagnostic imaging
8.
Med Phys ; 51(3): 2230-2238, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37956307

ABSTRACT

BACKGROUND: Despite extensive efforts to obtain accurate segmentation of magnetic resonance imaging (MRI) scans of the head, it remains challenging, primarily due to variations in intensity distribution, which depend on the equipment and parameters used. PURPOSE: The goal of this study is to evaluate the effectiveness of an automatic segmentation method for head MRI scans using a multistep Dense U-Net (MDU-Net) architecture. METHODS: The MDU-Net-based method comprises two steps. The first step is to segment the scalp, skull, and whole brain from head MRI scans using a convolutional neural network (CNN). In the first step, a hybrid network is used to combine 2.5D Dense U-Net and 3D Dense U-Net structures. This hybrid network acquires logits in three orthogonal planes (axial, coronal, and sagittal) using 2.5D Dense U-Nets and fuses them by averaging. The resulting fused probability map, together with the head MRI scans, then serves as the input to a 3D Dense U-Net. In this process, different ratios of active contour loss and focal loss are applied. The second step is to segment the cerebrospinal fluid (CSF), white matter, and gray matter from the extracted brain MRI scans using CNNs. In the second step, the histogram of the extracted brain MRI scans is standardized and then a 2.5D Dense U-Net is used to further segment the brain's specific tissues using the focal loss. A dataset of 100 head MRI scans from the OASIS-3 dataset was used for training, internal validation, and testing, with ratios of 80%, 10%, and 10%, respectively. Using the proposed approach, we segmented the head MRI scans into five areas (scalp, skull, CSF, white matter, and gray matter) and evaluated the segmentation results using the Dice similarity coefficient (DSC) score, Hausdorff distance (HD), and average symmetric surface distance (ASSD) as evaluation metrics. We compared these results with those obtained using the Res-U-Net, Dense U-Net, U-Net++, Swin-Unet, and H-Dense U-Net models.
RESULTS: The MDU-Net model showed DSC values of 0.933, 0.830, 0.833, 0.953, and 0.917 in the scalp, skull, CSF, white matter, and gray matter, respectively. The corresponding HD values were 2.37, 2.89, 2.13, 1.52, and 1.53 mm, respectively. The ASSD values were 0.50, 1.63, 1.28, 0.26, and 0.27 mm, respectively. Comparing these results with other models revealed that the MDU-Net model demonstrated the best performance in terms of the DSC values for the scalp, CSF, white matter, and gray matter. When compared with the H-Dense U-Net model, which showed the highest performance among the other models, the MDU-Net model showed substantial improvement in HD, particularly in the gray matter region, with a difference of approximately 9%. In addition, in terms of the ASSD, the MDU-Net model outperformed the H-Dense U-Net model, showing approximately 7% improvement in the white matter and approximately 9% improvement in the gray matter. CONCLUSION: Compared with existing models in terms of DSC, HD, and ASSD, the proposed MDU-Net model demonstrated the best performance on average and showed its potential to enhance the accuracy of automatic segmentation for head MRI scans.
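The first-step fusion described above, averaging probability maps predicted in the three orthogonal planes before the 3D network, reduces to a few lines. The class count and volume shape below are arbitrary stand-ins, and random logits substitute for real network outputs.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Logits from three 2.5D networks, one per orthogonal plane
# (axial, coronal, sagittal), each over C classes for a DxHxW volume.
C, D, H, W = 5, 8, 8, 8
rng = np.random.default_rng(1)
logits = [rng.normal(size=(C, D, H, W)) for _ in range(3)]

# Fuse by averaging the per-plane probability maps; in the MDU-Net
# pipeline this fused map then conditions the 3D Dense U-Net.
fused = np.mean([softmax(l, axis=0) for l in logits], axis=0)
```

Averaging probabilities (rather than logits) keeps the fused map a valid distribution over classes at every voxel.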


Subjects
Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Scalp
9.
Comput Methods Programs Biomed ; 243: 107912, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37981454

ABSTRACT

BACKGROUND AND OBJECTIVE: We present a novel deep learning-based skull stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich, complex-valued k-space. METHODS: Using four datasets from different institutions with a total of around 200,000 MRI slices, we show that our network can perform skull-stripping on the raw data of MRIs while preserving the phase information, which no other skull stripping algorithm is able to work with. For two of the datasets, skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain is used as the ground truth, whereas the third and fourth datasets come with hand-annotated brain segmentations. RESULTS: Results on all four datasets were very similar to the ground truth (Dice scores of 92%-99% and Hausdorff distances under 5.5 pixels). Results on slices above the eye region reach Dice scores of up to 99%, whereas the accuracy drops in regions around the eyes and below, with partially blurred output. The output of k-Strip often has smoothed edges at the demarcation to the skull. Binary masks are created with an appropriate threshold. CONCLUSION: With this proof-of-concept study, we were able to show the feasibility of working in the k-space frequency domain, preserving phase information, with consistent results. Besides preserving valuable information for further diagnostics, this approach makes immediate anonymization of patient data possible, even before transformation into the image domain. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows.
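The k-space domain that k-Strip operates in is the Fourier transform of the complex-valued image, so the representation is lossless in both magnitude and phase. The toy round trip below (pure NumPy on synthetic data, not the authors' network) illustrates why phase can be preserved end-to-end, unlike magnitude-only pipelines that discard it.

```python
import numpy as np

# A complex-valued image (magnitude + phase), as acquired by the scanner.
rng = np.random.default_rng(0)
mag = rng.random((64, 64))
phase = rng.uniform(-np.pi, np.pi, (64, 64))
img = mag * np.exp(1j * phase)

# k-space is the 2D Fourier transform of the complex image; a k-space
# skull-stripping network consumes this representation directly.
kspace = np.fft.fftshift(np.fft.fft2(img))

# The transform is invertible, so no magnitude or phase information is
# lost on the way into or out of the frequency domain.
recon = np.fft.ifft2(np.fft.ifftshift(kspace))
```

The fftshift centers the low-frequency components, the convention in which raw MRI k-space is usually displayed.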


Subjects
Algorithms , Skull , Humans , Skull/diagnostic imaging , Brain/diagnostic imaging , Brain/pathology , Image Processing, Computer-Assisted/methods , Head , Magnetic Resonance Imaging/methods
10.
Brain Sci ; 13(9)2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37759856

ABSTRACT

This research comprises experiments with a deep learning framework for fully automating skull stripping from brain magnetic resonance (MR) images. Conventional techniques for segmentation have progressed to the extent of Convolutional Neural Networks (CNN). We proposed and experimented with a contemporary variant of the deep learning framework based on the mask region convolutional neural network (Mask-RCNN) for all anatomical orientations of brain MR images. We trained the system from scratch to build a model for classification, detection, and segmentation. It was validated on images taken from three different datasets: BrainWeb, NAMIC, and a local hospital. We opted for purposive sampling to select 2000 images of T1 modality from the data volumes, followed by a multi-stage random sampling technique to segregate the dataset into three batches for training (75%), validation (15%), and testing (10%), respectively. We utilized a robust backbone architecture, namely ResNet-101 with a Feature Pyramid Network (FPN), to achieve optimal performance with higher accuracy. We subjected the same data to two traditional methods, namely the Brain Extraction Tool (BET) and Brain Surface Extraction (BSE), to compare their performance results. Our proposed method had a higher mean average precision (mAP) = 93% and content validity index (CVI) = 0.95, which were better than comparable methods. We contributed by training Mask-RCNN from scratch to generate reusable learning weights, known as transfer learning. We contributed to methodological novelty by applying a pragmatic research lens, and used a mixed-method triangulation technique to validate results on all anatomical modalities of brain MR images. Our proposed method improved the accuracy and precision of skull stripping by fully automating it and reducing its processing time, operational cost, and reliance on technicians.
This research study has also provided grounds for extending the work to the scale of explainable artificial intelligence (XAI).

11.
Phys Med Biol ; 68(20)2023 Oct 02.
Article in English | MEDLINE | ID: mdl-37659398

ABSTRACT

Objective. Skull stripping is a key step in the pre-processing of rodent brain magnetic resonance images (MRI). This study aimed to develop a new skull stripping method for rat brain MRI via U2-Net, a deep-learning-based neural network model. Approach. In this study, 599 rats were enrolled and U2-Net was applied to segment MRI images of the rat brain. The intracranial tissue of each rat was manually labeled. 476 rats (approximately 80%) were used for the training set, while 123 rats (approximately 20%) were used to test the performance of the trained U2-Net model. For evaluation, the segmentation result of the U2-Net model was compared with the manual label and with traditional segmentation methods. Quantitative metrics, including Dice coefficient, Jaccard coefficient, sensitivity, specificity, pixel accuracy, Hausdorff coefficient, true positive rate, false positive rate, and whole-brain volume, were calculated to compare the segmentation results among the different models. Main results. The U2-Net model performed better than the RATS and BrainSuite software, with quantitative values of 0.9907 ± 0.0016 (Dice coefficient), 0.9816 ± 0.0032 (Jaccard coefficient), 0.9912 ± 0.0020 (sensitivity), 0.9989 ± 0.0002 (specificity), 0.9982 ± 0.0003 (pixel accuracy), 5.2390 ± 2.5334 (Hausdorff coefficient), 0.9902 ± 0.0025 (true positive rate), and 0.0009 ± 0.0002 (false positive rate). Significance. This study provides a new method that achieves reliable performance in skull stripping of rat brain MRI images, which could contribute to the processing of rat brain MRI.
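All of the quantitative metrics listed above derive from the voxel-wise confusion matrix between a predicted and a manual mask. A compact reference implementation (independent of any of the tools evaluated) is:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Voxel-wise overlap metrics used to evaluate skull-stripping masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # brain voxels correctly included
    tn = np.sum(~pred & ~truth)    # background correctly excluded
    fp = np.sum(pred & ~truth)     # background wrongly included
    fn = np.sum(~pred & truth)     # brain wrongly excluded
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),     # equals the true positive rate
        "specificity": tn / (tn + fp),
        "pixel_accuracy": (tp + tn) / pred.size,
    }

pred = np.zeros((10, 10), bool); pred[2:8, 2:8] = True
truth = np.zeros((10, 10), bool); truth[3:9, 3:9] = True
m = segmentation_metrics(pred, truth)
```

Note that Dice and Jaccard are monotonically related (D = 2J/(1+J)), which is why papers usually report only one of the two alongside a boundary metric such as the Hausdorff distance.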

12.
Radiol Phys Technol ; 16(3): 373-383, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37291372

ABSTRACT

In automated analyses of brain morphometry, skull stripping or brain extraction is a critical first step because it provides accurate spatial registration and signal-intensity normalization. Therefore, it is imperative to develop an ideal skull-stripping method in the field of brain image analysis. Previous reports have shown that convolutional neural network (CNN) methods are better at skull stripping than non-CNN methods. We aimed to evaluate the accuracy of skull stripping in a single-contrast CNN model using eight-contrast magnetic resonance (MR) images. A total of 12 healthy participants and 12 patients with a clinical diagnosis of unilateral Sturge-Weber syndrome were included in our study. A 3-T MR imaging system and QRAPMASTER were used for data acquisition. We obtained eight-contrast images produced by post-processing T1, T2, and proton density (PD) maps. To evaluate the accuracy of skull stripping in our CNN method, gold-standard intracranial volume (ICVG) masks were used to train the CNN model. The ICVG masks were defined by experts using manual tracing. The accuracy of the intracranial volume obtained from the single-contrast CNN model (ICVE) was evaluated using the Dice similarity coefficient [= 2(ICVE ⋂ ICVG)/(ICVE + ICVG)]. Our study showed significantly higher accuracy in the PD-weighted image (WI), phase-sensitive inversion recovery (PSIR), and PD-short tau inversion recovery (STIR) images compared to the other three contrast images (T1-WI, T2-fluid-attenuated inversion recovery [FLAIR], and T1-FLAIR). In conclusion, PD-WI, PSIR, and PD-STIR should be used instead of T1-WI for skull stripping in CNN models.


Subjects
Brain , Skull , Humans , Skull/diagnostic imaging , Brain/diagnostic imaging , Brain/pathology , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods
13.
J Neural Eng ; 20(3)2023 06 16.
Article in English | MEDLINE | ID: mdl-37253355

ABSTRACT

Objective. Hydrocephalus is the leading indication for pediatric neurosurgical care worldwide. Identification of postinfectious hydrocephalus (PIH) versus non-postinfectious hydrocephalus, as well as the pathogen involved in PIH, is crucial for developing an appropriate treatment plan. Accurate identification requires clinical diagnosis by neuroscientists and microbiological analysis, which are time-consuming and expensive. In this study, we develop a domain-enriched AI method for computed tomography (CT)-based infection diagnosis in hydrocephalic imagery. State-of-the-art (SOTA) convolutional neural network (CNN) approaches form an attractive neural engineering solution for addressing this problem, as pathogen-specific features need discovery. Yet black-box deep networks often need unrealistically abundant training data and are not easily interpreted. Approach. In this paper, a novel brain attention regularizer is proposed, which encourages the CNN to put more focus inside brain regions in its feature extraction and decision making. Our approach is then extended to a hybrid 2D/3D network that mines inter-slice information. A new strategy of regularization is also designed for enabling collaboration between the 2D and 3D branches. Main results. Our proposed method achieves SOTA results on a CURE Children's Hospital of Uganda dataset with an accuracy of 95.8% in hydrocephalus classification and 84% in pathogen classification. Statistical analysis is performed to demonstrate that our proposed methods obtain significant improvements over the existing SOTA alternatives. Significance. Such attention-regularized learning has particularly pronounced benefits in regimes where training data may be limited, thereby enhancing generalizability. To the best of our knowledge, our findings are unique among early efforts in interpretable AI-based models for classification of hydrocephalus and underlying pathogen using CT scans.


Subjects
Deep Learning , Hydrocephalus , Child , Humans , Tomography, X-Ray Computed/methods , Neural Networks, Computer , Hydrocephalus/diagnostic imaging , Attention
14.
Neurooncol Adv ; 5(1): vdad027, 2023.
Article in English | MEDLINE | ID: mdl-37051331

ABSTRACT

Background: Brain tumors are the most common solid tumors and the leading cause of cancer-related death among all childhood cancers. Tumor segmentation is essential in surgical and treatment planning, and in response assessment and monitoring. However, manual segmentation is time-consuming and has high interoperator variability. We present a multi-institutional deep learning-based method for automated brain extraction and segmentation of pediatric brain tumors based on multi-parametric MRI scans. Methods: Multi-parametric scans (T1w, T1w-CE, T2, and T2-FLAIR) of 244 pediatric patients (n = 215 internal and n = 29 external cohorts) with de novo brain tumors, including a variety of tumor subtypes, were preprocessed and manually segmented to identify the brain tissue and four tumor subregions, i.e., enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). The internal cohort was split into training (n = 151), validation (n = 43), and withheld internal test (n = 21) subsets. DeepMedic, a three-dimensional convolutional neural network, was trained and the model parameters were tuned. Finally, the network was evaluated on the withheld internal and external test cohorts. Results: Dice similarity score (median ± SD) was 0.91 ± 0.10/0.88 ± 0.16 for the whole tumor, 0.73 ± 0.27/0.84 ± 0.29 for ET, 0.79 ± 0.19/0.74 ± 0.27 for the union of all non-enhancing components (i.e., NET, CC, ED), and 0.98 ± 0.02 for brain tissue in both internal/external test sets. Conclusions: Our proposed automated brain extraction and tumor subregion segmentation models demonstrated accurate performance on segmentation of the brain tissue and whole tumor regions in pediatric brain tumors and can facilitate detection of abnormal regions for further clinical measurements.

15.
BMC Med Imaging ; 23(1): 44, 2023 03 27.
Article in English | MEDLINE | ID: mdl-36973775

ABSTRACT

BACKGROUND: Experimental ischemic stroke models play a fundamental role in interpreting the mechanism of cerebral ischemia and appraising the development of pathological extent. An accurate and automatic skull stripping tool for rat brain magnetic resonance imaging (MRI) volumes is crucial in experimental stroke analysis. Due to the deficiency of reliable rat brain segmentation methods and motivated by the demand for preclinical studies, this paper develops a new skull stripping algorithm to extract the rat brain region in MR images after stroke, named Rat U-Net (RU-Net). METHODS: Based on a U-shaped deep learning architecture, the proposed framework integrates batch normalization with the residual network to achieve efficient end-to-end segmentation. A pooling index transmission mechanism between the encoder and decoder is exploited to reinforce the spatial correlation. Two different modalities, diffusion-weighted imaging (DWI) and T2-weighted MRI (T2WI), corresponding to two in-house datasets each consisting of 55 subjects, were employed to evaluate the performance of the proposed RU-Net. RESULTS: Extensive experiments indicated high segmentation accuracy across diversified rat brain MR images. Our rat skull stripping network outperformed several state-of-the-art methods and achieved the highest average Dice scores of 98.04% (p < 0.001) and 97.67% (p < 0.001) in the DWI and T2WI image datasets, respectively. CONCLUSION: The proposed RU-Net holds potential for advancing preclinical stroke investigation and providing an efficient tool for pathological rat brain image extraction, where accurate segmentation of the rat brain region is fundamental.
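The pooling index transmission mechanism described above is the SegNet-style pattern of recording argmax positions during max pooling and scattering decoder values back to exactly those positions, so spatial detail lost to pooling is partially restored. A NumPy sketch for a single-channel 2D map (the real network of course does this per channel on learned features):

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k x k max pooling that also records argmax positions (pooling indices)."""
    H, W = x.shape
    patches = (x.reshape(H // k, k, W // k, k)
                .transpose(0, 2, 1, 3)
                .reshape(H // k, W // k, k * k))
    idx = patches.argmax(axis=-1)          # flat within-patch argmax
    return patches.max(axis=-1), idx

def max_unpool(pooled, idx, k=2):
    """Scatter pooled values back to their recorded positions, zeros elsewhere."""
    Hp, Wp = pooled.shape
    out = np.zeros((Hp * k, Wp * k), dtype=pooled.dtype)
    for i in range(Hp):
        for j in range(Wp):
            di, dj = divmod(idx[i, j], k)  # unflatten the within-patch index
            out[i * k + di, j * k + dj] = pooled[i, j]
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
p, idx = max_pool_with_indices(x)
u = max_unpool(p, idx)
```

Transmitting indices is cheaper than full skip connections (one integer per pooled location rather than a whole feature map), which suits the lightweight design goal.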


Subjects
Ischemic Stroke , Stroke , Rats , Animals , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Skull , Brain/diagnostic imaging , Stroke/diagnostic imaging
16.
Diagnostics (Basel) ; 13(2)2023 Jan 14.
Article in English | MEDLINE | ID: mdl-36673122

ABSTRACT

Over the last few years, brain tumor-related clinical cases have increased substantially, particularly in adults, due to environmental and genetic factors. If left unidentified in the early stages, there is a risk of severe medical complications, including death. Early diagnosis of brain tumors therefore plays a vital role in treatment planning and improving a patient's condition. There are different forms, properties, and treatments of brain tumors. Among them, manual identification and classification of brain tumors are complex, time-consuming, and error-prone. Based on these observations, we developed an automated methodology for detecting and classifying brain tumors using the magnetic resonance (MR) imaging modality. The proposed work includes three phases: pre-processing, classification, and segmentation. In pre-processing, we started with the skull-stripping process through morphological and thresholding operations to eliminate non-brain matter such as skin, muscle, fat, and eyeballs. Then we employed image data augmentation to improve model accuracy by minimizing overfitting. Later, in the classification phase, we developed a novel lightweight convolutional neural network (lightweight CNN) model to extract features from skull-free augmented brain MR images and then classify them as normal or abnormal. Finally, we obtained infected tumor regions from the brain MR images in the segmentation phase using a fast-linking modified spiking cortical model (FL-MSCM). Based on this sequence of operations, our framework achieved 99.58% classification accuracy and a 95.7% dice similarity coefficient (DSC). The experimental results illustrate the efficiency of the proposed framework and its appreciable performance compared to existing techniques.
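A toy version of the morphological/thresholding skull-stripping step described above can be built from scipy.ndimage primitives. The thresholding rule, structuring elements, and iteration counts here are illustrative assumptions, not the paper's pipeline, and the input is a synthetic phantom rather than an MR slice.

```python
import numpy as np
from scipy import ndimage

def strip_skull(img):
    """Toy skull-stripping: threshold, keep the largest component, clean up."""
    # Threshold at the mean intensity (Otsu would be the usual choice).
    fg = img > img.mean()
    # Erode to break thin connections between brain and scalp/skull.
    fg = ndimage.binary_erosion(fg, iterations=2)
    # Keep only the largest connected component (assumed to be the brain).
    labels, n = ndimage.label(fg)
    if n > 1:
        sizes = ndimage.sum(fg, labels, range(1, n + 1))
        fg = labels == (1 + int(np.argmax(sizes)))
    # Undo the erosion and fill internal holes.
    fg = ndimage.binary_dilation(fg, iterations=2)
    return ndimage.binary_fill_holes(fg)

# Synthetic slice: bright "brain" disk inside a thin bright "skull" ring.
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
img = (r < 18).astype(float) + 0.9 * ((r > 24) & (r < 27))
mask = strip_skull(img)
```

The erode/select/dilate sequence is the classic trick: erosion disconnects the skull ring from the brain so that component selection can discard it.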

17.
Adv Exp Med Biol; 1394: 103-117, 2023.
Article in English | MEDLINE | ID: mdl-36587384

ABSTRACT

This chapter focuses on the segmentation and localization of brain abnormalities such as tumors in magnetic resonance imaging (MRI) through Chan-Vese active contour segmentation. Brain tumor segmentation and identification is a major challenge in biomedical image processing. Various techniques are available to detect the size and location of a tumor, but active contours give accurate knowledge of the region to segment, and the Chan-Vese method in particular provides independent, robust, and more flexible segmentation. In this chapter, we first applied a preprocessing step in which noise and unused parts of the image, such as the skull, are removed using the proposed skull-stripping method. We then applied feature extraction to enhance image intensity and quality, and lastly used the Chan-Vese active contour with a level-set image segmentation technique to detect the tumor. The tumor area was calculated after detection.
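The core of the piecewise-constant Chan-Vese model can be sketched in a few lines: alternately estimate the mean intensities inside and outside the contour, then reassign each pixel to the region whose mean it is closer to. This is a heavily simplified illustration, not the chapter's level-set implementation: the curvature (length) and area penalty terms are omitted, the "image" is a flat list, and all names are hypothetical.

```python
# Minimal sketch of the piecewise-constant Chan-Vese idea (length and area
# penalties omitted). mask[i] == 1 marks pixel i as inside the contour.

def chan_vese_step(img, mask):
    inside = [v for v, m in zip(img, mask) if m]
    outside = [v for v, m in zip(img, mask) if not m]
    if not inside or not outside:   # degenerate split: stop evolving
        return mask
    c1 = sum(inside) / len(inside)      # mean intensity inside
    c2 = sum(outside) / len(outside)    # mean intensity outside
    # Reassign each pixel to the region with the closer mean (squared error).
    return [int((v - c1) ** 2 < (v - c2) ** 2) for v in img]

def chan_vese(img, mask, iters=20):
    """Iterate the update until the segmentation stops changing."""
    for _ in range(iters):
        new = chan_vese_step(img, mask)
        if new == mask:
            break
        mask = new
    return mask
```

Even from a poor initialization, the two-mean update separates a dark region from a bright one; the full method adds the regularization terms and evolves a level-set function instead of a hard mask.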


Subjects
Brain Neoplasms, Spinal Cord Neoplasms, Humans, Brain/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Magnetic Resonance Imaging/methods, Carcinogenesis, Cell Transformation, Neoplastic, Spinal Cord Neoplasms/diagnostic imaging, Computational Biology, Algorithms, Image Processing, Computer-Assisted/methods
18.
Neuroimage; 260: 119474, 2022 Oct 15.
Article in English | MEDLINE | ID: mdl-35842095

ABSTRACT

The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.


Subjects
Brain, Skull, Adult, Brain/diagnostic imaging, Brain/pathology, Contrast Media, Head, Humans, Image Processing, Computer-Assisted/methods, Infant, Newborn, Magnetic Resonance Imaging/methods, Skull/diagnostic imaging, Skull/pathology
19.
Front Neurosci; 16: 801769, 2022.
Article in English | MEDLINE | ID: mdl-35368273

ABSTRACT

Skull stripping is an initial and critical step in the pipeline of mouse fMRI analysis. Manual labeling of the brain usually suffers from intra- and inter-rater variability and is highly time-consuming. Hence, an automatic and efficient skull-stripping method is in high demand for mouse fMRI studies. In this study, we investigated a 3D U-Net based method for automatic brain extraction in mouse fMRI studies. Two U-Net models were separately trained on T2-weighted anatomical images and T2*-weighted functional images. The trained models were tested on both internal and external datasets. The 3D U-Net models yielded a higher accuracy in brain extraction from both T2-weighted images (Dice > 0.984, Jaccard index > 0.968 and Hausdorff distance < 7.7) and T2*-weighted images (Dice > 0.964, Jaccard index > 0.931 and Hausdorff distance < 3.3), compared with the two widely used mouse skull-stripping methods (RATS and SHERM). The resting-state fMRI results using automatic segmentation with the 3D U-Net models are highly consistent with those obtained by manual segmentation for both the seed-based and group independent component analysis. These results demonstrate that the 3D U-Net based method can replace manual brain extraction in mouse fMRI analysis.
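The Dice and Jaccard overlap scores reported above have simple closed forms: Dice = 2|A∩B| / (|A| + |B|) and Jaccard = |A∩B| / |A∪B|, related by J = D / (2 − D). A minimal sketch on flattened binary masks (illustrative only; the paper's evaluation is on 3D volumes):

```python
# Overlap metrics between two binary segmentation masks, given as flat
# 0/1 lists of equal length (a 3D volume would simply be flattened first).

def dice(a, b):
    """Dice similarity coefficient: 2*|A∩B| / (|A| + |B|)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union
```

Both range over [0, 1], with 1 meaning perfect overlap; Dice weights the intersection more heavily, which is why the Dice values quoted above exceed the corresponding Jaccard values.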

20.
Front Aging Neurosci; 14: 807903, 2022.
Article in English | MEDLINE | ID: mdl-35309883

ABSTRACT

Although skull stripping and brain region segmentation are essential for precise quantitative analysis of positron emission tomography (PET) of mouse brains, unified deep learning (DL)-based solutions, particularly for spatial normalization (SN), have posed a challenging problem in DL-based image processing. In this study, we propose a DL-based approach to resolve these issues. We generated both skull-stripping masks and individual brain-specific volumes of interest (VOIs: cortex, hippocampus, striatum, thalamus, and cerebellum) based on inverse spatial normalization (iSN) and deep convolutional neural network (deep CNN) models. We applied the proposed methods to a mutated amyloid precursor protein and presenilin-1 mouse model of Alzheimer's disease. Eighteen mice underwent T2-weighted MRI and 18F-FDG PET scans twice, before and after the administration of human immunoglobulin or antibody-based treatments. For training the CNN, manually traced brain masks and iSN-based target VOIs were used as labels. We compared our CNN-based VOIs with conventional (template-based) VOIs in terms of the correlation of standardized uptake value ratios (SUVRs) between the two methods and two-sample t-tests of SUVR % changes in target VOIs before and after treatment. Our deep CNN-based method successfully generated brain parenchyma masks and target VOIs that showed no significant difference from the conventional VOI method in the SUVR correlation analysis, thus enabling template-based VOI quantification without SN.
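The SUVR comparison described above reduces to a ratio of mean uptakes: mean activity over the target VOI divided by mean activity over a reference region, with treatment effects expressed as the % change in SUVR between the two scans. A minimal sketch, assuming the cerebellum as the reference region (a common choice in amyloid mouse studies; the abstract does not state which region the authors used):

```python
# Hedged sketch of SUVR quantification: target-VOI mean uptake normalized
# by a reference-region mean, plus the before/after % change compared in
# the study's t-tests. Voxel lists and the reference choice are illustrative.

def mean_uptake(voxels):
    """Mean tracer activity over a VOI's voxel values."""
    return sum(voxels) / len(voxels)

def suvr(target_voxels, reference_voxels):
    """Standardized uptake value ratio relative to the reference region."""
    return mean_uptake(target_voxels) / mean_uptake(reference_voxels)

def pct_change(suvr_before, suvr_after):
    """SUVR % change between the pre- and post-treatment scans."""
    return 100.0 * (suvr_after - suvr_before) / suvr_before
```

With per-mouse VOIs produced by either the CNN-based or the template-based pipeline, computing these two quantities for each animal yields the paired values fed into the correlation analysis and the two-sample t-tests.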
