Results 1 - 12 of 12
1.
Article in English | MEDLINE | ID: mdl-39059508

ABSTRACT

PURPOSE: The purpose of this study was to investigate an extended self-adapting nnU-Net framework for detecting and segmenting brain metastases (BM) on magnetic resonance imaging (MRI). METHODS AND MATERIALS: Six different nnU-Net systems with adaptive data sampling, adaptive Dice loss, or different patch/batch sizes were trained and tested for detecting and segmenting intraparenchymal BM with a size ≥2 mm on three-dimensional (3D) post-Gd T1-weighted MRI volumes using 2092 patients from 7 institutions (1712, 195, and 185 patients for training, validation, and testing, respectively). Gross tumor volumes of BM delineated by physicians for stereotactic radiosurgery were collected retrospectively and curated at each institution. Additional centralized data curation by 2 radiologists created gross tumor volumes for previously uncontoured BM to improve the accuracy of the ground truth. The training data set was augmented with synthetic BMs in 1025 MRI volumes using a 3D generative pipeline. BM detection was evaluated by lesion-level sensitivity and false-positive (FP) rate. BM segmentation was assessed by lesion-level Dice similarity coefficient, 95th-percentile Hausdorff distance, and average Hausdorff distance (HD). Performance was assessed across different BM sizes. Additional testing was performed using a second data set of 206 patients. RESULTS: Of the 6 nnU-Net systems, the nnU-Net with adaptive Dice loss achieved the best detection and segmentation performance on the first testing data set. At an FP rate of 0.65 ± 1.17, overall sensitivity was 0.904 for all sizes of BM, 0.966 for BM ≥0.1 cm3, and 0.824 for BM <0.1 cm3. Mean values of the Dice similarity coefficient, 95th-percentile Hausdorff distance, and average HD of all detected BMs were 0.758, 1.45 mm, and 0.23 mm, respectively. On the second testing data set, the model achieved a sensitivity of 0.907 at an FP rate of 0.57 ± 0.85 for all BM sizes, and an average HD of 0.33 mm for all detected BM.
CONCLUSIONS: Our proposed extension of the self-configuring nnU-Net framework substantially improved small BM detection sensitivity while maintaining a controlled FP rate. Clinical utility of the extended nnU-Net model for assisting early BM detection and stereotactic radiosurgery planning will be investigated.
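The adaptive Dice loss above is a variant of the standard soft Dice loss; the base formulation can be sketched as follows (a minimal pure-Python illustration; the paper's adaptive weighting for small lesions is not shown):

```python
def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened probability maps.

    pred   -- predicted foreground probabilities (floats in [0, 1])
    target -- binary ground-truth labels (0/1)
    Returns 1 - Dice, so a perfect overlap gives a loss near 0.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(p * p for p in pred) + sum(t * t for t in target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

In the paper's setting this loss would be reweighted adaptively to boost sensitivity for small metastases; that reweighting is specific to the study and not reproduced here.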

2.
Acad Radiol ; 31(11): 4621-4628, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38908922

ABSTRACT

RATIONALE AND OBJECTIVES: To assess a deep learning application (DLA) for acute ischemic stroke (AIS) detection on brain magnetic resonance imaging (MRI) in the emergency room (ER) and the effect of T2-weighted imaging (T2WI) on its performance. MATERIALS AND METHODS: We retrospectively analyzed brain MRIs taken through the ER from March to October 2021 that included diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) sequences. MRIs were processed by the DLA, and sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) were evaluated, with three neuroradiologists establishing the gold standard for detection performance. In addition, we examined the impact of axial T2WI, when available, on the accuracy and processing time of the DLA. RESULTS: The study included 947 individuals (mean age ± standard deviation, 64 years ± 16; 461 men, 486 women), with 239 (25%) positive for AIS. The overall performance of the DLA was as follows: sensitivity, 90%; specificity, 89%; accuracy, 89%; and AUROC, 0.95. The average processing time was 24 s. In the subgroup with T2WI, T2WI did not significantly impact MRI assessments but did result in longer processing times (35 s without T2WI compared to 48 s with T2WI, p < 0.001). CONCLUSION: The DLA successfully identified AIS in the ER setting with an average processing time of 24 s. The absence of a performance gain with axial T2WI suggests that the DLA can diagnose AIS with just axial DWI and FLAIR sequences, potentially shortening the exam duration in the ER.
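The reported detection metrics follow the standard confusion-matrix definitions, which can be sketched as follows (illustrative; AUROC, which requires ranked scores rather than hard labels, is omitted):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "accuracy": (tp + tn) / len(y_true),
    }
```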


Subjects
Deep Learning; Ischemic Stroke; Magnetic Resonance Imaging; Sensitivity and Specificity; Triage; Humans; Male; Female; Middle Aged; Retrospective Studies; Ischemic Stroke/diagnostic imaging; Triage/methods; Magnetic Resonance Imaging/methods; Emergency Service, Hospital; Aged; Diffusion Magnetic Resonance Imaging/methods; Brain/diagnostic imaging
3.
Sci Rep ; 14(1): 9380, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38654066

ABSTRACT

Vision transformers (ViTs) have revolutionized computer vision by employing self-attention instead of convolutional neural networks, and have demonstrated success due to their ability to capture global dependencies and remove the spatial biases of locality. In medical imaging, where input data may differ in size and resolution, existing architectures require resampling or resizing during pre-processing, leading to potential spatial resolution loss and information degradation. This study proposes a co-ordinate-based embedding that encodes the geometry of medical images, capturing physical co-ordinate and resolution information without the need for resampling or resizing. The effectiveness of the proposed embedding is demonstrated through experiments with UNETR and SwinUNETR models for infarct segmentation on an MRI dataset with AxTrace and AxADC contrasts. The dataset consists of 1142 training, 133 validation and 143 test subjects. With the addition of the co-ordinate-based positional embedding, the two models achieved substantial improvements in mean Dice score of 6.5% and 7.6%, respectively. The proposed embedding showed a statistically significant advantage (p < 0.0001) over alternative approaches. In conclusion, the proposed co-ordinate-based pixel-wise positional embedding method offers a promising solution for Transformer-based models in medical image analysis. It effectively leverages physical co-ordinate information to enhance performance without compromising spatial resolution, and provides a foundation for future advancements in positional embedding techniques for medical applications.
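The co-ordinate-based idea rests on converting voxel indices into physical positions using the image's spacing and origin; a minimal sketch for an axis-aligned volume (real scanner geometry would use the full affine, including direction cosines, which this omits):

```python
def physical_coordinates(shape, spacing, origin):
    """Map every voxel index of an axis-aligned volume to a physical
    coordinate: coord = origin + index * spacing.

    shape   -- (nx, ny, nz) voxel counts
    spacing -- (sx, sy, sz) voxel size in mm
    origin  -- (ox, oy, oz) position of voxel (0, 0, 0) in mm
    Returns a dict {index_tuple: coordinate_tuple}.
    """
    coords = {}
    for i in range(shape[0]):
        for j in range(shape[1]):
            for k in range(shape[2]):
                coords[(i, j, k)] = (
                    origin[0] + i * spacing[0],
                    origin[1] + j * spacing[1],
                    origin[2] + k * spacing[2],
                )
    return coords
```

These physical coordinates, rather than bare indices, would feed the positional-embedding layer so the model remains aware of true resolution; the embedding network itself is not reproduced here.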


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer
4.
Front Neurosci ; 14: 260, 2020.
Article in English | MEDLINE | ID: mdl-32508558

ABSTRACT

Recent advances in deep learning have improved the segmentation accuracy of subcortical brain structures, which is useful in neuroimaging studies of many neurological disorders. However, most existing deep learning based approaches in neuroimaging do not investigate the specific difficulties of segmenting extremely small but important brain regions such as the subnuclei of the amygdala. To tackle this challenging task, we developed a dual-branch dilated residual 3D fully convolutional network with parallel convolutions to extract more global context and to alleviate the class imbalance issue by maintaining a small receptive field that is just the size of the regions of interest (ROIs). We also conducted multi-scale feature fusion in both parallel and series to compensate for the potential information loss during convolutions, which has been shown to be important for small objects. The serial feature fusion enabled by residual connections is further enhanced by a proposed top-down attention-guided refinement unit, in which high-resolution low-level spatial details are selectively integrated to complement the high-level but coarse semantic information, enriching the final feature representations. As a result, the segmentations produced by our method are more accurate both volumetrically and morphologically than those of other deep learning based approaches. To the best of our knowledge, this work is the first deep learning-based approach that targets the subregions of the amygdala. We also demonstrated the feasibility of using a cycle-consistent generative adversarial network (CycleGAN) to harmonize multi-site MRI data, and show that our method generalizes well to challenging traumatic brain injury (TBI) datasets collected from multiple centers. This appears to be a promising strategy for image segmentation in multi-site studies and in the presence of increased morphological variability from significant brain pathology.
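The design choice of matching the receptive field to the ROI size can be checked with the standard receptive-field formula for a stack of stride-1 dilated convolutions; a sketch (the layer configuration in the test is hypothetical, not the paper's architecture):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions.

    Each layer with kernel size k and dilation d widens the receptive
    field by d * (k - 1), starting from a single input voxel.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)
    return rf
```

Keeping this value close to the subnucleus size limits the amount of irrelevant background each output voxel sees, which is one way to mitigate the class imbalance the abstract describes.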

5.
Neuroimage Clin ; 25: 102183, 2020.
Article in English | MEDLINE | ID: mdl-32058319

ABSTRACT

The association of epilepsy with structural brain changes and cognitive abnormalities in midlife has raised concern about possible accelerated brain and cognitive aging and an increased risk of later-life neurocognitive disorders. To address this issue, we examined age-related processes in both structural and functional neuroimaging among individuals with temporal lobe epilepsy (TLE, N = 104) who were participants in the Epilepsy Connectome Project (ECP). Support vector regression (SVR) models were trained on 151 healthy controls and used to predict the TLE patients' brain ages. TLE patients on average had both older structural (+6.6 years) and functional (+8.3 years) brain ages compared with healthy controls. Accelerated functional brain age (functional minus chronological age) was mildly correlated (corrected P = 0.07) with complex partial seizure frequency and the number of anti-epileptic drugs taken. Functional brain age was a significant correlate of declining cognition (fluid abilities) and partially mediated the chronological age-fluid cognition relationship. Chronological age was the only positive predictor of crystallized cognition. Accelerated aging is evident not only in the structural brains of patients with TLE, but also in their functional brains. Understanding the causes of accelerated brain aging in TLE will be clinically important in order to potentially prevent or mitigate cognitive deficits.
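The brain-age approach trains a regressor on healthy controls and reads the prediction error on patients as a "brain-age gap". A minimal sketch with a one-feature least-squares model standing in for the study's SVR (the feature values below are hypothetical):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for a single feature: y ≈ a * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def brain_age_gap(model, feature, chronological_age):
    """Predicted brain age minus chronological age (positive = 'older')."""
    a, b = model
    return a * feature + b - chronological_age
```

In practice the model is fit on many imaging features from controls only, then applied to patients; a positive gap is the "accelerated aging" signal the abstract reports.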


Subjects
Aging, Premature; Cerebral Cortex; Cognitive Aging; Cognitive Dysfunction; Connectome/methods; Epilepsy, Temporal Lobe; Adult; Age Factors; Aging, Premature/diagnostic imaging; Aging, Premature/etiology; Aging, Premature/pathology; Aging, Premature/physiopathology; Cerebral Cortex/diagnostic imaging; Cerebral Cortex/pathology; Cerebral Cortex/physiopathology; Cognitive Aging/physiology; Cognitive Dysfunction/diagnostic imaging; Cognitive Dysfunction/etiology; Cognitive Dysfunction/pathology; Cognitive Dysfunction/physiopathology; Epilepsy, Temporal Lobe/complications; Epilepsy, Temporal Lobe/diagnostic imaging; Epilepsy, Temporal Lobe/pathology; Epilepsy, Temporal Lobe/physiopathology; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Support Vector Machine; Young Adult
6.
Brain Connect ; 9(2): 184-193, 2019 03.
Article in English | MEDLINE | ID: mdl-30803273

ABSTRACT

The National Institutes of Health-sponsored Epilepsy Connectome Project aims to characterize connectivity changes in temporal lobe epilepsy (TLE) patients. The magnetic resonance imaging protocol follows that used in the Human Connectome Project, and includes 20 min of resting-state functional magnetic resonance imaging acquired at 3T using 8-band multiband imaging. The Glasser parcellation atlas was combined with the FreeSurfer subcortical regions to generate resting-state functional connectivity (RSFC), amplitude of low-frequency fluctuations (ALFF), and fractional ALFF measures. Seven frequency ranges, including Slow-5 (0.01-0.027 Hz) and Slow-4 (0.027-0.073 Hz), were selected to compute these measures. The goal was to train machine learning classification models to discriminate TLE patients from healthy controls, and to determine which combination of resting-state measure and frequency range produced the best classification model. The sample included age- and gender-matched groups of 60 TLE patients and 59 healthy controls. Three traditional machine learning models were trained: support vector machine, linear discriminant analysis, and naive Bayes classifier. The highest classification accuracy was obtained using RSFC measures in the Slow-4 + 5 band (0.01-0.073 Hz) as features. Leave-one-out cross-validation accuracies were ∼83%, with the receiver operating characteristic area under the curve reaching close to 90%. Increased connectivity from right area posterior 9-46v in TLE patients contributed to the high accuracies. With increased sample sizes in the near future, better machine learning models will be trained, not only to aid the diagnosis of TLE but also as a tool to understand this brain disorder.
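The leave-one-out cross-validation scheme can be sketched as follows; a nearest-centroid classifier stands in for the study's SVM/LDA/naive Bayes models, and the feature vectors in the test are hypothetical:

```python
def nearest_centroid_predict(train_X, train_y, x):
    """Classify x by the closer class mean (a simple stand-in model)."""
    means = {}
    for label in set(train_y):
        pts = [xi for xi, yi in zip(train_X, train_y) if yi == label]
        means[label] = [sum(col) / len(pts) for col in zip(*pts)]
    def sq_dist(m):
        return sum((a - b) ** 2 for a, b in zip(x, m))
    return min(means, key=lambda lab: sq_dist(means[lab]))

def loocv_accuracy(X, y):
    """Leave-one-out cross-validation: hold out each subject in turn,
    train on the rest, and score the held-out prediction."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += nearest_centroid_predict(train_X, train_y, X[i]) == y[i]
    return correct / len(X)
```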


Subjects
Connectome/methods; Epilepsy, Temporal Lobe/diagnostic imaging; Epilepsy, Temporal Lobe/physiopathology; Adult; Bayes Theorem; Brain/physiopathology; Female; Functional Laterality; Hippocampus/physiopathology; Humans; Machine Learning; Magnetic Resonance Imaging/methods; Male; Middle Aged; Support Vector Machine; Temporal Lobe/physiopathology
7.
EJNMMI Phys ; 5(1): 24, 2018 Nov 12.
Article in English | MEDLINE | ID: mdl-30417316

ABSTRACT

BACKGROUND: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron-emission tomography (PET) image attenuation correction without anatomical imaging. A PET attenuation correction pipeline was developed utilizing deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected 18F-fluorodeoxyglucose (18F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using Dice coefficient and mean absolute error (MAE) and finally by comparing reconstructed PET images using the pseudo-CT and acquired CT for attenuation correction. Paired-sample t tests were used for statistical analysis to compare PET reconstruction error using deepAC with CT-based attenuation correction. RESULTS: deepAC produced pseudo-CTs with Dice coefficients of 0.80 ± 0.02 for air, 0.94 ± 0.01 for soft tissue, and 0.75 ± 0.03 for bone and MAE of 111 ± 16 HU relative to the PET/CT dataset. deepAC provides quantitatively accurate 18F-FDG PET results with average errors of less than 1% in most brain regions. CONCLUSIONS: We have developed an automated approach (deepAC) that allows generation of a continuously valued pseudo-CT from a single 18F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging.
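The pseudo-CT was evaluated against the acquired CT partly by mean absolute error (MAE) in Hounsfield units; the standard masked definition can be sketched as follows (an illustrative pure-Python version operating on flattened volumes):

```python
def masked_mae(pseudo_ct, reference_ct, mask):
    """Mean absolute error in Hounsfield units over voxels where mask is 1.

    All three arguments are equal-length flat lists (flattened volumes);
    the mask restricts the comparison to a region such as the head.
    """
    errors = [abs(p - r)
              for p, r, m in zip(pseudo_ct, reference_ct, mask) if m]
    return sum(errors) / len(errors)
```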

8.
Tomography ; 4(3): 138-147, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30320213

ABSTRACT

This study evaluated the feasibility of using only diagnostically relevant magnetic resonance (MR) images together with deep learning for positron emission tomography (PET)/MR attenuation correction (deepMRAC) in the pelvis. Such an approach could eliminate dedicated MRAC sequences that have limited diagnostic utility but can substantially lengthen acquisition times for multibed position scans. We used axial T2 and T1 LAVA Flex MR images that were acquired for diagnostic purposes as inputs to a 3D deep convolutional neural network. The network was trained to produce a discretized (air, water, fat, and bone) substitute computed tomography (CT) image (CTsub). Discretized (CTref-discrete) and continuously valued (CTref) reference CT images were created to serve as the ground truth for network training and attenuation correction, respectively. Training was performed with data from 12 subjects. CTsub, CTref, and the system MRAC were used for PET/MR attenuation correction, and quantitative PET values of the resulting images were compared in 6 test subjects. Overall, the network produced CTsub with Dice coefficients of 0.79 ± 0.03 for cortical bone, 0.98 ± 0.01 for soft tissue (fat: 0.94 ± 0.0; water: 0.88 ± 0.02), and 0.49 ± 0.17 for bowel gas when compared with CTref-discrete. The root mean square error of the whole PET image was 4.9% using deepMRAC and 11.6% using the system MRAC. In evaluating 16 soft tissue lesions, the distribution of errors for maximum standardized uptake value was significantly narrower using deepMRAC (-1.0% ± 1.3%) than using the system MRAC method (0.0% ± 6.4%) according to the Brown-Forsythe test (P < .05). These results indicate that improved PET/MR attenuation correction can be achieved in the pelvis using only diagnostically relevant MR images.
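Converting the discretized tissue classes into a substitute CT amounts to assigning each class a representative attenuation value in Hounsfield units; a sketch with nominal HU values (these are illustrative textbook-range choices, not the values used in the study):

```python
# Nominal HU per discrete tissue class -- illustrative values only,
# not the assignments used by the deepMRAC pipeline.
CLASS_TO_HU = {
    "air": -1000,
    "fat": -90,
    "water": 0,
    "bone": 700,
}

def classes_to_hu(class_map):
    """Convert a discrete tissue-class map into a substitute-CT HU map."""
    return [CLASS_TO_HU[c] for c in class_map]
```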

9.
Magn Reson Med ; 80(6): 2759-2770, 2018 12.
Article in English | MEDLINE | ID: mdl-29774599

ABSTRACT

PURPOSE: To describe and evaluate a new segmentation method using deep convolutional neural network (CNN), 3D fully connected conditional random field (CRF), and 3D simplex deformable modeling to improve the efficiency and accuracy of knee joint tissue segmentation. METHODS: A segmentation pipeline was built by combining a semantic segmentation CNN, 3D fully connected CRF, and 3D simplex deformable modeling. A convolutional encoder-decoder network was designed as the core of the segmentation method to perform high resolution pixel-wise multi-class tissue classification for 12 different joint structures. The 3D fully connected CRF was applied to regularize contextual relationship among voxels within the same tissue class and between different classes. The 3D simplex deformable modeling refined the output from 3D CRF to preserve the overall shape and maintain a desirable smooth surface for joint structures. The method was evaluated on 3D fast spin-echo (3D-FSE) MR image data sets. Quantitative morphological metrics were used to evaluate the accuracy and robustness of the method in comparison to the ground truth data. RESULTS: The proposed segmentation method provided good performance for segmenting all knee joint structures. There were 4 tissue types with high mean Dice coefficient above 0.9 including the femur, tibia, muscle, and other non-specified tissues. There were 7 tissue types with mean Dice coefficient between 0.8 and 0.9 including the femoral cartilage, tibial cartilage, patella, patellar cartilage, meniscus, quadriceps and patellar tendon, and infrapatellar fat pad. There was 1 tissue type with mean Dice coefficient between 0.7 and 0.8 for joint effusion and Baker's cyst. Most musculoskeletal tissues had a mean value of average symmetric surface distance below 1 mm. CONCLUSION: The combined CNN, 3D fully connected CRF, and 3D deformable modeling approach was well-suited for performing rapid and accurate comprehensive tissue segmentation of the knee joint. 
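The average symmetric surface distance reported above can be computed from two surface point sets; a minimal brute-force sketch (real pipelines would use distance transforms for speed, which this does not):

```python
def _mean_min_dist(src, dst):
    """Mean over src points of the distance to the nearest dst point."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return sum(min(dist(p, q) for q in dst) for p in src) / len(src)

def average_symmetric_surface_distance(surf_a, surf_b):
    """ASSD: average of the two directed mean surface distances."""
    return 0.5 * (_mean_min_dist(surf_a, surf_b) +
                  _mean_min_dist(surf_b, surf_a))
```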
The deep learning-based segmentation method has promising potential applications in musculoskeletal imaging.


Subjects
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Knee/diagnostic imaging; Neural Networks, Computer; Aged; Bone and Bones/diagnostic imaging; Cartilage/diagnostic imaging; Deep Learning; Female; Humans; Knee Joint/diagnostic imaging; Male; Middle Aged; Reproducibility of Results; Tomography, X-Ray Computed
10.
Med Phys ; 2018 May 15.
Article in English | MEDLINE | ID: mdl-29763997

ABSTRACT

PURPOSE: In this study, we explore the feasibility of a novel framework for MR-based attenuation correction for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo CT image based on ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. METHODS: MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a short single acquisition (35 s). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on eight human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. RESULTS: Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76 ± 0.03, 0.96 ± 0.006, and 0.88 ± 0.01. In PET quantitation, the proposed MRAC method produced relative PET errors of less than 1% within most brain regions. CONCLUSION: The proposed MRAC method utilizing deep learning with transfer learning and an efficient dRHE acquisition enables reliable PET quantitation with accurate and rapid pseudo CT generation.
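The two-point Dixon separation mentioned above follows from modeling the in-phase signal as water + fat and the out-of-phase signal as water - fat; an idealized per-voxel sketch (ignoring the B0-inhomogeneity and phase corrections needed in practice):

```python
def two_point_dixon(in_phase, out_of_phase):
    """Idealized two-point Dixon water/fat separation (per voxel).

    Assumes in_phase = water + fat and out_of_phase = water - fat,
    so water = (IP + OP) / 2 and fat = (IP - OP) / 2.
    """
    water = [(ip + op) / 2.0 for ip, op in zip(in_phase, out_of_phase)]
    fat = [(ip - op) / 2.0 for ip, op in zip(in_phase, out_of_phase)]
    return water, fat
```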

11.
Neuroimage ; 175: 32-44, 2018 07 15.
Article in English | MEDLINE | ID: mdl-29604454

ABSTRACT

Brain extraction or skull stripping of magnetic resonance images (MRI) is an essential step in neuroimaging studies, the accuracy of which can severely affect subsequent image processing procedures. Current automatic brain extraction methods demonstrate good results on human brains, but are often far from satisfactory on nonhuman primates, which are a necessary part of neuroscience research. To overcome the challenges of brain extraction in nonhuman primates, we propose a fully automated brain extraction pipeline combining a deep Bayesian convolutional neural network (CNN) and a fully connected three-dimensional (3D) conditional random field (CRF). The deep Bayesian CNN, Bayesian SegNet, is used as the core segmentation engine. As a probabilistic network, it is not only able to perform accurate high-resolution pixel-wise brain segmentation, but is also capable of measuring the model uncertainty by Monte Carlo sampling with dropout at the testing stage. A fully connected 3D CRF is then used to refine the probability result from Bayesian SegNet in the whole 3D context of the brain volume. The proposed method was evaluated with a manually brain-extracted dataset comprising T1w images of 100 nonhuman primates. Our method outperforms six popular publicly available brain extraction packages and three well-established deep learning based methods, with a mean Dice coefficient of 0.985 and a mean average symmetric surface distance of 0.220 mm. Superior performance relative to all the compared methods was verified by statistical tests (all p-values < 10⁻⁴, two-sided, Bonferroni corrected). The maximum uncertainty of the model on nonhuman primate brain extraction has a mean value of 0.116 across all 100 subjects. The behavior of the uncertainty was also studied, showing that uncertainty increases as the training set size decreases, as the number of inconsistent labels in the training set increases, or as the inconsistency between the training set and the testing set increases.
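Monte Carlo sampling with dropout at test time can be illustrated on a toy linear model (not Bayesian SegNet): dropout stays active during inference, and the spread of repeated stochastic predictions serves as the uncertainty estimate.

```python
import random

def mc_dropout_predict(weights, x, n_samples=200, p_drop=0.5, seed=0):
    """Monte Carlo dropout on a toy linear 'network'.

    Each pass randomly zeroes weights with probability p_drop (scaling
    survivors by 1 / (1 - p_drop)); the mean over passes is the
    prediction and the variance is an uncertainty estimate.
    """
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        y = sum(
            w * xi / (1.0 - p_drop)
            for w, xi in zip(weights, x)
            if rng.random() >= p_drop
        )
        outputs.append(y)
    mean = sum(outputs) / n_samples
    var = sum((o - mean) ** 2 for o in outputs) / n_samples
    return mean, var
```

In the segmentation setting this is done per voxel over full forward passes; the paper's finding that uncertainty tracks training-set size and label consistency is a property of the real model, not of this sketch.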


Subjects
Brain/diagnostic imaging; Deep Learning; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neuroimaging/methods; Animals; Bayes Theorem; Female; Macaca mulatta; Male
12.
Magn Reson Med ; 79(4): 2379-2391, 2018 04.
Article in English | MEDLINE | ID: mdl-28733975

ABSTRACT

PURPOSE: To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. METHODS: A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high-resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirable smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. RESULTS: The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to most of the state-of-the-art methods on the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. CONCLUSION: The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.


Subjects
Imaging, Three-Dimensional; Knee/diagnostic imaging; Magnetic Resonance Imaging; Neural Networks, Computer; Algorithms; Bone and Bones/diagnostic imaging; Cartilage/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Pattern Recognition, Automated; Reproducibility of Results; Software