Results 1 - 20 of 23
1.
Cancer Cell; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38942025

ABSTRACT

Global investigation of medulloblastoma has been hindered by the widespread inaccessibility of molecular subgroup testing and a paucity of data. To bridge this gap, we established an international molecularly characterized database encompassing 934 medulloblastoma patients from thirteen centers across China and the United States. We demonstrate how image-based machine learning strategies can create an alternative pathway for non-invasive, presurgical, and low-cost molecular subgroup prediction in the clinical management of medulloblastoma. Our robust validation strategies, including cross-validation, external validation, and consecutive validation, demonstrate the model's efficacy as a generalizable molecular diagnosis classifier. A detailed analysis of MRI characteristics enriches the understanding of medulloblastoma through a nuanced radiographic lens. Additionally, comparisons between the East Asian and North American subsets highlight critical management implications. We have made this comprehensive dataset, which includes MRI signatures, clinicopathological features, treatment variables, and survival data, publicly available to advance global medulloblastoma research.
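The validation strategy described above (internal cross-validation plus held-out external cohorts) can be sketched as follows; the fold count and the pure-Python splitter are illustrative, not the paper's implementation.

```python
# Sketch of k-fold cross-validation on an internal cohort. The fold count
# (5) is an assumption for illustration; only the cohort size (934) comes
# from the abstract.

def kfold_indices(n, k):
    """Partition range(n) into k disjoint folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(n, k):
    """Yield (train, test) index splits for k-fold cross-validation."""
    folds = kfold_indices(n, k)
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

internal_n = 934                       # size of the internal cohort
splits = list(cross_validate(internal_n, 5))
```

An external cohort would be kept entirely outside these splits and scored once with the final model.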

2.
Sci Data; 10(1): 489, 2023 07 27.
Article in English | MEDLINE | ID: mdl-37500686

ABSTRACT

Brain magnetic resonance imaging (MRI) provides detailed soft tissue contrasts that are critical for disease diagnosis and neuroscience research. Higher MRI resolution typically comes at the cost of signal-to-noise ratio (SNR) and tissue contrast, particularly for more common 3 Tesla (3T) MRI scanners. At ultra-high magnetic field strength, 7 Tesla (7T) MRI allows for higher resolution with greater tissue contrast and SNR. However, the prohibitively high costs of 7T MRI scanners deter their widespread adoption in clinical and research centers. To obtain higher-quality images without 7T MRI scanners, algorithms that can synthesize 7T MR images from 3T MR images are under active development. Here, we make available a dataset of paired T1-weighted and T2-weighted MR images at 3T and 7T of 10 healthy subjects to facilitate the development and evaluation of 3T-to-7T MR image synthesis models. The quality of the dataset is assessed using image quality metrics implemented in MRIQC.
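Image quality assessment of the kind MRIQC performs boils down to computing scalar metrics per scan. A minimal sketch of one such metric, signal-to-noise ratio from a tissue region and a background region, with toy voxel values (not MRIQC's exact estimator):

```python
# SNR estimated as mean signal over the standard deviation of a background
# region. The voxel lists are toy stand-ins for MR volumes.
import statistics

def snr(signal_values, background_values):
    """SNR = mean(signal) / population std(background)."""
    return statistics.fmean(signal_values) / statistics.pstdev(background_values)

signal = [100.0, 102.0, 98.0, 101.0]   # voxels inside tissue
background = [1.0, -1.0, 2.0, -2.0]    # air/background voxels
value = snr(signal, background)
```

Higher field strength (7T) generally raises this ratio for a fixed resolution, which is the motivation stated in the abstract.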


Subjects
Brain, Magnetic Resonance Imaging, Humans, Algorithms, Benchmarking, Brain/diagnostic imaging, Magnetic Resonance Imaging/methods, Signal-to-Noise Ratio
3.
Radiol Artif Intell; 5(3): e220246, 2023 May.
Article in English | MEDLINE | ID: mdl-37293349

ABSTRACT

Purpose: To develop a deep learning approach that enables ultra-low-dose (1% of the standard clinical dosage of 3 MBq/kg), ultrafast whole-body PET reconstruction in cancer imaging. Materials and Methods: In this Health Insurance Portability and Accountability Act-compliant study, serial fluorine 18-labeled fluorodeoxyglucose PET/MRI scans of pediatric patients with lymphoma were retrospectively collected from two cross-continental medical centers between July 2015 and March 2020. Global similarity between baseline and follow-up scans was used to develop Masked-LMCTrans, a longitudinal multimodality coattentional convolutional neural network (CNN) transformer that provides interaction and joint reasoning between serial PET/MRI scans from the same patient. Image quality of the reconstructed ultra-low-dose PET was evaluated in comparison with a simulated standard 1% PET image. The performance of Masked-LMCTrans was compared with that of CNNs with pure convolution operations (the classic U-Net family), and the effect of different CNN encoders on feature representation was assessed. Statistical differences in the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and visual information fidelity (VIF) were assessed with the Wilcoxon signed rank test. Results: The study included 21 patients (mean age, 15 years ± 7 [SD]; 12 female) in the primary cohort and 10 patients (mean age, 13 years ± 4; six female) in the external test cohort. Masked-LMCTrans-reconstructed follow-up PET images demonstrated significantly less noise and more detailed structure compared with simulated 1% extremely ultra-low-dose PET images. SSIM, PSNR, and VIF were significantly higher for Masked-LMCTrans-reconstructed PET (P < .001), with improvements of 15.8%, 23.4%, and 186%, respectively.
Conclusion: Masked-LMCTrans achieved high-quality reconstruction of 1% low-dose whole-body PET images. Keywords: Pediatrics, PET, Convolutional Neural Network (CNN), Dose Reduction. Supplemental material is available for this article. © RSNA, 2023.
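As a concrete illustration of one of the metrics above, a minimal PSNR implementation on toy 2 x 2 "images" (illustrative values only, not the paper's data):

```python
# Peak signal-to-noise ratio in dB between a reference image and a
# reconstruction, from the mean squared error.
import math

def psnr(reference, reconstructed, max_value=1.0):
    """PSNR = 10 * log10(max_value^2 / MSE) over equal-size 2-D images."""
    flat_ref = [v for row in reference for v in row]
    flat_rec = [v for row in reconstructed for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_rec)) / len(flat_ref)
    return 10.0 * math.log10(max_value ** 2 / mse)

ref = [[0.0, 0.5], [0.5, 1.0]]
rec = [[0.1, 0.5], [0.5, 0.9]]   # small reconstruction error
db = psnr(ref, rec)
```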

4.
IEEE Trans Med Imaging; 42(7): 1932-1943, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37018314

ABSTRACT

The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
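The masked-image-modeling pre-training described above starts by hiding a random subset of image patches; a minimal sketch of that masking step, with an illustrative patch count and mask ratio (not the paper's settings):

```python
# Randomly split patch indices into masked and visible sets, as in masked
# image modeling: the model sees only the visible patches and reconstructs
# the masked ones. Patch count and ratio below are assumptions.
import random

def mask_patches(num_patches, mask_ratio, seed=0):
    """Return (masked_ids, visible_ids) index lists over num_patches patches."""
    rng = random.Random(seed)
    ids = list(range(num_patches))
    rng.shuffle(ids)
    cut = int(num_patches * mask_ratio)
    return sorted(ids[:cut]), sorted(ids[cut:])

masked, visible = mask_patches(num_patches=196, mask_ratio=0.75)
```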


Subjects
Algorithms, Diagnostic Imaging, Radiography, Retina
5.
Eur J Nucl Med Mol Imaging; 50(5): 1337-1350, 2023 04.
Article in English | MEDLINE | ID: mdl-36633614

ABSTRACT

PURPOSE: To provide a holistic and complete comparison of the five most advanced AI models for the augmentation of low-dose 18F-FDG PET data over the entire dose reduction spectrum. METHODS: In this multicenter study, five AI models were investigated for restoring low-count whole-body PET/MRI, covering convolutional benchmarks (U-Net, the enhanced deep super-resolution network (EDSR), and the generative adversarial network (GAN)) and the most advanced image reconstruction transformer models in computer vision to date (the Swin transformer image restoration network (SwinIR) and EDSR-ViT, a vision transformer). The models were evaluated against six count levels representing simulated 75%, 50%, 25%, 12.5%, 6.25%, and 1% (extremely ultra-low-count) fractions of the clinical standard 3 MBq/kg 18F-FDG dose. The comparisons were performed on two independent cohorts, (1) a primary cohort from Stanford University and (2) a cross-continental external validation cohort from Tübingen University, to ensure that the findings are generalizable. A total of 476 original-count and simulated low-count whole-body PET/MRI scans were incorporated into this analysis. RESULTS: For low-count PET restoration on the primary cohort, the mean structural similarity index (SSIM) scores at the 6.25% dose were 0.898 (95% CI, 0.887-0.910) for EDSR, 0.893 (0.881-0.905) for EDSR-ViT, 0.873 (0.859-0.887) for GAN, 0.885 (0.873-0.898) for U-Net, and 0.910 (0.900-0.920) for SwinIR. SwinIR and U-Net were then evaluated separately at each simulated radiotracer dose level. On the primary Stanford cohort, the mean diagnostic image quality (DIQ; 5-point Likert scale) scores of SwinIR restoration were 5 (SD, 0) for dose 75%, 4.50 (0.535) for dose 50%, 3.75 (0.463) for dose 25%, 3.25 (0.463) for dose 12.5%, 4 (0.926) for dose 6.25%, and 2.5 (0.534) for dose 1%.
CONCLUSION: Compared with low-count PET images, which are near-nondiagnostic or nondiagnostic at higher dose reduction levels (up to 6.25%), both SwinIR and U-Net significantly improve the diagnostic quality of PET images. A radiotracer dose reduction to 1% of the current clinical standard is out of scope for current AI techniques.
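Low-count levels like those above are commonly simulated from full-count data by thinning detected events, keeping each event with probability equal to the dose fraction. A hedged sketch with synthetic counts (the study's actual simulation pipeline may differ):

```python
# Binomial thinning: each of a voxel's detected events survives with
# probability equal to the target dose fraction. Counts are synthetic.
import random

def thin_counts(counts, fraction, seed=0):
    """Binomially thin a list of per-voxel event counts to a dose fraction."""
    rng = random.Random(seed)
    return [sum(1 for _ in range(c) if rng.random() < fraction) for c in counts]

full = [50] * 2000                        # 100,000 synthetic events in total
low = thin_counts(full, fraction=0.0625)  # simulate the 6.25% dose level
ratio = sum(low) / sum(full)              # should land near 0.0625
```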


Subjects
Artificial Intelligence, Fluorodeoxyglucose F18, Humans, Drug Tapering, Positron-Emission Tomography/methods, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods
6.
IEEE Trans Neural Netw Learn Syst; 34(8): 4990-5001, 2023 08.
Article in English | MEDLINE | ID: mdl-34874872

ABSTRACT

Segmenting breast tumors from dynamic contrast-enhanced magnetic resonance (DCE-MR) images is a critical step for early detection and diagnosis of breast cancer. However, the variable shapes and sizes of breast tumors, as well as inhomogeneous backgrounds, make it challenging to accurately segment tumors in DCE-MR images. In this article, we therefore propose a novel tumor-sensitive synthesis module and demonstrate its use when integrated with tumor segmentation. To suppress false-positive segmentations with contrast enhancement characteristics similar to those of true breast tumors, our tumor-sensitive synthesis module feeds back a differential loss between true and false breast tumors. Thus, by appending the tumor-sensitive synthesis module to the segmentation predictions, false breast tumors with contrast enhancement characteristics similar to the true ones are effectively reduced in the learned segmentation model. Moreover, the synthesis module also helps improve boundary accuracy, since inaccurate predictions near the boundary incur a higher loss. For evaluation, we built a large-scale breast DCE-MR image dataset with 422 subjects, and conducted comprehensive experiments and comparisons with other algorithms to justify the effectiveness, adaptability, and robustness of the proposed method.
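Segmentation performance in studies like this is typically scored with an overlap measure; a minimal Dice coefficient sketch on toy binary masks (the Dice score is a standard choice, not necessarily the paper's exact metric):

```python
# Dice overlap between two binary masks: 2 * |A ∩ B| / (|A| + |B|).
# Masks are flat 0/1 lists standing in for voxel grids.

def dice(pred, target):
    """Dice coefficient between two equal-length flat 0/1 masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 0]
score = dice(pred, target)
```

A false-positive region lowers this score by inflating `sum(pred)` without adding to the intersection, which is the behavior the synthesis module's feedback loss targets.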


Subjects
Breast Neoplasms, Neural Networks, Computer, Humans, Female, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Breast/pathology, Algorithms, Magnetic Resonance Imaging/methods
7.
IEEE J Biomed Health Inform; 26(9): 4635-4644, 2022 09.
Article in English | MEDLINE | ID: mdl-35749336

ABSTRACT

Federated learning is an emerging research paradigm for collaboratively training deep learning models without sharing patient data. However, data are usually heterogeneous across institutions, which may reduce the performance of models trained using federated learning. In this study, we propose a novel heterogeneity-aware federated learning method, SplitAVG, to overcome the performance drops caused by data heterogeneity in federated learning. Unlike previous federated methods that require complex heuristic training or hyperparameter tuning, SplitAVG leverages simple network splitting and feature map concatenation strategies to encourage the federated model to train an unbiased estimator of the target data distribution. We compare SplitAVG with seven state-of-the-art federated learning methods, using centrally hosted training data as the baseline, on a suite of both synthetic and real-world federated datasets. We find that the performance of models trained using all the comparison federated learning methods degrades significantly with increasing degrees of data heterogeneity. In contrast, SplitAVG achieves results comparable to the baseline under all heterogeneous settings: on highly heterogeneous data partitions, it achieves 96.2% of the accuracy obtained by the baseline on a diabetic retinopathy binary classification dataset and 110.4% of the baseline's mean absolute error on a bone age prediction dataset. We conclude that SplitAVG can effectively overcome the performance drops caused by variability in data distributions across institutions. Experimental results also show that SplitAVG can be adapted to different base convolutional neural networks (CNNs) and generalized to various types of medical imaging tasks. The code is publicly available at https://github.com/zm17943/SplitAVG.
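The network-split-and-concatenate idea behind SplitAVG can be caricatured with toy linear "layers": each client runs the front half locally and the server concatenates the resulting feature maps before the shared back half. All names and numbers below are illustrative, not the SplitAVG implementation:

```python
# Toy split network: clients compute features locally; the server
# concatenates feature maps (rather than averaging model weights) and
# applies a shared head. Linear ops stand in for real CNN blocks.

def client_front(x, weight):
    """Client-side front half: a toy linear feature extractor."""
    return [weight * v for v in x]

def server_back(concat_features, weight):
    """Server-side back half: a toy linear head over concatenated features."""
    return weight * sum(concat_features) / len(concat_features)

# Two clients with heterogeneous local data.
features_a = client_front([1.0, 2.0, 3.0], weight=0.5)
features_b = client_front([10.0, 20.0, 30.0], weight=0.5)

# The server sees a concatenation spanning both local distributions.
prediction = server_back(features_a + features_b, weight=2.0)
```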


Subjects
Deep Learning, Diagnostic Imaging, Humans, Neural Networks, Computer, Radiography
8.
Nat Nanotechnol; 17(6): 653-660, 2022 06.
Article in English | MEDLINE | ID: mdl-35606441

ABSTRACT

Light scattering by biological tissues sets a limit on the penetration depth of high-resolution optical microscopy imaging of live mammals in vivo. An effective approach to reduce light scattering and increase imaging depth is to extend the excitation and emission wavelengths into the second near-infrared window (NIR-II) at >1,000 nm, also called the short-wavelength infrared window. Here we show biocompatible core-shell lead sulfide/cadmium sulfide quantum dots emitting at ~1,880 nm and superconducting nanowire single-photon detectors for single-photon detection up to 2,000 nm, enabling a one-photon excitation fluorescence imaging window in the 1,700-2,000 nm (NIR-IIc) range with 1,650 nm excitation, the longest one-photon excitation and emission for in vivo mouse imaging so far. Confocal fluorescence imaging in NIR-IIc reached an imaging depth of ~1,100 µm through an intact mouse head, and enabled non-invasive cellular-resolution imaging in the inguinal lymph nodes of mice without any surgery. We achieve in vivo molecular imaging of high endothelial venules with diameters as small as ~6.6 µm, as well as of CD169+ macrophages and CD3+ T cells in the lymph nodes, opening the possibility of non-invasive intravital imaging of immune trafficking in lymph nodes at the single-cell/vessel level longitudinally.


Subjects
Nanowires, Quantum Dots, Animals, Mammals, Mice, Microscopy, Fluorescence/methods, Optical Imaging/methods, Photons, Quantum Dots/chemistry
9.
Med Image Anal; 78: 102424, 2022 05.
Article in English | MEDLINE | ID: mdl-35390737

ABSTRACT

Collaborative learning, which enables collaborative and decentralized training of deep neural networks at multiple institutions in a privacy-preserving manner, is rapidly emerging as a valuable technique in healthcare applications. However, its distributed nature often leads to significant heterogeneity in data distributions across institutions. In this paper, we present a novel generative replay strategy to address the challenge of data heterogeneity in collaborative learning. Unlike traditional methods that directly aggregate model parameters, we leverage generative adversarial learning to aggregate knowledge from all the local institutions. Specifically, instead of directly training a model for task performance, we develop a novel dual-model architecture: a primary model learns the desired task, and an auxiliary "generative replay model" aggregates knowledge from the heterogeneous clients. The auxiliary model is then broadcast to the central server to regulate the training of the primary model with an unbiased target distribution. Experimental results demonstrate the capability of the proposed method in handling heterogeneous data across institutions. On highly heterogeneous data partitions, our model achieves a ∼4.88% improvement in prediction accuracy on a diabetic retinopathy classification dataset and a ∼49.8% reduction in mean absolute error on a bone age prediction dataset, compared to state-of-the-art collaborative learning methods.


Subjects
Interdisciplinary Practices, Diagnostic Imaging, Humans, Neural Networks, Computer, Radiography
10.
Proc Natl Acad Sci U S A; 119(15): e2123111119, 2022 04 12.
Article in English | MEDLINE | ID: mdl-35380898

ABSTRACT

In vivo fluorescence/luminescence imaging in the near-infrared-IIb (NIR-IIb, 1,500 to 1,700 nm) window under <1,000 nm excitation can afford subcentimeter imaging depth without any tissue autofluorescence, promising high-precision intraoperative navigation in the clinic. Here, we developed a compact imager for concurrent visible photographic and NIR-II (1,000 to 3,000 nm) fluorescence imaging for preclinical image-guided surgery. Biocompatible erbium-based rare-earth nanoparticles (ErNPs) with bright down-conversion luminescence in the NIR-IIb window were conjugated to TRC105 antibody for molecular imaging of CD105 angiogenesis markers in 4T1 murine breast tumors. Under a ∼940 ± 38 nm light-emitting diode (LED) excitation, NIR-IIb imaging of 1,500- to 1,700-nm emission afforded noninvasive tumor-to-normal tissue (T/NT) signal ratios of ∼40 before surgery and an ultrahigh intraoperative tumor-to-muscle (T/M) ratio of ∼300, resolving tumor margin unambiguously without interfering background signal from surrounding healthy tissues. High-resolution imaging resolved small numbers of residual cancer cells during surgery, allowing thorough and nonexcessive tumor removal at the few-cell level. NIR-IIb molecular imaging afforded 10-times-higher and 100-times-higher T/NT and T/M ratios, respectively, than imaging with IRDye800CW-TRC105 in the ∼900- to 1,300-nm range. The vastly improved resolution of tumor margin and diminished background open a paradigm of molecular imaging-guided surgery.


Subjects
Erbium, Mammary Neoplasms, Experimental, Metal Nanoparticles, Optical Imaging, Spectroscopy, Near-Infrared, Surgery, Computer-Assisted, Animals, Antibodies, Monoclonal/chemistry, Antibodies, Monoclonal/immunology, Fluorescence, Fluorescent Dyes/chemistry, Mammary Neoplasms, Experimental/diagnostic imaging, Mammary Neoplasms, Experimental/surgery, Mice, Neoplasm, Residual/diagnostic imaging, Optical Imaging/methods, Spectroscopy, Near-Infrared/methods, Surgery, Computer-Assisted/methods
11.
Article in English | MEDLINE | ID: mdl-36624800

ABSTRACT

Federated learning is an emerging research paradigm enabling collaborative training of machine learning models among different organizations while keeping data private at each institution. Despite recent progress, there remain fundamental challenges such as the lack of convergence and the potential for catastrophic forgetting across real-world heterogeneous devices. In this paper, we demonstrate that self-attention-based architectures (e.g., Transformers) are more robust to distribution shifts and hence improve federated learning over heterogeneous data. Concretely, we conduct the first rigorous empirical investigation of different neural architectures across a range of federated algorithms, real-world benchmarks, and heterogeneous data splits. Our experiments show that simply replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices, accelerate convergence, and reach a better global model, especially when dealing with heterogeneous data. We release our code and pretrained models to encourage future exploration in robust architectures as an alternative to current research efforts on the optimization front.
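Among the federated algorithms such a study evaluates, the canonical FedAvg aggregates client models by a data-size-weighted average of their parameters; a minimal sketch on toy flat parameter vectors:

```python
# FedAvg aggregation: the global model is the average of client parameter
# vectors weighted by each client's dataset size. The weights below are
# toy flat vectors, not real Transformer or CNN parameters.

def fedavg(client_weights, client_sizes):
    """Data-size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

global_model = fedavg(
    client_weights=[[1.0, 2.0], [3.0, 4.0]],
    client_sizes=[1, 3],
)
```

The paper's claim is architectural: with heterogeneous clients, this same aggregation behaves better when the averaged parameters belong to a self-attention model than to a convolutional one.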

12.
Pattern Recognit; 124, 2022 Apr.
Article in English | MEDLINE | ID: mdl-38469076

ABSTRACT

Accurate segmentation of the brain into gray matter, white matter, and cerebrospinal fluid using magnetic resonance (MR) imaging is critical for visualization and quantification of brain anatomy. Compared to 3T MR images, 7T MR images exhibit higher tissue contrast, which is conducive to accurate tissue delineation for training segmentation models. In this paper, we propose a cascaded nested network (CaNes-Net) for segmentation of 3T brain MR images, trained on tissue labels delineated from the corresponding 7T images. We first train a nested network (Nes-Net) for a rough segmentation. A second Nes-Net uses tissue-specific geodesic distance maps as contextual information to refine the segmentation. This process is iterated to build CaNes-Net as a cascade of Nes-Net modules that gradually refine the segmentation. To alleviate misalignment between the 3T and corresponding 7T MR images, we incorporate a correlation coefficient map that allows well-aligned voxels to play a more important role in supervising the training process. We compared CaNes-Net with the SPM and FSL tools, as well as four deep learning models, on 18 adult subjects and the ADNI dataset. Our results indicate that CaNes-Net reduces segmentation errors caused by misalignment and substantially improves segmentation accuracy over the competing methods.
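The tissue-specific geodesic distance maps used as context above can be approximated, in spirit, by an in-mask breadth-first-search distance transform; the toy 2-D grid below is illustrative, not the paper's implementation:

```python
# BFS distance map inside a tissue mask: for each in-mask cell, the number
# of in-mask steps to the nearest seed cell. A stand-in for a true geodesic
# distance transform on a toy 2-D grid.
from collections import deque

def distance_map(mask, seeds):
    """BFS distances within mask (1 = tissue); unreachable cells get None."""
    rows, cols = len(mask), len(mask[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r, c in seeds:
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and mask[nr][nc] == 1 and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

mask = [[1, 1, 0],
        [1, 1, 1],
        [0, 1, 1]]
dmap = distance_map(mask, seeds=[(0, 0)])
```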

13.
Proc Natl Acad Sci U S A; 118(6), 2021 02 09.
Article in English | MEDLINE | ID: mdl-33526701

ABSTRACT

Noninvasive optical imaging with deep tissue penetration and high spatiotemporal resolution is important for longitudinally studying biology at the single-cell level in live mammals, but has been challenging due to light scattering. Here, we developed near-infrared II (NIR-II) (1,000 to 1,700 nm) structured-illumination light-sheet microscopy (NIR-II SIM) with ultralong excitation and emission wavelengths up to ∼1,540 and ∼1,700 nm, respectively, suppressing light scattering to afford large volumetric three-dimensional (3D) imaging of tissues with deep-axial penetration depths. Integrating structured illumination into NIR-II light-sheet microscopy further diminished background and improved spatial resolution by approximately twofold. In vivo oblique NIR-II SIM was performed noninvasively for 3D volumetric multiplexed molecular imaging of the CT26 tumor microenvironment in mice, longitudinally mapping out CD4, CD8, and OX40 at the single-cell level in response to immunotherapy by cytosine-phosphate-guanine (CpG), a Toll-like receptor 9 (TLR-9) agonist, combined with OX40 antibody treatment. NIR-II SIM affords an additional tool for noninvasive volumetric molecular imaging of immune cells in live mammals.


Subjects
Imaging, Three-Dimensional, Optical Imaging/methods, Single-Cell Analysis, Toll-Like Receptor 9/isolation & purification, Animals, Cell Line, Tumor, Cellular Microenvironment/genetics, Mice, Microscopy, Fluorescence/methods, Toll-Like Receptor 9/genetics
14.
IEEE J Biomed Health Inform; 25(2): 514-525, 2021 02.
Article in English | MEDLINE | ID: mdl-32750912

ABSTRACT

Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal (GI) tract diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have recently been developed to jointly perform feature learning and model training for GI tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may cause irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation in GI tract endoscopy images, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. We then design two cascaded local subnetworks based on the output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. The feature maps learned by the three subnetworks are then fused for the subsequent task of lesion segmentation. We evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormality segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves [Formula: see text] and [Formula: see text] mean intersection over union (mIoU) on the two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation of GI tract endoscopy images.
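The mIoU score reported above is the per-class intersection-over-union averaged across classes; a minimal sketch on toy binary masks:

```python
# Mean intersection-over-union: IoU = |A ∩ B| / |A ∪ B| per class,
# averaged across classes. Masks are flat 0/1 lists (toy data).

def iou(pred, target):
    """Intersection over union for equal-length flat 0/1 masks."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def mean_iou(preds, targets):
    """Average IoU across classes (one mask pair per class)."""
    scores = [iou(p, t) for p, t in zip(preds, targets)]
    return sum(scores) / len(scores)

preds   = [[1, 1, 0, 0], [0, 0, 1, 1]]   # predicted masks for two classes
targets = [[1, 0, 0, 0], [0, 1, 1, 1]]   # ground-truth masks
score = mean_iou(preds, targets)
```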


Subjects
Image Processing, Computer-Assisted, Neural Networks, Computer, Endoscopy, Gastrointestinal Tract/diagnostic imaging, Humans
15.
J Magn Reson Imaging; 52(6): 1852-1858, 2020 12.
Article in English | MEDLINE | ID: mdl-32656955

ABSTRACT

BACKGROUND: A generative adversarial network could be used for high-resolution (HR) medical image synthesis with reduced scan time. PURPOSE: To evaluate the potential of using a deep convolutional generative adversarial network (DCGAN) for generating HRpre and HRpost images based on their corresponding low-resolution (LR) images (LRpre and LRpost). STUDY TYPE: Retrospective analysis of a prospectively acquired cohort. POPULATION: In all, 224 subjects were randomly divided into a 200-subject training set and an independent 24-subject test set. FIELD STRENGTH/SEQUENCE: Dynamic contrast-enhanced (DCE) MRI with a 1.5T scanner. ASSESSMENT: Three breast radiologists independently ranked the image datasets, using the DCE images as the ground truth, and reviewed the image quality of both the original LR images and the generated HR images. The BI-RADS category and conspicuity of lesions were also ranked. The inter/intracorrelation coefficients (ICCs) of the mean image quality scores, lesion conspicuity scores, and Breast Imaging Reporting and Data System (BI-RADS) categories were calculated between the three readers. STATISTICAL TESTS: Wilcoxon signed-rank tests evaluated differences among the multireader ranking scores. RESULTS: The mean overall image quality scores of the generated HRpre and HRpost were significantly higher than those of the original LRpre and LRpost (4.77 ± 0.41 vs. 3.27 ± 0.43 and 4.72 ± 0.44 vs. 3.23 ± 0.43, respectively; P < 0.0001 in the multireader study). The mean lesion conspicuity scores of the generated HRpre and HRpost were significantly higher than those of the original LRpre and LRpost (4.18 ± 0.70 vs. 3.49 ± 0.58 and 4.35 ± 0.59 vs. 3.48 ± 0.61, respectively; P < 0.001 in the multireader study). The ICCs of the image quality scores, lesion conspicuity scores, and BI-RADS categories showed good agreement among the three readers (all ICCs >0.75).
DATA CONCLUSION: DCGAN was capable of generating HR images of the breast from fast pre- and postcontrast LR images and achieved superior quantitative and qualitative performance in a multireader study. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY STAGE: 2. J. MAGN. RESON. IMAGING 2020;52:1852-1858.


Subjects
Breast, Magnetic Resonance Imaging, Breast/diagnostic imaging, Neural Networks, Computer, Radiography, Retrospective Studies
16.
Med Image Anal; 62: 101663, 2020 05.
Article in English | MEDLINE | ID: mdl-32120269

ABSTRACT

Ultra-high field 7T MRI scanners, while producing images with exceptional anatomical details, are cost prohibitive and hence highly inaccessible. In this paper, we introduce a novel deep learning network that fuses complementary information from spatial and wavelet domains to synthesize 7T T1-weighted images from their 3T counterparts. Our deep learning network leverages wavelet transformation to facilitate effective multi-scale reconstruction, taking into account both low-frequency tissue contrast and high-frequency anatomical details. Our network utilizes a novel wavelet-based affine transformation (WAT) layer, which modulates feature maps from the spatial domain with information from the wavelet domain. Extensive experimental results demonstrate the capability of the proposed method in synthesizing high-quality 7T images with better tissue contrast and greater details, outperforming state-of-the-art methods.
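The wavelet side of the method can be illustrated with a one-level 2-D Haar decomposition, which separates an image into a low-frequency band (coarse tissue contrast) and high-frequency detail bands. The averaging normalization below is a simplification chosen for readability, not the paper's WAT layer:

```python
# One-level 2-D Haar transform over non-overlapping 2x2 blocks, producing
# a low-frequency band (LL) and three detail bands (LH, HL, HH). Averaging
# normalization (divide by 4) is used for simplicity.

def haar2d(img):
    """One-level 2-D Haar transform of an even-sized grayscale image."""
    ll, lh, hl, hh = [], [], [], []
    for r in range(0, len(img), 2):
        ll_row, lh_row, hl_row, hh_row = [], [], [], []
        for c in range(0, len(img[0]), 2):
            a, b = img[r][c], img[r][c + 1]
            d, e = img[r + 1][c], img[r + 1][c + 1]
            ll_row.append((a + b + d + e) / 4)   # local average
            lh_row.append((a - b + d - e) / 4)   # horizontal detail
            hl_row.append((a + b - d - e) / 4)   # vertical detail
            hh_row.append((a - b - d + e) / 4)   # diagonal detail
        ll.append(ll_row); lh.append(lh_row); hl.append(hl_row); hh.append(hh_row)
    return ll, lh, hl, hh

img = [[1.0, 1.0, 2.0, 4.0],
       [1.0, 1.0, 6.0, 8.0]]
LL, LH, HL, HH = haar2d(img)
```

A flat region yields zero detail coefficients, while edges concentrate in the LH/HL/HH bands, which is what lets the network treat contrast and anatomical detail separately.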


Subjects
Deep Learning, Magnetic Resonance Imaging, Humans
17.
IEEE Trans Med Imaging; 39(6): 2151-2162, 2020 06.
Article in English | MEDLINE | ID: mdl-31940526

ABSTRACT

Sufficient data with complete annotations is essential for training deep models to perform automatic and accurate segmentation of CT male pelvic organs, especially when the data present great challenges such as low contrast and large shape variation. However, manual annotation is expensive in terms of both cost and human effort, which usually results in insufficient completely annotated data in real applications. To this end, we propose a novel deep framework to segment male pelvic organs in CT images with incomplete annotations delineated in a very user-friendly manner. Specifically, we design a hybrid loss network derived from both voxel classification and boundary regression to jointly improve organ segmentation performance in an iterative way. Moreover, we introduce a label completion strategy to complete the labels of the rich unannotated voxels, which are then embedded into the training data to enhance the model's capability. To reduce computational complexity and improve segmentation performance, we locate the pelvic region based on salient bone structures to focus on the candidate organs. Experimental results on a large planning CT pelvic organ dataset show that our proposed method with incomplete annotations achieves segmentation performance comparable to state-of-the-art methods trained with complete annotations. Moreover, our method requires much less manual contouring effort from medical professionals, so an institution-specific model can be more easily established.


Subjects
Image Processing, Computer-Assisted, Tomography, X-Ray Computed, Humans, Male, Pelvis/diagnostic imaging
18.
IEEE Trans Biomed Eng; 67(10): 2710-2720, 2020 10.
Article in English | MEDLINE | ID: mdl-31995472

ABSTRACT

Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rectum) in CT images is critical for radiotherapy of prostate cancer. Currently, the leading automatic segmentation algorithms are based on Fully Convolutional Networks (FCNs), which achieve remarkable performance but usually need large-scale datasets with high-quality voxel-wise annotations for full supervision of the training. Unfortunately, such annotations are difficult to acquire, which becomes a bottleneck for building accurate segmentation models in real clinical applications. In this paper, we propose a novel weakly supervised segmentation approach that needs only 3D bounding box annotations covering the organs of interest to start the training. Obviously, a bounding box includes many non-organ voxels that carry noisy labels and can mislead the segmentation model. To this end, we propose a label denoising module and embed it into the iterative training scheme of the label denoising network (LDnet) for segmentation. The labels of the training voxels are predicted by the tentative LDnet, while the label denoising module identifies the voxels with unreliable labels. As only the good training voxels are preserved, the iteratively re-trained LDnet can refine its segmentation capability gradually. Our results are remarkable, reaching ∼94% (prostate), ∼91% (bladder), and ∼86% (rectum) of the Dice similarity coefficients (DSCs) obtained by fully supervised learning on high-quality voxel-wise annotations, and also superior to several state-of-the-art approaches. To the best of our knowledge, this is the first work to achieve voxel-wise segmentation in CT images from simple 3D bounding box annotations, which can greatly reduce labeling effort and meet the demands of practical clinical applications.


Subjects
Prostatic Neoplasms, Tomography, X-Ray Computed, Algorithms, Humans, Male, Prostatic Neoplasms/diagnostic imaging, Urinary Bladder
19.
Magn Reson Imaging; 64: 90-100, 2019 12.
Article in English | MEDLINE | ID: mdl-31175927

ABSTRACT

We propose a novel dual-domain convolutional neural network framework to improve the structural information of routine 3T images. We introduce a parameter-efficient butterfly network that involves two complementary domains: a spatial domain and a frequency domain. The butterfly network allows the interaction of these two domains in learning the complex mapping from 3T to 7T images. We verified the efficacy of the dual-domain strategy and the butterfly network using 3T and 7T image pairs. Experimental results demonstrate that the proposed framework generates synthetic 7T-like images and achieves performance superior to state-of-the-art methods.
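The frequency-domain branch of such a dual-domain network starts from a Fourier view of the image; a pure-Python sketch on a toy 1-D line profile (illustrative only, not the paper's architecture):

```python
# Derive a frequency-domain view of a signal with a plain DFT and pair it
# with the spatial view, so a model can learn from both domains. The toy
# 1-D "line profile" stands in for an MR image row.
import cmath

def dft(signal):
    """Discrete Fourier transform of a real 1-D signal."""
    n = len(signal)
    return [
        sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

spatial = [1.0, 0.0, 1.0, 0.0]       # toy line profile
frequency = [abs(v) for v in dft(spatial)]
dual_view = (spatial, frequency)     # both domains would feed the network
```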


Subjects
Brain/diagnostic imaging, Epilepsy/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Neural Networks, Computer, Algorithms, Humans
20.
Nat Methods; 16(6): 545-552, 2019 06.
Article in English | MEDLINE | ID: mdl-31086342

ABSTRACT

Non-invasive deep-tissue three-dimensional optical imaging of live mammals with high spatiotemporal resolution is challenging owing to light scattering. We developed near-infrared II (1,000-1,700 nm) light-sheet microscopy with excitation and emission of up to approximately 1,320 nm and 1,700 nm, respectively, for optical sectioning at a penetration depth of approximately 750 µm through live tissues without invasive surgery and at a depth of approximately 2 mm in glycerol-cleared brain tissues. Near-infrared II light-sheet microscopy in normal and oblique configurations enabled in vivo imaging of live mice through intact tissue, revealing abnormal blood flow and T-cell motion in tumor microcirculation and mapping out programmed-death ligand 1 and programmed cell death protein 1 in tumors with cellular resolution. Three-dimensional imaging through the intact mouse head resolved vascular channels between the skull and brain cortex, and allowed monitoring of recruitment of macrophages and microglia to the traumatic brain injury site.


Subjects
Brain Injuries, Traumatic/diagnostic imaging, Brain/diagnostic imaging, Colorectal Neoplasms/diagnostic imaging, Microscopy, Fluorescence/methods, Optical Imaging/methods, Spectroscopy, Near-Infrared/methods, Animals, Brain/blood supply, Brain Injuries, Traumatic/pathology, Colorectal Neoplasms/blood supply, Colorectal Neoplasms/pathology, Female, Fluorescent Dyes, Humans, Imaging, Three-Dimensional, Infrared Rays, Mice, Mice, Inbred BALB C, Mice, Inbred C57BL, Tumor Cells, Cultured, Xenograft Model Antitumor Assays