Results 1 - 20 of 32
1.
IEEE Trans Image Process ; 33: 1199-1210, 2024.
Article in English | MEDLINE | ID: mdl-38315584

ABSTRACT

Many deep-learning-based methods have been proposed for brain tumor segmentation. Most studies focus on the internal structure of deep networks to improve segmentation accuracy, while valuable external information, such as normal brain appearance, is often ignored. Inspired by the fact that radiologists often screen lesion regions against a mental reference of normal appearance, in this paper we propose a novel deep framework for brain tumor segmentation, in which normal brain images are adopted as a reference and compared with tumor brain images in a learned feature space. In this way, features at tumor regions, i.e., tumor-related features, can be highlighted and enhanced for accurate tumor segmentation. Routine tumor brain images are multimodal, while normal brain images are often monomodal, which makes the feature comparison a major issue, i.e., multimodal vs. monomodal. To this end, we present a new feature alignment module (FAM) that makes the feature distribution of monomodal normal brain images consistent/inconsistent with that of multimodal tumor brain images at normal/tumor regions, making the feature comparison effective. Both a public (BraTS2022) and an in-house tumor brain image dataset are used to evaluate our framework. Experimental results demonstrate that, for both datasets, our framework effectively improves segmentation accuracy and outperforms state-of-the-art segmentation methods. Code is available at https://github.com/hb-liu/Normal-Brain-Boost-Tumor-Segmentation.
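
A minimal sketch of the reference-comparison idea described above, assuming a toy 1x1-convolution in place of the paper's feature alignment module (FAM) and a simple difference-based gate; all names and shapes are illustrative, not the released implementation:

```python
import torch
import torch.nn as nn

class ReferenceComparison(nn.Module):
    """Hypothetical sketch: highlight tumor-related features by comparing
    multimodal tumor features against aligned monomodal normal-brain features."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv stands in for the paper's feature alignment module (FAM)
        self.align = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, tumor_feat: torch.Tensor, normal_feat: torch.Tensor):
        aligned = self.align(normal_feat)       # map normal features into the tumor feature space
        diff = torch.abs(tumor_feat - aligned)  # large where appearance deviates from normal
        gate = torch.sigmoid(diff)              # soft attention over suspected tumor regions
        return tumor_feat * (1.0 + gate)        # enhance tumor-related features

feats = torch.randn(1, 32, 8, 32, 32)           # (B, C, D, H, W) toy features
ref = torch.randn(1, 32, 8, 32, 32)
print(ReferenceComparison(32)(feats, ref).shape)  # torch.Size([1, 32, 8, 32, 32])
```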


Subjects
Brain Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Image Processing, Computer-Assisted
2.
Se Pu ; 41(9): 760-770, 2023 Sep.
Article in Chinese | MEDLINE | ID: mdl-37712540

ABSTRACT

Mycotoxins are secondary metabolites produced by toxigenic fungi under specific environmental conditions. Fruits, owing to their high moisture content, rich nutrition, and improper harvest or storage conditions, are highly susceptible to various mycotoxins, such as ochratoxin A (OTA), zearalenone (ZEN), patulin (PAT), and Alternaria toxins. These mycotoxins can cause acute and chronic toxic effects (teratogenicity, mutagenicity, carcinogenicity, etc.) in animals and humans. Given the high toxicity and wide prevalence of mycotoxins, establishing an efficient analytical method to detect multiple mycotoxins simultaneously in different types of fruits is of great importance. Conventional mycotoxin detection methods rely on high performance liquid chromatography (HPLC) coupled with mass spectrometry (MS). However, fruit sample matrices contain large amounts of pigments, cellulose, and minerals, all of which dramatically impede the detection of trace mycotoxins in fruits. Therefore, the efficient enrichment and purification of multiple mycotoxins in fruit samples is crucial before instrumental analysis. In this study, a reliable method based on a QuEChERS sample preparation approach coupled with ultra performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) was established to determine 36 mycotoxins in fruits. In the optimal extraction method, 2.0 g of sample was extracted with 10 mL of acetic acid-acetonitrile-water (1:79:20, v/v/v) in a 50 mL centrifuge tube, vortexed for 30 s, and ultrasonicated for 40 min. The mixture was then salted out with 2.0 g of anhydrous MgSO4 and 0.5 g of NaCl and centrifuged for 5 min. Next, 6 mL of the supernatant was purified using 85 mg of octadecylsilane-bonded silica gel (C18) and 15 mg of N-propylethylenediamine (PSA). After vigorous shaking and centrifugation, the supernatant was collected and dried with nitrogen at 40 °C. Finally, the residues were redissolved in 1 mL of 5 mmol/L ammonium acetate aqueous solution-acetonitrile (50:50, v/v) and passed through a 0.22 µm nylon filter before analysis. The mycotoxins were separated on a Waters XBridge BEH C18 column using a binary gradient mixture of ammonium acetate aqueous solution and methanol. The injection volume was 3 µL. The mycotoxins were analyzed in multiple reaction monitoring (MRM) mode under both positive and negative electrospray ionization. Quantitative analysis was performed using an external standard method with matrix-matched calibration curves. Under optimal conditions, good linear relationships were obtained in the respective linear ranges, with correlation coefficients (R2) no less than 0.990. The limits of detection (LODs) and quantification (LOQs) were 0.02-5 and 0.1-10 µg/kg, respectively. The recoveries of the 36 mycotoxins in fruits ranged from 77.0% to 118.9% at low, medium, and high spiked levels, with intra- and inter-day precisions in the range of 1.3%-14.9% and 0.2%-17.3%, respectively. The validated approach was employed to investigate mycotoxin contamination in actual fruit samples, including strawberry, grape, pear, and peach (15 samples of each type). Eleven mycotoxins, namely altenuene (ALT), altenusin (ALS), alternariol-methyl ether (AME), tenuazonic acid (TeA), tentoxin (Ten), OTA, beauvericin (BEA), PAT, zearalanone (ZAN), T-2 toxin (T2), and mycophenolic acid (MPA), were found in the samples; three samples were contaminated with multiple mycotoxins. The incidence rates of mycotoxins in strawberry, grape, pear, and peach were 27%, 40%, 40%, and 33%, respectively. In particular, Alternaria toxins were the most frequently detected mycotoxins in these fruits, with an incidence of 15%. The proposed method is simple, rapid, accurate, sensitive, reproducible, and stable; thus, it is suitable for the simultaneous detection of the 36 mycotoxins in different fruits.
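
The external-standard quantification step lends itself to a short illustration. Below is a hedged sketch of a matrix-matched calibration curve with entirely made-up peak areas and concentrations; it only shows how the R2 >= 0.990 acceptance check and back-calculation work in principle:

```python
import numpy as np

# Hypothetical matrix-matched calibration: spiked concentrations (µg/kg)
# and instrument peak areas measured in a blank fruit extract.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
area = np.array([210.0, 1050.0, 2080.0, 10400.0, 20950.0, 104200.0])

slope, intercept = np.polyfit(conc, area, 1)   # linear calibration curve
pred = slope * conc + intercept
r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
print(f"R2 = {r2:.4f}")                         # should be >= 0.990 to pass

# Quantify an unknown sample from its peak area via the external standard method
unknown_area = 3150.0
print(f"estimated concentration: {(unknown_area - intercept) / slope:.2f} µg/kg")
```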


Subjects
Fruit , Patulin , Animals , Humans , Chromatography, Liquid , Tandem Mass Spectrometry , Acetonitriles
3.
World J Gastrointest Oncol ; 15(7): 1262-1270, 2023 Jul 15.
Article in English | MEDLINE | ID: mdl-37546558

ABSTRACT

BACKGROUND: Although the current conventional treatment strategies for esophageal carcinoma (EC) have been proven effective, they are often accompanied by serious adverse events. Therefore, it is still necessary to explore new therapeutic strategies for EC to improve the clinical outcome of patients. AIM: To elucidate the clinical efficacy of concurrent chemoradiotherapy (CCRT) with thalidomide (THAL) and S-1 (tegafur, gimeracil, and oteracil potassium capsules) in the treatment of EC, as well as its influence on serum tumor markers (STMs). METHODS: First, 62 patients with EC treated at the Zibo 148 Hospital between November 2019 and November 2022 were selected and grouped according to the received treatment. Among these, 30 patients undergoing CCRT with cisplatin and 5-fluorouracil were assigned to the control group (Con), and 32 patients receiving CCRT with THAL and S-1 were assigned to the research group (Res). Second, inter-group comparisons were carried out with respect to curative efficacy, incidence of drug toxicities, STMs [carbohydrate antigen 125 (CA125) and macrophage inflammatory protein-3α (MIP-3α)], angiogenesis-related indicators [vascular endothelial growth factor (VEGF), VEGF receptor-1 (VEGFR-1), basic fibroblast growth factor (bFGF), and angiopoietin-2 (Ang-2)], and quality of life (QoL) [QoL core 30 (QLQ-C30)] after one month of treatment. RESULTS: The analysis showed no statistical difference in the overall response rate and disease control rate between the two patient cohorts; however, the incidences of grade I-II myelosuppression and gastrointestinal reactions were significantly lower in the Res than in the Con. In addition, the post-treatment CA125, MIP-3α, VEGF, VEGFR-1, bFGF, and Ang-2 levels in the Res were markedly lower than the pre-treatment levels and the corresponding post-treatment levels in the Con. Furthermore, more evident improvements in QLQ-C30 scores in the dimensions of physical, role, emotional, and social function were determined in the Res. CONCLUSION: The above results demonstrate the effectiveness of THAL + S-1 CCRT for EC, which contributes to mild side effects and a significant reduction of CA125, MIP-3α, VEGF, VEGFR-1, bFGF, and Ang-2 levels, thus inhibiting malignant tumor progression and enhancing patients' QoL.

4.
Med Image Anal ; 89: 102902, 2023 10.
Article in English | MEDLINE | ID: mdl-37482033

ABSTRACT

Radiotherapy is a mainstay of cancer treatment in the clinic. An excellent radiotherapy treatment plan is always based on a high-quality dose distribution map, which is traditionally produced by repeated manual trial and error by experienced experts. To accelerate the radiotherapy planning process, many automatic dose distribution prediction methods have been proposed recently and have achieved considerable success. Nevertheless, these methods require certain auxiliary inputs besides CT images, such as segmentation masks of the tumor and organs at risk (OARs), which limits their prediction efficiency and application potential. To address this issue, we design a novel approach, named TransDose, for dose distribution prediction that uses CT images as the sole input. Specifically, instead of inputting segmentation masks to provide prior anatomical information, we utilize a super-pixel-based graph convolutional network (GCN) to extract category-specific features, thereby compensating the network with the necessary anatomical knowledge. Besides, considering the strong continuous dependency between adjacent CT slices as well as adjacent dose maps, we embed a Transformer into the backbone and make use of its superior long-range sequence modeling ability to endow input features with inter-slice continuity information. To our knowledge, this is the first network specially designed for the task of dose prediction from only CT images without ignoring the necessary anatomical structure. Finally, we evaluate our model on two real datasets, and extensive experiments demonstrate the generalizability and advantages of our method.
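
A hedged sketch of the inter-slice modeling idea only: per-slice CNN features treated as a sequence for a Transformer encoder. The encoder, head, and all sizes below are stand-in assumptions; the actual TransDose backbone and its super-pixel GCN are not reproduced here:

```python
import torch
import torch.nn as nn

class SliceContextModel(nn.Module):
    """Hypothetical sketch: per-slice CNN features form a sequence so a
    Transformer can model continuity across adjacent CT slices."""
    def __init__(self, feat_dim: int = 256, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        self.slice_encoder = nn.Sequential(        # toy per-slice CNN encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, feat_dim),
        )
        layer = nn.TransformerEncoderLayer(feat_dim, n_heads, batch_first=True)
        self.inter_slice = nn.TransformerEncoder(layer, n_layers)
        self.dose_head = nn.Linear(feat_dim, 1)    # one dose statistic per slice (toy output)

    def forward(self, ct_slices: torch.Tensor):    # (B, S, 1, H, W)
        b, s = ct_slices.shape[:2]
        feats = self.slice_encoder(ct_slices.flatten(0, 1)).view(b, s, -1)
        feats = self.inter_slice(feats)            # inject inter-slice continuity
        return self.dose_head(feats)               # (B, S, 1)

print(SliceContextModel()(torch.randn(2, 12, 1, 64, 64)).shape)  # torch.Size([2, 12, 1])
```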


Subjects
Neoplasms , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Radiotherapy Planning, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
5.
IEEE Trans Med Imaging ; 42(10): 2974-2987, 2023 10.
Article in English | MEDLINE | ID: mdl-37141060

ABSTRACT

Positron Emission Tomography (PET) is an important nuclear medical imaging technique that has been widely used in clinical applications, e.g., tumor detection and brain disease diagnosis. As PET imaging puts patients at risk from radiation, the acquisition of high-quality PET images with standard-dose tracers must be handled cautiously. However, if the dose is reduced during PET acquisition, the imaging quality can become worse and thus may not meet clinical requirements. To safely reduce the tracer dose while maintaining high-quality PET imaging, we propose a novel and effective approach to estimate high-quality Standard-dose PET (SPET) images from Low-dose PET (LPET) images. Specifically, to fully utilize both the rare paired and the abundant unpaired LPET and SPET images, we propose a semi-supervised framework for network training. Meanwhile, based on this framework, we further design a Region-adaptive Normalization (RN) and a structural consistency constraint to tackle the task-specific challenges. RN performs region-specific normalization in different regions of each PET image to suppress the negative impact of large intensity variation across regions, while the structural consistency constraint maintains structural details during the generation of SPET images from LPET images. Experiments on real human chest-abdomen PET images demonstrate that our proposed approach achieves state-of-the-art performance both quantitatively and qualitatively.
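
A minimal sketch of region-wise normalization in the spirit of RN, assuming region labels are given; the actual module is learned inside the network, so this shows only the core idea:

```python
import torch

def region_adaptive_norm(x: torch.Tensor, regions: torch.Tensor, eps: float = 1e-5):
    """Hypothetical sketch: standardize intensities separately inside each
    labeled region to damp large cross-region intensity variation.
    x: (B, 1, D, H, W) image; regions: (B, 1, D, H, W) integer region labels."""
    out = x.clone()
    for r in regions.unique():
        mask = regions == r
        vals = x[mask]
        out[mask] = (vals - vals.mean()) / (vals.std() + eps)
    return out

pet = torch.rand(1, 1, 4, 16, 16)
labels = (torch.rand(1, 1, 4, 16, 16) > 0.5).long()  # two toy regions
print(region_adaptive_norm(pet, labels).shape)
```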


Subjects
Positron-Emission Tomography , Radiopharmaceuticals , Humans , Positron-Emission Tomography/methods , Radiation Dosage , Image Processing, Computer-Assisted/methods
6.
Med Image Anal ; 82: 102626, 2022 11.
Article in English | MEDLINE | ID: mdl-36208573

ABSTRACT

Semantic instance segmentation is crucial for many medical image analysis applications, including computational pathology and automated radiation therapy. Existing methods for this task can be roughly classified into two categories: (1) proposal-based methods and (2) proposal-free methods. However, in medical images, irregular shape variations and crowded instances (e.g., nuclei and cells) make it hard for proposal-based methods to achieve robust instance localization. On the other hand, ambiguous boundaries caused by the low-contrast nature of medical images (e.g., CT images) challenge the accuracy of proposal-free methods. To tackle these issues, we propose a proposal-free segmentation network with discriminative deep supervision (DDS), which at the same time allows us to gain the power of proposal-based methods. The DDS module is interleaved with a carefully designed proposal-free segmentation backbone in our network. Consequently, the features learned by the backbone network become more sensitive to instance localization. Also, with the proposed DDS module, robust pixel-wise instance-level cues (especially structural information) are introduced for semantic segmentation. Extensive experiments on three datasets, i.e., a nuclei dataset, a pelvic CT image dataset, and a synthetic dataset, demonstrate the superior performance of the proposed algorithm compared to previous works.
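
Deep supervision in general attaches auxiliary losses to intermediate feature levels. The sketch below shows only that generic mechanism, under assumed shapes and weights; the discriminative instance-level cues of DDS are not modeled:

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(side_outputs, target, weights=None):
    """Generic deep supervision sketch: auxiliary segmentation losses on
    intermediate logits, upsampled to the label resolution.
    side_outputs: list of (B, C, h, w) logits from different depths."""
    weights = weights or [1.0] * len(side_outputs)
    loss = 0.0
    for w, logits in zip(weights, side_outputs):
        logits = F.interpolate(logits, size=target.shape[-2:], mode="bilinear",
                               align_corners=False)
        loss = loss + w * F.cross_entropy(logits, target)
    return loss

target = torch.randint(0, 3, (2, 64, 64))              # toy 3-class label map
sides = [torch.randn(2, 3, s, s) for s in (16, 32, 64)]
print(deeply_supervised_loss(sides, target))
```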


Subjects
Algorithms , Semantics , Humans , Pelvis
7.
Comput Biol Med ; 138: 104917, 2021 11.
Article in English | MEDLINE | ID: mdl-34688037

ABSTRACT

PURPOSE: To create synthetic CTs and digitally reconstructed radiographs (DRRs) from MR images that allow for fiducial visualization and accurate dose calculation for MR-only radiosurgery. METHODS: We developed a machine learning model to create synthetic CTs from pelvic MRs for prostate treatments. This model has previously been shown to generate synthetic CTs with accuracy on par with or better than alternative methods, such as atlas-based registration. Our dataset consisted of 11 paired CT and conventional MR (T2) images used for previous CyberKnife (Accuray, Inc.) radiotherapy treatments. The MR images were pre-processed to mimic the appearance of fiducial-enhancing images. Two models were trained for each parameter case, using a subset of the available image pairs, with the remaining images set aside for testing and validation of the model, to identify the optimal patch size and number of image pairs used for training. Four models were then trained using the identified parameters and used to generate synthetic CTs, which in turn were used to generate DRRs at angles of 45° and 315°, as would be used for a CyberKnife treatment. The synthetic CTs and DRRs were compared against the ground-truth images visually and using the mean squared error (MSE) and peak signal-to-noise ratio (PSNR) to evaluate their similarity. RESULTS: The synthetic CTs, as well as the DRRs generated from them, visualized the fiducial markers in the prostate similarly to their true counterparts. No significant difference was found in fiducial localization for the CTs and DRRs. Across the 8 DRRs analyzed, the mean MSE between the normalized true and synthetic DRRs was 0.66 ± 0.42% and the mean PSNR for this region was 22.9 ± 3.7 dB. For the full CTs, the mean MAE was 72.9 ± 88.1 HU and the mean PSNR was 31.2 ± 2.2 dB. CONCLUSIONS: Our machine learning-based method provides a proof of concept of a way to generate synthetic CTs and DRRs for accurate dose calculation and fiducial localization for use in radiation treatment of the prostate.
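
The similarity metrics used here are standard. A small sketch, on synthetic toy images, of how MSE and PSNR would be computed for normalized DRRs:

```python
import numpy as np

def mse_and_psnr(truth: np.ndarray, synth: np.ndarray, data_range: float = 1.0):
    """MSE and PSNR between a ground-truth and a synthetic image
    (images assumed normalized to [0, data_range])."""
    mse = np.mean((truth - synth) ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    return mse, psnr

rng = np.random.default_rng(0)
truth = rng.random((256, 256))
synth = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1)  # toy "synthetic" DRR
mse, psnr = mse_and_psnr(truth, synth)
print(f"MSE = {100 * mse:.2f}%  PSNR = {psnr:.1f} dB")
```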


Subjects
Radiosurgery , Robotic Surgical Procedures , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Pelvis/diagnostic imaging , Radiotherapy Planning, Computer-Assisted , Tomography, X-Ray Computed
8.
IEEE Trans Med Imaging ; 40(8): 2118-2128, 2021 08.
Article in English | MEDLINE | ID: mdl-33848243

ABSTRACT

Accurate segmentation of the prostate is a key step in external beam radiation therapy. In this paper, we tackle the challenging task of prostate segmentation in CT images with a two-stage network: 1) a first stage to quickly localize, and 2) a second stage to accurately segment, the prostate. To precisely segment the prostate in the second stage, we formulate prostate segmentation as a multi-task learning framework, which includes a main task to segment the prostate and an auxiliary task to delineate the prostate boundary. Here, the second task provides additional guidance for the unclear prostate boundary in CT images. Besides, conventional multi-task deep networks typically share most of the parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability, as the specificity of different tasks is inevitably ignored. By contrast, we solve this with a hierarchically fused U-Net structure, namely HF-UNet. HF-UNet has two complementary branches for the two tasks, with a novel attention-based task consistency learning block for communication at each level between the two decoding branches. Therefore, HF-UNet is endowed with the ability to hierarchically learn shared representations for different tasks while simultaneously preserving the specificity of the representations learned for each task. We performed extensive evaluations of the proposed method on a large planning CT image dataset and a benchmark prostate zonal dataset. The experimental results show that HF-UNet outperforms conventional multi-task network architectures and state-of-the-art methods.
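
A hedged sketch of one way two task branches could communicate at a decoder level via a jointly computed attention gate; the block name and gating form are assumptions, not the paper's exact attention-based task consistency learning block:

```python
import torch
import torch.nn as nn

class TaskConsistencyBlock(nn.Module):
    """Hypothetical sketch: each task branch keeps its own features but is
    modulated by a gate computed from both branches, so segmentation and
    boundary delineation exchange information at every decoder level."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate_seg = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
        self.gate_bnd = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())

    def forward(self, f_seg: torch.Tensor, f_bnd: torch.Tensor):
        joint = torch.cat([f_seg, f_bnd], dim=1)   # jointly computed attention source
        return f_seg * self.gate_seg(joint), f_bnd * self.gate_bnd(joint)

f_seg, f_bnd = torch.randn(1, 32, 48, 48), torch.randn(1, 32, 48, 48)
out_seg, out_bnd = TaskConsistencyBlock(32)(f_seg, f_bnd)
print(out_seg.shape, out_bnd.shape)
```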


Subjects
Prostate , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted , Male , Prostate/diagnostic imaging
9.
IEEE Trans Cybern ; 51(4): 2153-2165, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31869812

ABSTRACT

Automatic pancreas segmentation is crucial to the diagnostic assessment of diabetes and pancreatic cancer. However, the relatively small size of the pancreas in the upper body, as well as large variations in its location and shape in the retroperitoneum, make the segmentation task challenging. To alleviate these challenges, in this article we propose a cascaded multitask 3-D fully convolutional network (FCN) to automatically segment the pancreas. Our cascaded network is composed of two parts. The first part focuses on quickly locating the region of the pancreas, and the second part uses a multitask FCN with dense connections to refine the segmentation map for fine voxel-wise segmentation. In particular, our multitask FCN with dense connections simultaneously completes the tasks of voxel-wise segmentation and skeleton extraction from the pancreas. These two tasks are complementary; that is, the extracted skeleton provides rich information about the shape and size of the pancreas in the retroperitoneum, which can boost the segmentation of the pancreas. The multitask FCN is also designed to share low- and mid-level features across the tasks. A feature consistency module is further introduced to enhance the connection and fusion of different levels of feature maps. Evaluations on two pancreas datasets demonstrate the robustness of our proposed method in correctly segmenting the pancreas in various settings. Our experimental results outperform both baseline and state-of-the-art methods. Moreover, the ablation study shows that our proposed parts/modules are critical for effective multitask learning.
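
A minimal sketch of the joint objective implied above: a voxel-wise segmentation loss plus an auxiliary skeleton-extraction loss with an assumed weighting alpha; the dense connections and feature consistency module are not modeled:

```python
import torch
import torch.nn.functional as F

def multitask_loss(seg_logits, skel_logits, seg_target, skel_target, alpha=0.5):
    """Hypothetical sketch of the joint objective: the skeleton task supplies
    shape/size cues that regularize the segmentation task."""
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    skel_loss = F.binary_cross_entropy_with_logits(skel_logits, skel_target)
    return seg_loss + alpha * skel_loss

shape = (2, 1, 8, 32, 32)                          # toy 3D batch
seg_logits, skel_logits = torch.randn(shape), torch.randn(shape)
seg_t = (torch.rand(shape) > 0.5).float()
skel_t = (torch.rand(shape) > 0.9).float()         # skeleton is sparse
print(multitask_loss(seg_logits, skel_logits, seg_t, skel_t))
```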


Subjects
Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Pancreas/diagnostic imaging , Humans , Pancreatic Neoplasms/diagnostic imaging
10.
IEEE Trans Med Imaging ; 39(9): 2794-2805, 2020 09.
Article in English | MEDLINE | ID: mdl-32091997

ABSTRACT

Accurate segmentation of organs at risk (OARs) from head and neck (H&N) CT images is crucial for effective H&N cancer radiotherapy. However, existing deep learning methods are often not trained in an end-to-end fashion, i.e., they independently predetermine the regions of target organs before organ segmentation, causing limited information sharing between related tasks and thus leading to suboptimal segmentation results. Furthermore, when a conventional segmentation network is used to segment all the OARs simultaneously, the results often favor large OARs over small ones. Thus, existing methods often train a specific model for each OAR, ignoring the correlation between different segmentation tasks. To address these issues, we propose a new multi-view spatial aggregation framework for joint localization and segmentation of multiple OARs using H&N CT images. The core of our framework is a region-of-interest (ROI)-based fine-grained representation convolutional neural network (CNN), which is used to generate multi-OAR probability maps from each 2D view (i.e., axial, coronal, and sagittal) of the CT images. Specifically, our ROI-based fine-grained representation CNN (1) unifies the OAR localization and segmentation tasks and trains them in an end-to-end fashion, and (2) improves the segmentation results of variously sized OARs via a novel ROI-based fine-grained representation. Our multi-view spatial aggregation framework then spatially aggregates and assembles the generated multi-view multi-OAR probability maps to segment all the OARs simultaneously. We evaluate our framework using two sets of H&N CT images and achieve competitive and highly robust segmentation performance for OARs of various sizes.
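
A hedged sketch of the final aggregation step, assuming the three per-view probability volumes have already been resampled onto a common 3D grid; the fusion weights are illustrative:

```python
import numpy as np

def aggregate_views(p_axial, p_coronal, p_sagittal, weights=(1.0, 1.0, 1.0)):
    """Hypothetical sketch of multi-view spatial aggregation: per-view
    multi-organ probability volumes (C, D, H, W) are averaged and arg-maxed
    into one joint label volume."""
    stack = np.stack([w * p for w, p in
                      zip(weights, (p_axial, p_coronal, p_sagittal))])
    fused = stack.sum(axis=0) / sum(weights)
    return fused.argmax(axis=0)                     # (D, H, W) label volume

rng = np.random.default_rng(1)
views = [rng.random((5, 16, 32, 32)) for _ in range(3)]  # 5 OAR classes
print(aggregate_views(*views).shape)                # (16, 32, 32)
```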


Subjects
Head and Neck Neoplasms , Organs at Risk , Head and Neck Neoplasms/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Tomography, X-Ray Computed
11.
Article in English | MEDLINE | ID: mdl-31226074

ABSTRACT

Automatic image segmentation is an essential step for many medical image analysis applications, including computer-aided radiation therapy, disease diagnosis, and treatment effect evaluation. One of the major challenges for this task is the blurry nature of medical images (e.g., CT, MR, and microscopic images), which can often result in low contrast and vanishing boundaries. With the recent advances in convolutional neural networks, vast improvements have been made in image segmentation, mainly based on skip-connection-linked encoder-decoder deep architectures. However, in many applications (with adjacent targets in blurry images), these models often fail to accurately locate complex boundaries and to properly segment tiny isolated parts. In this paper, we aim to provide a method for blurry medical image segmentation and argue that skip connections are not enough to help accurately locate indistinct boundaries. Accordingly, we propose a novel high-resolution multi-scale encoder-decoder network (HMEDN), in which multi-scale dense connections are introduced into the encoder-decoder structure to finely exploit comprehensive semantic information. Besides skip connections, extra deeply supervised high-resolution pathways (comprised of densely connected dilated convolutions) are integrated to collect high-resolution semantic information for accurate boundary localization. These pathways are paired with a difficulty-guided cross-entropy loss function and a contour regression task to enhance the quality of boundary detection. Extensive experiments on a pelvic CT image dataset, a multi-modal brain tumor dataset, and a cell segmentation dataset show the effectiveness of our method for 2D/3D semantic segmentation and 2D instance segmentation, respectively. Our experimental results also show that, besides increasing the network complexity, raising the resolution of semantic feature maps can largely affect the overall model performance. For different tasks, finding a balance between these two factors can further improve the performance of the corresponding network.
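
A sketch of one plausible form of a difficulty-guided cross-entropy, weighting each pixel's loss by how poorly it is currently predicted; the paper's exact difficulty definition may differ:

```python
import torch
import torch.nn.functional as F

def difficulty_guided_ce(logits, target, gamma: float = 1.0):
    """Hypothetical sketch: per-pixel CE re-weighted by prediction difficulty,
    emphasizing indistinct boundaries and other hard pixels."""
    ce = F.cross_entropy(logits, target, reduction="none")   # (B, H, W)
    with torch.no_grad():
        p_true = F.softmax(logits, dim=1).gather(
            1, target.unsqueeze(1)).squeeze(1)               # prob of the true class
        difficulty = (1.0 - p_true) ** gamma                 # hard pixels -> larger weight
    return (difficulty * ce).mean()

logits = torch.randn(2, 4, 64, 64)
target = torch.randint(0, 4, (2, 64, 64))
print(difficulty_guided_ce(logits, target))
```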

12.
Med Image Anal ; 54: 168-178, 2019 05.
Article in English | MEDLINE | ID: mdl-30928830

ABSTRACT

Accurate segmentation of the prostate and organs at risk (e.g., bladder and rectum) in CT images is a crucial step for radiation therapy in the treatment of prostate cancer. However, it is a very challenging task due to unclear boundaries, large intra- and inter-patient shape variability, and the uncertain presence of bowel gases and fiducial markers. In this paper, we propose a novel automatic segmentation framework using fully convolutional networks with boundary-sensitive representation to address this challenging problem. Our segmentation framework contains three modules. First, an organ localization model is designed to focus on the candidate segmentation region of each organ for better performance. Then, a boundary-sensitive representation model based on multi-task learning is proposed to represent the semantic boundary information in a more robust and accurate manner. Finally, a multi-label cross-entropy loss function combining boundary-sensitive representation is introduced to train a fully convolutional network for organ segmentation. The proposed method is evaluated on a large and diverse planning CT dataset with 313 images from 313 prostate cancer patients. Experimental results show that our proposed method outperforms baseline fully convolutional networks, as well as other state-of-the-art methods, in CT male pelvic organ segmentation.
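
A small sketch of how a boundary-sensitive training target can be derived from an organ mask with morphological operations; the band width and the exact construction are assumptions:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_map(mask: np.ndarray, width: int = 2) -> np.ndarray:
    """Hypothetical sketch: the band of pixels near an organ surface,
    usable as an auxiliary multi-task boundary target."""
    dilated = binary_dilation(mask, iterations=width)
    eroded = binary_erosion(mask, iterations=width)
    return dilated & ~eroded                        # thin shell around the boundary

mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 20:44] = True                           # toy "organ"
print(boundary_map(mask).sum(), "boundary pixels")
```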


Subjects
Deep Learning , Prostatic Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Humans , Imaging, Three-Dimensional , Male , Organs at Risk/radiation effects , Rectum/radiation effects , Urinary Bladder/radiation effects
13.
Sci Rep ; 9(1): 1103, 2019 01 31.
Article in English | MEDLINE | ID: mdl-30705340

ABSTRACT

High-grade gliomas are the most aggressive malignant brain tumors. Accurate pre-operative prognosis for this cohort can lead to better treatment planning. Conventional survival prediction based on clinical information is subjective and can be inaccurate. Recent radiomics studies have shown better prognosis using carefully engineered image features from magnetic resonance images (MRI). However, feature engineering is usually time-consuming, laborious, and subjective. Most importantly, engineered features cannot effectively encode other predictive but implicit information provided by multi-modal neuroimages. We propose a two-stage learning-based method to predict the overall survival (OS) time of high-grade glioma patients. In the first stage, we adopt deep learning, a recently dominant technique of artificial intelligence, to automatically extract implicit and high-level features from multi-modal, multi-channel preoperative MRI such that the features are capable of predicting survival time. Specifically, we utilize not only contrast-enhanced T1 MRI, but also diffusion tensor imaging (DTI) and resting-state functional MRI (rs-fMRI), to compute multiple metric maps (including various diffusivity metric maps derived from DTI, as well as frequency-specific brain fluctuation amplitude maps and local functional connectivity anisotropy-related metric maps derived from rs-fMRI) from 68 high-grade glioma patients with different survival times. We propose a multi-channel architecture of 3D convolutional neural networks (CNNs) for deep learning on those metric maps, from which high-level predictive features are extracted for each individual patch of these maps. In the second stage, these deeply learned features, along with a few pivotal demographic and tumor-related features (such as age, tumor size, and histological type), are fed into a support vector machine (SVM) to generate the final prediction result (i.e., long or short overall survival time). The experimental results demonstrate that this multi-modal, multi-channel deep survival prediction framework achieves an accuracy of 90.66%, outperforming all competing methods. This study indicates the effectiveness of deep learning for prognosis in neuro-oncological applications, supporting better individualized treatment planning towards precision medicine.
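
The second stage maps features to a long/short survival label with an SVM. A runnable toy sketch with random stand-in features (the real pipeline uses the CNN-extracted features and real clinical variables):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-ins: deep features concatenated with demographic/tumor
# features, classified into long vs. short overall survival with an SVM.
rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(68, 128))   # toy CNN features, 68 patients
clinical = rng.normal(size=(68, 3))       # e.g., age, tumor size, histology code
X = np.hstack([deep_feats, clinical])
y = rng.integers(0, 2, size=68)           # 1 = long OS, 0 = short OS

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:50], y[:50])
print("toy hold-out accuracy:", clf.score(X[50:], y[50:]))
```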


Subjects
Algorithms , Brain Neoplasms , Databases, Factual , Deep Learning , Diffusion Tensor Imaging , Glioma , Adolescent , Adult , Aged , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/mortality , Disease-Free Survival , Female , Glioma/diagnostic imaging , Glioma/mortality , Humans , Male , Middle Aged , Neoplasm Grading , Survival Rate
14.
IEEE Trans Neural Netw Learn Syst ; 30(5): 1552-1564, 2019 05.
Article in English | MEDLINE | ID: mdl-30307879

ABSTRACT

Accurate segmentation of pelvic organs is important for prostate radiation therapy. Modern radiation therapy is starting to use magnetic resonance images (MRI) as an alternative to computed tomography images because of MRI's superior soft tissue contrast and freedom from radiation exposure. However, segmentation of pelvic organs from MRI is a challenging problem due to inconsistent organ appearance across patients and large intra-patient anatomical variations across treatment days. To address such challenges, we propose a novel deep network architecture, called "Spatially varying sTochastic Residual AdversarIal Network" (STRAINet), to delineate pelvic organs from MRI in an end-to-end fashion. Compared to traditional fully convolutional networks (FCN), the proposed architecture makes two main contributions: 1) inspired by the recent success of residual learning, we propose an evolutionary version of the residual unit, i.e., the stochastic residual unit, and apply it to the plain convolutional layers of the FCN; we further propose long-range stochastic residual connections to pass features from shallow layers to deep layers; and 2) we integrate three previously proposed network strategies to form a new network for better medical image segmentation: a) we apply dilated convolution in the smallest-resolution feature maps, so that we can gain a larger receptive field without overly losing spatial information; b) we propose a spatially varying convolutional layer that adapts convolutional filters to different regions of interest; and c) an adversarial network is proposed to further correct the segmented organ structures. Finally, STRAINet is used to iteratively refine the segmentation probability maps in an auto-context manner. Experimental results show that our STRAINet achieved state-of-the-art segmentation accuracy. Further analysis also indicates that our proposed network components contribute most to the performance.
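
A hedged sketch of a stochastic residual unit, interpreted here in the spirit of stochastic depth (randomly dropping the residual branch during training); the paper's exact formulation may differ:

```python
import torch
import torch.nn as nn

class StochasticResidualUnit(nn.Module):
    """Hypothetical sketch: during training the residual branch is randomly
    dropped; at test time it is scaled by its survival probability."""
    def __init__(self, channels: int, survival_prob: float = 0.8):
        super().__init__()
        self.p = survival_prob
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor):
        if self.training:
            keep = (torch.rand(()) < self.p).float()   # Bernoulli gate on the branch
            return x + keep * self.branch(x)
        return x + self.p * self.branch(x)             # expectation at test time

unit = StochasticResidualUnit(16).eval()
print(unit(torch.randn(1, 16, 32, 32)).shape)
```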


Subjects
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/trends , Magnetic Resonance Imaging/trends , Stochastic Processes
15.
IEEE Trans Med Imaging ; 38(2): 585-595, 2019 02.
Article in English | MEDLINE | ID: mdl-30176583

ABSTRACT

Accurate segmentation of pelvic organs (i.e., prostate, bladder, and rectum) from CT images is crucial for effective prostate cancer radiotherapy. However, it is a challenging task due to: 1) low soft tissue contrast in CT images and 2) large shape and appearance variations of pelvic organs. In this paper, we employ a two-stage deep learning-based method, with a novel distinctive curve-guided fully convolutional network (FCN), to solve the aforementioned challenges. Specifically, the first stage is for fast and robust organ detection in the raw CT images. It is designed as a coarse segmentation network to provide region proposals for the three pelvic organs. The second stage is for fine segmentation of each organ, based on the region proposal results. To better identify those indistinguishable pelvic organ boundaries, a novel morphological representation, namely the distinctive curve, is also introduced to help conduct precise segmentation. To implement this, in the second stage, a multi-task FCN is first utilized to learn the distinctive curve and the segmentation map separately, and the two tasks are then combined to produce an accurate segmentation map. The final segmentation results for all three pelvic organs are generated by a weighted max-voting strategy. We have conducted exhaustive experiments on a large and diverse pelvic CT data set to evaluate our proposed method. The experimental results demonstrate that our proposed method is accurate and robust for this challenging segmentation task, also outperforming the state-of-the-art segmentation methods.
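
A minimal sketch of a weighted max-voting fusion over candidate probability maps; the candidate set and weights are illustrative assumptions:

```python
import numpy as np

def weighted_max_voting(prob_maps, weights):
    """Hypothetical sketch: each candidate contributes a weighted probability
    map, and each voxel takes the label with the highest weighted score.
    prob_maps: (K, C, D, H, W) array of K candidate probability maps."""
    weighted = prob_maps * np.asarray(weights)[:, None, None, None, None]
    return weighted.max(axis=0).argmax(axis=0)   # fuse candidates, then pick class

rng = np.random.default_rng(2)
candidates = rng.random((3, 4, 8, 32, 32))       # 3 candidates, 4 classes (bg + 3 organs)
print(weighted_max_voting(candidates, [1.0, 0.8, 0.6]).shape)  # (8, 32, 32)
```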


Subjects
Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Pelvis/diagnostic imaging , Prostate/diagnostic imaging , Tomography, X-Ray Computed/methods , Algorithms , Humans , Male , Prostatic Neoplasms/diagnostic imaging
16.
Proc IEEE Int Symp Biomed Imaging ; 2018: 885-888, 2018 Apr.
Article in English | MEDLINE | ID: mdl-30344892

ABSTRACT

Accurate segmentation of pelvic organs from magnetic resonance (MR) images plays an important role in image-guided radiotherapy. However, it is a challenging task due to inconsistent organ appearances and large shape variations. The fully convolutional network (FCN) has recently achieved state-of-the-art performance in medical image segmentation, but it requires a large amount of labeled data for training, which is usually difficult to obtain in real situations. To address these challenges, we propose a deep learning-based semi-supervised learning framework. Specifically, we first train an initial multi-task residual fully convolutional network (FCN) on a limited number of labeled MRI data. Based on the initially trained FCN, unlabeled new data can be automatically segmented, and reasonable segmentations (after manual/automatic checking) can be included in the training data to fine-tune the network. This step can be repeated to progressively improve the training of our network, until no reasonable segmentations of new data can be included. Experimental results demonstrate the effectiveness of our proposed progressive semi-supervised learning approach, as well as its advantage in terms of accuracy.
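
A toy sketch of one self-training round of this progressive scheme, using a mean-confidence check as a stand-in for the paper's manual/automatic screening:

```python
import torch

def confidence(prob: torch.Tensor) -> float:
    """Mean max-class probability as a crude proxy for segmentation quality."""
    return prob.max(dim=1).values.mean().item()

def self_training_round(model, unlabeled, labeled, threshold=0.9):
    """Hypothetical sketch: segment unlabeled scans, keep only confident
    results as pseudo-labels, and grow the training set for fine-tuning."""
    model.eval()
    keep = []
    with torch.no_grad():
        for img in unlabeled:
            prob = torch.softmax(model(img), dim=1)
            if confidence(prob) >= threshold:
                keep.append((img, prob.argmax(dim=1)))   # pseudo-label
    return labeled + keep                                # fine-tune on this next

model = torch.nn.Conv2d(1, 2, 1)                         # stand-in "FCN"
unlabeled = [torch.randn(1, 1, 32, 32) for _ in range(4)]
print(len(self_training_round(model, unlabeled, labeled=[], threshold=0.5)))
```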

17.
Article in English | MEDLINE | ID: mdl-30106714

ABSTRACT

Accurate segmentation of pelvic organs (i.e., prostate, bladder, and rectum) from CT images is crucial for effective prostate cancer radiotherapy. However, it is a challenging task due to 1) low soft tissue contrast in CT images and 2) large shape and appearance variations of pelvic organs. In this paper, we employ a two-stage deep learning-based method, with a novel distinctive curve-guided fully convolutional network (FCN), to solve the aforementioned challenges. Specifically, the first stage is for fast and robust organ detection in the raw CT images. It is designed as a coarse segmentation network to provide region proposals for the three pelvic organs. The second stage is for fine segmentation of each organ, based on the region proposal results. To better identify those indistinguishable pelvic organ boundaries, a novel morphological representation, namely the distinctive curve, is also introduced to help conduct precise segmentation. To implement this, in the second stage, a multi-task FCN is first utilized to learn the distinctive curve and the segmentation map separately, and the two tasks are then combined to produce an accurate segmentation map. The final segmentation results for all three pelvic organs are generated by a weighted max-voting strategy. We have conducted exhaustive experiments on a large and diverse pelvic CT dataset to evaluate our proposed method. The experimental results demonstrate that our proposed method is accurate and robust for this challenging segmentation task, also outperforming the state-of-the-art segmentation methods.
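
Since the max-voting fusion of this two-stage design is sketched above (entry 15), here is the other half of the pipeline, the coarse-to-fine handoff: cropping a region proposal from the coarse mask before fine segmentation (margins and shapes are illustrative assumptions):

```python
import numpy as np

def crop_from_coarse(image: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    """Hypothetical sketch: the coarse network's mask yields a region proposal;
    the fine network then segments only this crop."""
    coords = np.argwhere(coarse_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, image.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return image[slices], slices                 # crop + where it came from

img = np.random.rand(32, 128, 128)
mask = np.zeros_like(img, dtype=bool)
mask[10:20, 40:80, 50:90] = True                 # toy coarse prostate mask
crop, where = crop_from_coarse(img, mask)
print(crop.shape)                                # (26, 56, 56) with margin 8
```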

18.
Med Image Anal ; 47: 31-44, 2018 07.
Article in English | MEDLINE | ID: mdl-29674235

ABSTRACT

Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we aim to tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and then transform these feature maps forward through the convolutional layers of the network. We can further compute a tentative CT synthesis from the midway of the flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation results in better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we can eventually synthesize the final CT image at the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, comparing it with the state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance, in terms of both the perceptual quality of the synthesized CT image and the run-time cost of synthesizing a CT image.
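
A hedged sketch of the embedding operation: synthesize a tentative CT from midway feature maps and concatenate it back for refinement; layer sizes are illustrative, not the DECNN's actual configuration:

```python
import torch
import torch.nn as nn

class EmbeddingBlock(nn.Module):
    """Hypothetical sketch: produce a tentative CT from midway feature maps,
    then embed that tentative CT back so later layers can refine it."""
    def __init__(self, channels: int):
        super().__init__()
        self.to_ct = nn.Conv2d(channels, 1, kernel_size=3, padding=1)  # tentative CT
        self.refine = nn.Sequential(
            nn.Conv2d(channels + 1, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, feats: torch.Tensor):
        tentative_ct = self.to_ct(feats)
        feats = self.refine(torch.cat([feats, tentative_ct], dim=1))   # embed CT back
        return feats, tentative_ct

feats = torch.randn(1, 32, 64, 64)
block = EmbeddingBlock(32)
for _ in range(3):                               # repeated embedding, as in DECNN
    feats, ct = block(feats)
print(ct.shape)                                  # torch.Size([1, 1, 64, 64])
```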


Subjects
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neural Networks, Computer , Tomography, X-Ray Computed , Algorithms , Brain Mapping/methods , Female , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Time Factors
19.
Med Phys ; 45(5): 2063-2075, 2018 May.
Article in English | MEDLINE | ID: mdl-29480928

ABSTRACT

PURPOSE: Accurate 3D image segmentation is a crucial step in radiation therapy planning for head and neck tumors. These segmentation results are currently obtained by manual outlining of tissues, which is a tedious and time-consuming procedure. Automatic segmentation provides an alternative solution which, however, is often difficult for small tissues (i.e., the chiasm and optic nerves in head and neck CT images) because of their small volumes and highly diverse appearance/shape. In this work, we propose to interleave multiple 3D Convolutional Neural Networks (3D-CNNs) to attain automatic segmentation of small tissues in head and neck CT images. METHODS: A 3D-CNN was designed to segment each structure of interest. To make full use of the image appearance information, multiscale patches are extracted to describe the center voxel under consideration and then input to the CNN architecture. Next, as neighboring tissues are often highly related from the physiological and anatomical perspectives, we interleave the CNNs designated for the individual tissues. In this way, the tentative segmentation result of a specific tissue can help refine the segmentations of other neighboring tissues. Finally, as more CNNs are interleaved and cascaded, a complex network of CNNs can be derived, such that all tissues can be jointly segmented and iteratively refined. RESULTS: Our method was validated on a set of 48 CT images, obtained from the Medical Image Computing and Computer Assisted Intervention (MICCAI) Challenge 2015. The Dice coefficient (DC) and the 95% Hausdorff distance (95HD) were computed to measure the accuracy of the segmentation results. The proposed method achieves higher segmentation accuracy (average DC: 0.58 ± 0.17 for the optic chiasm and 0.71 ± 0.08 for the optic nerve; 95HD: 2.81 ± 1.56 mm for the optic chiasm and 2.23 ± 0.90 mm for the optic nerve) than the MICCAI challenge winner (average DC: 0.38 for the optic chiasm and 0.68 for the optic nerve; 95HD: 3.48 mm for the optic chiasm and 2.48 mm for the optic nerve). CONCLUSION: An accurate and automatic segmentation method has been proposed for small tissues in head and neck CT images, which is important for radiotherapy planning.
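
The two reported metrics are standard and easy to sketch. Below, Dice and an approximate 95% Hausdorff distance are computed on toy masks (for brevity, all foreground voxels stand in for surface voxels):

```python
import numpy as np
from scipy.spatial import cKDTree

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=1.0) -> float:
    """Sketch of the 95% Hausdorff distance from voxel coordinates."""
    pa, pb = np.argwhere(a) * spacing, np.argwhere(b) * spacing
    d_ab = cKDTree(pb).query(pa)[0]      # each a-point to nearest b-point
    d_ba = cKDTree(pa).query(pb)[0]
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

m1 = np.zeros((32, 32, 32), bool); m1[8:20, 8:20, 8:20] = True
m2 = np.zeros_like(m1); m2[10:22, 8:20, 8:20] = True
print(f"DC = {dice(m1, m2):.3f}, 95HD = {hd95(m1, m2):.2f} mm (unit spacing)")
```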


Subjects
Head and Neck Neoplasms/diagnostic imaging , Imaging, Three-Dimensional/methods , Joints/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed , Humans
20.
Neurocomputing (Amst) ; 267: 406-416, 2017 Dec 06.
Article in English | MEDLINE | ID: mdl-29217875

ABSTRACT

Positron emission tomography (PET) is an essential technique in many clinical applications such as tumor detection and brain disorder diagnosis. In order to obtain high-quality PET images, a standard-dose radioactive tracer is needed, which inevitably carries the risk of radiation exposure damage. To reduce the patient's exposure to radiation while maintaining the high quality of PET images, in this paper we propose a deep learning architecture to estimate the high-quality standard-dose PET (SPET) image from the combination of the low-quality low-dose PET (LPET) image and the accompanying T1-weighted acquisition from magnetic resonance imaging (MRI). Specifically, we adapt the convolutional neural network (CNN) to account for the two input channels of LPET and T1, and directly learn the end-to-end mapping between the inputs and the SPET output. Then, we integrate multiple CNN modules following the auto-context strategy, such that the tentative SPET estimate of an early CNN can be iteratively refined by subsequent CNNs. Validation on real human brain PET/MRI data shows that our proposed method provides competitive estimation quality of the PET images compared to state-of-the-art methods. Meanwhile, our method is highly efficient when testing on a new subject, e.g., spending ~2 seconds to estimate an entire SPET image, in contrast to ~16 minutes for the state-of-the-art method. These results demonstrate the potential of our method in real clinical applications.
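
A minimal sketch of the auto-context cascade: each stage consumes LPET, T1, and the previous tentative SPET estimate; the stage CNNs here are toy stand-ins, not the paper's modules:

```python
import torch
import torch.nn as nn

class AutoContextCascade(nn.Module):
    """Hypothetical sketch: each CNN stage receives the LPET and T1 channels
    plus the previous stage's tentative SPET estimate, and refines it."""
    def __init__(self, stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(                       # toy stand-in for each CNN module
                nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 1, 3, padding=1),
            ) for _ in range(stages))

    def forward(self, lpet: torch.Tensor, t1: torch.Tensor):
        spet = torch.zeros_like(lpet)            # initial tentative estimate
        for stage in self.stages:
            spet = stage(torch.cat([lpet, t1, spet], dim=1))
        return spet

lpet, t1 = torch.randn(1, 1, 16, 32, 32), torch.randn(1, 1, 16, 32, 32)
print(AutoContextCascade()(lpet, t1).shape)      # torch.Size([1, 1, 16, 32, 32])
```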
