Results 1 - 20 of 41
1.
Article in English | MEDLINE | ID: mdl-38964419

ABSTRACT

PURPOSE: To investigate the potential of virtual contrast-enhanced MRI (VCE-MRI) for gross-tumor-volume (GTV) delineation of nasopharyngeal carcinoma (NPC) using multi-institutional data. METHODS AND MATERIALS: This study retrospectively retrieved T1-weighted (T1w) MRI, T2-weighted (T2w) MRI, gadolinium-based contrast-enhanced MRI (CE-MRI), and planning CT of 348 biopsy-proven NPC patients from three oncology centers. A multimodality-guided synergistic neural network (MMgSN-Net) was trained on 288 patients to leverage complementary features in T1w and T2w MRI for VCE-MRI synthesis, and was independently evaluated on 60 patients. Three board-certified radiation oncologists and two medical physicists participated in clinical evaluations of three aspects: image quality of the synthetic VCE-MRI, VCE-MRI in assisting target volume delineation, and effectiveness of VCE-MRI-based contours in treatment planning. The image quality assessment included distinguishability between VCE-MRI and CE-MRI, clarity of the tumor-to-normal-tissue interface, and veracity of contrast enhancement in tumor invasion risk areas. Primary tumor delineation and treatment planning were manually performed by the radiation oncologists and medical physicists, respectively. RESULTS: The mean accuracy in distinguishing VCE-MRI from CE-MRI was 31.67%; no significant difference was observed in the clarity of the tumor-to-normal-tissue interface between VCE-MRI and CE-MRI; for the veracity of contrast enhancement in tumor invasion risk areas, an accuracy of 85.8% was obtained. The image quality assessment results suggest that the image quality of VCE-MRI is highly similar to that of real CE-MRI. The mean dosimetric difference of planning target volumes was less than 1 Gy. CONCLUSIONS: VCE-MRI is highly promising as a replacement for gadolinium-based CE-MRI in tumor delineation of NPC patients.
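The distinguishability test above asks each evaluator to say which image of a pair is the real CE-MRI; accuracy near 50% means the synthetic images are hard to tell apart, and the reported 31.67% is below chance. A minimal sketch of that metric, with entirely hypothetical reader responses:

```python
def distinguish_accuracy(responses):
    """responses: list of (guessed_real, truly_real) labels, one per paired trial."""
    correct = sum(1 for guess, truth in responses if guess == truth)
    return correct / len(responses)

# Hypothetical trials: "A"/"B" mark which image in the pair was called real vs. was real.
trials = [("A", "B"), ("B", "B"), ("A", "A"), ("B", "A"), ("A", "B"), ("B", "A")]
acc = distinguish_accuracy(trials)
print(round(acc, 4))  # 2 correct of 6 -> 0.3333
```

An accuracy this far below 0.5 would suggest readers systematically mistook the synthetic image for the real one.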

2.
Front Med (Lausanne) ; 11: 1386161, 2024.
Article in English | MEDLINE | ID: mdl-38784232

ABSTRACT

Background: Fungal infections are associated with high morbidity and mortality in the intensive care unit (ICU), but their diagnosis is difficult. In this study, machine learning was applied to design and define a predictive model of ICU-acquired fungi (ICU-AF) in the early stage of fungal infections using Random Forest. Objectives: This study aimed to provide evidence for the early warning and management of fungal infections. Methods: We analyzed the data of patients with culture-positive fungi during their admission to seven ICUs of the First Affiliated Hospital of Chongqing Medical University from January 1, 2015, to December 31, 2019. Patients whose first culture was positive for fungi longer than 48 h after ICU admission were included in the ICU-AF cohort. A predictive model of ICU-AF was obtained using the Least Absolute Shrinkage and Selection Operator (LASSO) and machine learning, and the relationships between the features within the model and the disease severity and mortality of patients were analyzed. Finally, the relationships between the ICU-AF model, antifungal therapy, and empirical antifungal therapy were analyzed. Results: A total of 1,434 cases were ultimately included. We used LASSO dimensionality reduction for all features and selected six features with importance ≥0.05 in the optimal model, namely, times of arterial catheter, enteral nutrition, corticosteroids, broad-spectrum antibiotics, urinary catheter, and invasive mechanical ventilation. The area under the curve of the model for predicting ICU-AF was 0.981 in the test set, with a sensitivity of 0.960 and a specificity of 0.990. The times of arterial catheter (p = 0.011, OR = 1.057, 95% CI = 1.053-1.104) and invasive mechanical ventilation (p = 0.007, OR = 1.056, 95% CI = 1.015-1.098) were independent risk factors for antifungal therapy in ICU-AF. The times of arterial catheter (p = 0.004, OR = 1.098, 95% CI = 0.855-0.970) were an independent risk factor for empirical antifungal therapy.
Conclusion: The most important risk factors for ICU-AF are the six time-related clinical features (arterial catheter, enteral nutrition, corticosteroids, broad-spectrum antibiotics, urinary catheter, and invasive mechanical ventilation), which provide early warning of fungal infection. Furthermore, this model can help ICU physicians assess whether empirical antifungal therapy should be administered to ICU patients who are susceptible to fungal infections.
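The odds ratios quoted above come from logistic-style models: the OR for a one-unit increase in a feature is exp(beta), and a 95% CI follows from beta ± 1.96 × SE on the log-odds scale. A minimal sketch with hypothetical coefficient and standard-error values (not the study's fitted parameters):

```python
import math

def odds_ratio(beta, se):
    """Return (OR, (CI_low, CI_high)) for a logistic coefficient and its SE."""
    lo, hi = beta - 1.96 * se, beta + 1.96 * se
    return math.exp(beta), (math.exp(lo), math.exp(hi))

# Hypothetical values chosen only to match the scale of the reported ORs.
or_, (ci_lo, ci_hi) = odds_ratio(0.0554, 0.012)
print(round(or_, 3))  # 1.057
```

Note that a reported CI that does not bracket its OR (as in the last result above) usually signals a transcription error in the source data.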

3.
Comput Med Imaging Graph ; 115: 102378, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38640621

ABSTRACT

Current methods for analyzing digital pathological images typically employ small image patches to learn local representative features, thereby overcoming heavy computation and memory limitations. However, global contextual features are not fully considered in whole-slide images (WSIs). Here, we designed a hybrid model, called TransGNN, that utilizes a Graph Neural Network (GNN) module and a Transformer module to represent global contextual features. The GNN module builds a WSI-graph over the foreground area of a WSI to explicitly capture structural features, while the Transformer module implicitly learns global context through its self-attention mechanism. Hepatocellular carcinoma (HCC) prognostic biomarkers were used to illustrate the importance of global contextual information in cancer histopathological analysis. Our model was validated using 362 WSIs from 355 HCC patients in The Cancer Genome Atlas (TCGA). It showed impressive performance, with a concordance index (C-index) of 0.7308 (95% confidence interval (CI): 0.6283-0.8333) for overall survival prediction, the best among all compared models. Additionally, our model achieved areas under the curve of 0.7904, 0.8087, and 0.8004 for 1-year, 3-year, and 5-year survival predictions, respectively. We further verified the superior performance of our model in HCC risk stratification and its clinical value through Kaplan-Meier curves and univariate and multivariate Cox regression analyses. Our research demonstrates that TransGNN effectively utilizes the contextual information of WSIs and contributes to the clinical prognostic evaluation of HCC.
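Several records in this listing report Harrell's concordance index (C-index) for survival prediction. A minimal sketch of how it is computed for right-censored data: among comparable pairs (the earlier time must be an observed event), count pairs where the higher predicted risk has the shorter survival. This is a generic illustration, not the evaluation code of any paper here:

```python
from itertools import combinations

def c_index(times, events, risks):
    """times: survival times; events: 1 = event observed, 0 = censored;
    risks: predicted risk scores (higher = worse prognosis)."""
    concordant = ties = comparable = 0
    for i, j in combinations(range(len(times)), 2):
        if times[j] < times[i]:
            i, j = j, i  # order so i has the earlier time
        if times[i] == times[j] or not events[i]:
            continue  # not comparable: tied times, or earlier time is censored
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1
        elif risks[i] == risks[j]:
            ties += 1
    return (concordant + 0.5 * ties) / comparable

# Perfectly ordered risks give a C-index of 1.0; random risks hover near 0.5.
print(c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # 1.0
```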


Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Neural Networks, Computer; Liver Neoplasms/diagnostic imaging; Humans; Prognosis; Image Interpretation, Computer-Assisted/methods; Male; Female
4.
Org Biomol Chem ; 22(15): 2968-2973, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38529682

ABSTRACT

An Fe-catalyzed, visible-light-induced condensation of alkylbenzenes with anthranilamides has been developed. Upon irradiation, the trivalent iron complex generated chlorine radicals, which abstracted hydrogen from benzylic C-H bonds to form benzyl radicals. These benzyl radicals were converted into oxygenated products under air, which subsequently reacted with anthranilamides to afford quinazolinones.

5.
Comput Methods Programs Biomed ; 248: 108116, 2024 May.
Article in English | MEDLINE | ID: mdl-38518408

ABSTRACT

BACKGROUND AND OBJECTIVE: Mutations in isocitrate dehydrogenase 1 (IDH1) play a crucial role in the prognosis, diagnosis, and treatment of gliomas. However, current methods for determining mutation status, such as immunohistochemistry and gene sequencing, are difficult to implement widely in routine clinical diagnosis. Recent studies have shown that deep learning methods based on glioma pathological images can predict IDH1 mutation status. Our research focuses on utilizing multi-scale information in pathological images to improve the accuracy of predicting IDH1 mutations, thereby providing an accurate and cost-effective prediction method for routine clinical diagnosis. METHODS: In this paper, we propose a multi-scale fusion gene identification network (MultiGeneNet). The network first uses two feature extractors to obtain feature maps from images at different scales, and then fuses the multi-scale features with a bilinear pooling layer based on the Hadamard product. By fully exploiting the complementarity among features at different scales, we obtain a more comprehensive and rich multi-scale feature representation. RESULTS: On a hematoxylin-and-eosin-stained pathological section dataset of 296 patients, our method achieved an accuracy of 83.575% and an AUC of 0.886, significantly outperforming single-scale methods. CONCLUSIONS: Our method can be deployed in medical aid systems at very low cost, serving as a diagnostic or prognostic tool for glioma patients in medically underserved areas.
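The fusion step described above can be sketched in a few lines: project features from the two scales into a shared space and combine them with an element-wise (Hadamard) product. Dimensions and the random stand-in projections below are hypothetical, not MultiGeneNet's actual learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)
f_low = rng.standard_normal(256)      # features from the low-magnification extractor
f_high = rng.standard_normal(512)     # features from the high-magnification extractor
W1 = rng.standard_normal((128, 256))  # learned projections (random stand-ins here)
W2 = rng.standard_normal((128, 512))

# Hadamard-product bilinear pooling: multiply element-wise in the shared space.
fused = (W1 @ f_low) * (W2 @ f_high)
print(fused.shape)  # (128,)
```

The multiplicative interaction lets each fused dimension respond only when both scales agree, which is the intuition behind bilinear pooling.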


Subjects
Brain Neoplasms; Glioma; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/genetics; Magnetic Resonance Imaging/methods; Glioma/diagnostic imaging; Glioma/genetics; Mutation; Prognosis; Isocitrate Dehydrogenase/genetics
6.
J Org Chem ; 89(7): 4395-4405, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38501298

ABSTRACT

A visible-light-induced chemodivergent synthesis of tetracyclic quinazolinones and 3-iminoisoindoliones has been developed. This chemodivergent reaction afforded two different kinds of products under substrate control. A detailed investigation of the reaction mechanism revealed that this consecutive photoinduced electron transfer (ConPET) cascade cyclization involves a radical process, with an aryl radical as the crucial intermediate. The method employs 4-DPAIPN as a photocatalyst and i-Pr2NEt as a sacrificial electron donor, enabling metal-free conditions.

7.
Cancers (Basel) ; 15(19)2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37835518

ABSTRACT

Histopathologic whole-slide images (WSIs) are generally considered the gold standard for cancer diagnosis and prognosis. Survival prediction based on WSIs has recently attracted substantial attention. Nevertheless, it remains a central challenge owing to the inherent difficulty of predicting patient prognosis and of effectively extracting informative survival-specific representations from gigapixel WSIs. In this study, we present a fully automated cellular-level dual global fusion pipeline for survival prediction. Specifically, the proposed method first describes the composition of different cell populations on the WSI. Then, it generates dimension-reduced WSI-embedded maps, allowing for efficient investigation of the tumor microenvironment. In addition, we introduce a novel dual global fusion network to incorporate global and inter-patch features of cell distribution, which enables sufficient fusion of different types and locations of cells. We further validate the proposed pipeline using The Cancer Genome Atlas lung adenocarcinoma dataset. Our model achieves a C-index of 0.675 (±0.05) in the five-fold cross-validation setting and surpasses comparable methods. Further, we extensively analyze embedded map features and survival probabilities. These experimental results demonstrate the potential of our proposed pipeline for applications using WSIs in lung adenocarcinoma and other malignancies.

8.
Front Genet ; 14: 1260531, 2023.
Article in English | MEDLINE | ID: mdl-37811144

ABSTRACT

With the increasing throughput of modern sequencing instruments, the cost of storing and transmitting sequencing data has also increased dramatically. Although many tools have been developed to compress sequencing data, there is still a need for a compressor with a higher compression ratio. We present a two-step framework for compressing sequencing data in this paper. The first step repacks the original data into a binary stream, and the second step compresses the stream with an LZMA encoder. We develop a new strategy to encode the original file into a highly compressible stream for LZMA. In addition, an FPGA-accelerated implementation of LZMA was built to speed up the second step. As a demonstration, we present repaq, a lossless non-reference compressor of FASTQ format files. We also introduce a multi-file redundancy elimination method, which is very useful for compressing paired-end sequencing data. According to our test results, the compression ratio of repaq is much higher than that of other FASTQ compressors. For some deep sequencing data, the compression ratio of repaq can exceed 25, almost four times that of Gzip. The framework presented in this paper can also be applied to develop new tools for compressing other sequencing data. The open-source code of repaq is available at: https://github.com/OpenGene/repaq.
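The two-step idea can be illustrated with Python's standard-library `lzma` module. This is a toy sketch, not the actual repaq format: pack bases into a dense 2-bit-per-base stream first, then hand the stream to the LZMA encoder:

```python
import lzma

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_bases(seq):
    """Repack an ACGT string into 2 bits per base, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)  # (a real format would also record any trailing padding)
    return bytes(out)

seq = "ACGT" * 10000
packed = pack_bases(seq)           # step 1: 4x smaller before any compression
compressed = lzma.compress(packed)  # step 2: LZMA on the binary stream
print(len(packed) < len(seq) and len(compressed) < len(packed))  # True
```

Repacking first removes the 8-bits-per-character waste of plain text, so the entropy coder in step 2 starts from a denser, more uniform stream.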

9.
Article in English | MEDLINE | ID: mdl-37021898

ABSTRACT

Precise classification of histopathological images is crucial for computer-aided diagnosis in clinical practice. Magnification-based learning networks have attracted considerable attention for their ability to improve performance in histopathological classification. However, fusing pyramids of histopathological images at different magnifications is an under-explored area. In this paper, we propose a novel deep multi-magnification similarity learning (DSML) approach that aids the interpretation of a multi-magnification learning framework and makes feature representations easy to visualize from low dimensions (e.g., cell level) to high dimensions (e.g., tissue level), overcoming the difficulty of understanding cross-magnification information propagation. It uses a similarity cross-entropy loss function to simultaneously learn the similarity of information across magnifications. To verify the effectiveness of DSML, experiments with different network backbones and different magnification combinations were designed, and its interpretability was investigated through visualization. Our experiments were performed on two different histopathological datasets: a clinical nasopharyngeal carcinoma dataset and the public BCSS2021 breast cancer dataset. The results show that our method achieved outstanding classification performance, with a higher area under the curve, accuracy, and F-score than comparable methods. Moreover, the reasons behind the effectiveness of multi-magnification learning are discussed.
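The similarity loss described above can be sketched as a cross-entropy between the class distributions predicted at two magnifications, so that the views are pushed to agree. The logits below are hypothetical; the paper's exact loss formulation may differ in detail:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def similarity_cross_entropy(logits_low, logits_high):
    """Cross-entropy between the distributions of two magnification views."""
    p = softmax(logits_low)   # e.g., cell-level view
    q = softmax(logits_high)  # e.g., tissue-level view
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

aligned = similarity_cross_entropy([2.0, 0.1], [2.0, 0.1])
misaligned = similarity_cross_entropy([2.0, 0.1], [0.1, 2.0])
print(aligned < misaligned)  # True: agreement across magnifications costs less
```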

11.
IEEE J Biomed Health Inform ; 27(7): 3258-3269, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37099476

ABSTRACT

Anatomical resection (AR) based on anatomical sub-regions is a promising method of precise surgical resection, which has been proven to improve long-term survival by reducing local recurrence. The fine-grained segmentation of an organ's surgical anatomy (FGS-OSA), i.e., segmenting an organ into multiple anatomic regions, is critical for localizing tumors in AR surgical planning. However, automatically obtaining FGS-OSA results with computer-aided methods faces the challenges of appearance ambiguities among sub-regions (i.e., inter-sub-region appearance ambiguities) caused by similar HU distributions across the sub-regions of an organ's surgical anatomy, invisible boundaries, and similarities between anatomical landmarks and other anatomical information. In this paper, we propose a novel fine-grained segmentation framework termed the "anatomic relation reasoning graph convolutional network" (ARR-GCN), which incorporates prior anatomic relations into framework learning. In ARR-GCN, a graph is constructed from the sub-regions to model the classes and their relations. Further, to obtain discriminative initial node representations in graph space, a sub-region center module is designed. Most importantly, to explicitly learn the anatomic relations, the prior anatomic relations among the sub-regions are encoded as an adjacency matrix and embedded into the intermediate node representations to guide framework learning. ARR-GCN was validated on two FGS-OSA tasks: i) liver segment segmentation and ii) lung lobe segmentation. On both tasks, ARR-GCN outperformed other state-of-the-art segmentation methods and showed promising ability to suppress ambiguities among sub-regions.
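The adjacency-matrix embedding above follows the standard graph-convolution pattern. A toy sketch of one symmetric-normalized propagation step, H' = D^(-1/2)(A + I)D^(-1/2) H W, where A encodes which sub-regions are anatomically adjacent (the 3-node graph and identity weights are illustrative, not the paper's network):

```python
import numpy as np

# Prior anatomic relations: region 1 borders regions 0 and 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                    # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))

H = np.eye(3)  # initial node features (one-hot stand-ins)
W = np.eye(3)  # identity weights, for illustration only

# One graph-convolution step: each node mixes its own and its neighbours' features.
H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
print(H_next.shape)  # (3, 3)
```

After the step, each node's representation carries information from its anatomically adjacent sub-regions, which is how the prior relations guide learning.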


Subjects
Liver; Humans; Liver/anatomy & histology; Liver/diagnostic imaging; Liver/surgery; Neoplasms
12.
Comput Biol Med ; 158: 106875, 2023 05.
Article in English | MEDLINE | ID: mdl-37058759

ABSTRACT

Glioma is a heterogeneous disease that requires classification into subtypes with similar clinical phenotypes, prognosis, or treatment responses. Metabolic-protein interactions (MPIs) can provide meaningful insights into cancer heterogeneity, and the potential of lipids and lactate for identifying prognostic subtypes of glioma remains relatively unexplored. Therefore, we propose a method that constructs an MPI relationship matrix (MPIRM) based on a triple-layer network (Tri-MPN) combined with mRNA expression, and processes the MPIRM with deep learning to identify glioma prognostic subtypes. Subtypes with significant differences in prognosis were detected in glioma (p-value < 2e-16, 95% CI). These subtypes showed strong correlations with immune infiltration, mutational signatures, and pathway signatures. This study demonstrates the effectiveness of node interactions in MPI networks for understanding the heterogeneity of glioma prognosis.


Subjects
Deep Learning; Glioma; Humans; Gene Expression Profiling/methods; Glioma/genetics; Glioma/metabolism
13.
Med Phys ; 50(5): 2971-2984, 2023 May.
Article in English | MEDLINE | ID: mdl-36542423

ABSTRACT

PURPOSE: Reducing the radiation exposure experienced by patients in total-body computed tomography (CT) imaging has attracted extensive attention in the medical imaging community. A low radiation dose may result in increased noise and artifacts that greatly affect subsequent clinical diagnosis. To obtain high-quality total-body low-dose CT (LDCT) images, previous deep learning-based studies developed various network architectures. However, most of these methods only employ normal-dose CT (NDCT) images as ground truths to guide the training of the denoising network. Under this simple restriction, the reconstructed images tend to lose favorable image details and easily generate oversmoothed textures. This study explores how to better utilize the information contained in the feature spaces of NDCT images to guide the LDCT image reconstruction process and achieve high-quality results. METHODS: We propose a novel intratask knowledge transfer (KT) method that leverages the knowledge distilled from NDCT images as an auxiliary component of the LDCT image reconstruction process. Our proposed architecture, the teacher-student consistency network (TSC-Net), consists of teacher and student networks with identical architectures. Through the designed KT loss, the student network is encouraged to emulate the teacher network in the representation space and gain robust prior content. In addition, to further exploit the information contained in CT scans, a contrastive regularization mechanism (CRM) built upon contrastive learning is introduced. The CRM aims to minimize the L2 distance from the predicted CT images to the NDCT samples while maximizing their distance to the LDCT samples in the latent space. Moreover, based on attention and deformable convolution, we design a dynamic enhancement module (DEM) to improve the network's capability to transform input information flows.
RESULTS: Ablation studies confirm the effectiveness of the proposed KT loss, CRM, and DEM. Extensive experimental results demonstrate that the TSC-Net outperforms state-of-the-art methods in both quantitative and qualitative evaluations. The results obtained in clinical readings also show that our proposed method can reconstruct high-quality CT images for clinical applications. CONCLUSIONS: Based on the experimental results and clinical readings, the TSC-Net performs better than other approaches. In future work, we may explore reconstructing LDCT images by fusing the positron emission tomography (PET) and CT modalities to further improve the visual quality of the reconstructed CT images.
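The two auxiliary losses described above can be sketched with stand-in latent features: a knowledge-transfer term pulling student features toward teacher features, and a contrastive term that is small when the prediction sits near the NDCT sample and far from the LDCT input. The ratio form below is one common way to combine the two distances and is an assumption, not the paper's exact formula:

```python
import numpy as np

def kt_loss(student_feat, teacher_feat):
    """Mean-squared distance between student and teacher representations."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

def contrastive_loss(pred, ndct, ldct, eps=1e-8):
    d_pos = np.linalg.norm(pred - ndct)  # distance to the clean target: minimize
    d_neg = np.linalg.norm(pred - ldct)  # distance to the noisy input: maximize
    return float(d_pos / (d_neg + eps))

rng = np.random.default_rng(0)
ndct = rng.standard_normal(64)                 # clean-target latent (stand-in)
ldct = rng.standard_normal(64)                 # noisy-input latent (stand-in)
pred = ndct + 0.01 * rng.standard_normal(64)   # a good prediction sits near NDCT
print(contrastive_loss(pred, ndct, ldct) < 1.0)  # True: pulled to NDCT, pushed from LDCT
```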


Subjects
Algorithms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Positron-Emission Tomography; Artifacts; Signal-to-Noise Ratio
14.
Genes (Basel) ; 13(10)2022 Sep 30.
Article in English | MEDLINE | ID: mdl-36292654

ABSTRACT

Cancer prognosis analysis is of essential interest in clinical practice. To explore the prognostic power of computational histopathology and genomics, this paper constructs a multi-modality prognostic model for survival prediction. We collected 346 patients diagnosed with hepatocellular carcinoma (HCC) from The Cancer Genome Atlas (TCGA); each patient has 1-3 whole-slide images (WSIs) and an mRNA expression file. WSIs were processed by a multi-instance deep learning model to obtain patient-level survival risk scores; mRNA expression data were processed by weighted gene co-expression network analysis (WGCNA), and the top hub genes of each module were extracted as risk factors. Information from the two modalities was integrated by a Cox proportional hazards model to predict patient outcomes. The overall survival predictions of the multi-modality model (concordance index (C-index): 0.746, 95% confidence interval (CI): ±0.077) outperformed those based on the histopathology risk score or hub genes alone. Furthermore, in predicting 1-year and 3-year survival, the area under the curve reached 0.816 and 0.810, respectively. In conclusion, this paper provides an effective workflow for multi-modality prognosis of HCC; integrating histopathology and genomic information has the potential to assist clinical prognosis management.
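The integration step above rests on the Cox model's linear predictor: the combined risk is a weighted sum of the histopathology risk score and the hub-gene values, and exp() of a coefficient is that factor's hazard ratio. A minimal sketch with hypothetical coefficients (the study's fitted values are not given here):

```python
import math

def cox_linear_predictor(betas, covariates):
    """Linear predictor of a Cox model: sum of beta_i * x_i."""
    return sum(b * x for b, x in zip(betas, covariates))

# Hypothetical coefficients: histology risk score, gene A, gene B.
betas = [0.8, 0.3, -0.2]
patient = [1.2, 0.5, 1.0]

lp = cox_linear_predictor(betas, patient)
relative_hazard = math.exp(lp)  # hazard relative to the baseline patient
print(round(lp, 2))  # 0.91
```

Ranking patients by this linear predictor is exactly what the C-index then evaluates.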


Subjects
Carcinoma, Hepatocellular; Deep Learning; Liver Neoplasms; Humans; Carcinoma, Hepatocellular/diagnosis; Carcinoma, Hepatocellular/genetics; Liver Neoplasms/diagnosis; Liver Neoplasms/genetics; Gene Expression Regulation, Neoplastic; Gene Expression Profiling; Prognosis; Genomics/methods; RNA, Messenger/genetics
15.
Radiother Oncol ; 170: 198-204, 2022 05.
Article in English | MEDLINE | ID: mdl-35351537

ABSTRACT

BACKGROUND AND PURPOSE: Geometric information such as distance information is essential for dose calculations in radiotherapy. However, state-of-the-art dose prediction methods use only binary masks without distance information. This study aims to develop a dose prediction deep learning method for nasopharyngeal carcinoma radiotherapy that takes advantage of distance information as well as mask information. MATERIALS AND METHODS: A novel transformation method based on boundary distance was proposed to facilitate the prediction of dose distributions. Radiotherapy datasets of 161 nasopharyngeal carcinoma patients were retrospectively collected, including binary masks of organs-at-risk (OARs) and targets, planning CT, and clinical plans. The patients were randomly divided into 130, 11, and 20 cases for training, validating, and testing the models, respectively. Furthermore, 40 patients from an external cohort were used to test the generalizability of the models. RESULTS: The proposed method shows superior performance. The predicted dose error and dose-volume histogram (DVH) error of our method were 7.51% and 11.6% lower than those of the mask-based method, respectively. For inverse planning, compared with mask-based methods, our method performed similarly on the GTVnx and OARs and outperformed them on the GTVnd and the CTV, whose pass rates increased from 89.490% and 90.016% to 96.694% and 91.189%, respectively. CONCLUSION: The preliminary results on nasopharyngeal carcinoma radiotherapy cases showed that our proposed distance-guided method for dose prediction achieved better performance than mask-based methods. Further studies with more patients and on other cancer sites are warranted to fully validate the proposed method.
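The boundary-distance transformation above replaces a 0/1 mask with each voxel's distance to the mask boundary, giving the dose model the geometric gradient a binary mask lacks. A brute-force toy sketch (Manhattan distance on a tiny 2-D mask; the paper's exact transform and metric are not specified here):

```python
import numpy as np

def boundary_distance_map(mask):
    """Distance from every pixel to the mask boundary (boundary = mask pixels
    with at least one background 4-neighbour). Brute force, for tiny arrays."""
    h, w = mask.shape
    boundary = []
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny, nx]:
                    boundary.append((y, x))
                    break
    dist = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            dist[y, x] = min(abs(y - by) + abs(x - bx) for by, bx in boundary)
    return dist

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True  # a 3x3 square target
dmap = boundary_distance_map(mask)
print(dmap[2, 2])  # 1.0 -> the centre voxel is one step from the boundary
```

In practice a production pipeline would use a linear-time distance transform rather than this O(n^2) scan.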


Subjects
Deep Learning; Nasopharyngeal Neoplasms; Radiotherapy, Intensity-Modulated; Humans; Nasopharyngeal Carcinoma/radiotherapy; Nasopharyngeal Neoplasms/pathology; Nasopharyngeal Neoplasms/radiotherapy; Organs at Risk/pathology; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods; Retrospective Studies
16.
Am J Pathol ; 192(3): 553-563, 2022 03.
Article in English | MEDLINE | ID: mdl-34896390

ABSTRACT

Visual inspection of hepatocellular carcinoma cancer regions by experienced pathologists in whole-slide images (WSIs) is a challenging, labor-intensive, and time-consuming task because of the large scale and high resolution of WSIs. Therefore, a weakly supervised framework based on a multiscale attention convolutional neural network (MSAN-CNN) was introduced into this process. Herein, patch-based images with image-level normal/tumor annotation (rather than images with pixel-level annotation) were fed into a classification neural network. To further improve the performances of cancer region detection, multiscale attention was introduced into the classification neural network. A total of 100 cases were obtained from The Cancer Genome Atlas and divided into 70 training and 30 testing data sets that were fed into the MSAN-CNN framework. The experimental results showed that this framework significantly outperforms the single-scale detection method according to the area under the curve and accuracy, sensitivity, and specificity metrics. When compared with the diagnoses made by three pathologists, MSAN-CNN performed better than a junior- and an intermediate-level pathologist, and slightly worse than a senior pathologist. Furthermore, MSAN-CNN provided a very fast detection time compared with the pathologists. Therefore, a weakly supervised framework based on MSAN-CNN has great potential to assist pathologists in the fast and accurate detection of cancer regions of hepatocellular carcinoma on WSIs.


Subjects
Carcinoma, Hepatocellular; Liver Neoplasms; Attention; Humans; Neural Networks, Computer; Pathologists
17.
Quant Imaging Med Surg ; 11(12): 4709-4720, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34888183

ABSTRACT

BACKGROUND: In the radiotherapy of nasopharyngeal carcinoma (NPC), magnetic resonance imaging (MRI) is widely used to delineate the tumor area more accurately. While MRI offers higher soft-tissue contrast, patient positioning and couch correction based on bony image fusion with computed tomography (CT) are also necessary. There is thus an urgent need for high image contrast between bone and soft tissue to facilitate target delineation and patient positioning in NPC radiotherapy. In this paper, our aim is to develop a novel image conversion between the CT and MRI modalities to obtain clear bone and soft-tissue images simultaneously, here called bone-enhanced MRI (BeMRI). METHODS: Thirty-five patients were retrospectively selected for this study. All patients underwent clinical CT simulation and 1.5T MRI within the same week in Shenzhen Second People's Hospital. To synthesize BeMRI, two deep learning networks, U-Net and CycleGAN, were constructed to transform MRI into synthetic CT (sCT) images. Each network used 28 patients' images as the training set, with the remaining 7 patients as the test set (~1/5 of the dataset). The bone structure from the sCT was then extracted by a threshold-based method and embedded into the corresponding part of the MRI image to generate the BeMRI image. To evaluate the performance of these networks, the following metrics were applied: mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). RESULTS: In our experiments, both deep learning models achieved good performance and effectively extracted bone structure from MRI. Specifically, the supervised U-Net model achieved the best results, with the lowest overall average MAE of 125.55 (P<0.05), the highest SSIM of 0.89, and the highest PSNR of 23.84. These results indicate that BeMRI displays bone structure at higher contrast than conventional MRI.
CONCLUSIONS: A new image modality, BeMRI, a composite of CT and MRI, was proposed. With high image contrast for both bone structure and soft tissues, BeMRI can facilitate tumor localization and patient positioning and eliminate the need to switch frequently between separate MRI and CT images during NPC radiotherapy.
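The composition step above (threshold the synthetic CT, embed the bone region into the MRI) can be sketched in a few lines. The 150 HU threshold and the bone intensity value below are hypothetical stand-ins; the paper does not state its threshold here:

```python
import numpy as np

def compose_bemri(mri, sct, hu_threshold=150.0, bone_value=1.0):
    """Embed the bone mask extracted from the synthetic CT into the MRI."""
    bone_mask = sct > hu_threshold  # threshold-based bone extraction
    bemri = mri.copy()
    bemri[bone_mask] = bone_value   # paint bone in at high intensity
    return bemri

mri = np.full((4, 4), 0.3)                   # toy soft-tissue image
sct = np.zeros((4, 4)); sct[0, :] = 1000.0   # top row looks like bone on sCT
bemri = compose_bemri(mri, sct)
print(bemri[0, 0], bemri[3, 3])  # 1.0 0.3
```

Everywhere the sCT reads as bone, the composite shows bone; elsewhere the original MRI soft-tissue contrast is preserved.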

18.
Quant Imaging Med Surg ; 11(12): 4881-4894, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34888196

ABSTRACT

Modern conformal beam delivery techniques require image guidance to ensure that the prescribed dose is delivered as planned. Recent advances in artificial intelligence (AI) have greatly augmented our ability to accurately localize the treatment target while sparing normal tissues. In this paper, we review the applications of AI-based algorithms in image-guided radiotherapy (IGRT) and discuss their implications for the future of clinical radiotherapy practice. The benefits, limitations, and some important trends in the research and development of AI-based IGRT techniques are also discussed. AI-based IGRT techniques have the potential to monitor tumor motion, reduce treatment uncertainty, and improve treatment precision. In particular, they also allow more healthy tissue to be spared while keeping tumor coverage the same or even better.

19.
Comput Med Imaging Graph ; 94: 101989, 2021 12.
Article in English | MEDLINE | ID: mdl-34741846

ABSTRACT

BACKGROUND AND OBJECTIVE: Real-time localization and shape extraction of the guide wire in fluoroscopic images play a significant role in image-guided navigation during cerebral and cardiovascular interventions. Given the non-rigid and sparse characteristics of guide wire structures and the low SNR (signal-to-noise ratio) of fluoroscopic images, traditional handcrafted guide wire tracking methods such as the Frangi filter, Hessian matrix, or open active contours usually produce insufficient accuracy at high computational cost, and may require extra human intervention for proper initialization or correction. Applying deep learning techniques to guide wire tracking is reported to significantly improve localization accuracy, but the heavy computational cost remains a concern. METHOD: In this paper, we propose a two-phase deep learning scheme for accurate, real-time guide wire shape extraction in fluoroscopic sequences. In the first phase, we train a guide wire localization network to pick image regions containing guide wire structures. From the picked regions, we train a guide wire shape extraction network in the second phase to mark the guide wire pixels. RESULTS: Our proposed method accurately distinguishes about 99% of guide wire structure pixels, falsely detected background pixels are close to 0, and the average offset from the ground truth is less than 1 pixel. For extreme cases where traditional handcrafted methods may fail, our method can still extract the guide wire completely and accurately. The processing time for a 512 × 512 pixel image is 78 ms. CONCLUSION: Compared with the traditional filtering-based method from our previous work, our proposed method achieves more accurate and stable performance.
Compared with other deep learning methods, it significantly improves computational efficiency to meet the real-time requirements of clinical applications.


Subjects
Deep Learning; Fluoroscopy/methods; Humans
20.
Front Oncol ; 11: 751223, 2021.
Article in English | MEDLINE | ID: mdl-34765555

ABSTRACT

Whole-slide imaging enables scanning entire stained glass slides at high resolution into digital images for tissue morphology/molecular pathology assessment and analysis, and has seen increasing adoption for both clinical and research applications. As an alternative to conventional optical microscopy, lensfree holographic imaging, which offers high resolution and a wide field of view (FOV) with digital focusing, has been widely used in various types of biomedical imaging. However, accurate colour holographic imaging with pixel super-resolution reconstruction has remained a great challenge due to its coherent characteristics. In this work, we propose wide-field pixel super-resolution colour lensfree microscopy that performs wavelength-scanning pixel super-resolution and phase retrieval simultaneously on the red, green, and blue (RGB) channels. The high-resolution three-channel composite colour image is converted to YUV space to separate the colour components from the brightness component; the brightness component is kept unchanged while the colour components are smoothed with an average filter, which not only eliminates the rainbow artifacts common in holographic colour reconstruction but also maintains the high-resolution details collected under different colour illuminations. We conducted experiments on the reconstruction of a USAF1951 target, stained lotus root, and a red bone marrow smear to evaluate spatial resolution and colour reconstruction, with an imaging FOV >40 mm2.
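The colour-correction step above can be sketched as: convert RGB to YUV, keep the full-resolution luma (Y) untouched, and smooth only the chroma (U, V). The toy image and simple box filter below are illustrative stand-ins for the paper's average filter:

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Standard-definition RGB -> YUV conversion matrix."""
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.147, -0.289, 0.436],
                  [0.615, -0.515, -0.100]])
    return rgb @ m.T

def box_filter(channel, k=3):
    """k x k mean filter with edge padding (naive loop, for small images)."""
    pad = k // 2
    padded = np.pad(channel, pad, mode="edge")
    out = np.zeros_like(channel)
    h, w = channel.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

rng = np.random.default_rng(0)
rgb = rng.random((8, 8, 3))          # toy reconstructed colour image
yuv = rgb_to_yuv(rgb)
yuv[..., 1] = box_filter(yuv[..., 1])  # smooth chroma only; Y keeps full detail
yuv[..., 2] = box_filter(yuv[..., 2])
print(yuv.shape)  # (8, 8, 3)
```

Because coherent rainbow artifacts live mostly in the chroma channels, filtering U and V suppresses them while the untouched Y channel preserves the super-resolved detail.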
