Results 1 - 20 of 42
1.
J Org Chem ; 89(7): 4395-4405, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38501298

ABSTRACT

A visible-light-induced chemodivergent synthesis of tetracyclic quinazolinones and 3-iminoisoindolinones has been developed. This chemodivergent reaction affords two distinct products under substrate control. A detailed investigation of the reaction mechanism revealed that this consecutive photoinduced electron transfer (ConPET) cascade cyclization proceeds through a radical pathway in which an aryl radical is the crucial intermediate. The method employs 4-DPAIPN as a photocatalyst and i-Pr2NEt as a sacrificial electron donor, enabling metal-free conditions.

2.
Org Biomol Chem ; 22(15): 2968-2973, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38529682

ABSTRACT

An Fe-catalyzed, visible-light-induced condensation of alkylbenzenes with anthranilamides has been developed. Upon irradiation, the trivalent iron complex generates chlorine radicals, which abstract hydrogen from benzylic C-H bonds to form benzyl radicals. Under air, these benzyl radicals are converted into oxygenated products, which subsequently react with anthranilamides to furnish quinazolinones.

3.
Am J Pathol ; 192(3): 553-563, 2022 03.
Article in English | MEDLINE | ID: mdl-34896390

ABSTRACT

Visual inspection of hepatocellular carcinoma regions in whole-slide images (WSIs) by experienced pathologists is a challenging, labor-intensive, and time-consuming task because of the large scale and high resolution of WSIs. Therefore, a weakly supervised framework based on a multiscale attention convolutional neural network (MSAN-CNN) was introduced into this process. Herein, patch-based images with image-level normal/tumor annotations (rather than pixel-level annotations) were fed into a classification neural network. To further improve cancer region detection, multiscale attention was introduced into the classification network. A total of 100 cases were obtained from The Cancer Genome Atlas and divided into 70 training and 30 testing cases that were fed into the MSAN-CNN framework. The experimental results showed that this framework significantly outperforms the single-scale detection method in terms of area under the curve, accuracy, sensitivity, and specificity. When compared with the diagnoses made by three pathologists, MSAN-CNN performed better than a junior- and an intermediate-level pathologist, and slightly worse than a senior pathologist. Furthermore, MSAN-CNN provided a much faster detection time than the pathologists. Therefore, a weakly supervised framework based on MSAN-CNN has great potential to assist pathologists in the fast and accurate detection of hepatocellular carcinoma cancer regions on WSIs.
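The abstract does not specify how the multiscale attention combines features, so the following NumPy sketch shows one generic form such a mechanism could take: each scale's feature vector is scored, the scores are softmax-normalized across scales, and the fused representation is their weighted sum. The scoring vector here is a random stand-in for learned weights, not the paper's actual parameters.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_fuse(scale_feats, score_w):
    """Fuse per-scale feature vectors with attention weights.

    scale_feats: (n_scales, d) array, one feature vector per scale.
    score_w:     (d,) scoring vector (a stand-in for learned weights).
    """
    scores = scale_feats @ score_w      # one relevance score per scale
    weights = softmax(scores)           # normalize across scales
    fused = weights @ scale_feats       # weighted sum over scales
    return fused, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8))         # 3 scales, 8-dim features each
w = rng.normal(size=8)
fused, weights = attention_fuse(feats, w)
```

The softmax guarantees the scale weights are positive and sum to one, so the fused vector stays in the convex hull of the per-scale features.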


Subject(s)
Hepatocellular Carcinoma; Liver Neoplasms; Attention; Humans; Neural Networks, Computer; Pathologists
4.
Bioinformatics ; 36(19): 4968-4969, 2020 12 08.
Article in English | MEDLINE | ID: mdl-32637981

ABSTRACT

SUMMARY: It is now feasible to collect massive numbers of features for quantitative representation and precision medicine, so automatic ranking to identify the most informative and discriminative features is increasingly important. To address this need, 42 feature ranking (FR) methods are integrated into a MATLAB toolbox (matFR). The methods apply mutual information, statistical analysis, structure clustering, and other principles to estimate the relative importance of features in specific measure spaces. These methods are summarized, and an example shows how to apply an FR method to sort mammographic breast lesion features. The toolbox is easy to use and flexible enough to integrate additional methods. Importantly, it provides a tool to compare, investigate, and interpret the features selected for various applications. AVAILABILITY AND IMPLEMENTATION: The toolbox is freely available at http://github.com/NicoYuCN/matFR. A tutorial and an example with a dataset are provided.
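matFR itself is a MATLAB toolbox; as an illustration of the simplest kind of statistical FR criterion it integrates, the following Python sketch ranks features by the absolute Pearson correlation with the class label (this is a generic analogue, not code from the toolbox).

```python
import numpy as np

def rank_by_abs_correlation(X, y):
    """Rank features by |Pearson correlation| with the label, descending.

    One of the simple statistical criteria an FR toolbox might apply.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / np.sqrt(
        (Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.argsort(-np.abs(r)), r

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=300).astype(float)
X = np.column_stack([
    y + 0.1 * rng.normal(size=300),   # strongly informative feature
    rng.normal(size=300),             # pure noise feature
])
order, scores = rank_by_abs_correlation(X, y)
```

On this toy data the informative feature is ranked first; mutual-information-based methods in the toolbox generalize the same idea to nonlinear dependence.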


Subject(s)
Software; Cluster Analysis
5.
Am J Pathol ; 190(8): 1691-1700, 2020 08.
Article in English | MEDLINE | ID: mdl-32360568

ABSTRACT

The pathologic diagnosis of nasopharyngeal carcinoma (NPC) by different pathologists is often inefficient and inconsistent. We therefore introduced a deep learning algorithm into this process and compared the model's performance with that of three pathologists with different levels of experience to demonstrate its clinical value. In this retrospective study, a total of 1970 whole slide images of 731 cases were collected and divided into training, validation, and testing sets. Inception-v3, a state-of-the-art convolutional neural network, was trained to classify images into three categories: chronic nasopharyngeal inflammation, lymphoid hyperplasia, and NPC. The mean area under the curve (AUC) of the deep learning model was 0.936 on the testing set, and its AUCs for the three image categories were 0.905, 0.972, and 0.930, respectively. In the comparison with the three pathologists, the model outperformed the junior and intermediate pathologists and performed only slightly worse than the senior pathologist in terms of accuracy, specificity, sensitivity, AUC, and consistency. To our knowledge, this is the first study on the application of deep learning to NPC pathologic diagnosis. In clinical practice, the deep learning model can potentially assist pathologists by providing a second opinion on their NPC diagnoses.


Subject(s)
Deep Learning; Computer-Aided Diagnosis; Nasopharyngeal Carcinoma/diagnosis; Nasopharyngeal Neoplasms/diagnosis; Databases, Factual; Humans; Nasopharyngeal Carcinoma/pathology; Nasopharyngeal Neoplasms/pathology; Neural Networks, Computer; Reproducibility of Results; Sensitivity and Specificity
6.
BMC Med Imaging ; 21(1): 97, 2021 06 07.
Article in English | MEDLINE | ID: mdl-34098896

ABSTRACT

BACKGROUND: Conventional dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging rarely achieves good visualization of arteries and lymph nodes in the breast area. Therefore, a new imaging method is needed for the assessment of breast arteries and lymph nodes. METHODS: We performed a prospective study of 52 patients aged 25 to 64 between June 2019 and April 2020. An isotropic e-THRIVE sequence was scanned in the coronal direction after DCE-THRIVE. Reconstructed images obtained by DCE-THRIVE and coronal e-THRIVE were compared mainly in terms of the completeness of the lateral thoracic artery, thoracodorsal artery, and lymph nodes. We proposed a criterion for evaluating image quality in which images are assigned a score from 1 (low) to 5 (high). Two board-certified doctors evaluated the images independently, and their average score was taken as the final result. The chi-square test was used to assess the difference. RESULTS: The coronal e-THRIVE score was 4.60, higher than the DCE-THRIVE score of 3.48; the difference between the two sequences was significant (P = 1.2712e-8). According to the image scores, 44 patients (84.61%) had high-quality images of the bilateral breast. Only 3 patients' (5.77%) images were unsatisfactory on both sides. The improved method thus yields better images for most patients. CONCLUSIONS: The proposed coronal e-THRIVE scan produces higher-quality reconstructed images than the conventional method for visualizing the course of arteries and the distribution of lymph nodes in most patients, which should be helpful for clinical follow-up treatment.
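The chi-square comparison of the two sequences can be reproduced in a few lines with SciPy. The contingency counts below are illustrative only (the paper reports scores, not this table); the point is the shape of the test: rows are sequences, columns are quality categories.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: counts of images rated "high" vs "low"
# quality under the two sequences (values are illustrative only).
table = np.array([[44, 8],    # coronal e-THRIVE: high, low
                  [18, 34]])  # DCE-THRIVE:      high, low

chi2, p, dof, expected = chi2_contingency(table)
```

With counts this lopsided the test rejects the null hypothesis of no association between sequence and image quality at any conventional significance level.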


Subject(s)
Breast/diagnostic imaging; Imaging, Three-Dimensional/methods; Lymph Nodes/diagnostic imaging; Magnetic Resonance Angiography/methods; Thoracic Arteries/diagnostic imaging; Adult; Breast/anatomy & histology; Breast/blood supply; Breast Neoplasms/blood supply; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Chi-Square Distribution; Feasibility Studies; Female; Humans; Middle Aged; Pilot Projects; Prospective Studies
7.
J Xray Sci Technol ; 28(3): 541-553, 2020.
Article in English | MEDLINE | ID: mdl-32176675

ABSTRACT

PURPOSE: Segmentation of magnetic resonance images (MRI) of the left ventricle (LV) plays a key role in quantifying the volumetric functions of the heart, such as area, volume, and ejection fraction. Traditionally, LV segmentation is performed manually by experienced experts, which is both time-consuming and prone to subjective bias. This study aims to develop a novel capsule-based method to automatically segment the LV from cardiac MRI. METHOD: The technique uses Fourier analysis and the circular Hough transform (CHT) to indicate the approximate location of the LV and a capsule network to precisely segment it. The neurons of the capsule network output a vector and preserve much of the information about the input by replacing the max pooling layer with convolutional strides and dynamic routing. Finally, the segmentation result is postprocessed by threshold segmentation and morphological processing to increase the accuracy of LV segmentation. RESULTS: We fully exploit the capsule network to achieve the segmentation goal and combine LV detection and capsule concepts to complete LV segmentation. In the experiments, the tested method achieved LV Dice scores of 0.922±0.05 at end-diastole (ED) and 0.898±0.11 at end-systole (ES) on the ACDC 2017 data set. The experimental results confirm that the algorithm can effectively segment the LV from cardiac magnetic resonance images. To verify the performance of the proposed method, visual and quantitative comparisons were also performed, showing that the proposed method achieves improved segmentation accuracy compared with the traditional method. CONCLUSIONS: The evaluation metrics of medical image segmentation indicate that the proposed method, combined with postprocessing and feature detection, effectively improves segmentation accuracy for cardiac MRI.
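A defining ingredient of capsule networks, as mentioned above, is that each capsule outputs a vector rather than a scalar. The standard "squashing" nonlinearity from the capsule-network literature makes this concrete: it preserves a vector's direction while compressing its length into [0, 1), so the length can be read as an existence probability. A minimal NumPy version:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule-network nonlinearity: keeps a vector's direction while
    mapping its length into [0, 1): v = |s|^2 / (1 + |s|^2) * s / |s|."""
    sq_norm = (s ** 2).sum(axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

v = np.array([3.0, 4.0])        # input vector of length 5
out = squash(v)                 # same direction, length 25/26 < 1
```

Long vectors are squashed toward unit length and short ones toward zero, which is what dynamic routing relies on when weighting agreement between capsules.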
To the best of our knowledge, this study is the first to use a deep learning model based on capsule networks to systematically evaluate end-to-end LV segmentation.


Subject(s)
Deep Learning; Heart Ventricles/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Algorithms; Humans
8.
Ultrason Imaging ; 39(4): 240-259, 2017 07.
Article in English | MEDLINE | ID: mdl-28627330

ABSTRACT

The volume reconstruction method plays an important role in improving reconstructed volumetric image quality for freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of a programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill the remaining empty voxels. However, traditional pixel-nearest-neighbor hole-filling fails to reconstruct volumes with high image quality. In contrast, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound data. The experimental results show that improved image quality in terms of speckle reduction and detail preservation can be obtained with a kernel window size of [Formula: see text] and a kernel bandwidth of 1.0. The proposed GPU-based method can be over 200 times faster than its central processing unit (CPU) counterpart, and the volume of 50 million voxels in our experiment can be reconstructed within 10 seconds.
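The core of kernel-regression hole-filling is a Nadaraya-Watson estimate: an empty voxel's value is the Gaussian-kernel-weighted average of nearby scattered samples. The CPU sketch below shows the math (the paper's contribution is running exactly this kind of computation fast on the GPU; function names here are my own).

```python
import numpy as np

def kernel_regress(sample_pos, sample_val, query_pos, bandwidth=1.0):
    """Nadaraya-Watson estimate at query positions from scattered samples
    using a Gaussian kernel (CPU sketch of the kernel-regression step)."""
    # Squared distances: (n_query, n_samples)
    d2 = ((query_pos[:, None, :] - sample_pos[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return (w @ sample_val) / w.sum(axis=1)

# An empty voxel halfway between two samples receives their average.
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
val = np.array([10.0, 20.0])
est = kernel_regress(pos, val, np.array([[1.0, 0.0, 0.0]]))
```

In a practical implementation the weighted sum is restricted to a local kernel window around each voxel, which is what the reported window-size parameter controls.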


Subject(s)
Computer Graphics; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Ultrasonography/methods; Algorithms; Humans; Liver/diagnostic imaging; Phantoms, Imaging
9.
Sensors (Basel) ; 17(1)2017 Jan 10.
Article in English | MEDLINE | ID: mdl-28075375

ABSTRACT

In this paper, an approach to biometric verification based on human body communication (HBC) is presented for wearable devices. For this purpose, the transmission gain S21 of each volunteer's forearm is measured with a vector network analyzer (VNA). Specifically, to determine the frequency band for biometric verification, 1800 groups of data were acquired from 10 volunteers over the frequency range 0.3 MHz to 1500 MHz, each group comprising 1601 sample data points. In addition, to achieve rapid verification, 30 groups of data per volunteer were acquired at the chosen frequency, each group containing only 21 sample data points. Furthermore, a threshold-adaptive template matching (TATM) algorithm based on weighted Euclidean distance is proposed for rapid verification. The results indicate that the suitable frequency band for biometric verification is 650 MHz to 750 MHz. The false acceptance rate (FAR) and false rejection rate (FRR) based on TATM are approximately 5.79% and 6.74%, respectively. In contrast, the FAR and FRR were 4.17% and 37.5%, 3.37% and 33.33%, and 3.80% and 34.17% using K-nearest neighbor (KNN) classification, support vector machines (SVM), and the naive Bayesian method (NBM), respectively. In addition, the running time of TATM is 0.019 s, whereas the running times of KNN, SVM, and NBM are 0.310 s, 0.0385 s, and 0.168 s, respectively. TATM is therefore suggested to be appropriate for rapid verification in wearable devices.
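The decision rule behind a TATM-style verifier can be sketched directly from the abstract: compute a weighted Euclidean distance between the measured S21 vector and the enrolled template, and accept when it falls below a subject-specific threshold. The weights, template values, and threshold below are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

def weighted_euclidean(x, template, w):
    """Weighted Euclidean distance between a measured S21 vector and an
    enrolled template."""
    return np.sqrt((w * (x - template) ** 2).sum())

def verify(x, template, w, threshold):
    """Accept iff the distance is below the subject-specific threshold
    (the threshold-adaptive part of TATM, sketched)."""
    return weighted_euclidean(x, template, w) <= threshold

template = np.linspace(-40.0, -20.0, 21)   # 21 samples per group, as in the study
w = np.ones(21) / 21.0                     # placeholder weights
genuine = template + 0.1                   # small deviation from template
impostor = template + 5.0                  # large deviation from template
ok = verify(genuine, template, w, threshold=1.0)
bad = verify(impostor, template, w, threshold=1.0)
```

Because the comparison is a single distance computation over 21 values, the sub-millisecond running time reported for TATM is plausible relative to classifier-based alternatives.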

10.
Ultrason Imaging ; 38(4): 254-75, 2016 07.
Article in English | MEDLINE | ID: mdl-26316172

ABSTRACT

Ultrasound is one of the most important medical imaging modalities owing to its real-time and portable imaging advantages. However, contrast resolution and important details are degraded by speckle in ultrasound images. Many speckle filtering methods have been developed, but they suffer from several limitations and struggle to balance speckle reduction against edge preservation. In this paper, an adaptation of the nonlocal total variation (NLTV) filter is proposed for speckle reduction in ultrasound images. The speckle is modeled via a signal-dependent noise distribution for log-compressed ultrasound images. Instead of the Euclidean distance, the statistical Pearson distance is introduced for the similarity calculation between image patches via a Bayesian framework. The Split-Bregman fast algorithm is used to solve the adapted NLTV despeckling functional. Experimental results on synthetic and clinical ultrasound images, with comparisons against classical and recent algorithms, demonstrate improvements in both speckle noise reduction and tissue boundary preservation.
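The patch-similarity measure named above is easy to make concrete. A common way to turn Pearson correlation into a distance is d(p, q) = 1 - r(p, q), which is 0 for identical patterns and 2 for perfectly anti-correlated ones; the sketch below uses that form (the paper's Bayesian weighting around it is not reproduced here).

```python
import numpy as np

def pearson_distance(p, q, eps=1e-12):
    """1 - Pearson correlation between two flattened patches: 0 for
    identical patterns, 2 for perfectly anti-correlated ones."""
    p, q = p.ravel(), q.ravel()
    pc, qc = p - p.mean(), q - q.mean()
    r = (pc @ qc) / (np.sqrt((pc @ pc) * (qc @ qc)) + eps)
    return 1.0 - r

a = np.array([[1.0, 2.0], [3.0, 4.0]])
same = pearson_distance(a, a)       # identical patch
flipped = pearson_distance(a, -a)   # anti-correlated patch
```

Unlike the Euclidean distance, this measure is invariant to patch mean and scale, which suits the signal-dependent speckle model where bright regions carry proportionally stronger noise.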


Subject(s)
Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Ultrasonography/methods; Algorithms; Bayes Theorem; Computer Simulation; Humans; Signal-To-Noise Ratio
11.
Biomed Eng Online ; 13: 124, 2014 Aug 28.
Article in English | MEDLINE | ID: mdl-25168643

ABSTRACT

INTRODUCTION: Freehand three-dimensional (3D) ultrasound has the advantage of flexibility, allowing clinicians to manipulate the ultrasound probe over the examined body surface with fewer constraints than other scanning protocols. It is therefore widely used in clinical diagnosis and image-guided surgery. However, because freehand scanning is operator-dependent, the collected B-scan images are usually irregular and highly sparse. One of the key procedures in a freehand ultrasound imaging system is volume reconstruction, which plays an important role in improving the reconstructed image quality. SYSTEM AND METHODS: A novel freehand 3D ultrasound volume reconstruction method based on a kernel regression model is proposed in this paper. Our method consists of two steps: bin-filling and regression. First, the bin-filling step maps each pixel in the sampled B-scan images to its corresponding voxel in the reconstructed volume. Second, the regression step makes a nonparametric estimate of the whole volume from the previously sampled sparse data. The kernel penalizes distance away from the current approximation center within a local neighborhood. EXPERIMENTS AND RESULTS: To evaluate the quality and performance of the proposed kernel regression algorithm, a phantom and an in vivo human liver were scanned with our freehand 3D ultrasound imaging system. Root mean square error (RMSE) was used for quantitative evaluation. Both the qualitative and quantitative experimental results demonstrate that our method can reconstruct images with fewer artifacts and higher quality. CONCLUSION: The proposed kernel regression-based reconstruction method is capable of constructing volume data with improved accuracy from irregularly sampled sparse data for freehand 3D ultrasound imaging systems.
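The bin-filling step described above can be sketched in a few lines: each tracked B-scan pixel is assigned to the voxel its world coordinate falls in, and values landing in the same voxel are averaged. This is a minimal NumPy illustration under the assumption of an axis-aligned grid with uniform spacing; the real system works from probe-tracking transforms.

```python
import numpy as np

def bin_fill(points, values, shape, spacing=1.0):
    """Bin-filling step: map each B-scan pixel (world coordinates) to a
    voxel and average all pixel values that land in the same voxel."""
    idx = np.floor(np.asarray(points) / spacing).astype(int)
    vol = np.zeros(shape)
    cnt = np.zeros(shape)
    for (i, j, k), v in zip(idx, values):
        vol[i, j, k] += v
        cnt[i, j, k] += 1
    hit = cnt > 0
    vol[hit] /= cnt[hit]
    # 'hit' marks filled voxels; the rest are left for the regression step.
    return vol, hit

pts = [(0.2, 0.3, 0.1), (0.8, 0.9, 0.4), (2.1, 0.0, 0.0)]
vals = [10.0, 30.0, 7.0]
vol, hit = bin_fill(pts, vals, shape=(3, 3, 3))
```

The voxels left empty by this step are exactly the "holes" that the subsequent kernel regression estimates from their filled neighbors.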


Subject(s)
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Algorithms; Artifacts; Computer Simulation; Humans; Liver/diagnostic imaging; Liver/ultrastructure; Models, Theoretical; Phantoms, Imaging; Software; Surgery, Computer-Assisted; Ultrasonics; Ultrasonography
12.
Comput Med Imaging Graph ; 115: 102378, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38640621

ABSTRACT

Current methods for digital pathology typically employ small image patches to learn local representative features, sidestepping heavy computational and memory demands. However, global contextual features are not fully considered in whole-slide images (WSIs). Here, we designed a hybrid model, called TransGNN, that combines a Graph Neural Network (GNN) module and a Transformer module to represent global contextual features. The GNN module builds a WSI-graph over the foreground area of a WSI to explicitly capture structural features, while the Transformer module implicitly learns global context through self-attention. Hepatocellular carcinoma (HCC) prognostic biomarkers were used to illustrate the importance of global contextual information in cancer histopathological analysis. Our model was validated using 362 WSIs from 355 HCC patients in The Cancer Genome Atlas (TCGA). It showed impressive performance, with a concordance index (C-index) of 0.7308 (95% confidence interval (CI): 0.6283-0.8333) for overall survival prediction, the best among all compared models. Additionally, our model achieved areas under the curve of 0.7904, 0.8087, and 0.8004 for 1-year, 3-year, and 5-year survival predictions, respectively. We further verified the superior performance of our model in HCC risk stratification and its clinical value through Kaplan-Meier curves and univariate and multivariate Cox regression analyses. Our research demonstrates that TransGNN effectively utilizes the contextual information of WSIs and contributes to the clinical prognostic evaluation of HCC.
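The headline metric here, the concordance index, has a simple definition worth spelling out: among comparable patient pairs, it is the fraction in which the patient the model scores as higher-risk actually fails earlier. A minimal O(n²) sketch (ties and more careful censoring handling omitted for clarity):

```python
import numpy as np

def concordance_index(times, events, risk):
    """Fraction of comparable patient pairs in which the higher-risk
    patient fails earlier (ties ignored for simplicity)."""
    conc = comp = 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                 # censored: earlier failure time unknown
        for j in range(n):
            if times[i] < times[j]:  # pair is comparable
                comp += 1
                conc += risk[i] > risk[j]
    return conc / comp

times = np.array([2.0, 5.0, 8.0])       # survival times
events = np.array([1, 1, 1])            # 1 = event observed
risk = np.array([0.9, 0.5, 0.1])        # model risk scores
c = concordance_index(times, events, risk)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported 0.7308 in context.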


Subject(s)
Hepatocellular Carcinoma; Liver Neoplasms; Neural Networks, Computer; Liver Neoplasms/diagnostic imaging; Humans; Prognosis; Image Interpretation, Computer-Assisted/methods; Male; Female
13.
Comput Methods Programs Biomed ; 248: 108116, 2024 May.
Article in English | MEDLINE | ID: mdl-38518408

ABSTRACT

BACKGROUND AND OBJECTIVE: Mutations in isocitrate dehydrogenase 1 (IDH1) play a crucial role in the prognosis, diagnosis, and treatment of gliomas. However, current methods for determining mutation status, such as immunohistochemistry and gene sequencing, are difficult to deploy widely in routine clinical diagnosis. Recent studies have shown that deep learning methods based on glioma pathology images can predict IDH1 mutation status. Our research focuses on utilizing multi-scale information in pathological images to improve the accuracy of predicting IDH1 mutations, thereby providing an accurate and cost-effective prediction method for routine clinical diagnosis. METHODS: In this paper, we propose a multi-scale fusion gene identification network (MultiGeneNet). The network first uses two feature extractors to obtain feature maps at different image scales, and then employs a bilinear pooling layer based on the Hadamard product to fuse the multi-scale features. By fully exploiting the complementarity of features at different scales, we obtain a more comprehensive and richer multi-scale representation. RESULTS: On a hematoxylin and eosin-stained pathological section dataset of 296 patients, our method achieved an accuracy of 83.575% and an AUC of 0.886, significantly outperforming single-scale methods. CONCLUSIONS: Our method can be deployed in medical aid systems at very low cost, serving as a diagnostic or prognostic tool for glioma patients in medically underserved areas.
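Hadamard-product bilinear pooling, as used above, admits a compact sketch: each branch's features are linearly projected into a shared space and combined elementwise, a standard low-rank approximation of full bilinear pooling. Dimensions and projection matrices below are arbitrary stand-ins for the learned ones.

```python
import numpy as np

def hadamard_fusion(f_low, f_high, W_low, W_high):
    """Low-rank bilinear fusion: project each scale's features into a
    shared space and combine them with an elementwise (Hadamard) product."""
    return (W_low @ f_low) * (W_high @ f_high)

rng = np.random.default_rng(0)
f_low = rng.normal(size=64)        # features from the low-magnification branch
f_high = rng.normal(size=128)      # features from the high-magnification branch
W_low = rng.normal(size=(32, 64))  # projections to a shared 32-dim space
W_high = rng.normal(size=(32, 128))
fused = hadamard_fusion(f_low, f_high, W_low, W_high)
```

Compared with concatenation, every fused coordinate is a product of one term from each scale, so the representation captures multiplicative interactions between scales at linear cost.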


Subject(s)
Brain Neoplasms; Glioma; Humans; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/genetics; Magnetic Resonance Imaging/methods; Glioma/diagnostic imaging; Glioma/genetics; Mutation; Prognosis; Isocitrate Dehydrogenase/genetics
14.
Med Image Anal ; 97: 103275, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39032395

ABSTRACT

Recent unsupervised domain adaptation (UDA) methods in medical image segmentation commonly utilize Generative Adversarial Networks (GANs) for domain translation. However, the translated images often deviate from the ideal distribution because of the inherent instability of GANs, leading to visual inconsistency and incorrect style and consequently causing the segmentation model to settle into a fixed, incorrect pattern. To address this problem, we propose a novel UDA framework, Dual Domain Distribution Disruption with Semantics Preservation (DDSP). Departing from the idea of generating images that conform to the target domain distribution, as in GAN-based UDA methods, we make the model domain-agnostic and focus on anatomical structural information by leveraging semantic information as a constraint, guiding the model to adapt to images with disrupted distributions in both source and target domains. Furthermore, we introduce inter-channel similarity feature alignment based on domain-invariant structural prior information, which helps the shared pixel-wise classifier achieve robust performance on target domain features by aligning source and target domain features across channels. Our method significantly outperforms existing state-of-the-art UDA methods on three public datasets (heart, brain, and prostate). The code is available at https://github.com/MIXAILAB/DDSPSeg.
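One plausible reading of "inter-channel similarity feature alignment" is: compute the matrix of pairwise channel similarities for source and target feature maps, then penalize the gap between the two matrices. The NumPy sketch below illustrates that reading with cosine similarity; the paper's exact formulation may differ.

```python
import numpy as np

def channel_similarity(feat, eps=1e-8):
    """Cosine similarity between channels of a (C, H*W) feature map."""
    f = feat / (np.linalg.norm(feat, axis=1, keepdims=True) + eps)
    return f @ f.T

def alignment_loss(feat_src, feat_tgt):
    """Mean absolute difference between the two domains' inter-channel
    similarity matrices."""
    return np.abs(channel_similarity(feat_src)
                  - channel_similarity(feat_tgt)).mean()

rng = np.random.default_rng(0)
fs = rng.normal(size=(8, 100))      # 8 channels, flattened spatial dims
zero = alignment_loss(fs, fs)       # identical features align perfectly
diff = alignment_loss(fs, rng.normal(size=(8, 100)))
```

Because only relations *between* channels are compared, the penalty is insensitive to per-image appearance and acts on the domain-invariant structure the text emphasizes.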

15.
Front Med (Lausanne) ; 11: 1386161, 2024.
Article in English | MEDLINE | ID: mdl-38784232

ABSTRACT

Background: Fungal infections are associated with high morbidity and mortality in the intensive care unit (ICU), but their diagnosis is difficult. In this study, machine learning with Random Forest was applied to design and define a predictive model of ICU-acquired fungal infection (ICU-AF) in its early stage. Objectives: This study aimed to provide evidence for the early warning and management of fungal infections. Methods: We analyzed data from patients with culture-positive fungi during admission to seven ICUs of the First Affiliated Hospital of Chongqing Medical University from January 1, 2015, to December 31, 2019. Patients whose first fungus-positive culture occurred more than 48 h after ICU admission were included in the ICU-AF cohort. A predictive model of ICU-AF was obtained using the Least Absolute Shrinkage and Selection Operator (LASSO) and machine learning, and the relationships between the features in the model and patients' disease severity and mortality were analyzed. Finally, the relationships between the ICU-AF model, antifungal therapy, and empirical antifungal therapy were analyzed. Results: A total of 1,434 cases were included. We applied LASSO dimensionality reduction to all features and selected the six features with importance ≥0.05 in the optimal model: times of arterial catheterization, enteral nutrition, corticosteroids, broad-spectrum antibiotics, urinary catheterization, and invasive mechanical ventilation. The area under the curve of the model for predicting ICU-AF was 0.981 in the test set, with a sensitivity of 0.960 and specificity of 0.990. Times of arterial catheterization (p = 0.011, OR = 1.057, 95% CI = 1.053-1.104) and invasive mechanical ventilation (p = 0.007, OR = 1.056, 95% CI = 1.015-1.098) were independent risk factors for antifungal therapy in ICU-AF. Times of arterial catheterization (p = 0.004, OR = 1.098, 95% CI = 0.855-0.970) were an independent risk factor for empirical antifungal therapy.
Conclusion: The most important risk factors for ICU-AF are the six time-related clinical features (arterial catheterization, enteral nutrition, corticosteroids, broad-spectrum antibiotics, urinary catheterization, and invasive mechanical ventilation), which provide early warning of fungal infection. Furthermore, this model can help ICU physicians assess whether empirical antifungal therapy should be administered to ICU patients who are susceptible to fungal infections.
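The LASSO dimensionality reduction mentioned above works by L1-penalized shrinkage, whose core computational step is the soft-thresholding (proximal) operator: coefficients below the penalty are zeroed, which is what prunes hundreds of candidate clinical features down to six. A minimal sketch of that operator:

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty, the shrinkage step at the
    heart of LASSO: coefficients smaller than lam are zeroed out, and
    the rest are shrunk toward zero by lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

coefs = np.array([3.0, -0.4, 0.9, -2.5])   # illustrative raw coefficients
kept = soft_threshold(coefs, lam=1.0)      # only two features survive
```

Coordinate-descent LASSO solvers apply exactly this operator to each coefficient in turn; the surviving nonzero features are the ones carried forward into the Random Forest model.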

16.
Article in English | MEDLINE | ID: mdl-38964419

ABSTRACT

PURPOSE: To investigate the potential of virtual contrast-enhanced MRI (VCE-MRI) for gross tumor volume (GTV) delineation of nasopharyngeal carcinoma (NPC) using multi-institutional data. METHODS AND MATERIALS: This study retrospectively retrieved T1-weighted (T1w) and T2-weighted (T2w) MRI, gadolinium-based contrast-enhanced MRI (CE-MRI), and planning CT of 348 biopsy-proven NPC patients from three oncology centers. A multimodality-guided synergistic neural network (MMgSN-Net) was trained on 288 patients to leverage complementary features in T1w and T2w MRI for VCE-MRI synthesis and was independently evaluated on 60 patients. Three board-certified radiation oncologists and two medical physicists participated in clinical evaluations covering three aspects: image quality assessment of the synthetic VCE-MRI, VCE-MRI in assisting target volume delineation, and effectiveness of VCE-MRI-based contours in treatment planning. The image quality assessment included distinguishability between VCE-MRI and CE-MRI, clarity of the tumor-to-normal tissue interface, and veracity of contrast enhancement in tumor invasion risk areas. Primary tumor delineation and treatment planning were manually performed by the radiation oncologists and medical physicists, respectively. RESULTS: The mean accuracy in distinguishing VCE-MRI from CE-MRI was 31.67%; no significant difference was observed in the clarity of the tumor-to-normal tissue interface between VCE-MRI and CE-MRI; for the veracity of contrast enhancement in tumor invasion risk areas, an accuracy of 85.8% was obtained. These results suggest that the image quality of VCE-MRI is highly similar to that of real CE-MRI. The mean dosimetric difference for planning target volumes was less than 1 Gy. CONCLUSIONS: VCE-MRI is highly promising as a replacement for gadolinium-based CE-MRI in tumor delineation of NPC patients.

17.
Biomed Eng Online ; 12: 124, 2013 Dec 03.
Article in English | MEDLINE | ID: mdl-24295198

ABSTRACT

BACKGROUND: Segmentation of abdominal organs in magnetic resonance (MR) images is an important but challenging task in medical image processing. For abdominal tissues and organs such as the liver and kidneys, it is especially difficult because MR images are affected by intensity inhomogeneity, weak boundaries, noise, and the presence of similar objects close to each other. METHOD: In this study, a novel method for tissue and organ segmentation in abdominal MR imaging is proposed that combines kernel graph cuts (KGC) with shape priors. First, a region growing algorithm and morphological operations are used to obtain the initial contour. Second, shape priors are obtained by training shape templates collected from different human subjects with kernel principal component analysis (KPCA), after registering all templates to the initial contour. Finally, a new model is constructed by integrating the shape priors into the kernel graph cuts energy function. The entire process aims to obtain an accurate segmentation. RESULTS: The proposed method has been applied to abdominal organ MR images. The results showed that satisfactory segmentation, without boundary leakage or mis-segmentation, can be obtained even in the presence of similar tissues. Quantitative experiments compared the proposed segmentation with three other methods: DRLSE, the initial erosion contour, and KGC without shape priors. The comparison is based on two quantitative performance measures: the probabilistic Rand index (PRI) and the variation of information (VoI). The proposed method has the highest PRI values (0.9912, 0.9983, and 0.9980 for the liver, right kidney, and left kidney, respectively) and the lowest VoI values (1.6193, 0.3205, and 0.3217, respectively). CONCLUSION: The proposed method can overcome boundary leakage.
Moreover, it can segment the liver and kidneys in abdominal MR images without segmentation errors caused by the presence of similar tissues. The KPCA-based shape priors were integrated into a fully automatic graph cuts algorithm (KGC) to make the segmentation more robust and accurate. Furthermore, even if part of the target boundary is occluded, the proposed method can still obtain satisfactory segmentation results.


Subject(s)
Abdomen; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Algorithms; Principal Component Analysis; Reproducibility of Results
18.
Comput Biol Med ; 158: 106875, 2023 05.
Article in English | MEDLINE | ID: mdl-37058759

ABSTRACT

Glioma is a heterogeneous disease that requires classification into subtypes with similar clinical phenotypes, prognosis, or treatment responses. Metabolic-protein interaction (MPI) can provide meaningful insights into cancer heterogeneity. Moreover, the potential of lipids and lactate for identifying prognostic subtypes of glioma remains relatively unexplored. Therefore, we propose a method that constructs an MPI relationship matrix (MPIRM) based on a triple-layer network (Tri-MPN) combined with mRNA expression, and processes the MPIRM by deep learning to identify glioma prognostic subtypes. Subtypes with significant differences in prognosis were detected in glioma (p-value < 2e-16, 95% CI). These subtypes showed strong correlations in immune infiltration, mutational signatures, and pathway signatures. This study demonstrates the effectiveness of node interactions in MPI networks for understanding the heterogeneity of glioma prognosis.


Subject(s)
Deep Learning , Glioma , Humans , Gene Expression Profiling/methods , Glioma/genetics , Glioma/metabolism
19.
Med Phys ; 50(5): 2971-2984, 2023 May.
Article in English | MEDLINE | ID: mdl-36542423

ABSTRACT

PURPOSE: Reducing the radiation exposure experienced by patients in total-body computed tomography (CT) imaging has attracted extensive attention in the medical imaging community. A low radiation dose, however, may result in increased noise and artifacts that greatly affect the subsequent clinical diagnosis. To obtain high-quality total-body low-dose CT (LDCT) images, previous deep learning-based studies have developed various network architectures. However, most of these methods employ only normal-dose CT (NDCT) images as ground truths to guide the training of the denoising network. Under this simple restriction, the reconstructed images tend to lose fine image detail and to exhibit oversmoothed textures. This study explores how to better utilize the information contained in the feature spaces of NDCT images to guide the LDCT image reconstruction process and achieve high-quality results. METHODS: We propose a novel intratask knowledge transfer (KT) method that leverages the knowledge distilled from NDCT images as an auxiliary component of the LDCT image reconstruction process. The proposed architecture, named the teacher-student consistency network (TSC-Net), consists of teacher and student networks with identical architectures. Through the designed KT loss, the student network is encouraged to emulate the teacher network in the representation space and thereby gains robust prior content. In addition, to further exploit the information contained in CT scans, a contrastive regularization mechanism (CRM) built upon contrastive learning is introduced. The CRM minimizes the L2 distance from the predicted CT images to the NDCT samples and maximizes their distance to the LDCT samples in the latent space. Moreover, based on attention and deformable convolution, we design a dynamic enhancement module (DEM) to improve the network's capacity to transform input information flows. 
RESULTS: Ablation studies confirm the effectiveness of the proposed KT loss, CRM, and DEM. Extensive experimental results demonstrate that the TSC-Net outperforms state-of-the-art methods in both quantitative and qualitative evaluations. The excellent clinical-reading results further confirm that the proposed method can reconstruct high-quality CT images for clinical applications. CONCLUSIONS: Based on the experimental results and clinical readings, the TSC-Net performs better than existing approaches. In future work, we may explore reconstructing LDCT images by fusing the positron emission tomography (PET) and CT modalities to further improve the visual quality of the reconstructed images.
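The CRM idea, pulling the prediction toward NDCT features while pushing it away from LDCT features, can be sketched as a distance-ratio loss in NumPy. The exact loss form used by TSC-Net is not given in this abstract, so this ratio formulation (and the random latent vectors) are assumptions for illustration.

```python
import numpy as np

def crm_loss(z_pred, z_ndct, z_ldct, eps=1e-8):
    """Contrastive regularization: small when the prediction is close to the
    NDCT (positive) latent and far from the LDCT (negative) latent."""
    d_pos = np.linalg.norm(z_pred - z_ndct)  # L2 distance to positive (NDCT)
    d_neg = np.linalg.norm(z_pred - z_ldct)  # L2 distance to negative (LDCT)
    return d_pos / (d_neg + eps)

rng = np.random.default_rng(1)
z_ndct = rng.normal(size=128)                       # clean latent features
z_ldct = z_ndct + rng.normal(scale=2.0, size=128)   # noisy (low-dose) latent
z_good = z_ndct + rng.normal(scale=0.1, size=128)   # prediction near NDCT
z_bad = z_ldct + rng.normal(scale=0.1, size=128)    # prediction near LDCT
print(crm_loss(z_good, z_ndct, z_ldct) < crm_loss(z_bad, z_ndct, z_ldct))  # True
```

Minimizing this loss drives the predicted latent toward the NDCT anchor and away from the LDCT one, matching the min/max behavior the abstract describes.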


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Positron-Emission Tomography , Artifacts , Signal-To-Noise Ratio
20.
Front Genet ; 14: 1260531, 2023.
Article in English | MEDLINE | ID: mdl-37811144

ABSTRACT

With the increasing throughput of modern sequencing instruments, the cost of storing and transmitting sequencing data has also increased dramatically. Although many tools have been developed to compress sequencing data, there is still a need for a compressor with a higher compression ratio. In this paper, we present a two-step framework for compressing sequencing data. The first step repacks the original data into a binary stream, and the second step compresses the stream with an LZMA encoder. We developed a new strategy to encode the original file into a highly LZMA-compressible stream, and an FPGA-accelerated implementation of LZMA was built to speed up the second step. As a demonstration, we present repaq, a lossless reference-free compressor of FASTQ format files. We also introduce a multifile redundancy elimination method, which is very useful for compressing paired-end sequencing data. In our tests, the compression ratio of repaq is much higher than that of other FASTQ compressors; for some deep sequencing data, it exceeds 25, almost four times that of Gzip. The framework presented in this paper can also be applied to develop new tools for compressing other sequencing data. The open-source code of repaq is available at: https://github.com/OpenGene/repaq.
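The two-step idea (repack into a binary stream, then LZMA-encode) can be illustrated with Python's standard `lzma` module. The 2-bit base packing below is a simplified stand-in for repaq's actual encoding strategy, and the toy read stream is invented for the example.

```python
import lzma

seq = b"ACGTACGGTTCA" * 1000  # toy base stream (length is a multiple of 4)

# Step 1: repack into a binary stream, 2 bits per base instead of 8
enc = {65: 0, 67: 1, 71: 2, 84: 3}        # byte values of A, C, G, T
dec = {v: k for k, v in enc.items()}
packed = bytearray()
for i in range(0, len(seq), 4):
    b0, b1, b2, b3 = (enc[c] for c in seq[i:i + 4])
    packed.append(b0 << 6 | b1 << 4 | b2 << 2 | b3)  # four bases per byte

# Step 2: compress the repacked stream with an LZMA encoder
stream = lzma.compress(bytes(packed), preset=9)

# Lossless round trip: decompress, then unpack the 2-bit codes back to bases
unpacked = bytearray()
for byte in lzma.decompress(stream):
    unpacked += bytes(dec[(byte >> s) & 3] for s in (6, 4, 2, 0))
print(bytes(unpacked) == seq, len(stream) < len(seq))  # True True
```

Packing first shrinks the input fourfold before entropy coding even begins, which is why a well-chosen repacking stage can lift the final compression ratio well above what a general-purpose compressor achieves on the raw text.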
