Results 1 - 20 of 30
1.
IEEE/ACM Trans Comput Biol Bioinform ; 20(4): 2387-2397, 2023.
Article in English | MEDLINE | ID: mdl-35025748

ABSTRACT

With the development of sensors, multimodal data are accumulating in ever greater volumes, especially in the biomedical and bioinformatics fields, so multimodal data analysis has become important and urgent. In this study, we combine multi-kernel learning and transfer learning and propose a feature-level multi-modality fusion model for settings with insufficient training samples. Specifically, we first extend kernel ridge regression to its multi-kernel version under an lp-norm constraint to explore the complementary patterns contained in multimodal data. We then use marginal probability distribution adaptation to minimize the distribution differences between the source domain and the target domain, addressing the problem of insufficient training samples. Based on epilepsy EEG data provided by the University of Bonn, we construct 12 multi-modality & transfer scenarios to evaluate our model. Experimental results show that, compared with the baselines, our model performs better in most scenarios.
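The multi-kernel ridge regression at the heart of this abstract can be sketched in NumPy. This is a hedged toy, not the authors' implementation: the kernel weights are fixed on the simplex instead of being learned under the lp-norm constraint, and the data are synthetic.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear_kernel(X, Y):
    return X @ Y.T

def multikernel_ridge_fit(X, y, weights, lam=1e-2):
    # Combined kernel: weighted sum of base kernels (fixed weights here,
    # a simplification of the paper's lp-norm-constrained learning)
    K = weights[0] * rbf_kernel(X, X) + weights[1] * linear_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def multikernel_ridge_predict(X_train, X_test, alpha, weights):
    K = weights[0] * rbf_kernel(X_test, X_train) + weights[1] * linear_kernel(X_test, X_train)
    return K @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
w = np.array([0.5, 0.5])                       # uniform kernel weights
alpha = multikernel_ridge_fit(X, y, w)
pred = multikernel_ridge_predict(X, X, alpha, w)
```

In the full method, `w` itself would be optimized under the lp-norm constraint so that informative modalities receive larger weights.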

2.
J Appl Clin Med Phys ; 23(9): e13731, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35920116

ABSTRACT

Accurate coregistration of computed tomography (CT) and magnetic resonance (MR) imaging can provide clinically relevant and complementary information and can facilitate multiple clinical tasks, including surgical and radiation treatment planning and generating a virtual positron emission tomography (PET)/MR for sites that do not have a PET/MR system available. Despite the long-standing interest in multimodality coregistration, a robust, routine clinical solution remains an unmet need. Part of the challenge may be the use of mutual information (MI) maximization and local phase difference (LPD) as similarity metrics, which have limited robustness and efficiency and are difficult to optimize. Accordingly, we propose registering MR to CT by mapping the MR to a synthetic CT intermediate (sCT) and using it in an sCT-CT deformable image registration (DIR) that minimizes the sum of squared differences. The resultant deformation field of the sCT-CT DIR is applied to the MRI to register it with the CT. Twenty-five sets of abdominopelvic imaging data are used for evaluation. The proposed method is compared to standard MI- and LPD-based methods and to the multimodality DIR provided by a state-of-the-art, commercially available, FDA-cleared clinical software package. The results are compared using global similarity metrics, the Modified Hausdorff Distance, and the Dice Similarity Index on six structures. Further, four physicians visually assessed and scored the registered images for registration accuracy. As evident from both the quantitative and qualitative evaluation, the proposed method achieved registration accuracy superior to the LPD- and MI-based methods and can refine the results of the commercial package's DIR when using those results as a starting point. We therefore conclude that the proposed registration method is more robust, accurate, and efficient than the MI- and LPD-based methods.
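The core idea, recover a transform between the sCT and the CT with a simple monomodal metric, then apply that transform to the MR, can be illustrated with a toy integer-shift search minimizing the sum of squared differences. This stands in for the paper's deformable registration; the images and the misalignment are synthetic.

```python
import numpy as np

def ssd(a, b):
    # Sum-of-squared-differences similarity metric
    return float(((a - b) ** 2).sum())

def best_shift(fixed, moving, max_shift=3):
    # Exhaustive integer-shift search minimizing SSD: a toy stand-in
    # for the sCT-CT deformable registration described above
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, 0), dx, 1)
            c = ssd(fixed, shifted)
            if c < best_cost:
                best_cost, best = c, (dy, dx)
    return best

rng = np.random.default_rng(1)
ct = rng.normal(size=(32, 32))            # fixed CT
sct = np.roll(np.roll(ct, 2, 0), -1, 1)   # synthetic CT from the MR, misaligned
mr = sct + 0.01                           # the MR shares the sCT's geometry
dy, dx = best_shift(ct, sct)              # register sCT to CT...
mr_registered = np.roll(np.roll(mr, dy, 0), dx, 1)  # ...then apply the transform to the MR
```

Because the MR and sCT share the same geometry, the sCT-CT transform registers the MR to the CT without ever evaluating a multimodal metric.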


Subject(s)
Magnetic Resonance Imaging , Tomography, X-Ray Computed , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Multimodal Imaging , Positron-Emission Tomography , Tomography, X-Ray Computed/methods
3.
Front Public Health ; 10: 898254, 2022.
Article in English | MEDLINE | ID: mdl-35677770

ABSTRACT

In this review, current studies on hospital readmission due to COVID-19 infection were discussed, compared, and further evaluated in order to understand current trends and progress in the mitigation of hospital readmissions due to COVID-19. The Boolean expression ("COVID-19" OR "covid19" OR "covid" OR "coronavirus" OR "Sars-CoV-2") AND ("readmission" OR "re-admission" OR "rehospitalisation" OR "rehospitalization") was used in five databases, namely Web of Science, Medline, Science Direct, Google Scholar, and Scopus. From the search, a total of 253 articles were screened down to 26 articles. Overall, most of the research focuses on readmission rates rather than mortality rates. For readmission rate, the lowest is 4.2% by Ramos-Martínez et al. from Spain, and the highest is 19.9% by Donnelly et al. from the United States. Most of the studies (n = 13) use an inferential statistical approach, while only one uses a machine learning approach. The data sizes range from 79 to 126,137. However, there is no specific guide for setting the most suitable data size for a study, and the results cannot be compared in terms of accuracy, as all of the studies are regional and do not involve multi-regional data. Logistic regression is prevalent in research on risk factors for readmission after a COVID-19 admission, although each study reports different outcomes. From the word cloud, age is the most dominant risk factor for readmission, followed by diabetes, long length of stay, COPD, CKD, liver disease, metastatic disease, and CAD. A few future research directions have been proposed, including the utilization of machine learning in statistical analysis, investigation of dominant risk factors, experimental design of interventions to curb dominant risk factors, and increasing the scale of data collection from single-center to multi-center.
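Logistic regression on readmission risk factors, the workhorse model in the reviewed studies, can be sketched with plain gradient descent. The data below are synthetic and the coefficients are illustrative, not figures from any reviewed study.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    # Plain gradient-descent logistic regression: the model most
    # readmission risk-factor studies in the review rely on
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted readmission probability
        g = p - y                                 # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
n = 400
age = rng.normal(0, 1, n)          # standardized age (dominant factor in the review)
diabetes = rng.integers(0, 2, n)   # binary comorbidity flag
X = np.column_stack([age, diabetes])
logit = 1.2 * age + 0.8 * diabetes - 0.5          # assumed true effects
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)
w, b = fit_logistic(X, y)
```

Positive fitted coefficients indicate that higher age and a diabetes diagnosis raise the modeled readmission odds, mirroring the review's word-cloud findings.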


Subject(s)
COVID-19 , Patient Readmission , COVID-19/epidemiology , Humans , Logistic Models , Machine Learning , Risk Factors , United States
4.
Math Biosci Eng ; 19(6): 5925-5956, 2022 04 11.
Article in English | MEDLINE | ID: mdl-35603385

ABSTRACT

The closed-loop supply chain (CLSC) plays an important role in sustainable development and can help to increase the economic benefits of enterprises. Optimization of the CLSC network is a complicated problem, since it often has a large problem scale and involves multiple constraints. This paper proposes a general CLSC model that maximizes the profits of enterprises by determining the transportation routes and delivery volumes. Due to the complexity of the multi-constrained, large-scale model, a genetic algorithm with two-step rank-based encoding (GA-TRE) is developed to solve the problem. First, a two-step rank-based encoding is designed to handle the constraints and increase the algorithm's efficiency, and the encoding scheme is also used to improve the genetic operators, including crossover and mutation. The first encoding step plans the routes and predicts their feasibility according to the relevant constraints, and the second step sets the delivery volumes on the feasible routes using a rank-based method to obtain greedy solutions. In addition, a new mutation operator and an adaptive population disturbance mechanism are designed to increase the diversity of the population. To validate the efficiency of the proposed algorithm, six heuristic algorithms are compared with GA-TRE on instances of three problem scales. The results show that GA-TRE obtains better solutions than the competitors, especially on large-scale instances.
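The two-step decode, first select routes, then fill delivery volumes greedily in rank order, can be sketched with a miniature GA. Everything here (profits, costs, the per-route volume cap, the mutation scheme) is a hypothetical toy, not the paper's GA-TRE.

```python
import numpy as np

rng = np.random.default_rng(0)
profit = rng.uniform(1, 5, 8)   # per-unit profit of 8 candidate routes (synthetic)
cost = rng.uniform(1, 3, 8)     # per-unit capacity use of each route (synthetic)
CAP = 10.0                      # total transport capacity

def decode(genome):
    # Step 1: the genome selects candidate routes.
    # Step 2: fill delivery volume greedily in rank order of profit/cost,
    # loosely mirroring the paper's rank-based greedy volume assignment.
    chosen = np.flatnonzero(genome)
    if chosen.size == 0:
        return 0.0
    order = chosen[np.argsort(-profit[chosen] / cost[chosen])]
    cap, total = CAP, 0.0
    for i in order:
        vol = min(2.0, cap / cost[i])   # assumed per-route volume cap of 2 units
        total += vol * profit[i]
        cap -= vol * cost[i]
        if cap <= 0:
            break
    return total

def ga(pop_size=30, gens=40):
    pop = rng.integers(0, 2, (pop_size, 8))
    for _ in range(gens):
        fit = np.array([decode(g) for g in pop])
        parents = pop[np.argsort(-fit)[:pop_size // 2]]   # truncation selection
        children = parents.copy()
        mask = rng.random(children.shape) < 0.1           # bit-flip mutation
        children = np.where(mask, 1 - children, children)
        pop = np.vstack([parents, children])
    fit = np.array([decode(g) for g in pop])
    return pop[np.argmax(fit)], float(fit.max())

best, best_profit = ga()
```

The rank-based decode guarantees every evaluated genome maps to a feasible plan, so the GA never wastes fitness evaluations on constraint violations.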


Subject(s)
Algorithms , Transportation
5.
Comput Intell Neurosci ; 2022: 9167707, 2022.
Article in English | MEDLINE | ID: mdl-35498184

ABSTRACT

In late December 2019, a novel coronavirus was discovered in Wuhan, China. In March 2020, the WHO announced that this epidemic had become a global pandemic. The novel coronavirus may cause only mild illness in most people, but some experience a severe illness that results in hospitalization or even death. COVID-19 classification remains challenging due to its ambiguity and similarity with other known respiratory diseases such as SARS, MERS, and other viral pneumonias. The typical symptoms of COVID-19 are fever, cough, chills, shortness of breath, loss of smell and taste, headache, sore throat, chest pain, confusion, and diarrhoea. This research paper applies transfer learning with a deterministic algorithm in all binary classification models and evaluates the performance of various CNN architectures. A dataset of 746 CT images of COVID-19 and non-COVID-19 cases was divided for training, validation, and testing. Various augmentation techniques were applied to enlarge the training and validation sets, but not the testing images. Pretrained CNNs were then fine-tuned for binary classification. ResNeXt101 and ResNet152 have the best F1 scores of 0.978 and 0.938, whereas GoogleNet has an F1 score of 0.762. ResNeXt101 and ResNet152 have accuracies of 97.81% and 93.80%. ResNeXt101, DenseNet201, and ResNet152 have 95.71%, 93.81%, and 90% sensitivity, whereas ResNeXt101, ResNet101, and ResNet152 have 100%, 99.58%, and 98.33% specificity, respectively.
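The metrics used to rank the backbones follow directly from the binary confusion matrix. The counts below are hypothetical, chosen only to illustrate the arithmetic, not the paper's actual test split.

```python
def metrics(tp, fp, fn, tn):
    # Standard binary-classification metrics from confusion-matrix counts
    acc = (tp + tn) / (tp + fp + fn + tn)
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    prec = tp / (tp + fp)            # precision
    f1 = 2 * prec * sens / (prec + sens)
    return acc, sens, spec, f1

# Hypothetical confusion counts: 70 COVID-19 and 240 non-COVID-19 test images
acc, sens, spec, f1 = metrics(tp=67, fp=0, fn=3, tn=240)
```

Note how zero false positives drive specificity to 100% while three missed cases pull sensitivity down to about 95.7%, the same pattern the abstract reports for ResNeXt101.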


Subject(s)
COVID-19 , COVID-19/diagnostic imaging , Humans , Neural Networks, Computer , Pandemics , SARS-CoV-2 , Tomography, X-Ray Computed
6.
Comput Intell Neurosci ; 2022: 4926124, 2022.
Article in English | MEDLINE | ID: mdl-35341171

ABSTRACT

Deep learning-based image compression methods have achieved significant success recently; their two key components are the entropy model for the latent representations and the encoder-decoder network. Both inaccurate entropy estimation and information redundancy in the latent representations reduce compression efficiency. To address these issues, this study proposes an image compression method based on a hybrid-domain attention mechanism and postprocessing enhancement. We embed hybrid-domain attention modules as nonlinear transforms in both the main encoder-decoder network and the hyperprior network, aiming to construct more compact latent features and hyperpriors, and then model the latent features with parametric Gaussian scale mixture models to obtain more precise entropy estimates. In addition, we address the errors introduced by quantization by adding an inverse quantization module. On the decoding side, we also provide a postprocessing enhancement module to further increase compression performance. The experimental results show that the peak signal-to-noise ratio (PSNR) and multiscale structural similarity (MS-SSIM) of the proposed method are higher than those of traditional compression methods and advanced neural network-based methods.
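Why a precise entropy model matters can be shown numerically: the bit cost of coding quantized latents is the negative log-probability the model assigns to them, so a mismatched model wastes bits. This sketch uses a single Gaussian rather than the paper's Gaussian scale mixture, and all latents are synthetic.

```python
import numpy as np
from math import erf, sqrt

def rate_bits(latent, mu, sigma):
    # Bits needed to code hard-quantized latents under a Gaussian entropy
    # model: p(q) = CDF(q + 0.5) - CDF(q - 0.5), rate = -sum log2 p(q)
    q = np.round(latent)
    def cdf(x):
        return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))
    p = np.array([cdf(v + 0.5) - cdf(v - 0.5) for v in q])
    return float(-np.log2(p).sum())

rng = np.random.default_rng(0)
z = rng.normal(0.0, 2.0, 1000)                 # synthetic latents, true sigma = 2
bits_good = rate_bits(z, mu=0.0, sigma=2.0)    # well-matched entropy model
bits_bad = rate_bits(z, mu=0.0, sigma=8.0)     # mismatched model wastes bits
```

The gap between `bits_bad` and `bits_good` is exactly the overhead the paper's more expressive mixture model tries to eliminate.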


Subject(s)
Data Compression , Electric Power Supplies , Entropy , Neural Networks, Computer , Normal Distribution
7.
J Healthc Eng ; 2022: 4138666, 2022.
Article in English | MEDLINE | ID: mdl-35222885

ABSTRACT

Knee osteoarthritis (OA) is a debilitating joint disorder characterized by cartilage loss that can be captured by imaging modalities and translated into imaging features. Observing imaging features is a well-established objective assessment for knee OA. However, the variety of imaging features is rarely discussed. This study reviews knee OA imaging features across different imaging modalities for traditional OA diagnosis and surveys recent image-based machine learning approaches for knee OA diagnosis and prognosis. Although most studies recognize X-ray as the standard imaging option for knee OA diagnosis, its imaging features are limited to bony changes and are less sensitive to short-term OA changes. Researchers have recommended the use of MRI to study hidden OA-related radiomic features in soft tissues and bony structures. Furthermore, ultrasound imaging features should be explored to make point-of-care diagnosis more feasible. Traditional knee OA diagnosis mainly relies on manual interpretation of medical images based on the Kellgren-Lawrence (KL) grading scheme, but this approach is prone to human-resource and time constraints and is less effective for OA prevention. Recent studies have demonstrated the capability of machine learning approaches to automate knee OA diagnosis and prognosis through three major tasks: knee joint localization (detection and segmentation), classification of OA severity, and prediction of disease progression. AI-aided diagnostic models have significantly improved the quality of knee OA diagnosis in terms of time taken, reproducibility, and accuracy. Prognostic ability has been demonstrated by several prediction models in terms of estimating possible OA onset, OA deterioration, progressive pain, progressive structural change, progressive structural change with pain, and time to total knee replacement (TKR). Despite the research gaps, machine learning techniques still show great potential for demanding tasks such as early knee OA detection and estimation of future disease events, as well as for fundamental tasks such as discovering new imaging features and establishing novel OA status measures. Continuous enhancement of machine learning models may favour the discovery of new OA treatments in the future.


Subject(s)
Osteoarthritis, Knee , Humans , Knee Joint/diagnostic imaging , Machine Learning , Magnetic Resonance Imaging , Osteoarthritis, Knee/diagnostic imaging , Pain , Reproducibility of Results
8.
Math Biosci Eng ; 19(1): 271-286, 2022 01.
Article in English | MEDLINE | ID: mdl-34902991

ABSTRACT

A supply chain network is important for an enterprise seeking to improve its operations and management but has become more complicated to optimize in practice. Considering multiple objectives and constraints, this paper proposes a constrained large-scale multi-objective supply chain network (CLMSCN) optimization model. The model minimizes the total operating cost (including the costs of production, transportation, and inventory) and maximizes customer satisfaction under capacity constraints. In addition, a coevolutionary algorithm based on an auxiliary population (CAAP) is proposed, which uses two populations to solve the CLMSCN problem. One population solves the original complex problem, and the other solves the problem without any constraints. If infeasible solutions are generated in the first population, a linear repair operator is used to improve their feasibility. To validate the effectiveness of the CAAP algorithm, experiments were conducted on randomly generated instances of three different problem scales. The results show that the CAAP algorithm outperforms the compared algorithms, especially on the large-scale instances.
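A linear repair operator of the kind mentioned here can be sketched in a few lines: when a plan exceeds capacity, scale it back onto the feasible boundary. This is a hypothetical, single-constraint version for illustration, not the paper's operator.

```python
import numpy as np

def repair(x, capacity):
    # Linear repair: if the plan's total load exceeds capacity, scale the
    # whole vector back onto the feasible boundary, preserving proportions
    load = x.sum()
    if load > capacity:
        x = x * (capacity / load)
    return x

x = np.array([4.0, 6.0, 5.0])        # infeasible: total load 15 > capacity 10
x_rep = repair(x, capacity=10.0)     # scaled to exactly meet capacity
```

Scaling, rather than truncating, keeps the relative allocation of the original solution intact, so the repaired individual still carries the genetic information that made it promising.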


Subject(s)
Algorithms , Transportation
9.
J Healthc Eng ; 2021: 9208138, 2021.
Article in English | MEDLINE | ID: mdl-34765104

ABSTRACT

Quality-of-care data have gained transparency, captured through various measurements and reporting. The readmission measure in particular is related to unfavorable patient outcomes that directly bend the healthcare cost curve. Under the Hospital Readmission Reduction Program, payments to hospitals with excessive 30-day rehospitalization rates were reduced. These penalties have intensified efforts by hospital stakeholders to implement strategies to reduce readmission rates. One key strategy is the deployment of predictive analytics stratified by patient population. Recent research on readmission models has focused on making predictions more accurate. While cost-saving improvements from artificial intelligence-based health solutions are expected, the broad economic impact of such digital tools remains unknown. Meanwhile, reducing the readmission rate is associated with increased operating expenses due to targeted interventions, and this increase can surpass the native readmission cost. In this paper, we propose a quantized evaluation metric to provide a methodological means of assessing whether a predictive model represents a cost-effective way of delivering healthcare. With the proposed metric, we evaluate the impact machine learning has had on transitional care and readmission. The final model was estimated to produce net healthcare savings of over $1 million, given a 50% rate of successfully preventing a readmission.
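The cost trade-off the abstract describes, savings from avoided readmissions versus the cost of targeted interventions, can be captured in a one-line formula. The function and every number below are hypothetical illustrations in the spirit of the paper's metric, not its actual parameters.

```python
def net_savings(n_flagged, prevent_rate, readmission_cost, intervention_cost):
    # Net savings = avoided readmission costs minus the cost of the
    # targeted interventions applied to every flagged patient
    return n_flagged * prevent_rate * readmission_cost - n_flagged * intervention_cost

# Illustrative numbers only: 2000 flagged patients, 50% prevention success,
# $1500 per avoided readmission, $250 per intervention
savings = net_savings(n_flagged=2000, prevent_rate=0.5,
                      readmission_cost=1500.0, intervention_cost=250.0)
```

The sign of the result is the point of the metric: a highly accurate model can still lose money if the intervention cost per flagged patient outweighs the expected avoided-readmission cost.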


Subject(s)
Hospitals , Patient Readmission , Cost-Benefit Analysis , Health Care Costs , Humans
10.
Comput Math Methods Med ; 2021: 9976440, 2021.
Article in English | MEDLINE | ID: mdl-34567237

ABSTRACT

Texture analysis (TA) techniques applied to T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps of rectal cancer can both achieve good diagnostic performance. This study compared TA from T2WI and ADC maps between different pathological T and N stages to determine which offers better diagnostic performance. 146 patients were enrolled in this study. Tumor TA was performed on every patient's T2WI and ADC maps, and skewness, kurtosis, uniformity, entropy, energy, inertia, and correlation were calculated. Our results demonstrate that the significantly different parameters derived from T2WI had better diagnostic performance than those from ADC maps in differentiating pT3b-4 and pN1-2 stage tumors. In particular, the energy derived from T2WI was the optimal parameter for diagnostic efficiency. High-resolution T2WI plays a key role in the local staging of rectal cancer; thus, TA derived from T2WI may be a more useful tool to aid radiologists and surgeons in selecting treatment.
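Several of the listed parameters are first-order statistics of the grey-level distribution, so they can be computed directly from an image histogram. This sketch covers skewness, kurtosis, energy (uniformity), and entropy on a synthetic image; inertia and correlation are second-order (GLCM) features and are omitted here.

```python
import numpy as np

def first_order_texture(img, bins=32):
    # First-order texture features from the grey-level distribution:
    # skewness, kurtosis, energy (a.k.a. uniformity), and entropy
    x = img.ravel().astype(float)
    mu, sd = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / sd ** 3
    kurt = ((x - mu) ** 4).mean() / sd ** 4
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p_nz = p[p > 0]
    energy = float((p ** 2).sum())
    entropy = float(-(p_nz * np.log2(p_nz)).sum())
    return skew, kurt, energy, entropy

rng = np.random.default_rng(0)
uniform_img = rng.random((64, 64))   # synthetic stand-in for a tumor ROI
skew, kurt, energy, entropy = first_order_texture(uniform_img)
```

For a near-uniform grey-level distribution, entropy approaches log2 of the bin count while energy approaches its minimum, which is why energy is a useful measure of textural homogeneity.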


Subject(s)
Diffusion Magnetic Resonance Imaging/statistics & numerical data , Image Interpretation, Computer-Assisted/statistics & numerical data , Magnetic Resonance Imaging/statistics & numerical data , Rectal Neoplasms/diagnostic imaging , Adult , Aged , Aged, 80 and over , China , Computational Biology , Female , Humans , Male , Middle Aged , ROC Curve , Rectal Neoplasms/pathology , Retrospective Studies
11.
EURASIP J Adv Signal Process ; 2021(1): 50, 2021.
Article in English | MEDLINE | ID: mdl-34335736

ABSTRACT

Coronavirus disease 2019, or COVID-19, is a rapidly spreading viral infection that has affected millions all over the world. With its rapid spread and increasing numbers, rapidly diagnosing the condition and containing its spread is becoming overwhelming for healthcare workers. Hence, it has become necessary to automate the diagnostic procedure. This will improve work efficiency and keep healthcare workers safe from exposure to the virus. Medical image analysis is one of the rising research areas that can tackle this issue with high accuracy. This paper conducts a comparative study of recent deep learning models (VGG16, VGG19, DenseNet121, Inception-ResNet-V2, InceptionV3, ResNet50, and Xception) for detecting and distinguishing coronavirus pneumonia from other pneumonia cases. The study uses 7165 chest X-ray images of COVID-19 (1536) and pneumonia (5629) patients. Confusion matrices and performance metrics were used to analyze each model. The results show that DenseNet121 (99.48% accuracy) performed better than the other models in this study.

12.
IEEE Access ; 9: 17208-17221, 2021.
Article in English | MEDLINE | ID: mdl-33747682

ABSTRACT

Multi-modality imaging constitutes a foundation of precision medicine, especially in oncology, where reliable and rapid imaging techniques are needed to ensure adequate diagnosis and treatment. In cervical cancer, precision oncology requires the acquisition of 18F-labeled 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET), magnetic resonance (MR), and computed tomography (CT) images. Thereafter, the images are co-registered to derive the electron density attributes required for FDG-PET attenuation correction and radiation therapy planning. Nevertheless, this traditional approach is subject to MR-CT registration defects, expands treatment expenses, and increases the patient's radiation exposure. To overcome these disadvantages, we propose a new framework for cross-modality image synthesis, which we apply to MR-CT image translation for cervical cancer diagnosis and treatment. The framework is based on a conditional generative adversarial network (cGAN) and illustrates a novel tactic that addresses, simply but efficiently, the trade-off between vanishing gradients and feature extraction in deep learning. Its contributions are summarized as follows: 1) the approach, termed sU-cGAN, uses, for the first time, a shallow U-Net (sU-Net) with an encoder/decoder depth of 2 as the generator; 2) sU-cGAN's input is the same MR sequence used for radiological diagnosis, i.e., T2-weighted, Turbo Spin Echo Single Shot (TSE-SSH) MR images; 3) despite limited training data and a single-input-channel approach, sU-cGAN outperforms other state-of-the-art deep learning methods and enables accurate synthetic CT (sCT) generation. In conclusion, the suggested framework should be studied further in clinical settings. Moreover, the sU-Net model is worth exploring in other computer vision tasks.

13.
Article in English | MEDLINE | ID: mdl-32175868

ABSTRACT

Computed tomography (CT) provides information for diagnosis, PET attenuation correction (AC), and radiation treatment planning (RTP). Disadvantages of CT include poor soft tissue contrast and exposure to ionizing radiation. While MRI can overcome these disadvantages, it lacks the photon absorption information needed for PET AC and RTP. Thus, an intelligent transformation from MR to CT, i.e., the MR-based synthetic CT generation, is of great interest as it would support PET/MR AC and MR-only RTP. Using an MR pulse sequence that combines ultra-short echo time (UTE) and modified Dixon (mDixon), we propose a novel method for synthetic CT generation jointly leveraging prior knowledge as well as partial supervision (SCT-PK-PS for short) on large-field-of-view images that span abdomen and pelvis. Two key machine learning techniques, i.e., the knowledge-leveraged transfer fuzzy c-means (KL-TFCM) and the Laplacian support vector machine (LapSVM), are used in SCT-PK-PS. The significance of our effort is threefold: 1) Using the prior knowledge-referenced KL-TFCM clustering, SCT-PK-PS is able to group the feature data of MR images into five initial clusters of fat, soft tissue, air, bone, and bone marrow. Via these initial partitions, clusters needing to be refined are observed and for each of them a few additionally labeled examples are given as the partial supervision for the subsequent semi-supervised classification using LapSVM; 2) Partial supervision is usually insufficient for conventional algorithms to learn the insightful classifier. 
Instead, exploiting not only the given supervision but also the manifold structure embedded primarily in numerous unlabeled data, LapSVM is capable of training multiple desired tissue-recognizers; 3) Benefiting from the joint use of KL-TFCM and LapSVM, and assisted by the edge detector filter based feature extraction, the proposed SCT-PK-PS method features good recognition accuracy of tissue types, which ultimately facilitates the good transformation from MR images to CT images of the abdomen-pelvis. Applying the method on twenty subjects' feature data of UTE-mDixon MR images, the average score of the mean absolute prediction deviation (MAPD) of all subjects is 140.72 ± 30.60 HU which is statistically significantly better than the 241.36 ± 21.79 HU obtained using the all-water method, the 262.77 ± 42.22 HU obtained using the four-cluster-partitioning (FCP, i.e., external-air, internal-air, fat, and soft tissue) method, and the 197.05 ± 76.53 HU obtained via the conventional SVM method. These results demonstrate the effectiveness of our method for the intelligent transformation from MR to CT on the body section of abdomen-pelvis.
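The clustering backbone of the method above is fuzzy c-means. A minimal standard FCM is sketched below on synthetic 2-D data; the paper's KL-TFCM additionally transfers prior knowledge from a reference domain, which this toy omits.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=50, seed=0):
    # Standard fuzzy c-means: alternate membership and centroid updates.
    # (KL-TFCM, as described above, adds knowledge transfer on top of this.)
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)               # rows are fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))         # inverse-distance memberships
        U /= U.sum(1, keepdims=True)
    return U, centers

rng = np.random.default_rng(1)
# Two synthetic, well-separated "tissue" feature clusters
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
U, centers = fcm(X, c=2)
labels = U.argmax(1)
```

In the paper's pipeline the soft memberships, not just the hard labels, are what flag ambiguous clusters for the partial supervision step.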


Subject(s)
Image Processing, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging/methods , Pelvis/diagnostic imaging , Tomography, X-Ray Computed/methods , Abdomen/diagnostic imaging , Humans
14.
Comput Math Methods Med ; 2020: 2684851, 2020.
Article in English | MEDLINE | ID: mdl-32670390

ABSTRACT

Multimodal registration is a challenging task due to the significant variations exhibited by images of different modalities. CT and MRI are two of the most commonly used medical images in clinical diagnosis, since MRI, with its multicontrast images, together with CT can provide complementary auxiliary information. Deformable image registration between MRI and CT is essential to analyze the relationships among images of different modalities. Here, we propose an indirect multimodal image registration method, i.e., an sCT-guided multimodal image registration and problematic image completion method. We also designed a deep learning-based generative network, the Conditional Auto-Encoder Generative Adversarial Network (CAE-GAN), which combines the ideas of the VAE and GAN under a conditional process to tackle the problem of synthetic CT (sCT) synthesis. Our main contributions can be summarized in three aspects: (1) we designed a new generative network, CAE-GAN, which incorporates the advantages of two popular image synthesis methods, the VAE and GAN, and produces high-quality synthetic images from limited training data; (2) we utilized the sCT generated from multicontrast MRI as an intermediary to transform multimodal MRI-CT registration into monomodal sCT-CT registration, which greatly reduces the registration difficulty; (3) using a normal CT as guidance and reference, we repaired the abnormal MRI while registering it to the normal CT.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Multimodal Imaging/methods , Algorithms , Brain/diagnostic imaging , Computational Biology/methods , Databases, Factual , Deep Learning , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Magnetic Resonance Imaging/statistics & numerical data , Multimodal Imaging/statistics & numerical data , Synthetic Biology , Tomography, X-Ray Computed/statistics & numerical data
15.
Comput Math Methods Med ; 2020: 4519483, 2020.
Article in English | MEDLINE | ID: mdl-32454883

ABSTRACT

We propose a new method for fast organ classification and segmentation of abdominal magnetic resonance (MR) images. Magnetic resonance imaging (MRI) is a high-tech imaging examination that has seen increasing use in recent years. Recognition of specific target areas (organs) in MR images is one of the key issues in computer-aided diagnosis from medical images. Artificial neural network technology has made significant progress in image processing based on the multimodal MR attributes of each pixel. However, with the generation of large-scale data, there are few studies on the rapid processing of large-scale MRI data. To address this deficiency, we present a fast radial basis function artificial neural network (Fast-RBF) algorithm. The importance of our efforts is as follows: (1) the proposed algorithm achieves fast processing of large-scale image data by introducing the ε-insensitive loss function, a structural risk term, and the core-set principle, and we apply it to the identification of specific target areas in MR images; (2) for each abdominal MRI case, we use four MR sequences (fat, water, in-phase (IP), and opposed-phase (OP)) and the position coordinates (x, y) of each pixel as the input of the algorithm, and we use three classifiers to identify the liver and kidneys. Experiments show that the proposed method achieves higher precision in the recognition of specific regions of medical images and adapts better to large-scale datasets than the traditional RBF algorithm.
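An RBF network of the kind this abstract builds on can be sketched as a Gaussian feature expansion followed by a linear fit. This toy uses ridge-regularized least squares on synthetic 2-D "pixel features" as a simplified stand-in for the paper's Fast-RBF with its ε-insensitive loss and core-set speedups.

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    # Radial basis expansion: one Gaussian unit per centre
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rbf_classifier(X, y, centers, lam=1e-3):
    # RBF network trained by ridge-regularized least squares: a simplified
    # stand-in for Fast-RBF's epsilon-insensitive / structural-risk training
    Phi = rbf_features(X, centers)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

rng = np.random.default_rng(0)
# Synthetic 2-D pixel features: two classes, e.g. background vs organ
X = np.vstack([rng.normal(-1, 0.4, (60, 2)), rng.normal(1, 0.4, (60, 2))])
y = np.array([0.0] * 60 + [1.0] * 60)
centers = X[rng.choice(len(X), 10, replace=False)]   # core-set-like subsample
w = fit_rbf_classifier(X, y, centers)
pred = (rbf_features(X, centers) @ w > 0.5).astype(float)
```

Restricting the centres to a small subsample of the data is what makes this family of methods fast: the linear solve scales with the number of centres, not the number of pixels.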


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Abdomen/diagnostic imaging , Computational Biology , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Kidney/diagnostic imaging , Liver/diagnostic imaging , Magnetic Resonance Imaging/statistics & numerical data , Organ Specificity , Support Vector Machine
16.
IEEE Trans Med Imaging ; 39(4): 819-832, 2020 04.
Article in English | MEDLINE | ID: mdl-31425065

ABSTRACT

We propose a new method for generating synthetic CT images from modified Dixon (mDixon) MR data. The synthetic CT is used for attenuation correction (AC) when reconstructing PET data on abdomen and pelvis. While MR does not intrinsically contain any information about photon attenuation, AC is needed in PET/MR systems in order to be quantitatively accurate and to meet qualification standards required for use in many multi-center trials. Existing MR-based synthetic CT generation methods either use advanced MR sequences that have long acquisition time and limited clinical availability or use matching of the MR images from a newly scanned subject to images in a library of MR-CT pairs which has difficulty in accounting for the diversity of human anatomy especially in patients that have pathologies. To address these deficiencies, we present a five-phase interlinked method that uses mDixon MR acquisition and advanced machine learning methods for synthetic CT generation. Both transfer fuzzy clustering and active learning-based classification (TFC-ALC) are used. The significance of our efforts is fourfold: 1) TFC-ALC is capable of better synthetic CT generation than methods currently in use on the challenging abdomen using only common Dixon-based scanning. 2) TFC partitions MR voxels initially into the four groups regarding fat, bone, air, and soft tissue via transfer learning; ALC can learn insightful classifiers, using as few but informative labeled examples as possible to precisely distinguish bone, air, and soft tissue. Combining them, the TFC-ALC method successfully overcomes the inherent imperfection and potential uncertainty regarding the co-registration between CT and MR images. 3) Compared with existing methods, TFC-ALC features not only preferable synthetic CT generation but also improved parameter robustness, which facilitates its clinical practicability. 
Applying the proposed approach to mDixon MR data from ten subjects, the average mean absolute prediction deviation (MAPD) was 89.78±8.76 HU, which is significantly better than the 133.17±9.67 HU obtained using the all-water (AW) method (p=4.11E-9) and the 104.97±10.03 HU obtained using the four-cluster-partitioning (FCP, i.e., external-air, internal-air, fat, and soft tissue) method (p=0.002). 4) Experiments on the PET SUV errors of these approaches show that TFC-ALC achieves the highest SUV accuracy and can generally reduce the SUV errors to 5% or less. These experimental results clearly demonstrate the effectiveness of our proposed TFC-ALC method for synthetic CT generation on the abdomen and pelvis using only the commonly available Dixon pulse sequence.


Subject(s)
Abdomen/diagnostic imaging , Image Processing, Computer-Assisted/methods , Pelvis/diagnostic imaging , Positron-Emission Tomography/methods , Support Vector Machine , Cluster Analysis , Fuzzy Logic , Humans , Magnetic Resonance Imaging , Tomography, X-Ray Computed
17.
Med Phys ; 46(8): 3520-3531, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31063248

ABSTRACT

PURPOSE: Accurate photon attenuation assessment from MR data remains an unmet challenge in the thorax due to tissue heterogeneity and the difficulty of MR lung imaging. As thoracic tissues encompass the whole physiologic range of photon absorption, large errors can occur when using, for example, a uniform, water-equivalent or a soft-tissue-only approximation. The purpose of this study was to introduce a method for voxel-wise thoracic synthetic CT (sCT) generation from MR data attenuation correction (AC) for PET/MR or for MR-only radiation treatment planning (RTP). METHODS: Acquisition: A radial stack-of-stars combining ultra-short-echo time (UTE) and modified Dixon (mDixon) sequence was optimized for thoracic imaging. The UTE-mDixon pulse sequence collects MR signals at three TE times denoted as UTE, Echo1, and Echo2. Three-point mDixon processing was used to reconstruct water and fat images. Bias field correction was applied in order to avoid artifacts caused by inhomogeneity of the MR magnetic field. ANALYSIS: Water fraction and R2* maps were estimated using the UTE-mDixon data to produce a total of seven MR features, that is UTE, Echo1, Echo2, Dixon water, Dixon fat, Water fraction, and R2*. A feature selection process was performed to determine the optimal feature combination for the proposed automatic, 6-tissue classification for sCT generation. Fuzzy c-means was used for the automatic classification which was followed by voxel-wise attenuation coefficient assignment as a weighted sum of those of the component tissues. Performance evaluation: MR data collected using the proposed pulse sequence were compared to those using a traditional two-point Dixon approach. Image quality measures, including image resolution and uniformity, were evaluated using an MR ACR phantom. Data collected from 25 normal volunteers were used to evaluate the accuracy of the proposed method compared to the template-based approach. 
Notably, the template approach is applicable here (i.e., to normal volunteers) but may not be robust enough for patients with pathologies. RESULTS: The free-breathing UTE-mDixon pulse sequence yielded images with quality comparable to those using the traditional breath-holding mDixon sequence. Furthermore, by capturing the signal before T2* decay, the UTE-mDixon image provided lung and bone information which the mDixon image did not. The combination of Dixon water, Dixon fat, and water fraction was the most robust for tissue clustering and supported the classification of the six tissues, that is, air, lung, fat, soft tissue, low-density bone, and dense bone, used to generate the sCT. The thoracic sCT had a mean absolute difference from the template-based (reference) CT of less than 50 HU and was in better agreement with the reference CT than the results produced using the traditional Dixon-based data. CONCLUSION: MR thoracic acquisition and analyses have been established to automatically provide six distinguishable tissue types to generate sCT for MR-based AC of PET/MR and for MR-only RTP.
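The classification-and-assignment step described above can be sketched as follows. This is a minimal, illustrative implementation: the cluster count matches the paper's six tissue classes, but the per-class linear attenuation coefficients in `MU` are placeholder values, not the study's calibration:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on feature vectors X of shape (n_voxels, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per voxel
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))             # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Hypothetical per-class linear attenuation coefficients (cm^-1),
# ordered: air, lung, fat, soft tissue, low-density bone, dense bone.
MU = np.array([0.0, 0.03, 0.086, 0.096, 0.11, 0.15])

def soft_mu_map(U):
    """Voxel-wise attenuation as a membership-weighted sum of class coefficients."""
    return U @ MU
```

Because the memberships are soft, each voxel's attenuation is a continuous blend of the six class values rather than a hard label, which is what "weighted sum of those of the component tissues" describes.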


Subject(s)
Image Processing, Computer-Assisted/methods , Thorax/diagnostic imaging , Tomography, X-Ray Computed , Cluster Analysis , Humans
18.
J Med Syst ; 43(5): 118, 2019 Mar 25.
Article in English | MEDLINE | ID: mdl-30911929

ABSTRACT

Artificial intelligence algorithms have been used in a wide range of applications in clinical computer-aided diagnosis, such as automatic MR image segmentation and seizure EEG signal analysis. In recent years, many machine-learning-based automatic MR brain image segmentation methods have been proposed as auxiliary tools for medical image analysis in clinical treatment. Nevertheless, many problems remain to be solved before the information in medical images can be effectively exploited to improve segmentation performance. Due to the poor contrast of grayscale images, the ambiguity and complexity of MR images, and individual variability, the performance of classic algorithms in medical image segmentation still needs improvement. In this paper, we introduce a distributed multitask fuzzy c-means (MT-FCM) clustering algorithm for MR brain image segmentation that can extract knowledge common to different clustering tasks. The proposed distributed MT-FCM algorithm can effectively exploit information shared among different but related MR brain image segmentation tasks and can avoid the negative effects caused by noisy data present in some MR images. Experimental results on clinical MR brain images demonstrate that the distributed MT-FCM method achieves more desirable performance than the classic single-task method.
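A toy sketch of the shared-knowledge idea behind multitask FCM: each task runs its own fuzzy c-means update, but the task-specific centers are pulled toward the cross-task mean centers. The coupling weight `lam`, the random initialization, and the implicit assumption that cluster labels correspond across tasks are all simplifications of the published algorithm, not its actual formulation:

```python
import numpy as np

def mt_fcm(tasks, n_clusters, m=2.0, lam=0.5, n_iter=50, seed=0):
    """Toy multitask FCM over a list of datasets (each (n_i, n_features)).

    After every center update, each task's centers are blended with the
    mean centers across tasks (the 'common knowledge'); lam in [0, 1]
    controls how strongly tasks share information (lam=0 is single-task FCM).
    """
    rng = np.random.default_rng(seed)
    Us = []
    for X in tasks:
        U = rng.random((X.shape[0], n_clusters))
        Us.append(U / U.sum(axis=1, keepdims=True))
    Cs = [None] * len(tasks)
    for _ in range(n_iter):
        # Per-task center estimates from current memberships.
        for t, X in enumerate(tasks):
            W = Us[t] ** m
            Cs[t] = (W.T @ X) / W.sum(axis=0)[:, None]
        shared = np.mean(Cs, axis=0)                  # cross-task mean centers
        # Blend toward the shared centers, then update memberships.
        for t, X in enumerate(tasks):
            C = (1 - lam) * Cs[t] + lam * shared
            d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12
            U = 1.0 / (d ** (2 / (m - 1)))
            Us[t] = U / U.sum(axis=1, keepdims=True)
            Cs[t] = C
    return Us, Cs
```

The blending step is where a noisy task benefits from its cleaner siblings: its centers cannot drift arbitrarily far from the consensus.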


Subject(s)
Brain/diagnostic imaging , Fuzzy Logic , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Algorithms , Humans , Reproducibility of Results
19.
Artif Intell Med ; 90: 34-41, 2018 08.
Article in English | MEDLINE | ID: mdl-30054121

ABSTRACT

BACKGROUND: Manual contouring remains the most laborious task in radiation therapy planning and is a major barrier to implementing routine Magnetic Resonance Imaging (MRI) Guided Adaptive Radiation Therapy (MR-ART). To address this, we propose a new artificial-intelligence-based auto-contouring method for abdominal MR-ART modeled after human cognition during manual contouring. METHODS/MATERIALS: Our algorithm is based on two types of information flow, i.e., top-down and bottom-up. Top-down information is derived from simulation MR images. It grossly delineates the object based on its high-level information class by transferring the initial planning contours onto daily images. Bottom-up information is derived from pixel data by a supervised, self-adaptive, active-learning-based support vector machine. It uses low-level pixel features, such as intensity and location, to distinguish each target boundary from the background. The final result is obtained by fusing the top-down and bottom-up outputs in a unified framework through artificial intelligence fusion. For evaluation, we used a dataset of four patients with locally advanced pancreatic cancer treated with MR-ART using a clinical system (MRIdian, Viewray, Oakwood Village, OH, USA). Each set included the simulation MRI and the onboard T1 MRI corresponding to a randomly selected treatment session. Each MRI had 144 axial slices of 266 × 266 pixels. Using the Dice Similarity Index (DSI) and the Hausdorff Distance Index (HDI), we compared the manual and automated contours for the liver, left and right kidneys, and the spinal cord. RESULTS: The average auto-segmentation time was two minutes per set. Visually, the automatic and manual contours were similar. Fused results achieved better accuracy than either the bottom-up or top-down method alone. The DSI values were above 0.86. The spinal canal contours yielded a low HDI value.
CONCLUSION: With a DSI significantly higher than the usually reported 0.7, our novel algorithm yields a high segmentation accuracy. To our knowledge, this is the first fully automated contouring approach using T1 MRI images for adaptive radiotherapy.
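The fusion and evaluation steps might be sketched as below. The weighted geometric-mean fusion rule and the 0.5 threshold are illustrative assumptions (the abstract does not specify the paper's actual fusion framework); the Dice Similarity Index used for evaluation is standard:

```python
import numpy as np

def fuse(p_top_down, p_bottom_up, w=0.5):
    """Fuse the prior probability map transferred from planning contours
    (top-down) with the pixel-classifier probability map (bottom-up)
    by a weighted geometric mean, then threshold to a binary contour mask.

    w in [0, 1] sets how much the transferred prior is trusted over the
    per-pixel classifier.
    """
    p = (p_top_down ** w) * (p_bottom_up ** (1 - w))
    return p >= 0.5

def dice(a, b):
    """Dice Similarity Index between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

The geometric mean has the useful property that either source can veto a pixel: if the transferred prior or the classifier assigns near-zero probability, the fused probability stays near zero, which mirrors the idea of combining coarse top-down delineation with fine bottom-up boundary evidence.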


Subject(s)
Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pancreatic Neoplasms/radiotherapy , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Image-Guided/methods , Support Vector Machine , Humans , Multimodal Imaging , Pancreatic Neoplasms/diagnostic imaging , Pancreatic Neoplasms/pathology , Tomography, X-Ray Computed , Workflow
20.
Phys Med Biol ; 63(12): 125001, 2018 06 08.
Article in English | MEDLINE | ID: mdl-29787382

ABSTRACT

The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Zeff), relative electron density (ρe), mean excitation energy (Ix), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically used dual-energy, physics-based method which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 s. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.
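The historical centroid (HC) baseline can be illustrated as a nearest-centroid lookup from dual-energy HU pairs to reference parametric values. The materials, HU pairs, and parameter values below are invented placeholders for illustration, not the study's 13-substitute calibration data:

```python
import numpy as np

class HistoricalCentroid:
    """Toy 'historical centroid' predictor.

    fit() stores one (HU_low, HU_high) centroid per known tissue substitute
    together with its reference parametric values (e.g. Zeff, rho_e, RSP);
    predict() assigns each voxel the values of its nearest centroid in
    dual-energy HU space.
    """

    def fit(self, hu_pairs, params):
        self.centroids = np.asarray(hu_pairs, dtype=float)  # (n_materials, 2)
        self.params = np.asarray(params, dtype=float)       # (n_materials, n_maps)
        return self

    def predict(self, hu_pairs):
        x = np.asarray(hu_pairs, dtype=float)
        # Euclidean distance from every voxel to every stored centroid.
        d = np.linalg.norm(x[:, None, :] - self.centroids[None], axis=2)
        return self.params[d.argmin(axis=1)]
```

The RF and ANN predictors described in the abstract replace this hard nearest-centroid assignment with learned regressors, which is what lets them interpolate smoothly between tissue substitutes and tolerate low-dose noise.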


Subject(s)
Machine Learning , Tomography, X-Ray Computed/methods , Humans , Phantoms, Imaging