ABSTRACT
This study presents a new framework for obtaining personalized optimal treatment strategies targeting aberrant signaling pathways in esophageal cancer, such as the epidermal growth factor (EGF) and vascular endothelial growth factor (VEGF) signaling pathways. A new pharmacokinetic model is developed that accounts for specific heterogeneities of these signaling mechanisms. The optimal therapies are obtained using a three-step process. First, a finite-dimensional constrained optimization problem is solved to obtain the parameters of the pharmacokinetic model from discrete patient data measurements. Next, a sensitivity analysis is carried out to determine which parameters are sensitive to the evolution of the variants of EGF and VEGF receptors. Finally, a second optimal control problem is solved based on the sensitivity analysis results, using a modified pharmacokinetic model that incorporates two representative drugs, Trastuzumab and Bevacizumab, targeting EGF and VEGF, respectively. Numerical results with the combination of the two drugs demonstrate the efficiency of the proposed framework.
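As an illustration of the first step above, the sketch below fits model parameters to discrete patient measurements by solving a small bound-constrained least-squares problem; the logistic-growth surrogate, the parameter names (r, K), the bounds, and the data values are hypothetical placeholders rather than the pharmacokinetic model used in the study.

```python
import numpy as np
from scipy.optimize import minimize

t_obs = np.array([0.0, 7.0, 14.0, 21.0, 28.0])    # measurement days (synthetic)
y_obs = np.array([0.10, 0.22, 0.41, 0.63, 0.78])  # normalized tumor burden (synthetic)

def model(t, r, K):
    # Logistic-growth surrogate standing in for the paper's pharmacokinetic model.
    y0 = y_obs[0]
    return K / (1.0 + (K / y0 - 1.0) * np.exp(-r * t))

def objective(theta):
    r, K = theta
    return np.sum((model(t_obs, r, K) - y_obs) ** 2)  # least-squares data misfit

# Box constraints stand in for the problem's parameter constraints (assumed ranges).
result = minimize(objective, x0=[0.05, 1.0], bounds=[(1e-4, 1.0), (0.5, 2.0)],
                  method="L-BFGS-B")
print("fitted parameters (r, K):", result.x)
```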
Subjects
Epidermal Growth Factor, Esophageal Neoplasms, Humans, Vascular Endothelial Growth Factor A, Signal Transduction, Esophageal Neoplasms/drug therapy
ABSTRACT
KEY MESSAGE: A new genomic prediction method (RHPP) was developed by combining randomized Haseman-Elston regression (RHE-reg), PCR based on the genomic information of a core population, and the preconditioned conjugate gradient (PCG) algorithm. Computational efficiency is becoming a pressing issue in the practical application of genomic prediction due to the large volume of data generated by high-throughput genotyping technology. In this study, we developed RHPP, a fast genomic prediction method that combines randomized Haseman-Elston regression (RHE-reg), PCR based on the genomic information of a core population, and the preconditioned conjugate gradient (PCG) algorithm. Simulation results demonstrated similar prediction accuracy between RHPP and GBLUP, with significantly higher computational efficiency for RHPP as the number of individuals increases. Results on real datasets of both bread wheat and loblolly pine showed that RHPP had similar or better predictive accuracy than GBLUP in most cases. In the future, RHPP may be an attractive choice for analyzing large-scale and high-dimensional data.
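The PCG solve at the heart of such methods can be sketched as follows; the synthetic genotypes, the genomic relationship matrix construction, the shrinkage parameter, and the Jacobi preconditioner are illustrative assumptions, not the RHPP implementation.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
n, p = 500, 2000
Z = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # synthetic SNP genotypes
Z -= Z.mean(axis=0)                            # column-center the markers
G = Z @ Z.T / p                                # genomic relationship matrix
lam = 0.5                                      # variance-ratio shrinkage (assumed)
A = G + lam * np.eye(n)                        # coefficient matrix of the mixed-model solve
y = rng.normal(size=n)                         # centered phenotypes (synthetic)

diag_A = np.diag(A)
M = LinearOperator((n, n), matvec=lambda v: v / diag_A)  # Jacobi preconditioner
u_hat, info = cg(A, y, M=M)                    # preconditioned conjugate gradient solve
print("cg info:", info, "first breeding values:", u_hat[:5])
```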
ABSTRACT
Compressed imaging reconstruction technology can reconstruct high-resolution images from a small number of observations by applying the theory of block compressed sensing to traditional optical imaging systems, and the reconstruction algorithm largely determines the reconstruction accuracy. In this work, we design a reconstruction algorithm based on block compressed sensing with a conjugate gradient smoothed l0 norm, termed BCS-CGSL0. The algorithm is divided into two parts. The first part, CGSL0, improves the SL0 algorithm by constructing a new inverse triangular fraction function to approximate the l0 norm and uses a modified conjugate gradient method to solve the optimization problem. The second part combines the BCS-SPL method within the framework of block compressed sensing to remove the block effect. The analysis shows that the algorithm can reduce the block effect while improving the accuracy and efficiency of reconstruction. Simulation results also verify that the BCS-CGSL0 algorithm has significant advantages in reconstruction accuracy and efficiency.
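The smoothed-l0 idea underlying CGSL0 can be illustrated with the classic SL0 recipe below, which uses a Gaussian surrogate and plain gradient/projection steps; the paper's inverse triangular fraction function and its modified conjugate gradient solver are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 48, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)        # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true                                  # compressed measurements

A_pinv = np.linalg.pinv(A)
x = A_pinv @ y                                  # minimum-l2 feasible starting point
for sigma in [1.0, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01]:
    for _ in range(30):
        d = x * np.exp(-x**2 / (2 * sigma**2))  # descent direction of the smoothed-l0 surrogate (up to a sigma^2 scale)
        x = x - 0.5 * d                         # shrink small entries toward zero
        x = x - A_pinv @ (A @ x - y)            # project back onto the constraint A x = y
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```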
ABSTRACT
The gas sweetening process removes hydrogen sulfide (H2S) in an acid gas removal unit (AGRU) to meet the sales gas specification; the treated product is known as sweet gas. Monitoring the concentration of H2S in sweet gas is crucial to avoid operational and environmental issues. This study shows the capability of artificial neural networks (ANN) to predict the concentration of H2S in sweet gas. The concentrations of N-methyldiethanolamine (MDEA) and piperazine (PZ), temperature, and pressure were used as inputs, and the concentration of H2S in sweet gas as the output, to build the ANN. Two distinct backpropagation techniques with various transfer functions and numbers of neurons were used to train the ANN models. Multiple linear regression (MLR) was used to compare the outcomes of the ANN models. The models' performance was assessed using the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2). The findings demonstrate that the ANN trained by the Levenberg-Marquardt technique, equipped with a logistic sigmoid (logsig) transfer function and three neurons, achieved the highest R2 (0.966) and the lowest MAE (0.066) and RMSE (0.122) values. The findings suggest that an ANN can be a reliable and accurate method for predicting the concentration of H2S in sweet gas.
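A minimal sketch of the reported architecture (one hidden layer, three neurons, logistic transfer function) is given below on fabricated data; scikit-learn offers no Levenberg-Marquardt training, so the lbfgs solver stands in for it, and the input ranges and resulting scores are invented purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(2)
# columns: MDEA concentration, PZ concentration, temperature, pressure (all synthetic)
X = rng.uniform([30, 1, 40, 50], [50, 5, 80, 90], size=(200, 4))
y = 0.02 * X[:, 2] - 0.05 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.05, 200)  # toy H2S level

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(3,), activation="logistic",
                                   solver="lbfgs", max_iter=5000, random_state=0))
model.fit(X[:150], y[:150])
pred = model.predict(X[150:])
print("R2:", r2_score(y[150:], pred), "MAE:", mean_absolute_error(y[150:], pred))
```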
Subjects
Hydrogen Sulfide, Neural Networks, Computer, Solvents, Linear Models, Gases
ABSTRACT
In this study, the nonlinear mathematical model of COVID-19 is investigated with a stochastic solver based on scaled conjugate gradient neural networks (SCGNNs). The nonlinear mathematical model of COVID-19 is represented by a coupled system of ordinary differential equations and is studied for three different cases of initial conditions with suitable parametric values. The model partitions the human population N(t) into classes of individuals categorized as susceptible S(t), exposed E(t), quarantined Q(t), asymptomatically infected IA(t), symptomatically infected IS(t), and, finally, persons removed from COVID-19, denoted by R(t). The stochastic numerical computing SCGNNs approach is used to examine the numerical performance of the nonlinear mathematical model of COVID-19. The stochastic SCGNNs approach is based on the procedures of verification, sample statistics, testing, and training; for this purpose, the data are split into 70% for training, 16% for testing, and 14% for validation. The efficiency, reliability, and authenticity of the stochastic numerical SCGNNs approach are analysed graphically in terms of error histograms, mean square error, correlation, and regression, and are further endorsed by graphical illustrations of absolute errors in the range of 10⁻⁵ to 10⁻⁷ for each scenario of the system model.
ABSTRACT
In this paper, a new framework for obtaining personalized optimal treatment strategies in colon cancer-induced angiogenesis is presented. The dynamics of colon cancer is given by an Itô stochastic process, which helps in modeling the randomness present in the system. The stochastic dynamics is then represented by the Fokker-Planck (FP) partial differential equation that governs the evolution of the associated probability density function. The optimal therapies are obtained using a three-step procedure. First, a finite-dimensional FP-constrained optimization problem is formulated that takes individual noisy patient data as input and is solved to obtain the unknown parameters corresponding to the individual tumor characteristics. Next, a sensitivity analysis of the optimal parameter set is used to determine the parameters to be controlled, thus helping in assessing the types of treatment therapies. Finally, a feedback FP control problem is solved to determine the optimal combination therapies. Numerical results with the combination drug, comprising Bevacizumab and Capecitabine, demonstrate the efficiency of the proposed framework.
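The Itô dynamics underlying the FP equation can be illustrated with a simple Euler-Maruyama simulation; the logistic drift, the multiplicative noise coefficient, and all parameter values below are assumptions standing in for the paper's tumor-angiogenesis model.

```python
import numpy as np

rng = np.random.default_rng(4)
r, K, sigma = 0.3, 1.0, 0.1         # growth rate, carrying capacity, noise level (assumed)
T, n_steps, n_paths = 30.0, 3000, 200
dt = T / n_steps

x = np.full(n_paths, 0.05)           # initial normalized tumor volume (assumed)
for _ in range(n_steps):
    drift = r * x * (1.0 - x / K)                   # deterministic part of dX
    diffusion = sigma * x                           # multiplicative noise coefficient
    x = x + drift * dt + diffusion * np.sqrt(dt) * rng.normal(size=n_paths)
    x = np.clip(x, 0.0, None)                       # keep volumes non-negative

# The empirical distribution of the simulated paths approximates the density
# whose evolution the Fokker-Planck equation describes.
print("terminal mean:", x.mean(), "terminal std:", x.std())
```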
Subjects
Colonic Neoplasms, Colonic Neoplasms/drug therapy, Feedback, Humans, Stochastic Processes
ABSTRACT
Saybolt color is a standard measurement scale used to determine the quality of petroleum products and the appropriate refinement process. However, current color measurement methods are mostly laboratory-based, which makes them time-consuming and costly. Hence, we designed an automated model based on an artificial neural network to predict Saybolt color. The network was built with five input variables (density, kinematic viscosity, sulfur content, cetane index, and total acid number) and one output, Saybolt color. Two backpropagation algorithms with different transfer functions and numbers of neurons were tested. Mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) were used to assess the performance of the developed model. Additionally, the results of the ANN model were compared with multiple linear regression (MLR). The results demonstrate that the ANN with the Levenberg-Marquardt algorithm, tangent sigmoid transfer function, and three neurons achieved the highest performance (R2 = 0.995, MAE = 1.000, and RMSE = 1.658) in predicting the Saybolt color. The ANN model appeared to be superior to MLR (R2 = 0.830). This shows the potential of the ANN model as an effective method with which to predict Saybolt color in real time.
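A minimal sketch of the ANN-versus-MLR comparison is shown below on fabricated data with the five input variables named in the abstract; the tanh activation mirrors the tangent sigmoid transfer function, but the data, solver, and resulting scores are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
# columns: density, kinematic viscosity, sulfur content, cetane index, total acid number
X = rng.uniform(size=(300, 5))
y = 25 * X[:, 3] - 10 * X[:, 2] + 5 * np.sin(6 * X[:, 0]) + rng.normal(0, 0.5, 300)

mlr = LinearRegression().fit(X[:220], y[:220])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                                 solver="lbfgs", max_iter=5000, random_state=0))
ann.fit(X[:220], y[:220])

print("MLR R2:", r2_score(y[220:], mlr.predict(X[220:])))
print("ANN R2:", r2_score(y[220:], ann.predict(X[220:])))
```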
Subjects
Neural Networks, Computer, Petroleum, Algorithms, Linear Models, Neurons
ABSTRACT
Phenol is one of the most common chemical pollutants found in chemical industrial wastewater. This pollutant poses a potential threat to human health and the environment, as it is easily absorbed through the skin and mucous membranes. Here, we prepared a dual-chambered microbial fuel cell (MFC) sensor for the detection of phenol. Varying concentrations of phenol (100 mg/l, 250 mg/l, 500 mg/l, and 1000 mg/l) were applied as the substrate to the MFC and the resulting changes in output voltage were measured. After adding 100 mg/l, 250 mg/l, 500 mg/l, and 1000 mg/l of phenol as the sole substrate to the MFC, the maximum voltage outputs obtained were 360 ± 10 mV, 395 ± 8 mV, 320 ± 7 mV, and 350 ± 5 mV, respectively. The biosensor was operated using microbes isolated from industrial wastewater as the sensing element, with phenol as the sole substrate. ANN topologies were analyzed to obtain the best model for predicting the power output of the MFCs, and the training algorithms were compared in terms of their convergence rates in training and test results. A time-series model was used for regression analysis to predict future values based on previously observed values. Two types of mathematical models, a scaled conjugate gradient (SCG) algorithm and a time-series model, were used with 44 experimental data points covering varying phenol and synthetic wastewater concentrations to optimize the biosensor performance. Both the SCG and time-series models gave the best results, with R2 values of 0.98802 and 0.99115, respectively.
Subjects
Bioelectric Energy Sources, Electricity, Electrodes, Humans, Neural Networks, Computer, Phenol, Phenols, Waste Water
ABSTRACT
Robust optimization has been shown to be effective for stabilizing treatment planning in intensity modulated proton therapy (IMPT), but existing algorithms for the optimization process are time-consuming. This paper describes a fast robust optimization tool that takes advantage of GPU parallel computing technologies. The new robust optimization model is based on nine boundary dose distributions: two for ±range uncertainties, six for ±set-up uncertainties along the anteroposterior (A-P), lateral (R-L) and superior-inferior (S-I) directions, and one for the nominal situation. The nine boundary influence matrices were calculated using an in-house finite size pencil beam dose engine, while the conjugate gradient method was applied to minimize the objective function. The proton dose calculation algorithm and the conjugate gradient method were tuned for heterogeneous platforms involving the CPU host and GPU device. Three clinical cases - one head and neck cancer case, one lung cancer case, and one prostate cancer case - were investigated to demonstrate the clinical feasibility of the proposed robust optimizer. Compared with results from Varian Eclipse (version 13.3), the proposed method is found to be conducive to robust treatment planning that is less sensitive to range and setup uncertainties. The three tested cases show that targets can achieve high dose uniformity while organs at risk (OARs) are better protected against setup and range errors. On the CPU + GPU heterogeneous platform, the execution times of the head and neck cancer case and the prostate cancer case are much less than half those of Eclipse, while the run time of the lung cancer case is similar to that of Eclipse. The fast robust optimizer developed in this study can improve the reliability of traditional proton treatment planning at a much higher speed, making it practical for clinical use.
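The conjugate gradient minimization over scenario-dependent influence matrices can be sketched as below; the tiny random influence matrices, the scenario-averaged quadratic objective, and the crude non-negativity handling are assumptions for illustration, not the clinical formulation or the GPU implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n_voxels, n_spots, n_scenarios = 200, 50, 9
D = rng.uniform(0, 1, size=(n_scenarios, n_voxels, n_spots))  # toy boundary influence matrices
d_presc = np.ones(n_voxels)                                    # prescribed dose (normalized)

def objective(x):
    # Quadratic deviation from the prescription, averaged over all scenarios (assumed form).
    return sum(np.sum((D[s] @ x - d_presc) ** 2) for s in range(n_scenarios)) / n_scenarios

def gradient(x):
    return sum(2 * D[s].T @ (D[s] @ x - d_presc) for s in range(n_scenarios)) / n_scenarios

res = minimize(objective, x0=np.zeros(n_spots), jac=gradient, method="CG")
x_opt = np.clip(res.x, 0.0, None)       # crude non-negativity of spot weights (assumed)
print("objective at solution:", objective(x_opt))
```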
Subjects
Algorithms, Head and Neck Neoplasms/radiotherapy, Lung Neoplasms/radiotherapy, Prostatic Neoplasms/radiotherapy, Proton Therapy/standards, Quality Assurance, Health Care/standards, Radiotherapy Planning, Computer-Assisted/methods, Humans, Male, Models, Statistical, Organs at Risk/radiation effects, Radiotherapy Dosage, Radiotherapy, Intensity-Modulated/methods, Time Factors, Uncertainty
ABSTRACT
This article introduces a new way of using a fibre Bragg grating (FBG) sensor for detecting the presence and number of occupants in a monitored space in a smart home (SH). CO2 sensors are used to determine the CO2 concentration of the monitored rooms in an SH, and they can also be used for occupancy recognition of the monitored spaces. To determine the presence of occupants in the monitored rooms of the SH, a newly devised method of CO2 prediction is used, based on an artificial neural network (ANN) with a scaled conjugate gradient (SCG) algorithm and on measurements of typical operational technical quantities (indoor temperature, indoor relative humidity, and CO2 concentration in the SH). The goal of the experiments is to verify the possibility of using the FBG sensor to unambiguously detect the number of occupants in the selected room (R104) and, at the same time, to harness the newly proposed method of CO2 prediction with ANN SCG for recognizing the SH occupancy status and the spatial location (rooms R104, R203, and R204) of an occupant. The experiments are designed to verify the possibility of using a minimum number of sensors for measuring the non-electric quantities of indoor temperature and indoor relative humidity, and of monitoring the presence of occupants in the SH using CO2 prediction by means of the ANN SCG method with ANN learning on data obtained from only one room (R203). The prediction accuracy exceeded 90% in certain experiments. The uniqueness and innovativeness of the described solution lie in the integrated multidisciplinary application of technological procedures (the BACnet technology controlling the SH, FBG sensors) and mathematical methods (ANN prediction with the SCG algorithm, adaptive filtration with an LMS algorithm) employed for recognizing the number of persons and the occupancy of selected monitored rooms of the SH.
ABSTRACT
In this letter, we propose a novel conjugate gradient (CG) adaptive filtering algorithm for online estimation of system responses that admit sparsity. Specifically, the Sparsity-promoting Conjugate Gradient (SCG) algorithm is developed based on iterative reweighting methods popular in the sparse signal recovery area. We propose an affine scaling transformation strategy within the reweighting framework, leading to an algorithm that allows the use of a zero sparsity regularization coefficient. This enables SCG to leverage the sparsity of the system response when it is present, without compromising the optimization process. Simulation results show that SCG demonstrates improved convergence and steady-state properties over existing methods.
ABSTRACT
In the context of the Internet of Things, billions of devices, especially sensors, will be linked together in the next few years. A core component of wireless passive sensor nodes is the rectifier, which has to provide the circuit with sufficient operating voltage. In these devices, the rectifier has to be as energy efficient as possible in order to guarantee optimal operation. Therefore, a numerical optimization scheme is proposed in this paper, which is able to find a unique optimal solution for an integrated Complementary Metal-Oxide-Semiconductor (CMOS) rectifier circuit with Self-Vth-Cancellation (SVC). An exploration of the parameter space is carried out in order to generate a meaningful target function for enhancing the rectified power for a fixed communication distance. In this paper, a mean conversion efficiency (MCE) is introduced, defined as the arithmetic mean of the Power Conversion Efficiency (PCE) and the Voltage Conversion Efficiency (VCE); it is a more suitable target function for optimization than the VCE or the commonly used PCE alone. Various trade-offs between output voltage, PCE, VCE and MCE are shown, which provide valuable information for low power rectifier designs. With the proposed method, a rectifier in a low power 55 nm process from Globalfoundries (GF55LPe) is optimized and simulated at -30 dBm input power. A mean PCE of 63.33% and a mean VCE of 63.40% are achieved.
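Since the MCE is defined as the arithmetic mean of PCE and VCE, the metric reduces to a one-line computation, shown here with the efficiencies reported above.

```python
def mean_conversion_efficiency(pce: float, vce: float) -> float:
    """MCE = (PCE + VCE) / 2, with both efficiencies given in the same units."""
    return 0.5 * (pce + vce)

# Using the mean PCE and VCE reported in the abstract (in percent).
print(mean_conversion_efficiency(63.33, 63.40))
```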
ABSTRACT
Kernel adaptive filtering (KAF) is an effective nonlinear learning approach that has been widely used in time series prediction. Traditional KAF is based on the stochastic gradient descent (SGD) method, which has slow convergence speed and low filtering accuracy. Hence, a kernel conjugate gradient (KCG) algorithm has been proposed with low computational complexity, while achieving performance comparable to some KAF algorithms, e.g., kernel recursive least squares (KRLS). However, the robust learning performance of KCG is unsatisfactory. Meanwhile, correntropy, a local similarity measure defined in kernel space, can address large outliers in robust signal processing. On the basis of correntropy, the mixture correntropy has been developed, which uses a mixture of two Gaussian functions as its kernel function to further improve the learning performance. Accordingly, this article proposes a novel KCG algorithm, named the kernel mixture correntropy conjugate gradient (KMCCG), with the help of the mixture correntropy criterion (MCC). The proposed algorithm has low computational complexity and can achieve better performance in non-Gaussian noise environments. To further control the growing radial basis function (RBF) network in this algorithm, we also use a simple sparsification criterion based on the angle between elements in the reproducing kernel Hilbert space (RKHS). Prediction simulations on a synthetic chaotic time series and a real benchmark dataset show that the proposed algorithm achieves better computational performance. In addition, the proposed algorithm is also successfully applied to the practical task of malware prediction in the field of malware analysis. The results demonstrate that our proposed algorithm not only has a short training time, but also achieves high prediction accuracy.
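The mixture correntropy measure can be sketched as a convex combination of two Gaussian kernels evaluated on the prediction error; the mixing weight and bandwidths below are arbitrary illustrative choices, not the values used in the article.

```python
import numpy as np

def mixture_correntropy(error, sigma1=0.5, sigma2=2.0, alpha=0.3):
    """Mixture of two Gaussian kernels; gross outliers are suppressed smoothly."""
    g1 = np.exp(-error**2 / (2 * sigma1**2))   # narrow-bandwidth component
    g2 = np.exp(-error**2 / (2 * sigma2**2))   # wide-bandwidth component
    return alpha * g1 + (1 - alpha) * g2

errors = np.array([0.0, 0.5, 1.0, 5.0, 50.0])  # the last entry is a gross outlier
print(mixture_correntropy(errors))             # the outlier contributes almost nothing
```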
ABSTRACT
PURPOSE: Most previous approaches to spiral Dixon water-fat imaging perform the water-fat separation and deblurring sequentially based on the assumption that the phase accumulation and blurring as a result of off-resonance are separable. This condition can easily be violated in regions where the B0 inhomogeneity varies rapidly. The goal of this work is to present a novel joint water-fat separation and deblurring method for spiral imaging. METHODS: The proposed approach is based on a more accurate signal model that takes into account the phase accumulation and blurring simultaneously. A conjugate gradient method is used in the image domain to reconstruct the deblurred water and fat iteratively. Spatially varying convolutions with a local convergence criterion are used to reduce the computational demand. RESULTS: Both simulation and high-resolution brain imaging have demonstrated that the proposed joint method consistently improves the quality of reconstructed water and fat images compared with the sequential approach, especially in regions where the field inhomogeneity changes rapidly in space. The loss of signal-to-noise ratio as a result of deblurring is minor at optimal echo times. CONCLUSIONS: High-quality water-fat spiral imaging can be achieved with the proposed joint approach, provided that an accurate field map of B0 inhomogeneity is available. Magn Reson Med 79:3218-3228, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Subjects
Adipose Tissue/diagnostic imaging, Body Water/diagnostic imaging, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Algorithms, Brain/diagnostic imaging, Humans
ABSTRACT
Kernel extreme learning machine (KELM) introduces kernel learning into the extreme learning machine (ELM) in order to improve generalization ability and stability. However, the penalty parameter in KELM is randomly set, and it has a strong impact on the performance of KELM. A fast KELM combined with the conjugate gradient method (CG-KELM) is presented in this paper. CG-KELM computes the output weights of the neural network by the conjugate gradient iteration method, so no penalty parameter needs to be set. Therefore, CG-KELM has good generalization ability and fast learning speed. Simulations in image restoration show that CG-KELM outperforms KELM. CG-KELM provides a balanced method between KELM and ELM.
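The idea of obtaining KELM output weights by conjugate gradient iterations, with no penalty parameter in the solve, can be sketched as follows; the Gaussian kernel, its bandwidth, and the toy data are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(200, 3))
t = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 200)   # training targets

gamma = 1.0                                                            # kernel bandwidth (assumed)
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq_dists)                                          # Gaussian kernel matrix

# Output weights from K @ beta = t, solved iteratively instead of by a regularized inverse.
beta, info = cg(K, t, maxiter=200)
pred = K @ beta                                                        # in-sample predictions
print("cg info:", info, "train RMSE:", np.sqrt(np.mean((pred - t) ** 2)))
```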
Subjects
Generalization, Psychological/physiology, Machine Learning, Models, Theoretical, Neural Networks, Computer, Algorithms, Humans
ABSTRACT
OBJECTIVE: Quantifying the function of the epiphyseal plate is worthwhile for the management of children with growth disorders. The aim of this retrospective study was to quantify the osteoblastic activity at the epiphyseal plate using quantitative bone SPECT/CT. MATERIALS AND METHODS: We enrolled patients under the age of 20 years who underwent Tc-99m hydroxymethylene diphosphonate bone scintigraphy acquired on a quantitative SPECT/CT scanner. The images were reconstructed with an ordered-subset conjugate-gradient minimizer, and the uptake on the distal margin of the femur was quantified by the peak standardized uptake value (SUVpeak). A public database of standard body height was used to calculate growth velocities (cm/year). RESULTS: Fifteen patients (6.9-19.7 years, 9 female, 6 male) were enrolled and a total of 25 legs were analyzed. SUVpeak in the epiphyseal plate was 18.9 ± 2.4 (average ± standard deviation) in the subjects under 15 years and decreased gradually with age. The SUVpeak correlated significantly with the age- and sex-matched growth velocity obtained from the database (R2 = 0.83, p < 0.0001). CONCLUSION: The SUV measured by quantitative bone SPECT/CT was increased at the epiphyseal plates of children under the age of 15 years in comparison with the older group, corresponding to higher osteoblastic activity. Moreover, this study suggested a correlation between growth velocity and the SUV. Although this is a small retrospective pilot study, the objective and quantitative values measured by quantitative bone SPECT/CT have the potential to improve the management of children with growth disorders.
Subjects
Growth Plate/diagnostic imaging, Lower Extremity/diagnostic imaging, Lower Extremity/growth & development, Multimodal Imaging, Osteoblasts/physiology, Adolescent, Child, Female, Humans, Male, Radiopharmaceuticals, Retrospective Studies, Technetium Tc 99m Medronate/analogs & derivatives, Tomography, Emission-Computed, Single-Photon, Tomography, X-Ray Computed, Young Adult
ABSTRACT
The measurement accuracy of intelligent flexible morphological sensors based on fiber Bragg grating (FBG) structures has been limited in geotechnical engineering and other application fields. To improve the precision of intelligent displacement sensing, an implantable FBG flexible morphological sensor was designed in this study, and a classification-based morphological correction method using the conjugate gradient method and the extreme learning machine (ELM) algorithm was proposed. Finite element simulations and experiments were used to analyze the feasibility of the proposed method. After correction, the maximum relative errors of the displacements at the measuring points for the different bending shapes were 6.39% (Type 1), 7.04% (Type 2), and 7.02% (Type 3). Therefore, the proposed correction method was confirmed to be feasible and able to effectively improve the displacement-sensing capability of the sensor. The designed intelligent sensor features temperature self-compensation, bending-shape self-classification, and displacement-error self-correction, and can be used for real-time monitoring of deformation fields in rock, subgrade, bridge, and other geotechnical engineering structures, which gives it practical significance and application value.
ABSTRACT
The GaoFen-3 (GF-3) satellite is the only synthetic aperture radar (SAR) satellite in the High-Resolution Earth Observation System Project and the first C-band full-polarization SAR satellite in China. In this paper, we propose error-source-based weighting strategies to improve the geometric performance of multi-mode GF-3 satellite SAR images without using ground control points (GCPs). To obtain enough tie points, a robust SAR image registration method and the SAR-features from accelerated segment test (SAR-FAST) method are used for image registration and tie-point extraction. Then, the object-space positions of these tie points are calculated using the space intersection method. After the dataset is clustered with the density-based spatial clustering of applications with noise (DBSCAN) algorithm, block adjustment aided by a bias-compensated rational function model (RFM) is carried out to improve the geometric performance of the multi-mode GF-3 satellite SAR images. Different weighting strategies are proposed to build the normal-equation matrix according to the error-source analysis of GF-3 satellite SAR images, and the preconditioned conjugate gradient (PCG) method is utilized to solve the normal equations. The experimental results indicate that the proposed method can improve the geometric positioning accuracy of GF-3 satellite SAR images to within 2 pixels.
ABSTRACT
In the last two decades, significant progress has been made on developing new nanoscale mechanical property measurement techniques, including instrumented indentation and atomic force microscopy (AFM)-based techniques. Changes in the tip-sample contact mechanics during measurements uniquely modify the displacement and force sensed by a measurement sensor, and much effort is dedicated to correctly retrieving the sample mechanical properties from the measured signal. In many cases, for the sake of simplicity, a simple contact mechanics model is adopted that overlooks the complexity of the actual contact geometry. In this work, a newly developed matrix formulation is used to solve the stress and strain equations for samples with edge geometries. Such sample geometries are often encountered in today's nanoscale integrated electronics in the form of high-aspect-ratio fins with widths in the range of tens of nanometers. In the matrix formulation, the fin geometries can be easily modeled as adjacent overlapped half-spaces, and the contact problem can be solved by a numerical implementation of the conjugate gradient method. This method is very versatile in terms of contact geometry and contact interaction, whether non-adhesive or adhesive. The discussion incorporates a few model examples relevant to the nanoscale mechanics investigated by intermittent contact resonance AFM (ICR-AFM) on high-aspect-ratio low-k dielectric fins. In such ICR-AFM measurements, a distinct dependence of the contact stiffness on the applied force and the distance from the edges of the fins was observed. These dependences were correctly predicted by the model and used to retrieve the mechanical changes undergone by the fins during fabrication and processing.
ABSTRACT
PURPOSE: To investigate the computational aspects of the prior term in quantitative susceptibility mapping (QSM) by (i) comparing the Gauss-Newton conjugate gradient (GNCG) algorithm, which uses numerical conditioning (i.e., modifies the prior term), with a primal-dual (PD) formulation that avoids this, and (ii) comparing central and forward difference schemes for the discretization of the prior term. THEORY AND METHODS: A spatially continuous formulation of the regularized QSM inversion problem and its PD formulation were derived. The Chambolle-Pock algorithm for PD was implemented and its convergence behavior was compared with that of GNCG for the original QSM problem. Forward and central difference schemes were compared in terms of the presence of checkerboard artifacts. All methods were tested and validated on a gadolinium phantom, ex vivo brain blocks, and in vivo brain MRI data with respect to COSMOS. RESULTS: The PD approach provided a faster convergence rate than GNCG. The GNCG convergence rate slowed considerably with smaller (more accurate) values of the conditioning parameter. Using a forward difference suppressed the checkerboard artifacts in QSM, as compared with the central difference. The accuracy of PD and GNCG was validated based on excellent correlation with COSMOS. CONCLUSIONS: The PD approach with a forward difference for the gradient showed improved convergence and accuracy over the GNCG method using a central difference. Magn Reson Med 78:2416-2427, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
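The difference between the two discretizations can be illustrated with the small sketch below: a central difference is blind to an alternating (checkerboard) pattern, while a forward difference penalizes it; the boundary handling and the rest of the QSM pipeline are not reproduced here.

```python
import numpy as np

def forward_diff(u, axis=0):
    """u[i+1] - u[i], with periodic wrap-around at the last sample (assumed boundary rule)."""
    return np.roll(u, -1, axis=axis) - u

def central_diff(u, axis=0):
    """(u[i+1] - u[i-1]) / 2; couples only samples of the same parity, which is
    what lets checkerboard patterns pass through the prior unpenalized."""
    return 0.5 * (np.roll(u, -1, axis=axis) - np.roll(u, 1, axis=axis))

checkerboard = np.indices((6, 6)).sum(axis=0) % 2      # alternating 0/1 pattern
print(np.abs(forward_diff(checkerboard)).sum())         # large: forward difference penalizes it
print(np.abs(central_diff(checkerboard)).sum())         # zero: central difference is blind to it
```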