Results 1 - 20 of 53
1.
Med Phys ; 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39194293

ABSTRACT

BACKGROUND: Intraoperative 2D quantitative angiography (QA) for intracranial aneurysms (IAs) faces accuracy challenges due to the variability of hand injections. Despite the success of singular value decomposition (SVD) algorithms in reducing biases in computed tomography perfusion (CTP), their application to 2D QA has not been extensively explored. This study seeks to bridge that gap by investigating the potential of SVD-based deconvolution methods in 2D QA, particularly in addressing the variability of injection durations. PURPOSE: Building on the identified limitations of QA, the study aims to adapt SVD-based deconvolution techniques from CTP to QA for IAs. This adaptation seeks to capitalize on the high temporal resolution of QA, despite its two-dimensional nature, to enhance the consistency and accuracy of hemodynamic parameter assessment. The goal is to develop a method that can reliably assess hemodynamic conditions in IAs, independent of injection variables, for improved neurovascular diagnostics. MATERIALS AND METHODS: The study included three internal carotid aneurysm (ICA) cases. Virtual angiograms were generated using computational fluid dynamics (CFD) for three physiologically relevant inlet velocities, simulating different contrast media injection durations. Time-density curves (TDCs) were produced for both the inlet and the aneurysm dome. Several SVD variants, including standard SVD (sSVD) with and without classical Tikhonov regularization, block-circulant SVD (bSVD), and oscillation index SVD (oSVD), were applied to the virtual angiograms to recover the aneurysmal dome impulse response function (IRF) and extract flow-related parameters such as peak height (PH_IRF), area under the curve (AUC_IRF), and mean transit time (MTT). Next, correlations between QA parameters, injection duration, and inlet velocity were assessed for unconvolved and deconvolved data for all SVD methods. Additionally, we performed an in vitro study to complement the in silico investigation: we generated 2D DSA sequences with a flow circuit built around a patient-specific internal carotid artery phantom. These DSA data include real-world factors such as x-ray artifacts, noise, and patient motion. We evaluated QA parameters for the in vitro phantom using the different SVD variants and established correlations between QA parameters, injection duration, and velocity for unconvolved and deconvolved data. RESULTS: The SVD algorithm variants showed strong correlations between flow and deconvolution-adjusted QA parameters. Furthermore, SVD effectively reduced QA parameter variability across injection durations, enhancing the potential of QA parameters in neurovascular disease diagnosis and treatment. CONCLUSION: Implementing SVD-based deconvolution techniques in QA analysis can enhance the precision and reliability of neurovascular diagnostics by effectively reducing the impact of injection duration on hemodynamic parameters.
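
As a concrete illustration of the deconvolution step, the following is a minimal numpy sketch of Tikhonov-regularized SVD (sSVD-style) deconvolution recovering an IRF from an inlet TDC; all signal shapes, the frame rate, and the regularization parameter are illustrative assumptions, not values from the study.

```python
# Hedged sketch: synthetic TDCs and an ad hoc lambda; not the paper's data or code.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 10, 0.1)                     # assumed frame times [s]
dt = t[1] - t[0]
inlet = np.exp(-((t - 2.0) / 0.8) ** 2)       # assumed inlet TDC (injection bolus)
irf_true = np.exp(-t / 1.5) / 1.5             # assumed dome impulse response
dome = np.convolve(inlet, irf_true)[:len(t)] * dt
dome += 0.01 * rng.standard_normal(len(t))    # measurement noise

# Lower-triangular Toeplitz convolution matrix built from the inlet TDC
A = np.zeros((len(t), len(t)))
for i in range(len(t)):
    A[i, :i + 1] = inlet[i::-1] * dt

U, s, Vt = np.linalg.svd(A)
lam = 0.1 * s[0]                              # ad hoc Tikhonov parameter
irf_est = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ dome))

print("PH_IRF :", irf_est.max())              # peak height of the recovered IRF
print("AUC_IRF:", np.trapz(irf_est, t))       # area under the recovered IRF
print("MTT    :", np.trapz(t * irf_est, t) / np.trapz(irf_est, t))
```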

2.
Phys Med Biol ; 69(16)2024 Jul 30.
Article in English | MEDLINE | ID: mdl-38830366

ABSTRACT

Objective. In quantitative dynamic positron emission tomography (PET), time series of images, reflecting the tissue response to the arterial tracer supply, are reconstructed. This response is described by kinetic parameters, which are commonly determined on the basis of the tracer concentration in tissue and the arterial input function. In clinical routine, the latter is estimated by arterial blood sampling and analysis, which is a challenging process; it is therefore attractive to derive it directly from the reconstructed PET images. However, no mathematical analysis exists of whether measurements of the arterial whole-blood activity concentration, and of the concentration of free non-metabolized tracer in the arterial plasma, are necessary for successful kinetic parameter identification. Here we address this problem mathematically. Approach. We analytically consider the identification problem in simultaneous pharmacokinetic modeling of multiple regions of interest of dynamic PET data using the irreversible two-tissue compartment model. In addition, the situation of noisy measurements is addressed using Tikhonov regularization, and numerical simulations with a regularization approach are carried out to illustrate the analytical results in a synthetic application example. Main results. We provide mathematical proofs showing that, under reasonable assumptions, all metabolic tissue parameters can be uniquely identified without requiring additional blood samples to measure the arterial input function. A connection to noisy measurement data is made via a consistency result, showing that exact reconstruction of the ground-truth tissue parameters is stably maintained in the vanishing-noise limit. Furthermore, our numerical experiments suggest that an approximate reconstruction of kinetic parameters according to our analytical results is also possible in practice for moderate noise levels. Significance. The analytical result, which holds in the idealized noiseless scenario, suggests that for irreversible tracers, fully quantitative dynamic PET imaging is in principle possible without costly arterial blood sampling and metabolite analysis.
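
To make the model concrete, here is a small Python sketch that fits the irreversible two-tissue compartment model to a synthetic tissue curve with a Tikhonov penalty on the parameters; the input function, parameter values, and penalty weight are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch: synthetic irreversible 2TC fit with a Tikhonov penalty.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 60, 121)                 # minutes
cp = t * np.exp(-t / 4.0)                   # assumed arterial input function

def tissue_tac(params, t, cp):
    K1, k2, k3 = params
    dt = t[1] - t[0]
    # C1 = K1 * (cp (*) exp(-(k2+k3) t));  C2 = k3 * integral of C1 (irreversible trap)
    c1 = K1 * np.convolve(cp, np.exp(-(k2 + k3) * t))[:len(t)] * dt
    c2 = k3 * np.cumsum(c1) * dt
    return c1 + c2

rng = np.random.default_rng(0)
true = np.array([0.3, 0.4, 0.05])           # assumed ground-truth K1, k2, k3
y = tissue_tac(true, t, cp) + 0.002 * rng.standard_normal(len(t))

lam = 1e-3                                  # Tikhonov weight stabilizing the fit
def residuals(p):
    return np.concatenate([tissue_tac(p, t, cp) - y, np.sqrt(lam) * p])

fit = least_squares(residuals, x0=[0.1, 0.1, 0.1], bounds=(0.0, 2.0))
print("estimated K1, k2, k3:", fit.x.round(3))
```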


Subjects
Image Processing, Computer-Assisted , Models, Biological , Positron-Emission Tomography , Image Processing, Computer-Assisted/methods , Kinetics , Humans
3.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732850

ABSTRACT

Standard beams are commonly used to calibrate strain sensors via load reconstruction models. However, load reconstruction is an ill-posed inverse problem, and the solution often fails to converge, especially when dealing with dynamic loads at different frequencies. To overcome this problem, a piecewise Tikhonov regularization (PTR) method is proposed to reconstruct dynamic loads. The transfer function matrix is built from the denoised excitations and the corresponding responses. After singular value decomposition (SVD), the singular values are divided into submatrices of different sizes using a piecewise function, and the regularization parameters are solved by optimizing the piecewise submatrices. Experimental results show that the MREs of the PTR method are 6.20% at 70 Hz and 5.86% at 80 Hz. The traditional Tikhonov regularization method based on GCV exhibits MREs of 28.44% and 29.61% at 70 Hz and 80 Hz, respectively, whereas the L-curve-based approach yields MREs of 29.98% and 18.42% at the same frequencies. Furthermore, the PREs of the PTR method are 3.54% at 70 Hz and 3.73% at 80 Hz, against PREs of 27.01% and 26.88% for the GCV-based traditional Tikhonov method and 29.50% and 15.56% for the L-curve-based approach. Overall, the proposed method can be applied to load reconstruction across a wide range of frequencies.
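
The following sketch contrasts standard Tikhonov filtering with a piecewise variant that assigns a separate regularization parameter to each block of singular values, in the spirit of the PTR idea; the test matrix, block edges, and parameter values are assumptions, not the paper's method in detail.

```python
# Hedged sketch: per-block regularization of singular values after SVD.
import numpy as np

def piecewise_tikhonov_svd(A, b, lams, edges):
    # lams[i] regularizes singular values with indices edges[i]..edges[i+1]-1
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    lam_vec = np.empty_like(s)
    for lam_i, lo, hi in zip(lams, edges[:-1], edges[1:]):
        lam_vec[lo:hi] = lam_i
    return Vt.T @ ((s / (s**2 + lam_vec**2)) * (U.T @ b))

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -4, 50))  # ill-posed
x_true = np.sin(np.linspace(0, 3, 50))
b = A @ x_true + 1e-3 * rng.standard_normal(200)

# A small lambda for the well-conditioned block, a larger one for the noisy tail
x_pw = piecewise_tikhonov_svd(A, b, lams=[1e-4, 1e-2], edges=[0, 25, 50])
print("relative error:", np.linalg.norm(x_pw - x_true) / np.linalg.norm(x_true))
```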

4.
Comput Optim Appl ; 87(2): 531-569, 2024.
Article in English | MEDLINE | ID: mdl-38357400

ABSTRACT

In this paper we address the classical optimization problem of minimizing a proper, convex, and lower semicontinuous function via a second-order-in-time dynamics combining viscous and Hessian-driven damping with a Tikhonov regularization term. In our analysis we heavily exploit the Moreau envelope of the objective function and its properties, as well as properties of Tikhonov regularization, which we extend to the nonsmooth case. We introduce a setting that simultaneously guarantees fast convergence of the function (and Moreau envelope) values and strong convergence of the trajectories of the system to a minimal-norm solution, i.e., the element of minimal norm among all minimizers of the objective. Moreover, we deduce precise convergence rates of the values for particular choices of parameters. Various numerical examples are included as an illustration of the theoretical results.
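
For orientation, a representative form of such a system (with assumed notation, not copied from the paper) can be written with viscous damping α/t, Hessian-driven damping β, a vanishing Tikhonov coefficient ε(t), and the Moreau envelope f_μ standing in for the nonsmooth objective f:

```latex
% Representative form only; coefficients and notation are assumptions.
\[
  \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t)
  + \beta\,\frac{\mathrm{d}}{\mathrm{d}t}\,\nabla f_{\mu}\!\bigl(x(t)\bigr)
  + \nabla f_{\mu}\!\bigl(x(t)\bigr) + \epsilon(t)\,x(t) = 0,
  \qquad \epsilon(t)\to 0 \ \text{as}\ t\to\infty .
\]
```

The Tikhonov term ε(t)x(t) is what drives the trajectory toward the minimal-norm minimizer as it vanishes.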

5.
Multimed Tools Appl ; : 1-18, 2023 Mar 29.
Article in English | MEDLINE | ID: mdl-37362725

ABSTRACT

Text mining methods typically rely on statistical information to solve text- and language-independent tasks. Methods such as polarity detection based on stochastic patterns and rules require many training samples. Deterministic, non-probabilistic methods, by contrast, are simpler and faster but tend to perform poorly on NLP data. In this article, a fast and efficient deterministic method is proposed. First, the text and labels are transformed into a set of linear equations. Second, Tikhonov regularization, a mathematical technique for solving ill-posed equations, is applied as a deterministic, non-probabilistic solver that incorporates additional assumptions, such as smoothness of the solution, to assign each sentiment word a weight reflecting its semantic information. The efficiency of the proposed method is confirmed on three different cases: the SemEval-2013 competition, the ESWC database, and the Taboada database. An improvement on negative polarity is observed, attributable to the proposed mathematical step. Moreover, the method outperforms common traditional machine learning, stochastic, and fuzzy methods.
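
A minimal sketch of the core idea, transforming labeled sentences into a linear system over word weights and solving it with Tikhonov regularization, is shown below; the toy corpus, labels, and λ are invented for illustration.

```python
# Hedged sketch: word polarity weights from a Tikhonov-regularized linear system.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = ["good great film", "bad awful plot", "great plot", "awful film"]
y = np.array([+1.0, -1.0, +1.0, -1.0])            # hypothetical sentence polarities

vec = CountVectorizer().fit(docs)
A = vec.transform(docs).toarray().astype(float)   # sentences x words

lam = 0.5                                         # assumed smoothness/shrinkage weight
w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
for word, weight in zip(vec.get_feature_names_out(), w):
    print(f"{word:6s} {weight:+.3f}")             # positive/negative word weights
```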

6.
Polymers (Basel) ; 15(4)2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36850241

ABSTRACT

The viscoelastic relaxation spectrum provides deep insight into the complex behavior of polymers. The spectrum is not directly measurable and must be recovered from oscillatory shear or stress relaxation data. This paper deals with the problem of recovering the relaxation spectrum of linear viscoelastic materials from discrete-time, noise-corrupted measurements of the relaxation modulus obtained in the stress relaxation test. A class of robust algorithms is proposed that approximates the continuous spectrum of relaxation frequencies by a finite series of orthonormal functions. A quadratic identification index, which refers to the measured relaxation modulus, is adopted. Since relaxation spectrum identification is an ill-posed inverse problem, Tikhonov regularization combined with generalized cross-validation is used to guarantee the stability of the scheme. It is proved that the accuracy of the spectrum approximation depends on the measurement noise, the regularization parameter, and the proper selection of the basis functions. Series expansions using Laguerre, Legendre, Hermite, and Chebyshev functions are studied as examples. The numerical realization of the scheme by the singular value decomposition technique is discussed, and the resulting computer algorithm is outlined. Numerical calculations on model data and on the relaxation spectrum of a polydisperse polymer are presented. Analytical and numerical studies prove that, by choosing an appropriate model through the selection of orthonormal basis functions from the proposed class and applying the developed least-squares regularized identification algorithm, the relaxation spectrum model can be determined for a wide class of viscoelastic materials. The resulting model is smooth and robust to measurement noise, with small approximation errors, and the identification scheme can be easily implemented in available computing environments.
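
As a toy version of the identification step, the sketch below fits a noisy relaxation modulus with an orthonormal Laguerre series under a Tikhonov penalty; the two-mode Maxwell test data, the time-scale factor, and λ (fixed here rather than chosen by cross-validation) are assumptions.

```python
# Hedged sketch: Laguerre-series fit of the relaxation modulus with Tikhonov.
import numpy as np
from scipy.special import eval_laguerre

rng = np.random.default_rng(2)
t = np.linspace(0.01, 20, 200)
G_true = 3.0 * np.exp(-t / 0.5) + 1.0 * np.exp(-t / 5.0)   # two Maxwell modes
y = G_true + 0.02 * rng.standard_normal(len(t))

a, K = 1.0, 12     # assumed time-scale factor and series length
# Orthonormal Laguerre functions sqrt(2a) * exp(-a t) * L_k(2 a t)
Phi = np.column_stack([np.sqrt(2 * a) * np.exp(-a * t) * eval_laguerre(k, 2 * a * t)
                       for k in range(K)])

lam = 1e-2         # fixed here; the paper selects it by generalized cross-validation
g = np.linalg.solve(Phi.T @ Phi + lam * np.eye(K), Phi.T @ y)
print("RMS fit error:", np.sqrt(np.mean((Phi @ g - G_true) ** 2)).round(4))
```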

7.
Sensors (Basel) ; 23(4)2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850438

ABSTRACT

The electrocardiogram (ECG) is the standard method in clinical practice to non-invasively analyze the electrical activity of the heart from electrodes placed on the body surface. The ECG can provide a cardiologist with relevant information to assess the condition of the heart and the possible presence of cardiac pathology. Nonetheless, the global view of the heart's electrical activity given by the ECG cannot provide fully detailed and localized information about abnormal electrical propagation patterns and the corresponding substrates on the surface of the heart. Electrocardiographic imaging, also known as the inverse problem of electrocardiography, tries to overcome these limitations by non-invasively reconstructing the heart surface potentials from the corresponding body surface potentials and the geometry of the torso and the heart. This problem is ill-posed, and regularization techniques are needed to achieve a stable and accurate solution. The standard approach is to use zero-order Tikhonov regularization with the L-curve approach to choose the optimal value of the regularization parameter. However, different methods have been proposed for computing this optimal value, and regardless of the estimation method used, the result may still be over-regularized or under-regularized. To gain a better understanding of the effects of the choice of regularization parameter value, in this study we first focused on the regularization parameter itself and investigated its influence on the accuracy of the reconstruction of heart surface potentials, assessing reconstruction accuracy with high-precision simultaneous heart and torso recordings from four dogs over a sufficiently large range of parameter values. Secondly, we evaluated the performance of five different methods for estimating the regularization parameter, also in view of the results of the first analysis. Thirdly, we investigated the effect of using a fixed value of the regularization parameter across all reconstructed beats. Accuracy was measured in terms of the quality of reconstruction of the heart surface potentials and the estimation of activation and recovery times, compared with ground-truth recordings from the experimental dog data. Results show that regularization parameter values in the range 0.01-0.03 provide the best accuracy, and that the three best-performing estimation methods (L-curve, Zero-Crossing, and CRESO) give values in this range. Moreover, a fixed value of the regularization parameter achieved performance very similar to the beat-specific values calculated by the different estimation methods. These findings suggest that regularization parameter estimation methods provide accurate reconstruction of heart surface potentials only within specific ranges of parameter values, and that using a fixed regularization parameter value is a valid alternative, especially when computational efficiency or consistency across time is required.
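
The zero-order Tikhonov solution and an L-curve scan over λ can be sketched in a few lines; the transfer matrix here is a random stand-in for a torso model, and the "corner" is located by a crude minimum-distance heuristic rather than a curvature computation.

```python
# Hedged sketch: zero-order Tikhonov with a simple L-curve corner heuristic.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((120, 80))        # stand-in torso transfer matrix
x_true = np.sin(np.linspace(0, 4, 80))    # stand-in heart-surface potentials
b = A @ x_true + 0.05 * rng.standard_normal(120)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def tikhonov(lam):
    x = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))
    return np.linalg.norm(A @ x - b), np.linalg.norm(x)

lams = np.logspace(-4, 1, 60)
rho, eta = np.array([tikhonov(l) for l in lams]).T
# Crude corner: log-log point closest to (min residual norm, min solution norm)
corner = np.argmin(np.hypot(np.log(rho / rho.min()), np.log(eta / eta.min())))
print("lambda at L-curve corner:", lams[corner])
```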


Subjects
Electrocardiography , Heart , Animals , Dogs , Heart/diagnostic imaging , Torso , Electricity , Electrodes
8.
J Environ Radioact ; 256: 107052, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36308943

ABSTRACT

Environmental contamination by radioactive materials can be characterized by in situ gamma surface measurements. During such measurements, the field of view of a gamma detector can be tens of meters wide, resulting in a count rate that integrates the signal over a large measurement support volume/area. The contribution of a specific point to the signal depends on various parameters, such as the height of the detector above the ground surface, the gamma energy, and the detector properties. To improve the spatial resolution of the activity concentration, the contributions of a radionuclide from nearby areas to the count rate of a single measurement must be disentangled. The experiments described in this paper deployed a 2D inversion of in situ gamma spectrometric measurements using a non-negative least-squares-based Tikhonov regularization method. Data were acquired using a portable LaBr3 gamma detector. The detector response as a function of the distance to the radioactive source, required for the inversion process, was simulated using the Monte Carlo N-Particle (MCNP) transport code, and the uncertainty on the activity concentration was calculated using the Monte Carlo error propagation method. The 2D inversion methodology was first satisfactorily assessed for 133Ba and 137Cs source activity distributions using reference pads. It was then applied to a 137Cs-contaminated site, using above-ground in situ gamma spectrometry measurements conducted on a regular grid. The inversion results were compared with results from in situ borehole measurements and laboratory analyses of soil samples. The calculated 137Cs activity concentration levels were compared against the default activity concentration value for exemption or clearance of materials, applicable to any amount and any type of solid material. Using the 2D inversion and Monte Carlo error propagation, a high-spatial-resolution classification of the site in terms of exceedance of the exemption limit could be made. The 137Cs activity concentrations obtained by inversion agreed well with the in situ borehole measurements and the soil samples, showing that 2D inversion is a convenient approach to deconvolve the contributions of radioactive sources from nearby areas within a detector's field of view, increasing the resolution of spatial contamination mapping.
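
The non-negative least-squares-based Tikhonov inversion can be written as an augmented NNLS problem, as in this sketch; the Gaussian detector-response matrix stands in for the MCNP-simulated response, and the hot-spot geometry and λ are invented.

```python
# Hedged sketch: non-negative Tikhonov inversion via an augmented NNLS system,
#   min ||Ax - b||^2 + lam * ||x||^2  subject to  x >= 0.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n = 40
# Stand-in detector response (rows: measurement positions, cols: ground pixels)
A = np.exp(-0.5 * (np.subtract.outer(np.arange(n), np.arange(n)) / 3.0) ** 2)
x_true = np.zeros(n); x_true[10:14] = 5.0        # localized 137Cs hot spot
b = A @ x_true + 0.1 * rng.standard_normal(n)

lam = 0.5
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_est, _ = nnls(A_aug, b_aug)
print("recovered hot-spot activities:", x_est[10:14].round(2))
```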


Subjects
Radiation Monitoring , Spectrometry, Gamma , Spectrometry, Gamma/methods , Radiation Monitoring/methods , Cesium Radioisotopes/analysis , Monte Carlo Method , Soil
9.
Soft comput ; 27(5): 2493-2508, 2023.
Article in English | MEDLINE | ID: mdl-36573103

ABSTRACT

Extreme learning machine (ELM), a type of feedforward neural network, has been widely used to obtain beneficial insights across various disciplines and real-world applications. Despite advantages such as speed and high adaptability, ELM becomes unstable in the presence of multicollinearity, and additional improvements are needed to overcome this; regularization is one of the best choices. Although ridge and Liu regressions have proven effective regularization methods for the ELM algorithm, each has its own characteristics, such as the form of the tuning parameter, the level of shrinkage, or the norm of the coefficients. Instead of focusing on one of these regularization methods, we propose a combination of ridge and Liu regressions in a unified form for the ELM context as a remedy to the aforementioned drawbacks. To investigate the performance of the proposed algorithm, comprehensive comparisons were carried out on various real-world data sets. The results show that the proposed algorithm is more effective, in terms of generalization capability, than ELM and its ridge- and Liu-based variants, RR-ELM and Liu-ELM; its advantage over RR-ELM and Liu-ELM grows as the number of nodes increases. The proposed algorithm also outperforms ELM on all data sets and node numbers, yielding a smaller norm and a smaller standard deviation of the norm. Additionally, the proposed algorithm can be applied to both regression and classification problems.
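
The paper's unified ridge-Liu estimator is not reproduced here, but the sketch below shows an ELM with its two ingredient readouts, a ridge solution and a Liu-type solution built from the OLS weights; the activation, node count, and (k, d) values are assumptions.

```python
# Hedged sketch: ELM readout with ridge and Liu-type regularization.
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (300, 4))
y = np.sin(X).sum(axis=1) + 0.05 * rng.standard_normal(300)

L = 100                                   # hidden nodes (assumed)
W, bias = rng.standard_normal((4, L)), rng.standard_normal(L)
H = np.tanh(X @ W + bias)                 # random hidden-layer outputs

beta_ols = np.linalg.pinv(H) @ y          # plain ELM readout
k, d = 0.1, 0.5                           # assumed ridge and Liu parameters
beta_ridge = np.linalg.solve(H.T @ H + k * np.eye(L), H.T @ y)
beta_liu = np.linalg.solve(H.T @ H + np.eye(L), H.T @ y + d * beta_ols)

for name, beta in [("OLS", beta_ols), ("ridge", beta_ridge), ("Liu", beta_liu)]:
    print(f"{name:5s} ||beta|| = {np.linalg.norm(beta):.3f}")
```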

10.
Sensors (Basel) ; 22(23)2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36501794

ABSTRACT

Imaging tasks today are increasingly shifted toward deep learning-based solutions, and biomedical imaging problems are no exception to this trend. Deep learning is an appealing alternative for such complex imaging tasks, yet although research on deep learning-based solutions continues to thrive, challenges remain that limit their availability in clinical practice. Diffuse optical tomography is a particularly challenging field, since the problem is both ill-posed and ill-conditioned; to obtain a reconstructed image, various regularization-based models and procedures have been developed over the last three decades. In this study, a sensor-to-image neural network for diffuse optical imaging has been developed as an alternative to the existing Tikhonov regularization (TR) method, with a structure different from previous neural network approaches. We focus on realizing a complete image reconstruction function approximation (from sensor to image) by combining multiple deep learning architectures known in the imaging field, which gives more capability to learn than fully connected neural network (FCNN) or convolutional neural network (CNN) architectures alone. Similarly to AUTOMAP, we use the idea of a transformation from the sensor domain to the image domain, together with an encoder that learns a compressed representation of the inputs. Further, a U-net with skip connections is proposed and implemented to extract features and obtain the contrast image. We designed a branching-like network structure that fully supports the ring-scanning measurement system, so it can deal with various types of experimental data. The output images are obtained by multiplying the contrast images with the background coefficients. Our network produces solid performance in both simulation and experimental cases and reliably reconstructs non-synthesized data. Its performance compares favorably with the TR method and FCNN models, and the model can localize inclusions under various conditions. The strategy presented here can be a promising alternative for clinical breast tumor imaging applications.


Subjects
Image Processing, Computer-Assisted , Tomography, Optical , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
11.
Front Mol Biosci ; 9: 915167, 2022.
Article in English | MEDLINE | ID: mdl-35720114

ABSTRACT

Pulsed dipolar electron paramagnetic resonance (PDEPR) spectroscopy experiments measure the dipolar coupling, and therefore nanometer-scale distances and distance distributions, between paramagnetic centers. Of the family of PDEPR experiments, the most commonly used pulse sequence is four-pulse double electron-electron resonance (DEER, also known as PELDOR). There are several ways to analyze DEER data to extract distance distributions, and the choice may appear overwhelming at first. This work reviews and compares six of the available analysis packages and provides a brief getting-started guide for each.

12.
Chemphyschem ; 23(13): e202200012, 2022 07 05.
Article in English | MEDLINE | ID: mdl-35389549

ABSTRACT

Impedance spectroscopy is a powerful characterization method for evaluating the performance of electrochemical systems. However, overlapping signals in the resulting impedance spectra often cause misinterpretation of the data. The distribution of relaxation times (DRT) method overcomes this problem by transferring the impedance data from the frequency domain into the time domain, yielding DRT spectra with increased resolution. Unfortunately, determining the DRT is an ill-posed problem, and appropriate mathematical regularization becomes inevitable to find suitable solutions. The Tikhonov algorithm is a widespread method for computing DRT data, but the boundary constraints it requires can lead to implausible spectra. Therefore, we introduce three alternative algorithms (Gold, Richardson-Lucy, and Sparse Spike) for determining stable DRT solutions and compare their performance. As the promising Sparse Spike deconvolution has limited scope when a single scalar regularization parameter is used, we furthermore replaced the scalar regularization parameter with a vector. The resulting method is able to calculate well-resolved DRT spectra.
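
For reference, the baseline Tikhonov computation of a DRT from a synthetic two-RC impedance spectrum looks roughly like this sketch (the alternative Gold, Richardson-Lucy, and Sparse Spike algorithms are not shown); the spectrum, τ grid, and λ are assumptions.

```python
# Hedged sketch: DRT via Tikhonov least squares on stacked real/imag parts.
import numpy as np
from scipy.signal import find_peaks

f = np.logspace(-1, 5, 60)
w = 2 * np.pi * f
Z = 1.0 / (1 + 1j * w * 1e-3) + 0.5 / (1 + 1j * w * 1e-1)   # two RC elements

tau = np.logspace(-5, 1, 80)                 # relaxation-time grid
K = 1.0 / (1.0 + 1j * np.outer(w, tau))      # kernel of Z(w) = sum g(tau)/(1+jw*tau)

A = np.vstack([K.real, K.imag])
b = np.concatenate([Z.real, Z.imag])
lam = 1e-1
g = np.linalg.solve(A.T @ A + lam * np.eye(len(tau)), A.T @ b)

peaks, _ = find_peaks(g, height=0.2 * g.max())
print("recovered time constants ~", tau[peaks])   # expected near 1e-3 and 1e-1 s
```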


Subjects
Algorithms , Electric Impedance
13.
Materials (Basel) ; 15(5)2022 Feb 26.
Article in English | MEDLINE | ID: mdl-35269008

ABSTRACT

To monitor synthesis processes and characterize nanoparticles for applications, a new method is developed that allows in situ determination of the two-dimensional size distribution and concentration of Au-Ag alloy nanospheroids from their extinction spectrum. Non-negative Tikhonov regularization and the T-matrix method were used to solve the inverse problem. The effects of the two-dimensional size steps, the wavelength range, and measurement errors of the extinction spectrum on the retrieval results were analyzed to verify the feasibility and accuracy of the retrieval algorithm, and comparative analysis identified the size steps and wavelength range that minimize the retrieval error. After adding 0.1% random noise to the extinction spectrum, only a small variation in the retrieval error of the mean size is observed: the error of the mean size remains below 2% and the error of the concentration below 3%. The method is simple, fast, cheap, and nondestructive, and can be applied in situ during nanoparticle growth.

14.
Psychiatry Res Neuroimaging ; 321: 111448, 2022 04.
Article in English | MEDLINE | ID: mdl-35124389

ABSTRACT

This paper introduces a novel algorithm for solving non-Gaussian mixture models of diffusion tensor imaging (DTI). In particular, these models are used for detecting the orientations of white matter fibers in the brain. In our approach, any DT-MRI model is represented mathematically by an under-determined system of linear equations. The proposed algorithm uses an orthogonal matching pursuit (OMP) method coupled with Tikhonov regularization to solve such an under-determined system effectively, resulting in better reconstruction of the fiber orientations. These linear systems depend on the number of gradient directions used for generating the signals and for the reconstruction process. OMP is a greedy iterative algorithm that, at each stage, picks the column of the coefficient matrix with the maximum correlation (projection) with the residual. Using OMP with Tikhonov regularization shows a marked reduction in angular error compared with an existing scheme based on the non-negative least squares (NNLS) method. The proposed work is validated with both artificial simulations and real-data experiments. The reduction in angular error is more pronounced when the angle of separation between the fibers is small.
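
A compact sketch of OMP with a Tikhonov-regularized re-fit of the selected atoms is given below on a generic under-determined system; the dictionary, sparsity level, and λ are illustrative, and the actual DT-MRI signal model is not reproduced.

```python
# Hedged sketch: OMP atom selection + Tikhonov-regularized least-squares re-fit.
import numpy as np

def omp_tikhonov(A, b, n_atoms, lam):
    residual, support, x_s = b.copy(), [], None
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        As = A[:, support]
        x_s = np.linalg.solve(As.T @ As + lam * np.eye(len(support)), As.T @ b)
        residual = b - As @ x_s
    x = np.zeros(A.shape[1]); x[support] = x_s
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 100))                     # under-determined system
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(100); x_true[[7, 42]] = [1.0, -0.8]  # two "fiber" components
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = omp_tikhonov(A, b, n_atoms=2, lam=1e-3)
print("recovered support:", np.nonzero(x_hat)[0])      # expected: [7 42]
```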


Subjects
White Matter , Algorithms , Brain/diagnostic imaging , Diffusion Tensor Imaging/methods , Humans , Magnetic Resonance Imaging/methods , White Matter/diagnostic imaging
15.
Phys Med Biol ; 67(2)2022 01 25.
Article in English | MEDLINE | ID: mdl-35008076

ABSTRACT

Positronium (Ps) lifetime imaging is gaining attention as a way to extract additional biomedical information from positron emission tomography (PET). The lifetime of Ps in vivo can change depending on the physical and chemical environments related to some diseases. Due to limited sensitivity, Ps lifetime imaging may require merging some voxels for statistical accuracy. This paper presents a method for separating the lifetime components within a voxel to avoid the information loss caused by averaging. The mathematical tool for this separation is the inverse Laplace transform (ILT); the authors examined an iterative numerical ILT algorithm using Tikhonov regularization, namely CONTIN, to discriminate a small lifetime difference due to oxygen saturation. The separability makes it possible to merge voxels without missing critical information on whether they contain abnormally long or short lifetime components. The authors conclude that ILT can compensate for the weaknesses of Ps lifetime imaging and extract the maximum amount of information.
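
A toy numerical ILT in the CONTIN spirit can be sketched as a non-negative Tikhonov inversion of a discretized Laplace kernel; note that CONTIN proper penalizes the curvature of the solution, whereas this sketch uses an identity penalty for brevity, and all lifetimes and weights are invented.

```python
# Hedged sketch: non-negative Tikhonov ILT of a two-component decay curve.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
t = np.linspace(0.05, 20, 300)                        # ns
y = 0.7 * np.exp(-t / 1.8) + 0.3 * np.exp(-t / 2.8)   # two nearby lifetimes
y += 0.005 * rng.standard_normal(len(t))

tau = np.linspace(0.5, 6.0, 120)                      # lifetime grid
K = np.exp(-np.outer(t, 1.0 / tau))                   # discretized Laplace kernel

lam = 1e-2                                            # identity penalty (not CONTIN's)
K_aug = np.vstack([K, np.sqrt(lam) * np.eye(len(tau))])
y_aug = np.concatenate([y, np.zeros(len(tau))])
g, _ = nnls(K_aug, y_aug)
print("support of recovered lifetime distribution (ns):",
      tau[g > 0.1 * g.max()].round(2))
```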


Subjects
Algorithms , Tomography, X-Ray Computed , Positron-Emission Tomography
16.
J Environ Radioact ; 243: 106807, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34968949

ABSTRACT

An in situ borehole gamma logging method using a LaBr3 gamma detector has been developed to characterize a 137Cs-contaminated site. The activity-depth distribution of 137Cs was derived by inversion of the in situ measurement data using two different least squares methods: (i) least squares optimization (LSO) and (ii) Tikhonov regularization. The regularization parameter (λ) of the Tikhonov method was estimated using three different approaches: the L-curve, generalized cross-validation (GCV), and a prior-information-based method (PIBM). The inversion method variants were first validated on a 137Cs-contaminated pipe, and in most cases the calculated 137Cs activity was within the acceptable range. The calculated 137Cs activity-depth profiles from in situ measurements were also in good agreement with those obtained from soil sample analysis, with R2 ranging from 0.76 to 0.82. The GCV method for estimating λ performed better than the two other methods in terms of R2 and root mean squared error (RMSE), while the L-curve method resulted in higher RMSE than the other Tikhonov variants, and instability was observed in the activity concentration-depth profile obtained from the LSO method. We therefore recommend Tikhonov regularization with GCV-estimated λ for deriving the activity concentration-depth profile. The studied site showed 137Cs activity concentrations above the exemption limit down to depths of 0.50-0.90 m.
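
The GCV criterion used to pick λ can be evaluated cheaply from one SVD, as in this sketch on a generic ill-conditioned system; the test matrix and λ grid are assumptions.

```python
# Hedged sketch: choosing the Tikhonov parameter by generalized cross-validation.
import numpy as np

def gcv_lambda(A, b, lams):
    # GCV(lam) = n * ||residual||^2 / trace(I - influence matrix)^2, via SVD
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Utb, n = U.T @ b, len(b)
    out_of_range = np.sum(b**2) - np.sum(Utb**2)   # component outside range(A)
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)                 # Tikhonov filter factors
        resid2 = np.sum(((1 - f) * Utb) ** 2) + out_of_range
        scores.append(n * resid2 / (n - np.sum(f)) ** 2)
    return lams[int(np.argmin(scores))]

rng = np.random.default_rng(8)
A = rng.standard_normal((50, 30)) @ np.diag(np.logspace(0, -3, 30))
b = A @ np.ones(30) + 0.01 * rng.standard_normal(50)
print("GCV-selected lambda:", gcv_lambda(A, b, np.logspace(-4, 0, 50)))
```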


Subjects
Radiation Monitoring , Radioactivity , Soil , Spectrometry, Gamma
17.
Math Program ; 189(1-2): 151-186, 2021.
Article in English | MEDLINE | ID: mdl-34720194

ABSTRACT

We investigate the asymptotic properties of the trajectories generated by a second-order dynamical system with Hessian-driven damping and a Tikhonov regularization term in connection with the minimization of a smooth convex function in Hilbert spaces. We obtain fast convergence results for the function values along the trajectories. The Tikhonov regularization term enables the derivation of strong convergence of the trajectory to the minimum-norm minimizer of the objective function.
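
A quick numerical illustration of the min-norm selection effect: integrate such a system on a degenerate quadratic whose minimizers form a line, and watch the Tikhonov term pull the trajectory toward the minimizer of least norm. The coefficients α, β and ε(t) = 1/t² are assumptions for the demo, not values from the paper.

```python
# Hedged sketch: Hessian-damped dynamics with a vanishing Tikhonov term.
import numpy as np
from scipy.integrate import solve_ivp

Q = np.diag([1.0, 0.0])        # singular Hessian: minimizers = {(0, x2)}
grad = lambda x: Q @ x
alpha, beta = 3.0, 0.5

def rhs(t, z):
    x, v = z[:2], z[2:]
    eps = 1.0 / t**2           # vanishing Tikhonov coefficient
    a = -(alpha / t) * v - beta * (Q @ v) - grad(x) - eps * x
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (1.0, 200.0), [2.0, 2.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
# The free coordinate x2 decays like (ln t)/t toward 0, the min-norm choice
print("x(T) ~", sol.y[:2, -1].round(4))
```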

18.
Entropy (Basel) ; 23(11)2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34828178

ABSTRACT

The main difficulty posed by parameter inversion of partial differential equations lies in the presence of numerous local minima in the cost function: inversion fails to converge to the global minimum unless the initial estimate is close to the exact solution. Constraints can improve convergence, but ordinary iterative methods will still become trapped in local minima if the initial guess is far from the exact solution. To overcome this drawback fully, this paper designs a homotopy strategy that makes natural use of constraints. Furthermore, due to the ill-posedness of the inverse problem, standard Tikhonov regularization is incorporated. The efficiency of the method is illustrated by solving the coefficient inversion of the saturation equation in two-phase porous media.
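
The homotopy idea can be demonstrated on a one-dimensional toy misfit: deform an easy convex surrogate into the multimodal objective while warm-starting each stage, with a small Tikhonov term throughout. The functions, weights, and schedule below are invented for illustration and do not reproduce the paper's PDE setting.

```python
# Hedged sketch: homotopy continuation with Tikhonov regularization.
import numpy as np
from scipy.optimize import minimize

F = lambda z: np.sin(3 * z[0]) ** 2 + 0.1 * (z[0] - 2.0) ** 2  # multimodal misfit
G = lambda z: 0.1 * (z[0] - 2.0) ** 2                          # convex surrogate
lam = 1e-3                                                     # Tikhonov weight

x = minimize(G, [0.0]).x                      # stage 0: easy convex problem
for s in np.linspace(0.1, 1.0, 10):
    # Deform G into F, warm-starting from the previous stage's minimizer
    H = lambda z, s=s: (1 - s) * G(z) + s * F(z) + lam * z[0] ** 2
    x = minimize(H, x).x
print("homotopy solution :", x)               # near the global minimizer ~2.09

# Direct minimization from a poor guess typically stalls in a local minimum
print("direct from -2.0  :", minimize(lambda z: F(z) + lam * z[0] ** 2, [-2.0]).x)
```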

19.
Sci Prog ; 104(2): 368504211023691, 2021.
Article in English | MEDLINE | ID: mdl-34100331

ABSTRACT

This paper presents a study of an aero-engine exhaust-gas electrostatic sensor array for estimating the spatial position, charge amount, and velocity of charged particles. First, a mathematical model is established to analyze the induction characteristics and obtain the spatial sensitivity distribution of the sensor array. Then, Tikhonov regularization and compressed sensing are used to estimate a particle's spatial position and charge amount from the obtained sensitivity distribution, and a cross-correlation algorithm is used to determine its velocity. An oil calibration test rig was built to verify the proposed methods, with thirteen spatial positions selected as test points. The estimation errors of spatial position and charge amount are both within 5% when the particles are located in the central area; the errors grow as the particles approach the wall and may exceed 10%. The velocity estimation errors using cross-correlation are all within 2%. An air-gun test rig was further built to simulate high-velocity conditions and to distinguish different kinds of particles, such as metal and non-metal particles.
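
The cross-correlation velocity estimate reduces to finding the lag that maximizes the correlation between two ring signals; in this sketch, the pulse shape, sensor spacing, sample rate, and noise level are invented stand-ins.

```python
# Hedged sketch: transit-time velocity estimation by cross-correlation.
import numpy as np

fs, spacing = 10_000.0, 0.05          # sample rate [Hz], sensor spacing [m]
t = np.arange(0, 0.1, 1 / fs)
pulse = np.exp(-((t - 0.02) / 0.002) ** 2)

delay = 25                            # true delay: 2.5 ms -> 20 m/s
rng = np.random.default_rng(9)
s1 = pulse + 0.05 * rng.standard_normal(len(t))
s2 = np.roll(pulse, delay) + 0.05 * rng.standard_normal(len(t))

xc = np.correlate(s2, s1, mode="full")
lag = np.argmax(xc) - (len(t) - 1)    # lag of the correlation peak
print("estimated velocity [m/s]:", spacing / (lag / fs))
```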

20.
Neuroimage ; 238: 118235, 2021 09.
Article in English | MEDLINE | ID: mdl-34091032

ABSTRACT

Acceleration methods in fMRI aim to reconstruct high fidelity images from under-sampled k-space, allowing fMRI datasets to achieve higher temporal resolution, reduced physiological noise aliasing, and increased statistical degrees of freedom. While low levels of acceleration are typically part of standard fMRI protocols through parallel imaging, there exists the potential for approaches that allow much greater acceleration. One such existing approach is k-t FASTER, which exploits the inherent low-rank nature of fMRI. In this paper, we present a reformulated version of k-t FASTER which includes additional L2 constraints within a low-rank framework. We evaluated the effect of three different constraints against existing low-rank approaches to fMRI reconstruction: Tikhonov constraints, low-resolution priors, and temporal subspace smoothness. The different approaches are separately tested for robustness to under-sampling and thermal noise levels, in both retrospectively and prospectively-undersampled finger-tapping task fMRI data. Reconstruction quality is evaluated by accurate reconstruction of low-rank subspaces and activation maps. The use of L2 constraints was found to achieve consistently improved results, producing high fidelity reconstructions of statistical parameter maps at higher acceleration factors and lower SNR values than existing methods, but at a cost of longer computation time. In particular, the Tikhonov constraint proved very robust across all tested datasets, and the temporal subspace smoothness constraint provided the best reconstruction scores in the prospectively-undersampled dataset. These results demonstrate that regularized low-rank reconstruction of fMRI data can recover functional information at high acceleration factors without the use of any model-based spatial constraints.
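
A toy analogue of Tikhonov-constrained low-rank reconstruction, using a random sampling mask in place of k-space encoding, is sketched below: alternating ridge-regularized updates of the spatial and temporal factors. The dimensions, rank, sampling fraction, and λ are assumptions, not the k-t FASTER implementation.

```python
# Hedged sketch: low-rank completion with Tikhonov (L2) constraints on factors.
import numpy as np

rng = np.random.default_rng(10)
ns, nt, r = 60, 120, 3
X = rng.standard_normal((ns, r)) @ rng.standard_normal((r, nt))  # low-rank truth
M = rng.random((ns, nt)) < 0.35                                  # sampling mask
Y = np.where(M, X + 0.01 * rng.standard_normal((ns, nt)), 0.0)

lam = 1e-2
U, V = rng.standard_normal((ns, r)), rng.standard_normal((nt, r))
for _ in range(50):
    for i in range(ns):                       # ridge update of spatial factor rows
        m = M[i]; Vm = V[m]
        U[i] = np.linalg.solve(Vm.T @ Vm + lam * np.eye(r), Vm.T @ Y[i, m])
    for j in range(nt):                       # ridge update of temporal factor rows
        m = M[:, j]; Um = U[m]
        V[j] = np.linalg.solve(Um.T @ Um + lam * np.eye(r), Um.T @ Y[m, j])

err = np.linalg.norm(U @ V.T - X) / np.linalg.norm(X)
print("relative recovery error:", round(err, 4))
```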


Subjects
Functional Neuroimaging/methods , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Acceleration , Datasets as Topic , Humans , Nonlinear Dynamics , Prospective Studies , Retrospective Studies