Results 1 - 20 of 54
1.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732850

ABSTRACT

Standard beams are mainly used for the calibration of strain sensors via their load reconstruction models. However, as an ill-posed inverse problem, the solution to these models often fails to converge, especially when dealing with dynamic loads of different frequencies. To overcome this problem, a piecewise Tikhonov regularization (PTR) method is proposed to reconstruct dynamic loads. The transfer function matrix is built from both the denoised excitations and the corresponding responses. After singular value decomposition (SVD), the singular values are divided into submatrices of different sizes by means of a piecewise function, and the regularization parameters are obtained by optimizing over these piecewise submatrices. The experimental results show that the MREs of the PTR method are 6.20% at 70 Hz and 5.86% at 80 Hz. The traditional Tikhonov regularization method based on GCV exhibits MREs of 28.44% and 29.61% at 70 Hz and 80 Hz, respectively, whereas the L-curve-based approach yields MREs of 29.98% and 18.42% at the same frequencies. Furthermore, the PREs of the PTR method are 3.54% at 70 Hz and 3.73% at 80 Hz, against PREs of 27.01% and 26.88% for the GCV-based method and 29.50% and 15.56% for the L-curve-based approach. Overall, the proposed method can be applied to load reconstruction across a wide range of frequencies.
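The SVD-plus-Tikhonov machinery behind this abstract can be sketched in a few lines of numpy. Below, `tikhonov_svd` is the standard filter-factor form, and `piecewise_tikhonov_svd` is an illustrative variant in the spirit of PTR, with one parameter per singular-value segment; the test matrix, segment boundaries, and parameter values are hypothetical, not the authors' implementation.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    # Standard Tikhonov solution via SVD filter factors
    # f_i = s_i^2 / (s_i^2 + lam^2).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

def piecewise_tikhonov_svd(A, b, lams, segments):
    # Illustrative piecewise variant: each singular-value segment
    # (lo, hi) gets its own regularization parameter, as in the PTR idea.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    lam_vec = np.empty_like(s)
    for lam, (lo, hi) in zip(lams, segments):
        lam_vec[lo:hi] = lam
    f = s**2 / (s**2 + lam_vec**2)
    return Vt.T @ (f * (U.T @ b) / s)

# Small ill-conditioned test problem (hypothetical data).
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-4 * rng.standard_normal(40)
x_tik = tikhonov_svd(A, b, 1e-3)
x_ptr = piecewise_tikhonov_svd(A, b, [1e-6, 1e-3], [(0, 6), (6, 12)])
```

With `lam = 0` the filter factors are all 1 and the solution reduces to the ordinary least-squares (pseudoinverse) solution, which is the sense in which the filter factors "switch off" noise-dominated singular directions.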

2.
Sensors (Basel) ; 23(4)2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850438

ABSTRACT

The electrocardiogram (ECG) is the standard method in clinical practice to non-invasively analyze the electrical activity of the heart, from electrodes placed on the body's surface. The ECG can provide a cardiologist with relevant information to assess the condition of the heart and the possible presence of cardiac pathology. Nonetheless, the global view of the heart's electrical activity given by the ECG cannot provide fully detailed and localized information about abnormal electrical propagation patterns and corresponding substrates on the surface of the heart. Electrocardiographic imaging, also known as the inverse problem in electrocardiography, tries to overcome these limitations by non-invasively reconstructing the heart surface potentials, starting from the corresponding body surface potentials, and the geometry of the torso and the heart. This problem is ill-posed, and regularization techniques are needed to achieve a stable and accurate solution. The standard approach is to use zero-order Tikhonov regularization and the L-curve approach to choose the optimal value for the regularization parameter. However, different methods have been proposed for computing the optimal value of the regularization parameter. Moreover, regardless of the estimation method used, this may still lead to over-regularization or under-regularization. In order to gain a better understanding of the effects of the choice of regularization parameter value, in this study, we first focused on the regularization parameter itself, and investigated its influence on the accuracy of the reconstruction of heart surface potentials, by assessing the reconstruction accuracy with high-precision simultaneous heart and torso recordings from four dogs. For this, we analyzed a sufficiently large range of parameter values. Secondly, we evaluated the performance of five different methods for the estimation of the regularization parameter, also in view of the results of the first analysis. 
Thirdly, we investigated the effect of using a fixed value of the regularization parameter across all reconstructed beats. Accuracy was measured in terms of the quality of reconstruction of the heart surface potentials and estimation of the activation and recovery times, when compared with ground truth recordings from the experimental dog data. Results show that values of the regularization parameter in the range (0.01-0.03) provide the best accuracy, and that the three best-performing estimation methods (L-Curve, Zero-Crossing, and CRESO) give values in this range. Moreover, a fixed value of the regularization parameter could achieve very similar performance to the beat-specific parameter values calculated by the different estimation methods. These findings are relevant as they suggest that regularization parameter estimation methods may provide the accurate reconstruction of heart surface potentials only for specific ranges of regularization parameter values, and that using a fixed value of the regularization parameter may represent a valid alternative, especially when computational efficiency or consistency across time is required.
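The L-curve criterion evaluated above can be sketched compactly: sweep the regularization parameter, trace the curve of (log residual norm, log solution norm), and pick the point of maximum curvature, the "corner". A minimal numpy illustration (the test problem and lambda grid are hypothetical, and real implementations use more careful curvature estimates):

```python
import numpy as np

def lcurve_corner(A, b, lams):
    # For each lambda, get the Tikhonov residual norm (rho) and solution
    # norm (eta) cheaply from the SVD, then pick the interior point of
    # maximum discrete curvature of the (log rho, log eta) curve.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    rho = np.array([np.linalg.norm((lam**2 / (s**2 + lam**2)) * beta) for lam in lams])
    eta = np.array([np.linalg.norm((s / (s**2 + lam**2)) * beta) for lam in lams])
    lr, le = np.log(rho), np.log(eta)
    d1r, d1e = np.gradient(lr), np.gradient(le)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lams[int(np.argmax(kappa[1:-1])) + 1]  # exclude endpoints

# Hypothetical ill-conditioned test problem.
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 50), 12, increasing=True)
b = A @ rng.standard_normal(12) + 1e-3 * rng.standard_normal(50)
lam_corner = lcurve_corner(A, b, np.logspace(-8, 1, 60))
```

The corner sits between the flat branch (residual dominated by noise, over-regularized) and the steep branch (solution norm exploding, under-regularized), which is why the abstract's finding of a narrow "good" parameter range is plausible.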


Subjects
Electrocardiography, Heart, Animals, Dogs, Heart/diagnostic imaging, Torso, Electricity, Electrodes
3.
Chemphyschem ; 23(13): e202200012, 2022 07 05.
Article in English | MEDLINE | ID: mdl-35389549

ABSTRACT

Impedance spectroscopy is a powerful characterization method to evaluate the performance of electrochemical systems. However, overlapping signals in the resulting impedance spectra often cause misinterpretation of the data. The distribution of relaxation times (DRT) method overcomes this problem by transferring the impedance data from the frequency domain into the time domain, which yields DRT spectra with increased resolution. Unfortunately, the determination of the DRT is an ill-posed problem, and appropriate mathematical regularization becomes inevitable to find suitable solutions. The Tikhonov algorithm is a widespread method for computing DRT data, but it can lead to implausible spectra due to the necessary boundaries. Therefore, we introduce the application of three alternative algorithms (Gold, Richardson-Lucy, Sparse Spike) for the determination of stable DRT solutions and compare their performances. As the promising Sparse Spike deconvolution has a limited scope when using a single scalar regularization parameter, we furthermore replaced the scalar regularization parameter with a vector. The resulting method is able to calculate well-resolved DRT spectra.
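A toy version of Tikhonov-regularized DRT fitting can clarify what is being solved: the measured impedance is expanded over a grid of relaxation times and the (ill-posed) coefficient recovery is stabilized by a ridge penalty. The sketch below uses only the real part of the impedance and a synthetic single-RC spectrum; real DRT codes also fit the imaginary part, enforce non-negativity, and tune the parameter, so everything here is illustrative.

```python
import numpy as np

def drt_tikhonov(freqs, z_real, taus, lam):
    # Real-part DRT kernel: Z'(w) = R_inf + sum_j g_j / (1 + (w*tau_j)^2),
    # discretized on a log-spaced tau grid and solved as a ridge
    # (Tikhonov) regression; R_inf itself is left unpenalized.
    w = 2.0 * np.pi * np.asarray(freqs)
    K = 1.0 / (1.0 + np.outer(w, taus) ** 2)
    A = np.hstack([np.ones((len(w), 1)), K])   # first column models R_inf
    reg = lam * np.eye(A.shape[1])
    reg[0, 0] = 0.0
    x = np.linalg.solve(A.T @ A + reg, A.T @ np.asarray(z_real))
    return x[0], x[1:]                          # R_inf, distribution g

# Single-RC toy spectrum: R_inf = 0.1, R = 1.0, tau = 1e-2 (on the grid).
freqs = np.logspace(-2, 4, 50)
taus = np.logspace(-5, 1, 13)
z_real = 0.1 + 1.0 / (1.0 + (2.0 * np.pi * freqs * 1e-2) ** 2)
r_inf, g = drt_tikhonov(freqs, z_real, taus, 1e-4)
z_fit = r_inf + (1.0 / (1.0 + np.outer(2.0 * np.pi * freqs, taus) ** 2)) @ g
```

Replacing the scalar `lam` with a per-coefficient vector, as the abstract proposes for Sparse Spike deconvolution, amounts to putting a diagonal matrix in place of `lam * np.eye(...)`.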


Subjects
Algorithms, Electric Impedance
4.
Sensors (Basel) ; 22(23)2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36501794

ABSTRACT

Imaging tasks today are increasingly being shifted toward deep learning-based solutions, and biomedical imaging problems are no exception to this trend. Although research on deep learning-based solutions continues to thrive, challenges remain that limit their availability in clinical practice. Diffuse optical tomography is a particularly challenging field, since the problem is both ill-posed and ill-conditioned, and various regularization-based models and procedures have been developed over the last three decades to obtain a reconstructed image. In this study, a sensor-to-image neural network for diffuse optical imaging is developed as an alternative to the existing Tikhonov regularization (TR) method, with a different structure from previous neural network approaches. We focus on approximating the complete image reconstruction function (from sensor to image) by combining multiple deep learning architectures known in the imaging field, giving greater learning capacity than fully connected neural network (FCNN) and/or convolutional neural network (CNN) architectures alone. We adopt the sensor-to-image-domain transformation idea of AUTOMAP and use an encoder to learn a compressed representation of the inputs. Further, a U-net with skip connections is proposed and implemented to extract features and obtain the contrast image. We designed a branching-like network structure that fully supports the ring-scanning measurement system, so the network can handle various types of experimental data. The output images are obtained by multiplying the contrast images with the background coefficients. Our network achieves good performance in both simulation and experimental cases and proves reliable in reconstructing non-synthesized data. Its performance compared favorably with the results of the TR method and FCNN models, and the proposed model can localize inclusions under various conditions. The strategy presented here is a promising alternative for clinical breast tumor imaging applications.


Subjects
Computer-Assisted Image Processing, Optical Tomography, Computer-Assisted Image Processing/methods, Computer Neural Networks
5.
Neuroimage ; 238: 118235, 2021 09.
Article in English | MEDLINE | ID: mdl-34091032

ABSTRACT

Acceleration methods in fMRI aim to reconstruct high fidelity images from under-sampled k-space, allowing fMRI datasets to achieve higher temporal resolution, reduced physiological noise aliasing, and increased statistical degrees of freedom. While low levels of acceleration are typically part of standard fMRI protocols through parallel imaging, there exists the potential for approaches that allow much greater acceleration. One such existing approach is k-t FASTER, which exploits the inherent low-rank nature of fMRI. In this paper, we present a reformulated version of k-t FASTER which includes additional L2 constraints within a low-rank framework. We evaluated the effect of three different constraints against existing low-rank approaches to fMRI reconstruction: Tikhonov constraints, low-resolution priors, and temporal subspace smoothness. The different approaches are separately tested for robustness to under-sampling and thermal noise levels, in both retrospectively and prospectively-undersampled finger-tapping task fMRI data. Reconstruction quality is evaluated by accurate reconstruction of low-rank subspaces and activation maps. The use of L2 constraints was found to achieve consistently improved results, producing high fidelity reconstructions of statistical parameter maps at higher acceleration factors and lower SNR values than existing methods, but at a cost of longer computation time. In particular, the Tikhonov constraint proved very robust across all tested datasets, and the temporal subspace smoothness constraint provided the best reconstruction scores in the prospectively-undersampled dataset. These results demonstrate that regularized low-rank reconstruction of fMRI data can recover functional information at high acceleration factors without the use of any model-based spatial constraints.


Subjects
Functional Neuroimaging/methods, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Acceleration, Datasets as Topic, Humans, Nonlinear Dynamics, Prospective Studies, Retrospective Studies
6.
Math Program ; 189(1-2): 151-186, 2021.
Article in English | MEDLINE | ID: mdl-34720194

ABSTRACT

We investigate the asymptotic properties of the trajectories generated by a second-order dynamical system with Hessian driven damping and a Tikhonov regularization term in connection with the minimization of a smooth convex function in Hilbert spaces. We obtain fast convergence results for the function values along the trajectories. The Tikhonov regularization term enables the derivation of strong convergence results of the trajectory to the minimizer of the objective function of minimum norm.
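For readers unfamiliar with this line of work, one representative form of such a dynamical system (following the Attouch-Bot style; the coefficients here are illustrative and the paper's exact system may differ) is

```latex
\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t)
  + \beta\,\nabla^{2} f\bigl(x(t)\bigr)\,\dot{x}(t)
  + \nabla f\bigl(x(t)\bigr) + \epsilon(t)\,x(t) = 0,
```

where the Hessian-driven term damps oscillations, and the Tikhonov term ε(t)x(t), with ε(t) → 0 slowly enough, is what forces strong convergence of x(t) to the minimum-norm minimizer rather than mere weak convergence.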

7.
Entropy (Basel) ; 23(11)2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34828178

ABSTRACT

The main difficulty posed by the parameter inversion of partial differential equations lies in the presence of numerous local minima in the cost function. Inversion fails to converge to the global minimum unless the initial estimate is close to the exact solution. Constraints can improve convergence, but ordinary iterative methods will still become trapped in local minima if the initial guess is far from the exact solution. To overcome this drawback fully, this paper designs a homotopy strategy that makes natural use of the constraints. Furthermore, due to the ill-posedness of the inverse problem, standard Tikhonov regularization is incorporated. The efficiency of the method is illustrated by solving the coefficient inversion of the saturation equation in two-phase porous media.

8.
Sensors (Basel) ; 20(16)2020 Aug 06.
Article in English | MEDLINE | ID: mdl-32781628

ABSTRACT

To address the miniaturization of the spectral imaging system required by a mounted platform and to overcome the low luminous flux caused by current spectroscopic technology, we propose a method for the multichannel measurement of spectra using a broadband filter in this work. The broadband filter is placed in front of a lens, and the spectral absorption characteristics of the broadband filter are used to achieve the modulation of the incident spectrum of the detection target and to establish a mathematical model for the detection of the target. The spectral and spatial information of the target can be obtained by acquiring data using a push-broom method and reconstructing the spectrum using the GCV-based Tikhonov regularization algorithm. In this work, we compare the accuracy of the reconstructed spectra using the least-squares method and the Tikhonov algorithm based on the L-curve. The effect of errors in the spectral modulation function on the accuracy of the reconstructed spectra is analyzed. We also analyze the effect of the number of overdetermined equations on the accuracy of the reconstructed spectra and consider the effect of detector noise on the spectral recovery. A comparison between the known data cubes and our simulation results shows that the spectral image quality based on broadband filter reduction is better, which validates the feasibility of the method. The proposed method of combining broadband filter-based spectroscopy with a panchromatic imaging process for measurement modulation rather than spectroscopic modulation provides a new approach to spectral imaging.
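The GCV-based parameter choice used above minimizes the generalized cross-validation score over a lambda grid, which is cheap once the system matrix has been factored by SVD. A small numpy sketch (the test problem is hypothetical, standing in for the spectral-modulation system):

```python
import numpy as np

def gcv_lambda(A, b, lams):
    # GCV score: G(lam) = ||A x_lam - b||^2 / (m - sum_i f_i)^2, where
    # f_i = s_i^2 / (s_i^2 + lam^2) are Tikhonov filter factors and
    # (m - sum f_i) plays the role of residual degrees of freedom.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    b_perp2 = max(float(b @ b - beta @ beta), 0.0)  # part of b outside range(A)
    m = len(b)
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)
        resid2 = np.sum(((1.0 - f) * beta) ** 2) + b_perp2
        scores.append(resid2 / (m - np.sum(f)) ** 2)
    return lams[int(np.argmin(scores))]

def tikhonov_solve(A, b, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

# Hypothetical ill-conditioned test problem.
rng = np.random.default_rng(2)
A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-3 * rng.standard_normal(40)
lams = np.logspace(-9, 1, 80)
lam_gcv = gcv_lambda(A, b, lams)
err_gcv = np.linalg.norm(tikhonov_solve(A, b, lam_gcv) - x_true)
err_none = np.linalg.norm(tikhonov_solve(A, b, lams[0]) - x_true)
```

Unlike the L-curve, GCV needs no curvature estimate, but it can under-smooth on correlated noise, which is consistent with the two criteria giving different reconstruction accuracies in this abstract.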

9.
Magn Reson Med ; 81(4): 2588-2599, 2019 04.
Article in English | MEDLINE | ID: mdl-30536764

ABSTRACT

PURPOSE: To quantitatively evaluate a superresolution technique for 3D, one-millimeter isotropic diffusion-weighted imaging (DWI) of the whole breasts. METHODS: Isotropic 3D DWI datasets are obtained using a combination of (i) a readout-segmented diffusion-weighted echo-planar imaging (DW-EPI) sequence (rs-EPI), providing high in-plane resolution, and (ii) a superresolution (SR) strategy, which consists of acquiring 3 datasets with thick slices (3 mm) and 1-mm shifts in the slice direction, and combining them into a 1 × 1 × 1 mm³ dataset using a dedicated reconstruction. Two SR reconstruction schemes were investigated, based on different regularization schemes: conventional Tikhonov or Beltrami (an edge-preserving constraint). The proposed SR strategy was compared to native 1 × 1 × 1 mm³ acquisitions (i.e. with 1-mm slice thickness) in 8 healthy subjects, in terms of signal-to-noise ratio (SNR) efficiency, using a theoretical framework, Monte Carlo simulations and region-of-interest (ROI) measurements, and image sharpness metrics. Apparent diffusion coefficient (ADC) values in normal breast tissue were also compared. RESULTS: The SR images resulted in an SNR gain above 3 compared to native 1 × 1 × 1 mm³ using the same acquisition duration (acquisition gain 3 and reconstruction gain >1). Beltrami-SR provided the best results in terms of SNR and image sharpness. The ADC values in normal breast measured from Beltrami-SR were preserved compared to low-resolution images (1.91 versus 1.97 × 10⁻³ mm²/s, P = .1). CONCLUSION: A combination of rs-EPI and SR allows 3D, 1-mm isotropic breast DWI data to be obtained with better SNR than a native 1-mm isotropic acquisition. The proposed DWI protocol might be of interest for breast cancer monitoring/screening without injection.


Subjects
Breast Neoplasms/diagnostic imaging, Breast/diagnostic imaging, Diffusion Magnetic Resonance Imaging, Echo-Planar Imaging/methods, Adult, Factual Databases, Female, Healthy Volunteers, Humans, Computer-Assisted Image Processing/methods, Middle Aged, Monte Carlo Method, Signal-to-Noise Ratio
10.
Optim Methods Softw ; 34(3): 489-514, 2019.
Article in English | MEDLINE | ID: mdl-31057305

ABSTRACT

Proximal splitting algorithms for monotone inclusions (and convex optimization problems) in Hilbert spaces share the common feature of guaranteeing, in general, only weak convergence of the generated sequences to a solution. In order to achieve strong convergence, one usually needs to impose more restrictive properties on the involved operators, such as strong monotonicity (respectively, strong convexity for optimization problems). In this paper, we propose a modified Krasnosel'skii-Mann algorithm for determining a fixed point of a nonexpansive mapping and show strong convergence of the iteratively generated sequence to the minimal-norm solution of the problem. Relying on this, we derive a forward-backward and a Douglas-Rachford algorithm, both endowed with Tikhonov regularization terms, which generate iterates that converge strongly to the minimal-norm solution of the set of zeros of the sum of two maximally monotone operators. Furthermore, we formulate strongly convergent primal-dual algorithms of forward-backward and Douglas-Rachford type for highly structured monotone inclusion problems involving parallel sums and compositions with linear operators. The resulting iterative schemes are particularized to the solving of convex minimization problems. The theoretical results are illustrated by numerical experiments on the split feasibility problem in infinite-dimensional spaces.

11.
Neuroimage ; 179: 166-175, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29906634

ABSTRACT

A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods.


Subjects
Brain Mapping/methods, Computer-Assisted Image Processing/methods, Adult, Algorithms, Artifacts, Female, Head, Humans, Magnetic Resonance Imaging/methods, Male
12.
Biomed Eng Online ; 17(1): 3, 2018 Jan 15.
Article in English | MEDLINE | ID: mdl-29335011

ABSTRACT

BACKGROUND: The multiple-breath washout (MBW) can provide information about the distribution of ventilation-to-volume (v/V) ratios in the lungs. However, the classical, all-parallel model may return skewed results due to the mixing effect of a common dead space. The aim of this work is to examine whether a novel mathematical model and algorithm can estimate the v/V distribution of a physical model, and to compare its results with those of the classical model. The novel model takes into account a dead space in series with the parallel ventilated compartments, allows for variable tidal volume (VT) and end-expiratory lung volume (EELV), and does not require an ideal step change of the inert gas concentration. METHODS: Two physical models with preset v/V units and a common series dead space (vd) were built and mechanically ventilated. The models underwent MBW with N2 as the inert gas, throughout which flow and N2 concentration signals were acquired. The distribution of v/V was estimated, via nonnegative least squares with Tikhonov regularization, using both the classical all-parallel model (with and without correction for the non-ideal inspiratory N2 step) and the new generalized model, which includes breath-by-breath vd estimates given by the Fowler method (with and without constrained VT and EELV). RESULTS: The v/V distributions estimated by the generalized model with constrained EELV and VT were practically coincident with the actual v/V distribution for both physical models. The v/V distributions calculated with the classical model were shifted leftwards and broader than the reference. CONCLUSIONS: The proposed model and algorithm provided better estimates of v/V than the classical model, particularly with constrained VT and EELV.
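The estimation step named here, nonnegative least squares with Tikhonov regularization, is commonly implemented by augmenting the system and handing it to a plain NNLS solver: minimizing ||Ax - b||² + λ²||x||² subject to x ≥ 0 is the same as ordinary NNLS on the stacked matrix [A; λI]. A minimal sketch using `scipy.optimize.nnls` (the toy system is hypothetical, not the washout model):

```python
import numpy as np
from scipy.optimize import nnls

def nnls_tikhonov(A, b, lam):
    # min ||A x - b||^2 + lam^2 ||x||^2  s.t. x >= 0, rewritten as an
    # ordinary NNLS problem on the augmented system [A; lam*I], [b; 0].
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, _ = nnls(A_aug, b_aug)
    return x

# Toy example: on an identity system the solver simply clips negatives.
x = nnls_tikhonov(np.eye(3), np.array([1.0, -0.5, 2.0]), 1e-6)
```

The non-negativity constraint is what makes the recovered v/V distribution physically interpretable; the Tikhonov rows keep the ill-posed part of the fit from oscillating.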


Subjects
Biological Models, Artificial Respiration, Respiration, Exhalation/physiology, Nitrogen/metabolism, Tidal Volume
13.
Sensors (Basel) ; 18(12)2018 Dec 16.
Article in English | MEDLINE | ID: mdl-30558375

ABSTRACT

Cables are the main load-bearing structural components of long-span bridges, such as suspension bridges and cable-stayed bridges. When relative slip occurs among the wires in a cable, the local bending stiffness of the cable will significantly decrease, and the cable enters a local interlayer slip damage state. The decrease in the local bending stiffness caused by the local interlayer slip damage to the cable is symmetric or approximately symmetric for multiple elements at both the fixed end and the external load position. An eigenpair sensitivity identification method is introduced in this study to identify the interlayer slip damage to the cable. First, an eigenparameter sensitivity calculation formula is deduced. Second, the cable is discretized as a mass-spring-damping structural system considering stiffness and damping, and the magnitude of the cable interlayer slip damage is simulated based on the degree of stiffness reduction. The Tikhonov regularization method is introduced to solve the damage identification equation of the inverse problem, and artificial white noise is introduced to evaluate the robustness of the method to noise. Numerical examples of stayed cables are investigated to illustrate the efficiency and accuracy of the method proposed in this study.

14.
J Chemom ; 31(4)2017 Apr.
Article in English | MEDLINE | ID: mdl-30369716

ABSTRACT

Tikhonov regularization was proposed for multivariate calibration by Andries and Kalivas [1]. We use this framework for modeling the statistical association between spectroscopy data and a scalar outcome. In both the calibration and regression settings this regularization process has advantages over methods of spectral pre-processing and dimension-reduction approaches such as feature extraction or principal component regression. We propose an extension of this penalized regression framework by adaptively refining the penalty term to optimally focus the regularization process. We illustrate the approach using simulated spectra and compare it with other penalized regression models and with a two-step method that first pre-processes the spectra then fits a dimension-reduced model using the processed data. The methods are also applied to magnetic resonance spectroscopy data to identify brain metabolites that are associated with cognitive function.
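The adaptive penalty refinement described here amounts to Tikhonov regression with a per-coefficient penalty weight rather than a single scalar. A generic numpy sketch (the weights below are fixed by hand for illustration; the authors' refinement rule is data-driven and not reproduced here):

```python
import numpy as np

def weighted_ridge(X, y, weights):
    # Solve min ||X b - y||^2 + sum_j (w_j * b_j)^2, i.e. Tikhonov with
    # a diagonal penalty matrix; a larger w_j shrinks coefficient j harder.
    P = np.diag(np.asarray(weights, dtype=float) ** 2)
    return np.linalg.solve(X.T @ X + P, X.T @ y)

# Hypothetical noiseless example.
rng = np.random.default_rng(3)
X = rng.standard_normal((30, 3))
beta_true = np.array([2.0, -1.0, 1.5])
y = X @ beta_true
b_flat = weighted_ridge(X, y, [0.0, 0.0, 0.0])    # plain least squares
b_shrunk = weighted_ridge(X, y, [0.0, 0.0, 1e4])  # shrink only the 3rd coefficient
```

Focusing the penalty on selected coefficients (here, the third) while leaving others untouched is the basic mechanism by which a refined penalty can emphasize informative spectral regions.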

15.
J Xray Sci Technol ; 25(1): 113-134, 2017.
Article in English | MEDLINE | ID: mdl-27834789

ABSTRACT

Singular value decomposition (SVD)-based 2D image reconstruction methods are developed and evaluated for a broad class of inverse problems for which there are no analytical solutions. The proposed methods are fast and accurate for reconstructing images in a non-iterative fashion. The multi-resolution strategy is adopted to reduce the size of the system matrix to reconstruct large images using limited memory capacity. A modified high-contrast Shepp-Logan phantom, a low-contrast FORBILD head phantom, and a physical phantom are employed to evaluate the proposed methods with different system configurations. The results show that the SVD methods can accurately reconstruct images from standard scan and interior scan projections and that they outperform other benchmark methods. The general SVD method outperforms the other SVD methods. The truncated SVD and Tikhonov regularized SVD methods accurately reconstruct a region-of-interest (ROI) from an internal scan with a known sub-region inside the ROI. Furthermore, the SVD methods are much faster and more flexible than the benchmark algorithms, especially in the ROI reconstructions in our experiments.
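The truncated-SVD reconstruction compared above keeps only the k largest singular values, discarding the small ones where measurement noise is amplified most. A minimal numpy sketch (toy system, not a CT system matrix):

```python
import numpy as np

def tsvd_solve(A, b, k):
    # Truncated-SVD solution: invert only the k largest singular values;
    # the discarded small singular values are where noise blows up.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Hypothetical consistent system.
rng = np.random.default_rng(4)
A = rng.standard_normal((12, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_full = tsvd_solve(A, b, 5)   # full rank: recovers x_true exactly
x_trunc = tsvd_solve(A, b, 3)  # rank-3 regularized solution
```

Tikhonov-regularized SVD differs only in replacing the hard cutoff with smooth filter factors s²/(s² + λ²), which is why the two methods behave similarly in the ROI reconstructions described above.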


Subjects
Algorithms, Computer-Assisted Image Processing/methods, X-Ray Computed Tomography/methods, Humans, Biological Models, Imaging Phantoms
16.
Biomed Eng Online ; 15(1): 89, 2016 Aug 02.
Article in English | MEDLINE | ID: mdl-27480332

ABSTRACT

BACKGROUND: This work presents a generalized technique to estimate pulmonary ventilation-to-volume (v/V) distributions using the multiple-breath nitrogen washout, in which both tidal volume (VT) and end-expiratory lung volume (EELV) are allowed to vary during the maneuver. In addition, the volume of the series dead space (vd), unlike in the classical model, is considered a common series unit connected to a set of parallel alveolar units. METHODS: The numerical solution for simulated data, either error-free or with the N2 measurement contaminated by additive Gaussian random noise of 3 or 5% standard deviation, was tested under several conditions in a computational model constituted by 50 alveolar units with unimodal and bimodal distributions of v/V. Non-negative least squares regression with Tikhonov regularization was employed for parameter retrieval. The solution was obtained under either unconstrained or constrained (VT, EELV and vd) conditions. The Tikhonov gain was fixed or estimated, and a weighting matrix (WM) was optionally applied. The quality of estimation was evaluated by the sum of squared errors (SSE) between reference and recovered distributions and by the deviations of the first three moments calculated for both distributions. Additionally, a shape classification method was tested to identify the solution as unimodal or bimodal, by counting the number of shape agreements over 1000 repetitions. RESULTS: The accuracy of the results showed a strong dependence on the noise amplitude. The best algorithm for SSE and moments included the constrained and WM solvers, whereas shape agreement improved without the WM, reaching 97.2% for unimodal and 90.0% for bimodal distributions in the highest-noise condition. CONCLUSIONS: This generalized method was able to identify v/V distributions from a lung model with a common series dead space, even with variable VT. Although limitations remain in the presence of experimental noise, appropriate combinations of processing steps were found to reduce estimation errors.


Subjects
Biological Models, Nitrogen/metabolism, Pulmonary Ventilation, Respiration, Humans, Least-Squares Analysis, Tidal Volume
17.
J Microsc ; 255(2): 94-103, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24917510

ABSTRACT

Confocal microscopy has become an essential tool to explore biospecimens in 3D. Confocal microscopy images are, however, degraded by out-of-focus blur and Poisson noise. Many deconvolution methods, including the Richardson-Lucy (RL), Tikhonov and split-gradient (SG) methods, have been widely adopted. The RL method enhances image quality, especially under Poisson noise. The Tikhonov method improves on RL by imposing a prior model of spatial regularization, which encourages adjacent voxels to appear similar. The SG method also contains spatial regularization and can incorporate many edge-preserving priors, resulting in improved image quality. For both the Tikhonov and SG methods, the strength of spatial regularization is fixed regardless of spatial location. In this study, the Tikhonov and SG deconvolution methods are improved by allowing the strength of spatial regularization to vary across spatial locations in a given image, and the novel method shows improved image quality. The method was tested on phantom data for which the ground truth and the point spread function are known. A Kullback-Leibler (KL) divergence value of 0.097 is obtained when applying spatially variable regularization to the SG method, whereas a KL value of 0.409 is obtained with the Tikhonov method. In tests on real data, for which the ground truth is unknown, the reconstructed data show improved noise characteristics while maintaining important image features such as edges.
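The Richardson-Lucy scheme discussed above uses multiplicative updates that keep the estimate non-negative, which matches Poisson statistics. A 1D toy sketch without any spatial regularization (the spike signal and 3-tap PSF are hypothetical; real confocal data and PSFs are 3D):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    # Multiplicative RL update: est <- est * (psf_flipped (*) (obs / (psf (*) est))),
    # where (*) denotes convolution; positivity is preserved throughout.
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    est = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)  # guard divide-by-zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Toy problem: a single spike blurred by a short PSF.
truth = np.zeros(32)
truth[10] = 5.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
est = richardson_lucy(observed, psf, n_iter=50)
```

The spatially variable regularization the study proposes would enter as an extra per-voxel factor in the update, strong where the image is smooth and weak near edges; the plain iteration above is only the unregularized baseline.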


Subjects
Computer-Assisted Image Interpretation/methods, Confocal Microscopy/methods, Confocal Microscopy/instrumentation
18.
J Electrocardiol ; 47(1): 20-8, 2014.
Article in English | MEDLINE | ID: mdl-24369741

ABSTRACT

A widely used approach to solving the inverse problem in electrocardiography involves computing potentials on the epicardium from measured electrocardiograms (ECGs) on the torso surface. The main challenge of solving this electrocardiographic imaging (ECGI) problem lies in its intrinsic ill-posedness. While many regularization techniques have been developed to control wild oscillations of the solution, the choice of proper regularization methods for obtaining clinically acceptable solutions is still a subject of ongoing research. However, there has been little rigorous comparison across methods proposed by different groups. This study systematically compared various regularization techniques for solving the ECGI problem under a unified simulation framework, consisting of both 1) progressively more complex idealized source models (from a single dipole to a triplet of dipoles), and 2) an electrolytic human torso tank containing a live canine heart, with the cardiac source being modeled by potentials measured on a cylindrical cage placed around the heart. We tested 13 different regularization techniques to solve the inverse problem of recovering epicardial potentials, and found that non-quadratic methods (total variation algorithms) and first- and second-order Tikhonov regularizations outperformed the other methodologies and resulted in similar average reconstruction errors.


Subjects
Action Potentials/physiology, Body Surface Potential Mapping/methods, Computer-Aided Diagnosis/methods, Heart Conduction System/physiology, Heart Rate/physiology, Cardiovascular Models, Computer Simulation, Statistical Data Interpretation, Humans, Reproducibility of Results, Sensitivity and Specificity
19.
Med Phys ; 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39194293

ABSTRACT

BACKGROUND: Intraoperative 2D quantitative angiography (QA) for intracranial aneurysms (IAs) faces accuracy challenges due to the variability of hand injections. Despite the success of singular value decomposition (SVD) algorithms in reducing biases in computed tomography perfusion (CTP), their application in 2D QA has not been extensively explored. This study seeks to bridge this gap by investigating the potential of SVD-based deconvolution methods in 2D QA, particularly in addressing the variability of injection durations. PURPOSE: Building on the identified limitations in QA, the study aims to adapt SVD-based deconvolution techniques from CTP to QA for IAs. This adaptation seeks to capitalize on the high temporal resolution of QA, despite its two-dimensional nature, to enhance the consistency and accuracy of hemodynamic parameter assessment. The goal is to develop a method that can reliably assess hemodynamic conditions in IAs, independent of injection variables, for improved neurovascular diagnostics. MATERIALS AND METHODS: The study included three internal carotid aneurysm (ICA) cases. Virtual angiograms were generated using computational fluid dynamics (CFD) for three physiologically relevant inlet velocities, to simulate different contrast media injection durations. Time-density curves (TDCs) were produced for both the inlet and the aneurysm dome. Several SVD variants, including standard SVD (sSVD) with and without classical Tikhonov regularization, block-circulant SVD (bSVD), and oscillation-index SVD (oSVD), were applied to the virtual angiograms to recover the aneurysmal dome impulse response function (IRF) and extract flow-related parameters such as peak height (PH_IRF), area under the curve (AUC_IRF), and mean transit time (MTT). Next, correlations between QA parameters, injection duration, and inlet velocity were assessed for unconvolved and deconvolved data for all SVD methods. Additionally, we performed an in vitro study to complement our in silico investigation. We generated 2D DSA acquisitions using a flow-circuit design for a patient-specific internal carotid artery phantom; these acquisitions include factors such as x-ray artifacts, noise, and patient motion. We evaluated QA parameters for the in vitro phantoms using the different SVD variants and established correlations between QA parameters, injection duration, and velocity for unconvolved and deconvolved data. RESULTS: The different SVD algorithm variants showed strong correlations between flow and deconvolution-adjusted QA parameters. Furthermore, we found that SVD can effectively reduce QA parameter variability across various injection durations, enhancing the potential of QA parameters in neurovascular disease diagnosis and treatment. CONCLUSION: Implementing SVD-based deconvolution techniques in QA analysis can enhance the precision and reliability of neurovascular diagnostics by effectively reducing the impact of injection duration on hemodynamic parameters.
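The SVD deconvolution adapted here from CT perfusion can be sketched in 1D: build a zero-padded circulant matrix from the inlet time-density curve and invert it with singular-value thresholding to recover the impulse response. A toy numpy version (the curves are synthetic; bSVD proper uses block-circulant matrices and oscillation-index tuning):

```python
import numpy as np

def circulant_svd_deconv(aif, tdc, rel_thresh=1e-8):
    # Zero-pad to 2N to suppress wrap-around, form the circulant
    # convolution matrix of the inlet curve, and apply a truncated-SVD
    # pseudo-inverse to recover the impulse response function (IRF).
    n = 2 * len(aif)
    c = np.zeros(n)
    c[: len(aif)] = aif
    C = np.column_stack([np.roll(c, k) for k in range(n)])
    U, s, Vt = np.linalg.svd(C)
    inv_s = np.where(s > rel_thresh * s[0], 1.0 / s, 0.0)
    y = np.zeros(n)
    y[: len(tdc)] = tdc
    return (Vt.T @ (inv_s * (U.T @ y)))[: len(aif)]

# Synthetic noiseless example.
t = np.arange(16.0)
aif = t * np.exp(-t / 3.0)          # inlet (arterial input) curve
irf_true = np.exp(-t / 4.0)         # impulse response to recover
tdc = np.convolve(aif, irf_true)    # measured time-density curve
irf = circulant_svd_deconv(aif, tdc)
```

On noisy clinical curves the threshold (or an oscillation index, in oSVD) must be raised substantially; the near-exact recovery here relies on the noiseless synthetic data.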

20.
J Optim Theory Appl ; 202(3): 1385-1420, 2024.
Article in English | MEDLINE | ID: mdl-39246431

ABSTRACT

In a Hilbert setting we aim to study a second order in time differential equation, combining viscous and Hessian-driven damping, containing a time scaling parameter function and a Tikhonov regularization term. The dynamical system is related to the problem of minimization of a nonsmooth convex function. In the formulation of the problem as well as in our analysis we use the Moreau envelope of the objective function and its gradient and heavily rely on their properties. We show that there is a setting where the newly introduced system preserves and even improves the well-known fast convergence properties of the function and Moreau envelope along the trajectories and also of the gradient of Moreau envelope due to the presence of time scaling. Moreover, in a different setting we prove strong convergence of the trajectories to the element of minimal norm from the set of all minimizers of the objective. The manuscript concludes with various numerical results.
