Results 1 - 20 of 504
1.
J Comput Biol ; 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-39387260

ABSTRACT

The Turnpike problem aims to reconstruct a set of one-dimensional points from their unordered pairwise distances. Turnpike arises in biological applications such as molecular structure determination, genomic sequencing, tandem mass spectrometry, and molecular error-correcting codes. Under noisy observation of the distances, the Turnpike problem is NP-hard and can take exponential time and space to solve when using traditional algorithms. To address this, we reframe the noisy Turnpike problem through the lens of optimization, seeking to simultaneously find the unknown point set and a permutation that maximizes similarity to the input distances. Our core contribution is a suite of algorithms that robustly solve this new objective. This includes a bilevel optimization framework that can efficiently solve Turnpike instances with up to 100,000 points. We show that this framework can be extended to scenarios with domain-specific constraints that include duplicated, missing, and partially labeled distances. Using these, we also extend our algorithms to work for points distributed on a circle (the Beltway problem). For small-scale applications that require global optimality, we formulate an integer linear program (ILP) that (i) accepts an objective from a generic family of convex functions and (ii) uses an extended formulation to reduce the number of binary variables. On synthetic and real partial digest data, our bilevel algorithms achieved state-of-the-art scalability across challenging scenarios with performance that matches or exceeds competing baselines. On small-scale instances, our ILP efficiently recovered ground-truth assignments and produced reconstructions that match or exceed our alternating algorithms. Our implementations are available at https://github.com/Kingsford-Group/turnpikesolvermm.
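For context, a minimal sketch of the classic backtracking solver for the noise-free Turnpike problem is shown below; this is the traditional exponential-worst-case algorithm the abstract contrasts with, not the paper's bilevel method, and the function name and test data are illustrative.

```python
from collections import Counter

def turnpike(distances):
    """Classic backtracking solver for the exact (noise-free) Turnpike problem.

    Reconstructs a set of 1-D points from the multiset of their pairwise
    distances; worst-case exponential time, which is what motivates the
    paper's optimization-based reformulation for noisy, large instances.
    """
    D = Counter(distances)
    width = max(D)                       # largest distance fixes the two ends
    D[width] -= 1
    points = {0, width}

    def place(D):
        if sum(D.values()) == 0:
            return sorted(points)
        y = max(d for d, c in D.items() if c > 0)
        for cand in (y, width - y):      # y must be the distance to 0 or width
            deltas = Counter(abs(cand - p) for p in points)
            if all(D[d] >= c for d, c in deltas.items()):
                points.add(cand)
                solved = place(D - deltas)   # Counter '-' drops used distances
                if solved is not None:
                    return solved
                points.remove(cand)      # backtrack
        return None

    return place(+D)                     # '+D' strips zero counts

print(turnpike([2, 2, 3, 4, 5, 7]))      # [0, 3, 5, 7], a mirror-equivalent
                                         # embedding of {0, 2, 4, 7}
```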

2.
Comput Biol Med ; 182: 109141, 2024 Sep 17.
Article in English | MEDLINE | ID: mdl-39293337

ABSTRACT

BACKGROUND: In electrocardiographic imaging (ECGI), selecting an optimal regularization parameter (λ) is crucial for obtaining accurate inverse electrograms. The effects of signal and geometry uncertainties on the regularization of the inverse problem have not been thoroughly quantified, and there is no established methodology for identifying when λ is sub-optimal due to these uncertainties. This study introduces a novel approach to λ selection using Tikhonov regularization and L-curve optimization, specifically addressing the impact of electrical noise in body surface potential map (BSPM) signals and geometrical inaccuracies in the cardiac mesh. METHODS: Nineteen atrial simulations (5 regular rhythms and 14 atrial fibrillation episodes), ensuring variability in substrate complexity and activation patterns, were used to compute the ECGI with added white Gaussian noise from 40 dB to -3 dB. Cardiac mesh displacements (1-3 cm) were applied to simulate uncertainty in atrial positioning and to study its impact on the L-curve shape. The regularization parameter, the maximum curvature, and the most horizontal angle of the L-curve (β) were quantified. In addition, BSPM signals from real patients were used to validate our findings. RESULTS: The maximum curvature of the L-curve was found to be inversely related to the signal-to-noise ratio and to atrial positioning errors. In contrast, the β angle is directly related to electrical noise and remains unaffected by geometrical errors. Our proposed adjustment of λ, based on the β angle, provides a more reliable ECGI solution than traditional corner-based methods. Our findings have been validated with simulations and real patient data, demonstrating practical applicability. CONCLUSION: Adjusting λ based on the amount of noise in the data (or on the β angle) yields better ECGI solutions than a λ found purely at the corner of the L-curve. The relevant information in ECGI activation maps is preserved even in the presence of uncertainties when the regularization parameter is correctly selected. The proposed criteria for regularization parameter selection have the potential to enhance the accuracy and reliability of ECGI solutions.
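As background for the corner-based baseline the authors improve upon, here is a compact sketch of zeroth-order Tikhonov regularization with maximum-curvature (corner) selection of λ on a log-log L-curve; the paper's β-angle adjustment is not reproduced, and the problem sizes are illustrative.

```python
import numpy as np

def tikhonov(A, b, lam):
    # Zeroth-order Tikhonov: x = argmin ||A x - b||^2 + lam^2 ||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lams):
    # log-log L-curve (residual norm vs. solution norm); corner = max curvature
    rho, eta = [], []
    for lam in lams:
        x = tikhonov(A, b, lam)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta, t = np.array(rho), np.array(eta), np.log(lams)
    drho, deta = np.gradient(rho, t), np.gradient(eta, t)
    d2rho, d2eta = np.gradient(drho, t), np.gradient(deta, t)
    kappa = (drho * d2eta - d2rho * deta) / (drho**2 + deta**2) ** 1.5
    return lams[np.argmax(kappa)]

A = np.random.default_rng(0).normal(size=(60, 40))
x_true = np.zeros(40); x_true[::8] = 1.0
b = A @ x_true + 0.05 * np.random.default_rng(1).normal(size=60)
lam_best = l_curve_corner(A, b, np.logspace(-4, 1, 60))
```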

3.
Adv Sci (Weinh) ; : e2406793, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39246254

ABSTRACT

Across diverse domains of science and technology, electromagnetic (EM) inversion problems benefit from the ability to account for multimodal prior information to regularize their inherent ill-posedness. Indeed, besides priors that are formulated mathematically or learned from quantitative data, valuable prior information may be available in the form of text or images. Besides handling semantic multimodality, it is furthermore important to minimize the cost of adapting to a new physical measurement operator and to limit the requirements for costly labeled data. Here, these challenges are tackled with a frugal and multimodal semantic-EM inversion technique. The key ingredient is a multimodal generator of reconstruction results that can be pretrained, being agnostic to the physical measurement operator. The generator is fed by a multimodal foundation model encoding the multimodal semantic prior and a physical adapter encoding the measured data. For a new physical setting, only the lightweight physical adapter is retrained. The authors' architecture also enables a flexible iterative step-by-step solution to the inverse problem where each step can be semantically controlled. The feasibility and benefits of this methodology are demonstrated for three EM inverse problems: a canonical two-dimensional inverse-scattering problem in numerics, as well as three-dimensional and four-dimensional compressive microwave meta-imaging experiments.
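A hedged sketch of the frugal-adaptation idea follows: the pretrained, operator-agnostic generator is frozen and only the lightweight physical adapter is retrained for a new measurement operator. All module shapes, names, and the training loop are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the pretrained multimodal generator and the lightweight
# physical adapter; sizes and data are illustrative only.
generator = nn.Sequential(nn.Linear(96, 256), nn.ReLU(), nn.Linear(256, 64 * 64))
adapter = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 96))

for p in generator.parameters():          # pretrained and operator-agnostic:
    p.requires_grad = False               # frozen for the new physical setup

opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)
loader = [(torch.randn(8, 128), torch.randn(8, 64 * 64)) for _ in range(10)]

for measurements, target in loader:       # data from the *new* measurement operator
    recon = generator(adapter(measurements))
    loss = nn.functional.mse_loss(recon, target)
    opt.zero_grad()
    loss.backward()
    opt.step()                            # only the adapter's weights move
```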

4.
Neuroimage ; 299: 120802, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39173694

ABSTRACT

Electroencephalography (EEG) or magnetoencephalography (MEG) source imaging aims to estimate the underlying activated brain sources that explain the observed EEG/MEG recordings. Solving the inverse problem of EEG/MEG source imaging (ESI) is challenging due to its ill-posed nature; to achieve a unique solution, sophisticated regularization constraints are needed to restrict the solution space. Traditionally, regularization terms are designed from assumptions about the spatiotemporal structure of the underlying source dynamics. In this paper, we propose a novel paradigm for ESI via an explainable deep learning framework, termed XDL-ESI, which connects an iterative optimization algorithm with a deep learning architecture by unfolding the iterative updates into neural network modules. The proposed framework has the advantages of (1) establishing a data-driven approach to model the source solution structure instead of using hand-crafted regularization terms; (2) improving the robustness of source solutions by introducing a topological loss that leverages geometric spatial information, applying varying penalties to distinct localization errors; and (3) improving reconstruction efficiency and interpretability, as it inherits the advantages of both iterative optimization algorithms (interpretability) and deep learning approaches (function approximation). The proposed XDL-ESI framework provides an efficient, accurate, and interpretable paradigm for solving the ESI inverse problem, with satisfactory performance on both simulated data and real clinical data. In particular, the approach is further validated using simultaneous EEG and intracranial EEG (iEEG).


Subject(s)
Deep Learning, Electroencephalography, Magnetoencephalography, Humans, Electroencephalography/methods, Magnetoencephalography/methods, Magnetoencephalography/standards, Brain/physiology, Brain/diagnostic imaging, Electrocorticography/methods, Electrocorticography/standards, Algorithms
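To make the unfolding idea concrete, the sketch below unrolls ISTA iterations for a linear source-imaging model, with one learnable threshold per step standing in for learned proximal modules; XDL-ESI's actual modules and losses differ, and the lead-field matrix here is random.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Sketch of algorithm unrolling for linear source imaging.

    Each "layer" is one ISTA step for  min ||y - L s||^2 + lambda ||s||_1,
    with a learnable threshold per step standing in for the learned proximal
    modules of frameworks like XDL-ESI (whose exact architecture differs).
    L is the lead-field matrix mapping sources s to sensor readings y.
    """
    def __init__(self, L, n_steps=10):
        super().__init__()
        self.register_buffer("L", L)
        self.step = 1.0 / torch.linalg.matrix_norm(L, ord=2).item() ** 2
        self.thresh = nn.Parameter(torch.full((n_steps,), 1e-3))

    def forward(self, y):
        s = y.new_zeros(self.L.shape[1])
        for t in self.thresh:
            s = s - self.step * (self.L.T @ (self.L @ s - y))   # gradient step
            s = torch.sign(s) * torch.relu(s.abs() - t)         # soft-threshold
        return s

L = torch.randn(32, 200)                  # 32 sensors, 200 candidate sources
s_hat = UnrolledISTA(L)(torch.randn(32))  # trainable end-to-end through s_hat
```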
5.
J Biomech Eng ; 146(12)2024 Dec 01.
Article in English | MEDLINE | ID: mdl-39196594

ABSTRACT

This study proposes a numerical approach for simulating bone remodeling in lumbar interbody fusion (LIF). It employs a topology optimization method to drive the remodeling process and uses a pixel function to describe the structural topology and bone density distribution. Unlike traditional approaches based on strain energy density or compliance, this study adopts von Mises stress to guide the remodeling of LIF. A novel pixel interpolation scheme associated with stress criteria is applied to the physical properties of the bone, directly addressing the stress-shielding effect caused by the implanted cage, which significantly influences the bone remodeling outcome in LIF. Additionally, a boundary inverse approach is utilized to reconstruct a simplified analysis model. To reduce computational cost while maintaining high structural resolution and accuracy, the scaled boundary finite element method (SBFEM) is introduced. The proposed numerical approach generates results that closely resemble bone remodeling observed in human lumbar interbody fusion.


Subject(s)
Bone Remodeling, Finite Element Analysis, Lumbar Vertebrae, Spinal Fusion, Lumbar Vertebrae/surgery, Humans, Mechanical Stress, Biomechanical Phenomena
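For readers unfamiliar with density-based remodeling, a minimal sketch of the standard SIMP-style interpolation that maps a pixel density field to Young's modulus follows; the paper's pixel function is driven by von Mises stress rather than strain energy, so this is background, not the authors' scheme, and all material values are illustrative.

```python
import numpy as np

def simp_modulus(rho, E0=12e3, Emin=1e-2, p=3.0):
    """SIMP-style interpolation of Young's modulus from a density field.

    rho  : array of pixel densities in [0, 1]
    E0   : modulus of fully dense bone (MPa, illustrative value)
    Emin : small floor that keeps the stiffness matrix non-singular
    p    : penalization exponent pushing densities toward 0 or 1
    """
    rho = np.clip(rho, 0.0, 1.0)
    return Emin + rho**p * (E0 - Emin)

E = simp_modulus(np.random.default_rng(0).random((64, 64)))  # per-pixel moduli
```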
6.
Neurophysiol Clin ; 54(5): 103005, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39029213

ABSTRACT

In patients with refractory epilepsy, the clinical interpretation of stereoelectroencephalographic (SEEG) signals is crucial to delineate the epileptogenic network that should be targeted by surgery. We propose a pipeline of patient-specific computational modeling of interictal epileptic activity to improve the definition of regions of interest. Comparison between the computationally defined regions of interest and the resected region confirmed the efficiency of the pipeline. This result suggests that computational modeling can be used to reconstruct signals and aid clinical interpretation.


Subject(s)
Brain, Electroencephalography, Humans, Electroencephalography/methods, Brain/physiopathology, Epilepsy/physiopathology, Computer Simulation, Male, Female, Adult, Refractory Epilepsy/physiopathology
7.
Neural Netw ; 179: 106515, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39032393

ABSTRACT

Accurate image reconstruction is crucial for photoacoustic (PA) computed tomography (PACT). Recently, deep learning has been used to reconstruct PA images in a supervised scheme, which requires high-quality images as ground-truth labels. In practice, however, implementations face an inevitable trade-off between cost and performance, since employing additional channels to access more measurements is expensive. Here, we propose a masked cross-domain self-supervised (CDSS) reconstruction strategy to overcome the lack of ground-truth labels from limited PA measurements. We implement the self-supervised reconstruction in a model-based form and simultaneously exploit self-supervision to enforce the consistency of measurements and images across three partitions of the measured PA data, obtained by randomly masking different channels. Our findings indicate that dynamically masking a substantial proportion of channels, such as 80%, yields meaningful self-supervisors in both the image and signal domains. This approach reduces the multiplicity of pseudo-solutions and enables efficient image reconstruction from fewer PA measurements, ultimately minimizing reconstruction error. Experimental results on an in vivo PACT dataset of mice demonstrate the potential of our self-supervised framework, which achieves a structural similarity index (SSIM) of 0.87 in an extreme sparse case using only 13 channels, outperforming the supervised scheme with 16 channels (0.77 SSIM). In addition, our method can be deployed on different trainable models in an end-to-end manner, further enhancing its versatility and applicability.


Subject(s)
Deep Learning, Computer-Assisted Image Processing, Photoacoustic Techniques, X-Ray Computed Tomography, Photoacoustic Techniques/methods, Animals, Computer-Assisted Image Processing/methods, Mice, X-Ray Computed Tomography/methods, Supervised Machine Learning, Neural Networks (Computer), Algorithms
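A small sketch of the channel-masking step is given below, drawing random subsets that each keep ~20% of the channels (the ~80% masking proportion the abstract reports as effective); drawing the subsets independently rather than as exact partitions is a simplification of the paper's scheme.

```python
import numpy as np

def random_channel_masks(n_channels, n_subsets=3, keep=0.2, seed=0):
    """Draw channel masks for masked cross-domain self-supervision.

    Each subset keeps a fraction `keep` of the transducer channels
    (i.e., ~80% masked); cross-subset measurement/image consistency
    then serves as the self-supervision signal.
    """
    rng = np.random.default_rng(seed)
    masks = []
    for _ in range(n_subsets):
        m = np.zeros(n_channels, dtype=bool)
        m[rng.choice(n_channels, int(keep * n_channels), replace=False)] = True
        masks.append(m)
    return masks

sinogram = np.random.default_rng(1).normal(size=(128, 1024))  # channels x samples
views = [sinogram * m[:, None] for m in random_channel_masks(128)]
# consistency losses are then enforced between reconstructions of the views
```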
8.
Sensors (Basel) ; 24(14)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39065856

ABSTRACT

Contactless inductive flow tomography (CIFT) is a flow measurement technique that allows visualization of the global flow in electrically conducting fluids. The method is based on the principle of induction by motion: very weak induced magnetic fields arise from the fluid motion under the influence of a primary excitation magnetic field and can be measured precisely outside the fluid volume. The structure of the causative flow field can be reconstructed from the induced magnetic field values by solving the corresponding linear inverse problem with appropriate regularization methods. The concurrent use of more than one excitation magnetic field is necessary to fully reconstruct three-dimensional liquid metal flows. In our laboratory demonstrator experiment, we apply two mutually perpendicular excitation magnetic fields to a mechanically driven flow of the liquid metal alloy GaInSn. In the first approach, the excitation fields are multiplexed; here, the temporal resolution of the measurement must be kept as high as possible. Consecutive application by multiplexing enables determination of the flow structure in the liquid with a temporal resolution down to 3 s with the existing equipment. In the second approach, we concurrently apply two sinusoidal excitation fields with different frequencies and disentangle the signals using the lock-in principle, enabling a successful reconstruction of the liquid metal flow.
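A minimal sketch of the lock-in demodulation used to disentangle the two excitation frequencies follows; the reference frequencies, averaging window, and test signal are illustrative, not the experiment's values.

```python
import numpy as np

def lock_in(signal, t, f_ref, n_periods=10):
    """Amplitude and phase of the component of `signal` at f_ref.

    Multiply by quadrature references and average over an integer number
    of reference periods: components at other excitation frequencies
    average out, which is what separates the two concurrent fields.
    """
    i = signal * np.cos(2 * np.pi * f_ref * t)
    q = signal * np.sin(2 * np.pi * f_ref * t)
    n = int(round(n_periods / f_ref / (t[1] - t[0])))  # samples per window
    kernel = np.ones(n) / n
    i_lp = np.convolve(i, kernel, mode="valid")
    q_lp = np.convolve(q, kernel, mode="valid")
    return 2 * np.hypot(i_lp, q_lp), np.arctan2(-q_lp, i_lp)

# One sensor sees two excitations, at 3 Hz and 7 Hz:
t = np.arange(0, 10, 1e-3)
sig = 1.0 * np.cos(2 * np.pi * 3 * t + 0.3) + 0.5 * np.cos(2 * np.pi * 7 * t - 0.8)
amp3, ph3 = lock_in(sig, t, 3.0)   # recovers amplitude ~1.0, phase ~0.3
```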

9.
Philos Trans A Math Phys Eng Sci ; 382(2277): 20230295, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39005012

ABSTRACT

This study examines a class of time-dependent constitutive equations used to describe viscoelastic materials under creep in solid mechanics. In nonlinear elasticity, the strain response to the applied stress is expressed via an implicit graph allowing multi-valued functions. For coercive and maximal monotone graphs, the existence of a solution to the quasi-static viscoelastic problem is proven by applying the Browder-Minty fixed point theorem. Moreover, for quasi-linear viscoelastic problems, the solution is constructed as a semi-analytic formula. The inverse viscoelastic problem is represented by identification of a design variable from non-smooth measurements. A non-empty set of optimal variables is obtained based on the compactness argument by applying Tikhonov regularization in the space of bounded measures and deformations. Furthermore, an illustrative example is given for the inverse problem of isotropic kernel identification. This article is part of the theme issue 'Non-smooth variational problems with applications in mechanics'.

10.
Sci Total Environ ; 946: 174374, 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-38945246

ABSTRACT

Groundwater pollution source recognition (GPSR) is a prerequisite for subsequent pollution remediation and risk assessment work. The actual observed data are the most important known condition in GPSR, but the observed data can be contaminated with noise in real cases. This may directly affect the recognition results. Therefore, denoising is important. However, in different practical situations, the noise attribute (e.g., noise level) and observed data attribute (e.g., observed frequency) may be different. Therefore, it is necessary to study the applicability of denoising. Current studies have two deficiencies. First, when dealing with complex nonlinear and non-stationary situations, the effect of previous denoising methods needs to be improved. Second, previous attempts to analyze the applicability of denoising in GPSR have not been comprehensive enough because they only consider the influence of the noise attribute, while overlooking the observed data attribute. To resolve these issues, this study adopted the variational mode decomposition (VMD) to perform denoising on the noisy observed data in GPSR for the first time. It further explored the influence of different factors on the denoising effect. The tests were conducted under 12 different scenarios. Then, we expanded the study to include not only the noise attribute (noise level) but also the observed data attribute (observed frequency), thus providing a more comprehensive analysis of the applicability of denoising in GPSR. Additionally, we used a new heuristic optimization algorithm, the collective decision optimization algorithm, to improve the recognition accuracy. Four representative scenarios were adopted to test the ideas. The results showed that the VMD performed well under various scenarios, and the denoising effect diminished as the noise level increased and the observed frequency decreased. The denoising was more effective for GPSR with high noise levels and multiple observed frequencies. The collective decision optimization algorithm had a good inversion accuracy and strong robustness.

11.
Sci Rep ; 14(1): 14198, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38902434

ABSTRACT

Precisely estimating material parameters for cement-based materials is crucial for assessing the structural integrity of buildings. Both destructive (e.g., compression test) and non-destructive methods (e.g., ultrasound, computed tomography) are used to estimate Young's modulus. Since ultrasound estimates the dynamic Young's modulus, a formula is required to convert it to the static modulus; formulas from the literature are compared for this purpose. The investigated specimens are cylindrical mortar specimens with four different sand-to-cement mass fractions: 20%, 35%, 50%, and 65%. The ultrasound signals are analyzed in two distinct ways: manual onset picking and full-waveform inversion. Full-waveform inversion compares the measured signal with a simulated one and iteratively adjusts the ultrasound velocities in a numerical model until the measured signal closely matches the simulated one. Using computed tomography measurements, Young's moduli are determined semi-analytically from the sand distribution in the reconstructed images, with the volume segmented into sand, cement, and pores. Young's moduli determined by compression tests were better reproduced by full-waveform inversion (best RMSE = 0.34 GPa) than by manual onset picking (best RMSE = 0.87 GPa), and material parameters from full-waveform inversion showed less scatter than manually picked ones: the maximal standard deviation of a Young's modulus determined with FWI was 0.36, versus 1.11 for manual picking. Young's moduli from computed tomography scans match those from compression tests most closely, with an RMSE of 0.13 GPa.
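For reference, a sketch of the standard isotropic-elasticity relation that converts an ultrasound P-wave velocity into a dynamic Young's modulus is shown below; the material values are assumed for illustration, and the dynamic-to-static conversion formulas compared in the paper are not reproduced.

```python
def dynamic_youngs_modulus(v_p, rho, nu):
    """Dynamic Young's modulus from ultrasound P-wave velocity, in Pa.

    Standard isotropic-elasticity relation; v_p in m/s, rho in kg/m^3,
    nu = Poisson's ratio. Converting E_dyn to the static modulus then
    requires one of the empirical formulas compared in the paper.
    """
    return rho * v_p**2 * (1 + nu) * (1 - 2 * nu) / (1 - nu)

# Illustrative mortar-like values (assumed, not taken from the paper):
E_dyn = dynamic_youngs_modulus(v_p=4000.0, rho=2100.0, nu=0.2)  # ~30 GPa
```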

12.
Sensors (Basel) ; 24(11)2024 May 30.
Article in English | MEDLINE | ID: mdl-38894316

ABSTRACT

We present a goniometer designed for capturing spectral and angular-resolved data from scattering and absorbing media. The experimental apparatus is complemented by a comprehensive Monte Carlo simulation, meticulously replicating the radiative transport processes within the instrument's optical components and simulating scattering and absorption across arbitrary volumes. Consequently, we were able to construct a precise digital replica, or "twin", of the experimental setup. This digital counterpart enabled us to tackle the inverse problem of deducing optical parameters such as absorption and scattering coefficients, along with the scattering anisotropy factor from measurements. We achieved this by fitting Monte Carlo simulations to our goniometric measurements using a Levenberg-Marquardt algorithm. Validation of our approach was performed using polystyrene particles, characterized by Mie scattering, supplemented by a theoretical analysis of algorithmic convergence. Ultimately, we demonstrate strong agreement between optical parameters derived using our novel methodology and those obtained via established measurement protocols.
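To illustrate the fitting step, the sketch below fits a Henyey-Greenstein phase function (a common stand-in for tissue scattering) to synthetic angular data with the Levenberg-Marquardt algorithm via scipy; the paper fits full Monte Carlo simulations of the instrument rather than this analytic model.

```python
import numpy as np
from scipy.optimize import least_squares

def hg_phase(theta, g):
    # Henyey-Greenstein phase function with anisotropy factor g
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * np.cos(theta)) ** 1.5)

theta = np.linspace(0.05, np.pi, 80)        # goniometer angles (rad)
g_true = 0.9
noise = 0.02 * np.random.default_rng(0).standard_normal(theta.size)
data = hg_phase(theta, g_true) * (1 + noise)

def residuals(params):
    g, scale = params                       # anisotropy and intensity scale
    return scale * hg_phase(theta, g) - data

fit = least_squares(residuals, x0=[0.5, 1.0], method="lm")
g_est, scale_est = fit.x                    # g_est ~ 0.9
```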

13.
Biomed J Sci Tech Res ; 55(2): 46779-46884, 2024.
Article in English | MEDLINE | ID: mdl-38883320

ABSTRACT

Extreme few-view tomography uses fewer than 10 projection views. State-of-the-art methods for reconstructing images from few-view data are based on compressed sensing, which relies on a sparsifying transform and total variation (TV) norm minimization. For extreme few-view tomography, however, compressed sensing methods alone are not powerful enough. This paper seeks additional information to serve as extra constraints so that extreme few-view tomography becomes feasible. In transmission tomography, the linear attenuation coefficients of the objects to be imaged are roughly known, and these values can be used as extra constraints. Computer simulations show that these extra constraints are helpful and improve reconstruction quality.
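One simple way to exploit roughly known attenuation coefficients is sketched below: SIRT-style iterations clipped to the admissible range, with a final snap of each pixel to its nearest known value. This is an illustrative scheme under those assumptions, not necessarily the paper's algorithm.

```python
import numpy as np

def projected_sirt(A, b, mu_values, n_iter=200):
    """SIRT-style reconstruction constrained by known attenuation values.

    A         : (n_rays, n_pixels) system matrix from the few views
    b         : measured line integrals
    mu_values : attenuation coefficients the object is known to contain
    """
    mu_values = np.sort(np.asarray(mu_values, dtype=float))
    x = np.zeros(A.shape[1])
    row = A.sum(axis=1); row[row == 0] = 1.0      # row/column scalings
    col = A.sum(axis=0); col[col == 0] = 1.0
    for _ in range(n_iter):
        x += (A.T @ ((b - A @ x) / row)) / col    # scaled residual update
        x = np.clip(x, mu_values[0], mu_values[-1])  # stay in known range
    # final projection: snap each pixel to its nearest admissible coefficient
    idx = np.abs(x[:, None] - mu_values[None, :]).argmin(axis=1)
    return mu_values[idx]
```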

14.
Photoacoustics ; 38: 100609, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38745884

ABSTRACT

Quantitative photoacoustic tomography (qPAT) holds great potential in estimating chromophore concentrations, whereas the involved optical inverse problem, aiming to recover absorption coefficient distributions from photoacoustic images, remains challenging. To address this problem, we propose an extractor-attention-predictor network architecture (EAPNet), which employs a contracting-expanding structure to capture contextual information alongside a multilayer perceptron to enhance nonlinear modeling capability. A spatial attention module is introduced to facilitate the utilization of important information. We also use a balanced loss function to prevent network parameter updates from being biased towards specific regions. Our method obtains satisfactory quantitative metrics in simulated and real-world validations. Moreover, it demonstrates superior robustness to target properties and yields reliable results for targets with small size, deep location, or relatively low absorption intensity, indicating its broader applicability. The EAPNet, compared to the conventional UNet, exhibits improved efficiency, which significantly enhances performance while maintaining similar network size and computational complexity.
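As an illustration of the spatial-attention ingredient, here is a CBAM-style spatial attention module of the generic kind the EAPNet description suggests; the paper's exact module may differ.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: reweight each spatial location by a map
    derived from channel-average and channel-max features (a generic module,
    not necessarily EAPNet's exact design)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                      # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)      # channel-average map
        mx = x.amax(dim=1, keepdim=True)       # channel-max map
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                        # emphasized informative regions

feats = torch.randn(2, 32, 64, 64)
out = SpatialAttention()(feats)                # same shape, spatially reweighted
```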

15.
Sensors (Basel) ; 24(9)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38732809

ABSTRACT

Magnetic induction tomography (MIT) image reconstruction from data acquired with a single, small inductive sensor has unique requirements not found in other imaging modalities. While scanning over a target, the measured inductive loss decreases rapidly with distance from the target boundary. Since inductive loss exists even at infinite separation, due to losses internal to the sensor, all other measurements made in the vicinity of the target require subtraction of the infinite-separation loss. This is accomplished naturally by treating the infinite-separation loss as an unknown. Furthermore, since contributions to inductive loss decline with greater depth into a conductive target, regularization penalties must be decreased with depth. A pair of squared L2 penalty norms are combined to form a two-term Sobolev norm, comprising a zero-order penalty that penalizes departures from a default solution and a first-order penalty that promotes smoothness. While constraining the solution to be non-negative and bounded from above, the algorithm is used to perform image reconstruction on scan data acquired over a 4.3 cm thick phantom consisting of bone-like features embedded in agarose gel, the latter having a nominal conductivity of 1.4 S/m.
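A compact sketch of the reconstruction step follows: bounded least squares with the two-term Sobolev penalty, written as one stacked linear system; the depth-dependent penalty weighting described in the abstract is noted in a comment but omitted for brevity.

```python
import numpy as np
from scipy.optimize import lsq_linear

def sobolev_reconstruct(A, b, x0, alpha, beta, upper):
    """Bounded least squares with a two-term Sobolev penalty:

        min ||A x - b||^2 + alpha ||x - x0||^2 + beta ||D x||^2,
        subject to 0 <= x <= upper,

    written as one stacked linear system. The paper additionally decreases
    the penalties with depth into the target; a diagonal row scaling of the
    two penalty blocks (omitted here) would implement that.
    """
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # first-difference operator
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n), np.sqrt(beta) * D])
    b_aug = np.concatenate([b, np.sqrt(alpha) * x0, np.zeros(n - 1)])
    return lsq_linear(A_aug, b_aug, bounds=(0.0, upper)).x
```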

16.
Materials (Basel) ; 17(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38730894

ABSTRACT

In the realm of high-tech materials and energy applications, accurately measuring the transient heat flow at media boundaries and the internal thermal conductivity of materials in harsh heat exchange environments poses a significant challenge when using conventional direct measurement methods. Consequently, the study of photothermal parameter reconstruction in translucent media, which relies on indirect measurement techniques, has crucial practical value. Current research on reconstructing photothermal properties within participating media typically focuses on single-objective or time-invariant properties. There is a pressing need to develop effective methods for the simultaneous reconstruction of time-varying thermal flow fields and internal thermal conductivity at the boundaries of participating media. This paper introduces a computational model based on the numerical simulation theory of internal heat transfer systems in participating media, stochastic particle swarm optimization algorithms, and Kalman filter technology. The model aims to enable the simultaneous reconstruction of various thermal parameters within the target medium. Our results demonstrate that under varying levels of measurement noise, the inversion results for different target parameters exhibit slight oscillations around the true values, leading to a reduction in reconstruction accuracy. However, overall, the model demonstrates robustness and accuracy in ideal conditions, validating its effectiveness.
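As a reference point for the optimization ingredient, a minimal particle swarm optimizer is sketched below; the paper's stochastic PSO variant and its coupling to the Kalman filter and the radiative heat-transfer model are not reproduced, and the hyperparameters are conventional defaults.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a box-constrained objective."""
    rng = np.random.default_rng(seed)
    lo, hi = map(np.asarray, zip(*bounds))
    x = rng.uniform(lo, hi, (n_particles, len(bounds)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()                # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# e.g. recover two parameters (such as a flux and a conductivity) from a
# synthetic misfit; the quadratic objective here is purely illustrative:
target = np.array([2.0, 0.5])
best, fbest = pso(lambda p: np.sum((p - target) ** 2), bounds=[(0, 5), (0, 2)])
```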

17.
Genet Epidemiol ; 48(6): 270-288, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38644517

ABSTRACT

Genome-wide association studies (GWAS) typically use linear or logistic regression models to identify associations between phenotypes (traits) and genotypes (genetic variants) of interest. However, regression under the additive assumption has potential limitations. First, the normality assumption on residuals rarely holds in practice, and deviation from normality inflates the Type-I error rate. Second, building a model on such an assumption ignores genetic structures such as dominant, recessive, and protective-risk cases, which may lead to spurious conclusions about the associations between a variant and a trait. We propose an assumption-free model built upon data-consistent inversion (DCI), a recently developed measure-theoretic framework for uncertainty quantification. The proposed DCI-derived model builds a nonparametric distribution on model inputs that propagates to the distribution of observed data without requiring normality of residuals, which enables it to cover all genetic variants without the emphasis on additivity of the classic GWAS model. Simulations and a replication GWAS with data from the COPDGene study demonstrate that this model controls the Type-I error rate at least as well as the classic GWAS (additive linear model) approach while having similar or greater power to discover variants across different genetic modes of transmission.


Subject(s)
Genome-Wide Association Study, Genetic Models, Genome-Wide Association Study/methods, Genome-Wide Association Study/statistics & numerical data, Humans, Computer Simulation, Single Nucleotide Polymorphism, Phenotype, Statistical Models, Genotype, Chronic Obstructive Pulmonary Disease/genetics, Genetic Variation
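To make the DCI idea concrete, the one-dimensional sketch below reweights prior samples by the ratio of an observed density to the prior-predicted density of a quantity of interest; the toy map and densities are assumptions for illustration, not the paper's GWAS model.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

# Data-consistent inversion (DCI) in one dimension: update the prior on the
# input parameter so its push-forward matches the observed density of Q.
rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, 20_000)          # samples of the input parameter
Q = lambda lam: lam**2                        # toy push-forward map (assumed)
predicted = gaussian_kde(Q(prior))            # density of Q under the prior
observed = norm(loc=1.0, scale=0.25)          # density observed on the data

w = observed.pdf(Q(prior)) / predicted(Q(prior))   # DCI update weights
w /= w.sum()
posterior = rng.choice(prior, size=5_000, p=w)     # weighted resample
```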
18.
Physiol Meas ; 45(4)2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38624240

ABSTRACT

Objective. Electrical impedance tomography (EIT) is a noninvasive imaging method whereby electrical measurements on the periphery of a heterogeneous conductor are inverted to map its internal conductivity. The EIT method proposed here aims to improve computational speed and noise tolerance by introducing sensitivity volume as a figure-of-merit for comparing EIT measurement protocols. Approach. Each measurement is shown to correspond to a sensitivity vector in model space, such that the set of measurements, in turn, corresponds to a set of vectors that subtend a sensitivity volume in model space. A maximal sensitivity volume identifies the measurement protocol with the greatest sensitivity and greatest mutual orthogonality. A distinguishability criterion is generalized to quantify the increased noise tolerance of high-sensitivity measurements. Main result. The sensitivity volume method allows the model space dimension to be minimized to match that of the data space, and the data importance to be increased within an expanded space of measurements defined by an increased number of contacts. Significance. The reduction in model space dimension is shown to increase computational efficiency, accelerating tomographic inversion by several orders of magnitude, while the enhanced sensitivity tolerates noise levels up to several orders of magnitude larger than standard methods.


Subject(s)
Algorithms, X-Ray Computed Tomography, Electric Impedance, Tomography/methods, Electric Conductivity
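A minimal sketch of the figure-of-merit follows: the sensitivity volume of a measurement protocol computed as the square-rooted Gram determinant of its sensitivity (Jacobian) matrix; the matrix sizes and values are illustrative.

```python
import numpy as np

def sensitivity_volume(J):
    """Volume subtended by measurement sensitivity vectors in model space.

    J is the (n_measurements, n_model_params) sensitivity (Jacobian) matrix;
    the square-rooted Gram determinant grows with both sensitivity magnitude
    and mutual orthogonality, so larger is better when comparing protocols.
    """
    sign, logdet = np.linalg.slogdet(J @ J.T)
    return np.exp(0.5 * logdet) if sign > 0 else 0.0

# Compare two candidate 3-measurement protocols over a 5-parameter model:
J_a = np.random.default_rng(0).normal(size=(3, 5))
J_b = 0.1 * J_a                                   # same geometry, weaker sensitivity
print(sensitivity_volume(J_a) > sensitivity_volume(J_b))   # True: protocol a wins
```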
19.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(2): 262-271, 2024 Apr 25.
Article in Chinese | MEDLINE | ID: mdl-38686406

ABSTRACT

Accurate reconstruction of the tissue elasticity modulus distribution has always been an important challenge in ultrasound elastography. Existing deep learning-based supervised reconstruction methods use only simulated displacement data with random noise during training, which cannot fully capture the complexity and diversity of in vivo ultrasound data. This study therefore introduces displacement data obtained by tracking in vivo ultrasound radio-frequency signals (i.e., real displacement data) into training, employing a semi-supervised approach to enhance the prediction accuracy of the model. In phantom experiments, the semi-supervised model augmented with real displacement data provides more accurate predictions, with mean absolute and mean relative errors both around 3%, versus around 5% for the fully supervised model. When processing real displacement data, the area of prediction error of the semi-supervised model was smaller than that of the fully supervised model. These findings confirm the effectiveness and practicality of the proposed approach and provide new insights for applying deep learning to the reconstruction of elasticity distributions from in vivo ultrasound data.


Subject(s)
Elastic Modulus, Elasticity Imaging Techniques, Computer-Assisted Image Processing, Neural Networks (Computer), Imaging Phantoms, Elasticity Imaging Techniques/methods, Computer-Assisted Image Processing/methods, Humans, Algorithms, Deep Learning
20.
Materials (Basel) ; 17(3)2024 Jan 28.
Article in English | MEDLINE | ID: mdl-38591434

ABSTRACT

Measuring the size distribution and temperature of high-temperature dispersed particles, particularly in-flame soot, is of paramount importance across various industries. Laser-induced incandescence (LII) stands out as a potent non-contact diagnostic technology for in-flame soot, although its effectiveness is hindered by uncertainties in pre-determined thermal properties. To tackle this challenge, our study proposes a multi-parameter inversion strategy: simultaneous inversion of the particle size distribution, thermal accommodation coefficient, and initial temperature of in-flame soot aggregates using time-resolved LII signals. Analyzing the responses of different heat-transfer sub-models to temperature rise demonstrates that sublimation and thermionic emission must be incorporated to accurately reproduce LII signals of high-temperature dispersed particles; accordingly, a particular LII model was selected for the multi-parameter inversion strategy. Our research reveals that LII-based particle sizing is sensitive to biases in the initial particle temperature (equivalent to the flame temperature), underscoring the need for the proposed strategy. Numerical results at two typical flame temperatures, 1100 K and 1700 K, illustrate that selecting an appropriate laser fluence enables simultaneous inversion of the particle size distribution, thermal accommodation coefficient, and initial particle temperature of soot aggregates with high accuracy and confidence using the LII technique.
