Results 1 - 20 of 43
1.
Sci Rep ; 14(1): 14898, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38942986

ABSTRACT

In this study, 3-D full-wave electromagnetic simulations of a ground-penetrating radar (GPR) model are used to characterize buried objects via a deep-learning-based surrogate modeling approach. The task is to independently predict the characteristic parameters of buried objects of diverse radii located at different positions (depth and lateral position) in various dispersive subsurface media. Variable data structures (raw B-scans, extracted features, consecutive A-scans) are analyzed with respect to the computational cost and accuracy of the resulting surrogates. Working with raw B-scan data, and the processing steps B-scan profiles require, incurs a high computational cost for object characterization, which can make it impractical. The proposed surrogate model, referred to as the deep regression network (DRN), operates on the time-frequency spectrogram (TFS) of consecutive A-scans. DRN is developed with computational efficiency as the main aim, achieving about a 13-fold acceleration compared to conventional network models that use B-scan images (2-D data). DRN with TFS benchmarks favorably against state-of-the-art regression techniques: the proposed model and the second-best model, CNN-1D, achieve mean absolute errors of 3.6 mm and 11.8 mm and mean relative errors of 4.7% and 11.6%, respectively. As supplementary verification under realistic conditions, the approach is also applied to scenarios involving noisy data. Furthermore, the proposed surrogate modeling approach is validated on measurement data, which indicates its suitability for handling physical measurements as data sources.
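The time-frequency spectrogram the DRN consumes can be illustrated with a minimal short-time Fourier transform of a synthetic A-scan. This is a self-contained sketch, not the authors' pipeline: the window length, hop size, and single-tone test signal are all illustrative assumptions.

```python
import cmath, math

def stft_magnitude(signal, win_len=64, hop=32):
    """Naive short-time Fourier transform magnitude: one spectrum per window
    (frames x win_len//2 non-negative frequency bins)."""
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        win = signal[start:start + win_len]
        frames.append([abs(sum(win[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                                for n in range(win_len)))
                       for k in range(win_len // 2)])
    return frames

# Synthetic "A-scan": a pure tone that should light up bin 8 of each window.
ascan = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
tfs = stft_magnitude(ascan)
peak_bin = max(range(32), key=lambda k: tfs[0][k])
```

In practice the resulting `tfs` matrix (here 7 frames of 32 bins) would be the input to a regression network in place of the much larger B-scan image.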

2.
Sci Rep ; 14(1): 10081, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698032

ABSTRACT

The use of optimization techniques is a must in the design of contemporary antenna systems. Often, global search methods are necessary, which are associated with high computational costs when conducted at the level of full-wave electromagnetic (EM) models. In this study, we introduce an innovative method for globally optimizing reflection responses of multi-band antennas. Our approach uses surrogates constructed from response features, which smooths the objective function landscape processed by the algorithm. We begin with initial parameter space screening and surrogate model construction using coarse-discretization EM analysis. Subsequently, the surrogate evolves iteratively into a co-kriging model, refining itself with accumulated high-fidelity EM simulation results, with the infill criterion focused on minimizing the predicted objective function. With a particle swarm optimizer (PSO) as the underlying search routine, extensive verification case studies showcase the efficiency and superiority of our procedure over benchmarks. The average optimization cost translates to just around ninety high-fidelity EM antenna analyses, with excellent solution repeatability. Leveraging variable-resolution simulations achieves up to a seventy percent speedup compared to the single-fidelity algorithm.
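The role of PSO as the underlying search routine can be sketched with a minimal global-best particle swarm in plain Python. The objective below is a stand-in sphere function, not the feature-based co-kriging surrogate of the paper, and the inertia/acceleration coefficients are common textbook values.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in objective (sphere function); the paper instead minimizes a
# feature-based co-kriging prediction of the antenna reflection response.
best, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```

Because every evaluation here hits the cheap surrogate rather than a full-wave EM solver, thousands of such PSO evaluations cost far less than the roughly ninety high-fidelity analyses the paper reports.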

3.
Elife ; 12, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38598284

ABSTRACT

Computer models of the human ventricular cardiomyocyte action potential (AP) have reached a level of detail and maturity that has led to an increasing number of applications in the pharmaceutical sector. However, interfacing the models with experimental data can become a significant computational burden. To mitigate this burden, the present study introduces a neural network (NN) that emulates the AP for given maximum conductances of selected ion channels, pumps, and exchangers. Its applicability in pharmacological studies was tested on synthetic and experimental data. The NN emulator potentially enables massive speed-ups compared to regular simulations, and the forward problem (find the drugged AP for pharmacological parameters defined as scaling factors of control maximum conductances) on synthetic data could be solved with average root-mean-square errors (RMSE) of 0.47 mV in normal APs and of 14.5 mV in abnormal APs exhibiting early afterdepolarizations (72.5% of the emulated APs reproduced the abnormality, and most of the remaining APs came close to it). This demonstrates not only very fast and mostly very accurate AP emulation but also the capability of accounting for discontinuities, a major advantage over existing emulation strategies. Furthermore, the inverse problem (find pharmacological parameters for control and drugged APs through optimization) on synthetic data could be solved with high accuracy, shown by a maximum RMSE of 0.22 in the estimated pharmacological parameters. However, notable mismatches were observed between pharmacological parameters estimated from experimental data and distributions obtained from the Comprehensive in vitro Proarrhythmia Assay initiative. These larger inaccuracies can be attributed in particular to the fact that small tissue preparations were studied while the emulator was trained on single-cardiomyocyte data. Overall, our study highlights the potential of NN emulators as a powerful tool for increasing the efficiency of future quantitative systems pharmacology studies.


Subject(s)
Myocytes, Cardiac , Neural Networks, Computer , Humans , Action Potentials , Computer Simulation , Biological Assay
4.
Biomech Model Mechanobiol ; 23(2): 615-629, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38236483

ABSTRACT

Machine learning (ML) techniques have shown great potential in cardiovascular surgery, including real-time stenosis recognition, detection of stented coronary anomalies, and prediction of in-stent restenosis (ISR). However, estimating neointima evolution poses challenges for ML models due to limitations in manual measurements, variations in image quality, low data availability, and the difficulty of acquiring biological quantities. An effective in silico model is necessary to accurately capture the mechanisms leading to neointimal hyperplasia. Physics-informed neural networks (PINNs), a novel deep learning (DL) method, have emerged as a promising approach that integrates physical laws and measurements into modeling. PINNs have demonstrated success in solving partial differential equations (PDEs) and have been applied in various biological systems. This paper aims to develop a robust multiphysics surrogate model for ISR estimation using the physics-informed DL approach, incorporating biological constraints and drug elution effects. The model seeks to enhance prediction accuracy, provide insights into disease progression factors, and promote ISR diagnosis and treatment planning. A set of coupled advection-reaction-diffusion type PDEs is constructed to track the evolution of the influential factors associated with ISR, such as platelet-derived growth factor (PDGF), the transforming growth factor-β (TGF-β), the extracellular matrix (ECM), the density of smooth muscle cells (SMC), and the drug concentration. The nature of PINNs allows for the integration of patient-specific data (procedure-related, clinical and genetic, etc.) into the model, improving prediction accuracy and assisting in the optimization of stent implantation parameters to mitigate risks. This research addresses the existing gap in predictive models for ISR using DL and holds the potential to enhance patient outcomes through predictive risk assessment.
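The core PINN idea, minimizing PDE residuals at collocation points, can be illustrated in heavily simplified form with a linear-in-parameters trial solution for a 1-D diffusion-reaction equation. This is not the paper's coupled advection-reaction-diffusion system or a neural network; the equation, basis, and grid below are illustrative assumptions chosen so the residual minimization reduces to a small least-squares problem.

```python
import math

# PDE: u'' - u = f on (0, 1), u(0) = u(1) = 0, with f chosen so that the
# exact solution is u(x) = sin(pi x).
f = lambda x: -(1 + math.pi ** 2) * math.sin(math.pi * x)
xs = [j / 20 for j in range(1, 20)]            # collocation points

# Trial solution u(x) = sum_k a_k sin(k pi x) satisfies the boundary
# conditions exactly, so minimizing the squared PDE residual at the
# collocation points is a linear least-squares problem in a_k.
K = 3
def phi(k, x):
    """(d^2/dx^2 - 1) applied to the basis function sin(k pi x)."""
    return -((k * math.pi) ** 2 + 1) * math.sin(k * math.pi * x)

# Normal equations A a = b for min_a sum_j (sum_k a_k phi_k(x_j) - f(x_j))^2
A = [[sum(phi(i + 1, x) * phi(j + 1, x) for x in xs) for j in range(K)]
     for i in range(K)]
b = [sum(phi(i + 1, x) * f(x) for x in xs) for i in range(K)]

# Solve the small system by Gaussian elimination with partial pivoting.
for col in range(K):
    p = max(range(col, K), key=lambda r: abs(A[r][col]))
    A[col], A[p] = A[p], A[col]
    b[col], b[p] = b[p], b[col]
    for r in range(col + 1, K):
        m = A[r][col] / A[col][col]
        for c in range(col, K):
            A[r][c] -= m * A[col][c]
        b[r] -= m * b[col]
a = [0.0] * K
for r in range(K - 1, -1, -1):
    a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, K))) / A[r][r]

u = lambda x: sum(a[k] * math.sin((k + 1) * math.pi * x) for k in range(K))
```

A PINN replaces the fixed sine basis with a neural network and the linear solve with gradient descent on the same residual loss, which is what allows nonlinear, coupled systems like the ISR model to be handled.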


Subject(s)
Coronary Restenosis , Deep Learning , Diethylstilbestrol/analogs & derivatives , Drug-Eluting Stents , Percutaneous Coronary Intervention , Humans , Coronary Angiography , Constriction, Pathologic , Stents , Neointima , Treatment Outcome
5.
ACS Appl Bio Mater ; 7(2): 510-527, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-36701125

ABSTRACT

Polymers, with the capacity to tunably alter properties and response based on manipulation of their chemical characteristics, are attractive components in biomaterials. Nevertheless, their potential as functional materials is also inhibited by their complexity, which complicates rational or brute-force design and realization. In recent years, machine learning has emerged as a useful tool for facilitating materials design via efficient modeling of structure-property relationships in the chemical domain of interest. In this Spotlight, we discuss the emergence of data-driven design of polymers that can be deployed in biomaterials with particular emphasis on complex copolymer systems. We outline recent developments, as well as our own contributions and takeaways, related to high-throughput data generation for polymer systems, methods for surrogate modeling by machine learning, and paradigms for property optimization and design. Throughout this discussion, we highlight key aspects of successful strategies and other considerations that will be relevant to the future design of polymer-based biomaterials with target properties.


Subject(s)
Biocompatible Materials , Polymers , Polymers/chemistry , Biocompatible Materials/chemistry , Machine Learning , Computer Simulation
6.
Comput Biol Med ; 168: 107772, 2024 01.
Article in English | MEDLINE | ID: mdl-38064846

ABSTRACT

This study applies non-intrusive polynomial chaos expansion (NIPCE) surrogate modeling to analyze the performance of a rotary blood pump (RBP) across its operating range. We systematically investigate key parameters, including polynomial order, training data points, and data smoothness, while comparing them to test data. Using a polynomial order of 4 and a minimum of 20 training points, we successfully train a NIPCE model that accurately predicts pressure head and axial force within the specified operating point range ([0-5000] rpm and [0-7] l/min). We also assess the NIPCE model's ability to predict two-dimensional velocity data across the given range and find good overall agreement (mean absolute error = 0.1 m/s) with a test simulation under the same operating condition. Our approach extends current NIPCE modeling of RBPs by considering the entire operating range and providing validation guidelines. While acknowledging computational benefits, we emphasize the challenge of modeling discontinuous data and its relevance to clinically realistic operating points. We offer open access to our raw data and Python code, promoting reproducibility and accessibility within the scientific community. In conclusion, this study advances comprehensive NIPCE modeling of RBP performance and underlines how critically NIPCE parameters and rigorous validation affect results.
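The backbone of NIPCE — expanding a response in orthogonal polynomials of the normalized operating parameters — can be sketched in one dimension with a Legendre projection. The quadratic toy response and the midpoint quadrature below are illustrative assumptions; the study's inputs are pump speed and flow rate, and its outputs pressure head and axial force.

```python
import math

def legendre(n, x):
    """Legendre polynomial P_n(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    return p1

def pce_coeffs(f, order, n_quad=2000):
    """Non-intrusive projection: c_n = (2n+1)/2 * int_{-1}^{1} f(x) P_n(x) dx,
    using only black-box evaluations of f (midpoint rule stands in for the
    quadratures used in practice)."""
    h = 2.0 / n_quad
    xq = [-1.0 + (j + 0.5) * h for j in range(n_quad)]
    return [(2 * n + 1) / 2.0 * h * sum(f(x) * legendre(n, x) for x in xq)
            for n in range(order + 1)]

# Toy "pump response" over one normalized operating parameter in [-1, 1].
f = lambda x: 3.0 * x * x                      # exact expansion: P_0 + 2 P_2
c = pce_coeffs(f, order=4)
surrogate = lambda x: sum(c[n] * legendre(n, x) for n in range(len(c)))
```

Once the coefficients are known, the surrogate is a cheap polynomial evaluation, which is what makes sweeping the whole operating range affordable.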


Subject(s)
Heart-Assist Devices , Reproducibility of Results , Computer Simulation , Models, Cardiovascular
7.
Int J Numer Method Biomed Eng ; 40(2): e3797, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38116742

ABSTRACT

In most variance-based sensitivity analysis (SA) approaches applied to biomechanical models, statistical independence of the model inputs is assumed. However, the model inputs are often correlated. This might alter the interpretation of the SA results, which may severely impact the guidance provided during model development and personalization. Potential reasons for the infrequent usage of SA techniques that account for input correlation are the associated high computational costs, especially for models with many parameters, and the fact that the input correlation structure is often unknown. The aim of this study was to propose an efficient correlated global sensitivity analysis method based on a surrogate model. Furthermore, this article demonstrates how correlated SA should be interpreted and how the applied method can guide the modeler during model development and personalization, even when the correlation structure is not entirely known beforehand. The proposed methodology was applied to a typical example of a pulse wave propagation model and produced accurate SA results at a theoretically 27,000× lower computational cost than the correlated SA approach without a surrogate model. Furthermore, our results demonstrate that input correlations can significantly affect SA results, which emphasizes the need to thoroughly investigate the effect of input correlations during model development. We conclude that our proposed surrogate-based SA approach allows modelers to efficiently perform correlated SA on complex biomechanical models and to focus on input prioritization, input fixing and model reduction, or on assessing the dependency structure between parameters.
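A variance-based SA step of this kind can be sketched with a plain Monte Carlo pick-freeze estimator of first-order Sobol indices. Note that this is the uncorrelated baseline case on an additive toy function with known indices 0.2 and 0.8; handling correlated inputs, the paper's actual subject, requires the more involved estimators it discusses.

```python
import random

def sobol_first_order(f, dim, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices for
    independent U(0,1) inputs (the uncorrelated baseline case)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # Evaluate on B with column i "frozen" to the values from A.
        fABi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yb for ya, yb in zip(fA, fABi)) / n - mean ** 2
        indices.append(cov / var)
    return indices

# Additive toy model y = x1 + 2*x2: analytically S1 = 0.2, S2 = 0.8.
S = sobol_first_order(lambda x: x[0] + 2.0 * x[1], dim=2)
```

The tens of thousands of evaluations this estimator needs are exactly why the paper runs it on a surrogate rather than on the pulse wave propagation model itself.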


Subject(s)
Uncertainty , Analysis of Variance
8.
Comput Methods Programs Biomed ; 231: 107402, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36773593

ABSTRACT

BACKGROUND AND OBJECTIVES: Parameter estimation and uncertainty quantification are crucial in computational cardiology, as they enable the construction of digital twins that faithfully replicate the behavior of physical patients. Many model parameters regarding cardiac electromechanics and cardiovascular hemodynamics need to be robustly fitted starting from a few, possibly non-invasive, noisy observations. Moreover, short execution times and a small amount of computational resources are required for effective clinical translation. METHODS: In the framework of Bayesian statistics, we combine Maximum a Posteriori estimation and Hamiltonian Monte Carlo to find an approximation of model parameters and their posterior distributions. Fast simulations and minimal memory requirements are achieved by using an accurate and geometry-specific Artificial Neural Network surrogate model for the cardiac function, matrix-free methods, automatic differentiation and automatic vectorization. Furthermore, we account for the surrogate modeling error and measurement error. RESULTS: We perform three different in silico test cases, ranging from the ventricular function to the entire cardiocirculatory system, involving whole-heart mechanics, arterial and venous hemodynamics. By employing a single central processing unit on a standard laptop, we attain highly accurate estimations for all model parameters in short computational times. Furthermore, we obtain posterior distributions that contain the true values inside the 90% credibility regions. CONCLUSIONS: Many model parameters regarding the entire cardiovascular system can be identified quickly and robustly with minimal hardware requirements. This can be achieved when a small amount of non-invasive data is available and when a high signal-to-noise ratio is present in the quantities of interest.
With these features, our approach meets the requirements for clinical exploitation, while being compliant with Green Computing practices.


Subject(s)
Heart , Neural Networks, Computer , Humans , Uncertainty , Bayes Theorem , Ventricular Function
9.
Biotechnol Bioeng ; 120(3): 803-818, 2023 03.
Article in English | MEDLINE | ID: mdl-36453664

ABSTRACT

Computational models are increasingly used to investigate and predict the complex dynamics of biological and biochemical systems. Nevertheless, governing equations of a biochemical system may not be (fully) known, which would necessitate learning the system dynamics directly from, often limited and noisy, observed data. On the other hand, when expensive models are available, systematic and efficient quantification of the effects of model uncertainties on quantities of interest can be an arduous task. This paper leverages the notion of flow-map (de)compositions to present a framework that can address both of these challenges via learning data-driven models useful for capturing the dynamical behavior of biochemical systems. Data-driven flow-map models seek to directly learn the integration operators of the governing differential equations in a black-box manner, irrespective of structure of the underlying equations. As such, they can serve as a flexible approach for deriving fast-to-evaluate surrogates for expensive computational models of system dynamics, or, alternatively, for reconstructing the long-term system dynamics via experimental observations. We present a data-efficient approach to data-driven flow-map modeling based on polynomial chaos Kriging. The approach is demonstrated for discovery of the dynamics of various benchmark systems and a coculture bioreactor subject to external forcing, as well as for uncertainty quantification of a microbial electrosynthesis reactor. Such data-driven models and analyses of dynamical systems can be paramount in the design and optimization of bioprocesses and integrated biomanufacturing systems.
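The flow-map idea — learning the integration operator x_{t+1} = g(x_t) directly from data — can be sketched with ordinary polynomial least squares on a toy one-dimensional map. The logistic map stands in for an expensive simulator, and the quadratic model class is an illustrative assumption (the paper uses polynomial chaos Kriging).

```python
import random

# Trajectory data from an "unknown" one-step map; the logistic map stands in
# for an expensive biochemical simulator whose equations we never see.
r = 3.5
true_map = lambda x: r * x * (1.0 - x)
rng = random.Random(0)
xs = [rng.random() for _ in range(200)]
ys = [true_map(x) for x in xs]

# Learn the flow map g(x) = a*x + b*x^2 by linear least squares; the 2x2
# normal equations have a closed-form (Cramer's rule) solution.
Sxx = sum(x * x for x in xs)
Sx3 = sum(x ** 3 for x in xs)
Sx4 = sum(x ** 4 for x in xs)
bx = sum(x * y for x, y in zip(xs, ys))
bx2 = sum(x * x * y for x, y in zip(xs, ys))
det = Sxx * Sx4 - Sx3 * Sx3
a = (bx * Sx4 - bx2 * Sx3) / det
b = (Sxx * bx2 - Sx3 * bx) / det
g = lambda x: a * x + b * x * x

# Reconstruct long-term dynamics from the learned flow map alone.
x_true = x_learn = 0.2
for _ in range(50):
    x_true, x_learn = true_map(x_true), g(x_learn)
```

Because the learned operator is iterated rather than re-fitted, one cheap model reproduces the long-horizon dynamics, which is the property the paper exploits for both surrogate construction and uncertainty quantification.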


Subject(s)
Algorithms , Nonlinear Dynamics , Uncertainty , Bioreactors , Models, Biological
10.
J Mech Behav Biomed Mater ; 137: 105577, 2023 01.
Article in English | MEDLINE | ID: mdl-36410165

ABSTRACT

BACKGROUND: Intra-arterial thrombectomy is the main treatment for acute ischemic stroke due to large vessel occlusions and can consist of mechanically removing the thrombus with a stent-retriever. One cause of procedural failure is fragmentation of the thrombus and the formation of micro-emboli, which are difficult to remove. This work proposes a methodology for creating a low-dimensional surrogate model of the mechanical thrombectomy procedure, trained on realizations from high-fidelity simulations, that estimates the evolution of the maximum first principal strain in the thrombus. METHOD: A parametric finite-element model was created, composed of a tapered vessel, a thrombus, a stent-retriever and a catheter. A design of experiments was conducted to sample 100 combinations of the model parameters, and the corresponding thrombectomy simulations were run and post-processed to extract the maximum first principal strain in the thrombus during the procedure. A surrogate model was then built with a combination of principal component analysis and Kriging. RESULTS: The surrogate model was chosen after a sensitivity analysis on the number of principal components and was tested on 10 additional cases. The model predicted the strain curves with correlation above 0.9 and a maximum error of 28%, with an error below 20% in 60% of the test cases. CONCLUSIONS: The surrogate model provides nearly instantaneous estimates and constitutes a valuable tool for evaluating the risk of thrombus rupture during pre-operative planning for the treatment of acute ischemic stroke.


Subject(s)
Ischemic Stroke , Thrombosis , Humans , Thrombectomy/methods , Stents , Catheters
11.
Bioengineering (Basel) ; 11(1), 2023 Dec 28.
Article in English | MEDLINE | ID: mdl-38247914

ABSTRACT

Subject-specific hip capsule models could offer insights into impingement and dislocation risk when coupled with computer-aided surgery, but model calibration is time-consuming using traditional techniques. This study developed a framework for instantaneously generating subject-specific finite element (FE) capsule representations from regression models trained with a probabilistic approach. A validated FE model of the implanted hip capsule was evaluated probabilistically to generate a training dataset relating capsule geometry and material properties to hip laxity. Multivariate regression models were trained using 90% of trials to predict capsule properties based on hip laxity and attachment site information. The regression models were validated using the remaining 10% of the training set by comparing differences in hip laxity between the original trials and the regression-derived capsules. Root mean square errors (RMSEs) in laxity predictions ranged from 1.8° to 2.3°, depending on the type of laxity used in the training set. The RMSE, when predicting the laxity measured from five cadaveric specimens with total hip arthroplasty, was 4.5°. Model generation time was reduced from days to milliseconds. The results demonstrated the potential of regression-based training to instantaneously generate subject-specific FE models and have implications for integrating subject-specific capsule models into surgical planning software.
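The regression step — mapping measured laxity back to model properties and validating on a held-out split with RMSE — can be sketched with closed-form simple linear regression on synthetic data. The linear stiffness-laxity relation, noise level, and 90%/10% split below are illustrative assumptions; the study trains multivariate models on probabilistic FE trials.

```python
import math, random

rng = random.Random(7)

# Synthetic trials: a capsule "stiffness" parameter drives a laxity angle
# (degrees) plus measurement noise; stands in for the probabilistic FE data.
trials = []
for _ in range(200):
    stiffness = rng.uniform(1.0, 5.0)
    laxity = 30.0 - 5.0 * stiffness + rng.gauss(0.0, 0.5)
    trials.append((laxity, stiffness))
train, held_out = trials[:180], trials[180:]       # 90% / 10% split

# Closed-form simple linear regression: stiffness ~= w0 + w1 * laxity.
n = len(train)
mx = sum(l for l, _ in train) / n
my = sum(s for _, s in train) / n
w1 = (sum((l - mx) * (s - my) for l, s in train)
      / sum((l - mx) ** 2 for l, _ in train))
w0 = my - w1 * mx

# Validate on the held-out 10% with root-mean-square error, as the study does.
rmse = math.sqrt(sum((w0 + w1 * l - s) ** 2 for l, s in held_out)
                 / len(held_out))
```

Once `w0` and `w1` are fitted, generating a subject-specific parameter from a new laxity measurement is a single arithmetic expression, which is why model generation drops from days to milliseconds.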

12.
Data Sci Eng ; 7(4): 402-427, 2022.
Article in English | MEDLINE | ID: mdl-36345394

ABSTRACT

Surrogate modeling has been popularized as an alternative to full-scale models in complex engineering processes such as manufacturing and computer-assisted engineering. The modeling demand exponentially increases with complexity and number of system parameters, which consequently requires higher-dimensional engineering solving techniques. This is known as the curse of dimensionality. Surrogate models are commonly used to replace costly computational simulations and modeling of complex geometries. However, an ongoing challenge is to reduce execution and memory consumption of high-complexity processes, which often exhibit nonlinear phenomena. Dimensionality reduction algorithms have been employed for feature extraction, selection, and elimination for simplifying surrogate models of high-dimensional problems. By applying dimensionality reduction to surrogate models, less computation is required to generate surrogate model parts while retaining sufficient representation accuracy of the full process. This paper aims to review the current literature on dimensionality reduction integrated with surrogate modeling methods. A review of the current state-of-the-art dimensionality reduction and surrogate modeling methods is introduced with a discussion of their mathematical implications, applications, and limitations. Finally, current studies that combine the two topics are discussed and avenues of further research are presented.

13.
Procedia Comput Sci ; 212: 340-347, 2022.
Article in English | MEDLINE | ID: mdl-36437869

ABSTRACT

In this research, we aimed to assess the possibility of using surrogate modeling methods to replace time-consuming calculations in modeling COVID-19 dynamics. Gaussian process regression (GPR) was used as a surrogate to replace detailed simulations by a COVID-19 multiagent model. Experiments were conducted with various kernels; according to the model quality metrics, the kernels for which the surrogate gives the most accurate results were identified (the Rational Quadratic and Additive kernels). It was demonstrated that smoothing the dynamics of COVID-19 propagation makes it possible to achieve greater accuracy in GPR training. The obtained results demonstrate the potential of surrogate modeling methods for conducting uncertainty quantification of the multiagent model of COVID-19 propagation.
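A minimal GPR surrogate can be written in a few lines of plain Python. The RBF kernel, lengthscale, nugget, and smooth toy curve below are illustrative assumptions (the paper selects Rational Quadratic and Additive kernels against quality metrics on multiagent COVID-19 simulations).

```python
import math

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel with lengthscale ls."""
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def solve(A, rhs):
    """Gauss-Jordan solve of A x = rhs (fine for tiny systems)."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [row[-1] for row in M]

def gpr_mean(xtr, ytr, xq, nugget=1e-6):
    """GPR posterior mean: k(xq, X) (K + nugget*I)^-1 y."""
    K = [[rbf(xi, xj) + (nugget if i == j else 0.0)
          for j, xj in enumerate(xtr)] for i, xi in enumerate(xtr)]
    alpha = solve(K, list(ytr))
    return sum(rbf(xq, xi) * ai for xi, ai in zip(xtr, alpha))

# Smooth toy "epidemic curve" sampled on a coarse grid; query between points.
xtr = [0.1 * i for i in range(11)]
ytr = [math.sin(2 * math.pi * x) for x in xtr]
pred = gpr_mean(xtr, ytr, 0.25)
```

The smoothness requirement the paper notes is visible here: the kernel interpolates a smooth curve well from few samples, whereas jagged raw multiagent output would need far more training points.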

14.
Comput Biol Med ; 149: 105963, 2022 10.
Article in English | MEDLINE | ID: mdl-36058066

ABSTRACT

The computational requirements of Huxley-type muscle models are substantially higher than those of Hill-type models, making large-scale simulations impractical or even impossible. We constructed a data-driven surrogate model that behaves like the original Huxley muscle model but consumes less computational time and memory, enabling efficient use in multiscale simulations of the cardiac cycle. Data were collected from numerical simulations to train deep neural networks so that their behavior resembles that of the Huxley model. Since the Huxley muscle model is history-dependent, time series analysis is required to take the previous states of the muscle model into account. Recurrent and temporal convolutional neural networks, which are typically used for time series analysis, were trained to produce stress and instantaneous stiffness. Once the networks were trained, we compared the produced stresses and the achieved speed-up against the original Huxley model, which indicates the potential of the surrogate model to replace it efficiently. We present the creation procedure of the surrogate model and its integration into the finite element solver. Based on the similarity between the surrogate model and the original model across several types of numerical experiments, and on the achieved speed-up of an order of magnitude, we conclude that the surrogate model has the potential to replace the original model within multiscale simulations. Finally, we used our surrogate model to simulate a full cardiac cycle to demonstrate its application to larger-scale problems.


Subject(s)
Models, Biological , Muscles , Muscle Contraction , Muscles/physiology , Myocardial Contraction , Neural Networks, Computer
15.
J Biomech Eng ; 144(12), 2022 12 01.
Article in English | MEDLINE | ID: mdl-35767343

ABSTRACT

Modeling biological soft tissue is complex in part due to material heterogeneity. Microstructural patterns, which play a major role in defining the mechanical behavior of these tissues, are both challenging to characterize and difficult to simulate. Recently, machine learning (ML)-based methods to predict the mechanical behavior of heterogeneous materials have made it possible to more thoroughly explore the massive input parameter space associated with heterogeneous blocks of material. Specifically, we can train ML models to closely approximate computationally expensive heterogeneous material simulations where the ML model is trained on datasets of simulations with relevant spatial heterogeneity. However, when it comes to applying these techniques to tissue, there is a major limitation: the number of useful examples available to characterize the input domain under study is often limited. In this work, we investigate the efficacy of both ML-based generative models and procedural methods as tools for augmenting limited input pattern datasets. We find that a style-based generative adversarial network with an adaptive discriminator augmentation mechanism is able to successfully leverage just 1000 example patterns to create authentic generated patterns. In addition, we find that diverse generated patterns with adequate resemblance to real patterns can be used as inputs to finite element simulations to meaningfully augment the training dataset. To enable this methodological contribution, we have created an open access finite element analysis simulation dataset based on Cahn-Hilliard patterns. We anticipate that future researchers will be able to leverage this dataset and build on the work presented here.


Subject(s)
Finite Element Analysis , Computer Simulation
16.
Materials (Basel) ; 15(10), 2022 May 20.
Article in English | MEDLINE | ID: mdl-35629674

ABSTRACT

The digitalization of manufacturing processes offers great potential in quality control, traceability, and the planning and setup of production. In this regard, process simulation is a well-known technology and a key step in the design of manufacturing processes. However, process simulations are computationally and time-expensive, typically beyond the manufacturing-cycle time, severely limiting their usefulness in real-time process control. Machine Learning-based surrogate models can overcome these drawbacks and offer the possibility of achieving a soft real-time response, which can potentially be developed into fully closed-loop manufacturing systems, at a computational cost that can be realistically implemented in an industrial setting. This paper explores the novel concept of using a surrogate model to analyze the case of the press hardening of a steel sheet of 22MnB5. This hot sheet metal forming process involves a crucial heat treatment step, directly related to the final part quality. Given its common use in high-responsibility automobile parts, this process is an interesting candidate for digitalization in order to ensure production quality and traceability. A comparison of different data and model training strategies is presented. Finite element simulations for a transient heat transfer analysis are performed with ABAQUS software, and they are used for the training data generation to effectively implement an ML-based surrogate model capable of predicting key process outputs for entire batch productions. The resulting final surrogate predicts the behavior and evolution of the most important temperature variables of the process in a wide range of scenarios, with a mean absolute error around 3 °C, while reducing the computation time by four orders of magnitude with respect to the simulations. Moreover, the methodology presented is not only relevant for manufacturing purposes, but can be a technology enabler for advanced systems, such as digital twins and autonomous process control.

17.
Article in English | MEDLINE | ID: mdl-35422534

ABSTRACT

High-fidelity cardiac models using attribute-rich finite element based models have been developed to a very mature stage. However, such finite-element based approaches remain time consuming, which has limited their clinical use. There remains a need for alternative cardiac simulation methods capable of high-fidelity simulations in clinically relevant time frames. Surrogate models are one approach; they traditionally use a data-driven approach for training, requiring the generation of a sufficiently large number of simulation results as the training dataset. Alternatively, a physics-informed neural network can be trained by minimizing the PDE residuals or energy potentials. However, this approach does not provide a general way to easily use existing finite element models. To address these challenges, we developed a hybrid approach that seamlessly bridges a neural network surrogate model with a differentiable finite element domain representation (NNFE). Given its importance in cardiac simulations, we applied this approach to simulations of the hyperelastic mechanical behavior of ventricular myocardium from a recent 3D kinematic constitutive model (J Mech Behav Biomed Mater, 2020, doi: 10.1016/j.jmbbm.2019.103508). We utilized a cuboidal domain and conducted numerical studies of individual myocardium specimens discretized by a finite element mesh and assigned experimentally obtained myofiber architectures. Both parameterized Dirichlet and Neumann boundary conditions were studied. We developed a second-order Newton optimization method, instead of using stochastic gradient descent, to train the neural network efficiently. The resulting trained neural network surrogate model demonstrated excellent agreement with the corresponding 'ground truth' finite element solutions over the entire physiological deformation range. More importantly, the NNFE approach provided significantly decreased computational times across a range of finite element mesh sizes for online predictions. For example, as the finite element mesh size increased from 2744 to 175615 elements, the NNFE computational time increased from 0.1108 s to 0.1393 s, while that of the 'ground truth' FE model increased from 4.541 s to 719.9 s. These results suggest that NNFE run times can be significantly reduced compared with traditional large-deformation finite element solution methods. The trade-off is that the NNFE must be trained off-line over a range of anticipated physiological responses. However, training would only have to be performed once before any number of application uses. Moreover, since the NNFE is an analytical function, its computational advantage will be amplified as the corresponding problem becomes more complex.

18.
Int J Numer Method Biomed Eng ; 38(6): e3598, 2022 06.
Article in English | MEDLINE | ID: mdl-35343089

ABSTRACT

Nanoparticles (NPs) are used for drug delivery with enhanced selectivity and reduced side-effect toxicity in cancer treatments. In the literature, the influence of the NPs' mechanical and geometrical properties on their cellular uptake has been studied through experimental investigations. However, because it is difficult to vary the parameters independently in such a complex system, it remains hard to draw firm conclusions about the influence of each one of them on the cellular internalization of an NP. In this context, different mechanical/mathematical models for the cellular uptake of NPs have been developed. In this paper, we numerically investigate the influence of the NP's aspect ratio, the membrane tension and the cell-NP adhesion on the uptake of the NP, using the model introduced in [1] coupled with a numerical stochastic scheme to measure the weight of each of the aforementioned parameters. The results reveal that the aspect ratio of the particle is the most influential parameter on the wrapping of the particle by the cell membrane; adhesion then contributes twice as much as the membrane tension. Our numerical results match previous experimental observations.


Subject(s)
Nanoparticles , Biological Transport , Cell Membrane
19.
Ann Biomed Eng ; 50(4): 467-481, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35212855

ABSTRACT

A current focus of medical research is the preparation of digital twins. In this context, the first step is the preparation of reliable numerical models. This is a challenging task, since the exact device geometry and material properties are rarely known unless the study is performed in collaboration with the manufacturer. Modeling Ni-Ti stents can be highlighted as a worst-case scenario, owing to both their complex geometrical features and their non-linear material response. Indeed, even if the limitations in describing the geometry can be overcome, many difficulties remain in characterizing the material, which can vary with the manufacturing process and requires many parameters for its description. The purpose of this work is to propose a coupled experimental and computational workflow to identify the set of material properties for Ni-Ti stents resembling commercial devices. This has been achieved from non-destructive tensile tests on the devices compared with results from Finite Element Analysis (FEA). A surrogate modeling approach is proposed for the identification of the material parameters, based on a minimization problem over a database of Ni-Ti material responses obtained with FEA for a series of different parameter sets. The reliability of the final result was validated through comparison with the output of additional experiments.
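The database-minimization idea can be sketched as follows: precompute responses for a grid of candidate parameter sets, then select the set whose response is closest to the measurement. The bilinear `response` function below is a hypothetical stand-in for actual Ni-Ti FEA output, and the parameter grid is invented for illustration.

```python
import numpy as np

disp = np.linspace(0.0, 1.0, 50)

def response(E, plateau):
    # Toy superelastic-like curve: linear loading capped at a plateau.
    # A real Ni-Ti FEA response would come from the finite element model.
    return np.minimum(E * disp, plateau)

# Database of responses for a grid of candidate material parameters.
database = {(E, p): response(E, p)
            for E in (20.0, 40.0, 60.0)
            for p in (5.0, 10.0, 15.0)}

# Synthetic "measured" tensile curve: true parameters plus noise.
rng = np.random.default_rng(1)
measured = response(40.0, 10.0) + rng.normal(0.0, 0.05, disp.size)

# Identification = minimization of the discrepancy over the database.
best = min(database, key=lambda k: np.sum((database[k] - measured) ** 2))
# best recovers the parameters that generated the data: (40.0, 10.0)
```

In practice the database would be interpolated or a continuous optimizer would refine the best grid point, but the discrete selection above conveys the core of the approach.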


Subject(s)
Nickel , Titanium , Finite Element Analysis , Materials Testing , Reproducibility of Results , Stents
20.
Membranes (Basel) ; 12(2)2022 Feb 09.
Article in English | MEDLINE | ID: mdl-35207120

ABSTRACT

An ever-growing population together with globally depleting water resources places immense stress on water supply systems. Desalination technologies can reduce this stress by generating fresh water from saline sources. Reverse osmosis (RO), the industry-leading desalination technology, typically involves a complex network of membrane modules that separate unwanted particles from water. The optimal design and operation of these complex RO systems can be computationally expensive. In this work, we present a modeling and optimization strategy for the optimal operation of an industrial-scale RO plant. We employ a feed-forward artificial neural network (ANN) surrogate model with rectified linear units as activation functions to capture the membrane behavior accurately. Several ANN set-ups and surrogate models are presented and evaluated, based on data collected from the H2Oaks RO desalination plant in South-Central Texas. The developed ANN is then transformed into a mixed-integer linear programming formulation for the purpose of minimizing energy consumption while maximizing water utilization. Trade-offs between the two competing objectives are visualized in a Pareto front, where indirect savings can be uncovered by comparing energy consumption across an array of water recoveries and feed flows.
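The transformation of a ReLU network into a mixed-integer linear program rests on a standard trick: each unit y = max(0, x) with known pre-activation bounds L <= x <= U is encoded with one binary variable and big-M constraints. The sketch below checks that encoding directly, without a solver; it illustrates the general technique, not the paper's specific formulation.

```python
# Big-M encoding of a ReLU unit y = max(0, x), the building block for
# embedding a trained ReLU network in a MILP. With bounds L <= x <= U
# and a binary indicator z, the constraints are:
#   y >= x,   y >= 0,   y <= x - L * (1 - z),   y <= U * z
# z = 1 forces y = x (active unit); z = 0 forces y = 0 (inactive).

def relu_feasible(x, y, z, L, U, eps=1e-9):
    """Check whether (x, y, z) satisfies the big-M ReLU constraints."""
    return (y >= x - eps and y >= -eps
            and y <= x - L * (1 - z) + eps
            and y <= U * z + eps)

# Only y = max(0, x) is feasible under the correct binary choice:
assert relu_feasible(x=2.0, y=2.0, z=1, L=-5.0, U=5.0)       # active unit
assert relu_feasible(x=-3.0, y=0.0, z=0, L=-5.0, U=5.0)      # inactive unit
assert not relu_feasible(x=2.0, y=0.0, z=1, L=-5.0, U=5.0)   # y >= x violated
assert not relu_feasible(x=-3.0, y=1.0, z=0, L=-5.0, U=5.0)  # y <= U*z violated
```

Applying this per unit yields a MILP whose objectives (e.g., energy versus recovery) remain linear, which is what makes a Pareto sweep over operating points tractable.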
