ABSTRACT
Digital twins in biomedical research, i.e., virtual replicas of biological entities such as cells, organs, or entire organisms, hold great potential to advance personalized healthcare. Because all biological processes unfold in space, there is growing interest in modeling biological entities within their native context. Leveraging generative artificial intelligence (AI) and high-volume biomedical data profiled with spatial technologies, researchers can recreate spatially resolved digital representations of a physical entity with high fidelity. Applied to biomedical fields such as computational pathology, oncology, and cardiology, these generative digital twins (GDTs) enable compelling in silico modeling of simulated interventions, facilitating the exploration of 'what if' causal scenarios for clinical diagnostics and treatments tailored to individual patients. Here, we outline recent advancements in this novel field and discuss the challenges and future research directions.
ABSTRACT
Digital twins, as virtual representations of physical systems, enable simulation, comprehensive analysis, and prediction. They are finding increasing interest and application in the healthcare sector, with a particular focus on digital twins of the brain. We discuss how digital twins in neuroscience enable the modeling of brain function and pathology, as they offer an in silico approach to studying the brain and illustrating the complex relationships between brain network dynamics and related functions. To showcase the capabilities of digital twinning in neuroscience, we demonstrate how the impact of brain tumors on the brain's physical structure and functioning can be modeled in relation to the philosophical concept of plasticity. Against this technically derived backdrop, which assumes that the brain's nonlinear behavior toward improvement and repair can be modeled and predicted from MRI data, we further explore the philosophical insights of Catherine Malabou, who emphasizes the brain's dual capacity for adaptive and destructive plasticity. We discuss how far Malabou's ideas provide a more holistic theoretical framework for understanding how digital twins can model the brain's response to injury and pathology. Her concept of plasticity as both adaptive and destructive offers a way to address aspects of neuroscience that remain incomputable, as well as the sometimes seemingly unfavorable dynamics of neuroplasticity, helping to bridge the gap between theoretical research and clinical practice.
ABSTRACT
The oil and gas industry has undergone a comprehensive digital transformation in which digital twins are leveraged to enable real-time data analysis, providing predictive and diagnostic engineering insights. With the implementation of digital twins, the potential for developing intelligent oil and gas fields is substantial. A digital twin framework for gear rack drilling rigs is proposed, built upon an understanding of the digital twin composition and characteristics of the gear rack drilling rig lifting system. The framework encompasses descriptions of digital twin characteristics specific to drilling rigs, the application environment, and behavioral rules. The modeling approach integrates mechanism modeling, real-time performance response, instantaneous data transmission, and data visualization. To illustrate this framework, exemplary case studies involving the transmission unit and support unit of the lifting system are presented. Mechanism models are constructed to analyze dynamic gear performance and support unit response. Real-time data transmission is facilitated through sensor-based monitoring, enhancing the speed and accuracy of dynamic performance prediction through a synergy of mechanism modeling, machine learning, and real-time data analysis. The digital twin of the lifting system is visualized on the Unity3D platform. Furthermore, functionalities for data acquisition, processing, and visualization across diverse application scenarios are encapsulated into modular components, streamlining the creation of high-fidelity digital twins. The framework and modeling methodologies presented herein can serve as a foundational and methodological guide for the exploration and implementation of digital twin technology within the oil and gas industry, ultimately fostering its advancement in this sector.
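A conceptual sketch (not the authors' implementation) of the modular encapsulation idea: acquisition and mechanism-model components share a common step() interface so a rig-specific digital twin can be assembled as a pipeline. All class names, the stiffness value, and the channel names are illustrative assumptions.

```python
# Conceptual sketch (not the authors' code): modular digital twin
# components behind a common step() interface. Names and values are
# hypothetical.
from dataclasses import dataclass
from typing import Protocol

class TwinModule(Protocol):
    def step(self, sample: dict) -> dict: ...

@dataclass
class SensorAcquisition:
    channel: str
    def step(self, sample: dict) -> dict:
        # Stand-in for reading one monitored channel in real time
        return {self.channel: sample.get(self.channel, 0.0)}

@dataclass
class SupportUnitModel:
    stiffness: float  # simplified mechanism model: load -> deflection
    def step(self, sample: dict) -> dict:
        sample["deflection_m"] = sample.get("hook_load_N", 0.0) / self.stiffness
        return sample

def run_pipeline(modules: list, sample: dict) -> dict:
    for module in modules:
        sample = module.step(sample)
    return sample  # e.g., forwarded to a Unity3D visualization client

print(run_pipeline([SensorAcquisition("hook_load_N"), SupportUnitModel(5e7)],
                   {"hook_load_N": 2.5e6}))
```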
ABSTRACT
Resistive Random Access Memory (RRAM) has gained considerable momentum due to its non-volatility and energy efficiency. Material and device scientists have been proposing novel material stacks that can mimic the "ideal memristor," delivering performance, energy efficiency, reliability, and accuracy. However, designing RRAM-based systems is challenging. Engineering a new material stack, designing a device, and experimenting take significant time for material and device researchers. Furthermore, the acceptability of a device is ultimately decided at the system level. We see a gap here: material and device researchers need a "push button" modeling framework that allows them to evaluate the efficacy of a device at the system level during early design stages. Speed, accuracy, and adaptability are the fundamental requirements of this modeling framework. In this paper, we propose a digital twin (DT)-like modeling framework that automatically creates RRAM device models from device measurement data. Furthermore, the model incorporates the peripheral circuit to ensure accurate energy and performance evaluations. We demonstrate DT generation and DT usage for multiple RRAM technologies and applications and illustrate the achieved performance of our GPU implementation. We conclude with the application of our modeling approach to measurement data from two distinct fabricated devices, validating its effectiveness on a neural network processing an electrocardiogram (ECG) dataset and incorporating Fault Aware Training (FAT).
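As a hedged illustration of the "push button" idea of building a device model directly from measurement data, the sketch below fits a generic nonlinear RRAM I-V relation to a measured sweep by least squares. The file name, model form, and initial guesses are assumptions, not the paper's actual framework.

```python
# Hedged sketch, not the paper's framework: fit a generic nonlinear
# RRAM I-V model to a measured sweep so it can act as a device model.
# File name, model form, and initial guesses are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def rram_iv(v, g, a, b):
    """Ohmic term plus a sinh tail, a common compact RRAM conduction form."""
    return g * v + a * np.sinh(b * v)

# Hypothetical measurement file: columns = voltage [V], current [A]
v_meas, i_meas = np.loadtxt("device_sweep.csv", delimiter=",",
                            skiprows=1, unpack=True)

popt, pcov = curve_fit(rram_iv, v_meas, i_meas, p0=[1e-4, 1e-6, 1.0])
print("fitted [g, a, b]:", popt)
print("1-sigma errors:", np.sqrt(np.diag(pcov)))
```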
ABSTRACT
A new method is proposed for fault diagnosis that combines the high-fidelity behavior of a digital twin (DT) with the data mining capabilities of deep learning (DL). A fault distribution GAN (FDGAN) was built to map between virtual and physical entities using data from the established test platform. Finally, MobileViG was employed to validate the model and diagnose faults. The accuracy of the proposed method with 600 and 800 training samples was 88.4% and 99.5%, respectively; the latter surpasses the accuracies of methods based on CycleGAN (98.86%), CACGAN (94.92%), ACGAN (86.45%), ML1D-GAN (82.33%), and transfer learning (99.38%). Therefore, with the integration of global connectivity, an innovative network structure, and training methods, FDGAN can effectively address challenges such as network degradation, limited feature extraction in small windows, and insufficient model robustness.
ABSTRACT
Machine tool accuracy is greatly influenced by geometric and thermal errors that cause positioning deviations within its working volume. Conventionally, these two error sources are treated separately, with distinct procedures employed for their characterization and correction. This research proposes a unified volumetric error compensation approach in terms of a calibration procedure and error compensation model, which considers geometric and thermal errors as a single error source that exhibits temporal variation primarily due to changes in the machine's thermal state. Building upon previous works that introduced a fully automated volumetric calibration procedure capable of characterizing the variation in volumetric error over time, this study extends this methodology, incorporating multiple temperature sensors distributed throughout the machine and generating a digital twin based on a volumetric error compensation model capable of predicting and compensating for the volumetric error over time at any point in the workspace, using temperature measurements and axis positions as inputs. This methodology is applied to a medium-sized milling machine tool. The digital twin is trained and validated on volumetric calibration tests, wherein various controlled heat sources are employed to induce thermal variations while measuring the temperatures in the machine.
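A minimal sketch of the kind of data-driven surrogate such a digital twin could use, assuming calibration data stored as NumPy arrays: a polynomial ridge regression maps axis positions and temperature readings to the three volumetric error components, and the negated prediction serves as the compensation command. The feature layout and file names are illustrative assumptions.

```python
# Illustrative surrogate only, not the paper's implementation: predict
# the three volumetric error components from axis positions and
# temperature sensor readings, then command the negated prediction.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Assumed layout: X = [x, y, z positions, T1..Tk temperatures],
# y = [ex, ey, ez] volumetric error components (hypothetical files).
X = np.load("calibration_inputs.npy")
y = np.load("volumetric_errors.npy")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Degree-2 terms capture position-temperature interactions
twin = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-2))
twin.fit(X_tr, y_tr)
print("R^2 on held-out calibration points:", twin.score(X_te, y_te))

correction = -twin.predict(X_te[:1])  # compensation command for one point
```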
ABSTRACT
The digital twin (DT), which involves creating a virtual replica of a physical asset or system, has emerged as a transformative set of tools across various industries. In the oil and gas (O&G) industry, the development of DTs represents a significant evolution in how companies manage complex operations, enhance safety, and optimize decision-making processes. Despite these significant advancements, the underlying tools, technologies, and frameworks for developing DTs in O&G applications remain non-standardized and unfamiliar to many O&G practitioners, highlighting the need for a systematic literature review (SLR) on the topic. Thus, this paper offers an SLR of the existing literature on DT development for O&G from 2018 onwards, utilizing Scopus and Web of Science Core Collection. We provide a comprehensive overview of this field, demonstrate how it is evolving, and highlight standard practices and research opportunities in the area. We perform broad classifications of the 98 studies, categorizing the DTs by their development methodologies, implementation objectives, data acquisition, asset digital development, data integration and preprocessing, data analysis and modeling, evaluation and validation, and deployment tools. We also include a bibliometric analysis of the selected papers, highlighting trends and key contributors. Given the increasing number of new DT developments in O&G and the many new technologies available, we hope to provide guidance on the topic and promote knowledge production and growth concerning the development of DTs for O&G.
ABSTRACT
We developed and validated digital twins (DTs) for the contrast sensitivity function (CSF) across 12 prediction tasks using a data-driven, generative model approach based on a hierarchical Bayesian model (HBM). For each prediction task, we used the HBM to compute the joint distribution of CSF hyperparameters and parameters at the population, subject, and test levels, based on a combination of historical data (N = 56), any new data from additional subjects (N = 56), and "missing data" from unmeasured conditions. The posterior distributions of the parameters in the unmeasured conditions were used as input to the CSF generative model to generate predicted CSFs. In addition to their accuracy and precision, these predictions were evaluated for their potential as informative priors that enable the generation of synthetic quantitative contrast sensitivity function (qCSF) data or the rescoring of existing qCSF data. The DTs demonstrated high accuracy in group-level predictions across all tasks and maintained accuracy at the individual subject level when new data were available, with accuracy comparable to, and precision lower than, the observed data. DT predictions could reduce the data collection burden by more than 50% in qCSF testing with 25 trials. Although further research is necessary, this study demonstrates the potential of DTs in vision assessment. Predictions from DTs could improve the accuracy, precision, and efficiency of vision assessment and enable personalized medicine, offering more efficient and effective patient care solutions.
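For intuition, the sketch below sets up a minimal hierarchical Bayesian model in PyMC that is loosely analogous to the HBM described above: subject-level parameters are drawn from population hyperparameters, so sparsely measured subjects borrow strength from the group. The variable names, priors, and synthetic data are assumptions, not the study's actual model.

```python
# Minimal hierarchical sketch, loosely analogous to the HBM above;
# priors, variable names, and data are assumptions, not the study's model.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_subjects, n_obs = 8, 20
subj = np.repeat(np.arange(n_subjects), n_obs)
# Synthetic per-subject log-sensitivity observations
y = rng.normal(1.5, 0.3, n_subjects)[subj] + rng.normal(0, 0.2, subj.size)

with pm.Model():
    mu_pop = pm.Normal("mu_pop", 1.5, 1.0)        # population mean
    sd_pop = pm.HalfNormal("sd_pop", 0.5)         # between-subject spread
    theta = pm.Normal("theta", mu_pop, sd_pop, shape=n_subjects)
    sd_obs = pm.HalfNormal("sd_obs", 0.5)         # within-subject noise
    pm.Normal("obs", theta[subj], sd_obs, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# Posterior draws of theta for an unmeasured condition can serve as the
# informative prior that seeds the CSF generative model.
```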
Subjects
Bayes Theorem, Contrast Sensitivity, Humans, Contrast Sensitivity/physiology, Female, Male, Adult, Middle Aged, Young Adult
ABSTRACT
In this work, a high-fidelity digital twin was developed to support the design and testing of control strategies for drug product manufacturing via direct compression. The high-fidelity digital twin platform was based on typical pharmaceutical equipment, materials, and continuous direct compression processes. The paper describes in detail the material characterization, the Discrete Element Method (DEM) model, and the DEM model parameter calibration approach, and compares the system's response with experimental results for stepwise changes in the API concentration at the mixer inlet. A calibration method for estimating the parameters of a cohesive DEM contact model was introduced. To ensure correct predictions across a wide range of processes, the calibration approach comprised four characterization experiments using different stress states and different measurement principles: the bulk density test, compression with elastic recovery, the shear cell, and the rotating drum. To demonstrate the sensitivity of the DEM contact parameters to the process response, two powder characterization data sets with different powder flowability were applied. The results showed that the calibration method could differentiate between different material batches of the same blend and that small-scale material characterization tests could be used to predict the residence time distribution in a continuous manufacturing process.
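The sketch below illustrates the multi-test calibration idea under stated assumptions: DEM contact parameters are chosen so that simulated responses jointly match the four characterization experiments. The simulate() function is a placeholder response surface standing in for actual DEM runs, and the target values are invented for illustration.

```python
# Hedged sketch of joint calibration against the four characterization
# tests. simulate() is a placeholder response surface standing in for
# real DEM runs; target values are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

targets = {"bulk_density": 0.48, "elastic_recovery": 0.12,
           "shear_stress": 3.1, "drum_angle_deg": 34.0}

def simulate(params):
    surface_energy, friction = params
    return {  # placeholder surrogate, not a DEM solver
        "bulk_density": 0.55 - 0.30 * surface_energy,
        "elastic_recovery": 0.05 + 0.40 * surface_energy,
        "shear_stress": 1.0 + 8.0 * surface_energy + 2.0 * friction,
        "drum_angle_deg": 20.0 + 25.0 * friction,
    }

def residuals(params):
    sim = simulate(params)
    return [(sim[k] - targets[k]) / targets[k] for k in targets]

fit = least_squares(residuals, x0=[0.2, 0.5], bounds=([0, 0], [1, 1]))
print("calibrated [surface_energy, friction]:", fit.x)
```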
ABSTRACT
This paper is the second in a series of two describing the application of the discrete element method (DEM) and reduced-order modeling to predict how disturbances in drug substance concentration at the inlet of a continuous powder mixer affect the drug substance concentration at the mixer outlet. In the companion publication, small-scale material characterization tests, careful DEM parameter calibration, and DEM simulations of the manufacturing process were used to develop reliable residence time distribution (RTD) models. In the current work, the same calibration workflow was employed to evaluate the predictive ability of the resulting reduced-order model over an extended design space. DEM simulations were extrapolated using a relay race method, and the cumulative RTD was accurately parameterized using the n-CSTR (tanks-in-series) model. Through experiments and simulations, the calibrated DEM model predicted the response of a continuous powder mixer to step changes in the inlet concentration of an API. Carefully calibrated DEM models were thus used to guide and reduce experimental work and to establish an adequate control strategy. A further reduction in computational effort was obtained by using the relay race method to extrapolate results. The predicted RTD curves were then parameterized to develop reduced-order models that simulate the process in a matter of seconds. Overall, a control strategy evaluation tool based on high-fidelity DEM simulations was developed using material-sparing small-scale characterization tests.
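For reference, the n-CSTR (tanks-in-series) parameterization mentioned above has a closed form: the cumulative RTD of n equal ideal stirred tanks is the regularized lower incomplete gamma function F(t) = P(n, t/tau). The sketch below fits n and tau to a cumulative RTD curve; the data file and initial guesses are illustrative assumptions.

```python
# Sketch under stated assumptions: fit the tanks-in-series model to a
# cumulative RTD curve. F(t) = P(n, t/tau), the regularized lower
# incomplete gamma function; data file and guesses are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gammainc

def cumulative_rtd(t, n, tau):
    """Cumulative RTD of n equal ideal CSTRs, each with time constant tau."""
    return gammainc(n, t / tau)

t_data, F_data = np.loadtxt("rtd_from_dem.csv", delimiter=",", unpack=True)
(n_fit, tau_fit), _ = curve_fit(cumulative_rtd, t_data, F_data,
                                p0=[5.0, 10.0], bounds=([1, 0.1], [50, 1e4]))
print(f"n = {n_fit:.2f}, tau = {tau_fit:.2f} s, "
      f"mean residence time = {n_fit * tau_fit:.1f} s")
```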
ABSTRACT
The experimental approach developed in this research demonstrates how the cloud, the Internet of Things (IoT), edge computing, and Artificial Intelligence (AI), considered key technologies in Industry 4.0, provide the expected horizon for adaptive vision in Continued Process Verification (CPV), the final stage of Process Validation (PV). Pichia pastoris producing Candida rugosa lipase 1 under the regulation of the constitutive GAP promoter was selected as the experimental bioprocess. The bioprocess operated under hypoxic conditions in carbon-limited fed-batch cultures through physiological control based on the respiratory quotient (RQ). For this novel bioprocess, a digital twin (DT) was built and successfully tested. Online sensors served as a bridge between the microorganism and the AI models, providing predictions from the edge and the cloud. The AI models emulated the metabolism of Pichia based on critical process parameters and actionable factors to achieve the expected quality attributes. This innovative AI-aided Adaptive-Proportional Control strategy (AI-APC) improved reproducibility compared with a Manual-Heuristic Control strategy (MHC) and performed better than the Boolean-Logic-Controller (BLC) tested. The accuracy of the AI-APC, indicated by a Mean Relative Error (MRE) below 4%, was better than that obtained for the MHC (10%) and the BLC (5%). In terms of precision, the same trend was observed for the Root Mean Square Deviation (RMSD), which decreased as controller complexity increased. The successful automatic real-time control of the bioprocess orchestrated by AI models proved the Industry 4.0 capabilities brought by the adaptive concept and its validity in biopharmaceutical upstream operations.
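The two reported controller metrics are simple to compute; the sketch below shows one plausible reading, with accuracy as the Mean Relative Error and precision as the Root Mean Square Deviation between the setpoint trajectory and the measured respiratory quotient. The RQ values are synthetic placeholders.

```python
# One plausible reading of the reported metrics (assumed definitions):
# MRE for accuracy and RMSD for precision, against the RQ setpoint.
import numpy as np

def mean_relative_error(setpoint, measured):
    return np.mean(np.abs(measured - setpoint) / np.abs(setpoint))

def rmsd(setpoint, measured):
    return np.sqrt(np.mean((measured - setpoint) ** 2))

rq_set = np.full(100, 1.4)  # hypothetical hypoxic RQ setpoint
rq_meas = rq_set + np.random.default_rng(1).normal(0, 0.04, 100)
print(f"MRE  = {mean_relative_error(rq_set, rq_meas):.2%}")
print(f"RMSD = {rmsd(rq_set, rq_meas):.3f}")
```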
ABSTRACT
All-solid-state batteries with nonflammable inorganic solid electrolytes are the key to addressing the safety issues of lithium-ion batteries with flammable organic liquid electrolytes. However, conventional electrode materials suffer from substantial volume changes during Li+ (de)intercalation, leading to mechanical failure of interfaces between electrode materials and solid electrolytes and then severe performance degradation. In this study, we report strain-free charge storage via the interfaces between transition metal carbides (MXenes) and solid electrolytes, where MXene shows negligible structural changes during Li+ (de)intercalation. Operando scanning transmission electron microscopy with electron energy-loss spectroscopy reveals the pillar effect of trapped Li+ in the interlayer spaces of MXene to achieve the strain-free features. An all strain-free solid-state battery, which consists of a strain-free Ti3C2Tx negative electrode and a strain-free disordered rocksalt Li8/7Ti2/7V4/7O2 positive electrode, demonstrates long-term stable operation while preserving the interfacial contact between electrode materials and solid electrolytes.
ABSTRACT
Computational models can form the basis of powerful new technologies for studying and classifying disorders like pre-eclampsia, where it is difficult to distinguish pre-eclamptic from non-pre-eclamptic patients on the basis of pressure when patients have a history of hypertension. Computational models now enable a detailed analysis of how pregnancy affects the cardiovascular system. Therefore, new non-invasive biomarkers were developed to aid the classification of pre-eclampsia through the integration of six different measured non-invasive cardiovascular signals. Datasets of 21 pregnant women (no early-onset pre-eclampsia, n = 12; early-onset pre-eclampsia, n = 9) were used to create personalized cardiovascular models through computational modeling, yielding predictions of blood pressure and flow waveforms in all major and minor vessels of the utero-ovarian system. The analysis revealed that the new predictors PPI (pressure pulsatility index) and RI (resistance index), calculated in the arcuate and radial/spiral arteries, differentiate between the two groups of women (t-test, p < .001) better than the PI (pulsatility index) and RI calculated from Doppler measurements in the uterine artery, for both supervised and unsupervised classification. In conclusion, two novel high-performing biomarkers for the classification of pre-eclampsia have been identified based on blood velocity and pressure predictions in the smaller placental vasculature, where non-invasive measurements are not feasible.
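The indices discussed above follow standard Doppler-style definitions, PI = (max - min)/mean and RI = (max - min)/max, with the PPI applying the pulsatility form to predicted pressure waveforms. The sketch below computes them on a synthetic single-cycle pressure trace; the waveform is an assumption for illustration only.

```python
# Standard definitions assumed; synthetic single-cycle waveform for
# illustration only.
import numpy as np

def pulsatility_index(w):
    return (w.max() - w.min()) / w.mean()

def resistance_index(w):
    return (w.max() - w.min()) / w.max()

t = np.linspace(0.0, 1.0, 500)  # one cardiac cycle [s]
pressure = 80 + 20 * np.clip(np.sin(2 * np.pi * t), 0, None)  # mmHg
print(f"PPI = {pulsatility_index(pressure):.3f}")
print(f"RI  = {resistance_index(pressure):.3f}")
```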
Subjects
Biomarkers, Pre-Eclampsia, Humans, Pre-Eclampsia/diagnosis, Pre-Eclampsia/physiopathology, Female, Pregnancy, Adult, Cardiovascular Models, Blood Pressure, Blood Flow Velocity
ABSTRACT
BACKGROUND: Patient-specific 3-dimensional (3D) computational modelling offers a tailored approach with promising results, but experience using digital-twin fusion on real-time fluoroscopy to guide left atrial appendage closure (LAAC) is unreported. OBJECTIVES: To assess whether LAAC guided by fusion of a 3D computational model with real-time fluoroscopy is safe and effective. METHODS: Through a multicenter registry, we retrospectively included all consecutive patients with non-valvular atrial fibrillation (AF) who underwent LAAC guided by artificial intelligence (AI)-enabled computer simulations (FEops, Gent, Belgium) fused with real-time fluoroscopy. Operators selected the appropriate device size and position in relation to the LAA using FEops HEARTguide™, and a digital twin was provided for image fusion. The primary efficacy endpoint was successful LAAC with a single device, without moderate or greater peri-device leak and/or device-related thrombus (DRT) on follow-up imaging. The primary safety endpoint was a composite of major procedural complications including tamponade, stroke, systemic embolism, major bleeding, and device embolization. RESULTS: A total of 106 patients underwent LAAC with an Amulet™ or Watchman FLX™ device using CT-model-fluoroscopy fusion imaging. Device implantation was successful in 100% of cases. The primary efficacy endpoint was met in 82 patients (89%). A single-device, single-deployment LAAC procedure was achieved in 49 cases (46%). The primary safety endpoint occurred in 2 patients (1.9%). After a median follow-up of 405 days, two patients suffered an ischemic stroke and four died. CONCLUSIONS: Fusion of a CT-based 3D computational model with real-time fluoroscopy is a safe and effective approach that may optimize transcatheter LAAC outcomes.
ABSTRACT
Background: Digital twin (DT)-guided lifestyle changes induce type 2 diabetes (T2D) remission, but their effects on hypertension (HTN) in this population are unknown. Objectives: The purpose of this study was to assess the effects of DT versus standard of care (SC) on blood pressure (BP), anti-HTN medication, HTN remission, and microalbuminuria in participants with T2D. Methods: This is a secondary analysis of a randomized controlled trial in India of 319 participants with T2D. Participants were randomized to the DT group (N = 233), which used artificial intelligence-enabled DT technology, or the SC group (N = 86). A home blood pressure monitoring system guided anti-HTN medication adjustments. BP, anti-HTN medications, HTN remission rates, and microalbuminuria were compared between groups. Results: Among the 319 participants, 44 in the DT group and 15 in the SC group were on anti-HTN medications, totaling 59 (18.4%) participants. The DT group achieved significant reductions in systolic blood pressure (-7.6 vs -3.2 mm Hg; P < 0.007) and diastolic blood pressure (-4.3 vs -2.2 mm Hg; P = 0.046) after 1 year compared with the SC group. In the DT group, 68.2% remained off anti-HTN medications, compared with none in the SC group. Among participants with HTN, the DT subgroup achieved higher rates of normotension (40.9% vs 6.7%; P = 0.0009) and HTN remission (50% vs 0%; P < 0.0001) than the SC subgroup. The DT group also had a higher rate of achieving normoalbuminuria (92.4% vs 83.1%; P = 0.018) at 1 year. Conclusions: Artificial intelligence-enabled DT technology is more effective than SC in reducing BP and anti-HTN medications and in inducing HTN remission and normoalbuminuria in participants with HTN and T2D. (A Novel Whole-Body Digital Twin Enabled Precision Treatment for Reversing Diabetes; CTRI/2020/08/027072)
ABSTRACT
Objectives: This study proposes a novel architecture for designing digital twins in healthcare units. Methods: A systematic research methodology was employed to develop architecture design patterns. In particular, a systematic literature review was conducted using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework to answer specific research questions and provide guidelines for designing the architecture. Subsequently, a case study was designed and analyzed at a chemotherapy treatment center for outpatients. Results: System architecture knowledge was distilled from this real-world case study, supplemented by existing software and systems design patterns. A novel five-layer architecture for digital twins in healthcare units was proposed with a focus on the security and privacy of patients' information. Conclusion: The proposed digital twin architecture for healthcare units offers a comprehensive solution that provides modularity, scalability, security, and interoperability. The architecture provides a robust framework for effectively and efficiently managing healthcare environments.
Subjects
Computer Security, Humans, Computer Security/standards, Confidentiality
ABSTRACT
Multifunctional, nanostructured membranes hold immense promise for overcoming permeability-selectivity trade-offs and enhancing membrane durability in challenging molecule separations. Following the fabrication of copolymer membranes, additive manufacturing technologies can introduce reactive inks onto substrates to modify pore wall chemistries. However, large-scale implementation is hindered by a lack of systematic optimization. This study addresses this challenge by elucidating the membrane functionalization mechanisms and optimal manufacturing conditions using a copper(I)-catalyzed azide-alkyne cycloaddition (CuAAC) "click" reaction. Leveraging a data science toolkit (e.g., nonlinear regression, uncertainty quantification, identifiability analyses, model selection, and design of experiments), we developed two mathematical models: (1) algebraic equations to predict equilibrium concentrations after preparing reactive inks by mixing copper sulfate, ascorbic acid (AA), and an alkynyl-terminated reactant; and (2) reaction-diffusion partial differential equations (PDEs) to describe the functionalization process. The ink preparation chemistry with side reactions was validated through pH and UV-vis measurements, while the diffusion and kinetic parameters in the PDE model were calibrated using time-series conversion of the azide moieties inferred from Fourier-transform infrared spectroscopy. This modeling framework avoids redundant experimental efforts and offers a functionalization protocol for scaling up designs. Ink optimization problems were proposed to reduce the use of expensive and environmentally harmful ink materials, i.e., Cu(II), while ensuring the desired chemical distributions. With the optimal ink formulation Cu(II)/AA/alkyne = 1:1:2 identified, we uncovered trade-offs between Cu(II) usage and functionalization time; for example, in continuous roll-to-roll manufacturing with a conserved functionalization bath setup, our optimal operational conditions to achieve ≥90% functionalization enable at least a 20% reduction in total copper investment compared to previous experimental results. The data science-enabled ink optimization framework is extendable to on-demand multifunctional membranes in numerous future applications such as metal recovery from wastewater and brine.
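As a toy illustration of the reaction-diffusion PDE component, the sketch below solves a 1-D model in which Cu(I) diffuses into the membrane from a bath while a second-order click reaction consumes immobile azide sites. The geometry, rate constant, diffusivity, and concentrations are illustrative assumptions, not the paper's calibrated values.

```python
# Toy 1-D reaction-diffusion sketch; all constants are illustrative
# assumptions, not the paper's calibrated values.
import numpy as np

L, nx, dt, t_end = 100e-6, 101, 1e-3, 60.0   # 100 um membrane, 60 s
D, k = 1e-10, 5.0                            # m^2/s and 1/(M s)
dx = L / (nx - 1)

cu = np.zeros(nx); cu[0] = 1e-3              # Cu(I) bath at left face [M]
azide = np.full(nx, 0.05)                    # immobile azide sites [M]

for _ in range(int(t_end / dt)):             # explicit FD, D*dt/dx^2 = 0.1
    lap = np.zeros(nx)
    lap[1:-1] = (cu[2:] - 2 * cu[1:-1] + cu[:-2]) / dx**2
    rate = k * cu * azide                    # second-order click reaction
    cu[1:-1] += dt * (D * lap[1:-1] - rate[1:-1])
    azide -= dt * rate
    cu[0], cu[-1] = 1e-3, cu[-2]             # fixed bath; no-flux far side

print(f"mean azide conversion after {t_end:.0f} s: "
      f"{1 - azide.mean() / 0.05:.1%}")
```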
ABSTRACT
This retrospective observational study, building on prior research that demonstrated the efficacy of the Digital Twin (DT) Precision Treatment Program over shorter follow-up periods, aimed to examine glycemic control and reduced anti-diabetic medication use after one year in a commercial DT program. Enrolled T2D patients had adequate hepatic and renal function and no recent cardiovascular events. The DT intervention, powered by artificial intelligence, utilizes precision nutrition, activity, sleep, and deep breathing exercises. Outcome measures included HbA1c change, medication reduction, anthropometrics, insulin markers, and continuous glucose monitoring (CGM) metrics. Of 1985 enrollees, 132 (6.6%) were lost to follow-up, leaving 1853 participants who completed one year. At one year, participants exhibited significant reductions in HbA1c [mean change: -1.8% (SD 1.7%), p < 0.001], with 1650 (89.0%) achieving HbA1c below 7%. At baseline, participants were on a mean of 1.9 (SD 1.4) anti-diabetic medications, which decreased to 0.5 (SD 0.7) at one year [change: -1.5 (SD 1.3), p < 0.001]. Significant reductions in weight [mean change: -4.8 kg (SD 6.0 kg), p < 0.001] and insulin resistance [HOMA2-IR: -0.1 (SD 1.2), p < 0.001], and improvements in β-cell function [HOMA2-B: +21.6 (SD 47.7), p < 0.001], were observed, along with better CGM metrics. These findings suggest that the DT intervention could play a vital role in the future of T2D care.
Subjects
Diabetes Mellitus, Type 2, Glycated Hemoglobin, Hypoglycemic Agents, Humans, Diabetes Mellitus, Type 2/therapy, Male, Female, Retrospective Studies, Middle Aged, Hypoglycemic Agents/therapeutic use, Glycated Hemoglobin/analysis, Glycated Hemoglobin/metabolism, Aged, Treatment Outcome, Blood Glucose/metabolism, Blood Glucose/analysis, Blood Glucose Self-Monitoring/methods, Adult
ABSTRACT
Proliferative vitreoretinopathy (PVR) is a pathological process characterized by the formation of fibrotic membranes that contract and lead to recurrent retinal detachment. Pars plana vitrectomy (PPV) is the primary treatment, but recurrence rates remain high, as surgery does not address the underlying molecular mechanisms driving fibrosis. Despite several proposed pharmacological interventions, no approved therapies exist, partly due to challenges in conducting preclinical and in vivo studies for ethical and safety reasons. This review explores the potential of computational models and Digital Twins, which are increasingly gaining attention in medicine. These tools could enable the development of progressively complex PVR models, from basic simulations to patient-specific Digital Twins. Nintedanib, a tyrosine kinase inhibitor targeting PDGFR, VEGFR, and FGFR, is presented as a prototype for computational models to simulate its effects on fibrotic pathways in virtual patient cohorts. Although still in its early stages, the integration of computational models and Digital Twins offers promising avenues for improving PVR management through more personalized therapeutic strategies.