ABSTRACT
To explore the droplet transport mechanism of optoelectrowetting (OEW), a transient numerical model of OEW under coupled flow and electric fields is established. The study investigates the effects of the externally applied voltage, the dielectric constant of the dielectric layer, and the interfacial tension between the two phases on the dynamic behavior of droplets during transport. The proposed model employs an improved Young's equation to calculate the instantaneous voltage and contact angle of the droplet on the dielectric layer. Results indicate that, under OEW actuation, the interface contact angle of the droplet differs significantly between bright and dark regions, and this asymmetry induces droplet movement. Two-dimensional simulations of droplet motion under the applied electric field show that continuous movement proceeds in three stages: initial wetting, continuous transport, and arrival at a steady position. The dynamic behavior of transport is closely associated with the applied voltage, the dielectric-layer material, and the two-phase interfacial tension, all of which affect the contact angle and hence the transport process; summarizing the influence of these three key parameters allows the droplet transport performance to be optimized. The findings provide theoretical support for the efficient design of OEW digital microfluidic devices and for the selection of key parameters for droplet manipulation.
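For reference, the classical Lippmann-Young relation (the baseline that "improved" forms of Young's equation modify; the exact expression used in this work may differ) links the applied voltage V to the equilibrium contact angle through the permittivity and thickness of the dielectric layer and the two-phase interfacial tension:

```latex
\cos\theta(V) \;=\; \cos\theta_0 \;+\; \frac{\varepsilon_0\,\varepsilon_r}{2\,\gamma\,d}\,V^{2}
```

Here θ0 is the contact angle at zero voltage, εr and d are the relative permittivity and thickness of the dielectric layer, γ is the two-phase interfacial tension, and ε0 is the vacuum permittivity. The relation makes the reported trends plausible: a higher voltage or dielectric constant, or a lower interfacial tension, produces a larger contact-angle change, and the contrast between illuminated (conducting) and dark regions creates the contact-angle asymmetry that drives the droplet.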
Subject(s)
Wettability, Theoretical Models, Computer Simulation, Surface Tension, Microfluidic Analytical Techniques/instrumentation, Microfluidic Analytical Techniques/methods
ABSTRACT
In the post-epidemic era, industrial production has gradually recovered, and the attendant air pollution problem has attracted much attention. In this study, a Zr-doped h-BN monolayer (Zr-BN) is proposed as a new gas-sensing material for air pollutants. Based on density functional theory (DFT), we calculated and compared the adsorption energies (Eads), geometric parameters, the shortest gas-substrate distance (dsub/gas), density of states (DOS), electron localization function (ELF), charge density difference (CDD), band structure, band gap energy change rate (ΔEg), and sensitivity (S) of the Zr-BN adsorption systems (SO2F2, SOF2, SO2, NO, and CO2). The results show that Zr-BN exhibits strong adsorption of and high sensitivity to these pollutant gases, with the sensitivity following the order SOF2 > SO2F2 > CO2 > SO2 > NO. This study therefore provides a theoretical basis for the preparation of Zr-BN gas sensors and offers new ideas and methods for the development of other gas sensors.
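For orientation, the adsorption energy and sensitivity quoted above are conventionally defined as follows (a standard convention; the sign convention and prefactors used in this work may differ):

```latex
E_{\mathrm{ads}} = E_{\mathrm{Zr\text{-}BN+gas}} - E_{\mathrm{Zr\text{-}BN}} - E_{\mathrm{gas}},
\qquad
\sigma \propto \exp\!\left(-\frac{E_g}{2k_BT}\right),
\qquad
S = \left|\frac{\sigma_{\mathrm{gas}} - \sigma_{\mathrm{pristine}}}{\sigma_{\mathrm{pristine}}}\right|
```

A more negative Eads indicates stronger adsorption, and a larger relative band-gap change ΔEg implies a larger conductivity change and therefore a higher sensitivity S.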
ABSTRACT
The design-build-test-learn workflow is pivotal in synthetic biology as it seeks to broaden access to diverse levels of expertise and enhance circuit complexity through recent advancements in automation. The design of complex circuits depends on developing precise models and parameter values for predicting the circuit performance and noise resilience. However, obtaining characterized parameters under diverse experimental conditions is a significant challenge, often requiring substantial time, funding, and expertise. This work compares five computational models of three different genetic circuit implementations of the same logic function to evaluate their relative predictive capabilities. The primary focus is on determining whether simpler models can yield conclusions similar to those of more complex ones and whether certain models offer greater analytical benefits. These models explore the influence of noise, parametrization, and model complexity on predictions of synthetic circuit performance through simulation. The findings suggest that when developing a new circuit without characterized parts or an existing design, any model can effectively predict the optimal implementation by facilitating qualitative comparison of designs' failure probabilities (e.g., higher or lower). However, when characterized parts are available and accurate quantitative differences in failure probabilities are desired, employing a more precise model with characterized parts becomes necessary, albeit requiring additional effort.
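To make the notion of comparing designs by failure probability concrete, the following minimal Monte Carlo sketch (hypothetical Hill-function parameters and threshold, not one of the five models compared in the paper) estimates how often a simple NOT gate produces the wrong logic level when its parameters fluctuate:

```python
import numpy as np

rng = np.random.default_rng(0)

def not_gate(x, beta, K, n):
    """Steady-state output of a repression (NOT) gate modeled with a Hill function."""
    return beta / (1.0 + (x / K) ** n)

def failure_probability(x_input, expect_high, n_samples=10_000, noise_cv=0.3):
    """Monte Carlo estimate of the probability that the gate output lands on the
    wrong side of a fixed threshold when parameters fluctuate lognormally."""
    # Nominal (hypothetical) parameters: max expression, repression threshold, cooperativity.
    beta0, K0, n0 = 10.0, 1.0, 2.0
    threshold = 5.0
    sigma = np.sqrt(np.log(1.0 + noise_cv ** 2))  # lognormal sigma for the chosen CV
    failures = 0
    for _ in range(n_samples):
        beta = beta0 * rng.lognormal(0.0, sigma)
        K = K0 * rng.lognormal(0.0, sigma)
        y = not_gate(x_input, beta, K, n0)
        is_high = y > threshold
        failures += is_high != expect_high
    return failures / n_samples

# A low input should give a HIGH output, a high input a LOW output.
print("P(fail | input low) :", failure_probability(x_input=0.1, expect_high=True))
print("P(fail | input high):", failure_probability(x_input=10.0, expect_high=False))
```

Repeating such an estimate for competing implementations of the same logic function supports the kind of qualitative "higher versus lower failure probability" comparison described above.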
Subject(s)
Gene Regulatory Networks, Genetic Models, Synthetic Biology, Synthetic Biology/methods, Computer Simulation
ABSTRACT
Block copolymer membranes offer a bottom-up approach to form isoporous membranes that are useful for ultrafiltration of functional macromolecules, colloids, and water purification. The fabrication of isoporous block copolymer membranes from a mixed film of an asymmetric block copolymer and two solvents involves two stages: First, the volatile solvent evaporates, creating a polymer skin, in which the block copolymer self-assembles into a top layer, composed of perpendicularly oriented cylinders, via evaporation-induced self-assembly (EISA). This top layer imparts selectivity onto the membrane. Subsequently, the film is brought into contact with a nonsolvent, and the exchange between the remaining nonvolatile solvent and nonsolvent through the self-assembled top layer results in nonsolvent-induced phase separation (NIPS). Thereby, a macroporous support for the functional top layer is fabricated, which imparts mechanical stability onto the system without significantly affecting permeability. We use a single, particle-based simulation technique to investigate the sequence of both processes, EISA and NIPS. The simulations identify a process window, which allows for the successful in silico fabrication of integral-asymmetric, isoporous diblock copolymer membranes, and provide direct insights into the spatiotemporal structure formation and arrest. The role of the different thermodynamic (e.g., solvent selectivity for the block copolymer components) and kinetic (e.g., plasticizing effect of the solvent) characteristics is discussed.
ABSTRACT
This study presents a comprehensive analysis and review of the methods currently applied for induction heating of the charge material in hot die forging processes, and develops and verifies a more effective heating method. On this basis, a device for induction heating using variable-frequency inductors was designed and constructed, which made it possible to reduce scale formation and decarburization compared with the previously used heater. First, the temperature distributions in the heater as a function of time were modeled with the CEDRAT FLUX software. The aim of this analysis was to examine the temperature gradient and the variation of temperature between the surface and the core of the material, as well as to determine the stability of the process. The next stage was the design and construction of a heater with an automatic system for loading the charge and positioning it at the exit, capable of operating in a fully automated system adapted to the work center. The final stage was verification of the developed heating method on a short production series and during 8 h of continuous operation, in both quantitative and qualitative terms (reduced oxidation and decarburization, and the temperature gradient between the core and the surface). The results confirm the effectiveness of the proposed solution for heating the charge material, especially with respect to the stability and repeatability of the process, as well as a significant reduction in oxidation and decarburization of the material surface.
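A minimal sketch of the core-versus-surface temperature analysis described above (a 1D explicit finite-difference slab with a prescribed surface heat flux standing in for the induction source; material properties and flux are illustrative and are not taken from the CEDRAT FLUX study):

```python
import numpy as np

# Illustrative steel-like properties (not from the paper).
rho, cp, k = 7800.0, 650.0, 30.0      # density [kg/m^3], heat capacity [J/kg K], conductivity [W/m K]
alpha = k / (rho * cp)                # thermal diffusivity [m^2/s]
L, nx = 0.03, 61                      # half-thickness of the billet [m], grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha              # stable explicit time step
q_surface = 5.0e5                     # induced surface heat flux [W/m^2], illustrative

T = np.full(nx, 20.0)                 # initial temperature [deg C]
t, t_end = 0.0, 120.0                 # heat for two minutes
while t < t_end:
    Tn = T.copy()
    # Interior nodes: explicit FTCS update of the heat equation.
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # Core (x = 0): symmetry (zero-gradient) boundary.
    T[0] = T[1]
    # Surface (x = L): half-cell energy balance with the prescribed inductor flux.
    T[-1] = Tn[-1] + 2 * dt / (rho * cp * dx) * (q_surface - k * (Tn[-1] - Tn[-2]) / dx)
    t += dt

print(f"surface {T[-1]:.0f} C, core {T[0]:.0f} C, gradient {T[-1] - T[0]:.0f} C")
```

Plotting the surface and core histories from such a model is one way to assess the surface-to-core gradient and heating stability that the study analyzes with the full electromagnetic-thermal simulation.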
ABSTRACT
This paper studies the combined effect of alloying elements (magnesium, manganese, copper, and zirconium) on the composition, the size and number of fine and coarse particles, the recrystallization characteristics, and the mechanical properties of a magnesium-rich aluminum alloy. The data obtained made it possible to analyze changes in the chemical composition and in the sizes of intermetallic compounds and dispersoids as functions of the alloying-element content. The effect of the chemical composition on the driving force and on the number of recrystallization nuclei was studied. It was established that the addition of alloying elements leads to grain refinement, in part through activation of the particle-stimulated nucleation mechanism. As a result, with an increase in Mg from 4 to 5% and additions of 0.5% Mn and 0.5% Cu, the grain size decreased from 72 to 15 µm. Grain refinement occurred through an increase in the number of particle-stimulated nuclei, which rose from 3.47 × 10^11 at minimal alloying to 81.2 × 10^11 at the maximum concentration of the Mg, Mn, and Cu additives. The retarding force of recrystallization, which in the original alloy was 1.57 × 10^-3 N/m², increased to 5.49 × 10^-3 N/m² at maximum alloying. The influence of copper was especially noticeable: the introduction of 0.5% Cu increased the retarding force of recrystallization by 2.39 × 10^-3 N/m². This is because copper has the most significant effect on the size and number of intermetallic particles. It was also established that strength increases without loss of ductility as the magnesium, manganese, and copper content increases.
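The retarding force quoted above is conventionally estimated from Zener pinning by second-phase particles; assuming the standard form (the paper's exact expression may differ), the pinning pressure exerted by a volume fraction f of particles of radius r on a boundary with specific energy γ_gb is

```latex
P_Z = \frac{3\,f\,\gamma_{\mathrm{gb}}}{2\,r}
```

so more numerous and finer Cu- and Mn-bearing intermetallic particles and dispersoids raise P_Z and slow boundary migration, consistent with the trend reported here.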
ABSTRACT
This article continues a series of works devoted to the creation of large agent-based models built as an artificial society, and to the development of software for their implementation: the MÖBIUS design system for scalable agent-based models. The basic core of the system is a demographic model that simulates the natural movement of the population. A new stage in this work, discussed in this article, was the construction on this core of an agent-based model of Russia that includes families as agents of a new type, hierarchically connected with human agents. In addition, objects of a new type, "projects", were introduced into the model; these create, within the artificial environment, analogues of complex control actions aimed at stimulating fertility. By simulating the reaction of individual families to the introduced regional support measures, the model makes it possible to track their impact on key demographic indicators. The agent-based model of Russia was tested on data for a long retrospective period, using the launch of the maternal capital programs as an example, and showed good agreement with official statistics.
ABSTRACT
Enhancement of the electromagnetic properties of metallic nanostructures constitutes an extensive research field known as plasmonics. The term derives from plasmons, the quanta of the longitudinal waves that propagate in matter through the collective motion of electrons. Plasmonics is finding increasingly wide application in sensing, microscopy, optical communications, biophotonics, and light-trapping enhancement for solar energy conversion. Although the field has a relatively short history of development, it has led to substantial advances in enhancing absorption of the solar spectrum and charge-carrier separation efficiency. Recently, considerable progress has been made in understanding the basic parameters and mechanisms governing the application of plasmonics, including the effects of nanoparticle size, arrangement, and geometry, and how these factors affect the dielectric field in the medium surrounding the plasmons. This review emphasizes recent developments, fundamentals, and fabrication techniques for plasmonic nanostructures, examines their thermal effects, and details light-trapping enhancement mechanisms. The mismatch effect of the front and back light gratings for optimum light trapping is also discussed. Different arrangements of plasmonic nanostructures in photovoltaics for efficiency enhancement, the limitations of plasmonics, and the modeling of their performance are also explored in depth.
ABSTRACT
In real-time computer graphics, "interactivity" requires a display rate of only about 30 frames per second. However, in multimodal virtual environments involving haptic interactions, a much higher update rate of about 1 kHz is necessary to ensure continuous interactions and smooth transitions. The simplest and most efficient interaction paradigm in such environments is to represent the haptic cursor as a point. However, in many situations, such as the development of real-time medical simulations involving the interaction of long, slender surgical tools with soft deformable organs, such a paradigm is unrealistic, and at least a line-based interaction is desirable. While such paradigms exist, the main impediment to their widespread use is the associated computational complexity. In this paper, we introduce, for the first time, an efficient algorithm with near-constant complexity for computing the interaction of a line-shaped haptic cursor with polygonal surface models. The algorithm relies on space-time coherence, topological information, and the properties of lines in 3D space to maintain proximity information between a line segment and triangle meshes. For interaction with convex objects, the line is represented by its end points and a dynamic point, which is the closest point on the line to any potentially colliding triangle. To deal with multiple contacts and non-convexities, the line is decomposed into segments and a dynamic point is used for each segment. The algorithm may be used to compute collision detection and response with rigid as well as deformable objects with no performance penalty. Realistic examples are presented to demonstrate the effectiveness of our approach.
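A minimal sketch of the "dynamic point" idea for a single segment-triangle pair is given below. It uses alternating projections between the segment and the triangle (both are convex and compact, so the iteration converges to a pair of closest points); this is an illustrative stand-in, not the authors' coherence-based algorithm, which additionally exploits topological adjacency and temporal coherence across frames.

```python
import numpy as np

def closest_point_on_segment(p, a, b):
    """Closest point to p on segment ab (clamped projection)."""
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return a + np.clip(t, 0.0, 1.0) * ab

def closest_point_on_triangle(p, v0, v1, v2):
    """Closest point to p on triangle (v0, v1, v2)."""
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n)
    q = p - np.dot(p - v0, n) * n          # projection of p onto the triangle plane
    # Barycentric test: if q lies inside the triangle, it is the closest point.
    d00 = np.dot(v1 - v0, v1 - v0); d01 = np.dot(v1 - v0, v2 - v0)
    d11 = np.dot(v2 - v0, v2 - v0)
    d20 = np.dot(q - v0, v1 - v0);  d21 = np.dot(q - v0, v2 - v0)
    den = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / den
    w = (d00 * d21 - d01 * d20) / den
    if v >= 0 and w >= 0 and (1.0 - v - w) >= 0:
        return q
    # Otherwise the closest point lies on one of the three edges.
    candidates = [closest_point_on_segment(p, a, b)
                  for a, b in ((v0, v1), (v1, v2), (v2, v0))]
    return min(candidates, key=lambda c: np.linalg.norm(p - c))

def dynamic_point(seg_a, seg_b, tri, iters=20):
    """Track the point on the segment closest to the triangle by alternating projections."""
    p = 0.5 * (seg_a + seg_b)              # initial guess: segment midpoint
    for _ in range(iters):
        q = closest_point_on_triangle(p, *tri)
        p = closest_point_on_segment(q, seg_a, seg_b)
    return p, q, np.linalg.norm(p - q)     # dynamic point, witness on triangle, distance

tri = (np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.]))
p, q, d = dynamic_point(np.array([-1., -1., 1.]), np.array([2., 2., 1.]), tri)
print(p, q, d)
```

In coherence-based schemes of this kind, the previous frame's closest features typically serve as the starting guess for the next frame, which is what keeps the per-frame cost nearly constant.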
ABSTRACT
Variances in polymers processed by single-screw extrusion are investigated. While vortical flows are well known in the fluids community and fountain flows are well known to be caused by the frozen layers in injection molding, our empirical evidence and process modeling suggest the presence of vortical fountain flows in the melt channels of plasticating screws adjacent to a slower-moving solids bed. The empirical evidence includes screw-freezing experiments with cross-sections of processed high-impact polystyrene (HIPS) blended with varying colorants. Non-isothermal, non-Newtonian process simulations indicate that the underlying cause is increased flow conductance in the melt pool resulting from the higher temperatures and shear rates in the recirculating melt pool. The results indicate the development of persistent, coiled sheet morphologies in both general-purpose and barrier screw designs. The behavior differs significantly from prior melting and plastication models, with the net effect of broader residence time distributions. The process models suggest potential strategies for remediating the processing variances, as well as opportunities to achieve improved dispersion and complex micro- and nanostructures in polymer processing.
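The "increased flow conductance" mechanism can be made concrete with a generic non-Newtonian, non-isothermal viscosity law (a power-law/Arrhenius form chosen purely for illustration; the cited simulations may use a different constitutive model):

```latex
\eta(\dot\gamma, T) = m_0 \exp\!\left[\frac{E_a}{R}\left(\frac{1}{T}-\frac{1}{T_0}\right)\right]\dot\gamma^{\,n-1},\qquad n<1
```

Since the local conductance of a melt channel scales roughly inversely with η, the hotter and more strongly sheared recirculating melt pool has a lower viscosity and therefore carries a disproportionate share of the flow, which is the proposed origin of the vortical fountain flow adjacent to the slower-moving solids bed.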
ABSTRACT
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
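The ensemble step itself is straightforward to express. The sketch below (method names, score matrices, and weights are placeholders, not the specific algorithms benchmarked here) standardizes each method's pairwise connection-score matrix and combines them linearly before thresholding:

```python
import numpy as np

def zscore(m):
    """Standardize a pairwise score matrix so different methods share a common scale."""
    return (m - m.mean()) / m.std()

def ensemble_scores(score_matrices, weights):
    """Linear combination of standardized score matrices from several inference methods."""
    return sum(w * zscore(m) for w, m in zip(weights, score_matrices))

rng = np.random.default_rng(1)
n = 50
# Placeholder score matrices, e.g. from a cross-correlogram method and a
# signed mutual-information method (random here, purely for illustration).
cc_scores = rng.random((n, n))
mi_scores = rng.random((n, n))

combined = ensemble_scores([cc_scores, mi_scores], weights=[0.6, 0.4])
predicted_edges = combined > np.quantile(combined, 0.95)   # keep top 5% as putative synapses
print(predicted_edges.sum(), "putative excitatory connections")
```

In practice the weights would be fitted against ground-truth connectivity from simulated networks, which is the sense in which the learned weightings are reported to generalize across datasets.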
ABSTRACT
Simulation models in many scientific fields can have non-unique solutions, or unique solutions that are difficult to find. Moreover, in evolving systems, unique final-state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain output comparable to experimental data. Parameter fitting without sufficient constraints and without a systematic exploration of the possible solution space can lead to conclusions that are valid only around local minima, or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases: the tool allows researchers to steer the parameters of connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables beyond the ones presented as use cases. With this tool, we enable interactive exploration of parameter spaces, support a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for assessing these models, owing to interactive visualization while the simulation is being computed.
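The "steering toward a targeted mean activity" use case can be illustrated with a toy closed loop (a deliberately simplified surrogate model, not the tool itself or a full simulator run): after each simulated chunk, the connection probability is nudged in proportion to the gap between the measured and target activity.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_chunk(p_connect, n=500, r_ext=2.0, w=0.8, duration=10.0):
    """Toy stand-in for 'run a simulation chunk and report the mean activity':
    each unit receives external drive plus input from a random fraction
    p_connect of the population; returns the mean firing rate in Hz."""
    indegree = rng.binomial(n, p_connect, size=n)      # random in-degrees
    rates = r_ext * (1.0 + w * indegree / 10.0)        # crude recurrent amplification
    spikes = rng.poisson(rates * duration)             # spike counts over the chunk
    return spikes.sum() / (n * duration)

target_rate, p_connect, gain = 10.0, 0.005, 0.05
for step in range(12):
    measured = simulate_chunk(p_connect)
    # Proportional steering of the connectivity parameter toward the target activity.
    p_connect = float(np.clip(p_connect + gain * (target_rate - measured) / target_rate, 0.0, 1.0))
    print(f"step {step:2d}: p_connect = {p_connect:.4f}, mean rate = {measured:.2f} Hz")
```

An interactive tool replaces the fixed proportional rule with a human in the loop, who watches the evolving activity and adjusts the generation parameters directly.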
ABSTRACT
The winter 2014-15 measles outbreak in the US represents a significant crisis: the re-emergence of a functionally extirpated pathogen. Conclusively linking this outbreak to decreases in the measles/mumps/rubella (MMR) vaccination rate (driven by anti-vaccine sentiment) is critical to motivating MMR vaccination. We used the NOVA modeling platform to build a stochastic, spatially structured, individual-based SEIR model of outbreaks under the assumption that R0 ≈ 7 for measles. We show that this implies herd immunity requires vaccination coverage greater than approximately 85%. We used a network-structured version of our NOVA model with two communities, one at a relatively low coverage of 85% and one at a higher coverage of 95%, both of which had embedded 400-student schools, with students occasionally visiting superspreading sites (e.g., high-density theme parks, cinemas, etc.). These two vaccination coverage levels are within the range of values occurring across California counties. Transmission rates at schools and superspreading sites were arbitrarily set to 5 and 15 times the background community rates, respectively. Simulations of our model demonstrate that a 'send unvaccinated students home' policy in low-coverage counties is extremely effective at shutting down outbreaks of measles.
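The ~85% figure follows directly from the standard herd-immunity threshold for a perfectly immunizing vaccine:

```latex
p_c = 1 - \frac{1}{R_0} = 1 - \frac{1}{7} \approx 0.857
```

With R0 ≈ 7 (a deliberately conservative value; measles R0 is often quoted in the 12-18 range, which would push the threshold above 90%), coverage must exceed roughly 86% to block sustained transmission, so the 85% community in the model sits just below the threshold while the 95% community sits comfortably above it.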
ABSTRACT
MOTIVATION: Despite several reported successes in accelerating molecular modeling and simulation tools with programmable GPUs (graphics processing units), the general focus has been on fast computation with small molecules, primarily because of the limited memory size of the GPU. Moreover, simultaneous use of CPU and GPU cores for a single kernel execution - a necessity for achieving high parallelism - has also not been fully considered. RESULTS: We present fast computation methods for molecular mechanical (Lennard-Jones and Coulombic) and generalized Born solvation energetics that run on commodity multicore CPUs and manycore GPUs. The key idea is to trade off the accuracy of pairwise, long-range atomistic energetics for higher execution speed. A simple yet efficient CUDA kernel for GPU acceleration is presented that ensures high arithmetic intensity and memory efficiency. Our CUDA kernel uses a cache-friendly, recursive, linear-space octree data structure to handle very large molecular structures with up to several million atoms. Based on this CUDA kernel, we present a hybrid method that simultaneously exploits both CPU and GPU cores to provide the best performance for selected parameters of the approximation scheme. Our CUDA kernels achieve more than two orders of magnitude speedup over serial computation for many of the molecular energetics terms. The hybrid method is shown to achieve the best performance for all values of the approximation parameter. AVAILABILITY: The source code and binaries are freely available as PMEOPA (Parallel Molecular Energetic using Octree Pairwise Approximation) and can be downloaded from http://cvcweb.ices.utexas.edu/software.
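For orientation, the pairwise terms being approximated are the standard Lennard-Jones and Coulomb sums. A direct O(N²) reference implementation — the brute-force computation that an octree pairwise approximation is designed to replace for large N — is sketched below with illustrative parameters and units (not PMEOPA's actual code or defaults):

```python
import numpy as np

def lj_coulomb_energy(pos, q, epsilon=0.2, sigma=3.5, coulomb_k=332.06):
    """Direct O(N^2) Lennard-Jones + Coulomb energy (kcal/mol, Angstrom, elementary charges).
    This is the brute-force reference that octree-based pairwise approximation accelerates."""
    e_lj, e_coul = 0.0, 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            sr6 = (sigma / r) ** 6
            e_lj += 4.0 * epsilon * (sr6 ** 2 - sr6)      # Lennard-Jones 12-6 term
            e_coul += coulomb_k * q[i] * q[j] / r         # Coulomb term
    return e_lj + e_coul

rng = np.random.default_rng(3)
# 5 x 5 x 4 = 100 pseudo-atoms on a 4 Angstrom lattice (avoids unphysical overlaps).
pos = np.mgrid[0:5, 0:5, 0:4].reshape(3, -1).T * 4.0
q = rng.choice([-0.5, 0.5], size=len(pos))                # toy partial charges
print(lj_coulomb_energy(pos, q))
```

An octree-based scheme replaces the inner loop over distant atoms with interactions against cell-averaged charge and density summaries, which is where the accuracy-for-speed trade-off described above comes from.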
ABSTRACT
Our ability to collect large datasets is growing rapidly. Such rich data offer great promise for addressing detailed scientific questions in depth. However, this benefit comes with a scientific difficulty: many traditional analysis methods become computationally intractable for very large datasets. Nevertheless, one can frequently still simulate data from scientific models for which direct calculation is no longer possible. In this paper we propose a Bayesian perspective for such analyses and argue for the advantages of a simulation-based approximate Bayesian method that remains tractable when the tractability of other methods is lost. This method, known as approximate Bayesian computation (ABC), has now been used in a variety of contexts, such as the analysis of tumor data (a tumor being a complex population of cells) and the analysis of human genetic variation data (which arise from a population of individual people). We review a number of ABC methods, with specific attention to the use of ABC in agent-based models, and give pointers to software that allows straightforward implementation of the ABC approach. In this way we demonstrate the utility of simulation-based analyses of large datasets within a rigorous statistical framework.
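For readers new to ABC, a minimal rejection-ABC sketch is shown below (a toy stochastic model of a dividing cell population; the prior, tolerance, and summary statistic are illustrative and unrelated to the cited tumor or genetic-variation applications):

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_population(p_divide, generations=10, start=10):
    """Toy stochastic model: at each generation every cell divides with probability p_divide."""
    n = start
    for _ in range(generations):
        n += rng.binomial(n, p_divide)
    return n

# "Observed" data generated with a true parameter we pretend not to know.
observed = simulate_population(0.3)

# Rejection ABC: sample from the prior, simulate, and keep parameters whose
# simulated summary statistic is close enough to the observed one.
tolerance = 0.1 * observed
accepted = []
for _ in range(20_000):
    theta = rng.uniform(0.0, 1.0)                 # uniform prior on the division probability
    if abs(simulate_population(theta) - observed) < tolerance:
        accepted.append(theta)

accepted = np.array(accepted)
print(f"posterior mean {accepted.mean():.3f}, 95% interval "
      f"[{np.quantile(accepted, 0.025):.3f}, {np.quantile(accepted, 0.975):.3f}]")
```

Common ABC variants (e.g., ABC-MCMC and sequential Monte Carlo ABC) replace the naive prior sampling with adaptive proposals, but the accept-if-close logic stays the same.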
ABSTRACT
The development of computational approaches for modeling the spatiotemporal dynamics of intracellular, small molecule drug concentrations has become an increasingly important area of pharmaceutical research. For systems pharmacology, the system dynamics of subcellular transport can be coupled to downstream pharmacological effects on biochemical pathways that impact cell structure and function. Here, we demonstrate how a widely used systems biology modeling package - Virtual Cell - can also be used to model the intracellular, passive transport pathways of small druglike molecules. Using differential equations to represent passive drug transport across cellular membranes, spatiotemporal changes in the intracellular distribution and concentrations of exogenous chemical agents in specific subcellular organelles were simulated for weakly acidic, neutral, and basic molecules, as a function of the molecules' lipophilicity and ionization potentials. In addition, we simulated the transport properties of small molecule chemical agents in the presence of a homogeneous extracellular concentration or a transcellular concentration gradient. We also simulated the effects of cell type-dependent variations in the intracellular microenvironments on the distribution and accumulation of small molecule chemical agents in different organelles over time, under influx and efflux conditions. Lastly, we simulated the transcellular transport of small molecule chemical agents in the presence of different apical and basolateral microenvironments. By incorporating existing models of drug permeation and subcellular distribution, our results indicate that Virtual Cell can provide a user-friendly, open, online computational modeling platform for systems pharmacology and biopharmaceutics research, making mathematical models and simulation results accessible to a broad community of users, without requiring advanced computer programming knowledge.
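The core of such a passive-transport model combines Fick's law across a membrane with the Henderson-Hasselbalch fraction of neutral (membrane-permeant) species. The sketch below (a single well-mixed cytosolic compartment for a monoprotic weak base, with entirely illustrative parameters; not an exported Virtual Cell model) integrates the intracellular concentration over time and reproduces the classic ion-trapping accumulation in the more acidic compartment:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for a weak base (all values hypothetical).
pKa = 8.0            # ionization constant of the conjugate acid
P = 1e-4             # membrane permeability of the neutral species [cm/s]
A_over_V = 3000.0    # surface-to-volume ratio of the cell [1/cm]
pH_out, pH_in = 7.4, 7.0
C_out = 1.0          # constant extracellular concentration [uM]

def neutral_fraction(pH):
    """Fraction of a monoprotic weak base that is uncharged (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def dCdt(t, C_in):
    # Only the neutral species crosses the membrane (pH-partition hypothesis).
    flux = P * A_over_V * (neutral_fraction(pH_out) * C_out
                           - neutral_fraction(pH_in) * C_in[0])
    return [flux]

sol = solve_ivp(dCdt, (0.0, 600.0), [0.0], max_step=1.0)
print(f"intracellular concentration after 10 min: {sol.y[0, -1]:.2f} uM "
      f"(steady state {neutral_fraction(pH_out) / neutral_fraction(pH_in) * C_out:.2f} uM)")
```

Extending such a sketch with additional compartments (lysosomes, mitochondria, apical and basolateral media) and organelle-specific pH values yields the kinds of influx, efflux, and transcellular scenarios simulated above.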