ABSTRACT
We have repurposed Google tensor processing units (TPUs), application-specific chips developed for machine learning, into large-scale dense linear algebra supercomputers. The TPUs' fast intercore interconnects (ICIs), physically two-dimensional network topology, and high-bandwidth memory (HBM) permit distributed matrix multiplication algorithms to rapidly become compute-bound. In this regime, the matrix-multiply units (MXUs) dominate the runtime, yielding impressive scaling, performance, and raw size: operating in float32 precision, a full 2,048-core pod of third-generation TPUs can multiply two matrices with linear size N = 2^20 = 1,048,576 in about 2 min. Via curated algorithms emphasizing large, single-core matrix multiplications, other tasks in dense linear algebra can scale similarly. As examples, we present 1) QR decomposition; 2) resolution of linear systems; and 3) the computation of matrix functions by polynomial iteration, demonstrated by the matrix polar factorization.
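A minimal single-device sketch of a matmul-dominated polynomial iteration for the polar factor, using the standard inverse-free Newton-Schulz recurrence; this is an illustration of the general technique, not the paper's distributed TPU implementation, and the test matrix and iteration count are assumptions:

```python
import numpy as np

def polar_newton_schulz(a, iters=40):
    """Approximate the orthogonal polar factor U of A (A = U P) with the
    inverse-free Newton-Schulz iteration, which uses only matrix
    multiplications. Convergence requires the singular values of the scaled
    starting matrix to lie in (0, sqrt(3)); dividing by the Frobenius norm
    guarantees this for any full-rank A."""
    x = a / np.linalg.norm(a)                  # Frobenius-norm scaling
    eye = np.eye(a.shape[1])
    for _ in range(iters):
        x = 0.5 * x @ (3.0 * eye - x.T @ x)    # two matrix multiplications per step
    return x

rng = np.random.default_rng(0)
q1, _ = np.linalg.qr(rng.standard_normal((256, 256)))
q2, _ = np.linalg.qr(rng.standard_normal((256, 256)))
a = q1 @ np.diag(rng.uniform(0.5, 2.0, 256)) @ q2   # well-conditioned test matrix

u = polar_newton_schulz(a)
uu, _, vt = np.linalg.svd(a)
print("orthogonality error:", np.max(np.abs(u.T @ u - np.eye(256))))
print("matches SVD polar factor:", np.allclose(u, uu @ vt, atol=1e-6))
```

Because each step is a pair of matrix multiplications, this kind of recurrence maps naturally onto hardware whose throughput is dominated by the matrix-multiply units.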
ABSTRACT
In 2003, Chicago Public Schools introduced double-dose algebra, requiring two periods of math (one period of algebra and one of algebra support) for incoming ninth graders with eighth-grade math scores below the national median. Using a regression discontinuity design, earlier studies showed promising results from the program: for median-skill students, double-dose algebra improved algebra test scores, pass rates, high school graduation rates, and college enrollment. This study follows the same students 12 years later. Our findings show that, for median-skill students in the 2003 cohort, double-dose algebra significantly increased semesters of college attended and college degree attainment. These results were not replicated for the 2004 cohort. Importantly, the impact of the policy on median-skill students depended largely on how classes were organized. In 2003, the impacts on college persistence and degree attainment were large in schools that adhered strongly to the cut-score-based course assignment but did not group median-skill students with lower-skill peers. Few schools implemented the policy in that way in 2004.
Subjects
Educational Status, Mathematics, Universities, Cohort Studies, Mathematics/economics, Mathematics/education, Policies, Schools, Universities/economics
ABSTRACT
Doctrinal texts on architectural heritage conservation emphasize the importance of fully understanding structural and material characteristics and of utilizing information systems. Photogrammetry allows detailed, geo-referenced digital elevation models of architectural elements to be generated at low cost, while GIS software enables layers of material-characteristic data to be added to these models, creating property maps that can be combined through map algebra. This paper presents the results of the mechanical characterization of the materials and salt-related decay forms of the polygonal apse of the 13th-century monastery of Santa María de Bonaval (Guadalajara, Spain), which is primarily affected by salt crystallization. Rock strength is estimated using on-site nondestructive testing (ultrasound pulse velocity and Leeb hardness). These measurements are mapped and combined through map algebra to derive a single mechanical soundness index (MSI) and to determine whether the decay of the walls depends on orientation. The results show that salt decay in the building is anisotropic, with the south-facing side of the apse displaying an overall lower MSI than the others. The relative overheating of the south-facing side of the apse enhances the effect of salt crystallization, thereby promoting phase transitions between epsomite and hexahydrite.
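As an illustration of the map-algebra step described above, the toy sketch below combines two co-registered raster layers into a single index. The normalization, the equal weighting, and the "south-facing" split are assumptions for illustration only, not the paper's MSI definition:

```python
import numpy as np

# Two co-registered raster layers sampled on the same wall grid (toy data):
# ultrasound pulse velocity (m/s) and Leeb hardness (dimensionless).
rng = np.random.default_rng(1)
upv = rng.normal(2500.0, 400.0, size=(60, 40))     # hypothetical UPV map
leeb = rng.normal(450.0, 80.0, size=(60, 40))      # hypothetical Leeb hardness map

def normalize(layer):
    """Rescale a raster layer to [0, 1] (min-max normalization)."""
    return (layer - layer.min()) / (layer.max() - layer.min())

# Map algebra: combine the normalized layers cell by cell into a single
# soundness index. Equal weights are an assumption made here for illustration.
msi = 0.5 * normalize(upv) + 0.5 * normalize(leeb)

# Compare orientations by averaging the index over sub-regions of the wall,
# e.g. a "south-facing" half versus the rest (purely illustrative split).
south = msi[:, :20].mean()
other = msi[:, 20:].mean()
print(f"mean MSI south-facing: {south:.3f}, other faces: {other:.3f}")
```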
ABSTRACT
Process algebra is one of the most practical formal methods for modeling smart IoT systems in a digital twin, since each IoT device in such a system can be treated as a process. Further, some algebras are used to predict the behavior of the systems. For example, the PALOMA (Process Algebra for Located Markovian Agents) and PACSR (Probabilistic Algebra of Communicating Shared Resources) process algebras are designed to predict the behavior of IoT systems by attaching probabilities to choice operations. However, these algebras lack analytical methods for predicting the nondeterministic behavior of the systems, and they provide no control mechanism for handling undesirable nondeterministic behavior. To overcome these limitations, this paper proposes a new process algebra, called dTP-Calculus, which can be used (1) to specify the nondeterministic behavior of the systems with static probability, (2) to verify the safety and security requirements of the nondeterministic behavior against probability requirements, and (3) to control undesirable nondeterministic behavior with dynamic probability. To demonstrate the feasibility and practicality of the approach, the SAVE (Specification, Analysis, Verification, Evaluation) tool was developed on the ADOxx Meta-Modeling Platform and applied to a SEMS (Smart Emergency Medical Service) example. In addition, a miniature digital twin system for the SEMS example was constructed and used with the SAVE tool as a proof of concept for the digital twin. The results show that the approach with dTP-Calculus on the tool can be very efficient and effective for smart IoT systems in a digital twin.
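The sketch below is a toy Monte Carlo illustration of verifying a probabilistic requirement over a probabilistic choice; it is not dTP-Calculus itself, and the dispatch process, probabilities, and deadline are invented for the example:

```python
import random

# Toy stand-in for a process with a probabilistic choice: a dispatch process
# either takes a fast route (probability p) or a slow route (1 - p).
# The requirement, loosely in the spirit of a probabilistic safety property,
# is "the patient is delivered within 10 minutes with probability >= 0.9".
# All names and numbers are illustrative assumptions.

def dispatch(p_fast):
    route_time = random.gauss(7.0, 1.0) if random.random() < p_fast \
                 else random.gauss(14.0, 2.0)
    return route_time <= 10.0           # requirement met on this run?

def estimate_satisfaction(p_fast, runs=100_000):
    return sum(dispatch(p_fast) for _ in range(runs)) / runs

for p in (0.7, 0.9, 0.95):              # static choice probabilities
    print(f"P(fast route) = {p:.2f} -> "
          f"P(delivered <= 10 min) ~ {estimate_satisfaction(p):.3f}")
```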
ABSTRACT
Process algebra is one of the most suitable formal methods for modeling smart IoT systems for smart cities: each IoT device in such a system can be modeled as a process in the algebra. In addition, the nondeterministic behavior of the systems can be predicted by attaching probabilities to the choice operations in some algebras, such as PALOMA and PACSR. However, these algebras provide no practical mechanism either to measure or to control the uncertainty caused by nondeterministic behavior in terms of satisfiability of the system requirements. In our previous research, to overcome this limitation, a new process algebra called dTP-Calculus was presented to probabilistically verify the safety and security requirements of smart IoT systems: the nondeterministic behavior of the systems was defined and controlled by static and dynamic probabilities. However, that approach required a strong assumption for handling unsatisfied probabilistic requirements: enforcing an arbitrarily chosen high-performance probability from the continuous probability domain. In this paper, that assumption is eliminated by defining levels of probability over a discrete domain based on the notion of Permissible Process and System Equivalences, so that satisfiability is incrementally enforced by Permissible Process Enhancement at the process level and Permissible System Enhancement at the system level. In this way, unsatisfied probabilistic requirements can be incrementally enforced with better-performing probabilities in discrete steps until a final decision on satisfiability can be made. The SAVE tool suite has been developed on the ADOxx meta-modeling platform to demonstrate the effectiveness of the approach with a smart EMS (emergency medical service) system example, one of the most practical examples for smart cities. SAVE showed that the approach is well suited to specify, analyze, verify, and, especially, predict and control the uncertainty or risks caused by the nondeterministic behavior of smart IoT systems. The approach based on dTP-Calculus and SAVE may be considered one of the most suitable formal methods and tools for modeling smart IoT systems for smart cities.
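A toy sketch of the incremental-enforcement idea (stepping through discrete probability levels until a probabilistic requirement is met); the levels, the dispatch process, and all numbers are illustrative assumptions, not the calculus or the SEMS model:

```python
import random

REQUIREMENT = 0.90                      # required satisfaction probability
LEVELS = [0.70, 0.80, 0.90, 0.95, 0.99] # discrete choice-probability levels

def run_once(p_fast):
    # Toy dispatch process, as in the previous sketch: fast vs. slow route.
    t = random.gauss(7.0, 1.0) if random.random() < p_fast else random.gauss(14.0, 2.0)
    return t <= 10.0

def satisfaction(p_fast, runs=50_000):
    return sum(run_once(p_fast) for _ in range(runs)) / runs

# Incremental enhancement: step through the discrete levels, promoting the
# process to the next (better-performing) level only while the probabilistic
# requirement remains unsatisfied.
for level in LEVELS:
    est = satisfaction(level)
    print(f"level {level:.2f}: estimated satisfaction {est:.3f}")
    if est >= REQUIREMENT:
        print(f"requirement {REQUIREMENT} satisfied at level {level:.2f}")
        break
else:
    print("requirement cannot be satisfied within the permissible levels")
```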
ABSTRACT
The increasing use of intelligent tutoring systems in education calls for analytic methods that can unravel students' learning behaviors. In this study, we explore a latent variable modeling approach for tracking learning flow during computer-interactive tutoring. The study considers three models that give discrete profiles of a latent process: (i) the latent class model, (ii) the latent transition model, and (iii) the hidden Markov model. We illustrate the application of each model using example log data from Cognitive Tutor Algebra I and suggest analytic procedures for characterizing learning flow. Through these applications, we show that the models can reveal substantive information about students' learning behaviors and have potential utility for describing learning flow. The models differed in their assumptions and data constraints but yielded consistent findings on the flow states and interaction modalities. Based on these analyses, we discuss the strengths and limitations of the models and highlight areas for future development.
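As a concrete reference point for the hidden-Markov-model variant, the sketch below runs forward filtering over a hypothetical two-state "flow" process with coded tutor-log events; the states, event coding, and probabilities are invented for illustration and are not estimated from Cognitive Tutor data:

```python
import numpy as np

# Hypothetical two-state learning-flow HMM: states are "engaged" and
# "struggling"; observations are coded tutor-log events
# 0 = correct step, 1 = error, 2 = hint request.
pi = np.array([0.6, 0.4])                       # initial state distribution
A = np.array([[0.85, 0.15],                     # state transition matrix
              [0.30, 0.70]])
B = np.array([[0.70, 0.20, 0.10],               # emission probabilities
              [0.25, 0.45, 0.30]])

obs = [0, 0, 1, 2, 1, 0, 0]                     # one student's event sequence

def forward_filter(pi, A, B, obs):
    """Forward algorithm: returns log P(obs) and the filtered state
    probabilities P(state_t | obs_1..t) at every step."""
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    filtered = []
    for t, o in enumerate(obs):
        if t > 0:
            alpha = (alpha @ A) * B[:, o]
        norm = alpha.sum()
        loglik += np.log(norm)
        alpha = alpha / norm                    # normalize for stability
        filtered.append(alpha.copy())
    return loglik, np.array(filtered)

loglik, filtered = forward_filter(pi, A, B, obs)
print("log-likelihood:", round(loglik, 3))
print("P(struggling | history) per step:", filtered[:, 1].round(2))
```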
Subjects
Learning, Problem-Based Learning, Humans, Latent Class Analysis, Problem-Based Learning/methods, Students, Intelligence
ABSTRACT
Qubit models for the quantum simulation of electronic Hamiltonians rely on specific transformations to take into account the fermionic permutation properties of electrons. These transformations (principally the Jordan-Wigner transformation (JWT) and the Bravyi-Kitaev transformation) correspond, in a quantum circuit, to the introduction of a supplementary circuit level. In order to include the fermionic properties in a more straightforward way in quantum computations, we propose to use methods drawn from Geometric Algebra (GA), whose commutation properties are well adapted to fermionic systems. First, we apply the Witt basis method in GA to reformulate the JWT in this framework and use this formulation to express various quantum gates. We then rewrite the general one- and two-electron Hamiltonian and use it to build a quantum simulation circuit for the hydrogen molecule. Finally, the quantum Ising Hamiltonian, widely used in quantum simulation, is reformulated in this framework.
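For reference, the sketch below constructs the standard JWT (not the GA/Witt-basis reformulation proposed in the paper) for a few fermionic modes and numerically checks the canonical anticommutation relations:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
SM = (X + 1j * Y) / 2                     # sigma^- lowering operator

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(j, n):
    """Jordan-Wigner image of the fermionic annihilation operator a_j on n
    qubits: Z strings on qubits 0..j-1, sigma^- on qubit j, identity elsewhere."""
    return kron_all([Z] * j + [SM] + [I2] * (n - j - 1))

n = 3
a = [jw_annihilation(j, n) for j in range(n)]

def anticomm(p, q):
    return p @ q + q @ p

# Verify the canonical anticommutation relations {a_i, a_j^dagger} = delta_ij
# and {a_i, a_j} = 0 for all mode pairs.
ok = all(
    np.allclose(anticomm(a[i], a[j].conj().T), (i == j) * np.eye(2 ** n))
    and np.allclose(anticomm(a[i], a[j]), 0)
    for i in range(n) for j in range(n)
)
print("canonical anticommutation relations satisfied:", ok)
```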
ABSTRACT
BACKGROUND: Massive amounts of data are produced by combining next-generation sequencing with complex biochemistry techniques to characterize regulatory genomics profiles, such as protein-DNA interaction and chromatin accessibility. Interpreting such high-throughput data typically requires different computational methods. However, existing tools are usually developed for a specific task, which makes it challenging to analyze the data in an integrative manner. RESULTS: We describe the Regulatory Genomics Toolbox (RGT), a computational library for the integrative analysis of regulatory genomics data. RGT provides functionalities to handle genomic signals and regions. On top of these, we developed several tools that perform distinct downstream analyses, including prediction of transcription factor binding sites from ATAC-seq data, identification of differential peaks from ChIP-seq data, detection of triple-helix-mediated RNA-DNA interactions, visualization, and association analysis between distinct regulatory factors. CONCLUSION: RGT is a framework that facilitates the customization of computational methods for specific regulatory genomics problems. It is a comprehensive and flexible Python package for analyzing high-throughput regulatory genomics data and is available at https://github.com/CostaLab/reg-gen. The documentation is available at https://reg-gen.readthedocs.io.
Subjects
Chromatin, Genomics, Chromatin Immunoprecipitation Sequencing, Documentation, Gene Library
ABSTRACT
The tryptophan (trp) operon in Escherichia coli codes for the proteins responsible for the synthesis of the amino acid tryptophan from chorismic acid, and it has been one of the most well-studied gene networks since its discovery in the 1960s. The tryptophanase (tna) operon codes for the proteins needed to transport and metabolize tryptophan. Both of these have been modeled individually with delay differential equations under the assumption of mass-action kinetics. Recent work has provided strong evidence for bistable behavior of the tna operon. The authors of Orozco-Gómez et al. (Sci Rep 9(1):5451, 2019) identified a medium range of tryptophan in which the system has two stable steady states, and they reproduced these experimentally. In this paper, we will show how a Boolean model can capture this bistability. We will also develop and analyze a Boolean model of the trp operon. Finally, we will combine these two to create a single Boolean model of the transport, synthesis, and metabolism of tryptophan. In this amalgamated model, the bistability disappears, presumably reflecting the ability of the trp operon to produce tryptophan and drive the system toward homeostasis. All of these models have longer attractors that we call "artifacts of synchrony", which disappear in the asynchronous automata. This curiously matches the behavior of a recent Boolean model of the arabinose operon in E. coli, and we discuss some open-ended questions that arise along these lines.
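The following toy sketch shows how attractors of a small synchronous Boolean network can be enumerated; the three-node network and its update rules are invented for illustration and are not the trp/tna model analyzed in the paper:

```python
from itertools import product

# A toy 3-node synchronous Boolean network (illustrative only). Update rules:
#   x' = not z        y' = x and z        z' = x or y
def update(state):
    x, y, z = state
    return (not z, x and z, x or y)

def attractor_of(state):
    """Iterate the synchronous update from `state` until a state repeats;
    return the cycle (attractor) reached, as a canonical tuple."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state)
    cycle = trajectory[seen[state]:]
    # canonical rotation so the same cycle is always reported identically
    start = min(range(len(cycle)), key=lambda i: cycle[i])
    return tuple(cycle[start:] + cycle[:start])

attractors = {attractor_of(s) for s in product([False, True], repeat=3)}
for att in attractors:
    kind = "fixed point" if len(att) == 1 else f"cycle of length {len(att)}"
    print(kind, [tuple(int(v) for v in s) for s in att])
```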
Subjects
Escherichia coli, Tryptophan, Escherichia coli/genetics, Mathematical Concepts, Biological Models, Homeostasis
ABSTRACT
The successful application of epidemic models hinges on our ability to reliably estimate model parameters from limited observations. An often-overlooked step before estimating model parameters is ensuring that the parameters are structurally identifiable from the observed states of the system. In this tutorial-based primer, intended for a diverse audience including students training in dynamic systems, we review and provide detailed guidance for conducting structural identifiability analysis of differential-equation epidemic models using a differential algebra approach with DAISY (Differential Algebra for Identifiability of SYstems) and Mathematica (Wolfram Research). This approach aims to uncover any parameter correlations that preclude estimation of the parameters from the observed variables. We demonstrate this approach through examples of compartmental epidemic models previously employed to study transmission dynamics and control, including tutorial videos. We show that a lack of structural identifiability may be remedied by incorporating additional observations from different model states, assuming that the system's initial conditions are known, using prior information to fix some parameters involved in parameter correlations, or modifying the model based on the existing parameter correlations. We also underscore how the results of structural identifiability analysis can help enrich compartmental diagrams of differential-equation models by indicating the observed state variables and the results of the analysis.
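A minimal numerical illustration of the kind of parameter correlation such an analysis uncovers (this is not DAISY's differential-algebra computation): in the toy model x' = r x observed through y = k x, only r and the product k*x0 are identifiable, so k and x0 cannot be estimated separately from y alone.

```python
import numpy as np

# Toy model: x'(t) = r * x(t), observed output y(t) = k * x(t).
# The closed-form output is y(t) = k * x0 * exp(r * t), so only r and the
# product k * x0 are structurally identifiable from y.
def output(t, r, k, x0):
    return k * x0 * np.exp(r * t)

t = np.linspace(0.0, 5.0, 50)
y1 = output(t, r=0.3, k=2.0, x0=5.0)      # one parameter set
y2 = output(t, r=0.3, k=4.0, x0=2.5)      # different (k, x0), same product

print("outputs identical:", np.allclose(y1, y2))   # True: (k, x0) are correlated
# A remedy discussed in the primer: observe an additional state or fix x0 from
# prior knowledge, which breaks the k*x0 correlation.
```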
Subjects
Algorithms, Biological Models, Humans
ABSTRACT
Quantum parallelism can be implemented on a classical ensemble of discrete-level quantum systems. The nanosystems are not quite identical, and the ensemble represents their individual variability. An underlying Lie algebraic theory is developed, using the closure of the algebra, to demonstrate parallel information processing at the level of the ensemble. The ensemble is addressed by a sequence of laser pulses. In the Heisenberg picture of quantum dynamics, the coherence between the N levels of a given quantum system can be handled as an observable. Thereby there are N² logic variables per N-level system. This is how massive parallelism is achieved: there are N² potential outputs for a quantum system of N levels. The use of an ensemble allows simultaneous reading of such outputs. Due to size dispersion, the expectation values of the observables can differ somewhat from system to system. We show that, for a moderate variability of the systems, one can average the N² expectation values over the ensemble while retaining closure and parallelism. This allows the ensemble-averaged values of the observables to be propagated directly in time. Results of simulations of electronic excitonic dynamics in an ensemble of quantum dot (QD) dimers are presented. The QD size and interdot distance in the dimer are used to parametrize the Hamiltonian. The N dimer levels include local and charge-transfer excitons within each dimer. The well-studied physics of semiconducting QDs suggests that the dimer coherences can be probed at room temperature.
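The sketch below illustrates, on a toy three-level system, the Heisenberg-picture bookkeeping described above: the N² expectation values of the basis operators |j><k| are evaluated and then averaged over an ensemble of Hamiltonians with a small parameter dispersion. The Hamiltonian, initial state, and dispersion are illustrative assumptions, not the QD-dimer model of the paper:

```python
import numpy as np
from scipy.linalg import expm

N = 3                                           # levels per quantum system
rng = np.random.default_rng(2)

def hamiltonian(eps):
    """Toy N-level Hamiltonian; `eps` parametrizes size dispersion
    (an illustrative stand-in for QD size / interdot distance)."""
    h = np.diag(np.array([0.0, 1.0, 2.2]) * (1.0 + eps)).astype(complex)
    h[0, 1] = h[1, 0] = 0.1                     # couplings
    h[1, 2] = h[2, 1] = 0.15
    return h

# The N^2 basis observables E_jk = |j><k|, whose expectation values are the
# populations (j == k) and coherences (j != k).
basis = [np.outer(np.eye(N)[j], np.eye(N)[k]) for j in range(N) for k in range(N)]

rho0 = np.zeros((N, N), dtype=complex)
rho0[0, 0] = 1.0                                # every system starts in level 0

def expectations(h, t):
    """Heisenberg picture: <E_jk>(t) = Tr(rho0 U^dag E_jk U), U = exp(-iHt)."""
    u = expm(-1j * h * t)
    return np.array([np.trace(rho0 @ u.conj().T @ e @ u) for e in basis])

t = 2.0
ensemble_eps = rng.normal(0.0, 0.03, size=200)  # moderate size dispersion
avg = np.mean([expectations(hamiltonian(e), t) for e in ensemble_eps], axis=0)
print("ensemble-averaged <E_jk>(t):")
print(avg.reshape(N, N).round(3))
```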
ABSTRACT
The study tested the hypothesis that there are sex differences in the pathways to mathematical development. Three hundred forty-two adolescents (169 boys) were assessed in various mathematics areas from arithmetic fluency to algebra across 6th to 9th grade, inclusive, and completed a battery of working memory, spatial, and intelligence measures in middle school. Their middle school and 9th grade teachers reported on their in-class attentive behavior. There were no sex differences in overall mathematics performance, but boys had advantages on all spatial measures (ds = .29 to .58) and girls were more attentive in classroom settings (ds = -.28 to -.37). A series of structural equation models indicated that 6th- to 9th-grade mathematical competence was influenced by a combination of general cognitive ability, spatial abilities, and in-class attention. General cognitive ability was important for both sexes but the spatial pathway to mathematical competence was relatively more important for boys and the in-class attention pathway for girls.
ABSTRACT
Robot measurement systems with a binocular planar structured-light camera (3D camera) installed on a robot end-effector are often used to measure workpieces' shapes and positions. However, the measurement accuracy is jointly influenced by errors in the robot kinematics, the camera-to-robot installation, and the 3D camera measurement itself. Incomplete calibration of these errors can result in inaccurate measurements. This paper proposes a joint calibration method that considers these three error types to achieve overall calibration. In this method, error models of the robot kinematics and the camera-to-robot installation are formulated using Lie algebra. A pillow error model is then proposed for the 3D camera based on its error distribution and measurement principle. These error models are combined into a joint model based on homogeneous transformations. Finally, the calibration problem is transformed into a stepwise optimization problem that minimizes the sum of relative position errors between the calibrator and the robot, and analytical solutions for the calibration parameters are derived. Simulation and experimental results demonstrate that the joint calibration method effectively improves measurement accuracy, reducing the mean positioning error from over 2.5228 mm to 0.2629 mm and the mean distance error from over 0.1488 mm to 0.1232 mm.
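A small sketch of the Lie-algebra machinery used in such error models: a twist in se(3) is mapped to its 4x4 matrix form and exponentiated into a rigid-body transform that perturbs a nominal camera-to-robot pose. All numerical values are illustrative assumptions, not calibrated results from the paper:

```python
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """Map a 6-vector twist xi = (omega, v) in the Lie algebra se(3) to its
    4x4 matrix representation."""
    wx, wy, wz, vx, vy, vz = xi
    return np.array([[0.0, -wz,  wy, vx],
                     [ wz, 0.0, -wx, vy],
                     [-wy,  wx, 0.0, vz],
                     [0.0, 0.0, 0.0, 0.0]])

def exp_se3(xi):
    """Exponential map se(3) -> SE(3): a rigid-body transform from a twist."""
    return expm(hat(xi))

# A small installation-error twist applied on top of a nominal hand-eye
# transform (all numbers are hypothetical).
nominal = np.eye(4)
nominal[:3, 3] = [0.05, 0.00, 0.10]            # nominal camera offset (m)

delta = np.array([0.01, -0.02, 0.005, 0.001, 0.002, -0.001])  # error twist
corrected = nominal @ exp_se3(delta)
print(np.round(corrected, 4))
# In a joint calibration, such twists for the robot kinematics and the camera
# installation become the unknowns of the optimization.
```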
ABSTRACT
We argue that a clear view of quantum mechanics is obtained by considering the unicity of the macroscopic world as a fundamental postulate of physics, rather than as an issue that must be mathematically justified or demonstrated. This postulate allows for a framework in which quantum mechanics can be constructed in a complete and mathematically consistent way. This is made possible by using general operator algebras to extend the mathematical description of the physical world toward macroscopic systems. Such an approach goes beyond the usual type-I operator algebras used in standard textbook quantum mechanics and avoids a major pitfall: the temptation to make the usual type-I formalism 'universal'. It may also provide a meta-framework for both classical and quantum physics, shedding new light on ancient conceptual antagonisms and clarifying the status of quantum objects. Beyond exploring remote corners of quantum physics, we expect these ideas to be helpful for better understanding and developing quantum technologies.
ABSTRACT
With regard to the nature of time, it has become commonplace to hear physicists state that time does not exist and that the perception of time passing, and of events occurring in time, is an illusion. In this paper, I argue that physics is actually agnostic on the question of the nature of time. The standard arguments against its existence all suffer from implicit biases and hidden assumptions, rendering many of them circular. An alternative viewpoint to Newtonian materialism is the process view of Whitehead. I will show that the process perspective supports the reality of becoming, of happening, and of change. At the fundamental level, time is an expression of the action of process generating the elements of reality. Metrical space-time is an emergent aspect of the relations between process-generated entities. Such a view is compatible with existing physics. The situation of time in physics is reminiscent of that of the continuum hypothesis in mathematical logic: it may be an independent assumption, not provable within physics proper (though it may someday be amenable to experimental exploration).
ABSTRACT
Considering the inference rules in generalized logics, J.C. Abbott arrives at the notion of an orthoimplication algebra (see Abbott (1970) and Abbott (Stud. Logica XXXV(2):173-177)). We show that when one enriches the Abbott orthoimplication algebra with a falsity symbol and a natural XOR-type operation, one obtains an orthomodular difference lattice as an enriched quantum logic (see Matousek (Algebra Univers. 60:185-215, 2009)). Moreover, we find that these two structures, endowed with the natural morphisms, are categorically equivalent. We also show how one can introduce the notion of a state in the Abbott XOR algebras, thus strengthening the relevance of these algebras to quantum theories.
ABSTRACT
BACKGROUND: Mathematical expressions mainly include arithmetic (such as 8 - (1 + 3)) and algebra (such as a - (b + c)). Previous studies have shown that both algebraic processing and arithmetic involve bilateral parietal brain regions. Although previous studies have revealed that algebra is dissociated from arithmetic, the neural bases of this dissociation are still unclear. The present study uses functional magnetic resonance imaging (fMRI) to identify the specific brain networks for algebraic and arithmetic processing. METHODS: Using fMRI, this study scanned 30 undergraduates and directly compared brain activation during algebra and arithmetic. Brain activations, single-trial (item-wise) interindividual correlations, and mean-trial interindividual correlations related to algebra processing were compared with those related to arithmetic. Functional connectivity was analyzed by a seed-based region of interest (ROI)-to-ROI analysis. RESULTS: Brain activation analyses showed that algebra elicited greater activation in the angular gyrus, whereas arithmetic elicited greater activation in the bilateral supplementary motor area, left insula, and left inferior parietal lobule. Interindividual single-trial brain-behavior correlations were significant in the semantic network, including the middle temporal gyri, inferior frontal gyri, dorsomedial prefrontal cortices, and left angular gyrus, for algebra. For arithmetic, the significant brain-behavior correlations were located in the phonological network, including the precentral gyrus and supplementary motor area, and in the visuospatial network, including the bilateral superior parietal lobules. For algebra, significant positive functional connectivity was observed between the visuospatial network and the semantic network, whereas for arithmetic, significant positive functional connectivity was observed only between the visuospatial network and the phonological network. CONCLUSION: These findings suggest that algebra relies on the semantic network, whereas arithmetic relies on the phonological and visuospatial networks.
Subjects
Brain Mapping, Semantic Web, Brain/diagnostic imaging, Magnetic Resonance Imaging, Temporal Lobe
ABSTRACT
BACKGROUND: Modern configurational comparative methods (CCMs) of causal inference, such as Qualitative Comparative Analysis (QCA) and Coincidence Analysis (CNA), have started to make inroads into medical and health research over the last decade. At the same time, these methods remain unable to process data on multi-morbidity, a situation in which at least two chronic conditions are simultaneously present. Such data require the capability to analyze complex effects. Against a background of fast-growing numbers of patients with multi-morbid diagnoses, we present a new member of the family of CCMs with which multiple conditions and their complex conjunctions can be analyzed: Combinational Regularity Analysis (CORA). METHODS: The technical heart of CORA consists of algorithms that were originally developed in electrical engineering for the analysis of multi-output switching circuits. We have adapted these algorithms for configurational data analysis. To demonstrate CORA, we provide several example applications, with both simulated and empirical data, by means of the eponymous software package CORA. CORA also supports mining configurational data and visualizing results via logic diagrams. RESULTS: For simple single-condition analyses, CORA's solution is identical to that of QCA or CNA. However, analyses of multiple conditions with CORA differ in important respects from analyses with QCA or CNA. Most importantly, CORA is presently the only configurational method able to simultaneously explain individual conditions as well as complex conjunctions of conditions. CONCLUSIONS: Through CORA, problems of multi-morbidity in particular, and configurational analyses of complex effects in general, come within the analytical reach of CCMs. Future research aims to further broaden and enhance CORA's capabilities for refining such analyses.
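To make the switching-circuit connection concrete, the sketch below minimizes a sum-of-products expression for each of two hypothetical outcomes over three condition factors using sympy. This is not CORA's algorithm (CORA's multi-output procedures additionally favor product terms shared across outputs, which this per-output minimization does not attempt), and the truth rows are made up for illustration:

```python
from sympy import symbols
from sympy.logic import SOPform

# Three hypothetical condition factors and two outcomes (e.g. two chronic
# conditions in a multi-morbidity setting). The rows below are invented data.
a, b, c = symbols('a b c')

# Configurations (as 0/1 rows over a, b, c) under which each outcome occurs.
outcome1_rows = [[1, 1, 0], [1, 1, 1], [1, 0, 1]]
outcome2_rows = [[1, 1, 1], [0, 1, 1]]

# Minimize a sum-of-products expression separately for each output.
print("outcome 1:", SOPform([a, b, c], outcome1_rows))
print("outcome 2:", SOPform([a, b, c], outcome2_rows))
```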
Subjects
Algorithms, Humans
ABSTRACT
Some people are more vulnerable to the coronavirus even though they have no chronic disease and are not in a high-risk age group for Covid-19. Some experts attribute this to the person's immune system, while others think that the patient's genetic history may play a role. To determine the relationship between Covid-19 and genes, it is critical to detect the virus from DNA signals as early as possible; this would reveal the effect of variations in disease-associated genes on the severe course of the illness. In this study, a novel intelligent computational approach is proposed to identify coronavirus from nucleotide signals for the first time. The proposed method uses a multilayered feature extraction structure to extract the most effective features, combining an entropy-based mapping technique, the Discrete Wavelet Transform (DWT), a statistical feature extractor, and Singular Value Decomposition (SVD). Then, 94 distinctive features are selected by the ReliefF technique. Support vector machine (SVM) and k-nearest neighbor (k-NN) classifiers are used. The method achieved its highest classification accuracy, 98.84%, with the SVM classifier when detecting Covid-19 from DNA signals. The proposed method is ready to be tested on a different database for the diagnosis of Covid-19 using RNA or other signals.
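A minimal sketch in the spirit of this pipeline: DWT sub-band statistics as features and an SVM classifier, run on synthetic signals. The entropy mapping, SVD features, and ReliefF selection from the paper are omitted, the data are randomly generated, and the printed accuracy has no relation to the reported 98.84%:

```python
import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)

def dwt_features(signal, wavelet="db4", level=4):
    """Statistical features (mean, std, energy) from each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.mean(c), np.std(c), np.sum(c ** 2)]
    return feats

# Synthetic stand-in for numerically mapped nucleotide signals: two classes
# with slightly different spectral content (illustrative data only).
def make_signal(label, n=1024):
    base = rng.standard_normal(n)
    if label == 1:
        base += 0.4 * np.sin(2 * np.pi * 0.05 * np.arange(n))
    return base

labels = np.array(list(range(2)) * 200)
X = np.array([dwt_features(make_signal(y)) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)
pred = clf.predict(scaler.transform(X_test))
print("accuracy on synthetic data:", round(accuracy_score(y_test, pred), 3))
```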
ABSTRACT
Conventional digital computers execute advanced operations as sequences of elementary Boolean functions of 2 or more bits. As a result, complicated tasks such as solving a linear system or a differential equation require a large number of computing steps and extensive use of memory units to store individual bits. To accelerate the execution of such advanced tasks, in-memory computing with resistive memories provides a promising avenue, thanks to analog data storage and physical computation within the memory. Here, we show that a cross-point array of resistive memory devices can directly solve a system of linear equations or find the eigenvectors of a matrix. These operations are completed in just one step, owing to physical computation with Ohm's and Kirchhoff's laws and to the negative feedback connection in the cross-point circuit. The algebraic problems are demonstrated in hardware and applied to classical computing tasks, such as ranking webpages and solving the Schrödinger equation in one step.
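The sketch below numerically emulates what the analog feedback circuit computes physically: with the coefficient matrix programmed as conductances and the right-hand side injected as currents, Kirchhoff's current law forces the settled voltages to solve G v = i. The values and the simplified sign/feedback conventions are assumptions for illustration:

```python
import numpy as np

# Conductance matrix programmed into the cross-point array (siemens) and the
# vector of input currents injected at the rows (amperes). Values are
# illustrative; the real device encodes A and b as conductances and currents.
G = np.array([[ 3.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  2.0]]) * 1e-3
i_in = np.array([1.0, 2.0, 0.5]) * 1e-3

# With the feedback amplifiers closed around the array, Kirchhoff's current
# law at each row forces the column voltages v to satisfy G v = i, i.e. the
# circuit settles on the solution of the linear system in one step.
v = np.linalg.solve(G, i_in)
print("column voltages (solution x):", np.round(v, 4))

# Sanity check via Ohm's and Kirchhoff's laws: the programmed conductances
# times the settled voltages reproduce the injected currents.
print("G @ v == i:", np.allclose(G @ v, i_in))
```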