Results 1 - 20 of 381
1.
EMBO J ; 42(20): e112630, 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37712330

ABSTRACT

Two major mechanisms safeguard genome stability during mitosis: the mitotic checkpoint delays mitosis until all chromosomes have attached to microtubules, and the kinetochore-microtubule error-correction pathway keeps this attachment process free from errors. We demonstrate here that the optimal strength and dynamics of these processes are set by a kinase-phosphatase pair (PLK1-PP2A) that engages in negative feedback from adjacent phospho-binding motifs on the BUB complex. Uncoupling this feedback to skew the balance towards PLK1 produces a strong checkpoint, hypostable microtubule attachments and mitotic delays. Conversely, skewing the balance towards PP2A causes a weak checkpoint, hyperstable microtubule attachments and chromosome segregation errors. These phenotypes are associated with altered BUB complex recruitment to KNL1-MELT motifs, implicating PLK1-PP2A in controlling auto-amplification of MELT phosphorylation. In support of this, KNL1-BUB disassembly becomes contingent on PLK1 inhibition when KNL1 is engineered to contain excess MELT motifs. This elevates BUB-PLK1/PP2A complex levels on metaphase kinetochores, stabilises kinetochore-microtubule attachments, induces chromosome segregation defects and prevents KNL1-BUB disassembly at anaphase. Together, these data demonstrate how a bifunctional PLK1/PP2A module has evolved together with the MELT motifs to optimise BUB complex dynamics and ensure accurate chromosome segregation.


Subjects
Kinetochores , M Phase Cell Cycle Checkpoints , Humans , Kinetochores/metabolism , Protein Serine-Threonine Kinases/metabolism , Cell Cycle Proteins/genetics , Cell Cycle Proteins/metabolism , Chromosome Segregation , Phosphorylation , Microtubules/metabolism , Mitosis , HeLa Cells
2.
Proc Natl Acad Sci U S A ; 121(25): e2323009121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38875144

ABSTRACT

Error correction is central to many biological systems and is critical for protein function and cell health. During mitosis, error correction is required for the faithful inheritance of genetic material. When functioning properly, the mitotic spindle segregates an equal number of chromosomes to daughter cells with high fidelity. Over the course of spindle assembly, many initially erroneous attachments between kinetochores and microtubules are fixed through the process of error correction. Despite the importance of chromosome segregation errors in cancer and other diseases, there is a lack of methods to characterize the dynamics of error correction and how it can go wrong. Here, we present an experimental method and analysis framework to quantify chromosome segregation error correction in human tissue culture cells with live cell confocal imaging, timed premature anaphase, and automated counting of kinetochores after cell division. We find that errors decrease exponentially over time during spindle assembly. A coarse-grained model, in which errors are corrected in a chromosome-autonomous manner at a constant rate, can quantitatively explain both the measured error correction dynamics and the distribution of anaphase onset times. We further validated our model using perturbations that destabilized microtubules and changed the initial configuration of chromosomal attachments. Taken together, this work provides a quantitative framework for understanding the dynamics of mitotic error correction.
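The coarse-grained model described above, in which each erroneous attachment is corrected chromosome-autonomously at a constant rate, predicts exponential decay of the error count. A minimal simulation makes this concrete (parameter values here are illustrative, not the paper's fitted rates):

```python
import math
import random

def simulate_error_correction(n_errors=20, rate=0.5, dt=0.01, t_max=10.0,
                              n_cells=500, seed=1):
    """Toy version of the coarse-grained model: each erroneous
    kinetochore-microtubule attachment is corrected independently at a
    constant rate, so the mean error count decays exponentially."""
    rng = random.Random(seed)
    times = [i * dt for i in range(int(t_max / dt) + 1)]
    mean_errors = []
    counts = [n_errors] * n_cells  # uncorrected attachments per simulated cell
    for t in times:
        mean_errors.append(sum(counts) / n_cells)
        p = rate * dt  # per-error correction probability in one time step
        counts = [c - sum(1 for _ in range(c) if rng.random() < p)
                  for c in counts]
    return times, mean_errors

times, mean_errors = simulate_error_correction()
# The simulated mean should track the analytic exponential n0 * exp(-rate * t).
analytic = [20 * math.exp(-0.5 * t) for t in times]
```

Averaged over many cells, the simulated error count follows the analytic curve closely, which is the signature the authors measure experimentally.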


Subjects
Chromosome Segregation , Kinetochores , Microtubules , Mitosis , Spindle Apparatus , Humans , Kinetochores/metabolism , Spindle Apparatus/metabolism , Microtubules/metabolism , Anaphase , Models, Biological , HeLa Cells
3.
Proc Natl Acad Sci U S A ; 121(1): e2313269120, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38147549

ABSTRACT

Quantum computers have been proposed to solve a number of important problems, such as discovering new drugs, designing new catalysts for fertilizer production, breaking encryption protocols, optimizing financial portfolios, or implementing new artificial intelligence applications. Yet, to date, even a simple task such as multiplying 3 by 5 is beyond existing quantum hardware. This article examines the difficulties that would need to be solved for quantum computers to live up to their promises. I discuss the whole stack of technologies envisioned to build a quantum computer, from the top layers (the actual algorithms and associated applications) down to the very bottom ones (the quantum hardware, its control electronics, cryogenics, etc.), without forgetting the crucial intermediate layer of quantum error correction.

4.
Proc Natl Acad Sci U S A ; 120(41): e2221736120, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37801473

ABSTRACT

The design of quantum hardware that reduces and mitigates errors is essential for practical quantum error correction (QEC) and useful quantum computation. To this end, we introduce the circuit-Quantum Electrodynamics (QED) dual-rail qubit in which our physical qubit is encoded in the single-photon subspace, [Formula: see text], of two superconducting microwave cavities. The dominant photon loss errors can be detected and converted into erasure errors, which are in general much easier to correct. In contrast to linear optics, a circuit-QED implementation of the dual-rail code offers unique capabilities. Using just one additional transmon ancilla per dual-rail qubit, we describe how to perform a gate-based set of universal operations that includes state preparation, logical readout, and parametrizable single and two-qubit gates. Moreover, first-order hardware errors in the cavities and the transmon can be detected and converted to erasure errors in all operations, leaving background Pauli errors that are orders of magnitude smaller. Hence, the dual-rail cavity qubit exhibits a favorable hierarchy of error rates and is expected to perform well below the relevant QEC thresholds with today's coherence times.

5.
Mol Syst Biol ; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39349762

ABSTRACT

Chemical genomics is a powerful and increasingly accessible technique to probe gene function, gene-gene interactions, and antibiotic synergies and antagonisms. Indeed, multiple large-scale pooled datasets in diverse organisms have been published. Here, we identify an artifact arising from uncorrected differences in the number of cell doublings between experiments within such datasets. We demonstrate that this artifact is widespread, show how it causes spurious gene-gene and drug-drug correlations, and present a simple but effective post hoc method for removing its effects. Using several published datasets, we demonstrate that this correction removes spurious correlations between genes and conditions, improving data interpretability and revealing new biological insights. Finally, we determine experimental factors that predispose a dataset to this artifact and suggest a set of experimental and computational guidelines for performing pooled chemical genomics experiments that will maximize the potential of this powerful technique.
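The core of the doubling artifact is that a fitness defect compounds once per generation, so experiments run for different numbers of doublings are not directly comparable. A minimal sketch of the idea (the per-doubling normalization below is an assumption for illustration, not the paper's exact post hoc method):

```python
def normalize_by_doublings(log2_fold_changes, doublings):
    """Toy post hoc correction: express each mutant's log2 fold-change per
    cell doubling, so experiments run for different numbers of generations
    become comparable."""
    return {gene: lfc / doublings for gene, lfc in log2_fold_changes.items()}

# Two hypothetical experiments measuring the same per-generation fitness
# defect, but run for 5 vs. 10 doublings: raw values disagree, corrected agree.
exp_a = {"geneX": -2.5, "geneY": 0.0}   # 5 doublings
exp_b = {"geneX": -5.0, "geneY": 0.0}   # 10 doublings
corr_a = normalize_by_doublings(exp_a, 5)
corr_b = normalize_by_doublings(exp_b, 10)
```

Without the correction, the two hypothetical experiments would appear to disagree about geneX purely because one was grown for twice as many generations.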

6.
Proc Natl Acad Sci U S A ; 119(24): e2202235119, 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35687669

ABSTRACT

Entanglement-assisted concatenated quantum codes (EACQCs), constructed by concatenating two quantum codes, are proposed. These EACQCs show significant advantages over standard concatenated quantum codes (CQCs). First, we prove that, unlike standard CQCs, EACQCs can beat the nondegenerate Hamming bound for entanglement-assisted quantum error-correction codes (EAQECCs). Second, we construct families of EACQCs with parameters better than the best-known standard quantum error-correction codes (QECCs) and EAQECCs. Moreover, these EACQCs require very few Einstein-Podolsky-Rosen (EPR) pairs to begin with. Finally, it is shown that EACQCs make entanglement-assisted quantum communication possible even if the ebits are noisy. Furthermore, EACQCs can outperform CQCs in entanglement fidelity over depolarizing channels if the ebits are less noisy than the qubits. We show that the error-probability threshold of EACQCs is larger than that of CQCs when the error rate of the ebits is sufficiently lower than that of the qubits. Specifically, we derive a threshold as high as 47% when the error probability of the preshared entanglement is 1% of that of the qubits.

7.
BMC Bioinformatics ; 25(1): 267, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160480

ABSTRACT

BACKGROUND: The use of long reads for single nucleotide polymorphism (SNP) phasing has become popular, providing substantial support for research on human diseases and for genetic studies in animals and plants. However, owing to the complexity of the linkage relationships between SNP loci and to sequencing errors in the reads, recent methods still cannot yield satisfactory results. RESULTS: In this study, we present a graph-based algorithm, GCphase, which uses the minimum-cut algorithm to perform phasing. First, based on the alignment between long reads and the reference genome, GCphase filters out ambiguous SNP sites and useless read information. Second, GCphase constructs a graph in which each vertex represents an allele of an SNP locus and each edge represents the presence of read support; it then applies a graph minimum-cut algorithm to phase the SNPs. Next, GCphase uses two error-correction steps to refine the phasing results obtained from the previous step, effectively reducing the error rate. Finally, GCphase outputs the phase blocks. GCphase was compared to three other methods, WhatsHap, HapCUT2, and LongPhase, on Nanopore and PacBio long-read datasets. The code is available from https://github.com/baimawjy/GCphase. CONCLUSIONS: Experimental results show that, across different sequencing depths and datasets, GCphase has the fewest switch errors and the highest accuracy compared with other methods.
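To see what phasing from read support means, here is a deliberately tiny brute-force sketch (not the GCphase min-cut algorithm itself, and the link representation is an assumption): each read covering two heterozygous SNPs votes on whether they sit on the same haplotype, and we look for the assignment that violates the fewest votes.

```python
from itertools import product

def phase_snps(read_links, n_snps):
    """Brute-force toy phasing: each link (i, j, same) records whether a read
    showed SNPs i and j with the same allele.  We search all 2^n haplotype
    assignments for the one violating the fewest links (feasible only for
    tiny n; real tools like GCphase use min-cut on a graph instead)."""
    best, best_cost = None, None
    for assign in product((0, 1), repeat=n_snps):
        cost = sum(1 for i, j, same in read_links
                   if (assign[i] == assign[j]) != same)
        if best_cost is None or cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

# Three SNPs; two consistent links plus one erroneous link contradicting them,
# mimicking a sequencing error in one read.
links = [(0, 1, True), (1, 2, False), (0, 2, True)]
assignment, violations = phase_snps(links, 3)
```

The single unavoidable violation corresponds to the erroneous read link, which is exactly the kind of conflict the error-correction steps in a real phaser must resolve.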


Subjects
Algorithms , Polymorphism, Single Nucleotide , Polymorphism, Single Nucleotide/genetics , Humans , Sequence Analysis, DNA/methods , Software , High-Throughput Nucleotide Sequencing/methods
8.
BMC Genomics ; 25(1): 365, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622536

ABSTRACT

BACKGROUND: Microbial genomes are largely composed of protein-coding sequences, yet some genomes contain many pseudogenes caused by frameshifts or internal stop codons. These pseudogenes are believed to result from gene degradation during evolution but could also be technical artifacts of genome sequencing or assembly. RESULTS: Using a combination of observational and experimental data, we show that many putative pseudogenes are attributable to errors incorporated into genomes during assembly. Within 126,564 publicly available genomes, we observed that nearly identical genomes often differed substantially in pseudogene counts. Causal inference implicated assembler, sequencing platform, and coverage as likely causative factors. Reassembly of genomes from raw reads confirmed that each variable affects the number of putative pseudogenes in an assembly. Furthermore, simulated sequencing reads corroborated our observation that the quality and quantity of raw data can significantly affect the number of pseudogenes in an assembler-dependent fashion. The number of unexpected pseudogenes due to internal stops was highly correlated (R2 = 0.96) with average nucleotide identity to the ground-truth genome, implying that relative pseudogene counts can be used as a proxy for overall assembly correctness. Applying our method to assemblies in RefSeq resulted in rejection of 3.6% of assemblies due to significantly elevated pseudogene counts. Reassembly from real reads obtained from high-coverage genomes showed considerable variability in spurious pseudogenes beyond that observed with simulated reads, reinforcing the finding that high coverage is necessary to mitigate assembly errors. CONCLUSIONS: Collectively, these results demonstrate that many pseudogenes in microbial genome assemblies are actually genes. Our results suggest that high read coverage is required for correct assembly and that an inflated number of pseudogenes due to internal stops is indicative of poor overall assembly quality.


Subjects
Genome, Bacterial , Pseudogenes , Pseudogenes/genetics , Chromosome Mapping , Base Sequence , Genome, Microbial , Sequence Analysis, DNA/methods , High-Throughput Nucleotide Sequencing/methods
9.
BMC Genomics ; 25(1): 573, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38849740

ABSTRACT

BACKGROUND: The single-pass long reads generated by third-generation sequencing technology exhibit a high error rate, whereas circular consensus sequencing (CCS) produces shorter reads. It is therefore effective to manage the error rate of long reads algorithmically with the help of homologous high-precision, low-cost short reads from Next-Generation Sequencing (NGS) technology. METHODS: In this work, a hybrid error correction method (NmTHC) based on a generative neural machine translation model is proposed to automatically capture discrepancies within the aligned regions of long reads and short reads, as well as the contextual relationships within the long reads themselves, for error correction. Akin to a natural language sequence, a long read can be regarded as a special "genetic language" and processed with generative neural networks. The algorithm builds a sequence-to-sequence (seq2seq) framework with a Recurrent Neural Network (RNN) as the core layer. The pre- and post-correction long reads are treated as sentences in the source and target languages of a translation task, and the alignment information of long reads with short reads is used to create a special corpus for training. The trained model is then used to predict corrected long reads. RESULTS: NmTHC outperforms the latest mainstream hybrid error correction methods on real-world datasets from two mainstream platforms, PacBio and Nanopore. Our experimental evaluation demonstrates that NmTHC aligns more bases to the reference genome without any segmenting in the six benchmark datasets, proving that it enhances alignment identity without sacrificing the length advantages of long reads. CONCLUSION: NmTHC adopts the generative Neural Machine Translation (NMT) model to transform hybrid error correction into a machine translation problem, providing a novel perspective on solving long-read error correction with ideas from Natural Language Processing (NLP). Notably, the proposed methodology is sequencing-technology-independent and can produce more precise reads.


Subjects
Algorithms , High-Throughput Nucleotide Sequencing , Neural Networks, Computer , High-Throughput Nucleotide Sequencing/methods , Humans , Machine Learning
10.
Rep Prog Phys ; 87(9), 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39059436

ABSTRACT

Decoherence-free subspaces and subsystems (DFS) preserve quantum information by encoding it into symmetry-protected states unaffected by decoherence. An inherent DFS of a given experimental system may not exist; however, through the use of dynamical decoupling (DD), one can induce symmetries that support DFSs. Here, we provide the first experimental demonstration of DD-generated decoherence-free subsystem logical qubits. Utilizing IBM Quantum superconducting processors, we investigate two- and three-qubit DFS codes comprising up to six and seven noninteracting logical qubits, respectively. Through a combination of DD and error detection, we show that DFS logical qubits can achieve up to a 23% improvement in state preservation fidelity over physical qubits subject to DD alone. This constitutes a beyond-breakeven fidelity improvement for DFS-encoded qubits. Our results showcase the potential utility of DFS codes as a pathway toward enhanced computational accuracy via logical encoding on quantum processors.

11.
Rep Prog Phys ; 87(3), 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38314645

ABSTRACT

Molecular nanomagnets (MNMs), molecules containing interacting spins, have been a playground for quantum mechanics. They are characterized by many accessible low-energy levels that can be exploited to store and process quantum information. This naturally opens the possibility of using them as qudits, thus enlarging the tools of quantum logic with respect to qubit-based architectures. These additional degrees of freedom recently prompted proposals for encoding qubits with embedded quantum error correction (QEC) in single molecules. QEC is the holy grail of quantum computing, and this qudit approach could circumvent the large overhead of physical qubits typical of standard multi-qubit codes. Another important strength of the molecular approach is the extremely high degree of control achieved in preparing complex supramolecular structures in which individual qudits are linked while preserving their individual properties and coherence. This is particularly relevant for building quantum simulators, controllable systems able to mimic the dynamics of other quantum objects. The use of MNMs for quantum information processing is a rapidly evolving field that still needs to be explored fully in experiments. The key issues to be settled relate to scaling up the number of qudits/qubits and to their individual addressing. Several promising possibilities are being intensively explored, ranging from the use of single-molecule transistors or superconducting devices to optical readout techniques. Moreover, new tools from chemistry could also be at hand, such as chiral-induced spin selectivity. In this paper, we review the present status of this interdisciplinary research field, discuss the open challenges, and envision solution paths that could finally unleash the very large potential of molecular spins for quantum technologies.

12.
BMC Plant Biol ; 24(1): 306, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38644480

ABSTRACT

Linkage maps are essential for genetic mapping of phenotypic traits, map-based gene cloning, and marker-assisted selection in breeding applications. Construction of a high-quality saturated map requires high-quality genotypic data on a large number of molecular markers. Genotyping errors cannot be completely avoided, no matter which platform is used, and when the error rate reaches a threshold level it seriously affects the accuracy of the constructed map and the reliability of consequent genetic studies. In this study, repeated genotyping of two recombinant inbred line (RIL) populations derived from the crosses Yangxiaomai × Zhongyou 9507 and Jingshuang 16 × Bainong 64 was used to investigate the effect of genotyping errors on linkage map construction. Inconsistent data points between the two replications were regarded as genotyping errors, which were classified into three types. Genotyping errors were treated as missing values, thereby generating a non-erroneous data set. First, linkage maps were constructed using the two replicates as well as the non-erroneous data set. Second, the error correction methods implemented in the software packages QTL IciMapping (EC) and Genotype-Corrector (GC) were applied to the two replicates; linkage maps were then constructed from the corrected genotypes and compared with those from the non-erroneous data set. A simulation study considering different levels of genotyping error was performed to investigate the impact of errors and the accuracy of the error correction methods. Results indicated that map length and marker order differed among the two replicates and the non-erroneous data sets in both RIL populations. For both actual and simulated populations, map length expanded as the error rate increased, and the correlation coefficient between linkage and physical maps became lower. Map quality can be improved by repeated genotyping and by error correction algorithms. When it is impossible to genotype the whole mapping population repeatedly, repeated genotyping of 30% of the population is recommended. The EC method had a much lower false-positive rate than the GC method under different error rates. This study systematically expounds the impact of genotyping errors on linkage analysis, providing potential guidelines for improving the accuracy of linkage maps in the presence of genotyping errors.
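The study's cleaning step, treating calls that disagree between replicate genotyping runs as errors and setting them to missing, can be sketched in a few lines (marker coding and the missing symbol here are illustrative):

```python
def mask_genotyping_errors(rep1, rep2, missing="-"):
    """Sketch of the replicate-based cleaning step: genotype calls that
    disagree between two replicate runs are treated as genotyping errors
    and replaced by a missing-value symbol."""
    cleaned = []
    for g1, g2 in zip(rep1, rep2):
        cleaned.append(g1 if g1 == g2 else missing)
    return cleaned

# One RIL's calls at five markers, genotyped twice ('A'/'B' parental alleles).
rep1 = ["A", "B", "A", "A", "B"]
rep2 = ["A", "B", "B", "A", "A"]
consensus = mask_genotyping_errors(rep1, rep2)
```

Applied marker by marker across the population, this yields the "non-erroneous" data set against which the EC and GC correction methods were benchmarked.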


Subjects
Chromosome Mapping , Genotype , Triticum , Triticum/genetics , Chromosome Mapping/methods , Quantitative Trait Loci , Genetic Linkage , Genotyping Techniques/methods , Oligonucleotide Array Sequence Analysis/methods
13.
Brief Bioinform ; 23(2), 2022 Mar 10.
Article in English | MEDLINE | ID: mdl-35136947

ABSTRACT

In this paper, we study the problem of finding complex proteoforms in protein databases based on top-down tandem mass spectrum data. The main difficulty is handling the combinatorial explosion of possible alterations on a protein. To overcome it, the problem has been formulated as the alignment of a proteoform mass graph (PMG) and a spectrum mass graph (SMG). Another important issue is handling the mass errors of peaks in the input spectrum. In previous methods, an error-tolerance value is used to handle the mass differences between matched consecutive nodes/peaks in the PMG and SMG. However, this approach cannot guarantee that the mass difference between any pair of nodes in the alignment is approximately the same in both the PMG and the SMG; it may lead to large error accumulation if positive (or negative) errors occur consecutively over a large number of matched node pairs. The problem is severe enough that some existing software packages include a step to further refine the alignments. In this paper, we propose a new model to handle the mass errors of peaks based on the formulation of the PMG and SMG. Note that the masses of sub-paths in the PMG are theoretical and supposed to be accurate. Our method allows each peak in the input spectrum a predefined error range. In the alignment of the PMG and SMG, we give a correction of the mass for each matched peak within its predefined error range. After the correction, we require that the mass between any two (not necessarily consecutive) matched nodes in the PMG be identical to that between the corresponding two matched peaks in the SMG. Intuitively, this kind of alignment is more accurate. We design an algorithm to find the maximum number of matched node and peak pairs in the two mass graphs (PMG and SMG) under the new constraint. The obtained alignment shows the matched node and peak pairs as well as the corrected positions of peaks. The algorithm works well for moderate-size inputs but takes a very long time and a huge amount of memory on large instances; we therefore also propose a diagonal alignment algorithm that can solve large instances in reasonable time. Experiments show that our new algorithms report alignments with a much larger number of matched node pairs. The software package and test data sets are available at https://github.com/Zeirdo/TopMGRefine.
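The key constraint can be illustrated with a deliberately simplified toy (this greedy matcher is not the paper's graph-alignment algorithm; masses and the tolerance are made up): a peak may be shifted anywhere inside its predefined error range, and its corrected mass is set exactly to the theoretical node mass, so pairwise mass differences agree exactly after correction and errors cannot accumulate along the alignment.

```python
def correct_peaks(theoretical, observed, tol=0.02):
    """Toy illustration of per-peak mass correction: an observed peak may be
    matched to a theoretical node only if it lies within the predefined
    error range, and its corrected mass is then exactly the theoretical
    mass, so corrected pairwise differences are exact by construction."""
    matches = []
    for t in theoretical:
        for o in observed:
            if abs(o - t) <= tol:
                matches.append((o, t))  # corrected mass of peak o is exactly t
                break
    return matches

theoretical = [100.000, 228.100, 331.150]
observed = [100.012, 228.091, 500.000]   # third peak has no counterpart
matches = correct_peaks(theoretical, observed)
```

Under the tolerance-only scheme criticized above, the small +0.012 and -0.009 offsets could instead compound across many consecutive matches, which is exactly the accumulation the per-peak correction rules out.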


Subjects
Algorithms , Tandem Mass Spectrometry , Databases, Protein , Software , Tandem Mass Spectrometry/methods
14.
Biochem Soc Trans ; 52(1): 29-39, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38305688

ABSTRACT

Accurate chromosome segregation in mitosis relies on sister kinetochores forming stable attachments to microtubules (MTs) extending from opposite spindle poles and establishing biorientation. To achieve this, erroneous kinetochore-MT interactions must be resolved through a process called error correction, which dissolves improper kinetochore-MT attachment and allows new interactions until biorientation is achieved. The Aurora B kinase plays key roles in driving error correction by phosphorylating Dam1 and Ndc80 complexes, while Mps1 kinase, Stu2 MT polymerase and phosphatases also regulate this process. Once biorientation is formed, tension is applied to kinetochore-MT interaction, stabilizing it. In this review article, we discuss the mechanisms of kinetochore-MT interaction, error correction and biorientation. We focus mainly on recent insights from budding yeast, where the attachment of a single MT to a single kinetochore during biorientation simplifies the analysis of error correction mechanisms.


Subjects
Saccharomyces cerevisiae Proteins , Saccharomycetales , Saccharomyces cerevisiae/genetics , Kinetochores , Microtubules/genetics , Mitosis , Chromosome Segregation , Saccharomyces cerevisiae Proteins/genetics
15.
Exp Brain Res ; 242(2): 337-353, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38078961

ABSTRACT

Children with neurodevelopmental disorders (NDDs) often display motor problems that may impact their daily lives. Studying specific motor characteristics related to spatiotemporal control may inform us about the mechanisms underlying their challenges. Fifty-eight children with varying neurodevelopmental symptom loads (median age: 5.6 years, range: 2.7-12.5 years) performed an interactive tablet-based tracking task. By investigating digit touch errors relative to the target's movement direction, we found that a higher load of neurodevelopmental symptoms was associated with reduced performance in tracking abrupt alternating directions (zigzag) and with overshooting the target. In contrast, reduced performance in children without neurodevelopmental symptoms was associated with lagging behind the target. Neurodevelopmental symptom load was also associated with reduced flexibility in correcting for lateral deviations in smooth tracking (spiral). Our findings suggest that neurodevelopmental symptoms are associated with difficulties in motor regulation related to inhibitory control and reduced flexibility, impacting motor control in NDDs.


Subjects
Neurodevelopmental Disorders , Child , Humans , Child, Preschool , Movement
16.
Methods ; 216: 39-50, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37330158

ABSTRACT

Assessing the quality of sequencing data plays a crucial role in downstream data analysis. However, existing tools often achieve sub-optimal efficiency, especially when dealing with compressed files or performing complicated quality control operations such as over-representation analysis and error correction. We present RabbitQCPlus, an ultra-efficient quality control tool for modern multi-core systems. RabbitQCPlus uses vectorization, memory copy reduction, parallel (de)compression, and optimized data structures to achieve substantial performance gains. It is 1.1 to 5.4 times faster when performing basic quality control operations compared to state-of-the-art applications yet requires fewer compute resources. Moreover, RabbitQCPlus is at least 4 times faster than other applications when processing gzip-compressed FASTQ files and 1.3 times faster with the error correction module turned on. Furthermore, it takes less than 4 minutes to process 280 GB of plain FASTQ sequencing data, while other applications take at least 22 minutes on a 48-core server when enabling the per-read over-representation analysis. C++ sources are available at https://github.com/RabbitBio/RabbitQCPlus.
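For a sense of what a "basic quality control operation" involves, here is a minimal, unoptimized sketch of mean-quality read filtering in Python (RabbitQCPlus's actual implementation is vectorized, multi-threaded C++ and works very differently; record layout and threshold here are illustrative):

```python
def mean_phred(qual_line, offset=33):
    """Mean Phred quality of one FASTQ quality string (Phred+33 encoding)."""
    return sum(ord(c) - offset for c in qual_line) / len(qual_line)

def basic_qc(fastq_records, min_q=20.0):
    """Minimal QC pass: keep reads whose mean base quality is >= min_q.
    Each record is a (name, sequence, quality) tuple."""
    return [(name, seq) for name, seq, qual in fastq_records
            if mean_phred(qual) >= min_q]

records = [
    ("read1", "ACGT", "IIII"),   # 'I' encodes Phred 40
    ("read2", "ACGT", "!!!!"),   # '!' encodes Phred 0
]
passed = basic_qc(records)
```

Even this trivial filter must touch every base of every read, which is why the per-byte efficiency gains (vectorization, parallel decompression) that RabbitQCPlus targets dominate QC runtime at scale.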


Subjects
Data Compression , Software , High-Throughput Nucleotide Sequencing , Quality Control , Algorithms , Sequence Analysis, DNA
17.
Environ Res ; 247: 118176, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38215922

ABSTRACT

With ongoing industrialization, declining air quality is an increasingly critical concern. Accurate prediction of the Air Quality Index (AQI), an all-inclusive measure of the extent of pollutants present in the atmosphere, is therefore of paramount importance. This study introduces a novel methodology that combines a stacking ensemble with error correction to improve AQI prediction; additionally, the reptile search algorithm (RSA) is employed to optimize model parameters. AQI data from four distinct regions, comprising 34,864 samples in total, are collected. Initially, we perform cross-validation on ten commonly used single models to obtain prediction results; then, based on evaluation indices, five models are selected for the ensemble. The results show that the proposed model improves accuracy by around 10% compared with the conventional model. Thus, the model introduced in this study offers a more scientifically grounded approach to tackling air pollution.
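The two ingredients named above, stacking several base models and then applying an error-correction term, can be sketched as follows. The inverse-error weighting and mean-residual bias correction below are assumptions chosen for a self-contained illustration; the paper's actual meta-learner, correction scheme, and RSA-tuned parameters are not reproduced here.

```python
def fit_stacking_with_error_correction(base_preds_train, y_train):
    """Toy stacking + error correction: (1) combine base models with weights
    proportional to inverse training MSE, (2) add a correction term equal to
    the stacked model's mean training residual."""
    n = len(y_train)
    mses = [sum((p - y) ** 2 for p, y in zip(preds, y_train)) / n
            for preds in base_preds_train]
    inv = [1.0 / (m + 1e-9) for m in mses]
    weights = [w / sum(inv) for w in inv]

    def stacked(preds_per_model):
        return sum(w * p for w, p in zip(weights, preds_per_model))

    residuals = [y - stacked([preds[i] for preds in base_preds_train])
                 for i, y in enumerate(y_train)]
    bias = sum(residuals) / n  # error-correction term

    def predict(preds_per_model):
        return stacked(preds_per_model) + bias
    return predict

# Two hypothetical base models: one accurate but noisy, one offset by +10.
y = [50.0, 60.0, 70.0]
model_a = [52.0, 58.0, 71.0]
model_b = [60.0, 70.0, 80.0]
predict = fit_stacking_with_error_correction([model_a, model_b], y)
aqi_hat = predict([65.0, 75.0])
```

The weighting downplays the biased model and the residual term soaks up what bias remains, the same division of labor the paper's stacking-plus-correction pipeline exploits at much larger scale.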


Subjects
Air Pollutants , Air Pollution , Environmental Pollutants , Air Pollution/analysis , Air Pollutants/analysis , Algorithms , Research Design
18.
Environ Res ; 246: 118533, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38417660

ABSTRACT

Real-time flood forecasting is one of the most pivotal measures for flood management, and real-time error correction is a critical step in guaranteeing the reliability of forecasting results. However, it is still challenging to develop a robust error correction technique, owing to limited understanding of catchment mechanisms and the multiple sources of error across hydrological modeling. In this study, we proposed a hydrologic similarity-based correction (HSBC) framework, which hybridizes hydrological modeling and multiple machine learning algorithms to advance the error correction of real-time flood forecasts. This framework can quickly and accurately retrieve similar historical simulation errors for different types of real-time floods by integrating clustering, supervised classification, and similarity retrieval methods; the simulation errors "carried" by similar historical floods are extracted to update the real-time forecasting results. Here, combining a Xin'anjiang model-based forecasting platform with k-means, K-nearest neighbors (KNN), and embedding-based subsequence matching (EBSM), we constructed the HSBC framework and applied it to China's Dufengkeng Basin. Three schemes were built for comparison: "non-corrected" (scheme 1), "auto-regressive (AR) corrected" (scheme 2), and "HSBC corrected" (scheme 3). The results indicated that: 1) the proposed framework can successfully retrieve similar simulation errors with considerable retrieval accuracy (2.79) and acceptable time consumption (228.18 s); 2) four evaluation metrics showed that the HSBC-based scheme 3 performed much better than the AR-based scheme 2 for both the whole flood process and the peak discharge; and 3) the proposed framework overcomes the AR model's poor correction of flood peaks and provides greater correction for floods with poor forecasting performance. Overall, the HSBC framework demonstrates the benefit of hydrologic similarity theory for real-time error correction and provides a novel methodological alternative for flood control and water management in wider areas.
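The retrieve-and-correct idea at the heart of HSBC can be sketched with plain KNN (the real framework also uses k-means clustering and subsequence matching, and the features, distances, and numbers below are invented for illustration):

```python
def hsbc_correct(forecast, features, history, k=2):
    """Sketch of similarity-based error correction: 'history' holds
    (feature_vector, simulation_error) pairs from past floods; the mean
    simulation error of the k most similar past floods is subtracted from
    the current real-time forecast."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda h: dist(h[0], features))[:k]
    mean_error = sum(err for _, err in nearest) / k
    return forecast - mean_error

# Past floods: (rainfall, antecedent soil moisture) -> model over-prediction.
history = [
    ((80.0, 0.30), 120.0),
    ((85.0, 0.35), 110.0),
    ((20.0, 0.10), -15.0),
]
corrected = hsbc_correct(1000.0, (82.0, 0.32), history, k=2)
```

The current flood resembles the two large historical events, so their shared over-prediction is subtracted, while the dissimilar small flood's error is ignored; this conditioning on flood type is what the AR baseline lacks.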


Subjects
Floods , Machine Learning , Reproducibility of Results , Computer Simulation , Forecasting
19.
Bioessays ; 44(5): e2100246, 2022 May.
Article in English | MEDLINE | ID: mdl-35261042

ABSTRACT

Correct chromosome segregation in mitosis relies on chromosome biorientation, in which sister kinetochores attach to microtubules from opposite spindle poles prior to segregation. To establish biorientation, aberrant kinetochore-microtubule interactions must be resolved through the error correction process. During error correction, kinetochore-microtubule interactions are exchanged (swapped) if aberrant, but the exchange must stop when biorientation is established. In this article, we discuss recent findings in budding yeast, which have revealed fundamental molecular mechanisms promoting this "swap and stop" process for error correction. Where relevant, we also compare the findings in budding yeast with mechanisms in higher eukaryotes. Evidence suggests that Aurora B kinase differentially regulates kinetochore attachments to the microtubule end and its lateral side and switches relative strength of the two kinetochore-microtubule attachment modes, which drives the exchange of kinetochore-microtubule interactions to resolve aberrant interactions. However, Aurora B kinase, recruited to centromeres and inner kinetochores, cannot reach its targets at kinetochore-microtubule interface when tension causes kinetochore stretching, which stops the kinetochore-microtubule exchange once biorientation is established.


Subjects
Kinetochores , Saccharomycetales , Aurora Kinase B/genetics , Chromosome Segregation , Microtubules/physiology , Mitosis
20.
Quantum Inf Process ; 23(3): 86, 2024.
Article in English | MEDLINE | ID: mdl-38434176

ABSTRACT

We construct new stabilizer quantum error-correcting codes from generalized monomial-Cartesian codes. Our construction uses an explicitly defined twist vector, and we present formulas for the minimum distance and dimension. Generalized monomial-Cartesian codes arise from polynomials in m variables. When m=1 our codes are MDS, and when m=2 and our lower bound for the minimum distance is 3, the codes are at least Hermitian almost MDS. For an infinite family of parameters, when m=2 we prove that our codes beat the Gilbert-Varshamov bound. We also present many examples of our codes that are better than any known code in the literature.
