Results 1 - 20 of 90

1.
BMC Bioinformatics ; 23(1): 85, 2022 Mar 05.
Article in English | MEDLINE | ID: mdl-35247967

ABSTRACT

BACKGROUND: A typical Copy Number Variation (CNV) detection process based on the depth of coverage in Whole Exome Sequencing (WES) data consists of several steps: (I) calculating the depth of coverage in sequencing regions, (II) quality control, (III) normalizing the depth of coverage, (IV) calling CNVs. Previous tools performed one normalization process per chromosome: all the coverage depths in the sequencing regions from a given chromosome were normalized in a single run. METHODS: Herein, we present CNVind, a new tool for calling CNVs, in which the normalization process is conducted separately for each sequencing region. The total number of normalizations equals the number of sequencing regions in the investigated dataset. For example, when analyzing a dataset composed of n sequencing regions, CNVind performs n independent depth-of-coverage normalizations. Before each normalization, the application selects the k sequencing regions most correlated with the region being normalized, using Pearson's correlation of depth of coverage as the distance metric. The resulting subgroup of [Formula: see text] sequencing regions is then normalized, the results of all n independent normalizations are combined, and finally segmentation and CNV calling are performed on the resulting dataset. RESULTS AND CONCLUSIONS: We used WES data from the 1000 Genomes project to evaluate the impact of independent normalization on CNV calling performance and compared the results with state-of-the-art tools: CODEX and exomeCopy. The results showed that independent normalization significantly improves the specificity of rare CNV detection. For example, for the investigated dataset, we reduced the number of false-positive calls from over 15,000 to around 5000 while maintaining a constant number of true-positive calls, equal to about 150 CNVs. However, independent normalization of each sequencing region is computationally expensive, so our pipeline is customizable and can easily be run in a cloud computing environment, on a computer cluster, or on a single CPU server. To our knowledge, the presented application is the first to implement independent normalization of the depth of coverage in WES data.
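
To make the per-region normalization idea concrete, the following is a minimal sketch, not CNVind's actual implementation: it assumes a regions-by-samples depth-of-coverage matrix, selects the k regions most correlated with the target region by Pearson correlation, and applies an illustrative median-ratio normalization.

```python
import numpy as np

def normalize_region(depth, region_idx, k):
    """Per-region normalization sketch: find the k regions whose depth
    profiles across samples are most Pearson-correlated with the target
    region, then normalize the target against that subgroup. The
    median-ratio step is an illustrative assumption."""
    target = depth[region_idx]
    corr = np.array([np.corrcoef(target, row)[0, 1] for row in depth])
    corr = np.nan_to_num(corr, nan=-1.0)  # guard against zero-variance regions
    corr[region_idx] = -np.inf            # exclude the region itself
    neighbours = np.argsort(corr)[-k:]    # k most correlated regions
    subgroup = depth[np.append(neighbours, region_idx)]
    reference = np.median(subgroup, axis=0)  # per-sample reference depth
    return target / np.where(reference > 0, reference, 1.0)

# Toy usage: 100 sequencing regions x 30 samples
rng = np.random.default_rng(0)
depth = rng.poisson(100, size=(100, 30)).astype(float)
normalized = np.vstack([normalize_region(depth, i, k=10) for i in range(100)])
print(normalized.shape)  # (100, 30)
```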


Subjects
DNA Copy Number Variations, Exome, Algorithms, Cloud Computing, High-Throughput Nucleotide Sequencing/methods, Whole Exome Sequencing
2.
BMC Bioinformatics ; 23(1): 122, 2022 Apr 07.
Article in English | MEDLINE | ID: mdl-35392798

ABSTRACT

BACKGROUND: Assembly is an indispensable step in sequencing the genomes of new organisms and in studying structural genomic changes. In recent years, the dynamic development of next-generation sequencing (NGS) methods has raised hopes of making whole-genome sequencing a fast and reliable tool used, for example, in medical diagnostics. However, this is hampered by the slowness and computational requirements of current processing algorithms, which creates the need for more efficient ones. One possible approach, still little explored, is the use of quantum computing. RESULTS: We present a proof of concept of a de novo assembly algorithm that uses the Genomic Signal Processing approach, detecting overlaps between DNA reads by calculating the Pearson correlation coefficient and formulating the assembly problem as an optimization task (the Traveling Salesman Problem). Computations performed on a classical computer were compared with results achieved by a hybrid method combining CPU and QPU calculations, using a D-Wave quantum annealer. The experiments were performed with artificially generated data and with DNA reads from a simulator, using actual organism genomes as input sequences. To our knowledge, this work is one of the few in which actual organism sequences were used to study the de novo assembly task on a quantum annealer. CONCLUSIONS: Our proof of concept showed that using a quantum annealer (QA) for the de novo assembly task may be a promising alternative to computations in the classical model. The computing power of currently available devices requires a hybrid approach (combining CPU and QPU computations). The next step may be to develop a hybrid algorithm strictly dedicated to the de novo assembly task, exploiting its specific structure (e.g., the sparsity and bounded degree of the overlap-layout-consensus graph).
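
A hedged sketch of the classical side of this formulation: reads are numerically encoded (the encoding below is an illustrative choice), overlaps are scored with the Pearson correlation coefficient, and the scores define a Traveling Salesman Problem distance matrix that a classical or quantum solver could then minimize. This is an assumption-laden illustration, not the authors' pipeline.

```python
import numpy as np

# Numeric encoding of bases for Genomic Signal Processing
# (an illustrative choice; encodings vary in the literature).
ENCODING = {'A': 1.0, 'C': 2.0, 'G': 3.0, 'T': 4.0}

def overlap_score(read_a, read_b, min_len=5):
    """Best Pearson correlation between a suffix of read_a and a
    prefix of read_b, over all overlap lengths >= min_len."""
    a = np.array([ENCODING[c] for c in read_a])
    b = np.array([ENCODING[c] for c in read_b])
    best = -1.0
    for L in range(min_len, min(len(a), len(b)) + 1):
        r = np.corrcoef(a[-L:], b[:L])[0, 1]
        if not np.isnan(r):
            best = max(best, r)
    return best

# TSP view: nodes are reads, edge weight = 1 - overlap score, and an
# assembly is a minimum-weight path visiting every read exactly once.
reads = ["ACGTACGT", "TACGTTGA", "TTGACCGA"]
n = len(reads)
dist = np.array([[1 - overlap_score(reads[i], reads[j]) if i != j else 0.0
                  for j in range(n)] for i in range(n)])
print(dist)
```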


Subjects
Computing Methodologies, Quantum Theory, Algorithms, Base Sequence, DNA/genetics, High-Throughput Nucleotide Sequencing/methods, DNA Sequence Analysis/methods
3.
J Math Biol ; 84(5): 36, 2022 04 08.
Article in English | MEDLINE | ID: mdl-35394192

ABSTRACT

Species tree estimation faces many significant hurdles. Chief among them is that the trees describing the ancestral lineages of each individual gene (the gene trees) often differ from the species tree. The multispecies coalescent is commonly used to model this gene tree discordance, at least when it is believed to arise from incomplete lineage sorting, a population-genetic effect. Another significant challenge in this area is that the molecular sequences associated with each gene typically provide limited information about the gene trees themselves. While the modeling of sequence evolution by single-site substitutions is well studied, few species tree reconstruction methods with theoretical guarantees actually address this latter issue. Instead, a standard, but unsatisfactory, assumption is that gene trees are perfectly reconstructed before being fed into a so-called summary method. Hence much remains to be done in the development of inference methodologies that rigorously account for gene tree estimation error, or that avoid gene tree estimation entirely. In previous work, a data requirement trade-off was derived between the number of loci m needed for an accurate reconstruction and the length of the locus sequences k. It was shown that to reconstruct an internal branch of length f, one needs m to be of the order of [Formula: see text]. That previous result was obtained under the restrictive assumption that mutation rates as well as population sizes are constant across the species phylogeny. Here we generalize this result beyond that assumption. Our main contribution is a novel reduction to the molecular clock case under the multispecies coalescent, which we refer to as a stochastic Farris transform. As a corollary, we also obtain a new identifiability result of independent interest: for any species tree with [Formula: see text] species, the rooted topology of the species tree can be identified from the distribution of its unrooted weighted gene trees, even in the absence of a molecular clock.


Subjects
Genetic Speciation, Genetic Models, Phylogeny
4.
Sensors (Basel) ; 22(6)2022 Mar 09.
Article in English | MEDLINE | ID: mdl-35336308

ABSTRACT

In this work, we present the problem of classifying Polish court rulings based on their text. We use natural language processing methods and classifiers based on convolutional and recurrent neural networks. We prepared a dataset of 144,784 authentic, anonymized Polish court rulings. We analyze various general-language embedding matrices and multiple neural network architectures with different parameters. The results show that such models can classify documents with very high accuracy (>99%). We also include an analysis of wrongly predicted examples. Performance analysis shows that our method is fast and could be used in practice on typical server hardware with two central processing units (CPUs) or with a CPU and a graphics processing unit (GPU).
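
As an illustration of the kind of convolutional classifier evaluated here, a minimal Keras sketch follows; the vocabulary size, embedding dimension, and layer sizes are assumptions, not the paper's tuned configuration.

```python
import numpy as np
import tensorflow as tf

# Minimal convolutional text classifier; all sizes are illustrative.
VOCAB, EMBED, CLASSES, MAXLEN = 50_000, 128, 10, 512

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, EMBED),            # token ids -> vectors
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # n-gram-like features
    tf.keras.layers.GlobalMaxPooling1D(),               # strongest feature per filter
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy forward pass on random token ids to confirm shapes
tokens = np.random.randint(0, VOCAB, size=(2, MAXLEN))
print(model(tokens).shape)  # (2, 10)
```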


Subjects
Natural Language Processing, Neural Networks (Computer), Computers, Language, Poland
5.
Sensors (Basel) ; 22(6)2022 Mar 15.
Article in English | MEDLINE | ID: mdl-35336445

ABSTRACT

Third-generation DNA sequencers from Oxford Nanopore Technologies (ONT) produce a series of samples of the electrical current in the nanopore. This time series is used to detect the sequence of nucleotides; the task of translating current values into nucleotide symbols is called basecalling. Various basecalling solutions have been proposed. Earlier ones were based on Hidden Markov Models, but the best use neural networks or other machine learning models. Unfortunately, the accuracy achieved is still lower than that of competing sequencing techniques, such as Illumina's. Basecallers differ in input data type: currently, most work on raw data straight from the sequencer (the time series of current values), but the approach of using event data is also explored. Event data is obtained by preprocessing the raw data and dividing it into segments, each described by several features computed from the raw data values within the segment. We propose a novel basecaller that jointly processes raw and event data. We define basecalling as a sequence-to-sequence translation and use a machine learning model based on an encoder-decoder architecture of recurrent neural networks. Our model incorporates twin encoders and an attention mechanism. We tested our solution on simulated and real datasets, comparing the accuracy of the full model with that of its components processing only raw or only event data, and comparing our solution with the existing ONT basecaller, Guppy. The results of numerical experiments show that joint raw and event data processing provides better basecalling accuracy than processing each data type separately. Our implementation, an application called Ravvent, is freely available under the MIT licence.
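
A schematic of a twin-encoder, attention-based sequence-to-sequence model in the spirit described above; all layer types and dimensions are illustrative assumptions and do not reproduce Ravvent's actual architecture.

```python
import tensorflow as tf

# Twin encoders: one for the raw current signal, one for event features.
# A GRU decoder attends over both encoders' outputs; sizes are assumptions.
raw_in = tf.keras.Input(shape=(None, 1), name="raw_current")
event_in = tf.keras.Input(shape=(None, 4), name="event_features")

raw_enc = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(64, return_sequences=True))(raw_in)
event_enc = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(64, return_sequences=True))(event_in)

dec_in = tf.keras.Input(shape=(None, 5), name="decoder_tokens")  # ACGT + start
dec = tf.keras.layers.GRU(128, return_sequences=True)(dec_in)

# Attention over each encoder's outputs, then merge and predict bases
att_raw = tf.keras.layers.Attention()([dec, raw_enc])
att_event = tf.keras.layers.Attention()([dec, event_enc])
merged = tf.keras.layers.Concatenate()([dec, att_raw, att_event])
out = tf.keras.layers.Dense(5, activation="softmax")(merged)  # ACGT + stop

model = tf.keras.Model([raw_in, event_in, dec_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```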


Subjects
Nanopores, DNA, Machine Learning, Neural Networks (Computer), DNA Sequence Analysis/methods
6.
Sensors (Basel) ; 22(1)2022 Jan 05.
Article in English | MEDLINE | ID: mdl-35009927

ABSTRACT

Illegal discharges of pollutants into sewage networks are a growing problem in large European cities. Such events often require restarting wastewater treatment plants, which costs up to a hundred thousand euros. A system for the localization and quantification of pollutants in utility networks could discourage such behavior and indicate a culprit when it happens. We propose an enhanced algorithm for multisensor data fusion for the detection, localization, and quantification of pollutants in wastewater networks. The algorithm processes data from multiple heterogeneous sensors in real time, producing current estimates of the network state and alarms if one or many sensors detect pollutants. Our algorithm models the network as a directed acyclic graph, uses adaptive peak detection, estimates the amount of specific compounds, and tracks the pollutant using a Kalman filter. We performed numerical experiments on several real and artificial sewage networks and measured the quality of discharge event reconstruction, reporting the correctness and performance of our system. We also propose a method to assess the importance of specific sensor locations. The experiments show that the algorithm's success rate equals the sensor coverage of the network. Moreover, the median distance between the nodes identified by the fusion algorithm and the nodes where the discharge was introduced drops to zero when more than half of the network nodes contain sensors. The system can process around 5000 measurements per second, using 1 MiB of memory per 4600 measurements plus a constant 97 MiB, and it can process 20 tracks per second, using 1.3 MiB of memory per 100 tracks.
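
A minimal sketch of the Kalman-filter tracking component mentioned above, reduced to a scalar random-walk model of pollutant concentration at a single node; the noise parameters are illustrative assumptions.

```python
import numpy as np

class ScalarKalman:
    """Minimal 1-D Kalman filter tracking a pollutant concentration.
    Process/measurement noise variances are illustrative assumptions."""
    def __init__(self, q=0.01, r=0.25, x0=0.0, p0=1.0):
        self.q, self.r = q, r    # process / measurement noise variance
        self.x, self.p = x0, p0  # state estimate and its variance

    def step(self, z):
        # Predict: random-walk model for the concentration
        self.p += self.q
        # Update with the new sensor measurement z
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1 - k)
        return self.x

# Toy usage: noisy measurements of a concentration step at t=50
rng = np.random.default_rng(1)
truth = np.where(np.arange(100) < 50, 0.0, 2.0)
kf = ScalarKalman()
estimates = [kf.step(z) for z in truth + rng.normal(0, 0.5, 100)]
```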


Subjects
Algorithms, Wastewater, Cities, Sewage
7.
Sensors (Basel) ; 22(24)2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36559944

ABSTRACT

Non-invasive electrocardiogram (ECG) signals are useful in assessing heart condition and helpful in diagnosing cardiac diseases. However, traditional interpretation of ECG signals through medical consultation requires effort, knowledge, and time due to the large amount of data and its complexity. Neural networks have recently been shown to be efficient in interpreting biomedical signals, including ECG and EEG. The novelty of the proposed work is the use of spectrograms instead of raw signals. Spectrograms can easily be reduced by eliminating frequencies that carry no ECG information; moreover, spectrogram calculation via the short-time Fourier transform (STFT) is time-efficient and presents the reduced data in a well-distinguishable form to a convolutional neural network (CNN). The data reduction was performed through frequency filtration with a specific cutoff value. These steps keep the architecture of the CNN model simple while achieving high accuracy, reducing memory usage and computational cost by avoiding complex CNN models. The large, publicly available PTB-XL dataset was utilized, and two datasets were prepared for binary classification: spectrograms and raw signals. The proposed approach achieved the highest accuracy of 99.06%, indicating that spectrograms are better suited than raw signals for ECG classification. Further, the signals were also up- and down-sampled at various sampling rates, and the corresponding accuracies were measured.
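
A short sketch of the spectrogram-plus-cutoff data reduction described above, using SciPy's STFT; the sampling rate, window length, and 40 Hz cutoff are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np
from scipy import signal

def ecg_spectrogram(ecg, fs=500, cutoff_hz=40):
    """Compute an STFT spectrogram of an ECG signal and keep only
    frequencies below a cutoff, as in the data reduction sketched
    above. 500 Hz sampling and a 40 Hz cutoff are assumptions."""
    f, t, zxx = signal.stft(ecg, fs=fs, nperseg=256)
    keep = f <= cutoff_hz  # drop frequency bins with no ECG content
    return f[keep], t, np.abs(zxx[keep])

# Toy usage: a 10 s synthetic signal
fs = 500
ecg = np.sin(2 * np.pi * 1.2 * np.arange(10 * fs) / fs)  # ~72 bpm tone
freqs, times, spec = ecg_spectrogram(ecg, fs)
print(spec.shape)  # (frequency bins kept, time frames)
```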


Subjects
Heart Diseases, Neural Networks (Computer), Humans, Heart Rate, Electrocardiography, Filtration, Algorithms
8.
Sensors (Basel) ; 21(3)2021 Jan 26.
Article in English | MEDLINE | ID: mdl-33530562

ABSTRACT

In December 2016, the wastewater treatment plant of Baarle-Nassau, Netherlands, failed. The failure was caused by the illegal disposal of high volumes of acidic waste into the sewer network; repairs cost between 80,000 and 100,000 EUR. A continuous monitoring system for such a utility network would help to determine the causes of such pollution and could mitigate or reduce the impact of these kinds of events in the future. We have designed and tested a data fusion system that transforms the time series of sensor measurements into an array of source-localized discharge events. The system performs this transformation as follows. First, the time series of sensor measurements are resampled and converted to sensor observations in a unified discrete time domain. Second, sensor observations are mapped to pollutant detections that indicate the amount of specific pollutants according to a priori knowledge. Third, pollutant detections are used to infer the propagation of the discharged pollutant downstream of the sewage network, to account for missing sensor observations. Fourth, pollutant detections and inferred sensor observations are clustered to form tracks. Finally, tracks are processed and propagated upstream to form the final list of probable events. A set of experiments was performed using a modified variant of EPANET Example Network 2. The results show that the proposed system can narrow down the source of pollution to seven or fewer nodes, depending on the number of sensors, while processing approximately 100 sensor observations per second. Given these results, such a system could provide meaningful information about pollution events in utility networks.
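
A minimal sketch of the first stage of this pipeline, resampling an irregular sensor time series onto a unified discrete time grid; the grid parameters and the use of linear interpolation are illustrative assumptions.

```python
import numpy as np

def resample_to_grid(timestamps, values, t0, t1, step):
    """Convert an irregularly sampled sensor time series to
    observations on a unified discrete time grid by linear
    interpolation (grid parameters are assumptions)."""
    grid = np.arange(t0, t1, step)
    return grid, np.interp(grid, timestamps, values)

# Toy usage: two sensors sampled at different, irregular times
t_a = np.array([0.0, 1.3, 2.9, 4.2]); v_a = np.array([0.1, 0.4, 0.2, 0.9])
t_b = np.array([0.5, 2.0, 3.5]);      v_b = np.array([0.0, 0.7, 0.3])
grid, obs_a = resample_to_grid(t_a, v_a, 0.0, 4.0, 0.5)
_,    obs_b = resample_to_grid(t_b, v_b, 0.0, 4.0, 0.5)
print(grid, obs_a, obs_b)
```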

9.
Sensors (Basel) ; 21(24)2021 Dec 12.
Article in English | MEDLINE | ID: mdl-34960407

ABSTRACT

Models for keyword spotting in continuous recordings can significantly improve the experience of navigating vast libraries of audio recordings. In this paper, we describe the development of such a keyword spotting system for detecting regions of interest in Polish call centre conversations. Unfortunately, in spite of recent advances in automatic speech recognition, the human-level transcription accuracy reported on English benchmarks does not reflect the performance achievable in low-resource languages such as Polish. Therefore, in this work, we shift our focus from complete speech-to-text conversion to acoustic similarity matching, in the hope of reducing the demand for data annotation. As our primary approach, we evaluate Siamese and prototypical neural networks trained on several datasets of English and Polish recordings. While we obtain usable results in English, our models' performance on Polish speech remains unsatisfactory after both mono- and cross-lingual training. This performance gap shows that generalisation with limited training resources is a significant obstacle to actual deployment in low-resource languages. As a potential countermeasure, we implement a detector using audio embeddings generated with a generic pre-trained model provided by Google, which has a much more favourable profile when applied in a cross-lingual setup to detect Polish audio patterns. Nevertheless, despite these promising results, its performance on out-of-distribution data is still far from stellar. This indicates that, in spite of the richness of the internal representations created by more generic models, such speech embeddings are not entirely amenable to cross-language transfer.
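
A minimal sketch of acoustic similarity matching over precomputed audio embeddings, the core idea behind the detector described above; the embedding dimension and decision threshold are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_keyword(window_embeddings, keyword_embedding, threshold=0.8):
    """Flag windows whose audio embedding is close to the keyword's
    embedding. In the study, the embeddings come from Siamese or
    prototypical networks or a generic pre-trained model; the
    threshold here is an assumption."""
    return [i for i, w in enumerate(window_embeddings)
            if cosine_similarity(w, keyword_embedding) >= threshold]

# Toy usage with random 128-d embeddings
rng = np.random.default_rng(2)
windows = rng.normal(size=(50, 128))
keyword = windows[7] + rng.normal(0, 0.05, 128)  # near-duplicate of window 7
print(detect_keyword(windows, keyword))          # expect [7]
```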


Subjects
Neural Networks (Computer), Speech, Acoustics, Data Curation, Humans, Language
10.
Sensors (Basel) ; 21(16)2021 Aug 05.
Article in English | MEDLINE | ID: mdl-34450735

ABSTRACT

Despite technological progress, we lack a consensus on how to conduct automated bowel sound (BS) analysis and, consequently, BS tools have not become available to doctors. We aimed to briefly review the literature on BS recording and analysis, with an emphasis on the broad range of analytical approaches. Scientific journals and conference materials (Scopus, MEDLINE, IEEE) were searched with a specific set of terms to find reports on BS. The research articles identified were analyzed in the context of the main research directions at a number of centers globally. Automated BS analysis methods were already well developed by the early 2000s; accuracy of 90% and higher had been achieved with various analytical approaches, including wavelet transformations, multi-layer perceptrons, independent component analysis, and autoregressive-moving-average models. Clinical research on BS has revealed their important potential in the non-invasive diagnosis of irritable bowel syndrome, in surgery, and in the investigation of gastrointestinal motility. The most recent advances are linked to the application of artificial intelligence and the development of dedicated BS devices. BS research is technologically mature, but it lacks a uniform methodology, an international forum for discussion, and an open platform for data exchange. A common ground is needed as a starting point. The next key development will be the release of freely available benchmark datasets with labels confirmed by human experts.


Subjects
Artificial Intelligence, Gastrointestinal Diseases, Neural Networks (Computer), Automation, Gastrointestinal Diseases/diagnosis, Humans, Sound
11.
Sensors (Basel) ; 21(22)2021 Nov 16.
Article in English | MEDLINE | ID: mdl-34833679

ABSTRACT

Automated bowel sound (BS) analysis methods were already well developed by the early 2000s; accuracy of ~90% had been achieved by several teams using various analytical approaches. Clinical research on BS had revealed their high potential in the non-invasive investigation of irritable bowel syndrome, in the study of gastrointestinal motility, and in surgical settings. This article proposes a novel methodology for BS analysis using hybrid convolutional and recursive neural networks, one of the first widely explored methods to apply deep learning to this task. We developed an experimental pipeline and evaluated our results on a new dataset collected with a device featuring a dedicated contact microphone. Data were collected at night, the most interesting period from a neurogastroenterological point of view; previous works had ignored this period and instead kept only brief records during the day. Our algorithm detects bowel sounds with an accuracy >93% and achieves very high specificity (>97%), which is crucial in diagnosis. The results were checked with a medical professional and successfully support clinical diagnosis. We also developed a client-server system, available online, that allows medical practitioners to upload recordings from their patients and have them analyzed. Although BS research is technologically mature, it still lacks a uniform methodology, an international forum for discussion, and an open platform for data exchange, and therefore it is not in common use. Our server could provide a starting point for establishing a common framework in BS research.
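
For illustration, a hybrid convolutional-recurrent sketch for frame-wise bowel sound detection follows; the feature dimension and layer sizes are assumptions and do not reproduce the paper's model.

```python
import numpy as np
import tensorflow as tf

# Conv1D layers extract local acoustic features, a bidirectional GRU
# models temporal context, and a sigmoid head flags BS frames.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu",
                           input_shape=(None, 64)),  # 64 spectral features/frame
    tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32, return_sequences=True)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # per-frame BS probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy forward pass: 2 recordings x 100 frames x 64 features
frames = np.random.rand(2, 100, 64).astype("float32")
print(model(frames).shape)  # (2, 100, 1)
```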


Subjects
Algorithms, Neural Networks (Computer), Acoustics, Humans
12.
J Strength Cond Res ; 35(8): 2279-2286, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-34398078

ABSTRACT

ABSTRACT: Nowak, R, Kostrzerwa-Nowak, D, and Buryta, R. Analysis of selected lymphocyte (CD45+) subset distribution in capillary blood of young soccer players. J Strength Cond Res 35(8): 2279-2286, 2021. The mechanisms responsible for increasing athletes' physical capacity and inducing exercise-induced immunosuppression are not fully understood. The aim of this study was to monitor changes in the percentages of lymphocyte subsets (T, Th, Tc, B, and NK cells) in the capillary blood of junior soccer players. Ten subjects with a median age of 18 years (range 17-19 years) were recruited from among young soccer players. Capillary blood was collected 24 hours after each soccer match during the 8 weeks of the final phase of the Central Junior League competition, and white blood cell (WBC) phenotyping was performed to determine the percentages of B lymphocytes, NK cells, and T-lymphocyte subsets. Cumulative match-time (the sum of time spent playing by each athlete during the observation period) was also calculated. Significant changes in the percentages of total lymphocytes (p = 0.00005) and T cells (p = 0.00006) were observed. Slight increases in the median percentages of lymphocytes and Th cells correlated with the increasing cumulative match-time of the studied subjects, although the correlations were not strong (R = 0.24, p = 0.0205 and R = 0.30, p = 0.0035, for lymphocytes and Th cells, respectively). It seems that exercise bouts are among the considerable factors influencing changes in WBC subsets, especially CD3+ cells, in young soccer players. Given the number of games played and the training loads, these players are more susceptible to immunosuppression and subsequent infections and should therefore be monitored through WBC phenotype assessment.


Subjects
Athletic Performance, Running, Soccer, Adolescent, Adult, Athletes, Humans, Lymphocytes, Young Adult
13.
Molecules ; 26(11)2021 May 24.
Article in English | MEDLINE | ID: mdl-34073894

ABSTRACT

Radiation and photodynamic therapies are used for cancer treatment by targeting DNA. However, their efficiency is limited by physico-chemical processes and the insensitivity of native nucleobases to damage. Thus, incorporating radio- and photosensitizers into these therapies should increase both efficacy and the yield of DNA damage. To date, studies of sensitization processes have been performed on simple model systems, e.g., buffered solutions of dsDNA or of sensitizers alone. To fully understand the sensitization processes and to be able to develop new, efficient sensitizers in the future, well-established model systems are necessary. In the cell environment, DNA interacts tightly with proteins, and incorporating this interaction is necessary to fully understand the DNA sensitization process. In this work, we used dsDNA/protein complexes labeled with photo- and radiosensitizers and investigated degradation pathways using LC-MS and HPLC after X-ray or UV irradiation.


Subjects
DNA/radiation effects, Proteins/radiation effects, Ultraviolet Rays, X-Rays, DNA/chemistry, Radiation-Sensitizing Agents/chemistry
14.
Molecules ; 25(16)2020 Aug 10.
Article in English | MEDLINE | ID: mdl-32784992

ABSTRACT

Radiotherapy, the most common therapy for the treatment of solid tumors, exerts its effects by inducing DNA damage. To fully understand the extent and nature of this damage, DNA models that mimic the in vivo situation should be utilized. In a cellular context, genomic DNA constantly interacts with proteins, and these interactions can influence both the primary radical processes (triggered by ionizing radiation) and the secondary reactions that ultimately lead to DNA damage. However, this is seldom addressed in the literature. In this work, we propose a general approach to address these shortcomings. We synthesized a protein-DNA complex that represents DNA in the physiological environment more closely than an oligonucleotide solution alone, while being sufficiently simple to permit further chemical analyses. Using click chemistry, we obtained an oligonucleotide-peptide conjugate which, when annealed with the complementary oligonucleotide strand, forms a complex that mimics the specific interactions between the GCN4 protein and DNA. The covalent bond connecting the oligonucleotide and the peptide is part of a substituted triazole, formed by the click reaction between a short peptide corresponding to the specific amino acid sequence of the GCN4 protein (a yeast transcription factor) and a DNA fragment recognized by the protein. DNase footprinting demonstrated that the part of the DNA fragment that specifically interacts with the peptide in the complex is protected from DNase activity. Moreover, the thermodynamic characteristics obtained using differential scanning calorimetry (DSC) are consistent with the interaction energies calculated at the level of metadynamics. Thus, we present an efficient approach to generating a well-defined DNA-peptide conjugate that mimics a real DNA-peptide complex. Such complexes can be used to investigate DNA damage under conditions very similar to those present in the cell.


Subjects
Basic-Leucine Zipper Transcription Factors/chemistry, Single-Stranded DNA/chemistry, DNA/chemistry, Peptides/chemistry, Saccharomyces cerevisiae Proteins/chemistry, Amino Acid Sequence, Basic-Leucine Zipper Transcription Factors/metabolism, Binding Sites, Differential Scanning Calorimetry, Catalysis, High Pressure Liquid Chromatography, Click Chemistry, Copper/chemistry, DNA/metabolism, DNA Damage, Single-Stranded DNA/metabolism, Molecular Dynamics Simulation, Nucleic Acid Conformation, Peptides/metabolism, Protein Domains, Saccharomyces cerevisiae Proteins/metabolism, Electrospray Ionization Mass Spectrometry, Transition Temperature
15.
BMC Bioinformatics ; 20(1): 266, 2019 May 28.
Article in English | MEDLINE | ID: mdl-31138108

ABSTRACT

BACKGROUND: There are over 25 tools dedicated to the detection of Copy Number Variants (CNVs) from Whole Exome Sequencing (WES) data based on read depth analysis. These tools involve several steps, including: (i) calculation of read depth for each sequencing target, (ii) normalization, (iii) segmentation, and (iv) the actual CNV calling. An essential aspect of the entire process is the normalization stage, in which systematic errors and biases are removed and a reference sample set is used to increase the signal-to-noise ratio. Although some CNV calling tools use dedicated algorithms to obtain an optimal reference sample set, most advanced CNV callers do not include this feature. To our knowledge, this work is the first attempt to assess the impact of reference sample set selection on CNV detection performance. METHODS: We used WES data from the 1000 Genomes project to evaluate the impact of various methods of reference sample set selection on the CNV calling performance of three state-of-the-art tools: CODEX, CNVkit, and exomeCopy. Two naive solutions (all samples as the reference set, and random selection) as well as two clustering methods (k-means and k-nearest neighbours (kNN), with a variable number of clusters or group sizes) were evaluated to discover the best-performing sample selection method. RESULTS AND CONCLUSIONS: The experiments showed that appropriate selection of the reference sample set can greatly improve the CNV detection rate. In particular, we found that a smart reduction of the reference sample size may significantly increase the algorithms' precision while having a negligible negative effect on sensitivity. We also observed that a complete CNV calling process with k-means as the selection method has significantly better time complexity than the kNN-based solution.
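
A minimal sketch of the k-means-based reference selection idea evaluated here: samples are clustered by their coverage profiles, and each sample's cluster-mates serve as its reference set. The cluster count and the scikit-learn usage are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_reference_sets(coverage, n_clusters=5, seed=0):
    """Cluster samples by coverage profile; a sample's reference set
    is the other samples in its cluster (cluster count is assumed)."""
    # coverage: samples x regions matrix of normalized read depth
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(coverage)
    return {s: np.flatnonzero((labels == labels[s]) &
                              (np.arange(len(labels)) != s))
            for s in range(coverage.shape[0])}

# Toy usage: 60 samples x 1000 sequencing targets
rng = np.random.default_rng(3)
coverage = rng.poisson(100, size=(60, 1000)).astype(float)
refs = kmeans_reference_sets(coverage)
print(len(refs[0]))  # number of reference samples chosen for sample 0
```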


Subjects
Algorithms, DNA Copy Number Variations/genetics, Benchmarking, Genetic Databases, Female, Humans, Male, Reference Standards, Sample Size
16.
BMC Bioinformatics ; 19(1): 273, 2018 07 18.
Article in English | MEDLINE | ID: mdl-30021513

ABSTRACT

BACKGROUND: Many organisms, in particular bacteria, contain repetitive DNA fragments called tandem repeats. DNA assemblers restore these structures by mapping paired-end tags to unitigs, estimating the distance between them, and filling the gap with the specified DNA motif, which may be repeated many times. However, some tandem repeats are longer than the distance between the paired-end tags. RESULTS: We present a new algorithm for de novo DNA assembly that uses the relative frequency of reads to properly restore tandem repeats. Its main advantage is that long tandem repeats, much longer than the maximum read length and the insert size of paired-end tags, can be properly restored; moreover, repetitive DNA regions covered only by single-read sequencing data can also be restored. Other existing de novo DNA assemblers fail in such cases. The presented application consists of several steps, including: (i) building the de Bruijn graph, (ii) correcting the de Bruijn graph, (iii) normalizing edge weights, and (iv) generating the output set of DNA sequences. We tested our approach on real datasets of bacterial organisms. CONCLUSIONS: A software library, a console application, and a web application were developed. The web application uses a client-server architecture in which a web browser communicates with the end user, while the algorithms are implemented in C++ and Python. The presented approach enables the proper reconstruction of tandem repeats longer than the insert size of paired-end tags. The application is freely available to all users under the GNU Library or Lesser General Public License version 3.0 (LGPLv3).
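
A minimal sketch of step (i), building a de Bruijn graph whose edge weights record k-mer frequencies, the quantity such an algorithm can use to decide how many times to traverse a repeat; this simplified fragment is illustrative, not the tool's implementation.

```python
from collections import Counter

def build_de_bruijn(reads, k=4):
    """Build a de Bruijn graph from reads: nodes are (k-1)-mers,
    edges are k-mers, and edge weights count k-mer occurrences."""
    edges = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges[(kmer[:-1], kmer[1:])] += 1  # (k-1)-mer -> (k-1)-mer
    return edges

# Toy usage: a repeated motif yields proportionally heavier edges
reads = ["ACGACGACGT", "GACGACGACG"]
for (u, v), w in sorted(build_de_bruijn(reads).items()):
    print(f"{u} -> {v}  weight={w}")
```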


Subjects
Algorithms, Bacteria/genetics, DNA/genetics, Bacterial Genome, High-Throughput Nucleotide Sequencing/methods, Repetitive Nucleic Acid Sequences/genetics, Base Sequence, Computer Simulation, Genetic Databases, Tandem Repeat Sequences/genetics
17.
J Strength Cond Res ; 29(5): 1399-405, 2015 May.
Article in English | MEDLINE | ID: mdl-25426511

ABSTRACT

Numerous literature reports point out differences in immunological parameters resulting from physical effort and relate those changes to the subject's fitness level. This study aimed to assess soccer players' condition and adaptation to physical effort based on changes in the blood level of C-reactive protein (CRP). CRP, total protein, and albumin plasma levels were determined in 16 soccer players (8 men and 8 women) before and after a 60-minute outdoor run. A statistically significant increase in total blood protein level was observed in both groups, whereas there were no statistically significant changes in albumin level. CRP determination showed that the exercise test changed its level in both women and men, yet a statistically significant increase was found only in women's blood. The different influence of effort on plasma CRP level may be explained by the involvement of various mechanisms in the regulation of acute-phase responses under different conditions. We found that the CRP level could be a valuable tool for assessing the metabolic response to aerobic exercise.


Subjects
C-Reactive Protein/metabolism, Physical Exertion/physiology, Soccer/physiology, Acute-Phase Reaction, Physiological Adaptation/physiology, Adolescent, Exercise Test, Female, Humans, Male, Physical Fitness/physiology, Running/physiology, Serum Albumin/metabolism, Sex Factors, Young Adult
18.
J Strength Cond Res ; 28(8): 2180-6, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25057846

ABSTRACT

Monitoring and optimizing the effectiveness of a training course requires broad analyses of changes in athletes' blood parameters. The aim of this study was to evaluate the usefulness of a biochemical liver profile for assessing the metabolic response to a semi-long-distance outdoor run in football players. Sixteen football players ran outdoors for 60 minutes to achieve aerobic metabolism. Plasma activities of aspartate aminotransferase (AST), alanine aminotransferase (ALT), and γ-glutamyltransferase (GGT), and plasma levels of total and direct bilirubin, were determined in samples obtained before the exercise test (pre-exercise) and immediately after the run (post-exercise). Mean AST plasma activity (U·L-1) before/after the exercise was 78.3/228.3 in women and 76.5/56.2 in men; mean ALT activity was 27.5/59.1 in women and 36.2/35.3 in men; and mean GGT activity was 39.3/76.6 in women and 44.7/71.2 in men. Plasma levels of total and direct bilirubin were similar before and after the run regardless of gender. The differences between pre- and post-exercise results were statistically significant in women (p = 0.0212 for AST, p = 0.0320 for ALT, and p = 0.0067 for GGT). Training monitoring in athletes should combine measurements of performance with biological or physiological parameters. We found that AST, ALT, and GGT activities could be a valuable tool for assessing the metabolic response in high-level female athletes. Therefore, monitoring these well-known diagnostic markers could protect trainees from harmful overtraining.


Subjects
Alanine Transaminase/blood, Aspartate Aminotransferases/blood, Liver/enzymology, Human Physical Conditioning/physiology, Running/physiology, Soccer/physiology, gamma-Glutamyltransferase/blood, Adolescent, Bilirubin/blood, Exercise Test, Female, Humans, Male, Young Adult
19.
Comput Biol Med ; 170: 107908, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38217973

ABSTRACT

The electrocardiogram (ECG) is a physiological signal and a standard test measuring the heart's electrical activity, depicting the movement of cardiac muscles. We conducted a review of ECG signal analysis with artificial intelligence (AI) methods over the last ten years (2012-2022). First, methods of ECG analysis by software systems were divided into classical signal processing (e.g., spectrograms or filters), machine learning (ML), and deep learning (DL), including recursive models, transformers, and hybrids. Second, the data sources and benchmark datasets were described; we grouped resources by ECG acquisition method into hospital-based portable machines and wearable devices. We also covered new trends such as advanced pre-processing, data augmentation, simulations, and agent-based modeling. The study found that ECG analysis has improved each year through ML, DL, hybrid models, and transformers; convolutional neural networks and hybrid models were the most targeted and proved efficient, and the transformer model extended accuracy from 90% to 98%. The PhysioNet library helps in acquiring ECG signals, including popular benchmark databases such as MIT-BIH, PTB, and challenge datasets. Similarly, wearable devices have been established as an appropriate option for monitoring patient health without time and place limitations, and are also helpful for AI model calibration, with accuracy so far of 82%-83% on a Samsung smartwatch. In signal pre-processing, spectrogram generation through Fourier and wavelet transformations has emerged as the leading approach, yielding average accuracies of 90%-95%. Likewise, data augmentation using geometrical techniques is well established, whereas extraction- and concatenation-based methods need attention. Since what-if analyses of healthcare or cardiac issues can be performed using complex simulations, the study also reviews agent-based modeling and simulation approaches for cardiovascular risk event assessment.


Subjects
Algorithms, Artificial Intelligence, Humans, Neural Networks (Computer), Software, Computer-Assisted Signal Processing, Electrocardiography/methods
20.
New Phytol ; 198(1): 127-138, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23356437

ABSTRACT

Deserts are considered 'below-ground dominated', yet little is known about the impact of rising CO2, in combination with natural weather cycles, on the long-term dynamics of root biomass. This study quantifies the temporal dynamics of fine-root production, loss, and standing crop in an intact desert ecosystem exposed to 10 yr of elevated CO2. We used monthly minirhizotron observations from 4 yr (2003-2007) for two dominant shrub species and along community transects at the Nevada Desert Free-Air CO2 Enrichment (FACE) Facility. Data were synthesized within a Bayesian framework that included effects of CO2 concentration, cover type, phenological period, antecedent soil water, and biological inertia (i.e. the influence of prior root production and loss). The elevated CO2 treatment interacted with antecedent soil moisture and had significantly greater effects on fine-root dynamics during certain phenological periods. With respect to biological inertia, plants under elevated CO2 tended to initiate fine-root growth sooner and sustain growth longer, with the net effect of increasing the magnitude of production and mortality cycles. Elevated CO2 thus interacts with past environmental (e.g. antecedent soil water) and biological (e.g. biological inertia) factors to affect fine-root dynamics, and such interactions are expected to be important for predicting future soil carbon pools.


Subjects
Carbon Dioxide/pharmacology, Desert Climate, Plant Roots/drug effects, Plant Roots/physiology, Agricultural Crops/physiology, Humidity, Biological Models, Nevada, Rain, Soil/chemistry, Time Factors, Water