Results 1 - 20 of 123,024
1.
Braz. j. biol ; 84: e245592, 2024. tab, graf
Article in English | LILACS, VETINDEX | ID: biblio-1355866

ABSTRACT

Abstract In recent years, the development of high-throughput technologies for obtaining sequence data has opened up the possibility of analysing protein data in silico. However, when it comes to studies of viral polyprotein interactions, there is a gap in the representation of those proteins, given their size and length. To prepare for studies using state-of-the-art techniques such as machine learning, a good representation of such proteins is a must. We present an alternative to this problem, implementing a fragmentation and modeling protocol to prepare those polyproteins in the form of peptide fragments. This procedure is carried out by several scripts, combined in a workflow we call PolyPRep, a tool written in Python and available on GitHub. This software is freely available for noncommercial users only.
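As a rough illustration of the kind of fragmentation such a protocol performs, the sketch below slides a fixed-size window along a polyprotein sequence to yield overlapping peptide fragments. The function name, window size and step are illustrative assumptions, not PolyPRep's actual parameters.

```python
def fragment_polyprotein(sequence, window=4, step=2):
    """Slide a fixed-size window along a sequence, yielding overlapping
    peptide fragments (window/step values here are purely illustrative)."""
    return [sequence[i:i + window]
            for i in range(0, max(len(sequence) - window + 1, 1), step)]

# A toy 10-residue "polyprotein" split into overlapping 4-mers.
print(fragment_polyprotein("MKTAYIAKQR"))
```

Each fragment overlaps its neighbour by `window - step` residues, so no junction of the original sequence is lost between fragments.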


Subjects
HIV Protease, Polyproteins, Software, Molecular Docking Simulation
2.
Braz. j. oral sci ; 21: e227903, Jan.-Dec. 2022. ilus
Article in English | LILACS, BBO - Odontologia | ID: biblio-1355005

ABSTRACT

Aim: To evaluate the accuracy and validity of orthodontic diagnostic measurements, as well as virtual tooth transformations, using a generic open-access 3D software program compared with OrthoAnalyzer (3Shape) software, which had previously been tested and proven for accuracy. Methods: 40 maxillary and mandibular single-arch study models were duplicated and scanned using a 3Shape laser scanner. The files were imported into the generic and OrthoAnalyzer software programs, where linear measurements were taken twice to investigate the accuracy of the generic program. To test the accuracy of the file format, the models were printed, rescanned and imported into OrthoAnalyzer. Finally, to investigate the accuracy of the editing capabilities, linear and angular transformation procedures were performed, superimposed, printed, rescanned and imported into OrthoAnalyzer for comparison. Results: There was no statistically significant difference between the two groups using the two software programs regarding the accuracy of the linear measurements (p>0.05). There was no statistically significant difference between the different formats among all the measurements (p>0.05). The editing capabilities also showed no statistically significant difference (p>0.05). Conclusion: The generic 3D software (Meshmixer) was valid and accurate for cast measurements and for linear and angular editing procedures. It can be used for orthodontic diagnosis and treatment planning without added costs.


Subjects
Software, Surgical Casts, Three-Dimensional Imaging, Dental Models
3.
J Acoust Soc Am ; 152(1): 266, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35931540

ABSTRACT

This paper addresses the development of a system for classifying mouse ultrasonic vocalizations (USVs) present in audio recordings. The automatic labeling process for USVs is usually divided into two main steps: USV segmentation followed by classification. Three main contributions can be highlighted: (i) a new segmentation algorithm, (ii) a new set of features, and (iii) the discrimination of a higher number of classes than in similar studies. The developed segmentation algorithm is based on spectral entropy analysis. This novel segmentation approach can detect USVs with 94% recall and 74% precision. When compared to other methods/software, our segmentation algorithm achieves a higher recall. Regarding the classification phase, besides the traditional features from the time, frequency, and time-frequency domains, a new set of contour-based features was extracted and used as input to shallow machine learning classification models. The contour-based features were obtained from the time-frequency ridge representation of USVs. The classification methods can differentiate among ten different syllable types with 81.1% accuracy and an 80.5% weighted F1-score. The algorithms were developed and evaluated on a large dataset, acquired under diverse social interaction conditions between the animals, to stimulate a varied vocal repertoire.
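The entropy-based segmentation idea can be sketched simply: frames dominated by a narrow-band call have low normalized spectral entropy, while noise-only frames score high. Below is a minimal sketch of the metric itself (using NumPy), not the authors' implementation; frame sizing, thresholding and the ultrasonic sampling rates are left out, and the signals are invented for illustration.

```python
import numpy as np

def spectral_entropy(frame):
    """Normalized spectral entropy of one signal frame:
    ~0 for a pure tone, ~1 for white noise."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum() / np.log2(len(spectrum)))

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)                    # narrow-band "call"
noise = np.random.default_rng(0).standard_normal(fs)  # broadband noise
print(spectral_entropy(tone), spectral_entropy(noise))
```

A segmenter along these lines would flag frames whose entropy falls below a threshold as candidate vocalizations.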


Subjects
Ultrasound, Animal Vocalization, Algorithms, Animals, Machine Learning, Mice, Software
4.
Am J Orthod Dentofacial Orthop ; 162(2): 257-263, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35933158

ABSTRACT

INTRODUCTION: Accurate landmark identification is a prerequisite for accurate and reliable biomedical image analysis. Orthodontic study models are valuable tools for diagnosis, treatment planning, and maintaining complete records. The purpose of this study was to evaluate the reliability and validity of a software program (Align Technology, Inc) as a tool for automatic landmark location. METHODS: Using digital intraoral scans of 10 dental arches, 4 calibrated human judges independently located cusp tips and interproximal contacts. The same landmarks were automatically identified by the software. Intraclass correlation coefficient (Cronbach α), absolute mean errors, and regression analysis were calculated. In addition, Bland-Altman 95% confidence limits were applied to the data to graphically display agreement on landmark identification between the human judges and the software. RESULTS: The intraclass correlation coefficient between the software and the human judges' average for the x-, y-, and z-coordinates for all landmarks was excellent, at 1.0, 1.0, and 0.98, respectively. The regression analysis and Bland-Altman plots show no systematic errors for agreement on landmark identification between the human judges and the software. CONCLUSIONS: Landmark location was nearly identical between the software and the human judges, making the methods interchangeable.
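As a reminder of how Bland-Altman 95% limits of agreement are computed (mean difference ± 1.96 × SD of the pairwise differences), here is a minimal sketch; the paired coordinates are invented for illustration and are not data from the study.

```python
from statistics import mean, stdev

def bland_altman_limits(a, b):
    """95% limits of agreement: mean difference +/- 1.96 * SD of the
    pairwise differences."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample SD (n - 1 denominator)
    return bias - 1.96 * sd, bias, bias + 1.96 * sd

# Hypothetical x-coordinates (mm): one human judge vs. the software.
judge = [10.1, 12.3, 9.8, 14.2, 11.0]
software = [10.0, 12.5, 9.9, 14.0, 11.1]
low, bias, high = bland_altman_limits(judge, software)
print(round(low, 3), round(bias, 3), round(high, 3))
```

In a Bland-Altman plot, the differences are plotted against the pairwise means, with horizontal lines at the bias and at these two limits; agreement is good when nearly all points fall inside them.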


Assuntos
Processamento de Imagem Assistida por Computador , Software , Cefalometria/métodos , Humanos , Imageamento Tridimensional , Reprodutibilidade dos Testes
5.
Development ; 149(20)2022 Oct 15.
Article in English | MEDLINE | ID: mdl-35929583

ABSTRACT

To obtain commensurate numerical data of neuronal network morphology in vitro, network analysis needs to follow consistent guidelines. Important factors in successful analysis are sample uniformity, suitability of the analysis method for extracting relevant data and the use of established metrics. However, for the analysis of 3D neuronal cultures, there is little coherence in the analysis methods and metrics used in different studies. Here, we present a framework for the analysis of neuronal networks in 3D. First, we selected a hydrogel that supported the growth of human pluripotent stem cell-derived cortical neurons. Second, we tested and compared two software programs for tracing multi-neuron images in three dimensions and optimized a workflow for neuronal analysis using software that was considered highly suitable for this purpose. Third, as a proof of concept, we exposed 3D neuronal networks to oxygen-glucose deprivation- and ionomycin-induced damage and showed morphological differences between the damaged networks and control samples utilizing the proposed analysis workflow. With the optimized workflow, we present a protocol for preparing, challenging, imaging and analysing 3D human neuronal cultures.


Assuntos
Neurônios , Células-Tronco Pluripotentes , Humanos , Software
6.
BMC Bioinformatics ; 23(1): 315, 2022 Aug 04.
Article in English | MEDLINE | ID: mdl-35927614

ABSTRACT

BACKGROUND: Genetic and epigenetic biological studies often combine different types of experiments and multiple conditions. While the corresponding raw and processed data are made available through specialized public databases, the processed files are usually limited to a specific research question and are therefore unsuitable for an unbiased, systematic overview of a complex dataset. Moreover, the number of possible combinations of different sample types and conditions grows exponentially with the number of sample types and conditions, so the risk of missing a correlation, or of overrating an identified one, should be mitigated in a complex dataset. Since reanalysis of a full study is rarely a viable option, new methods are needed to address these issues systematically, reliably, reproducibly and efficiently. RESULTS: Cogito ("COmpare annotated Genomic Intervals TOol") provides a workflow for an unbiased, structured overview and systematic analysis of complex genomic datasets consisting of different data types (e.g. RNA-seq, ChIP-seq) and conditions. Cogito is able to visualize valuable key information in genomic or epigenomic interval-based data, thereby providing a straightforward approach for comparing different conditions. It supports getting an unbiased impression of a dataset and developing an appropriate analysis strategy for it. In addition to a text-based report, Cogito offers a fully customizable report as a starting point for further in-depth investigation. CONCLUSIONS: Cogito implements a novel approach to facilitate high-level overview analyses of complex datasets and offers additional insights into the data without the need for a full, time-consuming reanalysis. The R/Bioconductor package is freely available at https://bioconductor.org/packages/release/bioc/html/Cogito.html ; comprehensive documentation with detailed descriptions and reproducible examples is included.


Assuntos
Genômica , Software , Sequenciamento de Cromatina por Imunoprecipitação , Epigenômica , Genoma
7.
BMC Bioinformatics ; 23(1): 316, 2022 Aug 04.
Article in English | MEDLINE | ID: mdl-35927623

ABSTRACT

BACKGROUND: ImputAccur is a software tool for measuring genotype-imputation accuracy. Imputation of untyped markers is a standard approach in genome-wide association studies to close the gap between directly genotyped and other known DNA variants. However, high accuracy for imputed genotypes is fundamental. Several accuracy measures have been proposed, but unfortunately they are implemented on different platforms, which is impractical. RESULTS: With ImputAccur, the accuracy measures info, Iam-hiQ and r2-based indices can be derived from the standard output files of imputation software. Sample/probe and marker filtering is possible. This allows, for example, accurate marker filtering ahead of data analysis. CONCLUSIONS: The source code (Python version 3.9.4), a standalone executable file, and example data for ImputAccur are freely available at https://gitlab.gwdg.de/kolja.thormann1/imputationquality.git .
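An r2-based imputation-accuracy index is commonly the squared Pearson correlation between true genotype counts and imputed allele dosages; the sketch below illustrates that general idea under this assumption (it is not ImputAccur's actual code, and the genotype/dosage values are invented).

```python
def r2_accuracy(true_genotypes, imputed_dosages):
    """Squared Pearson correlation between true genotype counts (0/1/2)
    and imputed allele dosages for one marker."""
    n = len(true_genotypes)
    mt = sum(true_genotypes) / n
    md = sum(imputed_dosages) / n
    cov = sum((t - mt) * (d - md)
              for t, d in zip(true_genotypes, imputed_dosages))
    var_t = sum((t - mt) ** 2 for t in true_genotypes)
    var_d = sum((d - md) ** 2 for d in imputed_dosages)
    return cov ** 2 / (var_t * var_d)

# Hypothetical marker: true genotypes vs. imputed dosages for six samples.
truth = [0, 1, 2, 1, 0, 2]
dosage = [0.1, 0.9, 1.8, 1.2, 0.2, 1.9]
print(round(r2_accuracy(truth, dosage), 3))
```

Markers whose index falls below a chosen cutoff (often around 0.3-0.8 in practice) would be filtered out ahead of association analysis.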


Assuntos
Estudo de Associação Genômica Ampla , Polimorfismo de Nucleotídeo Único , Genótipo , Software
10.
Hist Philos Life Sci ; 44(3): 33, 2022 Aug 02.
Article in English | MEDLINE | ID: mdl-35918565

ABSTRACT

The purpose of this study is to examine how trends in the use of images in modern life-science journals have changed since the spread of computer-based visual and imaging technology. To this end, a new classification system was constructed to analyze how the graphics of a scientific journal have changed over the years. The focus was on one international peer-reviewed journal in the life sciences, Cell, founded in 1974; 1725 figures and 160 tables from research articles in Cell were sampled. The unit of classification was defined as a graphic, and the figures and tables were divided into 5952 graphics. These graphics were further classified into hierarchical categories, and the data in each category were aggregated every five years. The following categories were observed: (1) data graphics, (2) explanation graphics, and (3) hybrid graphics. Data graphics increased more than sixfold between 1974 and 2014, and some types of data graphics, including mechanical-reproduction images and bar charts, displayed notable changes. The representation of explanatory graphics changed from hand-painted illustrations to diagrams of Bézier curves. It is suggested that, in addition to the development of experimental technologies such as fluorescence microscopy and big-data analysis, continuously evolving application software for image creation and researchers' motivation to convince reviewers and editors have influenced these changes.


Assuntos
Disciplinas das Ciências Biológicas , Software
11.
Sci Rep ; 12(1): 13363, 2022 Aug 03.
Article in English | MEDLINE | ID: mdl-35922653

ABSTRACT

Congestion control plays an essential role on the internet in managing overload, which affects data transmission performance. The random early detection (RED) algorithm belongs to active queue management (AQM), which is used to manage internet traffic. RED is used to eliminate a weakness of the Transmission Control Protocol's (TCP) default drop-tail control mechanism. The drawback of RED is parameter tuning, whereas adaptive RED (ARED) adjusts these parameters automatically. In this study, the suggested algorithm, Markov decision process RED (MDPRED), uses a Markov decision process (MDP) to adapt suitable values for the queue weight in the RED algorithm based on the average queue length, to enhance the performance of traditional RED during the TCP slow-start phase. The study is conducted on the fluctuations among the service rate, queue weight, and mean queue length, using the open-source network simulator NS3. The study shows efficient results in end-to-end packet throughput and a fast response to the onset of congestion in the network. The modified algorithm achieves a low level of dropped packets, as shown by evaluating the results against five other algorithms; this is done by increasing the algorithm's responsiveness when the average queue size approaches the maximum queue length threshold.
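Classic RED tracks an exponentially weighted moving average of the queue length (the queue weight w_q is the parameter that ARED, and here MDPRED, adapts) and drops packets with a probability that ramps between two thresholds. A minimal sketch of that mechanism follows; all parameter values are illustrative, not those of the paper.

```python
def update_avg(avg, queue_len, w_q=0.002):
    """EWMA of the instantaneous queue length; w_q is the queue weight."""
    return (1 - w_q) * avg + w_q * queue_len

def red_drop_probability(avg, min_th=5, max_th=15, max_p=0.1):
    """Classic RED: no drops below min_th, certain drop at/above max_th,
    and a linear ramp up to max_p in between."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

avg = 0.0
for q in [0, 2, 8, 12, 20, 18, 16]:    # sampled instantaneous queue lengths
    avg = update_avg(avg, q, w_q=0.5)  # large w_q just to make the demo visible
print(round(avg, 2), red_drop_probability(avg))
```

A small w_q smooths out bursts but reacts slowly to congestion onset; a larger w_q reacts faster but drops more aggressively, which is why tuning it adaptively is attractive.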


Assuntos
Algoritmos , Software , Cadeias de Markov
12.
BMC Biol ; 20(1): 174, 2022 08 05.
Article in English | MEDLINE | ID: mdl-35932043

ABSTRACT

BACKGROUND: High-throughput live-cell imaging is a powerful tool to study dynamic cellular processes in single cells but creates a bottleneck at the stage of data analysis, due to the large amount of data generated and limitations of analytical pipelines. Recent progress on deep learning dramatically improved cell segmentation and tracking. Nevertheless, manual data validation and correction is typically still required and tools spanning the complete range of image analysis are still needed. RESULTS: We present Cell-ACDC, an open-source user-friendly GUI-based framework written in Python, for segmentation, tracking and cell cycle annotations. We included state-of-the-art deep learning models for single-cell segmentation of mammalian and yeast cells alongside cell tracking methods and an intuitive, semi-automated workflow for cell cycle annotation of single cells. Using Cell-ACDC, we found that mTOR activity in hematopoietic stem cells is largely independent of cell volume. By contrast, smaller cells exhibit higher p38 activity, consistent with a role of p38 in regulation of cell size. Additionally, we show that, in S. cerevisiae, histone Htb1 concentrations decrease with replicative age. CONCLUSIONS: Cell-ACDC provides a framework for the application of state-of-the-art deep learning models to the analysis of live cell imaging data without programming knowledge. Furthermore, it allows for visualization and correction of segmentation and tracking errors as well as annotation of cell cycle stages. We embedded several smart algorithms that make the correction and annotation process fast and intuitive. Finally, the open-source and modularized nature of Cell-ACDC will enable simple and fast integration of new deep learning-based and traditional methods for cell segmentation, tracking, and downstream image analysis. Source code: https://github.com/SchmollerLab/Cell_ACDC.


Assuntos
Processamento de Imagem Assistida por Computador , Saccharomyces cerevisiae , Ciclo Celular , Rastreamento de Células/métodos , Processamento de Imagem Assistida por Computador/métodos , Software
13.
Nat Commun ; 13(1): 4616, 2022 Aug 08.
Article in English | MEDLINE | ID: mdl-35941103

ABSTRACT

As the scale of single-cell genomics experiments grows into the millions, the computational requirements to process this data are beyond the reach of many. Herein we present Scarf, a modularly designed Python package that seamlessly interoperates with other single-cell toolkits and allows for memory-efficient single-cell analysis of millions of cells on a laptop or low-cost devices like single-board computers. We demonstrate Scarf's memory and compute-time efficiency by applying it to the largest existing single-cell RNA-seq and ATAC-seq datasets. Scarf wraps memory-efficient implementations of a graph-based t-distributed stochastic neighbour embedding and a hierarchical clustering algorithm. Moreover, Scarf performs accurate reference-anchored mapping of datasets while maintaining memory efficiency. By implementing a subsampling algorithm, Scarf additionally has the capacity to generate representative samplings of cells from a given dataset in which rare cell populations and lineage differentiation trajectories are conserved. Together, Scarf provides a framework in which any researcher can perform advanced processing, subsampling, reanalysis, and integration of atlas-scale datasets on standard laptop computers. Scarf is available on GitHub: https://github.com/parashardhapola/scarf .


Assuntos
Genômica , Análise de Célula Única , Algoritmos , Análise por Conglomerados , Software , Sequenciamento Completo do Exoma
14.
Sci Eng Ethics ; 28(4): 35, 2022 Aug 09.
Article in English | MEDLINE | ID: mdl-35943614

ABSTRACT

The field of scientific image integrity presents a challenging research bottleneck given the lack of available datasets for designing and evaluating forensic techniques. The sensitivity of the data also creates a legal hurdle that restricts the use of real-world cases to build any accessible forensic benchmark. In light of this, there is no comprehensive understanding of the limitations and capabilities of automatic image analysis tools for scientific images, which might create a false sense of data integrity. To mitigate this issue, we present an extendable open-source algorithm library that reproduces the most common image forgery operations reported by the research integrity community: duplication, retouching, and cleaning. Using this library and realistic scientific images, we create a large scientific forgery image benchmark (39,423 images) with enriched ground truth. All figures within the benchmark are synthetically doctored using images collected from Creative Commons sources. While collecting the source images, we ensured that they did not present any suspicious integrity problems. Because of the high number of papers retracted due to image duplication, this work evaluates the state-of-the-art copy-move detection methods on the proposed dataset, using a new metric that asserts consistent match detection between the source and the copied region. All evaluated methods had a low performance on this dataset, indicating that scientific images might need a specialized copy-move detector. The dataset and source code are available at https://github.com/phillipecardenuto/rsiil .
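As a toy illustration of the copy-move idea being benchmarked, the sketch below finds exactly repeated pixel blocks by hashing them; the detectors evaluated in the paper use robust features to survive retouching, whereas this naive version only catches verbatim copies. The image is a made-up 4x4 grid.

```python
def find_duplicate_blocks(image, block=2):
    """Flag identical `block`x`block` regions appearing at two different
    positions (exact copy-move only)."""
    seen, matches = {}, []
    h, w = len(image), len(image[0])
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = tuple(tuple(image[y + dy][x + dx] for dx in range(block))
                        for dy in range(block))
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches

# Toy grayscale image: the 2x2 patch at (0, 0) is copied to (2, 2).
img = [
    [1, 2, 5, 6],
    [3, 4, 7, 8],
    [9, 10, 1, 2],
    [11, 12, 3, 4],
]
print(find_duplicate_blocks(img))
```

Real copy-move detectors replace the exact-hash key with features that are stable under compression, scaling and retouching, which is exactly where the naive approach fails.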


Assuntos
Algoritmos , Benchmarking , Processamento de Imagem Assistida por Computador , Software
15.
JCO Clin Cancer Inform ; 6: e2200040, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35944232

ABSTRACT

PURPOSE: Advances in biological measurement technologies are enabling large-scale studies of patient cohorts across multiple omics platforms. Holistic analysis of these data can generate actionable insights for translational research and necessitate new approaches for data integration and mining. METHODS: We present a novel approach for integrating data across platforms on the basis of the shared nearest neighbors algorithm and use it to create a network of multiplatform data from the immunogenomic profiling of non-small-cell lung cancer project. RESULTS: Benchmarking demonstrates that the shared nearest neighbors-based network approach outperforms a traditional gene-gene network in capturing established interactions while providing new ones on the basis of the interplay between measurements from different platforms. When used to examine patient characteristics of interest, our approach provided signatures associated with and new leads related to recurrence and TP53 oncogenotype. CONCLUSION: The network developed offers an unprecedented, holistic view into immunogenomic profiling of non-small-cell lung cancer, which can be explored through the accompanying interactive browser that we built.
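The shared-nearest-neighbors idea can be sketched simply: two samples are linked with a weight equal to the overlap of their k-nearest-neighbor lists, so edges reflect shared neighborhood structure rather than raw distances. This is a minimal sketch with hypothetical sample names, not the authors' implementation.

```python
def snn_similarity(neighbors_a, neighbors_b):
    """Shared-nearest-neighbor similarity: size of the overlap of two
    k-NN lists."""
    return len(set(neighbors_a) & set(neighbors_b))

def snn_graph(knn):
    """Build weighted edges between items whose k-NN lists overlap."""
    items = list(knn)
    edges = {}
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            shared = snn_similarity(knn[a], knn[b])
            if shared:
                edges[(a, b)] = shared
    return edges

# Hypothetical 3-NN lists for four samples (e.g. pooled across platforms).
knn = {
    "s1": ["s2", "s3", "s4"],
    "s2": ["s1", "s3", "s5"],
    "s3": ["s1", "s2", "s5"],
    "s4": ["s6", "s7", "s8"],
}
print(snn_graph(knn))
```

Because the k-NN lists can be computed per platform and then pooled, this style of graph gives one natural way to integrate measurements from heterogeneous omics platforms.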


Assuntos
Carcinoma Pulmonar de Células não Pequenas , Neoplasias Pulmonares , Carcinoma Pulmonar de Células não Pequenas/genética , Análise por Conglomerados , Perfilação da Expressão Gênica , Humanos , Neoplasias Pulmonares/genética , Software
17.
Prog Brain Res ; 273(1): 199-229, 2022.
Article in English | MEDLINE | ID: mdl-35940717

ABSTRACT

For more than two centuries scientists and engineers have worked to understand and model how the eye encodes electromagnetic radiation (light). We now understand the principles of how light is transmitted through the optics of the eye and encoded by retinal photoreceptors and light-sensitive neurons. In recent years, new instrumentation has enabled scientists to measure the specific parameters of the optics and photoreceptor encoding. We implemented the principles and parameter estimates that characterize the human eye in an open-source software toolbox. This chapter describes the principles behind these tools and illustrates how to use them to compute the initial visual encoding.


Assuntos
Retina , Células Fotorreceptoras Retinianas Cones , Humanos , Óptica e Fotônica , Células Fotorreceptoras de Vertebrados , Retina/fisiologia , Software
18.
PLoS One ; 17(8): e0272263, 2022.
Article in English | MEDLINE | ID: mdl-35913903

ABSTRACT

We deal with a finite-buffer queue in which arriving jobs are subject to loss due to buffer overflows. The burst ratio parameter, which reflects the tendency of losses to form long series, is studied in detail. Perhaps the most versatile model of the arrival stream is used, i.e. the batch Markovian arrival process (BMAP). Among other things, it enables modeling of the interarrival time density function, the interarrival time autocorrelation function and batch arrivals. The main contribution is an exact formula for the burst ratio in a queue with BMAP arrivals and an arbitrary service time distribution. The formula is presented in an explicit, ready-to-use form. Additionally, the impact of various system parameters on the burst ratio is demonstrated in numerical examples. The primary application area of the results is computer networking, where the complex nature of traffic has a deep impact on the burst ratio. However, owing to the versatile arrival model, the results can be applied in other fields as well.
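A common definition of the burst ratio is the mean observed length of loss series divided by its expected value, 1/(1-p), for independent losses with the same loss probability p; values above 1 indicate that losses cluster. Assuming that definition, it can be estimated from a simple loss trace as sketched below (this is an empirical estimator on made-up data, not the paper's exact formula for BMAP queues).

```python
def burst_ratio(losses):
    """Burst ratio: mean observed loss-run length divided by its value
    1/(1 - p) under independent (Bernoulli) losses with probability p."""
    n = len(losses)
    p = sum(losses) / n
    # Lengths of maximal runs of consecutive losses.
    runs, current = [], 0
    for lost in losses:
        if lost:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    mean_run = sum(runs) / len(runs)
    return mean_run * (1 - p)

# Losses arriving in one long series -> burst ratio above 1.
clustered = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
print(round(burst_ratio(clustered), 2))
```

An alternating trace like `[1, 0, 1, 0, ...]` has the same loss probability but a burst ratio below 1, illustrating that the parameter captures loss clustering rather than the loss rate itself.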


Assuntos
Computadores , Software , Tempo
19.
Metabolomics ; 18(8): 64, 2022 Aug 02.
Article in English | MEDLINE | ID: mdl-35917032

ABSTRACT

INTRODUCTION: Flow infusion electrospray high-resolution mass spectrometry (FIE-HRMS) fingerprinting produces complex, high-dimensional data sets that require specialist in-silico software tools to process the data prior to analysis. OBJECTIVES: To present spectral binning as a pragmatic approach to the post-acquisition processing of FIE-HRMS metabolome fingerprinting data. METHODS: A spectral binning approach was developed that included the elimination of single-scan m/z events, the binning of spectra and the averaging of spectra across the infusion profile. The modal accurate m/z was then extracted for each bin. This approach was assessed using four different biological matrices and a mix of 31 known chemical standards analysed by FIE-HRMS using an Exactive Orbitrap. Bin purity and centrality metrics were developed to objectively assess the distribution and position, respectively, of accurate m/z within an individual bin. RESULTS: The optimal spectral binning width was found to be 0.01 amu. 80.8% of the extracted accurate m/z matched to predicted ionisation products of the chemical standards mix had an error below 3 ppm. The open-source R package binneR was developed as a user-friendly implementation of the approach. It was able to process 100 data files using 4 Central Processing Unit (CPU) workers in only 55 seconds, with a maximum memory usage of 1.36 GB. CONCLUSION: Spectral binning is a fast and robust method for the post-acquisition processing of FIE-HRMS data. The open-source R package binneR allows users to efficiently process data from FIE-HRMS experiments with the resources available on a standard desktop computer.
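The binning step described above can be sketched simply: accurate m/z values are grouped into 0.01 amu bins and the modal accurate m/z of each bin is reported. For simplicity, this sketch centres bins on two-decimal m/z values; that edge handling, the function name and the m/z values are assumptions for illustration, not binneR's actual behaviour.

```python
from collections import Counter, defaultdict

def bin_spectra(mz_values, width_decimals=2):
    """Group accurate m/z values into bins centred on rounded m/z
    (0.01 amu wide for width_decimals=2) and report the modal
    accurate m/z of each bin."""
    bins = defaultdict(list)
    for mz in mz_values:
        bins[round(mz, width_decimals)].append(mz)
    # Modal accurate m/z per bin (most frequent value).
    return {b: Counter(v).most_common(1)[0][0] for b, v in sorted(bins.items())}

# Hypothetical accurate m/z readings pooled across scans.
mz = [118.0862, 118.0861, 118.0862, 118.0959, 132.1018]
print(bin_spectra(mz))
```

Reporting the modal accurate m/z, rather than the bin centre, preserves the high mass accuracy of the instrument while still collapsing scan-to-scan variation into a common feature.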


Assuntos
Metaboloma , Metabolômica , Humanos , Espectrometria de Massas/métodos , Metabolômica/métodos , Software
20.
PLoS One ; 17(8): e0272168, 2022.
Article in English | MEDLINE | ID: mdl-35917306

ABSTRACT

Algorithmic agents, popularly known as bots, have been accused of spreading misinformation online and supporting fringe views. Collectives are vulnerable in hidden-profile environments, where task-relevant information is unevenly distributed across individuals. To do well in this task, information aggregation must weigh minority and majority views equally, rather than falling back on simple but inefficient majority-based decisions. In an experimental design, human volunteers working in teams of 10 were asked to solve a hidden-profile prediction task. We trained a variational auto-encoder (VAE) to learn people's hidden information distribution by observing how people's judgments correlated over time. A bot was designed to sample responses from the VAE latent embedding to selectively support opinions in proportion to their under-representation in the team. We show that the presence of a single bot (representing 10% of team members) can significantly increase the polarization between minority and majority opinions by making minority opinions less prone to social influence. Although the effects on hybrid team performance were small, the bot's presence significantly influenced opinion dynamics and individual accuracy. These findings show that self-supervised machine learning techniques can be used to design algorithms that can sway opinion dynamics and group outcomes.


Subjects
Algorithms, Machine Learning, Humans, Judgment, Software