Results 1 - 7 of 7
1.
J Proteome Res; 22(7): 2199-2217, 2023 Jul 7.
Article in English | MEDLINE | ID: mdl-37235544

ABSTRACT

Generating top-down tandem mass spectra (MS/MS) from complex mixtures of proteoforms benefits from improvements in fractionation, separation, fragmentation, and mass analysis. The algorithms to match MS/MS to sequences have undergone a parallel evolution, with both spectral alignment and match-counting approaches producing high-quality proteoform-spectrum matches (PrSMs). This study assesses state-of-the-art algorithms for top-down identification (ProSight PD, TopPIC, MSPathFinderT, and pTop) in their yield of PrSMs while controlling false discovery rate. We evaluated deconvolution engines (ThermoFisher Xtract, Bruker AutoMSn, Matrix Science Mascot Distiller, TopFD, and FLASHDeconv) in both ThermoFisher Orbitrap-class and Bruker maXis Q-TOF data (PXD033208) to produce consistent precursor charges and mass determinations. Finally, we sought post-translational modifications (PTMs) in proteoforms from bovine milk (PXD031744) and human ovarian tissue. Contemporary identification workflows produce excellent PrSM yields, although approximately half of all identified proteoforms from these four pipelines were specific to only one workflow. Deconvolution algorithms disagree on precursor masses and charges, contributing to identification variability. Detection of PTMs is inconsistent among algorithms. In bovine milk, 18% of PrSMs produced by pTop and TopMG were singly phosphorylated, but this percentage fell to 1% for one algorithm. Applying multiple search engines produces more comprehensive assessments of experiments. Top-down algorithms would benefit from greater interoperability.
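The disagreement among deconvolution engines described above comes down to precursor charge and neutral-mass assignment. As a minimal sketch (not taken from any of the cited tools), the standard conversion from an observed m/z and a candidate charge to a neutral monoisotopic mass makes the source of such disagreements concrete:

```python
PROTON_MASS = 1.007276  # mass of a proton, in Da

def neutral_mass(mz: float, charge: int) -> float:
    """Convert an observed m/z and charge state to a neutral monoisotopic mass."""
    return charge * mz - charge * PROTON_MASS

def charge_agreement(mz: float, charges: list) -> dict:
    """Neutral masses implied by each candidate charge assignment.

    Two deconvolution engines that pick different charges for the same
    precursor peak will report masses differing by multiples of ~1 Da
    (plus the proton-mass correction), which is one way identification
    variability arises downstream.
    """
    return {z: neutral_mass(mz, z) for z in charges}
```

For example, an ion observed at m/z 1000.0 implies a neutral mass near 9989.9 Da if assigned charge 10, but near 10988.9 Da if assigned charge 11.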


Subjects
Proteome , Tandem Mass Spectrometry , Humans , Proteome/genetics , Proteomics , Software , Protein Processing, Post-Translational
2.
Sensors (Basel); 22(14), 2022 Jul 19.
Article in English | MEDLINE | ID: mdl-35891050

ABSTRACT

The electrochemical detection of heavy metal ions is reported using an inexpensive, portable, in-house-built potentiostat and epitaxial graphene. Monolayer, hydrogen-intercalated quasi-freestanding bilayer, and multilayer epitaxial graphene were each tested as working electrodes, before and after modification with an oxygen plasma etch that introduces oxygen-containing chemical groups at the surface. The graphene samples were characterized using X-ray photoelectron spectroscopy, atomic force microscopy, Raman spectroscopy, and van der Pauw Hall measurements. Dose-response curves were measured in seawater spiked with trace levels of four heavy metal salts (CdCl2, CuSO4, HgCl2, and PbCl2), and detection algorithms based on machine learning and library development were evaluated for each form of graphene and its oxygen plasma modification. Oxygen plasma-modified, hydrogen-intercalated quasi-freestanding bilayer epitaxial graphene performed best at correctly identifying heavy metals in seawater.
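The library-based identification described above can be illustrated with a deliberately simple nearest-neighbour sketch: each metal is matched by the proximity of its stripping-peak potential to a reference entry. The peak-potential values below are placeholders, not the paper's measured library, and the paper's actual machine learning models are more elaborate:

```python
# Hypothetical reference library mapping each metal ion to a
# stripping-peak potential in volts. Placeholder values only --
# not measured calibrations from the cited work.
LIBRARY = {"Cd2+": -0.75, "Pb2+": -0.55, "Cu2+": -0.10, "Hg2+": 0.25}

def identify(peak_potential: float) -> str:
    """Assign the metal whose library peak potential is closest (1-NN)."""
    return min(LIBRARY, key=lambda ion: abs(LIBRARY[ion] - peak_potential))
```

A measured peak at, say, -0.52 V would be assigned to Pb2+ under this toy library; real voltammograms need baseline correction and multi-peak handling before any such matching step.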


Subjects
Graphite , Metals, Heavy , Graphite/chemistry , Hydrogen , Oxygen , Salts , Seawater
3.
Sensors (Basel); 20(14), 2020 Jul 18.
Article in English | MEDLINE | ID: mdl-32708477

ABSTRACT

The electrochemical response of multilayer epitaxial graphene electrodes on silicon carbide substrates was studied for use as an electrochemical sensor for seawater samples spiked with environmental contaminants using cyclic square wave voltammetry. Results indicate that these graphene working electrodes are more robust and have lower background current than either screen-printed carbon or edge-plane graphite in seawater. Identification algorithms developed using machine learning techniques are described for several heavy metals, herbicides, pesticides, and industrial compounds. Dose-response curves provide a basis for quantitative analysis.
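The quantitative analysis based on dose-response curves can be sketched, under the simplifying assumption of a linear response, as an ordinary least-squares calibration that is then inverted to estimate concentration from a measured current. Real dose-response curves are often nonlinear, so this is an illustration of the principle rather than the paper's method:

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def concentration(current, slope, intercept):
    """Invert the calibration line to estimate analyte concentration."""
    return (current - intercept) / slope
```

Given calibration standards (known concentrations vs. measured peak currents), `fit_line` yields the calibration parameters and `concentration` converts a new current reading into a concentration estimate.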

4.
J Clin Epidemiol; 166: 111232, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38043830

ABSTRACT

BACKGROUND AND OBJECTIVES: In observational studies of routinely collected health data (RCD) exploring treatment effects, algorithms are used to identify study variables. However, the extent to which these algorithms are reliable, and how they affect the credibility of effect estimates, is far from clear. This study aimed to investigate the validation of algorithms for identifying study variables from RCD and to examine the impact of alternative algorithms on treatment effect estimates. METHODS: We searched PubMed for observational studies published in 2018 that used RCD to explore drug treatment effects. Information regarding the reporting, validation, and interpretation of algorithms was extracted. We summarized the reporting and methodological characteristics of the algorithms and their validation. We also assessed the divergence in effect estimates under alternative algorithms by calculating the ratio of the estimates from the primary vs. alternative analyses. RESULTS: A total of 222 studies were included, of which 93 (41.9%) provided a complete list of algorithms for identifying participants, 36 (16.2%) for exposure, 132 (59.5%) for outcomes, and 15 (6.8%) for all study variables (population, exposure, and outcomes). Fifty-nine (26.6%) studies stated that the algorithms were validated, and 54 (24.3%) studies reported methodological characteristics of 66 validations, of which 61 validations in 49 studies came from cross-referenced validation studies. Of those 66 validations, 22 (33.3%) reported sensitivity and 16 (24.2%) reported specificity. A total of 63.6% of studies reporting sensitivity and 56.3% of those reporting specificity used test-result-based sampling, an approach that potentially biases effect estimates. Twenty-eight (12.6%) studies used alternative algorithms to identify study variables, and 24 reported the effects estimated by both primary and sensitivity analyses. Of these, 20% had differential effect estimates when using alternative algorithms for identifying the population, 18.2% for identifying exposure, and 45.5% for classifying outcomes. Only 32 (14.4%) studies discussed how the algorithms may affect treatment effect estimates. CONCLUSION: In observational studies of RCD, the algorithms used for variable identification were not regularly validated, and, even when they were, the methodological approach and performance of the validation were often poor. More seriously, different algorithms may yield different treatment effect estimates, yet their impact is often ignored by researchers. Strong efforts, including methodological recommendations, are warranted to improve practice.
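The ratio-of-estimates comparison used to assess divergence between primary and alternative analyses can be sketched as follows. For relative effect measures (risk ratios, hazard ratios), the comparison is naturally done on the log scale; the 10% tolerance below is an illustrative choice, not the study's actual criterion for a "differential" estimate:

```python
import math

def ratio_of_estimates(primary: float, alternative: float) -> float:
    """Ratio of two relative effect estimates (e.g. hazard ratios)."""
    return primary / alternative

def is_divergent(primary: float, alternative: float, tol: float = 0.10) -> bool:
    """Flag divergence when the absolute log-ratio exceeds a tolerance.

    tol = 0.10 means estimates differing by more than 10% on the ratio
    scale are flagged; this threshold is an assumption for illustration.
    """
    return abs(math.log(primary / alternative)) > math.log(1.0 + tol)
```

For instance, a primary hazard ratio of 1.50 against an alternative of 1.20 gives a ratio of 1.25 and would be flagged at the 10% tolerance, whereas 1.50 vs. 1.45 would not.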


Subjects
Algorithms , Routinely Collected Health Data , Humans , PubMed , Observational Studies as Topic
5.
Data Brief; 54: 110544, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38868386

ABSTRACT

This paper presents the data (images, observations, metadata) of three different deployments of camera traps in the Amsterdam Water Supply Dunes, a Natura 2000 nature reserve in the coastal dunes of the Netherlands. The pilots were aimed at determining how different types of camera deployment (e.g. regular vs. wide lens, various heights, inside/outside exclosures) might influence species detections, and how to deploy autonomous wildlife monitoring networks. Two pilots were conducted in herbivore exclosures and mainly detected European rabbit (Oryctolagus cuniculus) and red fox (Vulpes vulpes). The third pilot was conducted outside exclosures, with the European fallow deer (Dama dama) being most prevalent. Across all three pilots, a total of 47,597 images were annotated using the Agouti platform, and all annotations were verified and quality-checked by a human expert. In total, 2,779 observations of 20 different species (including humans) were recorded using 11 wildlife cameras during 2021-2023. The raw image files (excluding humans), image metadata, deployment metadata, and observations from each pilot are shared using the Camtrap DP open standard and the extended data publishing capabilities of GBIF to increase the findability, accessibility, interoperability, and reusability of these data. The data are freely available and can be used for developing artificial intelligence (AI) algorithms that automatically detect and identify species in wildlife camera images.

6.
Comput Methods Programs Biomed; 195: 105538, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32526535

ABSTRACT

BACKGROUND: Dyslexia is a disorder characterized by reading difficulties such as poor speech and sound recognition. People with dyslexia have a reduced capability to relate letters and form words and exhibit poor reading comprehension. Eye-tracking methodologies play a major role in analyzing human cognitive processing. Dyslexia is not a visual impairment disorder but rather a difficulty in phonological processing and word decoding, and these difficulties are reflected in eye movement patterns during reading. OBJECTIVE: These disrupted eye movements make eye-tracking methodologies suitable for identifying dyslexics. METHODS: In this paper, a small set of eye movement features is proposed that best distinguishes between dyslexics and non-dyslexics in machine learning models. Features related to eye movement events such as fixations and saccades are detected using statistical measures and the dispersion threshold identification (I-DT) and velocity threshold identification (I-VT) algorithms. These features were further analyzed using various machine learning algorithms, namely a Particle Swarm Optimization-based SVM with a hybrid kernel (Hybrid SVM-PSO), Support Vector Machine (SVM), Random Forest classifier (RF), Logistic Regression (LR), and K-Nearest Neighbor (KNN), for classification of dyslexics and non-dyslexics. RESULTS: The accuracy achieved using the Hybrid SVM-PSO model is 95.6%. The feature set that gave the highest accuracy comprised the average number of fixations, average fixation gaze duration, average saccadic movement duration, total number of saccadic movements, and average number of fixations. CONCLUSION: Eye movement features detected using velocity-based algorithms performed better than those detected by dispersion-based algorithms and statistical measures.
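The I-VT algorithm named above is simple enough to sketch in full: each consecutive pair of gaze samples is labeled a fixation if its point-to-point velocity falls below a threshold, otherwise a saccade. The coordinate units and the threshold value below are illustrative, not the paper's settings, and a production implementation would also merge adjacent fixation samples into fixation events:

```python
def ivt_classify(xs, ys, ts, threshold):
    """Velocity-threshold identification (I-VT).

    xs, ys: gaze coordinates; ts: timestamps (strictly increasing).
    Returns one label per consecutive sample pair: 'fixation' when the
    point-to-point velocity is below `threshold`, else 'saccade'.
    """
    labels = []
    for i in range(1, len(ts)):
        dist = ((xs[i] - xs[i - 1]) ** 2 + (ys[i] - ys[i - 1]) ** 2) ** 0.5
        velocity = dist / (ts[i] - ts[i - 1])
        labels.append("fixation" if velocity < threshold else "saccade")
    return labels
```

I-DT works the other way around, growing a window of samples and labeling it a fixation while the dispersion of the window stays under a spatial threshold; the paper's result is that the velocity-based variant yielded the more discriminative features.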


Subjects
Dyslexia , Saccades , Algorithms , Dyslexia/diagnosis , Eye Movements , Humans , Reading
7.
Healthc Technol Lett; 3(3): 171-176, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27733923

ABSTRACT

Continuous patient monitoring systems acquire enormous amounts of data that are either manually analysed by doctors or automatically processed using intelligent algorithms. Sections of data acquired over long periods of time can be corrupted by artefacts due to patient movement, sensor placement, and interference from other sources. Owing to the large volume of data, these artefacts need to be identified automatically so that analysis systems and doctors are aware of them when making medical diagnoses. Three important factors are explored that must be considered and quantified in the design and evaluation of automatic artefact identification algorithms: signal quality, interpretation quality, and computational complexity. The first two are useful for determining the effectiveness of an algorithm, whereas the third is particularly vital in mHealth systems, where computational resources are heavily constrained. A series of artefact identification and filtering algorithms is then presented, focusing on electrocardiography data. These algorithms are quantified using the three metrics to demonstrate how different algorithms can be evaluated and compared to select the best ones for a given wireless sensor network.
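To make the trade-off between signal quality and computational complexity concrete, here is a deliberately cheap artefact flag of the kind an mHealth node could afford: it marks windows whose peak-to-peak amplitude exceeds a bound. This is an illustrative sketch, not one of the paper's algorithms, and the window length and amplitude bound are assumptions:

```python
def flag_artefacts(signal, window, max_range):
    """Flag each non-overlapping window whose peak-to-peak amplitude
    exceeds max_range -- a minimal, low-complexity quality check.

    Returns one boolean per window; True means 'likely artefact'.
    """
    flags = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        flags.append(max(chunk) - min(chunk) > max_range)
    return flags
```

Its cost is a single pass over the data with O(1) state per window, which is why amplitude-range checks are attractive on battery-powered sensor nodes even though they miss subtler artefacts that morphology-aware methods would catch.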
