1.
Biosensors (Basel) ; 13(2)2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36831970

ABSTRACT

The COVID-19 pandemic revealed a pressing need for sensitive, low-cost point-of-care sensors for disease diagnosis. The current standard of care for COVID-19 diagnosis is quantitative reverse transcriptase polymerase chain reaction (qRT-PCR). This method is sensitive, but it is time- and labor-intensive and requires specialized equipment and reagents to be performed correctly. This makes it unsuitable for widespread, rapid testing and leads to poor individual and policy decision-making. Rapid antigen tests (RATs) are a widely used alternative that provide results quickly but have low sensitivity and are prone to false negatives, particularly in cases with a lower viral burden. Electrochemical sensors have shown much promise in filling this technology gap, and impedance spectroscopy in particular has exciting potential for rapid screening of COVID-19. Because impedance measurements performed at different frequencies are data-rich, the method lends itself to machine-learning (ML) algorithms for further data processing. This review summarizes the current state of impedance spectroscopy-based point-of-care sensors for the detection of the SARS-CoV-2 virus. It also suggests future directions for addressing the technology's current limitations, both to move forward in the current pandemic and to prepare for future outbreaks.
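The pairing of multi-frequency impedance data with ML that the abstract describes can be illustrated with a minimal sketch. Everything below is hypothetical and not from the reviewed sensors: the frequency grid, the idealized |Z| trend, the class shift, and the nearest-centroid rule are all illustrative stand-ins for a real spectrum classifier.

```python
import numpy as np

# Hypothetical sketch: classify impedance spectra measured at several
# frequencies with a nearest-centroid rule. Each feature vector holds
# impedance magnitudes |Z| at the measurement frequencies; all values
# here are synthetic and purely illustrative.
rng = np.random.default_rng(0)
freqs = np.logspace(2, 5, 8)  # 100 Hz .. 100 kHz measurement points

def synth_spectrum(positive, n=1):
    # Positive samples get a larger charge-transfer resistance,
    # modeled as a uniform upward shift on an idealized |Z| trend.
    base = 1000.0 / np.sqrt(freqs)
    shift = 5.0 if positive else 0.0
    return base + shift + rng.normal(0.0, 0.5, size=(n, freqs.size))

X_pos = synth_spectrum(True, 20)   # "positive" training spectra
X_neg = synth_spectrum(False, 20)  # "negative" training spectra

# Nearest-centroid classifier: assign a spectrum to the closer class mean.
c_pos, c_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)

def classify(z):
    d_pos = np.linalg.norm(z - c_pos)
    d_neg = np.linalg.norm(z - c_neg)
    return "positive" if d_pos < d_neg else "negative"
```

Because every frequency contributes a coordinate to the feature vector, even this simple rule exploits the data-rich character of impedance measurements; the review's point is that richer ML models can extract more from the same spectra.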


Subject(s)
COVID-19 , Humans , SARS-CoV-2 , Pandemics , COVID-19 Testing , Clinical Laboratory Techniques/methods , Sensitivity and Specificity
2.
Article in English | MEDLINE | ID: mdl-38550611

ABSTRACT

The ubiquity of missing values in real-world datasets poses a challenge for statistical inference and can prevent similar datasets from being analyzed in the same study, precluding many existing datasets from being used for new analyses. While an extensive collection of packages and algorithms has been developed for data imputation, the overwhelming majority perform poorly when there are many missing values and small sample sizes, which are unfortunately common characteristics of empirical data. Such low-accuracy estimations adversely affect the performance of downstream statistical models. We develop a statistical inference framework for regression and classification in the presence of missing data without imputation. Our framework, RIFLE (Robust InFerence via Low-order moment Estimations), estimates low-order moments of the underlying data distribution with corresponding confidence intervals to learn a distributionally robust model. We specialize our framework to linear regression and normal discriminant analysis, and we provide convergence and performance guarantees. This framework can also be adapted to impute missing data. In numerical experiments, we compare RIFLE to several state-of-the-art approaches (including MICE, Amelia, MissForest, KNN-imputer, MIDA, and Mean Imputer) for imputation and inference in the presence of missing values. Our experiments demonstrate that RIFLE outperforms other benchmark algorithms when the percentage of missing values is high and/or when the number of data points is relatively small. RIFLE is publicly available at https://github.com/optimization-for-data-driven-science/RIFLE.
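The low-order-moment idea behind RIFLE can be sketched without the robust part: estimate E[xxᵀ] and E[xy] from pairwise-complete entries of NaN-laden data, then solve the normal equations. This is a simplified illustration under assumed MCAR synthetic data; the actual RIFLE framework additionally builds confidence intervals around these moment estimates and solves a distributionally robust min-max problem, which is omitted here.

```python
import numpy as np

def pairwise_moment(A, B):
    # Estimate E[A_i * B_j] over the rows where both entries are observed,
    # so no imputation of missing values is ever performed.
    p, q = A.shape[1], B.shape[1]
    M = np.empty((p, q))
    for i in range(p):
        for j in range(q):
            mask = ~np.isnan(A[:, i]) & ~np.isnan(B[:, j])
            M[i, j] = np.mean(A[mask, i] * B[mask, j])
    return M

def moment_regression(X, y):
    Sxx = pairwise_moment(X, X)           # estimate of E[x x^T]
    Sxy = pairwise_moment(X, y[:, None])  # estimate of E[x y]
    # Point estimate: beta = E[x x^T]^{-1} E[x y]
    return np.linalg.solve(Sxx, Sxy).ravel()

# Synthetic demonstration with 30% of entries missing completely at random.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(0.0, 0.1, 500)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.3] = np.nan

beta_hat = moment_regression(X_missing, y)
```

Each entry of the moment matrices uses every row where its particular pair of columns is observed, which is why the approach degrades more gracefully than imputation when the missing fraction is high.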

3.
IEEE/ACM Trans Comput Biol Bioinform ; 19(6): 3482-3496, 2022.
Article in English | MEDLINE | ID: mdl-34613917

ABSTRACT

DNA sequencing is the physical/biochemical process of identifying the positions of the four bases (adenine, guanine, cytosine, thymine) in a DNA strand. Just as semiconductor technology revolutionized computing, modern DNA sequencing technology (termed Next Generation Sequencing, NGS) revolutionized genomic research: modern NGS platforms can sequence hundreds of millions of short DNA fragments in parallel. The sequenced DNA fragments, representing the output of NGS platforms, are termed reads. Besides genomic variation, NGS imperfections induce noise in reads. Mapping each read to (the most similar portion of) a reference genome of the same species, i.e., read mapping, is a critical first step in a diverse set of emerging bioinformatics applications. Mapping is a search-heavy, memory-intensive similarity-matching problem and can therefore benefit greatly from near-memory processing. Intuition suggests using the fast associative search that Ternary Content Addressable Memory (TCAM) provides by construction. However, excessive energy consumption and the lack of support for similarity matching (under noise induced by NGS and genomic variation) render the direct application of TCAM infeasible, whether volatile or non-volatile, even though only non-volatile TCAM can accommodate the large memory footprint in an area-efficient way. This paper introduces GeNVoM, a scalable, energy-efficient, and high-throughput solution. Instead of optimizing an algorithm developed for general-purpose computers or GPUs, GeNVoM rethinks the algorithm and the non-volatile TCAM-based accelerator design together from the ground up. GeNVoM can thereby improve throughput by up to 3.67× and energy consumption by up to 1.36× compared to an ASIC baseline that represents one of the highest-throughput implementations known.
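The associative-search primitive the abstract builds on can be modeled in software: a TCAM compares a query against all stored entries in parallel, and a ternary "don't-care" symbol lets a lookup tolerate a mismatch at masked positions. The sketch below is an illustrative software model only, not the GeNVoM hardware; the toy reference, the k-mer table, and the single-position masking policy are hypothetical stand-ins for real noise tolerance.

```python
def ternary_match(entry, read):
    # A TCAM compares all positions in parallel; '*' matches any base.
    return len(entry) == len(read) and all(
        e == "*" or e == r for e, r in zip(entry, read)
    )

def build_entries(reference, k):
    # Store every k-mer of the reference, plus a copy with the center
    # position masked as don't-care, so a single variant there still maps
    # (a toy stand-in for tolerating NGS/variation-induced noise).
    entries = []
    for pos in range(len(reference) - k + 1):
        kmer = reference[pos:pos + k]
        entries.append((kmer, pos))
        entries.append((kmer[:k // 2] + "*" + kmer[k // 2 + 1:], pos))
    return entries

def map_read(entries, read):
    # Candidate reference positions whose stored pattern matches the read.
    return sorted({pos for entry, pos in entries if ternary_match(entry, read)})

reference = "ACGTACGGTTAGC"   # toy reference genome
entries = build_entries(reference, 5)
hits = map_read(entries, "ACATA")  # read with a variant at the masked position
```

A hardware TCAM evaluates every stored entry in a single lookup, which is the appeal for read mapping; the energy cost of those all-entry comparisons is exactly the obstacle the paper's co-designed accelerator addresses.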


Subject(s)
Algorithms , Software , Genomics , Computers , Sequence Analysis, DNA , High-Throughput Nucleotide Sequencing , DNA/genetics