Results 1-20 of 29
1.
PLoS One; 18(12): e0295502, 2023.
Article in English | MEDLINE | ID: mdl-38134031

ABSTRACT

Signal analysis for cytometry remains a challenging task that has a significant impact on uncertainty. Conventional cytometers assume that individual measurements are well characterized by simple properties such as the signal area, width, and height. However, these approaches have difficulty distinguishing inherent biological variability from instrument artifacts and operating conditions. As a result, it is challenging to quantify uncertainty in the properties of individual cells and perform tasks such as doublet deconvolution. We address these problems via signal analysis techniques that use scale transformations to: (I) separate variation in biomarker expression from effects due to flow conditions and particle size; (II) quantify reproducibility associated with a given laser interrogation region; (III) estimate uncertainty in measurement values on a per-event basis; and (IV) extract the singlets that make up a multiplet. The key idea behind this approach is to model how variable operating conditions deform the signal shape and then use constrained optimization to "undo" these deformations for measured signals; residuals from this process characterize reproducibility. Using a recently developed microfluidic cytometer, we demonstrate that these techniques can account for instrument- and measurand-induced variability with a residual uncertainty of less than 2.5% in the signal shape and less than 1% in integrated area.


Subject(s)
Reproducibility of Results, Uncertainty, Particle Size, Flow Cytometry/methods
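The core computational step here is a fit that "undoes" deformations of each pulse. Below is a minimal sketch of that idea, assuming a simple amplitude/shift/width deformation model; the function names, Gaussian reference pulse, and test data are ours, not the paper's.

```python
# Hypothetical sketch: collapse a measured pulse onto a reference shape by
# fitting a scale transformation; the normalized residual then serves as a
# per-event reproducibility metric.
import numpy as np
from scipy.optimize import least_squares

def collapse(t, signal, t_ref, ref):
    """Fit signal(t) ~ a * ref((t - t0) / w); return params and relative residual."""
    def resid(p):
        a, t0, w = p
        model = a * np.interp((t - t0) / w, t_ref, ref, left=0.0, right=0.0)
        return model - signal
    p0 = [signal.max() / max(ref.max(), 1e-12), t[np.argmax(signal)], 1.0]
    fit = least_squares(resid, p0)
    return fit.x, np.linalg.norm(fit.fun) / np.linalg.norm(signal)

# Toy usage: a rescaled, delayed, stretched copy of a Gaussian reference pulse
t_ref = np.linspace(-5.0, 5.0, 401)
ref = np.exp(-t_ref**2)
t = np.linspace(-5.0, 15.0, 801)
measured = 2.3 * np.exp(-((t - 4.0) / 1.7) ** 2)
params, rel_residual = collapse(t, measured, t_ref, ref)
print(params, rel_residual)   # ~[2.3, 4.0, 1.7] with a near-zero residual
```
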
2.
Int J Mol Sci; 24(21), 2023 Oct 28.
Article in English | MEDLINE | ID: mdl-37958688

ABSTRACT

COVID-19 has highlighted challenges in the measurement quality and comparability of serological binding and neutralization assays. Due to the many different assay formats and reagents, these measurements are known to be highly variable, with large uncertainties. The development of the WHO international standard (WHO IS) and other pooled standards has facilitated assay comparability through normalization to a common material but provides neither assay harmonization nor uncertainty quantification. In this paper, we present the results from an interlaboratory study that led to the development of (1) a novel hierarchy of data analyses based on the thermodynamics of antibody binding and (2) a modeling framework that quantifies the probability of neutralization potential for a given binding measurement. Importantly, we introduced a precise, mathematical definition of harmonization that separates the sources of quantitative uncertainty, some of which can be corrected to enable, for the first time, assay comparability. Both the theory and the experimental data confirmed that mAbs and the WHO IS performed identically as a primary standard for establishing traceability and bridging across different assay platforms. The metrological anchoring of complex serological binding and neutralization assays and the fast turn-around production of an mAb reference control can enable unprecedented comparability and traceability of serological binding assay results for new variants of SARS-CoV-2 and immune responses to other viruses.


Subject(s)
COVID-19, SARS-CoV-2, Humans, Monoclonal Antibodies, Bioassay, Data Analysis, Viral Antibodies, Neutralizing Antibodies
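As a rough, assumption-laden illustration of the kind of binding-thermodynamics model such an analysis hierarchy builds on, the sketch below fits a Langmuir isotherm to a serial-dilution binding curve; the concentrations, signals, and parameter values are all invented.

```python
# Minimal sketch (our illustration, not the paper's code): extract an
# affinity-like constant K and a saturation level from a dilution series,
# putting different assays on a common, thermodynamically meaningful scale.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, b_max, K):
    """Equilibrium fraction bound at antibody concentration c."""
    return b_max * c / (K + c)

c = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])          # arbitrary units
signal = np.array([0.09, 0.23, 0.48, 0.74, 0.90, 0.97, 0.99])
popt, pcov = curve_fit(langmuir, c, signal, p0=[1.0, 0.1])
print(popt, np.sqrt(np.diag(pcov)))   # estimates with 1-sigma uncertainties
```
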
3.
PLoS One; 18(3): e0280823, 2023.
Article in English | MEDLINE | ID: mdl-36913381

ABSTRACT

The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has emphasized the importance and challenges of correctly interpreting antibody test results. Identification of positive and negative samples requires a classification strategy with low error rates, which is hard to achieve when the corresponding measurement values overlap. Additional uncertainty arises when classification schemes fail to account for complicated structure in data. We address these problems through a mathematical framework that combines high dimensional data modeling and optimal decision theory. Specifically, we show that appropriately increasing the dimension of data better separates positive and negative populations and reveals nuanced structure that can be described in terms of mathematical models. We combine these models with optimal decision theory to yield a classification scheme that better separates positive and negative samples relative to traditional methods such as confidence intervals (CIs) and receiver operating characteristics. We validate the usefulness of this approach in the context of a multiplex salivary SARS-CoV-2 immunoglobulin G assay dataset. This example illustrates how our analysis: (i) improves assay accuracy (e.g., lowering classification errors by up to 42% compared to CI methods); (ii) reduces the number of indeterminate samples when an inconclusive class is permissible (e.g., by 40% compared to the original analysis of the example multiplex dataset); and (iii) decreases the number of antigens needed to classify samples. Our work showcases the power of mathematical modeling in diagnostic classification and highlights a method that can be adopted broadly in public health and clinical settings.


Subject(s)
COVID-19, SARS-CoV-2, Humans, COVID-19/diagnosis, Saliva, COVID-19 Testing, Diagnostic Techniques and Procedures, Viral Antibodies, Sensitivity and Specificity
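A hedged sketch of the prevalence-weighted optimal decision rule at the heart of this approach, using assumed two-dimensional Gaussian models for the negative and positive populations; every number below is illustrative.

```python
# Assign each sample to the class with the larger prevalence-weighted
# likelihood; with good conditional models this minimizes classification error.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
prev = 0.3   # assumed fraction of positives
neg = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 1.0]])
pos = multivariate_normal(mean=[2.0, 2.5], cov=[[1.5, 0.2], [0.2, 1.0]])

x = np.vstack([neg.rvs(700, random_state=rng), pos.rvs(300, random_state=rng)])
truth = np.r_[np.zeros(700), np.ones(300)]

call = (prev * pos.pdf(x) > (1.0 - prev) * neg.pdf(x)).astype(float)
print("empirical error rate:", np.mean(call != truth))
```
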
4.
Math Biosci; 358: 108982, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36804385

ABSTRACT

An accurate multiclass classification strategy is crucial to interpreting antibody tests. However, traditional methods based on confidence intervals or receiver operating characteristics lack clear extensions to settings with more than two classes. We address this problem by developing a multiclass classification scheme based on probabilistic modeling and optimal decision theory that minimizes a convex combination of false classification rates. The classification process is challenging when the relative fraction of the population in each class, or generalized prevalence, is unknown. Thus, we also develop a method for estimating the generalized prevalence of test data that does not require classifying the test data. We validate our approach on serological data with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) naïve, previously infected, and vaccinated classes. Synthetic data are used to demonstrate that (i) prevalence estimates are unbiased and converge to true values and (ii) our procedure applies to arbitrary measurement dimensions. In contrast to the binary problem, the multiclass setting offers wide-reaching utility as the most general framework and provides new insight into prevalence estimation best practices.


Subject(s)
COVID-19, SARS-CoV-2, Humans, COVID-19/diagnosis, COVID-19/epidemiology, Prevalence, COVID-19 Testing
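One standard way to estimate class fractions without classifying anyone is to treat observed region frequencies as a linear mixture of per-class region probabilities. The sketch below illustrates that idea with made-up probabilities; it is our construction and not necessarily the paper's estimator.

```python
# Partition measurement space into regions; the observed fraction of test
# samples in each region is a linear mixture of known per-class region
# probabilities, so the class weights follow from constrained least squares.
import numpy as np

def estimate_prevalence(counts, class_region_probs):
    """counts[j]: test samples in region j; class_region_probs[k][j]:
    P(region j | class k), estimated beforehand from training data."""
    q = counts / counts.sum()                    # empirical region frequencies
    A = np.asarray(class_region_probs).T         # regions x classes
    w, *_ = np.linalg.lstsq(A, q, rcond=None)
    w = np.clip(w, 0.0, None)
    return w / w.sum()

P = [[0.7, 0.2, 0.1],                            # class 0 region probabilities
     [0.1, 0.3, 0.6]]                            # class 1 region probabilities
counts = np.array([460, 240, 300])               # mixture with 40% class 1
print(estimate_prevalence(counts, P))            # -> [0.6, 0.4]
```
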
5.
J Theor Biol; 559: 111375, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36513210

ABSTRACT

Serology testing can identify past infection by quantifying the immune response of an infected individual, providing important public health guidance. Individual immune responses are time-dependent, which is reflected in antibody measurements. Moreover, the probability of obtaining a particular measurement from a random sample changes with the changing prevalence (i.e., seroprevalence, or the fraction of individuals exhibiting an immune response) of the disease in the population. Taking into account these personal and population-level effects, we develop a mathematical model that suggests a natural adaptive scheme for estimating prevalence as a function of time. We then combine the estimated prevalence with optimal decision theory to develop a time-dependent probabilistic classification scheme that minimizes the error associated with classifying a value as positive (history of infection) or negative (no such history) on a given day since the start of the pandemic. We validate this analysis using a combination of real-world and synthetic SARS-CoV-2 data and discuss the type of longitudinal studies needed to execute this scheme in real-world settings.


Subject(s)
COVID-19, SARS-CoV-2, Humans, COVID-19/epidemiology, Prevalence, Seroepidemiologic Studies, COVID-19 Testing, Viral Antibodies
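A toy version of an adaptive, time-dependent prevalence update, here using the classical Rogan-Gladen correction with assumed sensitivity and specificity; the daily positivity series is fabricated for illustration, and the paper's scheme is more general.

```python
import numpy as np

def rogan_gladen(pos_fraction, sens, spec):
    """Bias-corrected prevalence estimate from an imperfect test."""
    p = (pos_fraction + spec - 1.0) / (sens + spec - 1.0)
    return float(np.clip(p, 0.0, 1.0))

sens, spec = 0.93, 0.98                        # assumed test characteristics
daily_pos_fraction = [0.03, 0.05, 0.09, 0.14, 0.20]
prevalence = [rogan_gladen(f, sens, spec) for f in daily_pos_fraction]
print([round(p, 3) for p in prevalence])       # feeds each day's classification rule
```
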
6.
ArXiv; 2022 Jun 28.
Article in English | MEDLINE | ID: mdl-35795812

ABSTRACT

The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has emphasized the importance and challenges of correctly interpreting antibody test results. Identification of positive and negative samples requires a classification strategy with low error rates, which is hard to achieve when the corresponding measurement values overlap. Additional uncertainty arises when classification schemes fail to account for complicated structure in data. We address these problems through a mathematical framework that combines high dimensional data modeling and optimal decision theory. Specifically, we show that appropriately increasing the dimension of data better separates positive and negative populations and reveals nuanced structure that can be described in terms of mathematical models. We combine these models with optimal decision theory to yield a classification scheme that better separates positive and negative samples relative to traditional methods such as confidence intervals (CIs) and receiver operating characteristics. We validate the usefulness of this approach in the context of a multiplex salivary SARS-CoV-2 immunoglobulin G assay dataset. This example illustrates how our analysis: (i) improves assay accuracy (e.g., lowering classification errors by up to 42% compared to CI methods); (ii) reduces the number of indeterminate samples when an inconclusive class is permissible (e.g., by 40% compared to the original analysis of the example multiplex dataset); and (iii) decreases the number of antigens needed to classify samples. Our work showcases the power of mathematical modeling in diagnostic classification and highlights a method that can be adopted broadly in public health and clinical settings.

7.
Lab Chip; 22(17): 3217-3228, 2022 Aug 23.
Article in English | MEDLINE | ID: mdl-35856829

ABSTRACT

Flow cytometry is an invaluable technology in biomedical research, but confidence in single-cell measurements remains limited due to a lack of appropriate techniques for uncertainty quantification (UQ). It is particularly challenging to evaluate the potential for different instrumentation designs or operating parameters to influence the measurement physics in ways that change measurement repeatability. Here, we report a direct experimental approach to UQ using a serial flow cytometer that measured each particle more than once along a flow path. The instrument was automated for real-time characterization of measurement precision and operated with particle velocities exceeding 1 m/s, throughputs above 100 events per second, and analysis yields better than 99.9%. These achievements were enabled by a novel hybrid inertial and hydrodynamic particle focuser to tightly control particle positions and velocities. The cytometer identified ideal flow conditions with fluorescence area measurement precision on the order of 1% and characterized tradeoffs between precision, throughput, and analysis yield. The serial cytometer is anticipated to improve single-cell measurements through estimation (and subsequent control) of uncertainty contributions from various other instrument parameters, leading to overall improvements in the ability to classify sample composition and to find rare events.


Subject(s)
Hydrodynamics, Flow Cytometry
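Because a serial cytometer measures each particle more than once, per-measurement precision can be estimated directly from paired differences, with no distributional model of the particle population needed. A minimal sketch with simulated 1% multiplicative noise (all values invented):

```python
import numpy as np

rng = np.random.default_rng(1)
true_intensity = rng.lognormal(mean=4.0, sigma=0.5, size=5000)
noise = 0.01                                           # 1% per-measurement noise
m1 = true_intensity * rng.normal(1.0, noise, 5000)     # first interrogation region
m2 = true_intensity * rng.normal(1.0, noise, 5000)     # second interrogation region

# Paired relative differences isolate measurement noise from biology
paired_cv = np.std((m1 - m2) / ((m1 + m2) / 2.0)) / np.sqrt(2.0)
print(f"estimated per-measurement CV: {paired_cv:.4f}")   # ~0.01
```
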
8.
Math Biosci; 351: 108858, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35714754

ABSTRACT

In diagnostic testing, establishing an indeterminate class is an effective way to identify samples that cannot be accurately classified. However, such approaches also make testing less efficient and must be balanced against overall assay performance. We address this problem by reformulating data classification in terms of a constrained optimization problem that (i) minimizes the probability of labeling samples as indeterminate while (ii) ensuring that the remaining samples are classified with an average target accuracy X. We show that the solution to this problem is expressed in terms of a bathtub-type principle that holds out those samples with the lowest local accuracy up to an X-dependent threshold. To illustrate the usefulness of this analysis, we apply it to a multiplex, saliva-based SARS-CoV-2 antibody assay and demonstrate up to a 30% reduction in the number of indeterminate samples relative to more traditional approaches.


Subject(s)
COVID-19, SARS-CoV-2, Viral Antibodies, COVID-19/diagnosis, COVID-19 Testing, Decision Theory, Humans, Saliva
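A compact sketch of the bathtub-type rule as described: rank samples by local accuracy, then hold out the least accurate until the retained set meets the average target X. The accuracy values below are illustrative.

```python
import numpy as np

def bathtub_holdout(local_acc, X):
    """Boolean mask: True = classify, False = hold out as indeterminate."""
    order = np.argsort(local_acc)[::-1]                   # most accurate first
    running_mean = np.cumsum(local_acc[order]) / np.arange(1, local_acc.size + 1)
    n_keep = int(np.sum(running_mean >= X))               # longest prefix meeting X
    keep = np.zeros(local_acc.size, dtype=bool)
    keep[order[:n_keep]] = True
    return keep

local_acc = np.array([0.99, 0.97, 0.95, 0.90, 0.80, 0.62, 0.55])
keep = bathtub_holdout(local_acc, X=0.90)
print(int(keep.sum()), "classified;", int((~keep).sum()), "indeterminate")
```
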
9.
J Biomed Opt; 27(1), 2022 Jan.
Article in English | MEDLINE | ID: mdl-35102729

ABSTRACT

SIGNIFICANCE: Performance improvements in microfluidic systems depend on accurate measurement and fluid control at the micro- and nanoscales. New applications are continuously pushing toward lower volumetric flow rates. AIM: We focus on improving an optofluidic system for measuring and calibrating microflows in the sub-nanoliter-per-minute range. APPROACH: Measurements rely on an optofluidic system that delivers excitation light and records fluorescence in a precise interrogation region of a microfluidic channel. Exploiting a scaling relationship between the flow rate and fluorescence emission after photobleaching, the system enables real-time determination of flow rates. RESULTS: Here, we demonstrate improved calibration of a flow controller to 1% uncertainty. Further, the resolution of the optofluidic flow meter improved to less than 1 nL/min with 5% uncertainty using a molecule with a 14-fold smaller diffusion coefficient than in our previous report. CONCLUSIONS: We demonstrate new capabilities in sub-nanoliter-per-minute flow control and measurement that are generalizable to cutting-edge studies of light-matter interaction and molecular diffusion for the chemical and biomedical industries.


Subject(s)
Microfluidic Analytical Techniques, Microfluidics
10.
ArXiv; 2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35132382

ABSTRACT

In diagnostic testing, establishing an indeterminate class is an effective way to identify samples that cannot be accurately classified. However, such approaches also make testing less efficient and must be balanced against overall assay performance. We address this problem by reformulating data classification in terms of a constrained optimization problem that (i) minimizes the probability of labeling samples as indeterminate while (ii) ensuring that the remaining samples are classified with an average target accuracy X. We show that the solution to this problem is expressed in terms of a bathtub principle that holds out those samples with the lowest local accuracy up to an X-dependent threshold. To illustrate the usefulness of this analysis, we apply it to a multiplex, saliva-based SARS-CoV-2 antibody assay and demonstrate up to a 30% reduction in the number of indeterminate samples relative to more traditional approaches.

11.
Math Med Biol; 38(3): 396-416, 2021 Aug 15.
Article in English | MEDLINE | ID: mdl-34387345

ABSTRACT

Formulating accurate and robust classification strategies is a key challenge of developing diagnostic and antibody tests. Methods that do not explicitly account for disease prevalence and the uncertainty therein can lead to significant classification errors. We present a novel method that leverages optimal decision theory to address this problem. As a preliminary step, we develop an analysis that uses an assumed prevalence and conditional probability models of diagnostic measurement outcomes to define optimal (in the sense of minimizing rates of false positives and false negatives) classification domains. Critically, we demonstrate how this strategy can be generalized to a setting in which the prevalence is unknown, by either (i) defining a third class of hold-out samples that require further testing or (ii) using an adaptive algorithm to estimate prevalence prior to defining classification domains. We also provide examples for a recently published SARS-CoV-2 serology test and discuss how measurement uncertainty (e.g., associated with instrumentation) can be incorporated into the analysis. We find that our new strategy decreases classification error by up to an order of magnitude relative to more traditional methods based on confidence intervals. Moreover, it establishes a theoretical foundation for generalizing techniques such as receiver operating characteristics by connecting them to the broader field of optimization.


Subject(s)
COVID-19 Serological Testing/statistics & numerical data, COVID-19/diagnosis, SARS-CoV-2, Algorithms, Viral Antibodies/blood, COVID-19/classification, COVID-19/epidemiology, COVID-19 Serological Testing/classification, Computational Biology, Data Analysis, Decision Theory, Humans, Immunoglobulin G/blood, Statistical Models, Pandemics/statistics & numerical data, Prevalence, ROC Curve, Uncertainty
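To see why the assumed prevalence matters, the toy sketch below computes how the optimal boundary between two assumed one-dimensional normal models shifts with prevalence p; this is our example, not the paper's data.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

neg = norm(0.0, 1.0)        # assumed negative-population model
pos = norm(3.0, 1.2)        # assumed positive-population model

def boundary(p):
    """x where p * f_pos(x) = (1 - p) * f_neg(x)."""
    g = lambda x: p * pos.pdf(x) - (1.0 - p) * neg.pdf(x)
    return brentq(g, -5.0, 8.0)

for p in (0.05, 0.2, 0.5):
    print(p, round(boundary(p), 3))   # boundary moves left as prevalence rises
```
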
12.
Int J Mol Sci; 22(5), 2021 Mar 08.
Article in English | MEDLINE | ID: mdl-33800363

ABSTRACT

Quantitative and robust serology assays are critical measurements underpinning the global COVID-19 response in diagnostics, surveillance, and vaccine development. Here, we report a proof-of-concept approach for the development of quantitative, multiplexed flow cytometry-based serological and neutralization assays. The serology assays test for IgG and IgM against both the full-length spike antigen and the receptor binding domain (RBD) of the spike antigen. Benchmarking against an RBD-specific SARS-CoV IgG reference standard, the anti-SARS-CoV-2 RBD antibody titer was quantified in the range of 37.6 µg/mL to 31.0 ng/mL. The quantitative assays are highly specific, with no correlative cross-reactivity with the spike proteins of the MERS, SARS-1, OC43, and HKU1 viruses. We further demonstrate good correlation between anti-RBD antibody titers and neutralizing antibody titers. The suite of serology and neutralization assays helps to improve measurement confidence and is complementary and foundational for clinical and epidemiologic studies.


Subject(s)
COVID-19 Serological Testing/methods, COVID-19 Serological Testing/standards, COVID-19/blood, COVID-19/immunology, Neutralization Tests/methods, Neutralization Tests/standards, SARS-CoV-2/immunology, Neutralizing Antibodies/blood, Neutralizing Antibodies/immunology, Viral Antibodies/blood, Viral Antibodies/immunology, Cross Reactions, Flow Cytometry/methods, Fluorescence, Humans, Immunoglobulin G/blood, Immunoglobulin G/immunology, Immunoglobulin M/blood, Immunoglobulin M/immunology, Microspheres, Viral Receptors/chemistry, Viral Receptors/immunology, Coronavirus Spike Glycoprotein/chemistry, Coronavirus Spike Glycoprotein/immunology
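A hedged sketch of a typical quantification step for assays of this kind: fit a four-parameter logistic (4PL) standard curve to a reference-standard dilution series, then invert it for an unknown sample. All numbers are invented, and the paper's actual pipeline may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def fourpl(x, a, b, c, d):
    """4PL: a = lower asymptote, b = slope, c = midpoint, d = upper asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([0.031, 0.125, 0.5, 2.0, 8.0, 32.0])      # µg/mL, reference std
mfi = np.array([120.0, 410.0, 1500.0, 4200.0, 7800.0, 9400.0])
p, _ = curve_fit(fourpl, conc, mfi, p0=[100.0, 1.0, 1.0, 10000.0], maxfev=10000)

def invert(y, a, b, c, d):
    """Concentration at which the fitted 4PL curve equals signal y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(invert(3000.0, *p))   # titer estimate for an unknown with MFI 3000
```
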
13.
PLoS One; 16(3): e0248118, 2021.
Article in English | MEDLINE | ID: mdl-33740004

ABSTRACT

In the field of cell-based therapeutics, there is a great need for high-quality, robust, and validated measurements for cell characterization. Flow cytometry has emerged as a critically important platform due to its high-throughput capability and its ability to simultaneously measure multiple parameters in the same sample. However, to assure confidence in measurement, well-characterized biological reference materials are needed for standardizing clinical assays and harmonizing flow cytometric results between laboratories. To date, the lack of adequate reference materials and the complexity of cytometer instrumentation have resulted in few standards. This study was designed to evaluate CD19 expression in three potential biological cell reference materials and provide a preliminary assessment of their suitability to support future development of CD19 reference standards. Three lots of commercially available human peripheral blood mononuclear cells (PBMCs), one from each of three different manufacturers, were tested. Variables that could potentially contribute to differences in CD19 expression, such as the PBMC manufacturing process, the number of healthy donors used in manufacturing each PBMC lot, antibody reagent, operators, and experimental days, were included in our evaluation. CD19 antibodies bound per cell (ABC) values were measured using two flow cytometry-based quantification schemes with two independent calibration methods: a single-point calibration using a CD4 reference cell and QuantiBrite PE bead calibration. Each lot of PBMC was tested on three different experimental days by three operators using three different lots of unimolar anti-CD19 PE conjugates. CD19 ABC values were obtained in parallel on a selected lot of the PBMC samples using mass cytometry (CyTOF), for which two independent calibration methods, EQ4 and bead-based calibration, were evaluated. Including all studied variables, such as PBMC lot, antibody reagent lot, and operator, the averaged mean CD19 ABC values obtained by flow cytometry for the three PBMC manufacturers (A, B, and C) were: 7953 with a %CV of 9.0 for PBMC-A, 10535 with a %CV of 7.8 for PBMC-B, and 12384 with a %CV of 16 for PBMC-C. These CD19 ABC values agree closely with the findings using CyTOF: the averaged mean CD19 ABC value for the tested PBMCs was 9295 using the flow cytometry-based method and 9699 using CyTOF. The relative contributions from various sources of uncertainty in CD19 ABC values were quantified for the flow cytometry-based measurement scheme. This uncertainty analysis suggests that the number of antigens or ligand binding sites per cell in each PBMC preparation is the largest source of variability. On the other hand, the calibration method does not add significant uncertainty to the expression estimates. Our preliminary assessment showed the suitability of the tested materials to serve as PBMC-based CD19+ reference control materials for use in quantifying relevant B cell markers in B cell lymphoproliferative disorders and immunotherapy. However, users should consider the variability resulting from different lots of PBMC and antibody reagent when utilizing cell-based reference materials for quantification purposes, and should perform bridging studies to ensure harmonization between results before switching to a new lot.


Subject(s)
CD19 Antigens/analysis, B-Lymphocytes/cytology, Flow Cytometry/methods, Mononuclear Leukocytes/cytology, Flow Cytometry/standards, Humans, Reference Standards
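For context, a sketch of the QuantiBrite-style calibration step used in ABC schemes of this kind: a log-log regression of PE molecules per bead against measured bead fluorescence, then conversion of cell fluorescence to ABC for a unimolar (one PE per antibody) conjugate. The bead values below are illustrative, not a real lot's.

```python
import numpy as np

pe_per_bead = np.array([474.0, 5359.0, 23843.0, 62336.0])   # lot-specific values
bead_mfi = np.array([55.0, 600.0, 2700.0, 7100.0])          # measured medians

slope, intercept = np.polyfit(np.log10(bead_mfi), np.log10(pe_per_bead), 1)

def mfi_to_abc(mfi):
    """ABC equals PE molecules per cell when the antibody:PE ratio is 1:1."""
    return 10.0 ** (slope * np.log10(mfi) + intercept)

print(round(mfi_to_abc(1100.0)))   # ABC for a cell population with MFI 1100
```
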
14.
ACS Nano; 15(2): 3284-3294, 2021 Feb 23.
Article in English | MEDLINE | ID: mdl-33565312

ABSTRACT

Understanding the folding process of DNA origami is a critical stepping stone toward the broader implementation of nucleic acid nanofabrication technology, but it is notably nontrivial. Origami are formed by several hundred cooperative hybridization events ("folds") between spatially separate domains of a scaffold, derived from a viral genome, and oligomeric staples. Individual events are difficult to detect. Here, we present a real-time probe of the unit operation of origami assembly, a single fold, across the scaffold as a function of hybridization domain separation (fold distance) and staple/scaffold ratio. This approach to the folding problem elucidates a predicted but previously unobserved blocked state that acts as a limit on yield for single folds and may manifest as a barrier in whole-origami assembly.


Subject(s)
DNA, Nanostructures, Nanotechnology, Nucleic Acid Conformation
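As a toy picture of how an off-pathway blocked state can cap single-fold yield, here is a three-state kinetic sketch; the rate constants are made up, and this is our cartoon rather than the paper's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_fold, k_block, k_unblock = 1.0, 0.3, 0.02     # 1/s, assumed

def rhs(t, y):
    u, f, b = y                                 # unfolded, folded, blocked
    return [-(k_fold + k_block) * u + k_unblock * b,
            k_fold * u,
            k_block * u - k_unblock * b]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0])
u, f, b = sol.y[:, -1]
# Without unblocking, yield would cap at k_fold / (k_fold + k_block) ~ 0.77
print(f"folded: {f:.3f}, blocked: {b:.3f}")
```
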
15.
Anal Bioanal Chem; 412(28): 7977-7988, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32951064

ABSTRACT

Motivated by the current COVID-19 health crisis, we consider data analysis for quantitative polymerase chain reaction (qPCR) measurements. We derive a theoretical result specifying the conditions under which all qPCR amplification curves (including their plateau phases) are identical up to an affine transformation, i.e., a multiplicative factor and a horizontal shift. We use this result to develop a data analysis procedure for determining when an amplification curve exhibits characteristics of a true signal. The main idea behind this approach is to invoke a criterion based on constrained optimization that assesses when a measurement signal can be mapped to a master reference curve. We demonstrate that this approach: (i) can decrease the fluorescence detection threshold by up to an order of magnitude; and (ii) simultaneously improve confidence in interpretations of late-cycle amplification curves. Moreover, we demonstrate that the master curve is transferable reference data that can harmonize analyses between different labs and across several years. Application to reverse-transcriptase qPCR measurements of a SARS-CoV-2 RNA construct points to the usefulness of this approach for improving confidence and reducing limits of detection in diagnostic testing of emerging diseases.
Graphical Abstract: (left) a collection of qPCR amplification curves; (right) an example of data collapse after affine transformation.


Subject(s)
Algorithms, Betacoronavirus/genetics, Coronavirus Infections/virology, Viral Pneumonia/virology, Viral RNA/genetics, Reverse Transcriptase Polymerase Chain Reaction/methods, Betacoronavirus/isolation & purification, COVID-19, Coronavirus Infections/diagnosis, Humans, Pandemics, Viral Pneumonia/diagnosis, Viral RNA/analysis, Real-Time Polymerase Chain Reaction/methods, SARS-CoV-2
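A minimal sketch of the collapse criterion as we read it: fit a multiplicative factor and a horizontal (cycle) shift that map a measured amplification curve onto a master curve, then use the size of the residual to judge whether the signal is genuine. The logistic toy curves are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def collapse_residual(cycles, curve, master_cycles, master):
    """Fit curve(c) ~ a * master(c - s); return (a, s) and relative residual."""
    def resid(p):
        a, s = p
        model = a * np.interp(cycles - s, master_cycles, master,
                              left=master[0], right=master[-1])
        return model - curve
    fit = least_squares(resid, x0=[1.0, 0.0])
    return fit.x, np.linalg.norm(fit.fun) / np.linalg.norm(curve)

c = np.arange(1.0, 41.0)                       # cycle numbers
master = 1.0 / (1.0 + np.exp(-(c - 20.0)))     # reference amplification shape
test = 0.8 / (1.0 + np.exp(-(c - 26.0)))       # scaled, shifted "measurement"
params, rel = collapse_residual(c, test, c, master)
print(params, rel)   # ~[0.8, 6.0]; small residual suggests a true signal
```
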
16.
Anal Biochem; 607: 113773, 2020 Oct 15.
Article in English | MEDLINE | ID: mdl-32526200

ABSTRACT

Fluorescence-based measurements are a standard tool for characterizing the thermodynamic properties of DNA systems. Nonetheless, experimental melt data obtained from polymerase chain reaction (PCR) machines, for example, often yield signals that vary significantly between datasets. In many cases, this lack of reproducibility has led to difficulties in analyzing results and computing reasonable uncertainty estimates. To address this problem, we propose a data analysis procedure based on constrained, convex optimization of affine transformations, which can determine when and how melt curves collapse onto one another. A key aspect of this approach is its ability to provide a reproducible and more objective measure of whether a collection of datasets yields a consistent "universal" signal according to an appropriate model of the raw signals. Importantly, integrating this validation step into the analysis hardens the measurement protocol by allowing one to identify experimental conditions and/or modeling assumptions that may corrupt a measurement. Moreover, this robustness facilitates extraction of thermodynamic information at no additional cost in experimental time. We illustrate and test our approach on experiments using Förster resonance energy transfer (FRET) pairs to study the thermodynamics of DNA loops.


Subject(s)
DNA/analysis, Factual Databases, Fluorescence Resonance Energy Transfer, Molecular Models, Nucleic Acid Conformation, Reproducibility of Results, Fluorescence Spectrometry, Thermodynamics, Transition Temperature
17.
Nucleic Acids Res; 48(10): 5268-5280, 2020 Jun 04.
Article in English | MEDLINE | ID: mdl-32347943

ABSTRACT

Structural DNA nanotechnology, as exemplified by DNA origami, has enabled the design and construction of molecularly precise objects for a myriad of applications. However, limitations in imaging and other characterization approaches make a quantitative understanding of the folding process challenging. Such an understanding is necessary to determine the origins of structural defects, which constrain the practical use of these nanostructures. Here, we combine careful fluorescent reporter design with a novel affine transformation technique that, together, permit the rigorous measurement of folding thermodynamics. This method removes sources of systematic uncertainty and resolves problems with typical background-correction schemes. This in turn allows us to examine entropic corrections associated with folding and potential secondary and tertiary structure of the scaffold. Our approach also highlights the importance of heat-capacity changes during DNA melting. In addition to yielding insight into DNA origami folding, it is well suited to probing fundamental processes in related self-assembling systems.


Subject(s)
DNA/chemistry, Thermodynamics, Differential Scanning Calorimetry, Entropy, Fluorescence Resonance Energy Transfer, Fluorescent Dyes, Nanostructures/chemistry, Nucleic Acid Conformation, Nucleic Acid Denaturation
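For context, a minimal two-state melt fit that includes the heat-capacity change the abstract flags as important; the van't Hoff-with-ΔCp form is standard, but every parameter value here is assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/(mol K)

def theta(T, Tm, dH, dCp):
    """Folded fraction for a two-state transition with constant dCp."""
    dG = dH * (1.0 - T / Tm) + dCp * (T - Tm - T * np.log(T / Tm))
    return 1.0 / (1.0 + np.exp(-dG / (R * T)))

T = np.linspace(300.0, 360.0, 61)                 # K
data = theta(T, 330.0, 250e3, 2.0e3)              # synthetic "melt curve"
p, _ = curve_fit(theta, T, data, p0=[325.0, 200e3, 0.0])
print(p)   # recovers Tm ~ 330 K, dH ~ 2.5e5 J/mol, dCp ~ 2e3 J/(mol K)
```
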
18.
Anal Chem; 91(16): 10713-10722, 2019 Aug 20.
Article in English | MEDLINE | ID: mdl-31393105

ABSTRACT

The ultimate performance of flow-based measurements in microfluidic systems is currently limited by their accuracy at the nanoliter-per-minute scale. Improving such measurements (especially in contexts that require continuous monitoring) is challenging because of constraints associated with shrinking system geometries and limitations imposed by making precise measurements of smaller quantities in real time. A particularly interesting limit is the relative uncertainty as flow approaches zero, which diverges for most measurement methods. To address these problems, we have developed an optofluidic measurement system that can deliver and record light in a precise interrogation region of a microfluidic channel. The system utilizes photobleaching of fluorophore dyes in the bulk flow and can identify zero flow to better than 1 nL/min absolute accuracy. The technique also provides an independent method for determining nonzero flow rates based on a robust scaling relationship between the fluorescence emission and flow. Together, these two independent approaches enable precise measurement of flow to within 5% accuracy down to 10 nL/min and validation of flow control to within 5% uncertainty down to 2 nL/min. We also demonstrate that our technique can be used to extend a calibrated flow meter well below its specified range (e.g., 500 nL/min) and to make dynamic measurements with relative uncertainties similar to those of the calibrated meter, which would otherwise have expanded significantly in this regime.

19.
AIAA J; 9712019.
Article in English | MEDLINE | ID: mdl-34149052

ABSTRACT

In computational materials science, coarse-graining approaches often lack a priori uncertainty quantification (UQ) tools that estimate the accuracy of a reduced-order model before it is calibrated or deployed. This is especially the case in coarse-grained (CG) molecular dynamics (MD), where "bottom-up" methods need to run expensive atomistic simulations as part of the calibration process. As a result, scientists have been slow to adopt CG techniques in many settings because they do not know in advance whether the cost of developing the CG model is justified. To address this problem, we present an analytical method of coarse-graining rigid-body systems that yields corresponding intermolecular potentials with controllable levels of accuracy relative to their atomistic counterparts. Critically, this analysis: (i) provides a mathematical foundation for assessing the quality of a CG force field without running simulations; and (ii) provides a tool for understanding how atomistic systems can be viewed as appropriate limits of reduced-order models. Simulation results confirm the validity of this approach at the trajectory level and point to issues that must be addressed in coarse-graining fully non-rigid systems.
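One generic coarse-graining construction (not necessarily the paper's analytical method) Boltzmann-averages the atomistic interaction energy over relative orientations to obtain an isotropic CG pair potential, U_cg(r) = -kT ln⟨exp(-U(r, Ω)/kT)⟩_Ω. A sketch for two rigid Lennard-Jones diatomics with assumed parameters:

```python
import numpy as np

kT = 2.5                                  # kJ/mol, roughly 300 K
rng = np.random.default_rng(2)

def lj(r, eps=0.5, sigma=3.4):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def atomistic_energy(com_sep, axis1, axis2, bond=1.5):
    """Two rigid diatomics: sum LJ energy over the four site-site pairs."""
    sites1 = np.array([-0.5, 0.5])[:, None] * bond * axis1
    sites2 = com_sep + np.array([-0.5, 0.5])[:, None] * bond * axis2
    d = np.linalg.norm(sites1[:, None, :] - sites2[None, :, :], axis=-1)
    return lj(d).sum()

def u_cg(r, n_orient=2000):
    """Orientation-averaged (Boltzmann-weighted) effective pair potential."""
    axes = rng.normal(size=(n_orient, 2, 3))
    axes /= np.linalg.norm(axes, axis=-1, keepdims=True)
    e = np.array([atomistic_energy(np.array([r, 0.0, 0.0]), a[0], a[1])
                  for a in axes])
    return -kT * np.log(np.mean(np.exp(-e / kT)))

for r in (4.0, 5.0, 6.0, 8.0):
    print(r, round(u_cg(r), 3))
```
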

20.
Phys Rev Appl; 11(3), 2019.
Article in English | MEDLINE | ID: mdl-32166098

ABSTRACT

Scientists must overcome fundamental measurement problems if microfluidic devices are to become reliable and commercially viable. In particular, microfluidic devices require precise control over operating conditions such as the flow rate, υ, which is difficult to measure continuously and in situ. Given the small scales involved, state-of-the-art approaches generally require accurate models to infer υ on the basis of indirect measurements. However, such methods necessarily introduce model-form errors that dominate at the nL/min scale being targeted by the community. To address these problems, we develop a robust and largely assumption-free scaling method that relates the fluorescence efficiency I of fluorophores to υ through a dosage parameter ξ, which depends on the flow rate and laser power. Notably, we show that this scaling relationship emerges as a universal feature from a general class of partial differential equations (PDEs) describing the experimental setup, which consists of an excitation beam and a fluorescence detector. As a result, our approach avoids uncertainties associated with most modeling assumptions, e.g., the exact system geometry, the flow profile, the physics of fluorescence, etc. Moreover, the corresponding measurements remain valid down to the scale of 10 nL/min, with some devices potentially capable of reaching 1 nL/min. As an added benefit, the measurement procedure is mathematically simple, requiring a few trivial computations, as opposed to the full solution of a PDE. To support these claims, we discuss and quantify uncertainties associated with our method and present experimental results that confirm its validity.
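A toy realization of the dosage scaling (our stand-in, with an assumed exponential master curve rather than the paper's measured one): if I depends on laser power P and flow rate υ only through ξ ∝ P/υ, then measurements at many (P, υ) pairs collapse onto a single curve that can be inverted to read out unknown flow rates.

```python
import numpy as np

def master(xi):
    """Assumed master curve: fluorescence efficiency decays with dosage."""
    return np.exp(-xi)

rng = np.random.default_rng(3)
P = rng.uniform(0.5, 2.0, 300)                     # laser power (arb. units)
v = rng.uniform(0.1, 5.0, 300)                     # flow rate (arb. units)
I = master(P / v) * rng.normal(1.0, 0.01, 300)     # noisy "measurements"

# Collapse check: I plotted against xi = P / v follows master() for all pairs
print("max collapse error:", np.max(np.abs(I - master(P / v))))

# Flow readout: invert the master curve at a known laser power
I_meas, P_meas = 0.45, 1.2
print("estimated flow rate:", round(P_meas / (-np.log(I_meas)), 3))
```
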
