ABSTRACT
Fortification of human milk (HM) is often necessary to meet the nutritional requirements of preterm infants. The present experiment aimed to establish whether supplementing HM with an experimental donkey milk-derived fortifier containing whole donkey milk proteins, or with a commercial bovine milk-derived fortifier containing hydrolyzed bovine whey proteins, affects peptide release differently during digestion. The experiment was conducted using an in vitro dynamic system designed to simulate the preterm infant's digestion, followed by analysis of the digesta by LC-MS/MS. The fortifiers did not appear to influence the cumulative intensity of HM peptides, but fortification had a differential impact on the release of donkey and bovine bioactive peptides. Donkey milk peptides showed antioxidant and ACE-inhibitory activities, while bovine milk peptides showed opioid, dipeptidyl peptidase-IV-inhibitory, prolyl endopeptidase-inhibitory, and antimicrobial activities. A slight delay in peptide release from human lactoferrin and α-lactalbumin was observed when HM was supplemented with the donkey milk-derived fortifier.
Subjects
Digestion, Equidae, Milk Proteins, Human Milk, Peptides, Humans, Animals, Human Milk/chemistry, Human Milk/metabolism, Milk Proteins/chemistry, Milk Proteins/metabolism, Milk Proteins/analysis, Cattle, Peptides/chemistry, Peptides/metabolism, Fortified Foods/analysis, Tandem Mass Spectrometry, Biological Models, Whey Proteins/chemistry, Whey Proteins/metabolism
ABSTRACT
Abstract The five-factor model of the Attitude Scale towards Statistics (EAE-25) proposed by Auzmendi (1992) to measure students' predisposition towards statistical content has been questioned in several studies. The purpose of this study was to analyze the psychometric properties of the EAE-25 in a sample of 291 Mexican undergraduate Psychology students. The validity and reliability of the scale were examined using Confirmatory Factor Analysis (CFA) and Cronbach's alpha. The CFA results show partial signs of stability: five items were eliminated, but the five original factors remain. The scale thus configured shows acceptable internal consistency; the items within each factor are positively related to one another, and each item distinguishes between university students with favorable and unfavorable attitudes towards statistics, as well as between high and low performers. For this reason, the resulting EAE-20 is presented as a valid and reliable instrument that is easy to access, implement, and interpret for professors and researchers interested in assessing attitudes towards statistics among university students.
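For readers who want to reproduce the reliability side of such an analysis, below is a minimal Python sketch of Cronbach's alpha; the 291 × 20 item matrix is simulated for illustration and is not the EAE data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(291, 1))                       # one common attitude factor
items = latent + rng.normal(scale=1.0, size=(291, 20))   # 20 retained items
print(f"alpha = {cronbach_alpha(items):.2f}")
```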
ABSTRACT
Changes in biomarker levels of Alzheimer's disease (AD) reflect underlying pathophysiological changes in the brain and can provide evidence of direct and downstream treatment effects linked to disease modification. Recent results from clinical trials of anti-amyloid β (Aβ) treatments have raised the question of how to best characterize the relationship between AD biomarkers and clinical endpoints. Consensus methodology for assessing such relationships is lacking, leading to inconsistent evaluation and reporting. In this review, we provide a statistical framework for reporting treatment effects on early and late accelerating AD biomarkers and assessing their relationship with clinical endpoints at the subject and group levels. Amyloid positron emission tomography (PET), plasma p-tau, and tau PET follow specific trajectories during AD and are used as exemplar cases to contrast biomarkers with early and late progression. Subject-level correlation was assessed using change from baseline in biomarkers versus change from baseline in clinical endpoints, and interpretation of the correlation is dependent on the biomarker and disease stage. Group-level correlation was assessed using the placebo-adjusted treatment effects on biomarkers versus those on clinical endpoints in each trial. This correlation leverages the fundamental advantages of randomized placebo-controlled trials and assesses the predictivity of a treatment effect on a biomarker or clinical benefit. Harmonization in the assessment of treatment effects on biomarkers and their relationship to clinical endpoints will provide a wealth of comparable data across clinical trials and may yield new insights for the treatment of AD.
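As an illustration of the two levels of correlation described above, the following Python sketch computes a subject-level correlation of change-from-baseline values and a group-level correlation of placebo-adjusted effects; all numbers are simulated or hypothetical, not trial results.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Subject level: change from baseline in a biomarker vs change from baseline
# in a clinical endpoint within one treatment arm (simulated values).
d_biomarker = rng.normal(-20, 10, size=500)                    # e.g. amyloid PET, centiloids
d_clinical = 0.02 * d_biomarker + rng.normal(0, 1, size=500)   # e.g. CDR-SB change
r_subject, _ = pearsonr(d_biomarker, d_clinical)

# Group level: placebo-adjusted treatment effects, one point per trial
# (hypothetical effect estimates, not published results).
effect_biomarker = np.array([-15.0, -25.0, -60.0, -5.0])
effect_clinical = np.array([-0.2, -0.4, -0.7, -0.1])
r_group, _ = pearsonr(effect_biomarker, effect_clinical)

print(f"subject-level r = {r_subject:.2f}, group-level r = {r_group:.2f}")
```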
Subjects
Alzheimer Disease, Biomarkers, Positron Emission Tomography, tau Proteins, Alzheimer Disease/diagnosis, Humans, Biomarkers/blood, tau Proteins/blood, Disease Progression, Amyloid beta-Peptides/blood, Amyloid beta-Peptides/metabolism, Brain/metabolism, Brain/diagnostic imaging
ABSTRACT
Objectives: To detect clusters of dengue hemorrhagic fever in an urbanized district of Hai Phong City, Vietnam using retrospective and prospective Poisson space-time analysis. Methods: A cross-sectional, retrospective study analyzed dengue surveillance data from January 01, 2018, to December 31, 2022. Spatial-temporal scanning statistics were computed using the free software SaTScan v10.1.2. Results: A total of 519 cases were recorded. The cumulative incidence per 100,000 inhabitants was 3.37, 127.36, 10.96, 0, and 296.04 in 2018, 2019, 2020, 2021, and 2022, respectively. Retrospective Poisson model-based analysis detected seven clusters; six of these seven outbreaks occurred in November and December 2022. The largest cluster had a relative risk (RR) of 1539.5 (P < 0.00001) and the smallest an RR of 316.1 (P = 0.006). Prospective analysis using the Poisson model detected four statistically significant active case clusters at the time of the study; the largest had an RR of 47.7 (P < 0.00001) and the smallest an RR of 18.2 (P < 0.00001). Conclusions: This study provides a basis for improving the effectiveness of interventions and for further investigation of risk factors in the study area, as well as in other urban and suburban areas nationwide.
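SaTScan's Poisson model scores candidate space-time cylinders with a likelihood ratio statistic. The sketch below shows the standard Kulldorff log-likelihood ratio and the corresponding relative risk for one hypothetical cylinder; the counts are illustrative, not the Hai Phong data.

```python
import numpy as np

def poisson_scan_llr(c: float, e: float, total: float) -> float:
    """Kulldorff log-likelihood ratio for one candidate space-time cylinder.

    c: observed cases inside the cylinder
    e: expected cases inside under the null (population share * total)
    total: all observed cases in the study region
    """
    if c <= e:  # score only elevated-risk clusters
        return 0.0
    return c * np.log(c / e) + (total - c) * np.log((total - c) / (total - e))

# Hypothetical cylinder: 40 of 519 cases observed where only 4 were expected.
llr = poisson_scan_llr(c=40, e=4, total=519)
rr = (40 / 4) / ((519 - 40) / (519 - 4))  # risk inside vs outside the cylinder
print(f"LLR = {llr:.1f}, RR = {rr:.1f}")
```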
ABSTRACT
INTRODUCTION: Crohn's disease and ulcerative colitis are chronic inflammatory bowel diseases (IBD) with a relapsing-remitting course. With adequate non-invasive prediction of mucosal inflammation, endoscopies can be avoided and treatment optimised earlier for better disease control. We aim to validate and recalibrate commonly used patient-reported symptom scores combined with a faecal calprotectin (FC) home test as a non-invasive diagnostic tool for remote monitoring of IBD, both in daily practice and in a strict trial setting, with endoscopy as the gold standard. METHODS AND ANALYSIS: In this multicentre prospective validation study, adult IBD patients are asked to fill out questionnaires regarding disease activity (Monitor IBD At Home, mobile Health Index, Manitoba IBD Index, IBD Control and patient-HBI/patient-Simple Clinical Colitis Activity Index), perform an FC home test, and collect a stool sample for routine laboratory FC measurement before starting bowel preparation for the ileocolonoscopy. Endoscopic disease activity will be scored according to the Simple Endoscopic Score for Crohn's Disease (SES-CD) for CD patients, and the Ulcerative Colitis Endoscopic Index of Severity and the Mayo Endoscopic Subscore for ulcerative colitis patients. The main study outcome is the diagnostic test accuracy of the various patient-reported scores, combined with an FC home test, for assessing mucosal inflammation. ETHICS AND DISSEMINATION: This study was approved by the Medical Research Ethics Committee of azM/UM in Maastricht on 03 March 2021 (METC 20-085) and is monitored by the Clinical Trial Centre Maastricht according to Good Clinical Practice guidelines. Written informed consent will be obtained from all patients. Study results will be published in international peer-reviewed medical journals. TRIAL REGISTRATION NUMBER: NCT05886322.
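For a dichotomized score, the diagnostic test accuracy outcome reduces to a 2×2 table against the endoscopic gold standard. A minimal sketch with hypothetical counts, not study data:

```python
# Hypothetical 2x2 table: combined symptom score + FC home test (index test)
# against endoscopic inflammation (gold standard).
tp, fp, fn, tn = 80, 15, 10, 95

sensitivity = tp / (tp + fn)   # inflamed patients correctly flagged
specificity = tn / (tn + fp)   # patients in remission correctly cleared
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```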
Subjects
Feces, Leukocyte L1 Antigen Complex, Patient-Reported Outcome Measures, Humans, Leukocyte L1 Antigen Complex/analysis, Feces/chemistry, Prospective Studies, Severity of Illness Index, Crohn Disease/diagnosis, Biomarkers/analysis, Ulcerative Colitis/diagnosis, Multicenter Studies as Topic, Inflammatory Bowel Diseases/diagnosis, Adult, Colonoscopy/methods, Intestinal Mucosa/pathology, Intestinal Mucosa/metabolism, Validation Studies as Topic, Surveys and Questionnaires
ABSTRACT
Many statistical genetics analysis methods make use of GWAS summary statistics. Best statistical practice requires evaluating these methods in realistic simulation experiments. However, simulating summary statistics by first simulating individual genotype and phenotype data is extremely computationally demanding, and this high cost may force researchers to conduct overly simplistic simulations that fail to accurately measure method performance. Alternatively, summary statistics can be simulated directly from their theoretical distribution. Although this is a common need among statistical genetics researchers, no comprehensive software package for GWAS summary statistic simulation previously existed. We present GWASBrewer, an open-source R package for direct simulation of GWAS summary statistics. We show that statistics simulated by GWASBrewer have the same distribution as statistics generated from individual-level data, and can be produced at a fraction of the computational expense. Additionally, GWASBrewer can simulate standard error estimates, something that is typically not done when sampling summary statistics directly. GWASBrewer is highly flexible, allowing the user to simulate data for multiple traits connected by causal effects and with complex distributions of effect sizes. We demonstrate example uses of GWASBrewer for evaluating Mendelian randomization, polygenic risk score, and heritability estimation methods.
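GWASBrewer itself is an R package; as a language-neutral illustration of what "simulating summary statistics directly from their theoretical distribution" means, here is a Python sketch for independent (LD-free) variants on the standardized scale. The sample size, variant count, and heritability are arbitrary choices, not defaults of the package.

```python
import numpy as np

rng = np.random.default_rng(42)
n, m, n_causal, h2 = 50_000, 1_000, 50, 0.3   # arbitrary design choices

# True standardized effects: a sparse vector whose squared effects sum to ~h2.
beta = np.zeros(m)
causal = rng.choice(m, size=n_causal, replace=False)
beta[causal] = rng.normal(0, np.sqrt(h2 / n_causal), size=n_causal)

# With standardized genotypes and phenotype, se(beta_hat) is roughly 1/sqrt(n),
# so estimates are simply draws around the true effects. With LD, the draws
# would instead be correlated through the LD matrix.
se = np.full(m, 1 / np.sqrt(n))
beta_hat = rng.normal(beta, se)   # simulated GWAS effect estimates
z = beta_hat / se                 # simulated z-scores
print(f"lambda_GC ~ {np.median(z**2) / 0.456:.2f}")
```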
ABSTRACT
The main goal of fine-mapping is the identification of genetic variants that have a causal effect on some trait of interest, such as the presence of a disease. From a statistical point of view, fine-mapping can be seen as a variable selection problem. Fine-mapping methods are often challenging to apply because of linkage disequilibrium (LD), that is, regions of the genome where the variants interrogated are highly correlated. Several methods have been proposed to address this issue. Here we explore the 'Sum of Single Effects' (SuSiE) method, applied to real data (summary statistics) from a genome-wide meta-analysis of the autoimmune liver disease primary biliary cholangitis (PBC). Fine-mapping in this data set was previously performed using the FINEMAP program; we compare those previous results with the results obtained from SuSiE, which provides an arguably more convenient and principled way of generating 'credible sets', that is, sets of predictors that are correlated with the response variable. This allows the uncertainty in selecting the causal effects for the trait to be appropriately acknowledged. We focus on the results from SuSiE-RSS, which fits the SuSiE model to summary statistics, such as z-scores, along with a correlation matrix. We also compare the SuSiE results to those obtained using a more recently developed method, h2-D2, which uses the same inputs. Overall, we find the results from SuSiE-RSS and, to a lesser extent, h2-D2, to be quite concordant with those previously obtained using FINEMAP. The genes and biological pathways implicated are therefore also similar to those previously reported, providing valuable confirmation of those results. Detailed examination of the credible sets identified suggests that, although for the majority of loci (33 out of 56) the results from SuSiE-RSS seem most plausible, there are some loci (5 out of 56) where the results from h2-D2 seem more compelling. Computer simulations suggest that, overall, SuSiE-RSS generally has slightly higher power, better precision, and better ability to identify the true number of causal variants in a region than h2-D2, although there are some scenarios where the power of h2-D2 is higher. In real data analysis, therefore, the use of complementary approaches such as both SuSiE and h2-D2 is potentially warranted.
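The credible-set construction can be illustrated compactly: for each single-effect component, take the smallest set of variants whose posterior inclusion probabilities sum to the target level. A Python sketch with hypothetical probabilities (the LD-based purity filtering that SuSiE also applies is omitted):

```python
import numpy as np

def credible_set(pip: np.ndarray, rho: float = 0.95) -> list[int]:
    """Smallest set of variants whose inclusion probabilities sum to >= rho."""
    order = np.argsort(pip)[::-1]            # most probable variants first
    cumulative = np.cumsum(pip[order])
    n_keep = int(np.searchsorted(cumulative, rho) + 1)
    return sorted(order[:n_keep].tolist())

# Hypothetical per-variant probabilities from one single-effect component.
alpha = np.array([0.55, 0.30, 0.08, 0.04, 0.02, 0.01])
print(credible_set(alpha))   # -> [0, 1, 2, 3]: first set reaching 95% mass
```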
ABSTRACT
The aim of this study was to assess the spatiotemporal variation in water quality in the Grande River and the Ondas River in the city of Barreiras, Bahia, Brazil. Water samples were collected at 11 points along the rivers, and eight physical-chemical parameters (electrical conductivity, pH, alkalinity, apparent and true color, turbidity, dissolved oxygen, and biochemical oxygen demand) and three microbiological indicators (heterotrophic bacteria, total coliforms, and thermotolerant coliforms) were analyzed. Spatiotemporal variation was assessed using the multivariate techniques of principal component analysis/factor analysis (PCA/FA) and hierarchical cluster analysis (HCA). The PCA/FA identified eight of the eleven parameters as those mainly responsible for the variations in water quality, with the greatest increase in these parameters observed in the rainy season, especially at the points affected by sewage discharges and by the urban area. The HCA grouped the 11 points into three main groups: group 1 corresponded to points influenced by sewage discharges; group 2 grouped points with mainly urban influences; and group 3 grouped points in rural areas. These groupings showed the negative influence of urbanization, with statistically significant variations between groups and periods. The most degraded conditions were in group 1 and the least degraded in group 3. Assessment of the variations between monitoring periods showed that rainfall had a significant impact on increases or decreases in the parameters assessed, as a result of surface runoff linked to urbanization and increased river flow.
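A minimal sketch of the PCA-plus-HCA workflow, using scikit-learn and SciPy on a simulated samples × parameters matrix standing in for the monitoring data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(22, 11))   # stand-in: 11 points x 2 seasons, 11 parameters

Xs = StandardScaler().fit_transform(X)   # z-score each water-quality parameter
pca = PCA().fit(Xs)
print("explained variance:", pca.explained_variance_ratio_[:3].round(2))

# Ward-linkage HCA on the standardized data, cut into three groups to mirror
# the sewage / urban / rural grouping reported above.
groups = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")
print(groups)
```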
Subjects
Environmental Monitoring, Rivers, Water Quality, Brazil, Rivers/chemistry, Urbanization, Chemical Water Pollutants/analysis, Cities
ABSTRACT
During the automatic processing of crystallographic diffraction experiments, beamstop shadows are often unaccounted for or only partially masked. As a result, outlier reflection intensities are integrated, which is a known issue. Traditional statistical diagnostics have only limited effectiveness in identifying these outliers, here termed Not-Excluded-unMasked-Outliers (NEMOs). The diagnostic tool AUSPEX allows visual inspection of NEMOs, where they form a typical pattern: clusters at the low-resolution end of the AUSPEX plots of intensities or amplitudes versus resolution. To automate NEMO detection, a new algorithm was developed by combining data statistics with a density-based clustering method. This approach demonstrates promising performance in detecting NEMOs in merged data sets without disrupting existing data-reduction pipelines. Re-refinement results indicate that excluding the identified NEMOs can effectively enhance the quality of subsequent structure-determination steps. This method offers a prospective automated means of assessing the efficacy of a beamstop mask, as well as highlighting the potential of modern pattern-recognition techniques for automating outlier exclusion during data processing, facilitating future adaptation to evolving experimental strategies.
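The published algorithm's details live in AUSPEX; the sketch below only illustrates the general idea of flagging a depressed low-resolution intensity band with density-based clustering (DBSCAN), on synthetic data with a fake beamstop shadow.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
d_spacing = rng.uniform(1.5, 30.0, size=2000)   # resolution of each reflection (A)
intensity = rng.gamma(2.0, 50.0, size=2000)
shadow = d_spacing > 25.0                        # synthetic beamstop shadow region
intensity[shadow] *= 0.02                        # depressed, unmasked intensities

# Robust-scale both axes, then look for a dense cluster separated from the
# bulk of the data at the low-resolution (large d-spacing) end.
iqr = np.percentile(intensity, 75) - np.percentile(intensity, 25)
features = np.column_stack([
    (d_spacing - d_spacing.mean()) / d_spacing.std(),
    (intensity - np.median(intensity)) / iqr,
])
labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(features)
for lab in np.unique(labels):
    sel = labels == lab
    print(f"cluster {lab}: n={sel.sum()}, median d={np.median(d_spacing[sel]):.1f} A")
```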
Subjects
Algorithms, X-Ray Crystallography/methods, Cluster Analysis, Supervised Machine Learning
ABSTRACT
Background: National Institutes of Health Stroke Scale (NIHSS) scores are used to evaluate acute ischaemic stroke (AIS) severity in clinical settings. Through the International Classification of Diseases, Tenth Revision (ICD-10), documentation of NIHSS scores has been made possible for administrative purposes and has since been increasingly adopted in insurance claims. Per Centers for Medicare & Medicaid Services guidelines, the stroke ICD-10 diagnosis code must be documented by the treating physician. The accuracy of the administratively collected NIHSS compared with expert clinical evaluation, as documented in the Paul Coverdell registry, however, remains uncertain. Methods: Leveraging a linked dataset comprising the Paul Coverdell National Acute Stroke Program (PCNASP) clinical registry and matched individuals in Medicare claims data, we sampled patients aged 65 and above admitted for AIS across nine states from January 2017 to December 2020. We excluded those lacking documentation of either clinical or ICD-10-based NIHSS scores. We then examined score concordance between the two databases, measuring discordance as the absolute difference between the PCNASP and ICD-10-based NIHSS scores. Results: Among 87 996 matched patients, mean NIHSS scores for PCNASP and Medicare ICD-10 were 7.19 (95% CI 7.14 to 7.24) and 7.32 (95% CI 7.27 to 7.37), respectively. Concordance between the two scores was high, as indicated by an intraclass correlation coefficient of 0.93. Conclusion: The high concordance between clinical and ICD-10-based NIHSS scores highlights the latter's potential as a measure of stroke severity derived from structured claims data.
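For concreteness, an agreement statistic of this kind can be computed as a two-way random-effects, absolute-agreement, single-measurement ICC(2,1); whether the study used exactly this variant is not stated here. A minimal sketch on simulated registry/claims score pairs, not the PCNASP data:

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random, absolute-agreement, single-measurement ICC(2,1).

    scores: (n_subjects, k_raters); here the two 'raters' are the registry
    NIHSS and the ICD-10-derived NIHSS.
    """
    n, k = scores.shape
    grand = scores.mean()
    ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(5)
clinical = rng.poisson(7, size=1000).astype(float)   # simulated registry scores
claims = clinical + rng.normal(0, 1.0, size=1000)    # simulated claims scores
print(f"ICC(2,1) = {icc_2_1(np.column_stack([clinical, claims])):.2f}")
```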
ABSTRACT
Mendelian randomization (MR) is an emerging tool for inferring causality in genetic epidemiology. MR studies suffer bias from weak genetic instrumental variables (IVs) and horizontal pleiotropy. We introduce a robust integrative framework, strictly adhering to the STROBE-MR guidelines, to improve causal inference through MR studies. We implement novel t-statistic-based criteria to improve the reliability of selected IVs, followed by various MR methods, and include sensitivity analyses to remove horizontal-pleiotropy bias. For functional validation, we perform enrichment analysis of the identified causal SNPs. We demonstrate the effectiveness of our proposed approach on five MR datasets selected from diverse populations. Our pipeline outperforms the corresponding MR analyses run with default parameters on these datasets. Notably, using our pipeline we found a significant association between total cholesterol and coronary artery disease (P = 1.16 × 10^-71) in a single-sample dataset, whereas the same association was deemed ambiguous when using default parameters. Moreover, in a two-sample dataset, we uncover 13 new causal SNPs, with enhanced statistical significance (P = 1.06 × 10^-11), for liver iron content and liver cell carcinoma; these SNPs remained undetected using the default parameters (P = 7.58 × 10^-4). Furthermore, our analysis confirmed previously known pathways, such as hyperlipidemia in heart disease and the gene ME1 in liver cancer. In conclusion, we propose a robust and powerful framework to infer causality across diverse populations that is easily adaptable to different diseases.
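A minimal sketch of the two steps named above, a t-statistic (equivalently F = t²) screen for weak instruments followed by an inverse-variance-weighted estimate, on simulated two-sample summary statistics; the threshold and data are illustrative, not the paper's pipeline settings:

```python
import numpy as np

rng = np.random.default_rng(11)
m = 200
beta_exp = rng.normal(0, 0.05, size=m)                    # SNP -> exposure
se_exp = np.full(m, 0.01)
beta_out = 0.4 * beta_exp + rng.normal(0, 0.01, size=m)   # SNP -> outcome
se_out = np.full(m, 0.01)

# t-statistic screen for weak instruments (F = t^2; F > 10 is conventional).
f_stat = (beta_exp / se_exp) ** 2
keep = f_stat > 10
bx, by, sy = beta_exp[keep], beta_out[keep], se_out[keep]

# Inverse-variance-weighted (IVW) causal estimate and standard error.
w = bx**2 / sy**2
beta_ivw = np.sum(w * by / bx) / np.sum(w)
se_ivw = 1 / np.sqrt(np.sum(w))
print(f"kept {keep.sum()}/{m} IVs; beta_IVW = {beta_ivw:.3f} +/- {se_ivw:.3f}")
```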
Subjects
Coronary Artery Disease, Mendelian Randomization Analysis, Single Nucleotide Polymorphism, Humans, Mendelian Randomization Analysis/methods, Coronary Artery Disease/genetics, Causality, Genome-Wide Association Study, Genetic Predisposition to Disease, Genetic Pleiotropy
ABSTRACT
When analyzing spatially referenced event data, the criteria for declaring rates "reliable" are still a matter of dispute. What these varying criteria have in common, however, is that they are rarely satisfied by crude estimates in small-area analysis settings, prompting the use of spatial models to improve reliability. While this is reasonable, recent work has quantified the extent to which popular models from the spatial statistics literature can overwhelm the information contained in the data, leading to oversmoothing. Here, we begin by providing a definition of a "reliable" estimate for event rates that can be applied to both crude and model-based estimates and that allows for discrete and continuous statements of reliability. We then construct a spatial Bayesian framework that allows users to infuse prior information into their models to improve reliability while also guarding against oversmoothing. We apply our approach to county-level birth data from Pennsylvania, highlighting the effect of oversmoothing in spatial models and how our approach can help users focus their attention on areas where sufficient data exist to drive inferential decisions. We conclude with a brief discussion of how this definition of reliability can be used in the design of small-area studies.
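One widely used continuous reliability statement, of the kind the paper's definition generalizes, is the coefficient of variation of a crude rate; for a Poisson count it is 1/sqrt(count). A sketch with hypothetical small-area counts and one conventional, and disputed, cut-off:

```python
import numpy as np

counts = np.array([3, 12, 45, 180])          # events per small area (hypothetical)
pop = np.array([400, 1500, 5200, 21000])     # person-years at risk

rate = counts / pop
cv = 1 / np.sqrt(counts)   # Poisson coefficient of variation of the crude rate
reliable = cv < 0.30       # one common rule of thumb, not the paper's definition

for c, r, v, ok in zip(counts, rate, cv, reliable):
    print(f"count={c:4d} rate={1000 * r:6.2f}/1000 CV={v:.2f} reliable={ok}")
```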
ABSTRACT
The absence of solvent molecules in high-resolution protein crystal structure models deposited in the Protein Data Bank (PDB) contradicts the fact that, for proteins crystallized from aqueous media, water molecules are always expected to bind to the protein surface, as well as to some sites in the protein interior. An analysis of the contents of the PDB indicated that the expected ratio of the number of water molecules to the number of amino-acid residues exceeds 1.5 in atomic resolution structures, decreasing to 0.25 at around 2.5 Å resolution. Nevertheless, almost 800 protein crystal structures determined at a resolution of 2.5 Å or higher are found in the current release of the PDB without any water molecules, whereas some other depositions have unusually low or high occupancies of modeled solvent. Detailed analysis of these depositions revealed that the lack of solvent molecules might be an indication of problems with either the diffraction data, the refinement protocol, the deposition process or a combination of these factors. It is postulated that problems with solvent structure should be flagged by the PDB and addressed by the depositors.
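Counting modeled waters per residue for a given structure is straightforward with Biopython; a sketch (the file path is a placeholder, not a file referenced by the study):

```python
from Bio.PDB import PDBParser  # pip install biopython

parser = PDBParser(QUIET=True)
structure = parser.get_structure("model", "example.pdb")  # placeholder path

n_water = n_residue = 0
for residue in structure.get_residues():
    hetfield = residue.id[0]
    if hetfield == "W":        # solvent (HOH) records
        n_water += 1
    elif hetfield == " ":      # standard polymer residues
        n_residue += 1

print(f"{n_water} waters / {n_residue} residues = {n_water / n_residue:.2f}")
```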
ABSTRACT
Sinkhole hazards pose considerable challenges in the west-central region of Texas. This study integrates multisource datasets and innovative techniques to detect early sinkhole development and identify the processes governing sinkhole formation. The techniques employed encompass feature extraction, cluster analysis, and deformation-process monitoring. Potential sinkholes were initially mapped using high-resolution elevation data, and a circularity index (CI) threshold of 0.85 was applied to filter out depressions with noncircular shapes. Subsequently, Persistent Scatterer Interferometry (PSInSAR) followed by the Getis-Ord Gi* statistic was used to identify statistically significant subsidence clusters with rates below -2 mm/yr, indicating active sinkhole formation. The findings reveal the following: (1) surface deformation analysis identified significant subsidence hotspots, particularly in areas underlain by units containing significant sequences of evaporites, nominally belonging to the Clear Fork Group and the Blaine Formation, compared with areas dominated by carbonates; (2) potential sinkholes are undergoing displacement at rates of up to -11.31 mm/yr; and (3) extreme groundwater pumping of karst aquifers and the subsequent decline in groundwater levels are the leading causes of the region's susceptibility to sinkhole hazards. In such cases, the decline of the water table deprives the overlying bedrock of the buoyant support provided by the groundwater in the cavity; the bedrock then undergoes sagging subsidence under the influence of gravity and eventually forms a sinkhole. Extensive oil and gas activity in the study area, including extreme withdrawal rates and the associated injection of fluids, is another potential cause of the observed sinkhole susceptibility in several parts of the study area. This study underscores the importance of understanding the geologic, hydrologic, and anthropogenic factors driving sinkhole hazards for effective mitigation strategies to protect public safety, infrastructure, and the environment.
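The CI filter described above has a standard closed form, 4πA/P², which equals 1 for a circle. A sketch using shapely; the polygons are toy shapes, not mapped depressions. Note that even a square scores about 0.79, so the 0.85 threshold retains only near-circular depressions.

```python
import math
from shapely.geometry import Polygon

def circularity_index(poly: Polygon) -> float:
    """4*pi*area / perimeter**2: 1.0 for a circle, lower for irregular shapes."""
    return 4 * math.pi * poly.area / poly.length**2

square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
sliver = Polygon([(0, 0), (10, 0), (10, 0.5), (0, 0.5)])

for name, poly in [("square", square), ("sliver", sliver)]:
    ci = circularity_index(poly)
    print(f"{name}: CI = {ci:.2f}, keep = {ci >= 0.85}")
```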
ABSTRACT
INTRODUCTION: Use of investigations can help support the diagnostic process in patients with cancer in primary care, but the size of variation between patient groups and between practices is unclear. METHODS: We analysed data on 53 252 patients from 1868 general practices included in the National Cancer Diagnosis Audit 2018, using a sequence of logistic regression models to quantify and explain practice-level variation in investigation use, accounting for patient-level case-mix and practice characteristics. Four types of investigation were considered: any investigation, blood tests, imaging and endoscopy. RESULTS: Large variation in practice-level use was observed (ORs comparing the 97.5th with the 2.5th centile practice of 4.02, 4.33 and 3.12, respectively, for any investigation, blood tests and imaging). After accounting for patient case-mix, the spread of practice variation increased further, to 5.61, 6.30 and 3.60, denoting that patients with characteristics associated with higher use (ie, certain cancer sites) are over-represented in practices with lower than the national average use of such investigations. Practice characteristics explained very little of the observed variation, except for rurality (rural practices having lower use of any investigation) and concentration of older patients (practices with older patients being more likely to use all types of investigation). CONCLUSION: There is very large variation between practices in the use of investigations in patients with cancer as part of the diagnostic process. It is conceivable that the diagnostic process could be improved if investigation use were increased in lower-use practices, although there may also be overtesting in practices with very high use of investigations; indeed, undertesting and overtesting may co-exist.
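Under a random-intercept logistic model, a 97.5th-to-2.5th centile odds ratio of this kind is a simple function of the between-practice standard deviation on the log-odds scale, OR = exp(2 × 1.96 × σ). A sketch inverting the reported ORs, assuming this parameterization (the audit's exact model is not specified here):

```python
import math

def centile_or(sigma_u: float) -> float:
    """OR between the 97.5th and 2.5th centile practices under a
    random-intercept logistic model with between-practice SD sigma_u."""
    return math.exp(2 * 1.96 * sigma_u)

# Inverting the reported ORs gives the implied between-practice SDs.
for reported in (4.02, 4.33, 3.12):
    sigma = math.log(reported) / (2 * 1.96)
    print(f"OR = {reported} -> sigma_u = {sigma:.2f} "
          f"(check: {centile_or(sigma):.2f})")
```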
ABSTRACT
Clusters of similar or dissimilar objects are encountered in many fields. Frequently used approaches treat each cluster's central object as latent. Yet, often objects of one or more types cluster around objects of another type. Such arrangements are common in biomedical images of cells, in which nearby cell types likely interact. Quantifying spatial relationships may elucidate biological mechanisms. Parent-offspring statistical frameworks can be usefully applied even when central objects ("parents") differ from peripheral ones ("offspring"). We propose the novel multivariate cluster point process (MCPP) to quantify multi-object (e.g., multi-cellular) arrangements. Unlike commonly used approaches, the MCPP exploits locations of the central parent object in clusters. It accounts for possibly multilayered, multivariate clustering. The model formulation requires specification of which object types function as cluster centers and which reside peripherally. If such information is unknown, the relative roles of object types may be explored by comparing fit of different models via the deviance information criterion (DIC). In simulated data, we compared a series of models' DIC; the MCPP correctly identified simulated relationships. It also produced more accurate and precise parameter estimates than the classical univariate Neyman-Scott process model. We also used the MCPP to quantify proposed configurations and explore new ones in human dental plaque biofilm image data. MCPP models quantified simultaneous clustering of Streptococcus and Porphyromonas around Corynebacterium and of Pasteurellaceae around Streptococcus and successfully captured hypothesized structures for all taxa. Further exploration suggested the presence of clustering between Fusobacterium and Leptotrichia, a previously unreported relationship.
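The MCPP builds on Neyman-Scott-type constructions such as the Thomas process, in which latent Poisson parents generate Gaussian-displaced offspring; the MCPP's innovation is to use observed objects of one type as the parents. A minimal simulation of the classical building block, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(9)

# Thomas process on the unit square: Poisson parents, each with a Poisson
# number of offspring displaced by isotropic Gaussian noise. The MCPP extends
# this idea, treating observed cells of one type as the (non-latent) parents.
kappa, mu, sigma = 20, 8, 0.02   # parent rate, mean offspring count, spread

parents = rng.uniform(0, 1, size=(rng.poisson(kappa), 2))
offspring = np.concatenate([
    p + rng.normal(0, sigma, size=(rng.poisson(mu), 2)) for p in parents
])
inside = (offspring >= 0).all(axis=1) & (offspring <= 1).all(axis=1)
offspring = offspring[inside]
print(f"{len(parents)} parents, {len(offspring)} offspring in the window")
```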
ABSTRACT
BACKGROUND: To expand veterans' access to health care, the Veterans Affairs (VA) Office of Connected Care explored a novel software feature called "Vitals" on its VA Video Connect telehealth platform. Vitals uses contactless, video-based remote photoplethysmography (rPPG) through the infrared camera on veterans' smartphones (and other devices) to automatically scan their faces and provide real-time vital signs on screen to both the provider and the patient. OBJECTIVE: This study aimed to assess VA clinical provider and veteran patient attitudes regarding the usability of Vitals. METHODS: We conducted a mixed methods evaluation of Vitals among VA providers and patients, collecting data in July and August 2023 at the VA Boston Healthcare System and VA San Diego Healthcare System; analyses were conducted in October 2023. In-person usability testing sessions consisted of a think-aloud procedure while using the software, a semistructured interview, and a 26-item web-based survey. RESULTS: Usability test sessions with 20 VA providers and 13 patients demonstrated that both groups found Vitals "useful" and "easy to use," rating its usability highly (86 and 82 points, respectively, on a 100-point scale). Regarding acceptability, or willingness and intent to use, providers and patients generally expressed confidence and trust in Vitals readings, with high ratings of 90 and 85 points, respectively. Providers and patients also rated Vitals highly for feasibility and appropriateness for context (90 and 90 points, respectively). Finally, providers noted that Vitals' flexibility makes it appropriate and advantageous for implementation in a wide range of clinical contexts, particularly in specialty care. Providers believed that most clinical teams would readily integrate Vitals into their routine workflow because it saves time; delivers accurate, consistently collected vitals; and may reduce reporting errors. Providers and veterans suggested training and support materials that could improve Vitals adoption and implementation. CONCLUSIONS: While remote collection of vital readings has been described in the literature, this is one of the first accounts of testing a contactless vital signs measurement tool among providers and patients. If ongoing initiatives demonstrate accuracy in its readings, Vitals could enhance telemedicine by providing accurate and automatic reporting and recording of vitals; sending patients' vital readings (pending provider approval) directly to their electronic medical record; saving provider and patient time; and potentially reducing the need for some home-based biometric devices. Understanding usability issues before US Food and Drug Administration approval of Vitals and its implementation could contribute to a seamless introduction of Vitals to VA providers and patients.