ABSTRACT
Early detection of SARS-CoV-2 infection is key to managing the current global pandemic, as evidence shows the virus is most contagious on or before symptom onset. Here, we introduce a low-cost, high-throughput method for diagnosing and studying SARS-CoV-2 infection. Dubbed Pathogen-Oriented Low-Cost Assembly & Re-Sequencing (POLAR), this method amplifies the entirety of the SARS-CoV-2 genome. This contrasts with typical RT-PCR-based diagnostic tests, which amplify only a few loci. To achieve this goal, we combine a SARS-CoV-2 enrichment method developed by the ARTIC Network (https://artic.network/) with short-read DNA sequencing and de novo genome assembly. Using this method, we can reliably (>95% accuracy) detect SARS-CoV-2 at a concentration of 84 genome equivalents per milliliter (GE/mL). The vast majority of diagnostic methods currently authorized for use by the United States Food and Drug Administration under the Coronavirus Disease 2019 (COVID-19) Emergency Use Authorization require higher viral concentrations to achieve this degree of sensitivity and specificity. In addition, we can reliably assemble the SARS-CoV-2 genome in the sample, often with no gaps and perfect accuracy, provided the viral load is sufficient. The genotypic data in these genome assemblies enable more effective analysis of disease spread than is possible with an ordinary binary diagnostic. These data can also help identify vaccine and drug targets. Finally, we show that the diagnoses obtained using POLAR on positive and negative clinical nasal mid-turbinate swab samples match 100% with those obtained in a clinical diagnostic laboratory using the Centers for Disease Control and Prevention's 2019-Novel Coronavirus test. Using POLAR, a single person can manually process 192 samples over an 8-hour experiment at a cost of ~$36 per patient (as of December 7th, 2022), enabling a 24-hour turnaround including sequencing and data analysis. We anticipate that further testing and refinement will allow greater sensitivity using this approach.
Subjects
COVID-19, SARS-CoV-2, United States, Humans, SARS-CoV-2/genetics, COVID-19/diagnosis, COVID-19 Testing, Sensitivity and Specificity, Sequence Analysis, DNA
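As a quick sanity check on the throughput and cost figures quoted in the abstract above, the back-of-the-envelope calculation below restates them in per-hour and per-run terms. It is purely illustrative and uses only the numbers given in the abstract (192 samples, 8 hours of hands-on time, ~$36 per patient).

```python
# Back-of-the-envelope throughput and cost estimate for a single POLAR run,
# using only the figures quoted in the abstract (192 samples, 8 hours of
# hands-on work, ~$36 per patient as of December 2022). Purely illustrative.

SAMPLES_PER_RUN = 192      # samples one person can process manually per run
HANDS_ON_HOURS = 8         # hands-on time per run
COST_PER_PATIENT = 36.0    # approximate per-patient cost in USD

samples_per_hour = SAMPLES_PER_RUN / HANDS_ON_HOURS       # 24 samples/hour
cost_per_run = SAMPLES_PER_RUN * COST_PER_PATIENT         # ~$6,912 per run

print(f"Throughput: {samples_per_hour:.0f} samples per hour of hands-on time")
print(f"Approximate cost per 192-sample run: ${cost_per_run:,.0f}")
```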
ABSTRACT
The Encyclopedia of DNA Elements (ENCODE) project is a collaborative effort to create a comprehensive catalog of functional elements in the human genome. The current database comprises more than 19,000 functional genomics experiments across more than 1,000 cell lines and tissues, using a wide array of experimental techniques to study the chromatin structure and the regulatory and transcriptional landscape of the Homo sapiens and Mus musculus genomes. All experimental data, metadata, and associated computational analyses created by the ENCODE consortium are submitted to the Data Coordination Center (DCC) for validation, tracking, storage, and distribution to community resources and the scientific community. The ENCODE project has engineered and distributed uniform processing pipelines in order to promote data provenance and reproducibility as well as to allow interoperability between genomic resources and other consortia. All data files, reference genome versions, software versions, and parameters used by the pipelines are captured and available via the ENCODE Portal. The pipeline code, developed using Docker and the Workflow Description Language (WDL; https://openwdl.org/), is publicly available on GitHub, with images available on Docker Hub (https://hub.docker.com), enabling access for a diverse range of biomedical researchers. ENCODE pipelines maintained and used by the DCC can be installed to run on personal computers, local HPC clusters, or in cloud computing environments via Cromwell. Access to the pipelines and data via the cloud allows small labs to use the data or software without access to institutional compute clusters. Standardization of the computational methodologies for analysis and quality control leads to comparable results across different ENCODE collections, a prerequisite for successful integrative analyses.
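The abstract notes that all data files, software versions, and pipeline parameters are captured and available via the ENCODE Portal, which also exposes a JSON REST interface for programmatic access. The sketch below shows one way such a query might look in Python; the specific query parameters, the assay_title value, and the "@graph" response field are assumptions based on the portal's documented API and should be verified against the live service before use.

```python
# Minimal sketch of programmatic access to the ENCODE Portal's JSON interface.
# The endpoint, query parameters, and response fields below are assumptions and
# should be checked against the portal's REST API documentation.
import requests

BASE_URL = "https://www.encodeproject.org"

def search_experiments(assay_title, limit=5):
    """Search the portal for released experiments of a given assay type."""
    params = {
        "type": "Experiment",
        "assay_title": assay_title,   # assumed filter name, e.g. "TF ChIP-seq"
        "status": "released",
        "format": "json",
        "limit": limit,
    }
    response = requests.get(f"{BASE_URL}/search/", params=params,
                            headers={"Accept": "application/json"})
    response.raise_for_status()
    # Search hits are assumed to be returned under the "@graph" key.
    return response.json().get("@graph", [])

for experiment in search_experiments("TF ChIP-seq"):
    # Each hit carries an accession and a link back to the full metadata record.
    print(experiment.get("accession"), experiment.get("@id"))
```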
ABSTRACT
Sample size determination for open-ended questions or qualitative interviews relies primarily on custom and on finding the point where little new information is obtained (thematic saturation). Here, we propose and test a refined definition of saturation as obtaining the most salient items in a set of qualitative interviews (where items can be material things or concepts, depending on the topic of study), rather than attempting to obtain all the items. Salient items have higher prevalence and are more culturally important. To do this, we explore saturation, salience, sample size, and domain size in 28 sets of interviews in which respondents were asked to list all the things they could think of in one of 18 topical domains. The domains, such as kinds of fruits (highly bounded) and things that mothers do (unbounded), varied greatly in size. The datasets comprise 20-99 interviews each (1,147 total interviews). When saturation was defined as the point where less than one new item per person would be expected, the median sample size for reaching saturation was 75 (range = 15-194). Thematic saturation was, as expected, related to domain size. It was also related to the amount of information contributed by each respondent but, unexpectedly, was reached more quickly when respondents contributed less information. In contrast, a greater amount of information per person increased the retrieval of salient items. Even small samples (n = 10) produced 95% of the most salient ideas with exhaustive listing, but captured only 53% of those items when responses were limited to three per person. For most domains, item salience appeared to be a more useful concept for thinking about sample size adequacy than finding the point of thematic saturation. Thus, we advance the concept of saturation in salience and emphasize probing to increase the amount of information collected per respondent, thereby improving sample efficiency.
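The stopping rule described above (declaring saturation once an additional respondent is expected to contribute fewer than one new item) can be made concrete with a small simulation. The sketch below is purely illustrative: the domain size, the Zipf-like salience distribution, the number of responses per person, and the averaging window are hypothetical choices, not values drawn from the datasets analyzed in the study.

```python
# Toy simulation of thematic saturation in free-list interviews, illustrating
# the stopping rule above: stop once an additional respondent is expected to
# contribute fewer than one new item. All parameters are hypothetical.
import random

random.seed(1)

DOMAIN_SIZE = 60           # hypothetical number of items in the cultural domain
RESPONSES_PER_PERSON = 15  # items each respondent lists (more probing -> more)
MAX_INTERVIEWS = 200
WINDOW = 5                 # interviews averaged to estimate new items per person

# Zipf-like salience: low-ranked items are mentioned far more often.
weights = [1.0 / rank for rank in range(1, DOMAIN_SIZE + 1)]

seen = set()
new_per_interview = []
for interview in range(1, MAX_INTERVIEWS + 1):
    # Each respondent lists RESPONSES_PER_PERSON distinct items, salience-weighted.
    mentioned = set()
    while len(mentioned) < RESPONSES_PER_PERSON:
        mentioned.add(random.choices(range(DOMAIN_SIZE), weights=weights)[0])

    new_items = len(mentioned - seen)
    seen |= mentioned
    new_per_interview.append(new_items)

    # Declare saturation when the recent average drops below one new item/person.
    recent = new_per_interview[-WINDOW:]
    if len(recent) == WINDOW and sum(recent) / WINDOW < 1.0:
        print(f"Saturation after {interview} interviews; "
              f"{len(seen)}/{DOMAIN_SIZE} items retrieved.")
        break
```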