ABSTRACT
BACKGROUND: University spring break carries a two-pronged SARS-CoV-2 variant transmission risk. Circulating variants from universities can spread to spring break destinations, and variants from spring break destinations can spread to universities and surrounding communities. Therefore, it is critical to implement SARS-CoV-2 variant surveillance and testing strategies before and after spring break to limit community spread and facilitate universities safely returning to in-person teaching. METHODS: We examined the SARS-CoV-2 positivity rate and changes in variant lineages before and after the university spring break for two consecutive years. A total of 155 samples were sequenced across four time periods: pre- and post-spring break 2021 and pre- and post-spring break 2022; following whole genome sequencing, samples were assigned clades. The clades were then paired with positivity and testing data from over 50,000 samples. RESULTS: In 2021, the number of variants in the observed population increased from four to nine over spring break, with variants of concern responsible for most of the cases; the percent composition of Alpha increased from 22.2% to 56.4%. In 2022, the number of clades in the population increased only from two to three, all of which were Omicron or a sub-lineage of Omicron. However, phylogenetic analysis showed the emergence of distantly related sub-lineages. 2022 saw a greater increase in positivity than 2021, which coincided with a milder mitigation strategy. Analysis of social media data provided insight into student travel destinations and how those travel events may have impacted spread. CONCLUSIONS: We show the role that repetitive testing can play in transmission mitigation, reducing community spread, and maintaining in-person education. We identified that distantly related lineages were brought to the area after spring break travel regardless of the presence of a dominant variant of concern.
Subjects
COVID-19, SARS-CoV-2, Travel, Humans, COVID-19/transmission, COVID-19/prevention & control, COVID-19/epidemiology, COVID-19/virology, SARS-CoV-2/genetics, SARS-CoV-2/isolation & purification, Universities, Whole Genome Sequencing, Phylogeny, Seasons
ABSTRACT
BACKGROUND: The cat flea (Ctenocephalides felis), a parasite commonly found on both dogs and cats, is a competent vector for several zoonotic pathogens, including Dipylidium caninum (tapeworms), Bartonella henselae (responsible for cat scratch disease) and Rickettsia felis (responsible for flea-borne spotted fever). Veterinarians recommend that both cats and dogs be routinely treated with medications to prevent flea infestation. Nevertheless, surveys suggest that nearly one third of pet owners do not routinely administer appropriate preventatives. METHODS: A mathematical model based on weighted averaging over time is developed to predict outdoor flea activity from weather conditions for the contiguous United States. This 'nowcast' model can be updated in real time as weather conditions change and serves as an important tool for educating pet owners about the risks of flea-borne disease. We validate our model using Google Trends data for searches for the term 'fleas.' These Google Trends data serve as a proxy for true flea activity, as validating the model by collecting fleas over the entire USA would be prohibitively costly and time-consuming. RESULTS: The average correlation (r) between the nowcast outdoor flea activity predictions and the Google Trends data was moderate: 0.65, 0.70, 0.66, 0.71 and 0.63 for 2016, 2017, 2018, 2019 and 2020, respectively. However, there was substantial regional variation in performance: the average correlation was 0.81 in the East South Atlantic states but only 0.45 in the Mountain states. The nowcast predictions displayed strong seasonal and geographic patterns, with predicted activity generally being highest in the summer months. CONCLUSIONS: The nowcast model is a valuable tool by which to educate pet owners regarding the risk of fleas and flea-borne disease and the need to routinely administer flea preventatives.
While it is ideal for domestic cats and dogs to be on flea preventatives year-round, many pets remain vulnerable to flea infestation. Alerting pet owners to the increased local risk of flea activity during certain times of the year may motivate them to administer appropriate routine preventatives.
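The abstract describes the nowcast only as a weighted average over time of weather conditions, validated by correlation against Google Trends. A minimal sketch of both ideas, assuming illustrative exponential weights and daily temperature as a stand-in weather input (the real model's inputs and weights are not given), might look like:

```python
import numpy as np

def nowcast_activity(temps, weights=None):
    """Toy nowcast index: a weighted average of recent daily temperatures,
    weighted toward the most recent days. The real model's weather inputs
    and weights are not given in the abstract; these are illustrative."""
    temps = np.asarray(temps, dtype=float)
    if weights is None:
        # exponentially decaying weights; the last (most recent) day gets weight 1
        weights = 0.8 ** np.arange(len(temps))[::-1]
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * temps) / np.sum(weights))

def pearson_r(x, y):
    """The validation metric used in the paper: Pearson correlation
    between nowcast predictions and the Google Trends proxy."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])
```

Because recent days carry more weight, a warm spell just before the prediction date raises the index more than an equally warm spell further in the past.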
Subjects
Cat Diseases, Ctenocephalides, Dog Diseases, Flea Infestations, Siphonaptera, Animals, Cats, Dogs, Dog Diseases/epidemiology, Flea Infestations/epidemiology, Flea Infestations/veterinary
ABSTRACT
Fitting penalized models to merge the estimation and model selection problems has become commonplace in statistical practice. Of the various regularization strategies that can be leveraged to this end, the use of the l0 norm to penalize parameter estimation poses the most daunting model fitting task. In fact, this particular strategy requires an end user to solve a non-convex NP-hard optimization problem regardless of the underlying data model. For this reason, the l0 norm has been woefully underutilized as a regularization strategy. To obviate this difficulty, a strategy to solve such problems that is generally accessible to the statistical community is developed. The approach can be adopted to solve l0 norm penalized problems across a very broad class of models, can be implemented using existing software, and is computationally efficient. The performance of the method is demonstrated through in-depth numerical experiments and through using it to analyze several prototypical data sets.
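The abstract does not specify its solver. One standard, easily implemented approach to the l0-constrained least squares problem (not necessarily the paper's method) is iterative hard thresholding, which keeps only the k largest coefficients after each gradient step; all settings below are illustrative:

```python
import numpy as np

def hard_threshold(b, k):
    """Keep the k largest-magnitude entries of b and zero out the rest."""
    out = np.zeros_like(b)
    keep = np.argsort(np.abs(b))[-k:]
    out[keep] = b[keep]
    return out

def iht(X, y, k, n_iter=500):
    """Iterative hard thresholding for min ||y - Xb||^2 s.t. ||b||_0 <= k.
    Step size 1/L, with L the largest eigenvalue of X'X, keeps the
    gradient step a descent step."""
    L = np.linalg.eigvalsh(X.T @ X).max()
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = hard_threshold(b + X.T @ (y - X @ b) / L, k)
    return b

# toy check: recover a 3-sparse coefficient vector from noisy data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[[2, 7, 15]] = 5.0
y = X @ beta + 0.1 * rng.standard_normal(100)
b_hat = iht(X, y, k=3)
```

On a well-conditioned toy problem like this, the iterates settle on the true support; the NP-hardness the abstract mentions shows up in the worst case, not in every instance.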
ABSTRACT
Domestic dogs are susceptible to numerous vector-borne pathogens that are of significant importance for their health. In addition to being of veterinary importance, many of these pathogens are zoonotic and thus may pose a risk to human health. In the USA, owned dogs are commonly screened for exposure to or infection with several canine vector-borne pathogens. Although the screening data are widely available to show areas where infections are being diagnosed, testing of owned dogs is expected to underestimate the actual prevalence in dogs that have no access to veterinary care. The goal of this study was to measure the association between the widely available data from a perceived low-risk population and temporally and spatially matched data collected from shelter-housed dog populations. These data were then used to extrapolate the prevalence in dogs that generally lack veterinary care. The focus pathogens included Dirofilaria immitis, Ehrlichia spp., Anaplasma spp., and Borrelia burgdorferi. There was a linear association between the prevalence of selected vector-borne pathogens in shelter-housed and owned dog populations and, generally, the data suggested that prevalence of heartworm (D. immitis) infection and seroprevalence of Ehrlichia spp. and B. burgdorferi are higher in shelter-housed dogs, regardless of their location, compared with the owned population. The seroprevalence of Anaplasma spp. was predicted to be higher in areas that have very low to low seroprevalence, but unexpectedly, in areas of higher seroprevalence within the owned population, the seroprevalence was expected to be lower in the shelter-housed dog population. If shelters and veterinarians decide not to screen dogs based on the known seroprevalence of the owned group, they are likely underestimating the risk of exposure. This is especially true for heartworm.
With this new estimate of the seroprevalence in shelter-housed dogs throughout the USA, shelters and veterinarians can make evidence-based decisions on whether testing and screening for these pathogens is appropriate for their local dog population. This work represents an important step in understanding the relationship between the seroprevalences of vector-borne pathogens in shelter-housed and owned dogs, and provides valuable data on the risk of vector-borne diseases in dogs.
Subjects
Anaplasmosis, Dirofilaria immitis, Dirofilariasis, Dog Diseases, Ehrlichiosis, Lyme Disease, Dogs, Animals, Humans, United States/epidemiology, Lyme Disease/epidemiology, Lyme Disease/veterinary, Dirofilariasis/epidemiology, Ehrlichiosis/epidemiology, Anaplasmosis/epidemiology, Seroepidemiologic Studies, Dog Diseases/epidemiology, Ehrlichia, Anaplasma
ABSTRACT
In this work, we develop a novel Bayesian regression framework that can be used to perform variable selection in high-dimensional settings. Unlike existing techniques, the proposed approach can leverage side information to inform about the sparsity structure of the regression coefficients. This is accomplished by replacing the usual inclusion probability in the spike-and-slab prior with a binary regression model which assimilates this extra source of information. To facilitate model fitting, a computationally efficient and easy-to-implement Markov chain Monte Carlo posterior sampling algorithm is developed via carefully chosen priors and data augmentation steps. The finite sample performance of our methodology is assessed through numerical simulations, and we further illustrate our approach by using it to identify genetic markers associated with the nicotine metabolite ratio, a key biological marker associated with nicotine dependence and smoking cessation treatment.
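The core modeling change, replacing the constant spike-and-slab inclusion probability with a binary regression on side information, can be sketched as follows; the logistic link and the coefficient vector gamma here are illustrative assumptions, not the paper's exact specification:

```python
import math

def inclusion_probs(Z, gamma):
    """Prior inclusion probability for each coefficient j:
    P(delta_j = 1) = logistic(z_j' gamma). Side information z_j raises
    or lowers how likely coefficient j is to enter the model, replacing
    the usual constant spike-and-slab inclusion probability."""
    probs = []
    for z in Z:
        eta = sum(z_i * g_i for z_i, g_i in zip(z, gamma))
        probs.append(1.0 / (1.0 + math.exp(-eta)))
    return probs
```

With gamma = 0 this collapses back to a common inclusion probability of 0.5, so the standard spike-and-slab prior is a special case.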
Subjects
Algorithms, Bayes Theorem, Genetic Markers, Markov Chains
ABSTRACT
Alcohol use disorder (AUD) is a life-threatening disease characterized by compulsive drinking, cognitive deficits, and social impairment that continue despite negative consequences. The inability of individuals with AUD to regulate drinking may involve functional deficits in cortical areas that normally balance actions that have aspects of both reward and risk. Among these, the orbitofrontal cortex (OFC) is critically involved in goal-directed behavior and is thought to maintain a representation of reward value that guides decision making. In the present study, we analyzed post-mortem OFC brain samples collected from age- and sex-matched control subjects and those with AUD using proteomics, bioinformatics, machine learning, and reverse genetics approaches. Of the more than 4,500 unique proteins identified in the proteomics screen, 47 differed significantly by sex and were enriched in processes regulating the extracellular matrix and axonal structure. Gene ontology enrichment analysis revealed that proteins differentially expressed in AUD cases were involved in synaptic and mitochondrial function, as well as transmembrane transporter activity. Alcohol-sensitive OFC proteins also mapped to abnormal social behaviors and social interactions. Machine learning analysis of the post-mortem OFC proteome revealed dysregulation of presynaptic (e.g., AP2A1) and mitochondrial proteins that predicted the occurrence and severity of AUD. Using a reverse genetics approach to validate a target protein, we found that prefrontal Ap2a1 expression significantly correlated with voluntary alcohol drinking in male and female genetically diverse mouse strains. Moreover, recombinant inbred strains that inherited the C57BL/6J allele at the Ap2a1 interval consumed higher amounts of alcohol than those that inherited the DBA/2J allele.
Together, these findings highlight the impact of excessive alcohol consumption on the human OFC proteome and identify important cross-species cortical mechanisms and proteins that control drinking in individuals with AUD.
Subjects
Alcoholism, Humans, Male, Female, Mice, Animals, Alcoholism/metabolism, Adaptor Protein Complex 2/metabolism, Proteome/metabolism, Mice, Inbred C57BL, Mice, Inbred DBA, Prefrontal Cortex/metabolism, Alcohol Drinking/genetics, Ethanol/metabolism
ABSTRACT
Source and sink interactions play a critical but mechanistically poorly understood role in the regulation of senescence. To disentangle the genetic and molecular mechanisms underlying source-sink-regulated senescence (SSRS), we performed a phenotypic, transcriptomic, and systems genetics analysis of senescence induced by the lack of a strong sink in maize (Zea mays). Comparative analysis of genotypes with contrasting SSRS phenotypes revealed that feedback inhibition of photosynthesis, a surge in reactive oxygen species, and the resulting endoplasmic reticulum (ER) stress were the earliest outcomes of weakened sink demand. Multienvironmental evaluation of a biparental population and a diversity panel identified 12 quantitative trait loci and 24 candidate genes, respectively, underlying SSRS. Combining natural diversity and coexpression network analyses identified 7 high-confidence candidate genes involved in proteolysis, photosynthesis, stress response, and protein folding. The role of a cathepsin B-like protease 4 (ccp4), a candidate gene supported by systems genetic analysis, was validated by analysis of natural alleles in maize and heterologous analyses in Arabidopsis (Arabidopsis thaliana). Analysis of natural alleles suggested that a 700-bp polymorphic promoter region harboring multiple ABA-responsive elements is responsible for differential transcriptional regulation of ccp4 by ABA and the resulting variation in SSRS phenotype. We propose a model for SSRS wherein feedback inhibition of photosynthesis, ABA signaling, and oxidative stress converge to induce ER stress manifested as programmed cell death and senescence. These findings provide a deeper understanding of signals emerging from loss of sink strength and offer opportunities to modify these signals to alter the senescence program and enhance crop productivity.
Subjects
Transcriptome, Zea mays, Zea mays/metabolism, Transcriptome/genetics, Gene Expression Profiling, Photosynthesis/genetics, Phenotype, Gene Expression Regulation, Plant
ABSTRACT
When screening a population for infectious diseases, pooling individual specimens (e.g., blood, swabs, or urine) can provide enormous cost savings when compared to testing specimens individually. In the biostatistics literature, testing pools of specimens is commonly known as group testing or pooled testing. Although estimating a population-level prevalence with group testing data has received a large amount of attention, most of this work has focused on applications involving a single disease, such as human immunodeficiency virus. Modern methods of screening now involve testing pools and individuals for multiple diseases simultaneously through the use of multiplex assays. Hou et al. (2017, Biometrics, 73, 656-665) and Hou et al. (2020, Biostatistics, 21, 417-431) recently proposed group testing protocols for multiplex assays and derived relevant case identification characteristics, including the expected number of tests and those which quantify classification accuracy. In this article, we describe Bayesian methods to estimate population-level disease probabilities from implementing these protocols or any other multiplex group testing protocol which might be carried out in practice. Our estimation methods can be used with multiplex assays for two or more diseases while incorporating the possibility of test misclassification for each disease. We use chlamydia and gonorrhea testing data collected at the State Hygienic Laboratory at the University of Iowa to illustrate our work. We also provide an online R resource practitioners can use to implement the methods in this article.
Subjects
Chlamydia Infections, Communicable Diseases, Humans, Chlamydia Infections/diagnosis, Chlamydia Infections/epidemiology, Chlamydia Infections/prevention & control, Bayes Theorem, Prevalence, Communicable Diseases/diagnosis, Communicable Diseases/epidemiology, Probability
ABSTRACT
We aim to estimate the effectiveness of 2-dose and 3-dose mRNA vaccination (BNT162b2 and mRNA-1273) against general Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection (asymptomatic or symptomatic) caused by the omicron BA.1 variant. This propensity-score matched retrospective cohort study takes place in a large public university undergoing weekly Coronavirus Disease 2019 (Covid-19) testing in South Carolina, USA. The population consists of 24,145 university students and employees undergoing weekly Covid-19 testing between January 3rd and January 31st, 2022. The analytic sample was constructed via propensity score matching on vaccination status: unvaccinated, completion of 2-dose mRNA series (BNT162b2 or mRNA-1273) within the previous 5 months, and receipt of mRNA booster dose (BNT162b2 or mRNA-1273) within the previous 5 months. The resulting analytic sample consists of 1,944 university students (mean [SD] age, 19.64 [1.42] years, 66.4% female, 81.3% non-Hispanic White) and 658 university employees (mean [SD] age, 43.05 [12.22] years, 64.7% female, 83.3% non-Hispanic White). Booster protection against any SARS-CoV-2 infection was 66.4% among employees (95% CI: 46.1-79.0%; P<.001) and 45.4% among students (95% CI: 30.0-57.4%; P<.001). Compared to the 2-dose mRNA series, estimated increase in protection from the booster dose was 40.8% among employees (P=.024) and 37.7% among students (P=.001). We did not have enough evidence to conclude a statistically significant protective effect of the 2-dose mRNA vaccination series, nor did we have enough evidence to conclude that protection waned in the 5-month period after receipt of the 2nd or 3rd mRNA dose. Furthermore, we did not find evidence that protection varied by manufacturer. 
We conclude that in adults 18-65 years of age, Covid-19 mRNA booster doses offer moderate protection against general SARS-CoV-2 infection caused by the omicron variant and provide a substantial increase in protection relative to the 2-dose mRNA vaccination series.
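The headline numbers have the familiar form VE = 1 − relative risk. A sketch of that calculation from raw counts, with a Wald interval on the log risk ratio, is below; the counts are illustrative, and the study itself estimated VE within a propensity-score matched cohort rather than from crude counts:

```python
import math

def vaccine_effectiveness(cases_vax, n_vax, cases_unvax, n_unvax, z=1.96):
    """VE = 1 - relative risk, with a Wald interval on the log risk ratio.
    This is a simplification: the study estimated VE within a
    propensity-score matched cohort, not from raw counts."""
    rr = (cases_vax / n_vax) / (cases_unvax / n_unvax)
    se = math.sqrt(1 / cases_vax - 1 / n_vax + 1 / cases_unvax - 1 / n_unvax)
    rr_lo = math.exp(math.log(rr) - z * se)
    rr_hi = math.exp(math.log(rr) + z * se)
    # the VE interval flips because VE = 1 - RR is decreasing in RR
    return 100 * (1 - rr), 100 * (1 - rr_hi), 100 * (1 - rr_lo)
```

For example, 10 cases among 1,000 vaccinated versus 30 among 1,000 unvaccinated gives a point estimate of about 67% effectiveness.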
ABSTRACT
The proportional hazards (PH) model is, arguably, the most popular model for the analysis of lifetime data arising from epidemiological studies, among many others. In such applications, analysts may be faced with censored outcomes and/or studies whose enrollment criteria lead to left truncation. Censored outcomes arise when the event of interest is not observed exactly but is known only relative to one or more observation times. Left-truncated data occur in studies that exclude participants who experienced the event prior to being enrolled in the study. If not accounted for, both of these features can lead to inaccurate inferences about the population under study. To overcome this challenge, herein we propose a novel unified PH model that can accommodate both of these features. In particular, our approach can seamlessly analyze exactly observed failure times along with interval-censored observations, while aptly accounting for left truncation. To facilitate model fitting, an expectation-maximization algorithm is developed through the introduction of carefully structured latent random variables. To provide modeling flexibility, a monotone spline representation is used to approximate the cumulative baseline hazard function. The performance of our methodology is evaluated through a simulation study and is further illustrated through the analysis of two motivating data sets: one involving child mortality in Nigeria and the other prostate cancer.
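To make the data features concrete: an interval-censored subject contributes S(L) − S(R) to the likelihood, and left truncation at tau conditions on survival past tau by dividing by S(tau). A sketch with an exponential baseline hazard (a simplifying assumption for illustration; the paper uses a monotone spline for the cumulative baseline hazard):

```python
import math

def surv(t, rate):
    """Exponential survival function S(t) = exp(-rate * t)."""
    return math.exp(-rate * t)

def loglik_contribution(rate, left, right=None, trunc=0.0):
    """Log-likelihood for one subject under left truncation at `trunc`:
    - exact failure at t = left (right is None): log f(t) - log S(trunc)
    - interval-censored in (left, right]: log[S(left) - S(right)] - log S(trunc)
    Right censoring is the special case right = infinity."""
    if right is None:  # exactly observed failure time
        return math.log(rate * surv(left, rate)) - math.log(surv(trunc, rate))
    s_right = 0.0 if right == float("inf") else surv(right, rate)
    return math.log(surv(left, rate) - s_right) - math.log(surv(trunc, rate))
```

Summing these contributions over subjects gives the observed-data log-likelihood that the paper's EM algorithm maximizes after augmenting with latent variables.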
Subjects
Algorithms, Male, Child, Humans, Proportional Hazards Models, Computer Simulation
ABSTRACT
BACKGROUND: Alcohol use disorder (AUD) has been described as a chronic disease given the high rates at which affected individuals return to drinking after a change attempt. Many studies have characterized predictors of aggregated alcohol use (e.g., percent heavy drinking days) following treatment for AUD. However, to inform future research on predicting drinking as an AUD outcome measure, a better understanding is needed of the patterns of drinking that surround a treatment episode and of which clinical measures predict those patterns. METHODS: We analyzed data from the Project MATCH and COMBINE studies (MATCH: n = 1726; 24.3% female, 20.0% non-White; COMBINE: n = 1383; 30.9% female, 23.2% non-White). Daily drinking was measured in the 90 days prior to treatment, 90 days (MATCH) and 120 days (COMBINE) during treatment, and 365 days following treatment. Gradient boosting machine learning methods were used to explore baseline predictors of drinking patterns. RESULTS: Drinking patterns during a prior time period were the most consistent predictors of future drinking patterns. Social network drinking, AUD severity, mental health symptoms, and constructs based on the addiction cycle (incentive salience, negative emotionality, and executive function) were associated with patterns of drinking prior to treatment. Addiction cycle constructs, AUD severity, purpose in life, social network, legal history, craving, and motivation were associated with drinking during the treatment period and following treatment. CONCLUSIONS: There is heterogeneity in drinking patterns around an AUD treatment episode. This study provides novel information about variables that may be important to measure to improve the prediction of drinking patterns during and following treatment. Future research should consider which patterns of drinking to predict and which period of drinking is most important to predict.
The current findings could guide the selection of predictor variables and generate hypotheses for those predictors.
ABSTRACT
Group testing is the process of testing items as an amalgamation, rather than separately, to determine the binary status of each item. Its use was especially important during the COVID-19 pandemic for testing specimens for SARS-CoV-2. Group testing has been adopted for this and many other applications because members of a negative testing group can be declared negative with potentially only one test. This subsequently leads to significant increases in laboratory testing capacity. Whenever a group testing algorithm is put into practice, it is critical for laboratories to understand the algorithm's operating characteristics, such as the expected number of tests. Our paper presents the binGroup2 package, which provides the statistical tools for this purpose. This R package is the first to address the identification aspect of group testing for a wide variety of algorithms. We illustrate its use through COVID-19 and chlamydia/gonorrhea applications of group testing.
ABSTRACT
Alzheimer's disease is a neurodegenerative condition that accelerates cognitive decline relative to normal aging. It is of critical scientific importance to gain a better understanding of early disease mechanisms in the brain to facilitate effective, targeted therapies. The volume of the hippocampus is often used in diagnosis and monitoring of the disease. Measuring this volume via neuroimaging is difficult since each hippocampus must either be manually identified or automatically delineated, a task referred to as segmentation. Automatic hippocampal segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each hippocampus is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms employ voting procedures with voting weights assigned directly or estimated via optimization. We propose using a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making accessible the entire posterior distribution. Our results suggest that incorporating tissue classification (e.g., gray matter) into the label fusion procedure can greatly improve segmentation when relatively homogeneous, healthy brains are used as atlases for diseased brains. The fully Bayesian approach also produces meaningful uncertainty measures about hippocampal volumes, information which can be leveraged to detect significant, scientifically meaningful differences between healthy and diseased populations, improving the potential for early detection and tracking of the disease.
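The voting-based label fusion that the paper contrasts itself with can be sketched in a few lines; the atlases, labels, and weights below are illustrative, and the paper's own method replaces this vote with a Bayesian spatial regression:

```python
import numpy as np

def weighted_label_fusion(atlas_labels, weights):
    """Fuse binary segmentations propagated from multiple atlases.
    atlas_labels: (n_atlases, n_voxels) array of 0/1 labels on the
    target image; weights: per-atlas voting weights. A voxel is labeled
    1 (hippocampus) when the weighted vote exceeds half the total weight."""
    labels = np.asarray(atlas_labels, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    score = (w * labels).sum(axis=0) / w.sum()
    return (score > 0.5).astype(int)
```

A hard vote like this yields a single segmentation with no uncertainty attached, which is exactly the limitation the fully Bayesian formulation addresses.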
ABSTRACT
BACKGROUND: There is a need to match characteristics of tobacco users with cessation treatments and risks of tobacco-attributable diseases such as lung cancer. The rate at which the body metabolizes nicotine has proven to be an important predictor of these outcomes. Nicotine metabolism is primarily catalyzed by the enzyme cytochrome P450 2A6 (CYP2A6), and CYP2A6 activity can be measured as the ratio of two nicotine metabolites, trans-3'-hydroxycotinine to cotinine, known as the nicotine metabolite ratio (NMR). Measurements of these metabolites are only possible in current tobacco users and vary by biofluid source, timing of collection, and protocols; unfortunately, this has limited their use in clinical practice. The NMR depends highly on genetic variation near CYP2A6 on chromosome 19 as well as ancestry, environmental, and other genetic factors. Thus, we aimed to develop prediction models of nicotine metabolism using genotypes and basic individual characteristics (age, gender, height, and weight). RESULTS: We identified four multiethnic studies with nicotine metabolites and DNA samples. We constructed a 263-marker panel by filtering genome-wide association scans of the NMR in each study. We then applied seven machine learning techniques to train models of nicotine metabolism on the largest and most ancestrally diverse dataset (N=2239). The models were then validated using the other three studies (total N=1415). Using cross-validation, we found the correlations between the observed and predicted NMR ranged from 0.69 to 0.97 depending on the model. When predictions were averaged in an ensemble model, the correlation was 0.81. The ensemble model generalizes well in the validation studies across ancestries, despite differences in the measurements of NMR between studies, with correlations of 0.52 for African ancestry, 0.61 for Asian ancestry, and 0.46 for European ancestry.
The most influential predictors of NMR identified in more than two models were rs56113850, rs11878604, and 21 other genetic variants near CYP2A6, as well as age and ancestry. CONCLUSIONS: We have developed an ensemble of seven models for predicting the NMR across ancestries from genotypes, age, gender, and BMI. These models were validated using three datasets and associate with nicotine dosages. The knowledge of how an individual metabolizes nicotine could be used to help select the optimal path to reducing or quitting tobacco use, as well as to evaluate risks of tobacco use.
Subjects
Cotinine, Nicotine, Cotinine/metabolism, Genome-Wide Association Study, Genotype, Humans, Nicotine/metabolism, Smoking/genetics, Smoking/metabolism
ABSTRACT
Data on effectiveness and protection duration of Covid-19 vaccines and previous infection against general SARS-CoV-2 infection in general populations are limited. Here we evaluate protection from Covid-19 vaccination (primary series) and previous infection in 21,261 university students undergoing repeated surveillance testing between 8/8/2021 and 12/04/2021, during which B.1.617 (delta) was the dominant SARS-CoV-2 variant. Estimated mRNA-1273, BNT162b2, and Ad26.COV2.S effectiveness against any SARS-CoV-2 infection is 75.4% (95% CI: 70.5-79.5), 65.7% (95% CI: 61.1-69.8), and 42.8% (95% CI: 26.1-55.8), respectively. Among previously infected individuals, protection is 72.9% when unvaccinated (95% CI: 66.1-78.4) and increased by 22.1% with full vaccination (95% CI: 15.8-28.7). A statistically significant decline in protection is observed for mRNA-1273 (P < .001) and BNT162b2 (P < .001), but not Ad26.COV2.S (P = 0.40) or previous infection (P = 0.12). mRNA vaccine protection dropped 29.7% (95% CI: 17.9-41.6) six months post-vaccination, from 83.2% to 53.5%. We conclude that the 2-dose mRNA vaccine series initially offers strong protection against general SARS-CoV-2 infection caused by the delta variant in young adults, but protection substantially decreases over time. These findings indicate that vaccinated individuals may still contribute to community spread. While previous SARS-CoV-2 infection consistently provides moderately strong protection against repeat infection from delta, vaccination yields a substantial increase in protection.
Subjects
COVID-19, Viral Vaccines, BNT162 Vaccine, COVID-19/prevention & control, COVID-19 Vaccines, Humans, SARS-CoV-2, Synthetic Vaccines, Young Adult, mRNA Vaccines
ABSTRACT
Group (pooled) testing is becoming a popular strategy for screening large populations for infectious diseases. This popularity is owed to the cost savings that can be realized through implementing group testing methods. These methods involve physically combining biomaterial (e.g., saliva, blood, urine) collected from individuals into pooled specimens which are tested for an infection of interest. Through testing these pooled specimens, group testing methods reduce the cost of diagnosing all individuals under study by reducing the number of tests performed. Even though group testing offers substantial cost reductions, some practitioners are hesitant to adopt group testing methods due to the so-called dilution effect. The dilution effect describes the phenomenon in which biomaterial from negative individuals dilutes the contributions from positive individuals to such a degree that a pool is incorrectly classified. Ignoring the dilution effect can reduce classification accuracy and lead to bias in parameter estimates and inaccurate inference. To circumvent these issues, we propose a Bayesian regression methodology which directly acknowledges the dilution effect while accommodating data that arise from any group testing protocol. As a part of our estimation strategy, we are able to identify pool-specific optimal classification thresholds which are aimed at maximizing the classification accuracy of the group testing protocol being implemented. These two features working in concert effectively alleviate the primary concerns raised by practitioners regarding group testing. The performance of our methodology is illustrated via an extensive simulation study and by being applied to Hepatitis B data collected on Irish prisoners.
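The dilution effect can be made concrete with a toy model in which pool sensitivity decays as positives are outnumbered; the functional form and parameter values below are illustrative assumptions, not the regression model the paper actually fits:

```python
def pool_sensitivity(k_positive, pool_size, se_individual=0.99, gamma=0.5):
    """Toy dilution model: sensitivity of a pooled test containing
    k_positive positive specimens out of pool_size. With gamma > 0,
    sensitivity shrinks as positives are outnumbered; gamma = 0
    recovers the common no-dilution assumption."""
    if k_positive == 0:
        return 0.0  # nothing to detect; specificity is modeled separately
    return se_individual * (k_positive / pool_size) ** gamma
```

Under these assumed values, a single positive in a pool of 10 is detected with probability of only about 0.31 rather than 0.99, which is the kind of misclassification that motivates pool-specific thresholds.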
Subjects
Hepatitis B, Mass Screening, Bayes Theorem, Biocompatible Materials, Computer Simulation, Hepatitis B/diagnosis, Humans, Mass Screening/methods
ABSTRACT
The opioid crisis in the United States poses a major threat to public health through psychiatric and infectious disease comorbidities and deaths from opioid use disorder (OUD). OUD is characterized by patterns of opioid misuse leading to persistent heavy use and overdose. The standard of care for OUD is medication-assisted treatment in combination with behavioral therapy. Medications for opioid use disorder have been shown to improve OUD outcomes, including the reduction and prevention of overdose. However, understanding the effectiveness of such medications has been limited by study patients' non-adherence to assigned dose levels. To overcome this challenge, we develop a model that views dose history as a time-varying covariate, which allows the model to estimate the dose effect while accounting for lapses in adherence. The proposed model is used to conduct a secondary analysis of data from six efficacy and safety trials of buprenorphine maintenance treatment. This analysis provides further insight into the time-dependent treatment effects of buprenorphine and how different dose adherence patterns relate to the risk of opioid use.
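One simple way to encode dose history as a time-varying covariate is an exponentially weighted sum of past doses; this is a hypothetical construction for illustration only, and the paper's actual model specification may differ:

```python
def time_weighted_dose(dose_history, decay=0.8):
    """Hypothetical time-varying covariate: for each day, an exponentially
    weighted average of past daily doses (mg). Missed doses (zeros, i.e.
    lapses in adherence) pull the covariate down gradually rather than
    discarding the patient's prior dosing history. The decay value is an
    illustrative assumption, not an estimate from the trials."""
    covariate, carry = [], 0.0
    for dose in dose_history:
        carry = decay * carry + (1 - decay) * dose
        covariate.append(carry)
    return covariate

# 16 mg/day for 3 days, then a 3-day lapse in adherence
print([round(x, 2) for x in time_weighted_dose([16, 16, 16, 0, 0, 0])])
```

The covariate rises while the patient takes the medication and decays during the lapse, so the model sees a smooth exposure trajectory rather than an abrupt on/off indicator.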
Subjects
Buprenorphine, Drug Overdose, Opioid-Related Disorders, Opioid Analgesics/therapeutic use, Buprenorphine/therapeutic use, Drug Overdose/drug therapy, Humans, Opiate Substitution Treatment, Opioid Epidemic, Opioid-Related Disorders/drug therapy, United States
ABSTRACT
The impact of agonist dose and of physician, staff, and patient engagement on treatment has not previously been evaluated together in an analysis of treatment for opioid use disorder. Our hypotheses were that greater agonist dose and therapeutic engagement would be associated with reduced illicit opiate use in a time-dependent manner. Publicly available treatment data from six buprenorphine efficacy and safety trials from the Federally supported Clinical Trials Network were used to derive treatment variables. Three novel predictors were constructed to capture the time-weighted effects of buprenorphine dosage (mg buprenorphine per day), dosing protocol (whether the physician could adjust the dose), and clinic visits (whether the patient attended clinic). We used time-in-trial as a predictor to account for the therapeutic benefits of treatment persistence. The outcome was illicit opiate use defined by self-report or urinalysis. Trial participants (N = 3022 patients with opioid dependence; mean age 36 years; 33% female, 14% Black, 16% Hispanic) were analyzed using a generalized linear mixed model. The treatment variables dose (odds ratio (OR) = 0.63, 95% confidence interval (CI) 0.59-0.67), dosing protocol (OR = 0.70, 95% CI 0.65-0.76), time-in-trial (OR = 0.75, 95% CI 0.71-0.80), and clinic visits (OR = 0.81, 95% CI 0.76-0.87) were all significant protective factors (p-values < 0.001). The treatment implications support higher doses of buprenorphine and greater engagement of patients with providers and clinic staff.
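Odds ratios like those reported above come from exponentiating the fitted model coefficients. A minimal sketch of that conversion, using a hypothetical coefficient and standard error chosen so the output matches the reported dose OR of 0.63 (the trials' actual estimates were produced by the mixed model, not this snippet):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic (mixed) model coefficient and its standard
    error into an odds ratio with a 95% Wald confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for a time-weighted dose predictor:
# a negative beta means higher dose lowers the odds of illicit opiate use.
or_est, lo, hi = odds_ratio_ci(-0.462, 0.033)
print(f"OR = {or_est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR below 1 with a confidence interval excluding 1 is what marks each of the four treatment variables as a protective factor.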
Subjects
Buprenorphine, Opiate Alkaloids, Opioid-Related Disorders, Adult, Opioid Analgesics/therapeutic use, Buprenorphine/therapeutic use, Clinical Trials as Topic, Female, Humans, Male, Opiate Alkaloids/therapeutic use, Opiate Substitution Treatment/methods, Opioid-Related Disorders/drug therapy
ABSTRACT
BACKGROUND: Stalk lodging (the breaking of agricultural plant stalks prior to harvest) is a multi-billion-dollar-a-year problem. Stalk lodging occurs when high winds induce bending moments in the stalk that exceed the bending strength of the plant. Previous biomechanical models of plant stalks have investigated the effect of cross-sectional morphology (e.g., diameter and rind thickness) on stalk lodging resistance. However, it is unclear whether the location of stalk failure along the length of the stem is determined by morphological or compositional factors. It is also unclear whether crops are structurally optimized, i.e., whether plants allocate structural biomass to create uniform and minimal bending stresses in their tissues. The purpose of this paper is twofold: (1) to investigate the relationship between bending stress and failure location in maize stalks, and (2) to investigate the potential of phenotyping for internode-level bending stresses to assess lodging resistance. RESULTS: 868 maize specimens representing 16 maize hybrids were successfully tested in bending to failure. Internode morphology was measured, and bending stresses were calculated. Bending stress was found to be strongly and positively associated with failure location. A user-friendly computational tool is presented to help plant breeders phenotype for internode-level bending stress, which could potentially be used to breed for more biomechanically optimal stalks that resist stalk lodging. CONCLUSIONS: Internode-level bending stress plays a potentially critical role in the structural integrity of plant stems. The equations and tools provided herein enable researchers to account for this phenotype, which has the potential to increase the bending strength of plants without increasing overall structural biomass.
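The internode-level bending stress underlying these results follows the standard flexure formula sigma = M * c / I. A minimal sketch that idealizes the internode as a hollow circular section (an assumption for illustration; maize internodes are roughly elliptical, and the paper's computational tool may use different equations):

```python
import math

def max_bending_stress(moment, outer_diameter, rind_thickness):
    """Maximum bending stress (Pa) at the outer fiber of a stalk internode
    idealized as a hollow circular tube of structural rind.
    moment in N*m, dimensions in m. Illustrative simplification only."""
    d_out = outer_diameter
    d_in = outer_diameter - 2 * rind_thickness
    # Second moment of area for a hollow circular cross-section
    second_moment = math.pi * (d_out**4 - d_in**4) / 64
    c = d_out / 2  # distance from the neutral axis to the outer fiber
    return moment * c / second_moment

# Hypothetical internode: 20 mm diameter, 3 mm rind, 5 N*m bending moment
sigma = max_bending_stress(5.0, 0.020, 0.003)
print(f"{sigma / 1e6:.1f} MPa")
```

Because stress scales inversely with the fourth power of diameter, modest increases in internode diameter or rind thickness sharply reduce peak bending stress, which is why cross-sectional morphology dominates earlier lodging models.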