Results 1 - 20 of 56,275
2.
Sci Rep ; 11(1): 11838, 2021 06 04.
Article in English | MEDLINE | ID: mdl-34088959

ABSTRACT

Masks are a vital tool for limiting SARS-CoV-2 spread in the population. Here we utilize a mathematical model to assess the impact of masking on transmission within individual transmission pairs and at the population level. Our model quantitatively links mask efficacy to reductions in viral load and subsequent transmission risk. Our results reinforce that the use of masks by both a potential transmitter and exposed person substantially reduces the probability of successful transmission, even if masks only lower exposure viral load by ~50%. Slight increases in mask adherence and/or efficacy above current levels would reduce the effective reproductive number (Re) substantially below 1, particularly if implemented comprehensively in potential super-spreader environments. Our model predicts that moderately efficacious masks will also lower exposure viral load tenfold among people who get infected despite masking, potentially limiting infection severity. Because peak viral load tends to occur pre-symptomatically, we also identify that antiviral therapy targeting symptomatic individuals is unlikely to impact transmission risk. Instead, antiviral therapy would only lower Re if dosed as post-exposure prophylaxis and if given to ~50% of newly infected people within 3 days of an exposure. These results highlight the primacy of masking relative to other biomedical interventions under consideration for limiting the extent of the COVID-19 pandemic prior to widespread implementation of a vaccine. To confirm this prediction, we used a regression model of King County, Washington data and simulated the counterfactual scenario without mask wearing to estimate that in the absence of additional interventions, mask wearing decreased Re from 1.3-1.5 to ~1.0 between June and September 2020.
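The abstract's central mechanism, a mask lowering exposure viral load and thereby lowering per-contact transmission probability, can be sketched with a saturating (Hill-type) dose-response. The parameters `lam` and `h` and the exposure load are hypothetical illustrations, not the paper's fitted values:

```python
def transmission_prob(viral_load, lam=1e7, h=1.0):
    """Hill-type dose-response: probability of transmission given an
    exposure viral load. lam (half-saturation) and h (steepness) are
    hypothetical parameters for illustration only."""
    return viral_load**h / (viral_load**h + lam**h)

v = 1e8                                   # hypothetical unmasked exposure load
p_unmasked = transmission_prob(v)
# If each mask halves exposure viral load, masking of both the source
# and the exposed person quarters the exposure:
p_both_masked = transmission_prob(v * 0.5 * 0.5)
```

Even with only ~50% load reduction per mask, the combined effect on transmission probability is substantial, which is the qualitative point the model makes.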


Subject(s)
COVID-19/transmission , Masks , SARS-CoV-2/physiology , Viral Load , Basic Reproduction Number , COVID-19/prevention & control , Humans , Models, Biological , Probability
3.
Comput Methods Programs Biomed ; 206: 106115, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33992900

ABSTRACT

BACKGROUND AND OBJECTIVE: With the recent surge in availability of large biomedical databases mostly derived from electronic health records, the need for the development of scalable marginal survival models with faster implementation could not be more timely. The presence of clustering introduces computational complexity, especially when the number of clusters is high. Marginalizing conditional survival models can violate the proportional hazards assumption for some frailty distributions, disrupting the connection to a conditional model. While theoretical connections between proportional hazard and accelerated failure time models exist, a computational framework to produce both for either marginal or conditional perspectives is lacking. Our objective is to provide fast, scalable bridged-survival models contained in a unified framework from which the effects and standard errors for the conditional hazard ratio, the marginal hazard ratio, the conditional acceleration factor, and the marginal acceleration factor can be estimated, and related to one another in a transparent fashion. METHODS: We formulate a Weibull parametric frailty likelihood for clustered survival times that can directly estimate the four estimands. Under a nonlinear mixed model specification with positive stable frailties powered by Gaussian quadrature, we put forth a novel closed form of the integrated likelihood that lowered the computational threshold for fitting these models. The method is illustrated on a real dataset generated from electronic health records examining tooth loss. RESULTS: Our novel closed form of the integrated likelihood significantly lowered the computational threshold for fitting these models by a factor of 12 (36 min compared to 3 min) for the R package parfm, and a factor of 2400 for Gaussian quadrature (4.6 days compared to 3 min) in SAS.
Moreover, each of these estimands is connected by simple relationships of the parameters, and the proportional hazards assumption is preserved for the marginal model. Our framework provides a flow of analysis enabling the fit of any or all of the four perspective-parameterization combinations. CONCLUSIONS: We see the potential usefulness of our framework of bridged parametric survival models fitted with the Static-Stirling closed form likelihood. Bridged-survival models provide insights on subject-specific and population-level survival effects when their relation is transparent. SAS and R code, along with implementation details on pseudo data, are provided.
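The bridging claimed in the abstract can be made explicit for positive stable frailty; the following is a sketch for a single covariate $x$ (our notation, not necessarily the paper's):

```latex
\begin{aligned}
\lambda(t \mid u_i, x) &= u_i\,\alpha\lambda^{\alpha}t^{\alpha-1}e^{\beta x},
\qquad E\!\left[e^{-s u_i}\right] = e^{-s^{\gamma}},\ \ 0<\gamma<1,\\[4pt]
S_m(t \mid x) &= E\!\left[\exp\!\left\{-u_i(\lambda t)^{\alpha}e^{\beta x}\right\}\right]
 = \exp\!\left\{-(\lambda t)^{\alpha\gamma}e^{\gamma\beta x}\right\}.
\end{aligned}
```

The marginal model is again Weibull (shape $\alpha\gamma$), so proportional hazards is preserved: the conditional hazard ratio is $e^{\beta}$, the marginal hazard ratio is $e^{\gamma\beta}$, and both the conditional and marginal acceleration factors equal $e^{\beta/\alpha}$, since the time rescaling $t \mapsto t\,e^{\beta x/\alpha}$ survives the marginalization.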


Subject(s)
Models, Statistical , Cluster Analysis , Likelihood Functions , Normal Distribution , Probability , Proportional Hazards Models , Survival Analysis
4.
Sci Rep ; 11(1): 10075, 2021 05 12.
Article in English | MEDLINE | ID: mdl-33980969

ABSTRACT

The aims were to estimate the reproductive number (R0) of the coronavirus in the present scenario and to predict the daily incidence and probable cumulative cases for Karnataka state in India by 20 August 2020. The model used a serial interval with a gamma distribution and applied the 'earlyR' and 'projections' packages in R. This was performed to mimic the probable cumulative epidemic trajectories and predict future daily incidence by fitting the data to the existing daily incidence and the estimated R0, based on the assumption that daily incidence follows a Poisson distribution. The maximum-likelihood (ML) value of R0 was 2.242 for the COVID-19 outbreak, as of June 2020. The median R0 with 95% CI was 2.242 (1.50-3.00), estimated by the bootstrap resampling method. The expected number of new cases for the next 60 days would progressively increase, and the estimated cumulative cases would reach 27,238 (26,008-28,467) by the end of the 60th day. If the R0 value doubled, the estimated total number of cumulative cases would increase to 432,411 (400,929-463,893), and if R0 increased by 50%, the cases would increase to 86,386 (80,910-91,861). The probable outbreak size and future cumulative daily incidence are largely dependent on changes in R0 values. Hence, it is vital to expedite hospital provisions, medical facility enhancement work, and the number of random tests for COVID-19 at a very rapid pace to prepare the state for exponential growth in the next 2 months.
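The projection logic, daily incidence with a Poisson mean driven by R0 and a gamma-distributed serial interval, follows the renewal equation. A minimal deterministic sketch (the gamma parameters here are hypothetical, not the paper's estimates):

```python
import math

def gamma_pdf(x, shape, rate):
    """Gamma density, used to discretize the serial interval."""
    return rate**shape * x**(shape - 1) * math.exp(-rate * x) / math.gamma(shape)

# Discretized serial-interval weights over days 1..14 (hypothetical
# mean of ~5 days; the paper's fitted parameters are not given here).
w = [gamma_pdf(d, shape=2.5, rate=0.5) for d in range(1, 15)]
w = [x / sum(w) for x in w]

def project(incidence, R0, days):
    """Expected daily incidence under the renewal equation
    I_t = R0 * sum_s w_s * I_{t-s} (the Poisson mean)."""
    inc = list(incidence)
    for _ in range(days):
        mean = R0 * sum(ws * inc[-s] for s, ws in enumerate(w, start=1))
        inc.append(mean)
    return inc

traj = project([10.0] * 14, R0=2.242, days=60)
```

Doubling R0 in this sketch produces a dramatically larger 60-day cumulative total, mirroring the sensitivity to R0 reported in the abstract.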


Subject(s)
COVID-19/epidemiology , Basic Reproduction Number , COVID-19/diagnosis , Humans , Incidence , India/epidemiology , Probability , Prognosis , SARS-CoV-2/isolation & purification
5.
BMC Bioinformatics ; 22(1): 225, 2021 May 01.
Article in English | MEDLINE | ID: mdl-33932975

ABSTRACT

BACKGROUND: In phylogenetic analysis, it is common to infer unrooted trees. However, knowing the root location is desirable for downstream analyses and interpretation. There exist several methods to recover a root, such as molecular clock analysis (including midpoint rooting) or rooting the tree using an outgroup. Non-reversible Markov models can also be used to compute the likelihood of a potential root position. RESULTS: We present RootDigger, a software tool that uses a non-reversible Markov model to compute the most likely root location on a given tree and to infer a confidence value for each possible root placement. We find that RootDigger is successful at finding roots when compared to similar tools such as IQ-TREE and MAD, and occasionally outperforms them. Additionally, we find that the exhaustive mode of RootDigger is useful in quantifying and explaining uncertainty in rooting positions. CONCLUSIONS: RootDigger can be used on an existing phylogeny to find a root, or to assess the uncertainty of the root placement. RootDigger is available under the MIT licence at https://www.github.com/computations/root_digger .


Subject(s)
Evolution, Molecular , Software , Models, Genetic , Phylogeny , Probability , Uncertainty
6.
J Forensic Odontostomatol ; 1(39): 16-23, 2021 Apr 30.
Article in English | MEDLINE | ID: mdl-34057154

ABSTRACT

Juvenile crime or delinquency has been increasing at an alarming rate in recent times. In many countries, including India, the minimum age for criminal responsibility is 16 years. The present study aimed to estimate the probability of a south Indian adolescent either being or being older than the legally relevant age of 16 years using Demirjian's tooth formation stages. Orthopantomograms (OPG) of 640 south Indian adolescents (320 boys and 320 girls) aged between 12 and 20 years were retrospectively analyzed. In each OPG, Demirjian's formation stage of the mandibular left third molar was recorded and the data were subjected to statistical analysis. Descriptive and Pearson's correlation statistics were performed. The empirical probabilities were provided relative to the medico-legal question of predicting 16 years of age. The distribution of age throughout the 10th, 25th, 50th, 75th and 90th percentiles follows a logical distribution pattern horizontally and vertically. Pearson's correlation statistics showed a strong positive correlation between Demirjian's stages and age for both sexes. Therefore, it can be concluded that stage "F" can be used to predict the attainment of age equal to or older than 16 years with a probability of 93.9% for boys and 96.6% for girls.
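The empirical probability in the conclusion is simply a conditional proportion: among individuals whose third molar reached stage F, the fraction aged 16 or older. The counts below are illustrative only, chosen to reproduce the reported probabilities; the paper's actual stage-by-age tables are not shown in the abstract:

```python
def prob_age_at_least_16(n_at_least_16, n_total):
    """Empirical probability P(age >= 16 | tooth stage),
    the conditional proportion within one Demirjian stage."""
    return n_at_least_16 / n_total

# Hypothetical stage-F counts consistent with the reported values:
p_boys = prob_age_at_least_16(62, 66)    # ~93.9%
p_girls = prob_age_at_least_16(57, 59)   # ~96.6%
```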


Subject(s)
Age Determination by Teeth , Molar, Third , Adolescent , Adult , Child , Female , Humans , India/epidemiology , Male , Molar, Third/diagnostic imaging , Probability , Retrospective Studies , Young Adult
7.
Database (Oxford) ; 20212021 05 08.
Article in English | MEDLINE | ID: mdl-33963845

ABSTRACT

Numerous studies demonstrate frequent mutations in the genome of SARS-CoV-2. Our goal was to statistically link mutations to severe disease outcome. We used an automated machine learning approach where 1594 viral genomes with available clinical follow-up data were used as the training set (797 'severe' and 797 'mild'). The best algorithm, based on random forest classification combined with the LASSO feature selection algorithm, was applied to the training set to link mutation signatures and outcome. The performance of the final model was estimated by repeated, stratified, 10-fold cross validation (CV) and then adjusted for multiple testing with Bootstrap Bias Corrected CV. We identified 26 protein and Untranslated Region (UTR) mutations significantly linked to severe outcome. The best classification algorithm uses a mutation signature of 22 mutations as well as the patient's age as the input and shows high classification efficiency with an area under the curve (AUC) of 0.94 [confidence interval (CI): [0.912, 0.962]] and a prediction accuracy of 87% (CI: [0.830, 0.903]). Finally, we established an online platform (https://covidoutcome.com/) that is capable of using a viral sequence and the patient's age as the input and provides a percentage estimation of disease severity. We demonstrate a statistical association between mutation signatures of SARS-CoV-2 and severe outcome of COVID-19. The established analysis platform enables a real-time analysis of new viral genomes.
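The evaluation scheme, stratified 10-fold cross validation on 797 'severe' and 797 'mild' genomes, splits the data so that each fold preserves the class balance. A self-contained sketch of the splitting step (the paper's pipeline presumably uses a standard library implementation; this is only to show the mechanics):

```python
import random

def stratified_kfold(labels, k=10, seed=0):
    """Yield k (train, test) index splits that preserve class balance,
    as in stratified k-fold cross validation."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):        # deal indices round-robin
            folds[j % k].append(i)
    for f in range(k):
        test = sorted(folds[f])
        train = sorted(i for g in range(k) if g != f for i in folds[g])
        yield train, test

labels = ["severe"] * 797 + ["mild"] * 797
splits = list(stratified_kfold(labels, k=10))
```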


Subject(s)
COVID-19/genetics , COVID-19/pathology , Genome, Viral , Mutation , SARS-CoV-2/genetics , Severity of Illness Index , Area Under Curve , COVID-19/virology , Datasets as Topic , Humans , Machine Learning , Probability , Untranslated Regions
8.
J Radiat Res ; 62(Supplement_1): i88-i94, 2021 May 05.
Article in English | MEDLINE | ID: mdl-33978175

ABSTRACT

After chemical, biological, radiological, nuclear or explosive (CBRNE) disasters, trepidation and infodemics about invisible hazards may cause indirect casualties in the affected society. Effective communication regarding technical issues between disaster experts and the residents is key to averting such secondary impacts. However, misconceptions about scientific issues and mistrust in experts frequently occur even with intensive and sincere communications. This miscommunication is usually attributed to residents' conflicts with illiteracy, emotion, value dispositions and ideologies. However, considering that communication is an interactive process, there are likely to be additional factors attributable to experts. This article aims to summarize the gaps in rationality between experts and residents observed after the 2011 Fukushima nuclear disaster to describe how residents perceived experts. There were discrepancies in the perception of 'facts', the perception of probability, the interpretation of risk comparison, what were included as risk trade-offs, the view of the disaster, whose behavior would be changed by the communication and whether risk should be considered a science. These findings suggest that there was a non-scientific rationality among residents, which often exercised a potent influence on everyday decision-making. It might not be residents but experts who need to change their behavior. The discrepancies described in this article are likely to apply to communications following any CBRNE disasters that affect people's lives, such as the current COVID-19 pandemic. Therefore, our experiences in Fukushima may provide clues to averting mutual mistrust between experts and residents and achieving better public health outcomes during and after a crisis.


Subject(s)
Communication , Fukushima Nuclear Accident , COVID-19/epidemiology , COVID-19/virology , Humans , Probability , Risk Factors , SARS-CoV-2/physiology
9.
BMC Bioinformatics ; 22(1): 254, 2021 May 17.
Article in English | MEDLINE | ID: mdl-34000989

ABSTRACT

BACKGROUND: Colocalization is a statistical method used in genetics to determine whether the same variant is causal for multiple phenotypes, for example, complex traits and gene expression. It provides stronger mechanistic evidence than shared significance, which can be produced through separate causal variants in linkage disequilibrium. Current colocalization methods require full summary statistics for both traits, limiting their use with the majority of reported GWAS associations (e.g. GWAS Catalog). We propose a new approximation to the popular coloc method that can be applied when limited summary statistics are available. Our method (POint EstiMation of Colocalization, POEMColoc) imputes missing summary statistics for one or both traits using LD structure in a reference panel, and performs colocalization using the imputed summary statistics. RESULTS: We evaluate the performance of POEMColoc using real (UK Biobank phenotypes and GTEx eQTL) and simulated datasets. We show good correlation between posterior probabilities of colocalization computed from imputed and observed datasets and similar accuracy in simulation. We evaluate scenarios that might reduce performance and show that multiple independent causal variants in a region and imputation from a limited subset of typed variants have a larger effect while mismatched ancestry in the reference panel has a modest effect. Further, we find that POEMColoc is a better approximation of coloc when the imputed association statistics are from a well powered study (e.g., relatively larger sample size or effect size). Applying POEMColoc to estimate colocalization of GWAS Catalog entries and GTEx eQTL, we find evidence for colocalization of 150,000 trait-gene-tissue triplets. CONCLUSIONS: We find that colocalization analysis performed with full summary statistics can be closely approximated when only the summary statistics of the top SNP are available for one or both traits. 
When applied to the full GWAS Catalog and GTEx eQTL, we find that colocalized trait-gene pairs are enriched in tissues relevant to disease etiology and for matches to approved drug mechanisms. The POEMColoc R package is available at https://github.com/AbbVie-ComputationalGenomics/POEMColoc .


Subject(s)
Genome-Wide Association Study , Quantitative Trait Loci , Linkage Disequilibrium , Multifactorial Inheritance , Polymorphism, Single Nucleotide , Probability
10.
J Environ Manage ; 291: 112664, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-33975269

ABSTRACT

Reliable estimates of wildlife mortality due to wildlife-vehicle collisions are key to understanding its impact on wildlife populations and developing strategies to prevent or reduce collisions. Standardised approaches for monitoring roadkill are needed to derive robust and unbiased estimates of mortality that are comparable across different study systems and ecological contexts. When designing surveys, there is a trade-off between survey frequency (and hence logistical effort and financial cost) and carcass detection. In this regard, carcass persistence (the period a carcass remains detectable before being removed by decomposition or scavengers) is important; the longer a carcass persists, the greater the likelihood it will be detected with lower survey effort by conducting more infrequent surveys. Using multi-taxon carcass data collected over a month of repeated driven surveys, combined with five covariates (species functional group, body weight, carcass position on road, carcass condition [either flattened or not after impact], and rainfall prior to each survey), we explored the drivers of carcass persistence with the overall aim of providing information to optimise the design of carcass surveys along linear infrastructure. Our methodological approach included a survival analysis to determine carcass persistence, linear regressions to test the effect of covariates, a subsampling analysis (using field data and a simulation exercise) to assess how the proportion of carcasses detected changes according to survey frequency, and an analysis to compare the costs of surveys based on study duration, transect length and survey frequency. Mean overall carcass persistence was 2.7 days and was significantly correlated with position on road and within-functional group body weight. There was no evidence for a significant effect of rainfall, while the effect of carcass condition was weakly non-significant. 
The proportion of carcasses detected decreased sharply when survey intervals were longer than three days. However, we showed that survey costs can be reduced by up to 80% by conducting non-daily surveys. Expanding on the call for a standardised methodology for roadkill surveys, we propose that carcass persistence be explicitly considered during survey design. By carefully considering the objectives of the survey and characteristics of the focal taxa, researchers can substantially reduce logistical costs. In addition, we developed an R Shiny web app that can be used by practitioners to compare survey costs across a variety of survey characteristics. This web app will allow practitioners to easily assess the trade-off between carcass detection and logistical effort.
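The survey-frequency trade-off described above can be illustrated with a simple persistence model. Assuming exponentially distributed persistence times with the reported 2.7-day mean, and carcass arrival uniform within the survey interval (modelling assumptions for illustration, not the paper's exact estimator), the expected fraction of carcasses still present at the next survey falls quickly as the interval grows:

```python
import math

MEAN_PERSISTENCE = 2.7  # days, overall mean reported in the study

def fraction_detected(interval, mean=MEAN_PERSISTENCE):
    """Expected fraction of carcasses still present at the next survey,
    assuming exponential persistence and uniform arrival times.
    This averages exp(-t/mean) over arrival time t in (0, interval)."""
    rate = 1.0 / mean
    return (1 - math.exp(-rate * interval)) / (rate * interval)

detect = {d: fraction_detected(d) for d in (1, 2, 3, 5, 7)}
```

Under these assumptions the detectable fraction drops from roughly 84% with daily surveys to about a third with weekly surveys, consistent with the sharp decline beyond three-day intervals noted in the abstract.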


Subject(s)
Animals, Wild , Animals , Probability , Surveys and Questionnaires
11.
Adv Exp Med Biol ; 1269: 185-190, 2021.
Article in English | MEDLINE | ID: mdl-33966215

ABSTRACT

In radiotherapy, hypoxia is a known negative factor, occurring especially in solid malignant tumours. Nitroimidazole-based positron emission tomography (PET) tracers, due to their selective binding to hypoxic cells, could be used as surrogates to image and quantify the underlying oxygen distributions in tissues. The spatial resolution of a clinical PET image, however, is much larger than the cellular spatial scale where hypoxia occurs. A question therefore arises regarding the possibility of quantifying different hypoxia levels based on PET images; the aim of the present study is to explore this possibility and the prescription of corresponding therapeutic doses. A tumour oxygenation model was created consisting of two concentric spheres with different oxygen partial pressure (pO2) distributions. In order to mimic a PET image of the simulated tumour, given the relation between uptake and pO2, fundamental effects that limit spatial resolution in a PET imaging system were considered: the uptake distribution was processed with a Gaussian 3D filter, and a re-binning to reach a typical PET image voxel size was performed. Prescription doses to overcome tumour hypoxia and predicted tumour control probability (TCP) were calculated based on the processed images for several fractionation schemes. Knowing the underlying oxygenation at the microscopic scale, the actual TCP expected after the delivery of the calculated prescription doses was evaluated. Results are presented for three different dose painting strategies: by numbers, by contours and by using a voxel grouping-based approach. The differences between predicted TCP and evaluated TCP indicate that careful consideration must be given to the dose prescription strategy and the selection of the number of fractions, depending on the severity of hypoxia.
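A common Poissonian TCP formulation with an oxygen-dependent radiosensitivity illustrates the dose-prescription logic; the exact model and parameter values used in the chapter may differ from this sketch:

```latex
\mathrm{TCP} \;=\; \prod_{v}
\exp\!\left[-N_v\,
\exp\!\left(-n\left(\alpha\,d_v\,\frac{s(p_v)}{s_{\max}}
 + \beta\,d_v^{2}\left(\frac{s(p_v)}{s_{\max}}\right)^{2}\right)\right)\right],
\qquad
s(p) \;=\; \frac{s_{\max}\,p + K}{p + K},
```

where $N_v$ is the clonogen number in voxel $v$, $d_v$ the dose per fraction over $n$ fractions, and $s(p)/s_{\max}$ the relative radiosensitivity at oxygen tension $p$ in the Alper--Howard-Flanders form (typical values $s_{\max}\approx 3$, $K\approx 2.5$ mmHg). Fully oxygenated voxels have $s/s_{\max}\to 1$; anoxic voxels are about three-fold more resistant, which is why hypoxic subvolumes demand escalated prescription doses.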


Subject(s)
Neoplasms , Humans , Neoplasms/diagnostic imaging , Oxygen , Partial Pressure , Positron-Emission Tomography , Probability
12.
Comput Math Methods Med ; 2021: 6634887, 2021.
Article in English | MEDLINE | ID: mdl-33968159

ABSTRACT

In recent statistical quality control studies, researchers have paid increasing attention to quality characteristics with nonnormal distributions. In the present article, a generalized multiple dependent state (GMDS) sampling control chart is proposed based on the transformation of gamma quality characteristics into a normal distribution. The parameters for the proposed control charts are obtained using the in-control average run length (ARL) at specified shape parameter values for different specified average run lengths. The out-of-control ARL of the proposed gamma control chart using GMDS sampling is explored using simulation for various shift sizes in the scale parameter to study the performance of the control chart. The proposed gamma control chart performs better than the existing multiple dependent state (MDS) sampling chart based on the gamma distribution and traditional Shewhart control charts in terms of average run lengths. A case study with real-life data from ICU intake to death caused by COVID-19 has been incorporated for the realistic handling of the proposed control chart design.
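The ARL criterion used to calibrate the chart parameters has a simple interpretation in the baseline Shewhart case: the in-control run length is geometric with the per-sample false-alarm probability, so the in-control ARL is its reciprocal. A sketch for classical two-sided 3-sigma limits (the GMDS chart itself solves for its design parameters to hit a target ARL, which this does not reproduce):

```python
from statistics import NormalDist

# Per-sample false-alarm probability for two-sided 3-sigma limits
# on a normal (or normal-transformed) statistic.
p = 2 * (1 - NormalDist().cdf(3.0))

# Run length until a false alarm is geometric, so the in-control
# average run length is 1/p (the textbook ~370 for 3-sigma charts).
arl0 = 1 / p
```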


Subject(s)
COVID-19/epidemiology , COVID-19/mortality , Intensive Care Units , Algorithms , China/epidemiology , Computer Simulation , Critical Care/methods , Humans , Models, Statistical , Probability , Quality Control
13.
J Environ Manage ; 292: 112822, 2021 Aug 15.
Article in English | MEDLINE | ID: mdl-34030017

ABSTRACT

Estimating the composition of construction waste is crucial to the efficient operation of various waste management facilities, such as landfills, public fills, and sorting plants. However, this estimating task is often challenged by the need for both speed and accuracy in real-life scenarios. By harnessing a valuable data set in Hong Kong, this research develops a big data-probability (BD-P) model to estimate construction waste composition based on bulk density. Using a saturated data set of 4.27 million truckloads of construction waste, the probability distribution of construction waste bulk density is derived, and then, based on the Law of Joint Probability, the BD-P model is developed. A validation experiment using 604 ground truth data entries indicates a model accuracy of 90.2%, Area Under Curve (AUC) of 0.8775, and speed of around 52 s per load in estimating the composition of each incoming construction waste load. The BD-P model also informed a linear model which can perform the estimation with an accuracy of 88.8% while consuming only 0.4 s per case. The major novelty of this research is to harmonize big data analytics and traditional probability theory in addressing the classic challenge of predictive analysis. In the practical sphere, it satisfactorily solves the construction waste estimation problem faced by many waste management facility operators. In the academic sphere, this research provides a vivid example that big data and theories are not adversaries, but allies.
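The core probabilistic step, inferring composition class from a measured bulk density via joint probabilities, amounts to Bayes' rule over density distributions learned from the truckload data. A toy two-class sketch with hypothetical density models (not the paper's fitted distributions):

```python
from statistics import NormalDist

# Hypothetical class-conditional bulk-density models (t/m3) and priors.
classes = {
    "inert": (NormalDist(mu=1.8, sigma=0.3), 0.4),
    "mixed": (NormalDist(mu=0.9, sigma=0.3), 0.6),
}

def posterior(bulk_density):
    """P(class | bulk density) via the law of joint probability:
    joint = P(density | class) * P(class), then normalize."""
    joint = {c: dist.pdf(bulk_density) * prior
             for c, (dist, prior) in classes.items()}
    total = sum(joint.values())
    return {c: j / total for c, j in joint.items()}

post = posterior(1.7)   # a dense load should look inert
```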


Subject(s)
Construction Materials , Waste Management , Big Data , Hong Kong , Probability , Waste Disposal Facilities
14.
J Phys Chem B ; 125(18): 4667-4680, 2021 05 13.
Article in English | MEDLINE | ID: mdl-33938737

ABSTRACT

To gain insight into the reaction mechanism of activated processes, we introduce an exact approach for quantifying the topology of high-dimensional probability surfaces of the underlying dynamic processes. Instead of Morse indexes, we study the homology groups of a sequence of superlevel sets of the probability surface over high-dimensional configuration spaces using persistent homology. For alanine-dipeptide isomerization, a prototype of activated processes, we identify locations of probability peaks and connecting ridges, along with measures of their global prominence. Instead of a saddle point, the transition state ensemble (TSE) of conformations is at the most prominent probability peak after reactants/products, when proper reaction coordinates are included. Intuition-based models, even those exhibiting a double-well, fail to capture the dynamics of the activated process. Peak occurrence, prominence, and locations can be distorted upon subspace projection. While principal component analysis accounts for conformational variance, it inflates the complexity of the surface topology and destroys the dynamic properties of the topological features. In contrast, TSE emerges naturally as the most prominent peak beyond the reactant/product basins, when projected to a subspace of minimum dimension containing the reaction coordinates. Our approach is general and can be applied to investigate the topology of high-dimensional probability surfaces of other activated processes.


Subject(s)
Alanine , Dipeptides , Molecular Conformation , Probability
15.
Nat Commun ; 12(1): 2175, 2021 04 12.
Article in English | MEDLINE | ID: mdl-33846353

ABSTRACT

In the 1970s, Paul Martin proposed that big game hunters armed with fluted projectile points colonized the Americas and drove the extinction of megafauna. Around fifty years later, the central role of humans in the extinctions is still strongly debated in North American archaeology, but little considered in South America. Here we analyze the temporal dynamic and spatial distribution of South American megafauna and fluted (Fishtail) projectile points to evaluate the role of humans in Pleistocene extinctions. We observe a strong relationship between the temporal density and spatial distribution of megafaunal species stratigraphically associated with humans and Fishtail projectile points, as well as with the fluctuations in human demography. On this basis we propose that the direct effect of human predation was the main factor driving the megafaunal decline, with other secondary, but necessary, co-occurring factors for the collapse of the megafaunal community.


Subject(s)
Extinction, Biological , Paleontology , Population Dynamics , Animals , Archaeology , Humans , Mammals/physiology , Probability , South America , Species Specificity , Time Factors
16.
J Korean Med Sci ; 36(14): e101, 2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33847084

ABSTRACT

We evaluated the Standard Q COVID-19 Ag test for the diagnosis of coronavirus disease 2019 (COVID-19) compared to the reverse transcription-polymerase chain reaction (RT-PCR) test. We applied both tests to patients who were about to be hospitalized, had visited an emergency room, or had been admitted due to COVID-19 confirmed by RT-PCR. Two nasopharyngeal swabs were obtained; one was tested by RT-PCR and the other by the Standard Q COVID-19 Ag test. A total of 118 pairs of tests from 98 patients were performed between January 5 and 11, 2021. The overall sensitivity and specificity for detecting severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) for the Standard Q COVID-19 Ag test compared to RT-PCR were 17.5% (95% confidence interval [CI], 8.8-32.0%) and 100% (95% CI, 95.3-100.0%). Analysis of the results using RT-PCR cycle thresholds of ≤ 30 or ≤ 25 increased the sensitivity to 26.9% (95% CI, 13.7-46.1%), and 41.1% (95% CI, 21.6-64.0%), respectively.
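The confidence intervals quoted above are consistent with a Wilson score interval on the underlying counts. A sketch using counts inferred for illustration (7 antigen-positive of 40 RT-PCR-positive swabs gives the reported 17.5%; the abstract does not state the raw counts):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - spread, center + spread

# Illustrative counts consistent with the reported 17.5% sensitivity:
lo, hi = wilson_ci(7, 40)
```

With these counts the interval reproduces the reported 8.8-32.0% bounds, which is why small positive counts yield such wide sensitivity CIs.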


Subject(s)
Antigens, Viral/immunology , COVID-19 Testing , COVID-19/diagnosis , COVID-19/immunology , Emergency Service, Hospital , False Positive Reactions , Humans , Nasopharynx/virology , Predictive Value of Tests , Probability , Reference Standards , Reproducibility of Results , Reverse Transcriptase Polymerase Chain Reaction , Sensitivity and Specificity
17.
BMC Ecol Evol ; 21(1): 55, 2021 04 13.
Article in English | MEDLINE | ID: mdl-33849454

ABSTRACT

Why females engage in social polygyny remains an unresolved question in species where the resources provided by males maximize female fitness. In these systems, the ability of males to access several females, as well as the willingness of females to mate with an already mated male, and the benefits of this choice, may be constrained by the socio-ecological factors experienced at the local scale. Here, we used a 19-year dataset from an individual-monitored population of pied flycatchers (Ficedula hypoleuca) to establish local networks of breeding pairs. Then, we examined whether the probability of becoming socially polygynous and of mating with an already mated male (thus becoming a secondary female) is influenced by morphological and sexual traits as proxies of individual quality relative to the neighbours. We also evaluated whether social polygyny is adaptive for females by examining the effect of females' mating status (polygamously-mated vs monogamously-mated) on direct (number of recruits in a given season) and indirect (lifetime number of fledglings produced by these recruits) fitness benefits. The phenotypic quality of individuals, by influencing their breeding asynchrony relative to their neighbours, mediated the probability of being involved in a polygynous event. Middle-aged individuals (2-3 years), with large wings and, in the case of males, with conspicuous sexual traits, started to breed earlier than their neighbours. By breeding locally early, males increased their chances of becoming polygynous, while females reduced their chances of mating with an already mated male. Our results suggest that secondary females may compensate for the fitness costs, if any, of sharing a mate, since their number of descendants did not differ from that of monogamous females. We emphasize the need to account for local breeding settings (ecological, social, spatial, and temporal) and the phenotypic composition of neighbours to understand individual mating decisions.


Subject(s)
Marriage , Sexual Behavior, Animal , Animals , Child, Preschool , Female , Male , Phenotype , Probability , Reproduction
18.
BMC Bioinformatics ; 22(1): 192, 2021 Apr 15.
Article in English | MEDLINE | ID: mdl-33858319

ABSTRACT

BACKGROUND: The Cox proportional hazards model is commonly used to predict the hazard ratio, which is the risk or probability of occurrence of an event of interest. However, the Cox proportional hazards model cannot directly generate an individual survival time. To do this, the survival analysis in the Cox model converts the hazard ratio to survival times through distributions such as the exponential, Weibull, Gompertz or log-normal distributions. In other words, to generate the survival time, the Cox model has to select a specific distribution over time. RESULTS: This study presents a method to predict the survival time by integrating a hazard network and a distribution function network. The Cox proportional hazards network is adapted in DeepSurv for the prediction of the hazard ratio, and a distribution function network is applied to generate the survival time. To evaluate the performance of the proposed method, a new evaluation metric that calculates the intersection over union between the predicted curve and the ground truth was proposed. To further understand significant prognostic factors, we use the 1D gradient-weighted class activation mapping method to highlight the network activations as a heat map visualization over the input data. The performance of the proposed method was experimentally verified and the results compared to other existing methods. CONCLUSIONS: Our results confirmed that the combination of the two networks, the Cox proportional hazards network and the distribution function network, can effectively generate accurate survival times.
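An intersection-over-union metric between two curves sampled on a common time grid can be computed pointwise; this is one plausible reading of the metric the abstract describes, and the exact definition in the paper may differ:

```python
def curve_iou(pred, truth):
    """Intersection over union between two non-negative curves
    discretized on the same time grid: sum of pointwise minima
    divided by sum of pointwise maxima."""
    inter = sum(min(p, t) for p, t in zip(pred, truth))
    union = sum(max(p, t) for p, t in zip(pred, truth))
    return inter / union

# Illustrative survival curves on a shared 6-point time grid:
truth = [1.0, 0.9, 0.7, 0.5, 0.3, 0.1]
good = [1.0, 0.85, 0.72, 0.48, 0.3, 0.12]
iou = curve_iou(good, truth)
```

A perfect prediction scores 1.0, and the score degrades smoothly as the predicted curve drifts from the ground truth, which makes it usable as an evaluation metric for generated survival curves.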


Subject(s)
Research Design , Probability , Proportional Hazards Models , Survival Analysis
19.
Front Public Health ; 9: 650243, 2021.
Article in English | MEDLINE | ID: mdl-33796500

ABSTRACT

With the beginning of the autumn-winter season, Italy experienced an increase of SARS-CoV-2 cases, requiring the Government to adopt new restrictive measures. The national surveillance system in place defines 21 key process and performance indicators addressing, for each Region/Autonomous Province: (i) the monitoring capacity, (ii) the degree of diagnostic capability, investigation and contact tracing, and (iii) the characteristics of the transmission dynamics as well as the resilience of health services. Overall, the traffic light approach shows a collective effort by the Italian Government to define strategies to both contain the spread of COVID-19 and to minimize the economic and social impact of the epidemic. Nonetheless, it remains rather unclear, and difficult to track, on what principles color-labeled risk levels are assigned at the regional level.


Subject(s)
Algorithms , COVID-19/epidemiology , COVID-19/transmission , Contact Tracing , Government , Humans , Italy/epidemiology , Probability , Risk Assessment
20.
Sensors (Basel) ; 21(9)2021 Apr 21.
Article in English | MEDLINE | ID: mdl-33919018

ABSTRACT

Real-word errors are characterized by being actual terms in the dictionary. By providing context, real-word errors can be detected. Traditional methods to detect and correct such errors are mostly based on counting the frequency of short word sequences in a corpus, from which the probability of a word being a real-word error is computed. On the other hand, state-of-the-art approaches make use of deep learning models to learn context by extracting semantic features from text. In this work, a deep learning model was implemented for correcting real-word errors in clinical text. Specifically, a Seq2seq neural machine translation model was trained to map erroneous sentences to their corrected versions. To this end, different types of errors were generated in correct sentences using rules. Different Seq2seq models were trained and evaluated on two corpora: the Wikicorpus and a collection of three clinical datasets. The medicine corpus was much smaller than the Wikicorpus due to privacy issues when dealing with patient information. Moreover, GloVe and Word2Vec pretrained word embeddings were used to study their performance. Despite the medicine corpus being much smaller than the Wikicorpus, Seq2seq models trained on the medicine corpus performed better than those trained on the Wikicorpus. Nevertheless, a larger amount of clinical text is required to improve the results.
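The rule-based error generation step, injecting real-word errors into correct sentences to build (noisy, clean) training pairs for the Seq2seq corrector, can be sketched with a confusion table. The pairs below are illustrative examples of clinically plausible real-word confusions, not the paper's actual rules:

```python
# Each replacement is itself a valid dictionary word, so only context
# reveals the error -- the defining property of real-word errors.
CONFUSIONS = {"hypertension": "hypotension", "ileum": "ilium", "dose": "does"}

def corrupt(sentence):
    """Return a (noisy, clean) training pair for a Seq2seq corrector
    by substituting confusable real words."""
    noisy = [CONFUSIONS.get(w, w) for w in sentence.split()]
    return " ".join(noisy), sentence

noisy, clean = corrupt("patient with hypertension received a low dose")
```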


Subject(s)
Language , Semantics , Humans , Natural Language Processing , Privacy , Probability