Results 1-20 of 61,122
1.
PLoS One ; 19(7): e0306028, 2024.
Article in English | MEDLINE | ID: mdl-38950055

ABSTRACT

Even with the powerful statistical parameters derived from the Extreme Gradient Boost (XGB) algorithm, it would be advantageous to define the predictive accuracy at the level of a specific case, particularly when the model output is used to guide clinical decision-making. The probability density function (PDF) of the derived intracranial pressure predictions enables the computation of a definite integral around a point estimate, representing the event's probability within a range of values. Seven hold-out test cases used for the external validation of an XGB model underwent retinal vascular pulse and intracranial pressure measurement using modified photoplethysmography and lumbar puncture, respectively. The definite integral ±1 cm of water from the median (DIICP) demonstrated a negative and highly significant correlation (-0.5213±0.17, p < 0.004) with the absolute difference between the measured and predicted median intracranial pressure (DiffICPmd). The concordance between the arterial and venous probability density functions was estimated using the two-sample Kolmogorov-Smirnov statistic, extending the distribution agreement across all data points. This parameter showed a statistically significant and positive correlation (0.4942±0.18, p < 0.001) with DiffICPmd. Two cautionary subset cases (Case 8 and Case 9), where disagreement was observed between measured and predicted intracranial pressure, were compared to the seven hold-out test cases. Arterial predictions from both cautionary subset cases converged on a uniform distribution, in contrast to all other cases, where distributions converged on either log-normal or closely related skewed distributions (gamma, logistic, beta). The mean±standard error of the arterial DIICP from Cases 8 and 9 (3.83±0.56%) was lower than that of the hold-out test cases (14.14±1.07%); the between-group difference was statistically significant (p < 0.03).
Although the sample size in this analysis was limited, these results support a dual and complementary analysis approach based on independently derived retinal arterial and venous non-invasive intracranial pressure predictions. The results suggest that plotting the PDF and calculating the lower-order moments, the arterial DIICP, and the two-sample Kolmogorov-Smirnov statistic may provide individualized predictive-accuracy parameters.
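The DIICP parameter described above can be sketched in a few lines. This is a minimal illustration, not code from the study: it assumes a log-normal fit to one case's predicted-pressure distribution, with purely hypothetical parameters, and evaluates the definite integral within ±1 cmH2O of the median as a difference of two CDF values.

```python
from math import erf, exp, log, sqrt

def lognorm_cdf(x, mu, sigma):
    """CDF of a log-normal distribution with log-scale location mu and shape sigma."""
    return 0.5 * (1.0 + erf((log(x) - mu) / (sigma * sqrt(2.0))))

def diicp(mu, sigma, width=1.0):
    """Definite integral of the predicted-pressure PDF within +/- width cmH2O
    of the median; the median of a log-normal is exp(mu)."""
    median = exp(mu)
    return lognorm_cdf(median + width, mu, sigma) - lognorm_cdf(median - width, mu, sigma)

# Hypothetical case centered near 12 cmH2O: a tighter distribution
# concentrates more probability mass near the median, giving a larger DIICP.
tight = diicp(log(12.0), 0.10)
broad = diicp(log(12.0), 0.50)
```

Consistent with the abstract's finding, a larger DIICP (more mass near the median) would signal a more trustworthy point estimate.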


Subject(s)
Intracranial Pressure, Machine Learning, Probability, Humans, Intracranial Pressure/physiology, Female, Male, Algorithms, Adult, Middle Aged
2.
Int J Epidemiol ; 53(4)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38996447

ABSTRACT

BACKGROUND: Empirical evaluation of inverse probability weighting (IPW) for self-selection bias correction is inaccessible without the full source population. We aimed to: (i) investigate how self-selection biases frequency and association measures and (ii) assess self-selection bias correction using IPW in a cohort with register linkage. METHODS: The source population included 17 936 individuals invited to the Copenhagen Aging and Midlife Biobank during 2009-11 (ages 49-63 years); 7185 (40.1%) participated. Register data were obtained for every invited person from 7 years before invitation to the end of 2020. The association between education and mortality was estimated using Cox regression models among participants, IPW participants and the source population. RESULTS: Participants had higher socioeconomic position and fewer hospital contacts before baseline than the source population. Frequency measures of participants approached those of the source population after IPW. Compared with primary/lower secondary education, upper secondary, short tertiary, bachelor and master/doctoral education were associated with reduced risk of death among participants (adjusted hazard ratio [95% CI]: 0.60 [0.46; 0.77], 0.68 [0.42; 1.11], 0.37 [0.25; 0.54], 0.28 [0.18; 0.46], respectively). IPW changed the estimates marginally (0.59 [0.45; 0.77], 0.57 [0.34; 0.93], 0.34 [0.23; 0.50], 0.24 [0.15; 0.39]), but not consistently towards those of the source population (0.57 [0.51; 0.64], 0.43 [0.32; 0.60], 0.38 [0.32; 0.47], 0.22 [0.16; 0.29]). CONCLUSIONS: Frequency measures of study participants may not reflect the source population in the presence of self-selection, but the impact on association measures can be limited. IPW may be useful for (self-)selection bias correction, but the returned results can still reflect residual or other biases and random errors.
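The mechanics of IPW can be shown with a deliberately simple two-stratum toy example (hypothetical numbers, not the Biobank data): each participant is weighted by the inverse of their participation probability, which restores the source-population composition that self-selection distorted.

```python
# Hypothetical source population: 60 "high" education (outcome 1) and
# 40 "low" education (outcome 0), so the true mean outcome is 0.60.
# Strata participate at different (assumed) rates, biasing the sample.
p_participate = {"high": 0.5, "low": 0.25}

# Deterministic expected sample: 30 "high" and 10 "low" participants.
sample = [("high", 1)] * 30 + [("low", 0)] * 10

naive_mean = sum(y for _, y in sample) / len(sample)   # 0.75, biased upward

# IPW: weight each participant by 1 / P(participation | stratum).
weights = [1.0 / p_participate[g] for g, _ in sample]  # 2.0 for high, 4.0 for low
ipw_mean = (sum(w * y for w, (_, y) in zip(weights, sample))
            / sum(weights))                            # recovers 0.60
```

Here the under-represented "low" stratum gets the larger weight, pulling the estimate back to the source-population value.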


Subject(s)
Mortality, Proportional Hazards Models, Socioeconomic Factors, Humans, Female, Male, Middle Aged, Denmark/epidemiology, Mortality/trends, Selection Bias, Educational Status, Probability, Registries
3.
Hum Brain Mapp ; 45(10): e26759, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-38989632

ABSTRACT

The inferior frontal sulcus (ifs) is a prominent sulcus on the lateral frontal cortex, separating the middle frontal gyrus from the inferior frontal gyrus. The morphology of the ifs can be difficult to distinguish from adjacent sulci, which are often misidentified as continuations of the ifs. The morphological variability of the ifs and its relationship to surrounding sulci were examined in 40 healthy human subjects (i.e., 80 hemispheres). The sulci were identified and labeled on the native cortical surface meshes of individual subjects, permitting proper intra-sulcal assessment. Two main morphological patterns of the ifs were identified across hemispheres: in Type I, the ifs was a single continuous sulcus, and in Type II, the ifs was discontinuous and appeared in two segments. The morphology of the ifs could be further subdivided into nine subtypes based on the presence of anterior and posterior sulcal extensions. The ifs was often observed to connect, either superficially or completely, with surrounding sulci, and seldom appeared as an independent sulcus. The spatial variability of the ifs and its various morphological configurations were quantified in the form of surface spatial probability maps which are made publicly available in the standard fsaverage space. These maps demonstrated that the ifs generally occupied a consistent position across hemispheres and across individuals. The normalized mean sulcal depths associated with the main morphological types were also computed. The present study provides the first detailed description of the ifs as a sulcal complex composed of segments and extensions that can be clearly differentiated from adjacent sulci. These descriptions, together with the spatial probability maps, are critical for the accurate identification of the ifs in anatomical and functional neuroimaging studies investigating the structural characteristics and functional organization of this region in the human brain.


Subject(s)
Brain Mapping, Magnetic Resonance Imaging, Humans, Male, Female, Adult, Brain Mapping/methods, Frontal Lobe/anatomy & histology, Frontal Lobe/diagnostic imaging, Young Adult, Image Processing, Computer-Assisted/methods, Probability
4.
Sci Rep ; 14(1): 15467, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969702

ABSTRACT

In this article we address two related issues concerning the learning of probabilistic sequences of events: first, which features make the sequence of events generated by a stochastic chain more difficult to predict; second, how to model the procedures employed by different learners to identify the structure of sequences of events. Playing the role of a goalkeeper in a video game, participants were told to predict, step by step, the successive directions (left, center or right) to which the penalty kicker would send the ball. The sequence of kicks was driven by a stochastic chain with memory of variable length. Results showed that at least three features play a role in the first issue: (1) the shape of the context tree summarizing the dependencies between present and past directions; (2) the entropy of the stochastic chain used to generate the sequences of events; (3) the existence or not of a deterministic periodic sequence underlying the sequences of events. Moreover, evidence suggests that the best learners rely less on their own past choices to identify the structure of the sequences of events.
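Feature (2), the entropy of the stochastic chain, can be made concrete. For a stationary chain with transition matrix P and stationary distribution pi, the entropy rate is H = -sum_i pi_i sum_j P_ij log2 P_ij. The sketch below uses two hypothetical kickers, not the study's actual chains: a deterministic periodic one (entropy 0, easiest to predict) and a uniformly random one (maximal entropy).

```python
from math import log2

def entropy_rate(P, pi):
    """Entropy rate of a stationary chain: H = -sum_i pi_i sum_j P_ij log2 P_ij."""
    h = 0.0
    for pi_i, row in zip(pi, P):
        for p in row:
            if p > 0:          # 0 * log 0 is taken as 0
                h -= pi_i * p * log2(p)
    return h

# Deterministic periodic kicker (left -> center -> right -> left ...): entropy 0.
P_det = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
h_det = entropy_rate(P_det, [1/3, 1/3, 1/3])

# Uniformly random kicker: maximal entropy log2(3) bits per kick.
P_unif = [[1/3] * 3] * 3
h_unif = entropy_rate(P_unif, [1/3] * 3)
```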


Subject(s)
Video Games, Humans, Male, Female, Adult, Learning, Probability, Young Adult, Stochastic Processes
5.
BMC Med Res Methodol ; 24(1): 147, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003440

ABSTRACT

BACKGROUND: Decision analytic models and meta-analyses often rely on survival probabilities that are digitized from published Kaplan-Meier (KM) curves. However, manually extracting these probabilities from KM curves is time-consuming, expensive, and error-prone. We developed an efficient and accurate algorithm that automates extraction of survival probabilities from KM curves. METHODS: The automated digitization algorithm processes images in JPG or PNG format, converts them to the hue, saturation, and lightness color space, and uses optical character recognition to detect axis locations and labels. It also uses a k-medoids clustering algorithm to separate multiple overlapping curves on the same figure. To validate performance, we generated survival plots from random time-to-event data with sample sizes of 25, 50, 150, 250, and 1000 individuals split into 1, 2, or 3 treatment arms. We assumed an exponential distribution and applied random censoring. We compared automated digitization with manual digitization performed by well-trained researchers, calculating the root mean squared error (RMSE) at 100 time points for both methods. The algorithm's performance was also evaluated by Bland-Altman analysis of the agreement between automated and manual digitization on a real-world set of published KM curves. RESULTS: The automated digitizer accurately identified survival probabilities over time in the simulated KM curves. The average RMSE for automated digitization was 0.012, while manual digitization had an average RMSE of 0.014. Its performance was negatively correlated with the number of curves in a figure and the presence of censoring markers. In real-world scenarios, automated digitization and manual digitization showed very close agreement. CONCLUSIONS: The algorithm streamlines the digitization process and requires minimal user input.
It effectively digitized KM curves in simulated and real-world scenarios, demonstrating accuracy comparable to conventional manual digitization. The algorithm has been developed as an open-source R package and as a Shiny application and is available on GitHub: https://github.com/Pechli-Lab/SurvdigitizeR and https://pechlilab.shinyapps.io/SurvdigitizeR/ .
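The RMSE comparison reported above is straightforward to reproduce in miniature. This sketch is illustrative only: it uses a synthetic exponential survival curve and an artificial alternating ±0.005 digitization error (assumed values, not the study's data).

```python
from math import exp, sqrt

# Reference exponential survival S(t) = exp(-0.1 t) at 100 time points,
# and a "digitized" copy with a small assumed extraction error.
times = [t * 0.5 for t in range(100)]
reference = [exp(-0.1 * t) for t in times]
digitized = [s + 0.005 * ((-1) ** i) for i, s in enumerate(reference)]

# RMSE at the 100 time points, as in the validation described above.
rmse = sqrt(sum((d - r) ** 2 for d, r in zip(digitized, reference)) / len(times))
```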


Subject(s)
Algorithms, Humans, Kaplan-Meier Estimate, Survival Analysis, Probability
6.
PLoS One ; 19(7): e0305264, 2024.
Article in English | MEDLINE | ID: mdl-39028741

ABSTRACT

This study aimed to assess and compare the probability of tuberculosis (TB) transmission based on five dynamic models: the Wells-Riley equation, two Rudnick & Milton-proposed models based on air changes per hour (ACH) and liters per second per person (L/s/p), the model proposed by Issarow et al., and the applied Susceptible-Exposed-Infected-Recovered (SEIR) TB transmission model. This study also aimed to determine the impact of model parameters on such probabilities in three Thai prisons. A cross-sectional study was conducted using data from 985 prison cells. The TB transmission probability for each cell was calculated using parameters relevant to the specific model formula, and the magnitude of model agreement was examined by Spearman's rank correlation and Bland-Altman plots. Subsequently, a multiple linear regression analysis was conducted to investigate the influence of each model parameter on the estimated probability. Results revealed that the median (quartiles 1 and 3) TB transmission probability among these cells was 0.052 (0.017, 0.180). Compared with the pioneering Wells-Riley model, the remaining models projected discrepant TB transmission probabilities, from less to more commensurate with the degree of modification from the pioneering model, in the following order: Rudnick & Milton (ACH), Issarow et al., Rudnick & Milton (L/s/p), and the applied SEIR model. The ventilation rate and number of infectious TB patients in each cell or zone had the greatest impact on the estimated TB transmission probability in most models. Additionally, the number of inmates in each cell, the area per person in square meters, and the inmate turnover rate were identified as high-impact parameters in the applied SEIR model. All stakeholders must urgently address these influential parameters to reduce TB transmission in prisons. Moreover, further studies are required to determine the models' relative validity in accurately predicting TB incidence in prison settings.
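The first of the five models, the Wells-Riley equation, gives the infection probability as P = 1 - exp(-Iqpt/Q). A minimal sketch with assumed parameter values (not the study's measured inputs for any cell):

```python
from math import exp

def wells_riley(I, q, p, t, Q):
    """Wells-Riley probability of airborne infection.
    I: number of infectors, q: quanta generation rate (quanta/h),
    p: breathing rate (m^3/h), t: exposure time (h),
    Q: room ventilation rate (m^3/h)."""
    return 1.0 - exp(-I * q * p * t / Q)

# Illustrative (assumed) values for one cell: 2 infectors, 12 h exposure.
risk = wells_riley(I=2, q=1.25, p=0.5, t=12.0, Q=150.0)
```

The exponential form makes the ventilation rate Q's dominant role, noted in the abstract, explicit: doubling Q halves the exponent.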


Subject(s)
Prisons, Probability, Tuberculosis, Humans, Thailand/epidemiology, Tuberculosis/transmission, Tuberculosis/epidemiology, Cross-Sectional Studies, Male, Southeast Asian People
7.
Sci Rep ; 14(1): 16922, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39043739

ABSTRACT

In this article, we considered a nonlinear compartmental mathematical model that assesses the effect of treatment on the dynamics of HIV/AIDS and pneumonia (H/A-P) co-infection in a human population at different infection stages. As a consequence, understanding the complexities of such co-dynamics is now critically necessary. The aim of this research is to construct a co-infection model of H/A-P in the context of fractional calculus operators, white noise and probability density functions, employing a rigorous biological investigation. By exhibiting that the system possesses non-negative and bounded global solutions, it is shown that the approach is both mathematically and biologically practicable. The required conditions guaranteeing the eradication of the infection are derived. Furthermore, adequate prerequisites are established, and the configuration is tested for the existence of an ergodic stationary distribution. To discover the system's long-term behavior, a deterministic-probabilistic modeling technique is designed and operated in MATLAB. We hope that the aforementioned approach improves on existing work and helps mitigate the two diseases and their co-infection by examining a variety of behavioral trends, such as transitions to unpredictable regimes. In addition, piecewise differential strategies are outlined as having promising potential for scholars in a range of contexts, because they make it possible to capture particular characteristics across multiple time-frame phases. Such formulations can be strengthened via classical techniques, power law, exponential decay, generalized Mittag-Leffler kernels, probability density functions and random procedures. Furthermore, we obtain an accurate description of the probability density function around a quasi-equilibrium point if the effect of H/A-P minimizes the propagation of the co-dynamics.
Consequently, scholars can obtain better outcomes when analyzing data with random perturbations by applying these strategies to challenging problems. Random perturbations in H/A-P co-infection are crucial in controlling the spread of an epidemic whenever the suggested circulation is steady, and the amount of infection eliminated is closely correlated with the random perturbation level.


Subject(s)
Coinfection, Nonlinear Dynamics, Pneumonia, Humans, HIV Infections/complications, Acquired Immunodeficiency Syndrome, Models, Statistical, Models, Theoretical, Probability
8.
Sci Rep ; 14(1): 14557, 2024 06 24.
Article in English | MEDLINE | ID: mdl-38914736

ABSTRACT

The study aims to develop an abnormal body temperature probability (ABTP) model for dairy cattle, utilizing environmental and physiological data. This model is designed to enhance the management of heat stress impacts, providing an early warning system for farm managers to improve dairy cattle welfare and farm productivity in response to climate change. The study employs the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm to analyze environmental and physiological data from 320 dairy cattle, identifying key factors influencing body temperature anomalies. This method supports the development of various models, including the Lyman Kutcher-Burman (LKB), Logistic, Schultheiss, and Poisson models, which are evaluated for their ability to predict abnormal body temperatures in dairy cattle effectively. The study successfully validated multiple models to predict abnormal body temperatures in dairy cattle, with a focus on the temperature-humidity index (THI) as a critical determinant. These models, including LKB, Logistic, Schultheiss, and Poisson, demonstrated high accuracy, as measured by the AUC and other performance metrics such as the Brier score and Hosmer-Lemeshow (HL) test. The results highlight the robustness of the models in capturing the nuances of heat stress impacts on dairy cattle. The research develops innovative models for managing heat stress in dairy cattle, effectively enhancing detection and intervention strategies. By integrating advanced technologies and novel predictive models, the study offers effective measures for early detection and management of abnormal body temperatures, improving cattle welfare and farm productivity in changing climatic conditions. This approach highlights the importance of using multiple models to accurately predict and address heat stress in livestock, making significant contributions to enhancing farm management practices.
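The LASSO selection step described above rests on the soft-thresholding operator, which shrinks coefficients and zeroes out weak predictors. This is a generic sketch of that operator, not the study's implementation:

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator at the heart of LASSO coordinate descent:
    shrinks a coordinate-wise least-squares estimate z toward zero by lam,
    and sets it exactly to zero when |z| <= lam, dropping that feature."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Weak predictors are zeroed out; strong ones survive, shrunk by lam.
kept = soft_threshold(3.0, 1.0)      # a strong effect, shrunk to 2.0
dropped = soft_threshold(0.4, 1.0)   # a weak effect, removed entirely
```

This exact-zeroing behavior is what lets LASSO isolate a few influential factors (such as THI) from many candidate environmental and physiological variables.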


Subject(s)
Body Temperature, Dairying, Animals, Cattle, Body Temperature/physiology, Dairying/methods, Risk Factors, Cattle Diseases/diagnosis, Cattle Diseases/physiopathology, Heat Stress Disorders/veterinary, Heat Stress Disorders/physiopathology, Female, Climate Change, Probability, Risk Assessment/methods
9.
PLoS One ; 19(6): e0304345, 2024.
Article in English | MEDLINE | ID: mdl-38857287

ABSTRACT

Irreversible electroporation induces permanent permeabilization of the lipid membranes of vesicles, resulting in vesicle rupture upon the application of a pulsed electric field. Electrofusion is a phenomenon wherein neighboring vesicles can be induced to fuse by exposing them to a pulsed electric field. We focus on how the frequency of direct current (DC) pulses of the electric field impacts rupture and electrofusion in cell-sized giant unilamellar vesicles (GUVs) prepared in a physiological buffer. The average time, probability, and kinetics of rupture and electrofusion in GUVs have been explored at frequencies of 500, 800, 1050, and 1250 Hz. The average time of rupture of many 'single GUVs' decreases with the increase in frequency, whereas electrofusion shows the opposite trend. At 500 Hz, the rupture probability stands at 0.45 ± 0.02, while the electrofusion probability is 0.71 ± 0.01. However, at 1250 Hz, the rupture probability increases to 0.69 ± 0.03, whereas the electrofusion probability decreases to 0.46 ± 0.03. Furthermore, when considering kinetics, at 500 Hz, the rate constant of rupture is (0.8 ± 0.1)×10⁻² s⁻¹, and the rate constant of fusion is (2.4 ± 0.1)×10⁻² s⁻¹. In contrast, at 1250 Hz, the rate constant of rupture is (2.3 ± 0.8)×10⁻² s⁻¹, and the rate constant of electrofusion is (1.0 ± 0.1)×10⁻² s⁻¹. These results are discussed by considering the electrical model of the lipid bilayer and the energy barrier of a prepore.
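Rate constants like those reported imply simple first-order kinetics, where the fraction of GUVs that have undergone the event by time t is F(t) = 1 - exp(-kt). A sketch using the central 500 Hz values from the abstract (error bars ignored for simplicity):

```python
from math import exp, log

def fraction_event(k, t):
    """Fraction of GUVs that have ruptured or fused by time t (s),
    assuming first-order kinetics: F(t) = 1 - exp(-k t)."""
    return 1.0 - exp(-k * t)

# Central rate constants reported at 500 Hz (s^-1).
k_rupture, k_fusion = 0.8e-2, 2.4e-2

# Half-time: the time by which half the population has fused (~28.9 s).
half_time_fusion = log(2) / k_fusion
```

At 500 Hz fusion thus outpaces rupture at any fixed exposure time, matching the higher electrofusion probability reported at that frequency.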


Subject(s)
Electroporation, Unilamellar Liposomes, Unilamellar Liposomes/chemistry, Kinetics, Electroporation/methods, Probability, Membrane Fusion
10.
Stat Med ; 43(18): 3463-3483, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38853711

ABSTRACT

Analysis of integrated data often requires record linkage in order to join together the data residing in separate sources. When linkage errors cannot be avoided, due to the lack of a unique identity key that can be used to link the records unequivocally, standard statistical techniques may produce misleading inference if the linked data are treated as if they were true observations. In this paper, we propose methods for categorical data analysis based on linked data that are not prepared by the analyst, such that neither the match-key variables nor the unlinked records are available. The adjustment is based on the proportion of false links in the linked file, and our approach allows the probabilities of correct linkage to vary across the records without requiring that one be able to estimate this probability for each individual record. It also accommodates the general situation where unmatched records that cannot possibly be correctly linked exist in all the sources. The proposed methods are studied by simulation and applied to real data.


Subject(s)
Computer Simulation, Medical Record Linkage, Models, Statistical, Humans, Medical Record Linkage/methods, Data Interpretation, Statistical, Probability
11.
Stat Med ; 43(18): 3524-3538, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-38863133

ABSTRACT

Moderate calibration, the expected event probability among observations with predicted probability z being equal to z, is a desired property of risk prediction models. Current graphical and numerical techniques for evaluating moderate calibration of risk prediction models are mostly based on smoothing or grouping the data. As well, there is no widely accepted inferential method for the null hypothesis that a model is moderately calibrated. In this work, we discuss recently-developed, and propose novel, methods for the assessment of moderate calibration for binary responses. The methods are based on the limiting distributions of functions of standardized partial sums of prediction errors converging to the corresponding laws of Brownian motion. The novel method relies on well-known properties of the Brownian bridge which enables joint inference on mean and moderate calibration, leading to a unified "bridge" test for detecting miscalibration. Simulation studies indicate that the bridge test is more powerful, often substantially, than the alternative test. As a case study we consider a prediction model for short-term mortality after a heart attack, where we provide suggestions on graphical presentation and the interpretation of results. Moderate calibration can be assessed without requiring arbitrary grouping of data or using methods that require tuning of parameters.
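The idea of standardized partial sums of prediction errors can be sketched as follows. This is a simplified illustration of the building block, not the authors' test statistic: for a calibrated model, the cumulative standardized errors trace an approximately Brownian path, and the endpoint of that path reflects mean calibration.

```python
from math import sqrt

def standardized_partial_sums(y, p):
    """Standardized partial sums of prediction errors (y_i - p_i).
    For a well-calibrated model the path behaves like Brownian motion;
    its endpoint near zero indicates calibration in the mean."""
    scale = sqrt(sum(pi * (1.0 - pi) for pi in p))
    path, running = [], 0.0
    for yi, pi in zip(y, p):
        running += yi - pi
        path.append(running / scale)
    return path

# Toy data whose observed event rate matches the predictions exactly:
# the path returns to zero, consistent with mean calibration.
path = standardized_partial_sums([0, 1, 0, 1], [0.5, 0.5, 0.5, 0.5])
```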


Subject(s)
Computer Simulation, Models, Statistical, Humans, Risk Assessment/methods, Myocardial Infarction/mortality, Statistics, Nonparametric, Calibration, Probability
12.
Environ Monit Assess ; 196(7): 647, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38907768

ABSTRACT

In this study, the current distribution probability of Ephedra gerardiana (Somalata), a medicinally potent species of the Himalayas, was assessed, and its spatial distribution change was forecasted until the year 2100 under three Shared Socioeconomic Pathways (SSPs). Here, we used the maximum entropy model (MaxEnt) on 274 spatially filtered occurrence data points accessed from GBIF and other publications, with 19 bioclimatic variables used as predictors in the probability assessment. The area under the curve, Continuous Boyce Index, True Skill Statistics, and kappa values were used to evaluate and validate the model. It was observed that SSP5-8.5, a fossil-fuel-fed scenario, saw the maximum habitat decline for E. gerardiana, driving its niche towards higher altitudes. The Nepal Himalayas witnessed the maximum decline in suitable habitat for the species, whereas it gained area in Bhutan. In India, regions of Himachal Pradesh, Uttarakhand, Jammu and Kashmir, and Sikkim saw the maximum negative response to climate change by the year 2100. Mean annual temperature, isothermality, diurnal temperature range, and precipitation seasonality were the most influential variables isolated by the model that contribute to defining the species' habitat. The results provide evidence of the effects of climate change on the distribution of endemic species in the study area under different scenarios of emissions and anthropogenic coupling. Notably, the area of consideration encompasses several protected areas, which will become more vulnerable to increased climate variability, and regulating their boundaries might become a necessary step to conserve the region's biodiversity in the future.


Subject(s)
Climate Change, Ecosystem, Nepal, India, Bhutan, Ephedra, Environmental Monitoring, Probability, Socioeconomic Factors, Models, Theoretical
13.
Biom J ; 66(4): e2300156, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38847059

ABSTRACT

How should data be analyzed when there is a violation of the positivity assumption? Several possible solutions exist in the literature. In this paper, we consider propensity score (PS) methods that are commonly used in observational studies to assess causal treatment effects in the context where the positivity assumption is violated. We focus on and examine four specific alternatives to inverse probability weighting (IPW) trimming and truncation: the matching weight (MW), Shannon's entropy weight (EW), overlap weight (OW), and beta weight (BW) estimators. We first specify their target population, the population of patients for whom clinical equipoise holds, that is, where we have sufficient PS overlap. Then, we establish the nexus among the different corresponding weights (and estimators); this allows us to highlight the shared properties and theoretical implications of these estimators. Finally, we introduce their augmented estimators that take advantage of estimating both the propensity score and outcome regression models to enhance the treatment effect estimators in terms of bias and efficiency. We also elucidate the role of the OW estimator as the flagship of all these methods that target the overlap population. Our analytic results demonstrate that OW, MW, and EW are preferable to IPW and some cases of BW when there is a moderate or extreme (stochastic or structural) violation of the positivity assumption. We then evaluate, compare, and confirm the finite-sample performance of the aforementioned estimators via Monte Carlo simulations. Finally, we illustrate these methods using two real-world data examples marked by violations of the positivity assumption.
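Three of these weighting schemes have simple closed forms in the propensity score e(x). The sketch below uses the standard textbook formulas (not code from the paper) to show why OW and MW stay bounded as e(x) approaches 0 or 1, which is exactly where IPW explodes under positivity violations:

```python
from math import log

def balancing_weight(e, z, kind="OW"):
    """Balancing weights given propensity score e in (0, 1) and treatment
    indicator z (1 = treated, 0 = control), per the standard formulas."""
    ipw = z / e + (1 - z) / (1 - e)
    if kind == "IPW":
        return ipw
    if kind == "OW":   # overlap weight: tilts IPW by e(1 - e)
        return z * (1 - e) + (1 - z) * e
    if kind == "MW":   # matching weight: tilts IPW by min(e, 1 - e)
        return min(e, 1 - e) * ipw
    if kind == "EW":   # Shannon entropy weight: tilts IPW by the entropy of e
        return -(e * log(e) + (1 - e) * log(1 - e)) * ipw
    raise ValueError(kind)

# Near-violation of positivity: a treated patient with e = 0.01
w_ipw = balancing_weight(0.01, 1, "IPW")  # enormous weight, unstable estimator
w_ow = balancing_weight(0.01, 1, "OW")    # bounded below 1, stable
```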


Subject(s)
Biometry, Propensity Score, Biometry/methods, Humans, Causality, Probability
14.
Sci Rep ; 14(1): 12772, 2024 06 04.
Article in English | MEDLINE | ID: mdl-38834671

ABSTRACT

The diagnosis of acute appendicitis and concurrent surgery referral is primarily based on clinical presentation, laboratory and radiological imaging. However, utilizing such an approach results in as much as 10-15% of negative appendectomies. Hence, in the present study, we aimed to develop a machine learning (ML) model designed to reduce the number of negative appendectomies in pediatric patients with a high clinical probability of acute appendicitis. The model was developed and validated on a registry of 551 pediatric patients with suspected acute appendicitis that underwent surgical treatment. Clinical, anthropometric, and laboratory features were included for model training and analysis. Three machine learning algorithms were tested (random forest, eXtreme Gradient Boosting, logistic regression) and model explainability was obtained. Random forest model provided the best predictions achieving mean specificity and sensitivity of 0.17 ± 0.01 and 0.997 ± 0.001 for detection of acute appendicitis, respectively. Furthermore, the model outperformed the appendicitis inflammatory response (AIR) score across most sensitivity-specificity combinations. Finally, the random forest model again provided the best predictions for discrimination between complicated appendicitis, and either uncomplicated acute appendicitis or no appendicitis at all, with a joint mean sensitivity of 0.994 ± 0.002 and specificity of 0.129 ± 0.009. In conclusion, the developed ML model might save as much as 17% of patients with a high clinical probability of acute appendicitis from unnecessary surgery, while missing the needed surgery in only 0.3% of cases. Additionally, it showed better diagnostic accuracy than the AIR score, as well as good accuracy in predicting complicated acute appendicitis over uncomplicated and negative cases bundled together. This may be useful in centers that advocate for the conservative treatment of uncomplicated appendicitis. 
Nevertheless, external validation is needed to support these findings.


Subject(s)
Appendectomy, Appendicitis, Machine Learning, Humans, Appendicitis/surgery, Appendicitis/diagnosis, Child, Female, Male, Adolescent, Child, Preschool, Acute Disease, Probability, Sensitivity and Specificity, Algorithms
15.
PLoS One ; 19(6): e0303432, 2024.
Article in English | MEDLINE | ID: mdl-38848327

ABSTRACT

For the purpose of this study, a statistical test of Biblical books was conducted using recently discovered probability models for text homogeneity and text change point detection. Accordingly, translations of Biblical books in Tigrigna and Amharic (major languages spoken in Eritrea and Ethiopia) and English were studied. A Zipf-Mandelbrot distribution with a parameter range of 0.55 to 0.88 was obtained in these three Bibles. According to the statistical analysis of the texts' homogeneity, the translation of the Bible in each of these three languages is a heterogeneous concatenation of different books or genres. Furthermore, an in-depth examination of the text segmentation of part of a single genre, the English Bible letters, revealed that the Pauline letters are heterogeneous concatenations of two homogeneous segments.
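The Zipf-Mandelbrot law assigns rank-frequency probabilities f(r) proportional to 1/(r + q)^s. A minimal sketch with hypothetical parameters (the exponent chosen inside the 0.55-0.88 range reported above; the offset q is an assumption):

```python
def zipf_mandelbrot(ranks, s, q):
    """Zipf-Mandelbrot rank-frequency law f(r) ~ 1 / (r + q)^s,
    normalized over the observed ranks."""
    raw = [1.0 / (r + q) ** s for r in ranks]
    total = sum(raw)
    return [x / total for x in raw]

# Word-frequency distribution over a hypothetical 1000-word vocabulary,
# with exponent s = 0.7 (inside the reported range) and assumed offset q = 2.
probs = zipf_mandelbrot(range(1, 1001), s=0.7, q=2.0)
```

The heavy tail this produces (frequent words dominating, rare words decaying slowly) is the pattern the homogeneity tests operate on.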


Subject(s)
Bible, Models, Statistical, Humans, Probability, Language, Ethiopia
16.
Korean J Anesthesiol ; 77(3): 316-325, 2024 06.
Article in English | MEDLINE | ID: mdl-38835136

ABSTRACT

The statistical significance of a clinical trial analysis result is determined by a mathematical calculation and probability based on null hypothesis significance testing. However, statistical significance does not always align with meaningful clinical effects; thus, assigning clinical relevance to statistical significance is unreasonable. A statistical result incorporating a clinically meaningful difference is a better approach to present statistical significance. Thus, the minimal clinically important difference (MCID), which requires integrating minimum clinically relevant changes from the early stages of research design, has been introduced. As a follow-up to the previous statistical round article on P values, confidence intervals, and effect sizes, in this article, we present hands-on examples of MCID and various effect sizes and discuss the terms statistical significance and clinical relevance, including cautions regarding their use.
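The gap between statistical significance and clinical relevance can be sketched numerically. All numbers here are hypothetical (an invented pain-score scale and an assumed MCID of 10 points): with large samples, even a small standardized effect reaches significance, yet the raw difference can fall well below the MCID.

```python
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d standardized effect size using the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical trial: means 52 vs 50 on a pain scale, SD 8, n = 200 per arm.
# With n this large the 2-point difference would test as significant,
# but the effect size is small and the difference is below the assumed MCID.
d = cohens_d(52.0, 8.0, 200, 50.0, 8.0, 200)
clinically_relevant = (52.0 - 50.0) >= 10.0  # assumed MCID of 10 points
```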


Subject(s)
Minimal Clinically Important Difference, Humans, Probability, Research Design, Clinical Trials as Topic/methods, Data Interpretation, Statistical, Confidence Intervals
17.
Gait Posture ; 112: 22-32, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38723392

ABSTRACT

PURPOSE: Accelerometers are used to objectively measure physical activity; however, the relationship between accelerometer-based activity parameters and bone health is not well understood. This study examines the association between accelerometer-estimated daily activity impact intensities and future risk estimates of major osteoporotic fractures in a large population-based cohort. METHODS: Participants were 3165 adults 46 years of age from the Northern Finland Birth Cohort 1966 who agreed to wear a hip-worn accelerometer during all waking hours for 14 consecutive days. Raw accelerometer data were converted to resultant acceleration. Impact magnitude peaks were extracted and divided into 32 intensity bands, and the osteogenic index (OI) was calculated to assess the osteogenic effectiveness of various activities. Additionally, the impact peaks were categorized into three separate impact intensity categories (low, medium, and high). The 10-year probabilities of hip and all major osteoporotic fractures were estimated with the FRAX tool using clinical and questionnaire data in combination with body mass index collected at the age of 46 years. The associations of daily activity impact intensities with 10-year fracture probabilities were examined using three statistical approaches: multiple linear regression, partial correlation, and partial least squares (PLS) regression. RESULTS: On average, participants accumulated 8331 (SD = 3478) low-, 2032 (1248) medium-, and 1295 (1468) high-intensity impacts per day. All three statistical approaches found a significant positive association between the daily number of low-intensity impacts and the 10-year probability of hip and all major osteoporotic fractures. In contrast, an increased number of moderate to very high daily activity impacts was associated with a lower probability of future osteoporotic fractures. A higher OI was also associated with a lower probability of future major osteoporotic fractures. CONCLUSION: Low-intensity impacts might not be sufficient for reducing fracture risk in middle-aged adults, while high-intensity impacts could be beneficial for preventing major osteoporotic fractures.
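The processing pipeline described in METHODS (extract impact-magnitude peaks, bin them into 32 intensity bands, compute an osteogenic index) can be sketched on simulated data. This is a minimal illustration assuming the common log-count formulation of the OI, in which each band's impact count enters as ln(N+1) weighted by the band's intensity; the abstract does not give the study's exact weights or category thresholds, so all numeric choices below are illustrative:

```python
import numpy as np

def osteogenic_index(peaks_g, band_edges):
    """Illustrative osteogenic index: weight the log-count of impact
    peaks in each intensity band by the band's midpoint acceleration.
    (A common formulation; the study's exact weighting is not stated
    in the abstract.)"""
    counts, _ = np.histogram(peaks_g, bins=band_edges)
    band_mid = (band_edges[:-1] + band_edges[1:]) / 2
    return float(np.sum(band_mid * np.log(counts + 1)))

# Simulated daily impact-magnitude peaks (resultant acceleration, in g)
rng = np.random.default_rng(0)
peaks = 1.0 + rng.gamma(shape=2.0, scale=0.8, size=10_000)

edges = np.linspace(1.0, 9.0, 33)  # 32 intensity bands, as in the study
oi = osteogenic_index(peaks, edges)

# Hypothetical low/medium/high split (thresholds are illustrative only)
low = int(np.sum(peaks < 2.0))
medium = int(np.sum((peaks >= 2.0) & (peaks < 4.0)))
high = int(np.sum(peaks >= 4.0))
```

On real recordings, `peaks_g` would come from peak detection on the resultant acceleration signal rather than a random draw.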


Subject(s)
Accelerometry , Osteoporotic Fractures , Humans , Osteoporotic Fractures/epidemiology , Osteoporotic Fractures/physiopathology , Middle Aged , Female , Male , Finland/epidemiology , Activities of Daily Living , Exercise/physiology , Risk Assessment/methods , Probability , Cohort Studies , Hip Fractures/epidemiology , Hip Fractures/physiopathology
18.
Stat Med ; 43(14): 2830-2852, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38720592

ABSTRACT

INTRODUCTION: There is currently no guidance on how to assess the calibration of multistate models used for risk prediction. We introduce several techniques that can be used to produce calibration plots for the transition probabilities of a multistate model, before assessing their performance in the presence of random and independent censoring through a simulation. METHODS: We studied pseudo-values based on the Aalen-Johansen estimator, binary logistic regression with inverse probability of censoring weights (BLR-IPCW), and multinomial logistic regression with inverse probability of censoring weights (MLR-IPCW). The MLR-IPCW approach results in a calibration scatter plot, providing extra insight about the calibration. We simulated data with varying levels of censoring and evaluated the ability of each method to estimate the calibration curve for a set of predicted transition probabilities. We also evaluated the calibration of a model predicting the incidence of cardiovascular disease, type 2 diabetes, and chronic kidney disease among a cohort of patients derived from linked primary and secondary healthcare records. RESULTS: The pseudo-value, BLR-IPCW, and MLR-IPCW approaches give unbiased estimates of the calibration curves under random censoring. These methods remained predominantly unbiased in the presence of independent censoring, even if the censoring mechanism was strongly associated with the outcome, with bias concentrated in low-density regions of predicted transition probability. CONCLUSIONS: We recommend implementing either the pseudo-value or BLR-IPCW approaches to produce a calibration curve, combined with the MLR-IPCW approach to produce a calibration scatter plot. The methods have been incorporated into the "calibmsm" R package available on CRAN.
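The BLR-IPCW idea — a logistic regression of the observed outcome on the logit of the predicted probability, weighted by the inverse probability of remaining uncensored — can be sketched as follows. This is an illustrative simplification for a single binary outcome, not the multistate implementation in the `calibmsm` R package; the censoring probability is treated as known here, whereas in practice it would be estimated (e.g., from a model of the censoring distribution):

```python
import numpy as np

def blr_ipcw_calibration(pred, observed, p_uncens, iters=30):
    """Sketch of BLR-IPCW (illustrative, not the paper's code):
    weighted logistic regression of the observed event indicator on
    the logit of the predicted probability, with inverse-probability-
    of-censoring weights 1 / P(uncensored), fitted by Newton-Raphson.
    Returns the calibration curve evaluated at each prediction."""
    x = np.column_stack([np.ones_like(pred), np.log(pred / (1 - pred))])
    w = 1.0 / p_uncens                          # IPC weights
    beta = np.zeros(2)
    for _ in range(iters):                      # Newton steps for weighted MLE
        p = 1.0 / (1.0 + np.exp(-(x @ beta)))
        grad = x.T @ (w * (observed - p))
        hess = (x * (w * p * (1 - p))[:, None]).T @ x
        beta += np.linalg.solve(hess, grad)
    return 1.0 / (1.0 + np.exp(-(x @ beta)))

# Toy check: a well-calibrated model under random censoring
rng = np.random.default_rng(1)
n = 4000
pred = rng.uniform(0.05, 0.95, n)               # predicted probabilities
event = rng.binomial(1, pred).astype(float)     # outcomes match predictions
keep = rng.random(n) < 0.8                      # 80% of subjects uncensored
p_uncens = np.full(int(keep.sum()), 0.8)        # known censoring probability
curve = blr_ipcw_calibration(pred[keep], event[keep], p_uncens)
```

With a well-calibrated model the fitted curve should track the identity line, which is what the toy data are constructed to show.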


Subject(s)
Computer Simulation , Diabetes Mellitus, Type 2 , Models, Statistical , Humans , Diabetes Mellitus, Type 2/epidemiology , Risk Assessment/methods , Risk Assessment/statistics & numerical data , Logistic Models , Calibration , Cardiovascular Diseases/epidemiology , Renal Insufficiency, Chronic/epidemiology , Probability
19.
Article in English | MEDLINE | ID: mdl-38723155

ABSTRACT

Lead and its compounds can have cumulative harmful effects on the nervous, cardiovascular, and other systems, and especially affect the brain development of children. We collected 4918 samples from 15 food categories in 11 districts of Guangzhou, China, from 2017 to 2022, to investigate the extent of lead contamination in commercial foods and assess the health risk from dietary lead intake of the residents. Lead was measured in the samples using inductively coupled plasma mass spectrometry. Dietary exposure to lead was calculated based on the food consumption survey of Guangzhou residents in 2011, and the health risk of the population was evaluated using the margin of exposure (MOE) method. Lead was detected in 76.5% of the overall samples, with an average lead content of 29.4 µg kg⁻¹. The highest lead level was found in bivalves. The mean daily dietary lead intakes were 0.44, 0.34, 0.25, and 0.28 µg kg⁻¹ body weight (bw) day⁻¹ for the groups aged 3-6, 7-17, 18-59, and ≥ 60 years, respectively. Rice and rice products, leafy vegetables, and wheat flour and wheat products were identified as the primary sources of dietary lead exposure, accounting for 73.1% of the total. The MOE values showed a clear tendency: younger age groups had lower MOEs, and the 95% confidence ranges for the groups aged 3-6 and 7-17 began at 0.6 and 0.7, respectively, indicating a potential health risk for children, while those for the other age groups were all above 1.0. Continued efforts are needed to reduce dietary lead exposure in Guangzhou.
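The margin-of-exposure calculation described above divides a toxicological reference point by the estimated intake. A small sketch using the mean intakes reported in the abstract; the reference point (EFSA's BMDL01 of 0.50 µg/kg bw/day for developmental neurotoxicity) is an assumption, since the abstract does not state which benchmark the study applied:

```python
# Margin of exposure: MOE = toxicological reference point / estimated intake.
# BMDL01 below is an assumed reference point (EFSA, developmental
# neurotoxicity); the abstract does not name the benchmark used.
BMDL01 = 0.50  # µg/kg bw/day (assumed)

# Mean daily dietary lead intakes from the abstract, µg/kg bw/day
mean_intake = {"3-6 y": 0.44, "7-17 y": 0.34, "18-59 y": 0.25, ">=60 y": 0.28}

moe = {group: BMDL01 / intake for group, intake in mean_intake.items()}
for group, value in sorted(moe.items(), key=lambda kv: kv[1]):
    print(f"{group}: MOE = {value:.2f}")
```

An MOE at or below 1 means the estimated intake reaches the reference point; the sub-1 lower confidence bounds the abstract reports for children correspond to higher-percentile intakes, not the mean intakes used here.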


Subject(s)
Dietary Exposure , Food Contamination , Lead , Lead/analysis , China , Humans , Risk Assessment , Dietary Exposure/analysis , Food Contamination/analysis , Child , Adolescent , Child, Preschool , Adult , Young Adult , Middle Aged , Probability , Female , Male
20.
Ulster Med J ; 93(1): 18-23, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38707974

ABSTRACT

Verbal probability expressions such as 'likely' and 'possible' are commonly used to communicate uncertainty in diagnosis, treatment effectiveness, and the risk of adverse events. Probability terms that are interpreted consistently can be used to standardize risk communication. A systematic review was conducted of research studies that evaluated the numeric meanings of probability terms. Terms with consistent numeric interpretation across studies were selected and used to construct a Visual Risk Scale. Five probability terms showed reliable interpretation by laypersons and healthcare professionals in empirical studies. 'Very Likely' was interpreted as a 90% chance (range 80 to 95%); 'Likely/Probable,' 70% (60 to 80%); 'Possible,' 40% (30 to 60%); 'Unlikely,' 20% (10 to 30%); and 'Very Unlikely,' 10% (5 to 15%). The corresponding frequency terms were Very Frequently, Frequently, Often, Infrequently, and Rarely, respectively. Probability terms should be presented with their corresponding numeric ranges during discussions with patients. Numeric values should be presented as X-in-100 natural frequency statements, even for low values, and not as percentages, X-in-1000, X-in-Y, odds, fractions, 1-in-X, or as number needed to treat (NNT). A Visual Risk Scale was developed for use in clinical shared decision making.
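The five terms and their numeric interpretations map naturally to a lookup table. A small sketch (the table and function names are mine) that also emits the X-in-100 natural-frequency phrasing the review recommends:

```python
# Point estimates and ranges for the five reliably interpreted terms,
# taken from the review, with the recommended X-in-100 phrasing.
TERMS = {
    "Very Likely":     (90, (80, 95)),
    "Likely/Probable": (70, (60, 80)),
    "Possible":        (40, (30, 60)),
    "Unlikely":        (20, (10, 30)),
    "Very Unlikely":   (10, (5, 15)),
}

def natural_frequency(term):
    """Phrase a verbal probability term as an X-in-100 statement."""
    point, (lo, hi) = TERMS[term]
    return f"'{term}' means about {point} in 100 (range {lo} to {hi} in 100)"

print(natural_frequency("Possible"))
# → 'Possible' means about 40 in 100 (range 30 to 60 in 100)
```

The same table could drive a Visual Risk Scale display, pairing each term with its range rather than a bare percentage.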


Subject(s)
Communication , Probability , Humans , Risk Assessment/methods , Uncertainty , Physician-Patient Relations