Results 1-20 of 327

1.
J Transl Med ; 22(1): 16, 2024 01 04.
Article in English | MEDLINE | ID: mdl-38178182

ABSTRACT

BACKGROUND: The p value is the most common statistic reported in scientific research articles, yet the conventional significance threshold of 0.05 is unfounded. Many researchers have tried to provide a more reasonable threshold; some proposed a lower one, e.g., 0.005. However, none of the proposals has gained universal acceptance. Using the analogy between diagnostic tests with continuous results and statistical hypothesis tests, I present a method to calculate the most appropriate p value significance threshold using receiver operating characteristic (ROC) curve analysis. RESULTS: As with diagnostic tests, where the most appropriate cut-off value differs depending on the situation, there is no unique cut-off for the p value significance threshold. Unlike previous proposals, which mostly suggest lowering the threshold to a fixed value (e.g., from 0.05 to 0.005), the most appropriate threshold derived here is, in most instances, much lower than the conventional cut-off of 0.05 and varies from study to study and from statistical test to test, even within a single study. The proposed method yields the minimum weighted sum of type I and type II errors. CONCLUSIONS: Given the perplexity involved in using frequentist statistics correctly (dealing with different p value significance thresholds, even within a single study), the p value no longer seems a proper statistic for our research; it should be replaced by alternative methods, e.g., Bayesian methods.
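
To make the idea concrete, here is a minimal sketch (not the article's own implementation) of choosing the threshold that minimizes a weighted sum of type I and type II errors, assuming a one-sided z-test; the effect size, sample size, and error weights are illustrative assumptions:

```python
# Sketch: choose the p-value threshold minimizing w1*alpha + w2*beta(alpha)
# for a one-sided z-test with an assumed standardized effect. All parameter
# values below are illustrative, not taken from the article.
import numpy as np
from scipy.stats import norm

def optimal_alpha(effect, n, w_type1=1.0, w_type2=1.0):
    """Grid-search the alpha that minimizes the weighted error sum."""
    alphas = np.linspace(1e-6, 0.2, 20000)
    z_crit = norm.ppf(1 - alphas)                     # critical z per alpha
    beta = norm.cdf(z_crit - effect * np.sqrt(n))     # type II error per alpha
    cost = w_type1 * alphas + w_type2 * beta
    i = np.argmin(cost)
    return alphas[i], beta[i]

alpha, beta = optimal_alpha(effect=0.5, n=100)
print(f"optimal alpha = {alpha:.4f}, type II error = {beta:.4f}")
```

With a well-powered design such as this one, the cost-minimizing threshold lands well below 0.05, in line with the article's claim that the appropriate cut-off varies with the study rather than sitting at a fixed conventional value.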


Subject(s)
ROC Curve, Bayes Theorem
2.
Expert Rev Proteomics ; 21(7-8): 271-280, 2024.
Article in English | MEDLINE | ID: mdl-39152734

ABSTRACT

INTRODUCTION: Metaproteomics offers insights into the function of complex microbial communities and can reveal microbe-microbe and host-microbe interactions. Data-independent acquisition (DIA) mass spectrometry is an emerging technology that holds great potential for deep, accurate, and more reproducible metaproteomics, yet it still faces a series of challenges arising from the inherent complexity of metaproteomics and DIA data. AREAS COVERED: This review offers an overview of DIA metaproteomics approaches, covering aspects such as database construction, search strategy, and data analysis tools. Several current DIA metaproteomics studies are presented to illustrate the procedures, important ongoing challenges are highlighted, and future perspectives of DIA methods for metaproteomics analysis are discussed. Cited references were collected from Google Scholar and PubMed. EXPERT OPINION: Considering the inherent complexity of DIA metaproteomics data, data analysis strategies specifically designed for its interpretation are imperative. From this point of view, we anticipate that deep learning methods and de novo sequencing methods will become more prevalent in the future, potentially improving protein coverage in metaproteomics. Moreover, the advancement of metaproteomics also depends on the development of sample preparation methods, data analysis strategies, and related factors, which are key to unlocking its full potential.


Subject(s)
Mass Spectrometry, Proteomics, Proteomics/methods, Mass Spectrometry/methods, Humans, Microbiota
3.
Emerg Med J ; 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39271245

ABSTRACT

BACKGROUND: Although one objective of NHS 111 is to ease the strain on urgent and emergency care services, studies suggest the telephone triage service may be contributing to increased demand. Moreover, while parents and caregivers generally find NHS 111 satisfactory, concerns exist about its integration with the healthcare system and the appropriateness of its advice. This study aimed to analyse the advice provided in NHS 111 calls, the interval between the call and ED attendance, and the outcomes of such attendances made by children and young people (C&YP). METHODS: A retrospective cohort study was carried out of C&YP (aged ≤17 years) attending an ED in the Yorkshire and Humber region of the UK following contact with NHS 111 between 1 April 2016 and 31 March 2017. This linked-data study examined NHS 111 calls and ED outcomes. Lognormal mixture distributions were fitted to compare the times taken to attend the ED following calls. Logistic mixed effects regression models were used to identify predictors of low-acuity NHS 111-related ED attendances. RESULTS: The 348 401 NHS 111 calls studied primarily concerned children aged 0-4 years. Overall, 13.1% of calls were followed by an ED attendance, with a median arrival time of 51 minutes. Of the 34 664 calls advising ED attendance, 41% complied, arriving after a median of 38 minutes; 27% of these attendances were defined as low-acuity. Although most calls advising primary care were not followed by an ED attendance (93%), those seen in an ED generally attended later (median 102 minutes), with 23% defined as low-acuity. Younger age (<1 year) was a statistically significant predictor of low-acuity ED attendance following all call dispositions apart from home care. CONCLUSION: More tailored options for unscheduled healthcare may be needed for younger children. Both early low-acuity attendance and late high-acuity attendance following contact with NHS 111 could act as useful entry points for clinical audits of the telephone triage service.
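
As an illustration of the time-to-attendance modelling, the following sketch fits a two-component lognormal mixture by applying a Gaussian mixture to log-transformed times; the synthetic data and the component count are assumptions, not the study's data:

```python
# Sketch: fit a two-component lognormal mixture to call-to-ED-attendance
# times (minutes) as a Gaussian mixture on the log scale.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic arrival times: an "early" and a "late" attendance group.
times = np.concatenate([rng.lognormal(np.log(40), 0.5, 700),
                        rng.lognormal(np.log(150), 0.6, 300)])

gm = GaussianMixture(n_components=2, random_state=0).fit(
    np.log(times).reshape(-1, 1))
for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight={w:.2f}, median={np.exp(mu):.0f} min, sigma={np.sqrt(var):.2f}")
```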

4.
Emerg Med J ; 41(9): 563-566, 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-38834288

ABSTRACT

Electronic patient records (EPRs) are potentially valuable sources of data for service development or research but often contain large amounts of missing data. Complete case analysis and imputation of missing data seem like simple solutions, and are increasingly easy to perform in software packages, but they can easily distort data and give misleading results if used without an understanding of missingness. Knowing about patterns of missingness, and when to get expert data science (data engineering and analytics) help, will therefore be a fundamental future skill for emergency physicians. This will maximise the good and minimise the harm of the easy availability of large patient datasets created by the introduction of EPRs.
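
A small illustration of why the pattern of missingness matters: in the synthetic example below, values are more likely to be missing when they are high (missing not at random), so a complete-case mean is biased low. The variable and all numbers are invented for illustration:

```python
# Sketch: naive complete-case analysis misleads when data are missing
# not at random (MNAR). Synthetic heart rates; higher values are more
# likely to go unrecorded, so the observed mean underestimates the truth.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hr = rng.normal(90, 20, 10_000)                  # true heart rates
p_missing = 1 / (1 + np.exp(-(hr - 110) / 10))   # sicker -> recorded less often
observed = np.where(rng.random(hr.size) < p_missing, np.nan, hr)

df = pd.DataFrame({"heart_rate": observed})
print(f"true mean:          {hr.mean():.1f}")
print(f"complete-case mean: {df['heart_rate'].mean():.1f}")   # biased low
print(f"proportion missing: {df['heart_rate'].isna().mean():.1%}")
```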


Subject(s)
Electronic Health Records, Humans, Electronic Health Records/statistics & numerical data, Data Interpretation, Statistical, Emergency Medical Services/methods, Emergency Medical Services/standards
5.
Sensors (Basel) ; 24(11)2024 May 31.
Article in English | MEDLINE | ID: mdl-38894354

ABSTRACT

Utility as-built plans, which typically provide information about underground utilities' positions and spatial locations, are known to contain inaccuracies. Over the years, reliance on utility investigations using an array of sensing equipment has increased in an attempt to resolve utility as-built inaccuracies and mitigate the high rate of accidental underground utility strikes during excavation activities. Adopting data fusion in utility engineering and investigation practices has been shown to be effective in generating information with improved accuracy. However, the complexities of data interpretation and the associated prohibitive costs, especially for large-scale projects, are limiting factors. This paper addresses the problems of data interpretation, cost, and large-scale utility mapping with a novel framework that generates probabilistic inferences by fusing data from an automatically generated initial map with as-built data. The probabilistic inferences expose regions of high uncertainty, highlighting them as prime targets for further investigation. The proposed model comprises three main processes. First, the automatic initial map creation is a novel contribution supporting rapid utility mapping by subjecting identified utility appurtenances to utility inference rules. The second and third processes encompass fusing the created initial utility map with available knowledge from utility as-builts or historical satellite imagery data, and then evaluating the uncertainties using confidence value estimators. The proposed framework transcends the point estimation of buried utility locations in previous works by producing a final probabilistic utility map that reveals a confidence level attributed to each segment linking aboveground features. In this approach, the utility infrastructure is rapidly mapped at low cost, limiting the extent of more detailed utility investigations to low-confidence regions. In resisting obsolescence, another unique advantage of this framework is the dynamic nature of the mapping, which automatically updates information upon the arrival of new knowledge. This ultimately minimizes the problem of utility as-built accuracy dwindling over time.

6.
Anal Biochem ; 674: 115198, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37302777

ABSTRACT

Western blot (WB) analysis is widely used, but obtaining consistent results can be problematic, especially when using multiple gels. This study examines WB performance by explicitly applying a method commonly used to test analytical instrumentation. Test samples were lysates from RAW 264.7 murine macrophages treated with LPS to activate MAPK and NF-κB signaling targets. Samples from the pooled cell lysates placed in every lane on multiple gels were analyzed by WB for levels of p-ERK, ERK, IκBβ, and a non-target protein. Different normalization methods and sample groupings were applied to the density values, and the resulting coefficients of variation (CV) and ratios of maximal to minimal values (Max/Min) were compared. Ideally, with identical sample replicates, the CVs would be 0 and the Max/Min 1; deviation indicates variability introduced by the WB process. Common normalizations used to reduce analytical variance (total lane protein, % control, and p-ERK/ERK ratios) did not produce the lowest CV or Max/Min values. Normalization using the sum of target protein values combined with analytical replication reduced variability most effectively, yielding CV and Max/Min values as low as 5-10% and 1.1, respectively. These methods should allow reliable interpretation of complex experiments that require samples to be placed on multiple gels.
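
The following sketch illustrates the study's evaluation logic, computing CV and Max/Min for identical replicates with no normalization versus gel-wise sum normalization; the density values are synthetic stand-ins, not the study's measurements:

```python
# Sketch: compare normalization schemes for replicate western-blot densities
# by their coefficient of variation (CV) and max/min ratio.
import numpy as np

def cv(x):       return x.std(ddof=1) / x.mean() * 100
def max_min(x):  return x.max() / x.min()

# Raw target-band densities for identical pooled-lysate replicates on 3 gels.
raw = np.array([[1.00, 1.10, 0.95, 1.05],    # gel 1
                [1.40, 1.52, 1.33, 1.47],    # gel 2 (stronger exposure)
                [0.70, 0.76, 0.66, 0.73]])   # gel 3 (weaker exposure)

pooled = raw.ravel()
print(f"no normalization: CV={cv(pooled):.1f}%  Max/Min={max_min(pooled):.2f}")

# Normalize each lane by the sum of target values on its own gel, the
# scheme the study found most effective at reducing variability.
by_sum = (raw / raw.sum(axis=1, keepdims=True)).ravel()
print(f"sum-normalized:   CV={cv(by_sum):.1f}%  Max/Min={max_min(by_sum):.2f}")
```

Gel-wise sum normalization cancels the between-gel exposure differences, leaving only the within-gel lane variation, which is why the pooled CV drops sharply.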


Subject(s)
NF-kappa B, Signal Transduction, Animals, Mice, Western Blotting, Macrophages
7.
J Int Neuropsychol Soc ; 29(9): 885-892, 2023 11.
Article in English | MEDLINE | ID: mdl-36762654

ABSTRACT

OBJECTIVE: For decades, quantitative psychologists have recommended that authors report effect sizes to convey the magnitude and potential clinical relevance of statistical associations. However, fewer than one-third of neuropsychology articles published in the early 2000s reported effect sizes. This study re-examines the frequency and extent of effect size reporting in neuropsychology journal articles by manuscript section and over time. METHODS: A sample of 326 empirical articles was drawn from 36 randomly selected issues of six neuropsychology journals at 5-year intervals between 1995 and 2020. Four raters used a novel, reliable coding system to quantify the extent to which effect sizes were included in the major sections of all 326 articles. RESULTS: Findings showed medium-to-large increases in effect size reporting in the Methods and Results sections of neuropsychology journal articles that plateaued in recent years; however, there were only very small and nonsignificant changes in effect size reporting in the Abstract, Introduction, and Discussion sections. CONCLUSIONS: Authors in neuropsychology journals have markedly improved their effect size reporting in the core Methods and Results sections, but are still unlikely to consider these valuable metrics when motivating their study hypotheses and interpreting the conceptual and clinical implications of their findings. Recommendations are provided to encourage more widespread integration of effect sizes in neuropsychological research.


Subject(s)
Neuropsychology, Periodicals as Topic, Humans
8.
Anal Bioanal Chem ; 415(23): 5589-5604, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37468753

ABSTRACT

Lipidomics investigates the composition and function of lipids, typically employing blood or tissue samples as the primary study matrices. Hair has recently emerged as a potential complementary sample type to identify biomarkers in early disease stages and retrospectively document an individual's metabolic status due to its long detection window of up to several months prior to the time of sampling. However, the limited coverage of lipid profiling presented in previous studies has hindered its exploitation. This study aimed to evaluate the lipid coverage of hair using an untargeted liquid chromatography-high-resolution mass spectrometry lipidomics platform. Two distinct three-step exhaustive extraction experiments were performed using a hair metabolomics one-phase extraction technique that has been recently optimized, and the two-phase Folch extraction method which is recognized as the gold standard for lipid extraction in biological matrices. The applied lipidomics workflow improved hair lipid coverage, as only 99 species could be annotated using the one-phase extraction method, while 297 lipid species across six categories were annotated with the Folch method. Several lipids in hair were reported for the first time, including N-acyl amino acids, diradylglycerols, and coenzyme Q10. The study suggests that hair lipids are not solely derived from de novo synthesis in hair, but are also incorporated from sebum and blood, making hair a valuable matrix for clinical, forensic, and dermatological research. The improved understanding of the lipid composition and analytical considerations for retrospective analysis offers valuable insights to contextualize untargeted hair lipidomic analysis and facilitate the use of hair in translational studies.


Subject(s)
Lipidomics, Lipids, Lipidomics/methods, Retrospective Studies, Lipids/analysis, Chromatography, Liquid/methods, Hair/chemistry
9.
Mol Cell Proteomics ; 20: 100062, 2021.
Article in English | MEDLINE | ID: mdl-33640492

ABSTRACT

We celebrate the 10th anniversary of the launch of the HUPO Human Proteome Project (HPP) and its major milestone of confident detection of at least one protein from each of 90% of the predicted protein-coding genes, based on the output of the entire proteomics community. The Human Genome Project reached a similar decadal milestone 20 years ago. The HPP has engaged proteomics teams around the world, strongly influenced data-sharing, enhanced quality assurance, and issued stringent guidelines for claims of detecting previously "missing proteins." This invited perspective complements papers on "A High-Stringency Blueprint of the Human Proteome" and "The Human Proteome Reaches a Major Milestone" in special issues of Nature Communications and Journal of Proteome Research, respectively, released in conjunction with the October 2020 virtual HUPO Congress and its celebration of the 10th anniversary of the HUPO HPP.


Subject(s)
Proteome, Societies, Scientific/history, Data Accuracy, History, 21st Century, Humans, Information Dissemination
10.
J Foot Ankle Surg ; 62(1): 191-196, 2023.
Article in English | MEDLINE | ID: mdl-36182644

ABSTRACT

The fragility index (FI) is a metric used to interpret the results of randomized controlled trials (RCTs); it describes the number of subjects that would need to be switched from event to non-event for a result to no longer be significant. Studies analyzing the FI of RCTs in various orthopedic subspecialties have shown the trials to be largely underpowered and highly fragile. However, the FI has not been assessed in foot and ankle RCTs. The MEDLINE and Embase online databases were searched from 1/1/2011 through 11/19/2021 for RCTs involving foot and ankle conditions. The FI, fragility quotient (FQ), and the difference between the FI and the number of subjects lost to follow-up were calculated. Spearman correlation was performed to determine the relationship between sample size and FI. Overall, 1262 studies were identified, of which 18 were included in the final analysis. The median sample size was 65 (interquartile range [IQR] 57-95.5), the median FI was 2 (IQR 1-2.5), and the median FQ was 0.026 (IQR 0.012-0.033). Ten of 15 (67%) studies with non-zero FI values had FI values less than the number of subjects lost to follow-up. There was a linear association between FI and sample size (R² = 0.495, p = .031). This study demonstrates that RCTs in the field of foot and ankle surgery are highly fragile, similar to those in other orthopedic subspecialties.
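
For readers unfamiliar with the metric, here is a hedged sketch of one common way to compute an FI for a two-arm binary outcome using Fisher's exact test; the particular event-switching convention (flipping non-events to events in the lower-event arm) and the counts are illustrative assumptions:

```python
# Sketch: fragility index of a two-arm trial with a binary outcome.
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Flip non-events to events in the lower-event arm until p >= alpha."""
    if fisher_exact([[events_a, n_a - events_a],
                     [events_b, n_b - events_b]])[1] >= alpha:
        return 0  # not significant to begin with
    lo_e, lo_n, hi_e, hi_n = ((events_a, n_a, events_b, n_b)
                              if events_a <= events_b
                              else (events_b, n_b, events_a, n_a))
    fi = 0
    while lo_e < lo_n:
        lo_e += 1
        fi += 1
        _, p = fisher_exact([[lo_e, lo_n - lo_e],
                             [hi_e, hi_n - hi_e]])
        if p >= alpha:
            return fi
    return fi

# Illustrative trial: 5/50 vs 15/50 events; only a couple of flipped
# outcomes are enough to lose significance.
print(fragility_index(events_a=5, n_a=50, events_b=15, n_b=50))
```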


Subject(s)
Ankle, Humans, Ankle/surgery, Randomized Controlled Trials as Topic, Sample Size, Databases, Factual
11.
Aust Crit Care ; 36(5): 782-786, 2023 09.
Article in English | MEDLINE | ID: mdl-36123238

ABSTRACT

OBJECTIVE: Reliable and accurate temperature assessment is fundamental for clinical monitoring; noninvasive thermometers of various designs are widely used in intensive care units, sometimes without a specific assessment of their suitability and interchangeability. This study evaluated the agreement of four noninvasive thermometers with pulmonary artery catheter temperature. METHODS: This prospective method comparison study was conducted in an Australian adult intensive care unit. One hundred postoperative adult cardiothoracic surgery patients who had a pulmonary artery catheter (Edwards Lifesciences) in situ were identified. The temperature reading from the pulmonary artery catheter was compared to contemporaneous measurements returned by four different thermometers: Temporal Artery (TA, Technimed), Per Axilla (AXILLA, Welch Allyn), Tympanic (TYMP, Covidien), and NexTemp® (NEXT, Medical Indicators; used per axilla). The time required to obtain each noninvasive temperature measurement was recorded. RESULTS: Agreement between each noninvasive temperature and the pulmonary artery catheter standard was assessed using summary statistics and the Bland-Altman method comparison approach. A clinically acceptable maximum difference from the standard was defined as ±0.5 °C. Temperature agreement with the pulmonary artery standard (mean difference °C [95% limits of agreement °C]) was greatest for TYMP (-0.20 [-0.92 to 0.52]), intermediate for AXILLA (-0.37 [-1.3 to 0.59]) and NEXT (-0.71 [-1.7 to 0.27]), and least for TA (-0.60 [-2.0 to 0.81]). The proportions of measurements within ±0.5 °C of the standard were TYMP (81%), AXILLA (63%), TA (45%), and NEXT (30%). The time to obtain measurements varied: the TYMP and TA estimates were immediate, AXILLA took a mean of 40 s (standard deviation = 11 s), and NEXT results were read at the manufacturer-recommended 3-min point. CONCLUSIONS: Tympanic thermometers showed the closest agreement with the pulmonary artery standard. Deviations of more than 0.5 °C from that standard were relatively common with all noninvasive devices.
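
A minimal sketch of the Bland-Altman computation used above: mean difference, 95% limits of agreement, and the proportion of paired readings within the ±0.5 °C acceptability band. The paired temperatures are synthetic examples, not study data:

```python
# Sketch: Bland-Altman agreement between a noninvasive thermometer and
# the pulmonary-artery reference.
import numpy as np

pa  = np.array([36.8, 37.1, 36.5, 37.4, 36.9, 37.0, 36.7, 37.2])  # reference (C)
tym = np.array([36.6, 37.0, 36.2, 37.1, 36.8, 36.8, 36.5, 37.0])  # tympanic (C)

diff = tym - pa
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)     # 95% limits of agreement
within = np.mean(np.abs(diff) <= 0.5)          # clinically acceptable band

print(f"mean difference: {bias:.2f} C")
print(f"95% limits of agreement: {loa[0]:.2f} to {loa[1]:.2f} C")
print(f"within +/-0.5 C of reference: {within:.0%}")
```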


Subject(s)
Body Temperature, Thermometers, Adult, Humans, Pilot Projects, Temperature, Australia
12.
Eur J Epidemiol ; 37(4): 429-436, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35653006

ABSTRACT

The German National Cohort (NAKO) is an ongoing, prospective multicenter cohort study, which started recruitment in 2014 and includes more than 205,000 women and men aged 19-74 years. The study data will be available to the global research community for analyses. Although the ultimate decision about the analytic methods will be made by the respective investigator, in this paper we provide the basis for a harmonized approach to the statistical analyses in the NAKO. We discuss specific aspects of the study (e.g., data collection, weighting to account for the sampling design), but also give general recommendations which may apply to other large cohort studies as well.


Subject(s)
Research Design, Cohort Studies, Female, Humans, Longitudinal Studies, Male, Prospective Studies
13.
BMC Bioinformatics ; 22(1): 610, 2021 Dec 23.
Article in English | MEDLINE | ID: mdl-34949163

ABSTRACT

BACKGROUND: The interpretation of results from transcriptome profiling experiments via RNA sequencing (RNA-seq) can be a complex task, where the essential information is distributed among different tabular and list formats: normalized expression values, results from differential expression analysis, and results from functional enrichment analyses. A number of tools and databases are widely used for the purpose of identification of relevant functional patterns, yet often their contextualization within the data and results at hand is not straightforward, especially if these analytic components are not combined together efficiently. RESULTS: We developed the GeneTonic software package, which serves as a comprehensive toolkit for streamlining the interpretation of functional enrichment analyses, by fully leveraging the information of expression values in a differential expression context. GeneTonic is implemented in R and Shiny, leveraging packages that enable HTML-based interactive visualizations for executing drill-down tasks seamlessly, viewing the data at a level of increased detail. GeneTonic is integrated with the core classes of existing Bioconductor workflows, and can accept the output of many widely used tools for pathway analysis, making this approach applicable to a wide range of use cases. Users can effectively navigate interlinked components (otherwise available as flat text or spreadsheet tables), bookmark features of interest during the exploration sessions, and obtain at the end a tailored HTML report, thus combining the benefits of both interactivity and reproducibility. CONCLUSION: GeneTonic is distributed as an R package in the Bioconductor project (https://bioconductor.org/packages/GeneTonic/) under the MIT license. Offering both bird's-eye views of the components of transcriptome data analysis and the detailed inspection of single genes, individual signatures, and their relationships, GeneTonic aims at simplifying the process of interpretation of complex and compelling RNA-seq datasets for many researchers with different expertise profiles.


Subject(s)
RNA, Software, Base Sequence, Reproducibility of Results, Sequence Analysis, RNA
14.
J Proteome Res ; 20(11): 4915-4918, 2021 11 05.
Article in English | MEDLINE | ID: mdl-34597050

ABSTRACT

Current single-cell mass spectrometry (MS) methods can quantify thousands of peptides per single cell while detecting peptide-like features that may support the quantification of 10-fold more peptides. This 10-fold gain might be attained by innovations in data acquisition and interpretation, even with existing instrumentation. This perspective discusses possible directions for such innovations, with the aim of stimulating community efforts to increase the coverage and quantitative accuracy of single-cell proteomics while simultaneously decreasing missing data. Parallel improvements in instrumentation, sample preparation, and peptide separation will afford additional gains. Together, these synergistic routes for innovation project rapid growth in the capabilities of MS-based single-cell protein analysis. These gains will directly empower applications of single-cell proteomics in biomedical research.


Subject(s)
Proteomics, Tandem Mass Spectrometry, Peptides, Proteins, Proteomics/methods, Single-Cell Analysis
15.
J Proteome Res ; 20(4): 2021-2027, 2021 04 02.
Article in English | MEDLINE | ID: mdl-33657806

ABSTRACT

Chemical cross-linking mass spectrometry has become a popular tool in structural biology. Although several algorithms exist that efficiently analyze data-dependent mass spectrometric data, an algorithm to identify and quantify intermolecular cross-links located at the interaction interface of homodimer molecules was missing. The algorithm in LinX utilizes high mass accuracy for ion identification. In contrast to standard data-dependent analysis, LinX enables the elucidation of cross-linked peptides originating from the interaction interface of homodimers labeled by 14N/15N, including their ratio, as well as cross-links from protein-nucleic acid complexes. The software is written in Java; its source code and a detailed user's guide are freely available at https://github.com/KukackaZ/LinX or https://ms-utils.org/LinX. Data are accessible via the ProteomeXchange server with the dataset identifier PXD023522.


Subject(s)
Peptides, Software, Algorithms, Cross-Linking Reagents, Mass Spectrometry
16.
Cancer ; 127(23): 4348-4355, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34424538

ABSTRACT

In research, policy, and practice, continuous variables are often categorized. Statisticians have generally advised against categorization for many reasons, such as loss of information and precision as well as distortion of estimated statistics. Here, a different kind of problem with categorization is considered: the idea that, for a given continuous variable, there is a unique set of cut points that is the objectively correct or best categorization. It is shown that this is unlikely to be the case because categorized variables typically exist in webs of statistical relationships with other variables. The choice of cut points for a categorized variable can influence the values of many statistics relating that variable to others. This essay explores the substantive trade-offs that can arise between different possible cut points to categorize a continuous variable, making it difficult to say that any particular categorization is objectively best. Limitations of different approaches to selecting cut points are discussed. Contextual trade-offs may often be an argument against categorization. At the very least, such trade-offs mean that research inferences, or decisions about policy or practice, that involve categorized variables should be framed and acted upon with flexibility and humility. LAY SUMMARY: In research, policy, and practice, continuous variables are often turned into categorical variables with cut points that define the boundaries between categories. This involves choices about how many categories to create and what cut-point values to use. This commentary shows that different choices about which cut points to use can lead to different sets of trade-offs across multiple statistical relationships between the categorized variable and other variables. These trade-offs mean that no single categorization is objectively best or correct. This context is critical when one is deciding whether and how to categorize a continuous variable.
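
A short simulation makes the trade-off concrete: different cut points for the same continuous variable strengthen its association with one outcome while weakening its association with another, so no single categorization is best for both. The variables and cut points below are illustrative assumptions:

```python
# Sketch: the choice of cut point shifts a categorized variable's estimated
# relationships with other variables in opposite directions.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(0, 1, n)                    # continuous variable to categorize
y1 = 0.5 * x + rng.normal(0, 1, n)         # outcome related linearly to x
y2 = (x > 1.2).astype(float) + rng.normal(0, 1, n)  # outcome driven by high x

for cut in (-0.5, 0.0, 0.5, 1.0):
    x_cat = (x > cut).astype(float)
    r1 = np.corrcoef(x_cat, y1)[0, 1]
    r2 = np.corrcoef(x_cat, y2)[0, 1]
    print(f"cut={cut:+.1f}:  corr with y1={r1:.3f}  corr with y2={r2:.3f}")

# A median split maximizes the association with y1, while a high cut point
# maximizes the association with y2: each choice trades one relationship
# against the other, which is the essay's central point.
```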

17.
Br J Clin Pharmacol ; : 4173-4182, 2021 Mar 26.
Article in English | MEDLINE | ID: mdl-33769597

ABSTRACT

AIM: To describe the trend in the prevalence of statistical inference in three influential clinical pharmacology journals. METHODS: We applied a computer-based algorithm to abstracts of three clinical pharmacology journals published from 1976 to 2016 to identify statistical inference and its subtypes. Furthermore, we manually reviewed a random sample of 300 articles to assess the algorithm's performance in finding statistical inference in abstracts and as a screening tool for the presence or absence of statistical inference in full text. RESULTS: The algorithm identified statistical inference in 59% of article abstracts (13,375/22,516 [mid-p 95% CI, 59%-60%]). The percentage of abstracts with statistical inference was similar in 1976 and 2016: 48% (179/377 [mid-p 95% CI, 42%-52%]) versus 49% (386/791 [mid-p 95% CI, 45%-52%]). Statistical reporting patterns varied among journals. Among abstracts containing any statistical inference in the publications from 1976 to 2016, null-hypothesis significance testing was the most prevalent type reported. The algorithm had high sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for finding statistical inference in abstracts. While the PPV for predicting statistical inference in full text (including abstract, text, tables, and figures) was high, the NPV was low. CONCLUSION: Despite journal editorials and statistical associations' guidelines, most authors focused on testing rather than estimation. In the future, better statistical reporting might be achieved by improving authors' statistical knowledge and by adding statistical guidance to journals' instructions for authors, to the extent that editors want their statistical inference preferences incorporated into submitted manuscripts.
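
The study's algorithm is not reproduced here, but a rule-based detector of this kind can be sketched with a few regular expressions over abstract text; the patterns and category names below are illustrative assumptions, not the published algorithm:

```python
# Sketch: rule-based detection of statistical-inference subtypes in
# abstract text, in the spirit of (but not identical to) the study's
# computer-based algorithm.
import re

PATTERNS = {
    "nhst":     re.compile(r"\bp\s*[<>=]\s*0?\.\d+|statistically significant",
                           re.I),
    "estimate": re.compile(r"\b\d{1,2}(\.\d+)?%?\s*(ci|confidence interval)\b",
                           re.I),
    "bayesian": re.compile(r"\bposterior\b|\bbayes factor\b|\bcredible interval\b",
                           re.I),
}

def classify(abstract: str) -> set[str]:
    """Return the types of statistical inference detected in an abstract."""
    return {name for name, pat in PATTERNS.items() if pat.search(abstract)}

print(classify("Drug A reduced pain scores (p < 0.05) vs placebo."))   # {'nhst'}
print(classify("The difference was 4.2 mm (95% CI 1.1 to 7.3)."))      # {'estimate'}
```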

18.
Rheumatol Int ; 41(1): 43-55, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33201265

ABSTRACT

Statistical presentation of data is key to understanding patterns and drawing inferences about biomedical phenomena. In this article, we provide an overview of basic statistical considerations for data analysis. Assessing whether tested parameters are normally distributed is important for deciding whether to employ parametric or non-parametric analyses. The nature of the variables (continuous or discrete) also determines the analysis strategy. Normally distributed data can be presented using means with standard deviations (SD), whereas non-parametric measures such as medians (with range or interquartile range) should be used for non-normal distributions. While the SD provides a measure of data dispersion, the standard error is used to estimate the 95% confidence interval, i.e., the range likely to contain the true population mean. Univariable analyses should be directed at reporting effect sizes as well as testing a priori hypotheses (i.e., null hypothesis significance testing), and should be followed by suitably adjusted multivariable analyses such as linear or logistic regression. Linear correlation statistics can help assess whether two variables change in tandem; concordance rather than correlation should be used to compare outcome measures of disease states. Prior sample size calculation to ensure adequate study power is recommended for studies that have analogues in the literature with reported SDs. Statistical considerations for systematic reviews should include the appropriate use of meta-analysis, assessment of heterogeneity, assessment of publication bias when there are more than ten studies, and quality assessment of the included studies. Since statistical errors are responsible for a significant proportion of retractions, appropriate statistical analysis is mandatory during study planning and data analysis.
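
A brief sketch of the SD versus standard error distinction described above, computing both alongside a t-based 95% confidence interval for the mean and a normality check; the sample values are illustrative:

```python
# Sketch: dispersion (SD) vs precision of the mean (SEM and 95% CI),
# plus a normality check guiding parametric vs non-parametric choices.
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.3, 5.0, 5.6, 5.2])

mean = x.mean()
sd = x.std(ddof=1)                       # dispersion of individual values
sem = sd / np.sqrt(x.size)               # precision of the estimated mean
t = stats.t.ppf(0.975, df=x.size - 1)    # t multiplier for a small sample
ci = (mean - t * sem, mean + t * sem)    # 95% CI for the population mean

print(f"mean={mean:.2f}, SD={sd:.2f}, SEM={sem:.2f}")
print(f"95% CI for the mean: {ci[0]:.2f} to {ci[1]:.2f}")
print(f"Shapiro-Wilk normality p = {stats.shapiro(x).pvalue:.2f}")
```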


Subject(s)
Data Interpretation, Statistical, Statistical Models, Research Design/standards, Humans, Observational Studies as Topic, Rheumatology/standards, Systematic Reviews as Topic
19.
Scand J Prim Health Care ; 39(4): 448-458, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34585629

ABSTRACT

OBJECTIVE: Machine learning (ML) is expected to play an increasing role within primary health care (PHC) in coming years. No peer-reviewed studies exist that evaluate the diagnostic accuracy of ML models compared with general practitioners (GPs). The aim of this study was to evaluate the diagnostic accuracy of an ML classifier on primary headache diagnoses in PHC, compare its performance to GPs, and examine the most impactful signs and symptoms when making a prediction. DESIGN: A retrospective study of diagnostic accuracy, using electronic health records from the database of the Primary Health Care Service of the Capital Area (PHCCA) in Iceland. SETTING: Fifteen primary health care centers of the PHCCA. SUBJECTS: All patients who consulted a physician from 1 January 2006 to 30 April 2020 and received one of the selected diagnoses. MAIN OUTCOME MEASURES: Sensitivity, specificity, positive predictive value, Matthews correlation coefficient, receiver operating characteristic (ROC) curve, and area under the ROC curve (AUROC) score for primary headache diagnoses, as well as Shapley Additive Explanations (SHAP) values of the ML classifier. RESULTS: The classifier outperformed the GPs on all metrics except specificity. The SHAP values indicate that the classifier uses the same signs and symptoms (features) as a physician would when distinguishing between headache diagnoses. CONCLUSION: In a retrospective comparison, the diagnostic accuracy of the ML classifier for primary headache diagnoses was superior to that of GPs. According to SHAP values, the ML classifier relies on the same signs and symptoms as a physician when making a diagnostic prediction. KEY POINTS: Little is known about the diagnostic accuracy of machine learning (ML) in the context of primary health care, despite its considerable potential to aid in clinical work. This novel research sheds light on the diagnostic accuracy of ML in a clinical context, as well as the interpretation of its predictions. If the vast potential of ML is to be utilized in primary health care, its performance, safety, and inner workings need to be understood by clinicians.
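
The reported metrics can be reproduced for any binary classifier; the sketch below computes them with scikit-learn on a synthetic stand-in task (the data, model, and features are assumptions, not the study's classifier):

```python
# Sketch: the evaluation metrics reported in the study, computed on an
# illustrative binary classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, matthews_corrcoef, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
prob = clf.predict_proba(X_te)[:, 1]

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"sensitivity: {tp / (tp + fn):.3f}")
print(f"specificity: {tn / (tn + fp):.3f}")
print(f"PPV:         {tp / (tp + fp):.3f}")
print(f"MCC:         {matthews_corrcoef(y_te, pred):.3f}")
print(f"AUROC:       {roc_auc_score(y_te, prob):.3f}")
# SHAP values (e.g., shap.TreeExplainer(clf).shap_values(X_te)) would then
# attribute each prediction to individual features, as done in the study.
```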


Subject(s)
Artificial Intelligence, General Practitioners, Humans, Machine Learning, ROC Curve, Retrospective Studies
20.
Sensors (Basel) ; 21(20)2021 Oct 17.
Article in English | MEDLINE | ID: mdl-34696098

ABSTRACT

Strain data from structural health monitoring hold considerable untapped potential, because they reflect stress peaks and fatigue and are especially sensitive to local stress redistribution, which likely indicates damage in the vicinity of the sensor. To decouple structural damage from masking effects caused by operational conditions, and thereby eliminate their adverse impact on strain-based damage detection, this paper analyzes small time-scale structural events, i.e., short-term dynamic strain responses, using unsupervised modeling. A two-step approach to successively processing the raw strain monitoring data in a sliding time window is presented, consisting of a wavelet-based initial feature extraction step and a decoupling step that yields damage indicators. Principal component analysis and a subspace projection method based on a low-rank property are adopted as two alternative decoupling methodologies. The approach's feasibility and robustness are substantiated by analyzing strain monitoring data from a customized truss experiment, successfully removing the masking effects of operating loads and identifying local damage even in situations with missing data and limited measuring points. By comparing the performance of the two optional decoupling methods, which follow distinct rationales, this work also sheds light on the merit of a low-rank property for separating structural damage from masking effects.
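
A minimal sketch of the PCA-based decoupling idea: fit principal components on data from the healthy state so they capture operational variation, then use the reconstruction residual as a damage indicator. The synthetic truss-like data and the number of retained components are assumptions:

```python
# Sketch: PCA-based decoupling of operational effects from strain features.
# Leading components capture load variation; the reconstruction residual
# serves as a per-sample damage indicator.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n, sensors = 500, 12
loads = rng.normal(0, 1, (n, 2))                # hidden operating conditions
mixing = rng.normal(0, 1, (2, sensors))
strain = loads @ mixing + 0.05 * rng.normal(0, 1, (n, sensors))
strain[400:, 3] += 0.4                          # local "damage" at sensor 3

pca = PCA(n_components=2).fit(strain[:300])     # train on healthy data only
residual = strain - pca.inverse_transform(pca.transform(strain))
indicator = np.linalg.norm(residual, axis=1)    # damage indicator per sample

print(f"healthy mean indicator: {indicator[:300].mean():.3f}")
print(f"damaged mean indicator: {indicator[400:].mean():.3f}")
```

Because the damage-induced offset lies largely outside the load-driven subspace, it survives in the residual while the operational variation is projected away.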


Subject(s)
Prospective Studies, Principal Component Analysis