Results 1 - 19 of 19
1.
Indian J Med Res ; 157(4): 353-357, 2023 04.
Article in English | MEDLINE | ID: mdl-37282397

ABSTRACT

Background & objectives: Due to a lack of appropriate statistical knowledge, published research articles in biomedical research contain various errors related to the design, analysis and interpretation of results. If research contains statistical errors, then however costly it was to conduct, it may be of no use and the purpose of the investigation is defeated. Many biomedical research articles published in peer-reviewed journals retain several such statistical errors and flaws. This study aimed to examine the trend and status of the application of statistics in biomedical research articles. Study design, sample size estimation and statistical measures are crucial components of a study; these points were evaluated in published original research articles to understand the use or misuse of statistical tools. Methods: Three hundred original research articles from the latest issues of 37 selected journals were reviewed. These journals were from five internationally recognized publication groups (CLINICAL KEY, BMJ Group, WILEY, CAMBRIDGE and OXFORD) accessible through the online library of SGPGI, Lucknow, India. Results: Among the articles assessed, 85.3 per cent (n=256) were observational and 14.7 per cent (n=44) were interventional studies. In 93 per cent (n=279) of the articles, the sample size estimation was not reproducible. Simple random sampling was rarely encountered; none of the articles adjusted for design effect, and only five used a randomization test. The testing of the assumption of normality was mentioned in only four studies before parametric tests were applied. Interpretation & conclusions: To present biomedical research results with reliable and precise estimates based on data, the role of engaging statistical experts needs to be appreciated. Journals must have standard rules for reporting study design, sample size and data analysis tools.
Careful attention is needed when applying any statistical procedure: it will help readers not only to trust published articles but also to rely on the inferences those articles draw.


Subject(s)
Biomedical Research, Research Design, Humans, Data Collection, India
2.
J Comput Chem ; 44(14): 1347-1359, 2023 May 30.
Article in English | MEDLINE | ID: mdl-36811192

ABSTRACT

Analysis of the mean squared displacement of species k, ⟨r_k²⟩, as a function of simulation time t constitutes a powerful method for extracting, from a molecular-dynamics (MD) simulation, the tracer diffusion coefficient D_k*. The statistical error in D_k* is seldom considered, and when it is, the error is generally underestimated. In this study, we examined the statistics of ⟨r_k²⟩(t) curves generated by solid-state diffusion by means of kinetic Monte Carlo sampling. Our results indicate that the statistical error in D_k* depends, in a strongly interrelated way, on the simulation time, the cell size, and the number of relevant point defects in the simulation cell. Reducing our results to one key quantity, the number of k particles that have jumped at least once, we derive a closed-form expression for the relative uncertainty in D_k*. We confirm the accuracy of our expression through comparisons with self-generated MD diffusion data. With the expression, we formulate a set of simple rules that encourage the efficient use of computational resources in MD simulations.
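The extraction step this abstract builds on, fitting D_k* from the slope of the mean squared displacement versus time, can be sketched as follows. This is a minimal illustration, assuming a 1-D lattice random walk in place of a real MD or kMC trajectory; the particle and step counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an unbiased 1-D lattice random walk for N independent tracer
# particles (a stand-in for MD/kMC trajectories of species k).
n_particles, n_steps = 2000, 500
steps = rng.choice([-1.0, 1.0], size=(n_particles, n_steps))
positions = np.cumsum(steps, axis=1)          # x_k(t) for each particle

# Mean squared displacement <r_k^2>(t), averaged over particles.
msd = np.mean(positions**2, axis=0)
t = np.arange(1, n_steps + 1)

# Einstein relation in d dimensions: <r^2>(t) = 2*d*D*t.
# Fit the slope over the tail of the curve, where the behavior is diffusive.
d = 1
slope = np.polyfit(t[n_steps // 2:], msd[n_steps // 2:], 1)[0]
D_est = slope / (2 * d)
```

For this walk the exact value is D = 0.5 (unit step variance per unit time), so the fitted estimate should land close to that; the scatter of `D_est` across repeated runs is precisely the statistical error the abstract analyzes.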

3.
Data Brief ; 42: 108240, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35592769

ABSTRACT

In practice, field measurements often contain missing data due to several dynamic factors. However, complete data about a given environment are key to characterizing the radio features of the terrain for a high quality of service. To address this problem, field data were collected from a dense urban environment, and the missing parameters were predicted using the Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) algorithm. The field measurements were taken around Victoria Island and Ikoyi in Lagos, Nigeria. The test equipment comprised a Global Positioning System (GPS) receiver and a Fourth Generation (4G) Long Term Evolution (LTE) modem equipped with a 2×2 MIMO antenna, employing 64 Quadrature Amplitude Modulation (QAM). The modem was installed on a personal computer and assembled inside a test vehicle driven at a near-constant speed of 30 km/h to minimize possible Doppler effects. Specifically, the test equipment records 67 LTE parameters at 1 s intervals, including the time and coordinates of the mobile station. Thirty-two parameters were logged at 42,498 instances, corresponding to 11 h, 48 min and 18 s of data logging on the mobile terminal. Sixteen important 4G LTE parameters were extracted and analyzed. The statistical errors were calculated both when the missing values were excluded from the analyses and when they were filled in using the PCHIP algorithm. In particular, this update paper estimates the missing values of critical network parameters using the PCHIP algorithm, which was not covered in the original article. Also derived are the error statistics between the data (histograms) and the corresponding probability density function curves, both for the measured data with missing values and for the data with the missing values filled in using the PCHIP algorithm. Additionally, the accuracy of the PCHIP algorithm was analyzed using standard statistical error analysis.
The update article tests more network parameters than the original article, which presented only basic statistics for fewer network parameters. Overall, the results indicate that only the parameters that measure throughput follow the half-normal distribution, while the others follow the normal distribution.
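The gap-filling step can be reproduced with SciPy's implementation of the same algorithm. The short drive-test log below is invented for illustration; only the use of `PchipInterpolator` reflects the article's method.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical drive-test log: time (s) vs. a received-signal metric (dBm),
# with gaps (NaN) where the modem failed to record a sample.
t = np.arange(10.0)
rsrp = np.array([-80.0, -81.5, np.nan, -84.0, -85.2,
                 np.nan, np.nan, -88.0, -88.5, -89.0])

mask = ~np.isnan(rsrp)
interp = PchipInterpolator(t[mask], rsrp[mask])   # shape-preserving cubic
filled = np.where(mask, rsrp, interp(t))          # fill only the gaps
```

Because PCHIP is monotonicity-preserving, interpolated values stay between their neighbors on monotone stretches of the signal, avoiding the overshoot a plain cubic spline can produce.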

4.
Behav Res Methods ; 54(6): 3100-3117, 2022 12.
Article in English | MEDLINE | ID: mdl-35233752

ABSTRACT

In a sequential hypothesis test, the analyst checks at multiple steps during data collection whether sufficient evidence has accrued to make a decision about the tested hypotheses. As soon as sufficient information has been obtained, data collection is terminated. Here, we compare two sequential hypothesis testing procedures that have recently been proposed for use in psychological research: Sequential Probability Ratio Test (SPRT; Psychological Methods, 25(2), 206-226, 2020) and the Sequential Bayes Factor Test (SBFT; Psychological Methods, 22(2), 322-339, 2017). We show that although the two methods have different philosophical roots, they share many similarities and can even be mathematically regarded as two instances of an overarching hypothesis testing framework. We demonstrate that the two methods use the same mechanisms for evidence monitoring and error control, and that differences in efficiency between the methods depend on the exact specification of the statistical models involved, as well as on the population truth. Our simulations indicate that when deciding on a sequential design within a unified sequential testing framework, researchers need to balance the needs of test efficiency, robustness against model misspecification, and appropriate uncertainty quantification. We provide guidance for navigating these design decisions based on individual preferences and simulation-based design analyses.
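The evidence-monitoring mechanism shared by these procedures can be illustrated with a minimal Wald-style SPRT for a normal mean with known variance. This is a generic sketch, not the specific parameterization of either cited paper; the hypothesized means, error rates, and data stream are all hypothetical.

```python
import math
import random

def sprt(data_stream, mu0=0.0, mu1=0.5, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mu = mu0 vs H1: mu = mu1 with known sigma.
    Returns the decision and the number of observations consumed."""
    lower = math.log(beta / (1 - alpha))      # accept H0 at or below this
    upper = math.log((1 - beta) / alpha)      # accept H1 at or above this
    llr = 0.0
    n = 0
    for n, x in enumerate(data_stream, start=1):
        # Log-likelihood-ratio increment for one Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "inconclusive", n

random.seed(1)
decision, n_used = sprt(random.gauss(0.5, 1.0) for _ in range(10000))
```

Data collection stops as soon as the accumulated log-likelihood ratio crosses either boundary, which is exactly the early-termination behavior the abstract describes; an SBFT replaces the likelihood ratio with a Bayes factor and uses analogous thresholds.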


Subject(s)
Research Design, Humans, Bayes Theorem
5.
Front Mol Biosci ; 8: 694130, 2021.
Article in English | MEDLINE | ID: mdl-34124166

ABSTRACT

The reliability and usefulness of molecular dynamics simulations of equilibrium processes rest on their statistical precision and on their capability to generate conformational ensembles in agreement with available experimental knowledge. Metadynamics Metainference (M&M), coupling molecular dynamics with the enhanced sampling ability of Metadynamics and with the ability of Metainference to integrate experimental information, can in principle achieve both goals. Here we show that three different Metadynamics setups provide converged estimates of the populations of the three states populated by a model peptide. Errors are estimated correctly by block averaging, but higher precision is obtained by performing independent replicates. One effect of Metadynamics is to dramatically decrease the number of effective frames resulting from the simulations; this is relevant for M&M, where the number of replicas should be large enough to capture the conformational heterogeneity behind the experimental data. Our simulations also allow us to propose that monitoring the relative error associated with conformational averaging can help to determine the minimum number of replicas to be simulated in the context of M&M simulations. Altogether, our data provide useful indications on how to generate sound conformational ensembles in agreement with experimental data.
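Block averaging, the error-estimation technique mentioned above, can be illustrated on synthetic data. The sketch below uses an AR(1) series as a stand-in for a correlated simulation observable; the series length, correlation strength, and block count are arbitrary choices, not values from the study.

```python
import numpy as np

def block_std_error(x, n_blocks):
    """Standard error of the mean of a correlated series, estimated by
    averaging over non-overlapping blocks (block averaging)."""
    x = np.asarray(x, dtype=float)
    block_size = len(x) // n_blocks
    trimmed = x[: n_blocks * block_size]
    block_means = trimmed.reshape(n_blocks, block_size).mean(axis=1)
    # Once blocks are longer than the correlation time, the block means are
    # approximately independent and the usual SEM formula applies to them.
    return block_means.std(ddof=1) / np.sqrt(n_blocks)

rng = np.random.default_rng(2)
# AR(1) series: a simple model of a time-correlated observable.
n, phi = 20000, 0.9
noise = rng.normal(size=n)
series = np.empty(n)
series[0] = noise[0]
for i in range(1, n):
    series[i] = phi * series[i - 1] + noise[i]

sem_naive = series.std(ddof=1) / np.sqrt(n)        # ignores correlation
sem_block = block_std_error(series, n_blocks=50)   # accounts for it
```

For positively correlated data, the naive estimate badly understates the true uncertainty, while the block estimate recovers it; this is the sense in which errors are "estimated correctly by block averaging".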

6.
Mar Drugs ; 20(1)2021 Dec 22.
Article in English | MEDLINE | ID: mdl-35049868

ABSTRACT

Floating chirality restrained distance geometry (fc-rDG) calculations are used to evolve structures directly from NMR data such as NOE-derived intramolecular distances or anisotropic residual dipolar couplings (RDCs). In contrast to evaluating pre-calculated structures against NMR restraints, multiple configurations (diastereomers) and conformations are generated automatically within the experimental limits. In this report, we show that the "unphysical" rDG pseudo energies defined from NMR violations bear statistical significance, which allows probabilities to be assigned to configurational assignments in a manner fully compatible with the method of Bayesian inference. These "diastereomeric differentiabilities" then become almost independent of the actual values of the force constants used to model the restraints originating from NOE or RDC data.


Subject(s)
Aquatic Organisms, Biological Products/chemistry, Magnetic Resonance Spectroscopy, Protein Conformation, Animals, Bayes Theorem, Models, Molecular
7.
Angew Chem Int Ed Engl ; 60(7): 3412-3416, 2021 02 15.
Article in English | MEDLINE | ID: mdl-33137233

ABSTRACT

The certainty of configurational assignments of natural products based on anisotropic NMR parameters, such as residual dipolar couplings (RDCs), must be supplemented by estimates of the structural noise arising from thermal vibrations. We show that vibrational analysis significantly affects the error margins with which RDCs can be back-calculated from molecular models, and we derive the implications of thermal motions for the differentiability of diastereomers.

8.
Biol Rev Camb Philos Soc ; 95(6): 1759-1797, 2020 12.
Article in English | MEDLINE | ID: mdl-32869488

ABSTRACT

Inferring the body mass of fossil taxa, such as non-avian dinosaurs, provides a powerful tool for interpreting physiological and ecological properties, as well as the ability to study these traits through deep time and within a macroevolutionary context. As a result, over the past 100 years a number of studies have advanced methods for estimating mass in dinosaurs and other extinct taxa. These methods can be categorized into two major approaches: volumetric-density (VD) and extant-scaling (ES). The former has received the most attention in non-avian dinosaurs and has advanced appreciably over the last century: from initial physical scale models to three-dimensional (3D) virtual techniques that utilize scanned data obtained from entire skeletons. The ES approach is most commonly applied to extinct members of crown clades, but some equations have been proposed and utilized for non-avian dinosaurs. Because both approaches share a common goal, they are often viewed in opposition to one another. However, current palaeobiological research problems are often approach specific and, therefore, the decision to utilize a VD or ES approach is largely question dependent. In general, biomechanical and physiological studies benefit from the full-body reconstruction provided through a VD approach, whereas large-scale evolutionary and ecological studies require the extensive data sets afforded by an ES approach. This study summarizes both approaches to body mass estimation in stem-group taxa, specifically non-avian dinosaurs, and provides a comparative quantitative framework to reciprocally illuminate and corroborate VD and ES approaches. The results indicate that mass estimates are largely consistent between approaches: 73% of VD reconstructions occur within the expected 95% prediction intervals of the ES relationship.
However, almost three quarters of outliers occur below the lower 95% prediction interval, indicating that VD mass estimates are, on average, lower than would be expected given their stylopodial circumferences. Inconsistencies (high residual and per cent prediction deviation values) are recovered to a varying degree among all major dinosaurian clades along with an overall tendency for larger deviations between approaches among small-bodied taxa. Nonetheless, our results indicate a strong corroboration between recent iterations of the VD approach based on 3D specimen scans suggesting that our current understanding of size in dinosaurs, and hence its biological correlates, has improved over time. We advance that VD and ES approaches have fundamentally (metrically) different advantages and, hence, the comparative framework used and advocated here combines the accuracy afforded by ES with the precision provided by VD and permits the rapid identification of discrepancies with the potential to open new areas of discussion.
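The comparison described above, checking whether a VD estimate falls inside the 95% prediction interval of the ES regression, rests on the standard OLS prediction-interval formula. Below is a sketch with invented log-log scaling data; the slope, intercept, noise level, and sample size are not values from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical ES-style scaling data: log10 body mass vs. log10 stylopodial
# circumference for a set of extant taxa.
rng = np.random.default_rng(3)
log_c = np.linspace(1.0, 3.0, 40)
log_m = 2.5 * log_c - 1.0 + rng.normal(scale=0.1, size=log_c.size)

def prediction_interval(x, y, x0, level=0.95):
    """Prediction interval for a single new observation at x0 under OLS."""
    n = x.size
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s = np.sqrt(np.sum(resid**2) / (n - 2))            # residual std. error
    se = s * np.sqrt(1 + 1 / n
                     + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
    tcrit = stats.t.ppf(0.5 + level / 2, df=n - 2)
    y0 = slope * x0 + intercept
    return y0 - tcrit * se, y0, y0 + tcrit * se

low, mid, high = prediction_interval(log_c, log_m, x0=2.0)
```

A VD mass estimate for a taxon with the same circumference would then be judged consistent with the ES relationship if its log mass lies between `low` and `high`, which is the test behind the 73% figure quoted above.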


Subject(s)
Dinosaurs, Animals, Biological Evolution, Fossils
9.
Accid Anal Prev ; 144: 105589, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32593780

ABSTRACT

Numerous studies have developed intersection crash prediction models to identify crash hotspots and evaluate safety countermeasures. These studies largely considered only micro-level crash contributing factors such as traffic volume, traffic signals, etc. Some recent studies, however, have attempted to include macro-level crash contributing factors, such as population per zone, to predict the number of crashes at intersections. As many intersections are located between multiple zones and are thus affected by factors from those zones, the inclusion of macro-level factors requires boundary problems to be resolved. In this study, we introduce an advanced multilevel model, the multiple membership multilevel model (MMMM), for intersection crash analysis. Our objective was to reduce heterogeneity issues between zones in the crash prediction model while avoiding misspecification of the model structure. We used five years of intersection crash data (2009-2013) for the City of Regina, Saskatchewan, Canada, and identified the micro- and macro-level factors that most affected intersection crashes. We compared the fitting performance of the MMMM with that of two existing models, a traditional single model (SM) and a conventional multilevel model (CMM). The MMMM outperformed the SM and CMM in terms of fitting capability. We found that the MMMM avoided both the underestimation of macro-level variance and the type I statistical errors that tend to occur when crash data are analyzed using a SM or CMM. Statistically significant micro-level and macro-level crash contributing factors in Regina included major roadway AADT, four legs, traffic signals, speed, young drivers, and different types of land use.


Subject(s)
Accidents, Traffic/prevention & control, Accidents, Traffic/statistics & numerical data, Built Environment/statistics & numerical data, Humans, Models, Statistical, Multilevel Analysis, Saskatchewan
10.
Res Synth Methods ; 11(5): 574-579, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32275351

ABSTRACT

We present the R package and web app statcheck to automatically detect statistical reporting inconsistencies in primary studies and meta-analyses. Previous research has shown a high prevalence of reported p-values that are inconsistent - meaning a re-calculated p-value, based on the reported test statistic and degrees of freedom, does not match the author-reported p-value. Such inconsistencies affect the reproducibility and evidential value of published findings. The tool statcheck can help researchers to identify statistical inconsistencies so that they may correct them. In this paper, we provide an overview of the prevalence and consequences of statistical reporting inconsistencies. We also discuss the tool statcheck in more detail and give an example of how it can be used in a meta-analysis. We end with some recommendations concerning the use of statcheck in meta-analyses and make a case for better reporting standards of statistical results.
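The core consistency check that statcheck performs can be sketched as follows: recompute the p-value from the reported test statistic and degrees of freedom, then compare it with the author-reported p-value. This is a simplified Python sketch (statcheck itself is an R package), and the fixed numeric tolerance stands in for statcheck's rounding-aware comparison; the two t-test reports are invented examples.

```python
from scipy import stats

def check_t_report(t_value, df, reported_p, two_sided=True, tol=0.002):
    """Recompute the p-value implied by a reported t statistic and df, and
    flag reports that differ from the recomputed value beyond `tol`.
    Returns (is_consistent, recomputed_p)."""
    p = stats.t.sf(abs(t_value), df)
    if two_sided:
        p *= 2
    return abs(p - reported_p) <= tol, p

# A consistent report: t(28) = 2.20, p = .036
ok, p = check_t_report(2.20, 28, 0.036)
# An inconsistent report: t(28) = 2.20, p = .020
bad, _ = check_t_report(2.20, 28, 0.020)
```

The second report would be flagged as an inconsistency: the recomputed p-value from t(28) = 2.20 is far from .020, which is exactly the kind of mismatch the abstract describes.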


Subject(s)
Meta-Analysis as Topic, Psychology/methods, Research Design, Statistics as Topic, Algorithms, Humans, Models, Statistical, Prevalence, Programming Languages, Reproducibility of Results, User-Computer Interface
11.
J Neurotrauma ; 36(18): 2732-2742, 2019 09 15.
Article in English | MEDLINE | ID: mdl-30864876

ABSTRACT

Clinical trials of novel therapies for acute spinal cord injury (SCI) are challenging because variability in spontaneous neurologic recovery can make discerning actual treatment effects difficult. Unbiased Recursive Partitioning regression with Conditional Inference Trees (URP-CTREE) is a novel approach developed through analyses of a large European SCI database (the European Multicenter Study about Spinal Cord Injury). URP-CTREE uses early neurologic impairment to predict achieved motor recovery, with the potential to optimize clinical trial design by improving patient stratification and decreasing sample sizes. We performed an external validation to determine how well a previously reported URP-CTREE model stratified patients into distinct homogeneous subgroups and predicted subsequent neurologic recovery in an independent cohort. We included 101 patients with acute cervical SCI (levels C4-C6) from a prospective registry at a quaternary care center from 2004-2018, applied the URP-CTREE model, and evaluated Upper Extremity Motor Score (UEMS) recovery, considered correctly predicted when final UEMS scores were within a pre-specified threshold of 9 points from the median; sensitivity analyses evaluated the effect of the timing of the baseline neurological examination. Mean times from injury to the baseline and follow-up examinations were 6.1 days (standard deviation [SD] 17) and 235.0 days (SD 71), respectively. Median UEMS recovery was 7 points (interquartile range 2-12). One of the predictor variables was not statistically significant in our sample; one group did not fit the pattern of progressively improving UEMS scores; and three of five groups had medians that were not significantly different from those of adjacent groups. Overall accuracy was 75%, but varied from 82% among participants examined at <12 h, to 64% at 12-24 h, and 58% at >24 h. The previously reported URP-CTREE model had limited ability to stratify an independent cohort into homogeneous subgroups.
Overall accuracy was promising, but may be sensitive to the timing of baseline neurological examinations. Further evaluation of external validity in incomplete injuries, of the influence of the timing of baseline examinations, and of additional stratification strategies is warranted.


Subject(s)
Linear Models, Recovery of Function, Spinal Cord Injuries/classification, Adult, Cohort Studies, Female, Humans, Male, Middle Aged, Retrospective Studies
12.
Accid Anal Prev ; 106: 305-314, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28686881

ABSTRACT

This study analyzes 86,622 commercial motor vehicle (CMV) crashes (large truck, bus and taxi crashes) in South Korea from 2010 to 2014. The analysis recognizes the hierarchical structure of the factors affecting CMV crashes by examining eight factors related to individual crashes and six additional upper-level factors organized in two non-nested groups (company-level and regional-level factors). The study considers four different crash severities (fatal, major, minor, and no injury). The company-level factors reflect selected characteristics of 1,875 CMV companies, and the regional-level factors reflect selected characteristics of 230 municipalities. The study develops a single-level ordinary ordered logit model, two conventional multilevel ordered logit models, and a cross-classified multilevel ordered logit model (CCMM). As each of these four models is developed for large trucks, buses and taxis, 12 different statistical models are analyzed. The CCMM outperforms the other models in two important ways: 1) the CCMM avoids the type I statistical errors that tend to occur when analyzing hierarchical data with single-level models; and 2) the CCMM can analyze two non-nested groups simultaneously. Statistically significant factors include a taxi company's type of vehicle ownership and a municipality's level of transportation infrastructure budget. An improved understanding of CMV-related crashes should contribute to the development of safety countermeasures to reduce the number and severity of such crashes.


Subject(s)
Accidents, Traffic/statistics & numerical data, Logistic Models, Motor Vehicles/statistics & numerical data, Transportation/statistics & numerical data, Accidents, Traffic/classification, Humans, Multilevel Analysis, Republic of Korea
13.
Appl Spectrosc ; 71(7): 1665-1676, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28447492

ABSTRACT

Due to the complex nature of near-infrared (NIR) spectra, it is usually very difficult to provide quantitative interpretations of spectral data. As a consequence, careful building and validation of calibration models are of fundamental importance prior to the development of useful applications of NIR technologies. For this reason, this work presents a statistical study of NIR spectroscopy, analyzing its behavior as experimental conditions are changed. Near-infrared spectra were measured at different temperatures and stirring velocities for systems containing a pure solvent and a suspension of polymer powder in order to perform the error analysis. Then, mixtures of xylene and toluene were analyzed by NIR at different temperatures and stirring velocities, and the obtained data were used to build calibration models with multivariate techniques. The results showed that the precision of the NIR measurements depends on the analytical conditions and that unavoidable fluctuations of spectral data (or spectral data variability) are strongly correlated, leading to full covariance matrices of spectral fluctuations, a point that has been surprisingly neglected in quantitative analyses. In particular, modeling of the xylene/toluene NIR data with different multivariate techniques revealed that the principal directions are not preserved when the real covariance matrix of measurement errors is taken into account.

14.
Evol Bioinform Online ; 12: 165-74, 2016.
Article in English | MEDLINE | ID: mdl-27486297

ABSTRACT

The Binary State Speciation and Extinction (BiSSE) method is one of the most popular tools for investigating the rates of diversification and character evolution. Yet, based on previous simulation studies, it is commonly held that the BiSSE method requires phylogenetic trees of fairly large sample sizes (>300 taxa) in order to distinguish between the different models of speciation, extinction, or transition rate asymmetry. Here, the power of the BiSSE method is reevaluated by simulating trees of both small and large sample sizes (30, 60, 90, and 300 taxa) under various asymmetry models and root state assumptions. Results show that the power of the BiSSE method for detecting differences in speciation rate asymmetry can be much higher than previously anticipated, even in trees of small sample size. This, however, is not a consequence of any conceptual or mathematical flaw in the method per se, but rather of assumptions about the character state at the root of the simulated trees, and thus about the underlying macroevolutionary model, which led to biased results and conclusions in earlier power assessments. As such, the earlier simulation studies used to determine the power of BiSSE were not incorrect but biased, leading to an overestimation of the type II statistical error for detecting differences in speciation rate, but not for extinction and transition rates.

15.
World J Gastroenterol ; 22(9): 2867-8, 2016 Mar 07.
Article in English | MEDLINE | ID: mdl-26973426

ABSTRACT

We report invalidating errors in the statistical approach of the analysis, along with data inconsistencies, in a published single-cohort study of patients with Crohn's disease. We provide corrected calculations from the available data and request that a corrected analysis be provided by the authors.


Subject(s)
Crohn Disease/therapy, Energy Metabolism, Enteral Nutrition, Female, Humans, Male
16.
Anal Biochem ; 496: 1-3, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26562324

ABSTRACT

Relative expression ratios are commonly estimated in real-time qPCR studies by comparing the quantification cycle for the target gene with that for a reference gene in the treatment samples, normalized to the same quantities determined for a control sample. For the "standard curve" design, where data are obtained for all four of these at several dilutions, nonlinear least squares can be used to assess the amplification efficiencies (AE) and the adjusted ΔΔCq and its uncertainty, with automatic inclusion of the effect of uncertainty in the AEs. An algorithm is illustrated for the KaleidaGraph program.
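A sketch of the standard-curve workflow described here: estimate each gene's amplification efficiency from the slope of Cq versus log10(dilution), then form an efficiency-corrected expression ratio. The dilution series, Cq values, and sample means below are hypothetical, ordinary linear least squares stands in for the nonlinear fit the abstract describes, and the ratio step follows the common efficiency-corrected (Pfaffl-style) formula rather than the KaleidaGraph algorithm itself.

```python
import numpy as np

# Hypothetical standard-curve data: Cq over serial ten-fold dilutions for a
# target gene and a reference gene (the "standard curve" design).
log10_dilution = np.array([0.0, -1.0, -2.0, -3.0, -4.0])
cq_target = np.array([18.1, 21.5, 24.9, 28.2, 31.6])
cq_reference = np.array([15.2, 18.5, 21.9, 25.3, 28.6])

def amplification_efficiency(logdil, cq):
    """Slope of Cq vs log10(dilution); perfect doubling gives slope -3.32."""
    slope, _ = np.polyfit(logdil, cq, 1)
    return 10.0 ** (-1.0 / slope)   # E = 2 means 100 % efficiency

E_t = amplification_efficiency(log10_dilution, cq_target)
E_r = amplification_efficiency(log10_dilution, cq_reference)

# Efficiency-corrected relative expression, with hypothetical mean Cq values
# for control and treatment samples.
dCq_t = 24.0 - 22.0      # target: control minus treatment
dCq_r = 20.0 - 19.5      # reference: control minus treatment
ratio = (E_t ** dCq_t) / (E_r ** dCq_r)
```

Fitting both curves jointly by nonlinear least squares, as the abstract proposes, additionally propagates the uncertainty in the efficiencies into the uncertainty of the adjusted ΔΔCq, which the simple two-step calculation above does not.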


Subject(s)
Least-Squares Analysis, Real-Time Polymerase Chain Reaction/methods, Uncertainty
17.
Anal Biochem ; 464: 94-102, 2014 Nov 01.
Article in English | MEDLINE | ID: mdl-24991688

ABSTRACT

Most methods for analyzing real-time quantitative polymerase chain reaction (qPCR) data for single experiments estimate the hypothetical cycle-0 signal y0 by first estimating the quantification cycle (Cq) and amplification efficiency (E) from least-squares fits of fluorescence intensity data for cycles near the onset of the growth phase. The resulting y0 values are statistically equivalent to the corresponding Cq if and only if E is taken to be error free. But uncertainty in E usually dominates the total uncertainty in y0, making the latter much less precise than Cq. Bias in E can be an even greater source of error in y0. So-called mechanistic models achieve higher precision in estimating y0 by tacitly assuming E=2 in the baseline region, and so are subject to this bias error. When used in calibration, the mechanistic y0 is statistically comparable to Cq from the other methods. When a signal threshold yq is used to define Cq, the best estimation precision is obtained by setting yq near the maximum signal in the range of fitted cycles, in conflict with common practice in y0 estimation algorithms.


Subject(s)
Polymerase Chain Reaction/methods, Uncertainty, Calibration, Least-Squares Analysis
18.
J Neurotrauma ; 31(18): 1540-7, 2014 Sep 15.
Article in English | MEDLINE | ID: mdl-24811484

ABSTRACT

Clinical trials of therapies for acute traumatic spinal cord injury (tSCI) have failed to convincingly demonstrate efficacy in improving neurologic function. Failing to acknowledge the heterogeneity of these injuries and under-appreciating the impact of the most important baseline prognostic variables likely contributes to this translational failure. Our hypothesis was that neurological level and severity of initial injury (measured by the American Spinal Injury Association Impairment Scale [AIS]) act jointly and are the major determinants of motor recovery. Our objective was to quantify the influence of these variables when considered together on early motor score recovery following acute tSCI. Eight hundred thirty-six participants from the Rick Hansen Spinal Cord Injury Registry were analyzed for motor score improvement from baseline to follow-up. In AIS A, B, and C patients, cervical and thoracic injuries displayed significantly different motor score recovery. AIS A patients with thoracic (T2-T10) and thoracolumbar (T11-L2) injuries had significantly different motor improvement. High (C1-C4) and low (C5-T1) cervical injuries demonstrated differences in upper extremity motor recovery in AIS B, C, and D. A hypothetical clinical trial example demonstrated the benefits of stratifying on neurological level and severity of injury. Clinically meaningful motor score recovery is predictably related to the neurological level of injury and the severity of the baseline neurological impairment. Stratifying clinical trial cohorts using a joint distribution of these two variables will enhance a study's chance of identifying a true treatment effect and minimize the risk of misattributed treatment effects. Clinical studies should stratify participants based on these factors and record the number of participants and their mean baseline motor scores for each category of this joint distribution as part of the reporting of participant characteristics. 
Improved clinical trial design is a high priority as new therapies and interventions for tSCI emerge.


Subject(s)
Randomized Controlled Trials as Topic/standards, Recovery of Function/physiology, Spinal Cord Injuries, Trauma Severity Indices, Adolescent, Adult, Aged, Aged, 80 and over, Canada, Cohort Studies, Female, Humans, Male, Middle Aged, Spinal Cord Injuries/diagnosis, Spinal Cord Injuries/pathology, Spinal Cord Injuries/therapy, Young Adult
19.
J Res Natl Bur Stand A Phys Chem ; 72A(2): 187-205, 1968.
Article in English | MEDLINE | ID: mdl-31824089

ABSTRACT

Various features of the spectral profile of an x-ray line can be measured with an uncertainty which is only a small fraction of the observed line width. With recent improvements in measurement techniques, statistical errors due to the random fluctuations of the intensities in counter recordings may become significant. The present study considers the effect of such errors on several features of the line profile which could be used for definition of its wavelength. These may be broadly classified into three groups, viz., the peak, the centroid, and the median. In the present analysis the statistical errors associated with these features are compared theoretically, with the assumption of negligible error in angular measurement. Certain systematic errors are also briefly examined. The effects of truncation range, asymmetry, and background intensity are considered, as well as possible optimization of the data-taking procedure. In general, σ, the standard deviation of the wavelength, is given by σ/W = F/(I_p·T)^(1/2), where W is the full width at half-maximum intensity, I_p the peak intensity, T the total counting time, and F a dimensionless factor of the order of unity. Thus F may be regarded as a factor of merit for comparing the various cases, a low value of F being desirable. When the form of the line profile is known a priori, it is usually best to make use of this knowledge; e.g., a Lorentzian can be thus fitted with F ≈ 0.8 for any of the three wavelength features. Using optimized truncation ranges and including the error in locating end points, one obtains approximately this same F for the centroid or median even without prior knowledge of the profile. In the latter case the value of F for the peak usually ranges from about 1.6 to 2.1. However, the peak is less subject to certain systematic errors and is preferable from the viewpoint of simplicity and historical precedent.
It is recommended that use of the peak be continued at present; further study of the problem from the viewpoint of atomic energy level interpretation would be desirable.
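The quoted relation can be evaluated directly. In the sketch below, the line width, peak intensity, and counting time are illustrative values only; F = 0.8 is the factor quoted above for a fitted Lorentzian.

```python
import math

# sigma/W = F / sqrt(I_p * T): relative statistical uncertainty of an x-ray
# line-wavelength feature.  All numeric inputs here are illustrative.
W = 0.5e-3        # full width at half maximum (same units as sigma)
I_p = 1.0e4       # peak intensity, counts per second
T = 100.0         # total counting time, seconds
F = 0.8           # factor of merit for a fitted Lorentzian (from the text)

sigma = W * F / math.sqrt(I_p * T)
```

With these inputs the wavelength uncertainty is about 1/1250 of the line width, illustrating the paper's point that such features can be located to a small fraction of W given sufficient counts.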
